hexsha (stringlengths 40-40) | size (int64 6-14.9M) | ext (stringclasses 1 value) | lang (stringclasses 1 value) | max_stars_repo_path (stringlengths 6-260) | max_stars_repo_name (stringlengths 6-119) | max_stars_repo_head_hexsha (stringlengths 40-41) | max_stars_repo_licenses (list) | max_stars_count (int64 1-191k ⌀) | max_stars_repo_stars_event_min_datetime (stringlengths 24-24 ⌀) | max_stars_repo_stars_event_max_datetime (stringlengths 24-24 ⌀) | max_issues_repo_path (stringlengths 6-260) | max_issues_repo_name (stringlengths 6-119) | max_issues_repo_head_hexsha (stringlengths 40-41) | max_issues_repo_licenses (list) | max_issues_count (int64 1-67k ⌀) | max_issues_repo_issues_event_min_datetime (stringlengths 24-24 ⌀) | max_issues_repo_issues_event_max_datetime (stringlengths 24-24 ⌀) | max_forks_repo_path (stringlengths 6-260) | max_forks_repo_name (stringlengths 6-119) | max_forks_repo_head_hexsha (stringlengths 40-41) | max_forks_repo_licenses (list) | max_forks_count (int64 1-105k ⌀) | max_forks_repo_forks_event_min_datetime (stringlengths 24-24 ⌀) | max_forks_repo_forks_event_max_datetime (stringlengths 24-24 ⌀) | avg_line_length (float64 2-1.04M) | max_line_length (int64 2-11.2M) | alphanum_fraction (float64 0-1) | cells (list) | cell_types (list) | cell_type_groups (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4a056f8a7c2e2f2d7c1dca1d45c23da31b6e06ba | 710,599 | ipynb | Jupyter Notebook | modules/2_takeoff/4_Life_Expectancy.ipynb | gdv/EngComp | 2e96511e83b39334cc27e350ca22c2d970cde651 | ["BSD-3-Clause"] | null | null | null | modules/2_takeoff/4_Life_Expectancy.ipynb | gdv/EngComp | 2e96511e83b39334cc27e350ca22c2d970cde651 | ["BSD-3-Clause"] | null | null | null | modules/2_takeoff/4_Life_Expectancy.ipynb | gdv/EngComp | 2e96511e83b39334cc27e350ca22c2d970cde651 | ["BSD-3-Clause"] | 3 | 2017-10-23T13:55:25.000Z | 2019-10-21T16:43:56.000Z | 341.469966 | 460,456 | 0.908915 |
[
[
[
"###### Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2017 L.A. Barba, N.C. Clementi",
"_____no_output_____"
],
[
"# Life expectancy and wealth\n\nWelcome to **Lesson 4** of the second module in _Engineering Computations_. This module gives you hands-on data analysis experience with Python, using real-life applications. The first three lessons provide a foundation in data analysis using a computational approach. They are:\n\n1. [Lesson 1](http://go.gwu.edu/engcomp2lesson1): Cheers! Stats with beers.\n2. [Lesson 2](http://go.gwu.edu/engcomp2lesson2): Seeing stats in a new light.\n3. [Lesson 3](http://go.gwu.edu/engcomp2lesson3): Lead in lipstick.\n\nYou learned to do exploratory data analysis with data in the form of arrays: NumPy has built-in functions for many descriptive statistics, making it easy! And you also learned to make data visualizations that are both good-looking and effective in communicating and getting insights from data.\n\nBut NumPy can't do everything. So we introduced you to `pandas`, a Python library written _especially_ for data analysis. It offers a very powerful new data type: the _DataFrame_—you can think of it as a spreadsheet, conveniently stored in one Python variable. \n\nIn this lesson, you'll dive deeper into `pandas`, using data for life expectancy and per-capita income over time, across the world.",
"_____no_output_____"
],
[
"## The best stats you've ever seen\n\n[Hans Rosling](https://en.wikipedia.org/wiki/Hans_Rosling) was a professor of international health in Sweeden, until his death in Februrary of this year. He came to fame with the thrilling TED Talk he gave in 2006: [\"The best stats you've ever seen\"](https://www.ted.com/talks/hans_rosling_shows_the_best_stats_you_ve_ever_seen) (also on [YouTube](https://youtu.be/RUwS1uAdUcI), with ads). We highly recommend that you watch it! \n\nIn that first TED Talk, and in many other talks and even a BBC documentary (see the [trailer](https://youtu.be/jbkSRLYSojo) on YouTube), Rosling uses data visualizations to tell stories about the world's health, wealth, inequality and development. Using software, he and his team created amazing animated graphics with data from the United Nations and World Bank.\n\nAccording to a [blog post](https://www.gatesnotes.com/About-Bill-Gates/Remembering-Hans-Rosling) by Bill and Melinda Gates after Prof. Rosling's death, his message was simple: _\"that the world is making progress, and that policy decisions should be grounded in data.\"_",
"_____no_output_____"
],
[
"In this lesson, we'll use data about life expectancy and per-capita income (in terms of the gross domestic product, GDP) around the world. Visualizing and analyzing the data will be our gateway to learning more about the world we live in.\n\nLet's begin! As always, we start by importing the Python libraries for data analysis (and setting some plot parameters).",
"_____no_output_____"
]
],
[
[
"import numpy\nimport pandas\nfrom matplotlib import pyplot\n%matplotlib inline\n\n#Import rcParams to set font styles\nfrom matplotlib import rcParams\n\n#Set font style and size \nrcParams['font.family'] = 'serif'\nrcParams['font.size'] = 16",
"_____no_output_____"
]
],
[
[
"## Load and inspect the data\n\nWe found a website called [The Python Graph Gallery](https://python-graph-gallery.com), which has a lot of data visualization examples. \nAmong them is a [Gapminder Animation](https://python-graph-gallery.com/341-python-gapminder-animation/), an animated GIF of bubble charts in the style of Hans Rosling. \nWe're not going to repeat the same example, but we do get some ideas from it and re-use their data set. \nThe data file is hosted on their website, and we can read it directly from there into a `pandas` dataframe, using the URL.",
"_____no_output_____"
]
],
[
[
"# Read a dataset for life expectancy from a CSV file hosted online\nurl = 'https://python-graph-gallery.com/wp-content/uploads/gapminderData.csv'\nlife_expect = pandas.read_csv(url)",
"_____no_output_____"
]
],
[
[
"The first thing to do always is to take a peek at the data. \nUsing the `shape` attribute of the dataframe, we find out how many rows and columns it has. In this case, it's kind of big to print it all out, so to save space we'll print a small portion of `life_expect`.\nYou can use a slice to do this, or you can use the [`DataFrame.head()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.head.html) method, which returns by default the first 5 rows.",
"_____no_output_____"
]
],
[
[
"life_expect.shape",
"_____no_output_____"
],
[
"life_expect.head()",
"_____no_output_____"
]
],
[
[
"You can see that the columns hold six types of data: the country, the year, the population, the continent, the life expectancy, and the per-capita gross domestic product (GDP). \nRows are indexed from 0, and the columns each have a **label** (also called an index). Using labels to access data is one of the most powerful features of `pandas`.\n\nIn the first five rows, we see that the country repeats (Afghanistan), while the year jumps by five. We guess that the data is arranged in blocks of rows for each country.\n\nWe can get a useful summary of the dataframe with the [`DataFrame.info()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.info.html) method: it tells us the number of rows and the number of columns (matching the output of the `shape` attribute) and then for each column, it tells us the number of rows that are populated (have non-null entries) and the type of the entries; finally it gives a breakdown of the types of data and an estimate of the memory used by the dataframe.",
"_____no_output_____"
]
],
[
[
"life_expect.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 1704 entries, 0 to 1703\nData columns (total 6 columns):\ncountry 1704 non-null object\nyear 1704 non-null int64\npop 1704 non-null float64\ncontinent 1704 non-null object\nlifeExp 1704 non-null float64\ngdpPercap 1704 non-null float64\ndtypes: float64(3), int64(1), object(2)\nmemory usage: 80.0+ KB\n"
]
],
[
[
"The dataframe has 1704 rows, and every column has 1704 non-null entries, so there is no missing data. Let's find out how many entries of the same year appear in the data. \nIn [Lesson 1](http://go.gwu.edu/engcomp2lesson1) of this module, you already learned to extract a column from a data frame, and use the [`series.value_counts()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html) method to answer our question.",
"_____no_output_____"
]
],
[
[
"life_expect['year'].value_counts()",
"_____no_output_____"
]
],
[
[
"We have an even 142 occurrences of each year in the dataframe. The distinct entries must correspond to each country. It also is clear that we have data every five years, starting 1952 and ending 2007. We think we have a pretty clear picture of what is contained in this data set. What next?",
"_____no_output_____"
],
[
"## Grouping data for analysis\n\nWe have a dataframe with a `country` column, where countries repeat in blocks of rows, and a `year` column, where sets of 12 years (increasing by 5) repeat for every country. Tabled data commonly has this interleaved structure. And data analysis often involves grouping the data in various ways, to transform it, compute statistics, and visualize it.\n\nWith the life expectancy data, it's natural to want to analyze it by year (and look at geographical differences), and by country (and look at historical differences). \n\nIn [Lesson 2](http://go.gwu.edu/engcomp2lesson2) of this module, we already learned how useful it was to group the beer data by style, and calculate means within each style. Let's get better acquainted with the powerful `groupby()` method for dataframes. First, grouping by the values in the `year` column:",
"_____no_output_____"
]
],
[
[
"by_year = life_expect.groupby('year')",
"_____no_output_____"
],
[
"type(by_year)",
"_____no_output_____"
]
],
[
[
"Notice that the type of the new variable `by_year` is different: it's a _GroupBy_ object, which—without making a copy of the data—is able to apply operations on each of the groups.\n\nThe [`GroupBy.first()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.first.html) method, for example, returns the first row in each group—applied to our grouping `by_year`, it shows the list of years (as a label), with the first country that appears in each year-group.",
"_____no_output_____"
]
],
[
[
"by_year.first()",
"_____no_output_____"
]
],
[
[
"All the year-groups have the same first country, Afghanistan, so what we see is the population, life expectancy and per-capita income in Afghanistan for all the available years.\nLet's save that into a new dataframe, and make a line plot of the population and life expectancy over the years.",
"_____no_output_____"
]
],
[
[
"Afghanistan = by_year.first()",
"_____no_output_____"
],
[
"Afghanistan['pop'].plot(figsize=(8,4),\n title='Population of Afghanistan');",
"_____no_output_____"
],
[
"Afghanistan['lifeExp'].plot(figsize=(8,4),\n title='Life expectancy of Afghanistan');",
"_____no_output_____"
]
],
[
[
"Do you notice something interesting? It's curious to see that the population of Afghanistan took a fall after 1977. We have data every 5 years, so we don't know exactly when this fall began, but it's not hard to find the answer online. The USSR invaded Afghanistan in 1979, starting a conflict that lasted 9 years and resulted in an estimated death toll of one million civilians and 100,000 fighters [1]. Millions fled the war to neighboring countries, which may explain why we se a dip in population, but not a dip in life expectancy.\n\nWe can also get some descriptive statistics in one go with the [`DataFrame.describe()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html) method of `pandas`.",
"_____no_output_____"
]
],
[
[
"Afghanistan.describe()",
"_____no_output_____"
]
],
[
[
"Let's now group our data by country, and use the `GroupBy.first()` method again to get the first row of each group-by-country. We know that the first year for which we have data is 1952, so let's immediately save that into a new variable named `year1952`, and keep playing with it. Below, we double-check the type of `year1952`, print the first five rows using the `head()` method, and get the minimum value of the population column.",
"_____no_output_____"
]
],
[
[
"by_country = life_expect.groupby('country')",
"_____no_output_____"
]
],
[
[
"The first year for all groups-by-country is 1952. Let's save that first group into a new dataframe, and keep playing with it.",
"_____no_output_____"
]
],
[
[
"year1952 = by_country.first()",
"_____no_output_____"
],
[
"type(year1952)",
"_____no_output_____"
],
[
"year1952.head()",
"_____no_output_____"
],
[
"year1952['pop'].min()",
"_____no_output_____"
]
],
[
[
"## Visualizing the data\n\nIn [Lesson 3](http://go.gwu.edu/engcomp2lesson3) of this module, you learned to make bubble charts, allowing you to show at least three features of the data in one plot. We'd like to make a bubble chart of life expectancy vs. per-capita GDP, with the size of the bubble proportional to the population. To do that, we'll need to extract the population values into a NumPy array.",
"_____no_output_____"
]
],
[
[
"populations = year1952['pop'].values",
"_____no_output_____"
]
],
[
[
"If you use the `populations` array unmodified as the size of the bubbles, they come out _huge_ and you get one solid color covering the figure (we tried it!). To make the bubble sizes reasonable, we divide by 60,000—an approximation to the minimum population—so the smallest bubble size is about 1 pt. Finally, we choose a logarithmic scale in the absissa (the GDP). Check it out!",
"_____no_output_____"
]
],
[
[
"year1952.plot.scatter(figsize=(12,8), \n x='gdpPercap', y='lifeExp', s=populations/60000, \n title='Life expectancy in the year 1952',\n edgecolors=\"white\")\npyplot.xscale('log');",
"_____no_output_____"
]
],
[
[
"That's neat! But the Rosling bubble charts include one more feature in the data: the continent of each country, using a color scheme. Can we do that?\n\nMatplotlib [colormaps](https://matplotlib.org/examples/color/colormaps_reference.html) offer several options for _qualitative_ data, using discrete colors mapped to a sequence of numbers. We'd like to use the `Accent` colormap to code countries by continent. But we need a numeric code to assign to each continent, so it can be mapped to a color.\n\nThe [Gapminder Animation](https://python-graph-gallery.com/341-python-gapminder-animation/) example at The Python Graph Gallery has a good tip: using the `pandas` _Categorical_ data type, which associates a numerical value for each category in a column containing qualitative (categorical) data. \n\nLet's see what we get if we apply `pandas.Categorical()` to the `continent` column:",
"_____no_output_____"
]
],
[
[
"pandas.Categorical(year1952['continent'])",
"_____no_output_____"
]
],
[
[
"Right. We see that the `continent` column has repeated entries of 5 distinct categories, one for each continent. In order, they are: Africa, Americas, Asia, Europe, Oceania.\n\nApplying [`pandas.Categorical()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Categorical.html) to the `continent` column will create an integer value—the _code_ of the category—associated to each entry. We can then use these integer values to map to the colors in a colormap. The trick will be to extract the `codes` attribute of the _Categorical_ data and save that into a new variable named `colors` (a NumPy array).",
"_____no_output_____"
]
],
[
[
"colors = pandas.Categorical(year1952['continent']).codes",
"_____no_output_____"
],
[
"type(colors)",
"_____no_output_____"
],
[
"len(colors)",
"_____no_output_____"
],
[
"print(colors)",
"[2 3 0 0 1 4 3 2 2 3 0 1 3 0 1 3 0 0 2 0 1 0 0 1 2 1 0 0 0 1 0 3 1 3 3 0 1\n 1 0 1 0 0 0 3 3 0 0 3 0 3 1 0 0 1 1 2 3 3 2 2 2 2 3 2 3 1 2 2 0 2 2 2 2 0\n 0 0 0 0 2 0 0 0 1 2 3 0 0 2 0 2 3 4 1 0 0 3 2 2 1 1 1 2 3 3 1 0 3 0 0 2 0\n 3 0 2 3 3 0 0 3 2 0 0 3 3 2 2 0 2 0 1 0 3 0 3 1 1 1 2 2 2 0 0]\n"
]
],
[
[
"You see that `colors` is a NumPy array of 142 integers that can take the values: ` 0, 1, 2, 3, 4`. They are the codes to `continent` categories: `Africa, Americas, Asia, Europe, Oceania`. For example, the first entry is `2`, corresponding to Asia, the continent of Afghanistan.\n\nNow we're ready to re-do our bubble chart, using the array `colors` to set the color of the bubble (according to the continent for the given country).",
"_____no_output_____"
]
],
[
[
"year1952.plot.scatter(figsize=(12,8), \n x='gdpPercap', y='lifeExp', s=populations/60000, \n c=colors, cmap='Accent',\n title='Life expectancy vs. per-capita GDP in the year 1952,\\n color-coded by continent',\n logx = 'True',\n ylim = (25,85),\n xlim = (1e2, 1e5),\n edgecolors=\"white\",\n alpha=0.6);",
"_____no_output_____"
]
],
[
[
"##### Note:\n\nWe encountered a bug in `pandas` scatter plots! The labels of the $x$-axis disappeared when we added the colors to the bubbles. We tried several things to fix it, like adding the line `pyplot.xlabel(\"GDP per Capita\")` at the end of the cell, but nothing worked. Searching online, we found an open [issue report](https://github.com/pandas-dev/pandas/issues/10611) for this problem.\n",
"_____no_output_____"
],
[
"##### Discuss with your neighbor:\n\nWhat do you see in the colored bubble chart, in regards to 1952 conditions in different countries and different continents?\nCan you guess some countries? Can you figure out which color corresponds to which continent?",
"_____no_output_____"
],
[
"### Spaghetti plot of life expectancy\n\nThe bubble plot shows us that 1952 life expectancies varied quite a lot from country to country: from a minimum of under 30 years, to a maximum under 75 years. The first part of Prof. Rosling's dying message is _\"that the world is making progress_.\" Is it the case that countries around the world _all_ make progress in life expectancy over the years?\n\nWe have an idea: what if we plot a line of life expectancy over time, for every country in the data set? It could be a bit messy, but it may give an _overall view_ of the world-wide progress in life expectancy.\n\nBelow, we'll make such a plot, with 142 lines: one for each country. This type of graphic is called a **spaghetti plot** …for obvious reasons!\n\nTo add a line for each country on the same plot, we'll use a `for`-statement and the `by_country` groups. For each country-group, the line plot takes the series `year` and `lifeExp` as $(x,y)$ coordinates. Since the spaghetti plot is quite busy, we also took off the box around the plot. Study this code carefully.",
"_____no_output_____"
]
],
[
[
"pyplot.figure(figsize=(12,8))\n\nfor key,group in by_country:\n pyplot.plot(group['year'], group['lifeExp'], alpha=0.4)\n \npyplot.title('Life expectancy in the years 1952–2007, across 142 countries')\npyplot.box(on=None);",
"_____no_output_____"
]
],
[
[
"## Dig deeper and get insights from the data\n\nThe spaghetti plot shows a general upwards tendency, but clearly not all countries have a monotonically increasing life expectancy. Some show a one-year sharp drop (but remember, this data jumps every 5 years), while others drop over several years.\nAnd something catastrophic happened to one country in 1977, and to another country in 1992.\nLet's investigate this!\n\nWe'd like to explore the data for a particular year: first 1977, then 1992. For those years, we can get the minimum life expectancy, and then find out which country experienced it. \n\nTo access a particular group in _GroupBy_ data, `pandas` has a `get_group(key)` method, where `key` is the label of the group.\nFor example, we can access yearly data from the `by_year` groups using the year as key. The return type will be a dataframe, containing the same columns as the original data.",
"_____no_output_____"
]
],
[
[
"type(by_year.get_group(1977))",
"_____no_output_____"
],
[
"type(by_year['lifeExp'].get_group(1977))",
"_____no_output_____"
]
],
[
[
"Now we can find the minimum value of life expectancy at the specific years of interest, using the `Series.min()` method. Let' do this for 1977 and 1992, and save the values in new Python variables, to reuse later.",
"_____no_output_____"
]
],
[
[
"min_lifeExp1977 = by_year['lifeExp'].get_group(1977).min()\nmin_lifeExp1977",
"_____no_output_____"
],
[
"min_lifeExp1992 = by_year['lifeExp'].get_group(1992).min()\nmin_lifeExp1992",
"_____no_output_____"
]
],
[
[
"Those values of life expectancy are just terrible! Are you curious to know what countries experienced the dramatic drops in life expectancy?\n\nWe can find the row _index_ of the minimum value, thanks to the [`pandas.Series.idxmin()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.idxmin.html) method. The row indices are preserved from the original dataframe `life_expect` to its groupings, so the index will help us identify the country. Check it out.",
"_____no_output_____"
]
],
[
[
"by_year['lifeExp'].get_group(1977).idxmin()",
"_____no_output_____"
],
[
"life_expect['country'][221]",
"_____no_output_____"
],
[
"by_country.get_group('Cambodia')",
"_____no_output_____"
]
],
[
[
"We searched online to learn what was happening in Cambodia to cause such a drop in life expectancy in the 1970s. Indeed, Cambodia experienced a _mortality crisis_ due to several factors that combined into a perfect storm: war, ethnic cleansing and migration, collapse of the health system, and cruel famine [2].\nIt's hard for a country to keep vital statistics under such circumstances, and certainly there are uncertainties in the data for Cambodia in the 1970s.\nHowever, various sources report a life expectancy there in 1977 that was _under 20 years_.\nSee, for example, the World Bank's interactive web page on [Cambodia](https://data.worldbank.org/country/cambodia).\n\nThere is something strange with the data from the The Python Graph Gallery. Is it wrong?\nMaybe they are giving us _average_ life expectancy in a five-year period.\nLet's look at the other dip in life expectancy, in 1992.",
"_____no_output_____"
]
],
[
[
"by_year['lifeExp'].get_group(1992).idxmin()",
"_____no_output_____"
],
[
"life_expect['country'][1292]",
"_____no_output_____"
],
[
"by_country.get_group('Rwanda')",
"_____no_output_____"
]
],
[
[
"The World Bank's interactive web page on [Rwanda](https://data.worldbank.org/country/rwanda) gives a life expectancy of 28.1 in 1992, and even lower in 1993, at 27.6 years. \nThis doesn't match the value from the data set we sourced from The Python Graph Gallery, which gives 23.6—and since this value is _lower_ than the minimum value given by the World Bank, we conclude that the discepancy is not caused by 5-year averaging.",
"_____no_output_____"
],
[
"## Checking data quality\n\nAll our work here started with loading a data set we found online. What if this data set has _quality_ problems? \n\nWell, nothing better than asking the author of the web source for the data. We used Twitter to communicate with the author of The Python Graph Gallery, and he replied with a link to _his source_: a data package used for teaching a course in Exploratory Data Analysis at the University of British Columbia. ",
"_____no_output_____"
]
],
[
[
"%%html\n<blockquote class=\"twitter-tweet\" data-lang=\"en\"><p lang=\"en\" dir=\"ltr\">Hi. Didn't receive your email... Gapminder comes from this R library: <a href=\"https://t.co/BU1IFIGSxm\">https://t.co/BU1IFIGSxm</a>. I will add citation asap.</p>— R+Py Graph Galleries (@R_Graph_Gallery) <a href=\"https://twitter.com/R_Graph_Gallery/status/920074231269941248?ref_src=twsrc%5Etfw\">October 16, 2017</a></blockquote> <script async src=\"//platform.twitter.com/widgets.js\" charset=\"utf-8\"></script>",
"_____no_output_____"
]
],
[
[
"Note one immediate outcome of our reaching out to the author of The Python Graph Gallery: he realized he was not citing the source of his data [3], and promised to add proper credit. _It's always good form to credit your sources!_\n\nWe visited the online repository of the data source, and posted an [issue report](https://github.com/jennybc/gapminder/issues/18) there, with our questions about data quality. The author promptly responded, saying that _her_ source was the [Gapminder.org website](http://www.gapminder.org/data/)—**Gapminder** is the non-profit founded by Hans Rosling to host public data and visualizations. She also said: _\" I don't doubt there could be data quality problems! It should definitely NOT be used as an authoritative source for life expectancy\"_\n\nSo it turns out that the data we're using comes from a set of tools meant for teaching, and is not up-to-date with the latest vital statistics. The author ended up [adding a warning](https://github.com/jennybc/gapminder/commit/7b3ac7f477c78f21865fa7defea20e72cb9e2b8a) to make this clear to visitors of the repository on GitHub. \n\n#### This is a wonderful example of how people collaborate online via the open-source model.\n\n##### Note:\n\nFor the most accurate data, you can visit the website of the [World Bank](https://data.worldbank.org).",
"_____no_output_____"
],
[
"## Using widgets to visualize interactively\n\nOne more thing! This whole exploration began with our viewing the 2006 TED Talk by Hans Rosling: [\"The best stats you've ever seen\"](https://www.ted.com/talks/hans_rosling_shows_the_best_stats_you_ve_ever_seen). One of the most effective parts of the presentation is seeing the _animated_ bubble chart, illustrating how countries became healthier and richer over time. Do you want to make something like that?\n\nYou can! Introducing [Jupyter Widgets](https://ipywidgets.readthedocs.io/en/latest/user_guide.html). The magic of interactive widgets is that they tie together the running Python code in a Jupyter notebook with Javascript and HTML running in the browser. You can use widgets to build interactive controls on data visualizations, with buttons, sliders, and more.\n\nTo use widgets, the first step is to import the `widgets` module.",
"_____no_output_____"
]
],
[
[
"from ipywidgets import widgets",
"_____no_output_____"
]
],
[
[
"After importing `widgets`, you have available several UI (User Interaction) elements. One of our favorites is a _Slider_: an interactive sliding button. Here is a default slider that takes integer values, from 0 to 100 (but does nothing):",
"_____no_output_____"
]
],
[
[
"widgets.IntSlider()",
"_____no_output_____"
]
],
[
[
"What we'd like to do is make an interactive visualization of bubble charts, with the year in a slider, so that we can run forwards and backwards in time by sliding the button, watching our plot update the bubbles in real time. Sound like magic? It almost is.\n\nThe magic happens when you program what should happen when the value in the slider changes. A typical scenario is having a function that is executed with the value in the slider, interactively. To create that, we need two things:\n\n1. A function that will be called with the slider values, and\n2. A call to an _interaction_ function from the `ipywidgets` package.\n\nSeveral interaction functions are available, for different actions you expect from the user: a click, a text entered in a box, or sliding the button on a slider.\nYou will need to explore the Jupyter Widgets documentation [4] to learn more.\n\nFor this example, we'll be using a slider, a plotting function that makes our bubble chart, and the [`.interact()`](http://ipywidgets.readthedocs.io/en/stable/examples/Using%20Interact.html#) function to call our plotting function with each value of the slider.\n\nWe do everything in one cell below. The first line creates an integer-value slider with our known years—from a minimum 1952, to a maximum 2007, stepping by 5—and assigns it to the variable name `slider`.\n\nNext, we define the function `roslingplot()`, which re-calculates the array of population values, gets the year-group we need from the `by_year` _GroupBy_ object, and makes a scater plot of life expectancy vs. per-capita income, like we did above. The `populations` array (divided by 60,000) sets the size of the bubble, and the previously defined `colors` array sets the color coding by continent.\n\nWe also removed the colorbar (which added little information), and added the option `sharex=False` following the workaround suggested by someone on the open [issue report](https://github.com/pandas-dev/pandas/issues/10611) for the plotting bug we mentioned above.\n\nThe last line in the cell below is a call to `.interact()`, passing our plotting function and the slider value assigned to its argument, `year`. Watch the magic happen!",
"_____no_output_____"
]
],
[
[
"slider = widgets.IntSlider(min=1952, max=2007, step=5)\n\ndef roslingplot(year):\n populations = by_year.get_group(year)['pop'].values\n \n by_year.get_group(year).plot.scatter(figsize=(12,8), \n x='gdpPercap', y='lifeExp', s=populations/60000, \n c=colors, cmap='Accent',\n title='Life expectancy vs per-capita GDP in the year '+ str(year)+'\\n',\n logx = 'True',\n ylim = (25,85),\n xlim = (1e2, 1e5),\n edgecolors=\"white\",\n alpha=0.6,\n colorbar=False,\n sharex=False)\n pyplot.show();\n \nwidgets.interact(roslingplot, year=slider);",
"_____no_output_____"
]
],
[
[
"## References\n\n1. [The Soviet War in Afghanistan, 1979-1989](https://www.theatlantic.com/photo/2014/08/the-soviet-war-in-afghanistan-1979-1989/100786/), The Atlantic (2014), by Alan Taylor.\n\n2. US National Research Council Roundtable on the Demography of Forced Migration; H.E. Reed, C.B. Keely, editors. Forced Migration & Mortality (2001), National Academies Press, Washington DC; Chapter 5: The Demographic Analysis of Mortality Crises: The Case of Cambodia, 1970-1979, Patrick Heuveline. Available at: https://www.ncbi.nlm.nih.gov/books/NBK223346/\n\n3. gapminder R data package. Licensed CC-BY 3.0 by Jennifer (Jenny) Bryan (2015) https://github.com/jennybc/gapminder\n\n4. [Jupyter Widgets User Guide](https://ipywidgets.readthedocs.io/en/latest/user_guide.html)",
"_____no_output_____"
]
],
[
[
"# Execute this cell to load the notebook's style sheet, then ignore it\nfrom IPython.core.display import HTML\ncss_file = '../../style/custom.css'\nHTML(open(css_file, \"r\").read())",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a0570a7b26ab714c65205cfab5dcd3dbb3caaee | 3,415 | ipynb | Jupyter Notebook | miscellaneous_notebooks/OLD/Prior_and_Posterior_Distributions/Prior_and_Posterior_Distributions.ipynb | dcroce/jupyter-book | 9ac4b502af8e8c5c3b96f5ec138602a0d3d8a624 | ["MIT"] | null | null | null | miscellaneous_notebooks/OLD/Prior_and_Posterior_Distributions/Prior_and_Posterior_Distributions.ipynb | dcroce/jupyter-book | 9ac4b502af8e8c5c3b96f5ec138602a0d3d8a624 | ["MIT"] | null | null | null | miscellaneous_notebooks/OLD/Prior_and_Posterior_Distributions/Prior_and_Posterior_Distributions.ipynb | dcroce/jupyter-book | 9ac4b502af8e8c5c3b96f5ec138602a0d3d8a624 | ["MIT"] | null | null | null | 45.533333 | 451 | 0.674963 |
[
[
[
"# HIDDEN\nfrom datascience import *\nfrom prob140 import *\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.style.use('fivethirtyeight')\n%matplotlib inline\nfrom scipy import stats",
"_____no_output_____"
]
],
[
[
"# Prior and Posterior Distributions #\nIn Data 8 we defined a parameter as a number associated with a population or with a distribution in a model. In all of the inference we have done so far, we have assumed that parameters are fixed numbers, possibly unknown. We have developed methods of estimation that attempt to capture the parameter in confidence intervals. \n\nBut there is another way of thinking about unknown numbers. Instead of imagining them as fixed, we can think of them as random, with the randomness coming in through our own degree of uncertainty about them. For example, if we think that the chance that a kind of email message is a phishing attempt is somewhere around 70%, then we can imagine the chance itself to be random, picked from a distribution that puts much of its mass around 70%.\n\nIf the distribution represents our belief at the outset of our analysis, we can call it a *prior* distribution. Once we have gathered data about various kinds of email messages and whether or not they are phishing attempts, we can update our belief based on the data. We can represent this updated opinion as a *posterior* distribution, calculated after the data have been collected. The calculation is almost invariably by Bayes' Rule.\n\nIn this way of thinking, we express our opinions as distributions on the space of parameters. For example, if we are running Bernoulli trials but are uncertain about the probability of success, we might want to think of the unit interval as the space of parameters. That is the main focus of this chapter.\n\nBefore we get started, it is worthwhile to remind ourselves of what we already know about conditioning on continuous random variables. We know that if $X$ and $Y$ have joint density $f$, then the conditional density of $Y$ given $X = x$ can be defined as\n\n$$\nf_{Y \\mid X = x} (y) ~ = ~ \\frac{f(x, y)}{f_X(x)}\n$$\n\nwhere $f_X$ is the marginal density of $X$. We had discussed what it means to \"condition on $X = x$\" when that event has probability zero. This chapter starts with a review of that discussion in a slightly different context, and then goes on to examine an area of probability that has acquired fundamental importance in machine learning.\n",
"_____no_output_____"
]
]
] |
[
"code",
"markdown"
] |
[
[
"code"
],
[
"markdown"
]
] |
4a0575a0134b9226186e1c0ce7b4640880543284 | 6,213 | ipynb | Jupyter Notebook | docs/_downloads/e9ab509705ca2d12c91d1f4103af830e/ng_generate_infile.ipynb | nunoedgarhubsoftphotoflow/py-fmas | 241d942fe0cd6a49001b1bf110dd32bccc86bb16 | ["MIT"] | 4 | 2021-04-28T07:02:54.000Z | 2022-01-25T13:15:49.000Z | docs/_downloads/e9ab509705ca2d12c91d1f4103af830e/ng_generate_infile.ipynb | nunoedgarhubsoftphotoflow/py-fmas | 241d942fe0cd6a49001b1bf110dd32bccc86bb16 | ["MIT"] | 3 | 2021-06-10T07:11:35.000Z | 2021-11-22T15:23:01.000Z | docs/_downloads/e9ab509705ca2d12c91d1f4103af830e/ng_generate_infile.ipynb | nunoedgarhubsoftphotoflow/py-fmas | 241d942fe0cd6a49001b1bf110dd32bccc86bb16 | ["MIT"] | 5 | 2021-05-20T08:53:44.000Z | 2022-01-25T13:18:34.000Z | 50.92623 | 1,569 | 0.56253 |
[
[
[
"%matplotlib inline",
"_____no_output_____"
]
],
[
[
"\n# Generating an input file\n\nThis examples shows how to generate an input file in HDF5-format, which can\nthen be processed by the `py-fmas` library code.\n\nThis is useful when the project-specific code is separate from the `py-fmas`\nlibrary code.\n\n.. codeauthor:: Oliver Melchert <[email protected]>\n",
"_____no_output_____"
],
[
"We start by importing the required `py-fmas` functionality. Since the\nfile-input for `py-fmas` is required to be provided in HDF5-format, we need\nsome python package that offers the possibility to read and write this\nformat. Here we opted for the python module h5py which is listed as one of\nthe dependencies of the `py-fmas` package.\n\n",
"_____no_output_____"
]
],
[
[
"import h5py\nimport numpy as np\nimport numpy.fft as nfft",
"_____no_output_____"
]
],
[
[
"We then define the desired propagation constant \n\n",
"_____no_output_____"
]
],
[
[
"def beta_fun_detuning(w):\n r'''Function defining propagation constant\n\n Implements group-velocity dispersion with expansion coefficients\n listed in Tab. I of Ref. [1]. Expansion coefficients are valid for\n :math:`lambda = 835\\,\\mathrm{nm}`, i.e. for :math:`\\omega_0 \\approx\n 2.56\\,\\mathrm{rad/fs}`.\n\n References:\n [1] J. M. Dudley, G. Genty, S. Coen,\n Supercontinuum generation in photonic crystal fiber,\n Rev. Mod. Phys. 78 (2006) 1135,\n http://dx.doi.org/10.1103/RevModPhys.78.1135\n\n Note:\n A corresponding propagation constant is implemented as function\n `define_beta_fun_PCF_Ranka2000` in `py-fmas` module\n `propatation_constant`.\n\n Args:\n w (:obj:`numpy.ndarray`): Angular frequency detuning.\n\n Returns:\n :obj:`numpy.ndarray` Propagation constant as function of\n frequency detuning.\n '''\n # ... EXPANSION COEFFICIENTS DISPERSION\n b2 = -1.1830e-2 # (fs^2/micron)\n b3 = 8.1038e-2 # (fs^3/micron)\n b4 = -0.95205e-1 # (fs^4/micron)\n b5 = 2.0737e-1 # (fs^5/micron)\n b6 = -5.3943e-1 # (fs^6/micron)\n b7 = 1.3486 # (fs^7/micron)\n b8 = -2.5495 # (fs^8/micron)\n b9 = 3.0524 # (fs^9/micron)\n b10 = -1.7140 # (fs^10/micron)\n # ... PROPAGATION CONSTANT (DEPENDING ON DETUNING)\n beta_fun_detuning = np.poly1d([b10/3628800, b9/362880, b8/40320,\n b7/5040, b6/720, b5/120, b4/24, b3/6, b2/2, 0., 0.])\n return beta_fun_detuning(w)",
"_____no_output_____"
]
],
[
[
"Next, we define all parameters needed to specify a simulation run \n\n",
"_____no_output_____"
]
],
[
[
"# -- DEFINE SIMULATION PARAMETERS\n# ... COMPUTATIONAL DOMAIN \nt_max = 3500. # (fs)\nt_num = 2**14 # (-)\nz_max = 0.1*1e6 # (micron)\nz_num = 4000 # (-)\nz_skip = 20 # (-)\nt = np.linspace(-t_max, t_max, t_num, endpoint=False)\nw = nfft.fftfreq(t.size, d=t[1]-t[0])*2*np.pi\n# ... MODEL SPECIFIC PARAMETERS \n# ... PROPAGATION CONSTANT\nc = 0.29979 # (fs/micron)\nlam0 = 0.835 # (micron)\nw0 = 2*np.pi*c/lam0 # (rad/fs)\nbeta_w = beta_fun_detuning(w-w0)\ngam0 = 0.11e-6 # (1/W/micron)\nn2 = gam0*c/w0 # (micron^2/W)\n# ... PARAMETERS FOR RAMAN RESPONSE \nfR = 0.18 # (-)\ntau1= 12.2 # (fs)\ntau2= 32.0 # (fs)\n# ... INITIAL CONDITION\nt0 = 28.4 # (fs)\nP0 = 1e4 # (W)\nE_0t_fun = lambda t: np.real(np.sqrt(P0)/np.cosh(t/t0)*np.exp(-1j*w0*t))\nE_0t = E_0t_fun(t)",
"_____no_output_____"
]
],
[
[
"The subsequent code will store the simulation parameters defined above to the\nfile `input_file.h5` in the current working directory.\n\n",
"_____no_output_____"
]
],
[
[
"def save_data_hdf5(file_path, data_dict):\n with h5py.File(file_path, 'w') as f:\n for key, val in data_dict.items():\n f.create_dataset(key, data=val)\n\ndata_dict = {\n 't_max': t_max,\n 't_num': t_num,\n 'z_min': 0.0,\n 'z_max': z_max,\n 'z_num': z_num,\n 'z_skip': z_skip,\n 'E_0t': E_0t,\n 'beta_w': beta_w,\n 'n2': n2,\n 'fR': fR,\n 'tau1': tau1,\n 'tau2': tau2,\n 'out_file_path': 'out_file.h5'\n}\n\nsave_data_hdf5('input_file.h5', data_dict)",
"_____no_output_____"
]
],
[
[
"An example, showing how to use `py-fmas` as a black-box simulation tool that\nperforms a simulation run for the propagation scenario stored under the file\n`input_file.h5` is available under the link below:\n\n`sphx_glr_auto_tutorials_basics_g_app.py`\n\n",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4a057e06d0d2387b22862edc5df0bf473f6dee50 | 12,068 | ipynb | Jupyter Notebook | EDA/EDA.ipynb | BengaliAI/asr2019 | c645cecd410e04f3a42333175abff71de01a10c0 | ["MIT"] | 1 | 2018-09-26T18:43:02.000Z | 2018-09-26T18:43:02.000Z | EDA/EDA.ipynb | BengaliAI/asr2019 | c645cecd410e04f3a42333175abff71de01a10c0 | ["MIT"] | 1 | 2020-01-25T13:11:32.000Z | 2020-01-25T16:29:33.000Z | EDA/EDA.ipynb | BengaliAI/asr2019 | c645cecd410e04f3a42333175abff71de01a10c0 | ["MIT"] | null | null | null | 24.780287 | 149 | 0.345873 |
[
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport pandas as pd\nfrom keras.preprocessing.text import text_to_word_sequence\nimport numpy as np\nfrom tqdm import tqdm\nfrom collections import Counter\nfrom pprint import pprint\n\ndf = pd.read_csv('./utt_spk_text.tsv', sep='\\t')\n\ndf.columns = ['id1', 'id2', 'transcript']",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"# Breaking a sentence into list of words then storing it\ntokenized = [ text_to_word_sequence(sentence) for sentence in tqdm(df['transcript']) ]",
"100%|██████████| 127564/127564 [00:01<00:00, 113768.04it/s]\n"
],
[
"# Squishing the 2d list into 1d\nall_tokens = np.hstack(tokenized).tolist()",
"_____no_output_____"
],
[
"# Token Counter\ntoken_counter = Counter(all_tokens)",
"_____no_output_____"
],
[
"# Unique Tokens Found in the dataset\nunique_tokens = list(token_counter.keys())",
"_____no_output_____"
],
[
"print(unique_tokens[:10])",
"['এ', 'ধরণের', 'কার্ড', 'নিয়ে', 'হতে', 'উপার্জিত', 'অর্থ', 'হাসির', 'বিষয়', 'হয়েই']\n"
],
[
"# Vocabulary size\nvocabulary_size = len(unique_tokens)",
"_____no_output_____"
],
[
"print(vocabulary_size)",
"142002\n"
]
],
[
[
"# Most N common tokens",
"_____no_output_____"
]
],
[
[
"token_counter.most_common(100)",
"_____no_output_____"
]
],
[
[
"# Least N Tokens",
"_____no_output_____"
]
],
[
[
"tokens_with_counts = list(sorted([ (token, count) for token, count in zip(token_counter.keys(), token_counter.values()) ], key=lambda x: x[1]))",
"_____no_output_____"
],
[
"pprint(tokens_with_counts[:100])",
"[('অন্তর্ভুক্তও', 1),\n ('মার্কিনি', 1),\n ('বাবুর্চি', 1),\n ('লকার্নো', 1),\n ('‘এককাট্টা', 1),\n ('শ্রেণী’', 1),\n ('আসানকারীদের', 1),\n ('প্রচারপত্রে', 1),\n ('বিনোদনপত্রিকায়', 1),\n ('আরামদায়ক', 1),\n ('ভবানীপুর', 1),\n ('খোয়াইলাম', 1),\n ('কাব্যকারের', 1),\n ('ইউনিয়নকারী', 1),\n ('কবিওয়ালের', 1),\n ('সারবত্তা', 1),\n ('ধপধপাইয়া', 1),\n ('হাঁটাটি', 1),\n ('দেমোফোনের', 1),\n ('আন্ন্যালাচা', 1),\n ('সেপাইরা', 1),\n ('স্বচ্ছতার', 1),\n ('৪৫মিনিটে', 1),\n ('খোঁপায়', 1),\n ('মহাসঙ্কট', 1),\n ('আলকাউসার', 1),\n ('সমাঝোতার', 1),\n ('ছটু', 1),\n ('ম্যাথিউসের', 1),\n ('পারবেন।', 1),\n ('মেমোরি', 1),\n ('সরকারপ্রধান', 1),\n ('সঞ্চয়', 1),\n ('জগতকে', 1),\n ('খুড়তুতো', 1),\n ('ঝরনার', 1),\n ('বাধ্যই', 1),\n ('মম’র', 1),\n ('বলিভিয়া', 1),\n ('চেয়েছেন।', 1),\n ('ঠাহর', 1),\n ('জিরাবাটা', 1),\n ('দিয়েনাড়ুন', 1),\n ('আর্কাইভিস্ট', 1),\n ('আর্লউইন', 1),\n ('মূঢ়', 1),\n ('কর্ণকুহর', 1),\n ('সিদ্ধান্তগুলো', 1),\n ('দৃষ্টিভঙ্গি’তে', 1),\n ('ক্ষুধা', 1),\n ('খোঁপা', 1),\n ('মর্জিশাসিত', 1),\n ('গোলমুখে', 1),\n ('১৩২', 1),\n ('গুলগুলিয়া', 1),\n ('ভলিতে', 1),\n ('‘মোশি’।', 1),\n ('উতপাদন', 1),\n ('টেট্রাঅ্যামিন', 1),\n ('ধর্মবর্ণের', 1),\n ('আমেরিকানই', 1),\n ('হিস্পানিক', 1),\n ('কমিউনিটির', 1),\n ('নরোডোম', 1),\n ('শিয়ামনি', 1),\n ('মেলান', 1),\n ('সূর্যোদয়', 1),\n ('রতিক্রিয়ার', 1),\n ('চতুর্মাত্রিক', 1),\n ('লাম্বের', 1),\n ('ভ্রম', 1),\n ('একবছরে', 1),\n ('নারীদেরকে', 1),\n ('স্বাস্থ্যও', 1),\n ('কালাসোনা', 1),\n ('কৃষিকাজই', 1),\n ('শুষে', 1),\n ('তত্ত্বাবধানের', 1),\n ('চুড়িগুলি', 1),\n ('ধানক্ষেতে', 1),\n ('দাবি।', 1),\n ('ইকুউপমেন্ট', 1),\n ('এ্যাটাক', 1),\n ('ইতিহাসবিদেরা', 1),\n ('অধিবাসীদেরই', 1),\n ('হাঁড়ি', 1),\n ('সহযোগিতাপূর্ণ', 1),\n ('ভরাডুবিতে', 1),\n ('সুরঙ্গ', 1),\n ('লাকমালের', 1),\n ('পুষ্ট', 1),\n ('সংখ্যাক', 1),\n ('স্ট্র্যাপের', 1),\n ('মালিনী', 1),\n ('গুইলেরমো', 1),\n ('কালোটাকার', 1),\n ('উপবিষ্ট', 1),\n ('পলকের', 1),\n (\"বেঈমান'\", 1),\n ('তয়ের', 1)]\n"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4a058ae12a1cb6fcf064cf5c80e1d244eb05f5f5 | 37,495 | ipynb | Jupyter Notebook | VacationPy/VacationPy.ipynb | rew0809/python-api-challenge | a4e828e490d381cb799d5270ff97f86cdbfbf928 | ["ADSL"] | null | null | null | VacationPy/VacationPy.ipynb | rew0809/python-api-challenge | a4e828e490d381cb799d5270ff97f86cdbfbf928 | ["ADSL"] | null | null | null | VacationPy/VacationPy.ipynb | rew0809/python-api-challenge | a4e828e490d381cb799d5270ff97f86cdbfbf928 | ["ADSL"] | null | null | null | 33.748875 | 156 | 0.35114 |
[
[
[
"# VacationPy\n----\n\n#### Note\n* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.",
"_____no_output_____"
]
],
[
[
"# Dependencies and Setup\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\nimport requests\nimport gmaps\nimport os\nfrom pprint import pprint as pp\n\n# Import API key\nfrom api_keys import g_key",
"_____no_output_____"
]
],
[
[
"### Store Part I results into DataFrame\n* Load the csv exported in Part I to a DataFrame",
"_____no_output_____"
]
],
[
[
"city_data = '../WeatherPy/city_data_REW.csv'\ncity_data_df = pd.read_csv(city_data)\ncity_data_df = city_data_df.dropna()\ncity_data_df",
"_____no_output_____"
]
],
[
[
"### Humidity Heatmap\n* Configure gmaps.\n* Use the Lat and Lng as locations and Humidity as the weight.\n* Add Heatmap layer to map.",
"_____no_output_____"
]
],
[
[
"#configure gmaps\ngmaps.configure(api_key=g_key)",
"_____no_output_____"
],
[
"#defining locations using Lat/Lng and humidity\nlocations = city_data_df[[\"Latitude\", \"Longitude\"]].astype(float)\nhumidity = city_data_df[\"Humidity (%)\"].astype(float)\n#printing map and defining weights using Humidity\nfig = gmaps.figure()\n\nheat_layer = gmaps.heatmap_layer(locations, weights=humidity, \n dissipating=False, max_intensity=100,\n point_radius = 1)\n\nfig.add_layer(heat_layer)\n\nfig",
"_____no_output_____"
]
],
[
[
"### Create new DataFrame fitting weather criteria\n* Narrow down the cities to fit weather conditions.\n* Drop any rows will null values.",
"_____no_output_____"
]
],
[
[
"#constructing new data frame with vacation parameters\nvacation_city_df = city_data_df.loc[city_data_df[\"Wind Speed (MPH)\"] < 10]\nvacation_city_df = vacation_city_df.loc[(vacation_city_df[\"Temperature (F)\"] < 80) | (vacation_city_df[\"Temperature (F)\"] > 70) ]\nvacation_city_df = vacation_city_df.loc[vacation_city_df[\"Cloudiness (%)\"] == 0]\nvacation_city_df = vacation_city_df.loc[vacation_city_df[\"Humidity (%)\"] < 30]\nif len(vacation_city_df) > 10:\n vacation_city_df = vacation_city_df[:-(len(vacation_city_df)-10)]\nvacation_city_df",
"_____no_output_____"
]
],
[
[
"### Hotel Map\n* Store into variable named `hotel_df`.\n* Add a \"Hotel Name\" column to the DataFrame.\n* Set parameters to search for hotels with 5000 meters.\n* Hit the Google Places API for each city's coordinates.\n* Store the first Hotel result into the DataFrame.\n* Plot markers on top of the heatmap.",
"_____no_output_____"
]
],
[
[
"#defining new data frame and adding hotel column\nhotel_df = vacation_city_df\nhotel_df[\"Hotel Name\"] = \"\"\nhotel_df",
"_____no_output_____"
],
[
"#hitting google places API for hotel data\nhotels = []\nfor i, row in hotel_df.iterrows():\n lat = row[\"Latitude\"] \n lng = row[\"Longitude\"] \n target_radius = 5000\n\n # set up a parameters dictionary\n params = {\n \"location\": f\"{lat}, {lng}\",\n \"type\": \"lodging\",\n \"key\": g_key,\n \"radius\": target_radius,\n }\n base_url = \"https://maps.googleapis.com/maps/api/place/nearbysearch/json\"\n response = requests.get(base_url, params=params).json()\n #row[\"Hotel Name\"] = response[\"results\"][0][\"name\"]\n try:\n hotels.append(response[\"results\"][0][\"name\"])\n except(IndexError):\n hotels.append(\"No Hotels in Target Radius\")\nhotel_df[\"Hotel Name\"] = hotels\nhotel_df",
"_____no_output_____"
],
[
"# NOTE: Do not change any of the code in this cell\n\n# Using the template add the hotel marks to the heatmap\ninfo_box_template = \"\"\"\n<dl>\n<dt>Name</dt><dd>{Hotel Name}</dd>\n<dt>City</dt><dd>{City}</dd>\n<dt>Country</dt><dd>{Country}</dd>\n</dl>\n\"\"\"\n# Store the DataFrame Row\n# NOTE: be sure to update with your DataFrame name\nhotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()]\nlocations = hotel_df[[\"Latitude\", \"Longitude\"]]\n",
"_____no_output_____"
],
[
"# Add marker layer ontop of heat map\n#Plot the hotels on top of the humidity heatmap with each pin containing the Hotel Name, City, and Country.\nmarker_layer = gmaps.marker_layer(locations, info_box_content=hotel_info)\nfig.add_layer(marker_layer)\n# Display figure\nfig",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
4a0590caf5fe1921813f140c7895d4805a8e39c5 | 4,401 | ipynb | Jupyter Notebook | Chapter 1/6_Defining networks using simple & efficient code with Gluon.ipynb | Anacoder1/Python_DeepLearning_Cookbook | 0a26b3948930333a357193c979b18269e1772651 | ["MIT"] | null | null | null | Chapter 1/6_Defining networks using simple & efficient code with Gluon.ipynb | Anacoder1/Python_DeepLearning_Cookbook | 0a26b3948930333a357193c979b18269e1772651 | ["MIT"] | null | null | null | Chapter 1/6_Defining networks using simple & efficient code with Gluon.ipynb | Anacoder1/Python_DeepLearning_Cookbook | 0a26b3948930333a357193c979b18269e1772651 | ["MIT"] | 2 | 2019-11-29T02:23:59.000Z | 2020-11-30T06:49:29.000Z | 24.314917 | 102 | 0.441945 |
[
[
[
"**Import gluon**",
"_____no_output_____"
]
],
[
[
"import mxnet as mx\nfrom mxnet import gluon",
"_____no_output_____"
]
],
[
[
"**Creating some dummy data. For this, we need the data to be in MXNet's NDArray or Symbol:**",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\nx_input = mx.nd.empty((1, 5), mx.cpu())\nx_input[:] = np.array([[1, 2, 3, 4, 5]], np.float32)\n\ny_input = mx.nd.empty((1, 5), mx.cpu())\ny_input[:] = np.array([[10, 15, 20, 22.5, 25]], np.float32)",
"_____no_output_____"
]
],
[
[
"**With Gluon, it's really straightforward to build a neural network by stacking layers:**",
"_____no_output_____"
]
],
[
[
"net = gluon.nn.Sequential()\n\nwith net.name_scope():\n net.add(gluon.nn.Dense(16, activation = \"relu\"))\n net.add(gluon.nn.Dense(len(y_input)))",
"_____no_output_____"
]
],
[
[
"**Next, we initialize the parameters and we store these on our CPU as follows:**",
"_____no_output_____"
]
],
[
[
"net.collect_params().initialize(mx.init.Normal(), ctx = mx.cpu())",
"_____no_output_____"
]
],
[
[
"**With the following code, we set the loss function and the optimizer:**",
"_____no_output_____"
]
],
[
[
"softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()\n\ntrainer = gluon.Trainer(net.collect_params(), 'adam',\n {'learning_rate': .1})",
"_____no_output_____"
]
],
[
[
"**Finally, we train the model**",
"_____no_output_____"
]
],
[
[
"n_epochs = 10\n\nfor e in range(n_epochs):\n for i in range(len(x_input)):\n input = x_input[i]\n target = y_input[i]\n with mx.autograd.record():\n output = net(input)\n loss = softmax_cross_entropy(output, target)\n loss.backward()\n trainer.step(input.shape[0])",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a059407d936834249d70b8f8db5ce249f8f277f | 15,271 | ipynb | Jupyter Notebook | RedditPeruser.ipynb | nikmandava/cs194-project | cfc764f697d706ed41d841ebfd9ffc9e827f1855 | ["Apache-2.0"] | null | null | null | RedditPeruser.ipynb | nikmandava/cs194-project | cfc764f697d706ed41d841ebfd9ffc9e827f1855 | ["Apache-2.0"] | null | null | null | RedditPeruser.ipynb | nikmandava/cs194-project | cfc764f697d706ed41d841ebfd9ffc9e827f1855 | ["Apache-2.0"] | null | null | null | 79.536458 | 1,959 | 0.678279 |
[
[
[
"import praw",
"_____no_output_____"
],
[
"reddit = praw.Reddit(\n user_agent= \"Automatic Reddit Relationship Reply bot v0.0 (by /u/rdrlrp )\",\n client_id=\"TiHrTwOaVXjsPQ\",\n client_secret=\"eelQVm-F-mB7k4549X-jP-Kdgkyokw\",\n username=\"rdrlrp\",\n password=\"Cs194Project\",\n)",
"_____no_output_____"
],
[
"posted_ids = set()\nsubreddit = reddit.subreddit(\"relationships\")\ni = 0\nfor submission in subreddit.stream.submissions():\n if submission.id not in posted_ids:\n print(i)\n print(\"Post ID\")\n print(submission.id)\n print(\"POST TIME\")\n print(submission.created_utc)\n posted_ids.add(submission.id)\n formatted = \"<BOT> \" + submission.title.strip().replace('\\n', ' ') + \" <EOT> <BOP> \" + submission.selftext.replace('\\n', ' ') + \" <EOP> <BOC>\"\n print(formatted)\n break\n# print(\"TITLE\")\n# print(submission.title.replace('\\n', ' '))\n# print(\"BODY OF POST\")\n# print(submission.selftext.replace('\\n', ' '))\n i += 1",
"0\nPost ID\nmv44f0\nPOST TIME\n1618964014.0\n<BOT> How do I[M28] express this one pet peeve I have to my gf[F27] of 1 month? <EOT> <BOP> So for as long as I can remember, I have this pet peeve about using other people's blankets, sheets, pillows, towels, etc. I cant do it unless they're clean. Like straight outta the washer and dryer clean. I dont know what it is but I can't stand using other people's bedding or other people using mine. This is only my second relationship, and my first in almost 5 years. The last one wasn't that hard to deal with cause due to her racist mom I never slept over. And due to my super uptight prudish mom she never came over to mine. So we stayed at motels a lot But im in a place now I can bring people over and stuff. But I dont know how to bring this up to my gf without being or coming off weird I dont want to weird her out first time we have sex in my bed and the first thing I do is rip all the bedding off and put it in the wash Tl;dr have a big pet peeve about using other peoples or other people using my bedding. Have no idea how to reveal this to new gf without weirding her out <EOP> <BOC>\n"
],
[
"!pwd",
"/Users/nikmandava/cs194fsdl/cs194-project\r\n"
],
[
"print(formatted)",
"<BOT> How do I[M28] express this one pet peeve I have to my gf[F27] of 1 month? <EOT> <BOP> So for as long as I can remember, I have this pet peeve about using other people's blankets, sheets, pillows, towels, etc. I cant do it unless they're clean. Like straight outta the washer and dryer clean. I dont know what it is but I can't stand using other people's bedding or other people using mine. This is only my second relationship, and my first in almost 5 years. The last one wasn't that hard to deal with cause due to her racist mom I never slept over. And due to my super uptight prudish mom she never came over to mine. So we stayed at motels a lot But im in a place now I can bring people over and stuff. But I dont know how to bring this up to my gf without being or coming off weird I dont want to weird her out first time we have sex in my bed and the first thing I do is rip all the bedding off and put it in the wash Tl;dr have a big pet peeve about using other peoples or other people using my bedding. Have no idea how to reveal this to new gf without weirding her out <EOP> <BOC>\n"
],
[
"from run_generation import generate_text",
"_____no_output_____"
],
[
"result = generate_text(formatted)",
"04/21/2021 08:05:37 - WARNING - run_generation - device: cpu, n_gpu: 0, 16-bits training: False\n04/21/2021 08:05:41 - INFO - run_generation - Namespace(device=device(type='cpu'), fp16=False, k=50, length=300, model_name_or_path='/Users/nikmandava/Downloads/checkpoint-82000', model_type='gpt2', n_gpu=0, no_cuda=False, num_return_sequences=5, p=0.9, padding_text='', prefix='', prompt=\"<BOT> How do I[M28] express this one pet peeve I have to my gf[F27] of 1 month? <EOT> <BOP> So for as long as I can remember, I have this pet peeve about using other people's blankets, sheets, pillows, towels, etc. I cant do it unless they're clean. Like straight outta the washer and dryer clean. I dont know what it is but I can't stand using other people's bedding or other people using mine. This is only my second relationship, and my first in almost 5 years. The last one wasn't that hard to deal with cause due to her racist mom I never slept over. And due to my super uptight prudish mom she never came over to mine. So we stayed at motels a lot But im in a place now I can bring people over and stuff. But I dont know how to bring this up to my gf without being or coming off weird I dont want to weird her out first time we have sex in my bed and the first thing I do is rip all the bedding off and put it in the wash Tl;dr have a big pet peeve about using other peoples or other people using my bedding. Have no idea how to reveal this to new gf without weirding her out <EOP> <BOC>\", repetition_penalty=1.0, seed=42, stop_token='<EOC>', temperature=1.0, xlm_language='')\nSetting `pad_token_id` to `eos_token_id`:50256 for open-end generation.\n"
],
[
"print(result)",
" > I cant stand using other people's bedding or other people using my bedding I have said \"no\" many times. I'm sure most people would agree with this. But it's weird and I just dont know how to bring it up to her. We have been dating for just under a year. I dont know what to say to her right now. \n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a0597b0d98ba943885c9532c3b1c788130213e5
| 9,448 |
ipynb
|
Jupyter Notebook
|
train_churn_XGBoost.ipynb
|
davitbzh/churn
|
aaa26bca752af8b690748580acd194790a6e01a0
|
[
"Apache-2.0"
] | null | null | null |
train_churn_XGBoost.ipynb
|
davitbzh/churn
|
aaa26bca752af8b690748580acd194790a6e01a0
|
[
"Apache-2.0"
] | null | null | null |
train_churn_XGBoost.ipynb
|
davitbzh/churn
|
aaa26bca752af8b690748580acd194790a6e01a0
|
[
"Apache-2.0"
] | null | null | null | 32.027119 | 524 | 0.567739 |
[
[
[
"# Train Telecom Customer Churn Prediction with XGBoost",
"_____no_output_____"
],
[
"This tutorial is based on [this](https://www.kaggle.com/pavanraj159/telecom-customer-churn-prediction/comments#6.-Model-Performances) Kaggle notebook and [this](https://github.com/gojek/feast/tree/master/examples/feast-xgboost-churn-prediction-tutorial) Feast notebook",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nfrom hops import featurestore, hdfs\nfrom hops import numpy_helper as numpy\nfrom hops import pandas_helper as pandas\nimport os\nimport itertools\nimport warnings\nwarnings.filterwarnings(\"ignore\")\nimport io\nimport statsmodels, yellowbrick\nimport sklearn # Tested with 0.22.1\nimport imblearn\nfrom slugify import slugify",
"Starting Spark application\n"
]
],
[
[
"### 1.1 Data",
"_____no_output_____"
]
],
[
[
"telecom_df = featurestore.get_featuregroup(\"telcom_featuregroup\", dataframe_type=\"pandas\")\ntelecom_df.head()",
"Running sql: use telecom_featurestore against offline feature store\nSQL string for the query created successfully\nRunning sql: SELECT * FROM telcom_featuregroup_1 against offline feature store\n churn ... tenure_group_tenure_gt_60\n0 0 ... 0\n1 1 ... 0\n2 1 ... 0\n3 0 ... 0\n4 1 ... 0\n\n[5 rows x 47 columns]"
]
],
[
[
"### 1.6 Data Preparation for Training",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import confusion_matrix,accuracy_score,classification_report\nfrom sklearn.metrics import roc_auc_score,roc_curve,scorer\nfrom sklearn.metrics import f1_score\nimport statsmodels.api as sm\nfrom sklearn.metrics import precision_score,recall_score\nfrom yellowbrick.classifier import DiscriminationThreshold\n\nId_col = ['customer_id']\ntarget_col = [\"churn\"]\n# Split into a train and test set\ntrain, test = train_test_split(telecom_df,test_size = .25 ,random_state = 111)\n \n# Separating dependent and independent variables\ncols = [i for i in telecom_df.columns if i not in Id_col + target_col]\ntraining_x = train[cols]\ntraining_y = train[target_col]\ntesting_x = test[cols]\ntesting_y = test[target_col]",
"_____no_output_____"
]
],
[
[
"### 1.7 Training",
"_____no_output_____"
]
],
[
[
"from xgboost import XGBClassifier\n\nxgb_model = XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,\n colsample_bytree=1, gamma=0, learning_rate=0.9, max_delta_step=0,\n max_depth=7, min_child_weight=1, missing=None, n_estimators=100,\n n_jobs=1, nthread=None, objective='binary:logistic', random_state=0,\n reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,\n silent=True, subsample=1)\n\n# Train model\nxgb_model.fit(training_x, training_y)\npredictions = xgb_model.predict(testing_x)\nprobabilities = xgb_model.predict_proba(testing_x)",
"_____no_output_____"
]
],
[
[
"### 1.8 Analysis",
"_____no_output_____"
]
],
[
[
"coefficients = pd.DataFrame(xgb_model.feature_importances_)\ncolumn_df = pd.DataFrame(cols)\ncoef_sumry = (pd.merge(coefficients, column_df, left_index=True,\n right_index=True, how=\"left\"))\ncoef_sumry.columns = [\"coefficients\", \"features\"]\ncoef_sumry = coef_sumry.sort_values(by=\"coefficients\", ascending=False)\n\nacc = accuracy_score(testing_y, predictions)\nprint(xgb_model)\nprint(\"\\n Classification report : \\n\", classification_report(testing_y, predictions))\nprint(\"Accuracy Score : \", acc)\n",
"XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,\n colsample_bynode=1, colsample_bytree=1, gamma=0,\n learning_rate=0.9, max_delta_step=0, max_depth=7,\n min_child_weight=1, missing=None, n_estimators=100, n_jobs=1,\n nthread=None, objective='binary:logistic', random_state=0,\n reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,\n silent=True, subsample=1, verbosity=1)\n\n Classification report : \n precision recall f1-score support\n\n 0 0.82 0.86 0.84 1282\n 1 0.57 0.50 0.54 476\n\n accuracy 0.76 1758\n macro avg 0.70 0.68 0.69 1758\nweighted avg 0.76 0.76 0.76 1758\n\nAccuracy Score : 0.7639362912400455"
],
[
"from hops import model\nimport pickle\nMODEL_NAME = \"XGBoost_Churn_Classifier\"\nfile_name = \"xgb_reg.pkl\"\nhdfs_path = \"Resources/xgboost_model\"\n\npickle.dump(xgb_model, open(file_name, \"wb\"))\nhdfs.mkdir(hdfs_path)\nhdfs.copy_to_hdfs(file_name, hdfs_path, overwrite=True)\n\n# test that we can load and use the model\nxgb_model_loaded = pickle.load(open(file_name, \"rb\"))\nxgb_model_loaded.predict(testing_x)[0] == xgb_model.predict(testing_x)[0]\n\n# save to the model registry\nmodel.export(hdfs_path, MODEL_NAME, metrics={'accuracy': acc})",
"Started copying local path xgb_reg.pkl to hdfs path hdfs://rpc.namenode.service.consul:8020/Projects/telecom/Resources/xgboost_model/xgb_reg.pkl\n\nFinished copying\n\nExported model XGBoost_Churn_Classifier as version 1 successfully.\nPolling XGBoost_Churn_Classifier version 1 for model availability.\nModel now available."
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4a05a01bc77920459b74baf47735b1205e65d3db
| 8,372 |
ipynb
|
Jupyter Notebook
|
2_aging_signature/archive_initial_submission/DE_tissue_droplet.ipynb
|
gmstanle/msc_aging
|
3ea74dcfc48bc530ee47581ffe60e42f15b9178d
|
[
"MIT"
] | null | null | null |
2_aging_signature/archive_initial_submission/DE_tissue_droplet.ipynb
|
gmstanle/msc_aging
|
3ea74dcfc48bc530ee47581ffe60e42f15b9178d
|
[
"MIT"
] | null | null | null |
2_aging_signature/archive_initial_submission/DE_tissue_droplet.ipynb
|
gmstanle/msc_aging
|
3ea74dcfc48bc530ee47581ffe60e42f15b9178d
|
[
"MIT"
] | null | null | null | 29.375439 | 424 | 0.558528 |
[
[
[
"import pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set(style=\"whitegrid\")\nimport numpy as np\nimport scanpy.api as sc\nfrom anndata import read_h5ad\nfrom anndata import AnnData\nimport scipy as sp\nimport scipy.stats\nfrom gprofiler import GProfiler\nimport pickle\n# Other specific functions \nfrom itertools import product\nfrom statsmodels.stats.multitest import multipletests\nimport util\n# R related packages \nimport rpy2.rinterface_lib.callbacks\nimport logging\nfrom rpy2.robjects import pandas2ri\nimport anndata2ri",
"/home/martin/anaconda3/lib/python3.6/site-packages/sklearn/externals/joblib/__init__.py:15: DeprecationWarning: sklearn.externals.joblib is deprecated in 0.21 and will be removed in 0.23. Please import this functionality directly from joblib, which can be installed with: pip install joblib. If this warning is raised when loading pickled models, you may need to re-serialize those models with scikit-learn 0.21+.\n warnings.warn(msg, category=DeprecationWarning)\n"
],
[
"# Ignore R warning messages\n#Note: this can be commented out to get more verbose R output\nrpy2.rinterface_lib.callbacks.logger.setLevel(logging.ERROR)\n# Automatically convert rpy2 outputs to pandas dataframes\npandas2ri.activate()\nanndata2ri.activate()\n%load_ext rpy2.ipython\n# autoreload\n%load_ext autoreload\n%autoreload 2\n# logging\nsc.logging.print_versions()",
"scanpy==1.4.3 anndata==0.6.20 umap==0.3.8 numpy==1.16.4 scipy==1.2.1 pandas==0.25.0 scikit-learn==0.21.1 statsmodels==0.9.0 python-igraph==0.7.1 louvain==0.6.1 \n"
],
[
"%%R\nlibrary(MAST)",
"_____no_output_____"
]
],
[
[
"## Load data",
"_____no_output_____"
]
],
[
[
"# Data path\ndata_path = '/data3/martin/tms_gene_data'\noutput_folder = data_path + '/DE_result'",
"_____no_output_____"
],
[
"# Load the data \nadata_combine = util.load_normalized_data(data_path)",
"_____no_output_____"
],
[
"temp_facs = adata_combine[adata_combine.obs['b_method']=='facs',]\ntemp_droplet = adata_combine[adata_combine.obs['b_method']=='droplet',]",
"_____no_output_____"
]
],
[
[
"## Generate a list of tissues for DE testing",
"_____no_output_____"
]
],
[
[
"tissue_list = list(set(temp_droplet.obs['tissue']))\nmin_cell_number = 1\nanalysis_list = []\nanalysis_info = {}\n# for cell_type in cell_type_list:\nfor tissue in tissue_list:\n analyte = tissue\n ind_select = (temp_droplet.obs['tissue'] == tissue)\n n_young = (temp_droplet.obs['age'][ind_select].isin(['1m', '3m'])).sum()\n n_old = (temp_droplet.obs['age'][ind_select].isin(['18m', '21m',\n '24m', '30m'])).sum()\n analysis_info[analyte] = {}\n analysis_info[analyte]['n_young'] = n_young\n analysis_info[analyte]['n_old'] = n_old\n if (n_young>min_cell_number) & (n_old>min_cell_number):\n print('%s, n_young=%d, n_old=%d'%(analyte, n_young, n_old))\n analysis_list.append(analyte)",
"Tongue, n_young=12044, n_old=8613\nHeart_and_Aorta, n_young=1362, n_old=6554\nLung, n_young=6541, n_old=21216\nSpleen, n_young=7844, n_old=21478\nLiver, n_young=3234, n_old=3246\nBladder, n_young=3450, n_old=5367\nLimb_Muscle, n_young=8210, n_old=16759\nThymus, n_young=1145, n_old=6425\nKidney, n_young=4317, n_old=14784\nMarrow, n_young=6842, n_old=35099\nMammary_Gland, n_young=4343, n_old=7049\n"
]
],
[
[
"### DE using R package MAST ",
"_____no_output_____"
]
],
[
[
"## DE testing\ngene_name_list = np.array(temp_droplet.var_names)\nDE_result_MAST = {}\nfor i_analyte,analyte in enumerate(analysis_list):\n print(analyte, '%d/%d'%(i_analyte, len(analysis_list)))\n tissue = analyte\n ind_select = (temp_droplet.obs['tissue'] == tissue)\n adata_temp = temp_droplet[ind_select,]\n # reformatting\n adata_temp.X = np.array(adata_temp.X.todense())\n adata_temp.obs['condition'] = [int(x[:-1]) for x in adata_temp.obs['age']] \n adata_temp.obs = adata_temp.obs[['condition', 'sex']]\n if len(set(adata_temp.obs['sex'])) <2:\n covariate = ''\n else:\n covariate = '+sex'\n# # toy example\n# covariate = ''\n# np.random.seed(0)\n# ind_select = np.random.permutation(adata_temp.shape[0])[0:100]\n# ind_select = np.sort(ind_select)\n# adata_temp = adata_temp[ind_select, 0:3]\n# adata_temp.X[:,0] = (adata_temp.obs['sex'] == 'male')*3\n# adata_temp.X[:,1] = (adata_temp.obs['condition'])*3\n # DE using MAST \n R_cmd = util.call_MAST_age()\n get_ipython().run_cell_magic(u'R', u'-i adata_temp -i covariate -o de_res', R_cmd)\n de_res.columns = ['gene', 'raw-p', 'coef', 'bh-p']\n de_res.index = de_res['gene']\n DE_result_MAST[analyte] = pd.DataFrame(index = gene_name_list)\n DE_result_MAST[analyte] = DE_result_MAST[analyte].join(de_res)\n # fc between yound and old\n X = adata_temp.X\n y = (adata_temp.obs['condition']>10)\n DE_result_MAST[analyte]['fc'] = X[y,:].mean(axis=0) - X[~y,:].mean(axis=0)\n# break",
"Tongue 0/11\nHeart_and_Aorta 1/11\nLung 2/11\nSpleen 3/11\nLiver 4/11\nBladder 5/11\nLimb_Muscle 6/11\nThymus 7/11\nKidney 8/11\nMarrow 9/11\nMammary_Gland 10/11\n"
]
],
[
[
"### Save DE results",
"_____no_output_____"
]
],
[
[
"with open(output_folder+'/DE_tissue_droplet.pickle', 'wb') as handle:\n pickle.dump(DE_result_MAST, handle)\n pickle.dump(analysis_list, handle)\n pickle.dump(analysis_info, handle)",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a05b66dc07be86758ca698357428e409b901b57
| 4,976 |
ipynb
|
Jupyter Notebook
|
notebooks/02/variables.ipynb
|
matthew-brett/cfd-uob
|
cc9233a26457f5e688ed6297ebbf410786cfd806
|
[
"CC-BY-4.0"
] | 1 |
2019-09-30T13:31:41.000Z
|
2019-09-30T13:31:41.000Z
|
notebooks/02/variables.ipynb
|
matthew-brett/cfd-uob
|
cc9233a26457f5e688ed6297ebbf410786cfd806
|
[
"CC-BY-4.0"
] | 1 |
2021-03-30T01:51:11.000Z
|
2021-03-30T01:51:11.000Z
|
notebooks/02/variables.ipynb
|
matthew-brett/cfd-uob
|
cc9233a26457f5e688ed6297ebbf410786cfd806
|
[
"CC-BY-4.0"
] | 5 |
2019-12-03T00:54:39.000Z
|
2020-09-21T14:30:43.000Z
| 21.174468 | 87 | 0.505828 |
[
[
[
"Variables are - things that vary.\n\nYou remember variables like $x$ and $y$ from mathematics.\n\nIn mathematics, we can use names, such as $x$ and $y$, to represent any value.\n\nIn the piece of mathematics below, we define $y$ given any value for $x$:\n\n$$\ny = 3x + 2\n$$\n\nWhen we have some value for $x$, we can get the corresponding value of $y$.\n\nWe give $x$ some value:\n\n$$\nx = 4\n$$\n\nWe apply the rule above to give a value to $y$:\n\n$$\ny = 3 * x + 2 \\\\\n= 3 * 4 + 2 \\\\\n= 14\n$$\n\n\"x\" is a name that refers to a value. We use the name to *represent* the\nvalue. In the expression above, $x$ represents the value 4. Of course we\ncould give $x$ any value.",
"_____no_output_____"
],
[
"Variables in Python work in the same way.\n\nVariables are *names* given to *values*. We can use the name to refer to the\nvalue in calculations.\n\nFor example, here I say that the *name* `x` refers to the *value* 4:",
"_____no_output_____"
]
],
[
[
"x = 4",
"_____no_output_____"
]
],
[
[
"Now I can calculate a value, using that name:",
"_____no_output_____"
]
],
[
[
"3 * x + 2",
"_____no_output_____"
]
],
[
[
"## Variables in expressions\n\nThis is an *expression*. You have seen expressions with numbers, but here we\nhave an expression with numbers and a variable. When Python sees the variable\nname `x` in this expression, it *evaluates* `x`, and gets the value that it\nrefers to. After Python has evaluated `x` to get 4, it ends up with this:",
"_____no_output_____"
]
],
[
[
"3 * 4 + 2",
"_____no_output_____"
]
],
[
[
"I can also give a name to the *result* of this expression - such as `y`:",
"_____no_output_____"
]
],
[
[
"y = 3 * x + 2",
"_____no_output_____"
],
[
"y",
"_____no_output_____"
]
],
[
[
"If I change `x`, I change the result:",
"_____no_output_____"
]
],
[
[
"x = 5\ny = 3 * x + 2\ny",
"_____no_output_____"
]
],
[
[
"Variables are essential in mathematics, to express general rules for\ncalculation. For the same reason, variables are essential in Python.",
"_____no_output_____"
],
[
"In mathematics, we usually prefer variables with very short names - $x$ or $y$\nor $p$. This makes them easier to read on the page.",
"_____no_output_____"
],
[
"In Python, and other programming languages, we can use variables with longer\nnames, and we usually do, because we use so many variables. Giving them\nlonger names helps us remember what the variables are for.\n\n[Next](Names) we go into more detail about names and expressions.",
"_____no_output_____"
]
]
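,
[
[
"Below is a small illustrative sketch that is not part of the original lesson: the variable name `number_of_students` is invented here purely to show a longer, descriptive name in use.",
"_____no_output_____"
],
[
"# a made-up example of a longer, descriptive variable name\nnumber_of_students = 25\n3 * number_of_students + 2",
"_____no_output_____"
]
]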
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
4a05cc27c4425294674342b549a72ed45873c95f
| 65,069 |
ipynb
|
Jupyter Notebook
|
Using bw2waterbalancer.ipynb
|
CIRAIG/bw2waterbalancer
|
aaeace9f71fe55cb17488d020467f471b32be3d8
|
[
"MIT"
] | 1 |
2021-01-27T22:16:56.000Z
|
2021-01-27T22:16:56.000Z
|
Using bw2waterbalancer.ipynb
|
CIRAIG/bw2waterbalancer
|
aaeace9f71fe55cb17488d020467f471b32be3d8
|
[
"MIT"
] | null | null | null |
Using bw2waterbalancer.ipynb
|
CIRAIG/bw2waterbalancer
|
aaeace9f71fe55cb17488d020467f471b32be3d8
|
[
"MIT"
] | null | null | null | 112.771231 | 10,632 | 0.867433 |
[
[
[
"# Using `bw2waterbalancer`\n\nNotebook showing typical usage of `bw2waterbalancer`",
"_____no_output_____"
],
[
"## Generating the samples",
"_____no_output_____"
],
[
"`bw2waterbalancer` works with Brightway2. You only need to set as current a project into which the database whose water exchanges you want to balance has been imported.",
"_____no_output_____"
]
],
[
[
"import brightway2 as bw\nimport numpy as np\nbw.projects.set_current('ei36cutoff')",
"_____no_output_____"
]
],
[
[
"The only Class you need is the `DatabaseWaterBalancer`:",
"_____no_output_____"
]
],
[
[
"from bw2waterbalancer import DatabaseWaterBalancer",
"_____no_output_____"
]
],
[
[
"Instantiating the DatabaseWaterBalancer will automatically identify activities that are associated with water exchanges. ",
"_____no_output_____"
]
],
[
[
"dwb = DatabaseWaterBalancer(\n ecoinvent_version=\"3.6\", # used to identify activities with water production exchanges\n database_name=\"ei36_cutoff\", #name the LCI db in the brightway2 project\n)",
"Validating data\nGetting information on technosphere water exchanges\n"
]
],
[
[
"Generating presamples for the whole database is a lengthy process. Thankfully, it only ever needs to be done once per database:",
"_____no_output_____"
]
],
[
[
"dwb.add_samples_for_all_acts(iterations=1000)",
"0% [##############################] 100% | ETA: 00:00:00\nTotal time elapsed: 02:59:48\n"
]
],
[
[
"The samples and associated indices are stored as attributes: ",
"_____no_output_____"
]
],
[
[
"dwb.matrix_samples",
"_____no_output_____"
],
[
"dwb.matrix_samples.shape",
"_____no_output_____"
],
[
"dwb.matrix_indices[0:10] # First ten indices",
"_____no_output_____"
],
[
"len(dwb.matrix_indices)",
"_____no_output_____"
]
],
[
[
"These can directly be used to generate [`presamples`](https://presamples.readthedocs.io/):",
"_____no_output_____"
]
],
[
[
"presamples_id, presamples_fp = dwb.create_presamples(\n name=None, #Could have specified a string as name, not passing anything will use automatically generated random name\n dirpath=None, #Could have specified a directory path to save presamples somewhere specific \n id_=None, #Could have specified a string as id, not passing anything will use automatically generated random id\n seed='sequential', #or None, or int.\n )",
"Presamples with id_ d531ce33faed4bea882f1ce1a61b8608 written at C:\\Users\\Pascal Lesage\\AppData\\Local\\pylca\\Brightway3\\ei36cutoff.e8d08b39952c787ab81510769bc7a655\\presamples\\d531ce33faed4bea882f1ce1a61b8608\n"
]
],
[
[
"## Using the samples\n\nThe samples are formatted for use in brightway2 via the presamples package. \n\nThe following function calculates: \n - Deterministic results, using `bw.LCA` \n - Stochastic results, using `bw.MonteCarloLCA` \n - Stochastic results using presamples, using `bw.MonteCarloLCA` and passing `presamples=[presamples_fp]` \n \nThe ratios of stochastic results to deterministic results are then plotted for Monte Carlo results with and without presamples. \nRatios for Monte Carlo with presamples are on the order of 1. \nRatios for Monte Carlo without presamples are much greater, up to two orders of magnitude for the randomly selected activities. ",
"_____no_output_____"
]
],
[
[
"def check_presamples_act(act_key, ps_fp, lcia_method, iterations=1000):\n \"\"\"Plot histrograms of Monte Carlo samples/det result for case w/ and w/o presamples\"\"\"\n lca = bw.LCA({act_key:1}, method=m)\n lca.lci()\n lca.lcia()\n \n mc_arr_wo = np.empty(shape=iterations)\n mc = bw.MonteCarloLCA({act_key:1}, method=m)\n for i in range(iterations):\n mc_arr_wo[i] = next(mc)/lca.score\n \n mc_arr_w = np.empty(shape=iterations)\n mc_w = bw.MonteCarloLCA({act_key:1}, method=m, presamples=[ps_fp])\n for i in range(iterations):\n mc_arr_w[i] = next(mc_w)/lca.score\n \n plt.hist(mc_arr_wo, histtype=\"step\", color='orange', label=\"without presamples\")\n plt.hist(mc_arr_w, histtype=\"step\", color='green', label=\"with presamples\")\n plt.legend()",
"_____no_output_____"
]
],
[
[
"Let's run this on a couple of random ecoinvent products with the ImpactWorld+ water scarcity LCIA method:",
"_____no_output_____"
]
],
[
[
"m=('IMPACTWorld+ (Default_Recommended_Midpoint 1.23)', 'Midpoint', 'Water scarcity')",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"act = bw.Database('ei36_cutoff').random()\nprint(\"Randomly working on \", act)\ncheck_presamples_act(act.key, presamples_fp, m)",
"Randomly working on 'treatment of waste gypsum, sanitary landfill' (kilogram, CH, None)\n"
],
[
"act = bw.Database('ei36_cutoff').random()\nprint(\"Randomly working on \", act)\ncheck_presamples_act(act.key, presamples_fp, m)",
"Randomly working on 'market for decommissioned pipeline, natural gas' (kilogram, CH, None)\n"
],
[
"act = bw.Database('ei36_cutoff').random()\nprint(\"Randomly working on \", act)\ncheck_presamples_act(act.key, presamples_fp, m)",
"Randomly working on 'mine construction, open cast, steatite' (unit, RoW, None)\n"
],
[
"act = bw.Database('ei36_cutoff').random()\nprint(\"Randomly working on \", act)\ncheck_presamples_act(act.key, presamples_fp, m)",
"Randomly working on 'printed wiring board mounting facility construction, surface mounting line' (unit, GLO, None)\n"
],
[
"act = bw.Database('ei36_cutoff').random()\nprint(\"Randomly working on \", act)\ncheck_presamples_act(act.key, presamples_fp, m)",
"Randomly working on 'electricity production, nuclear, pressure water reactor' (kilowatt hour, BG, None)\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a05d47126ea5e054c1b02e3784eccb7eaf165fc
| 4,183 |
ipynb
|
Jupyter Notebook
|
astr-119-final-project-pt2.ipynb
|
ericryan433/astr-119-final
|
c106d58b2d6013174060472e2effc406ae5c8eb9
|
[
"MIT"
] | null | null | null |
astr-119-final-project-pt2.ipynb
|
ericryan433/astr-119-final
|
c106d58b2d6013174060472e2effc406ae5c8eb9
|
[
"MIT"
] | null | null | null |
astr-119-final-project-pt2.ipynb
|
ericryan433/astr-119-final
|
c106d58b2d6013174060472e2effc406ae5c8eb9
|
[
"MIT"
] | null | null | null | 22.610811 | 156 | 0.532871 |
[
[
[
"import numpy as np\nimport sep\nfrom astropy.utils.data import get_pkg_data_filename\nfrom astropy.io import fits\nimport matplotlib.pyplot as plt\nfrom matplotlib import rcParams\n\n%matplotlib inline \n\nrcParams['figure.figsize'] = [20.,16.]",
"_____no_output_____"
],
[
"image_data = get_pkg_data_filename(\"/Users/ericryan/Downloads/hlsp_hudf12_hst_wfc3ir_udfmain_f105w_v1.0_drz.fits\")\n# read the FITS image into a numpy array before byte-swapping for sep\ndata = fits.getdata(image_data)\ndata = data.byteswap().newbyteorder()\n\nm, s = np.mean(data), np.std(data)\nplt.imshow(data, interpolation='nearest', cmap='gray', vmin=m-s, vmax=m+s, origin='lower')\nplt.colorbar()\nplt.savefig('data.png')",
"_____no_output_____"
],
[
"# estimate a spatially varying background from the image\nbkg = sep.Background(data)",
"_____no_output_____"
],
[
"print(bkg.globalback)\nprint(bkg.globalrms)",
"_____no_output_____"
],
[
"bkg_image = bkg.back()",
"_____no_output_____"
],
[
"plt.imshow(bkg_image, interpolation='nearest', cmap='gray', origin='lower')\nplt.colorbar()\nplt.savefig('bkg_image.png')",
"_____no_output_____"
],
[
"bkg_rms = bkg.rms()",
"_____no_output_____"
],
[
"plt.imshow(bkg_rms, interpolation='nearest', cmap='gray', origin='lower')\nplt.colorbar()\nplt.savefig('bkg_rms.png')",
"_____no_output_____"
],
[
"data_sub = data - bkg",
"_____no_output_____"
],
[
"# extract sources detected at 1.5 times the global background RMS\nobjects = sep.extract(data_sub, 1.5, err=bkg.globalrms)",
"_____no_output_____"
],
[
"len(objects)",
"_____no_output_____"
],
[
"from matplotlib.patches import Ellipse\n\nfig, ax = plt.subplots()\nm, s = np.mean(data_sub), np.std(data_sub)\nim = ax.imshow(data_sub, interpolation='nearest', cmap='gray', vmin=m-s, vmax=m+s, origin='lower')\n\nfor i in range(len(objects)):\n e = Ellipse(xy=(objects['x'][i], objects['y'][i]), width=6*objects['a'][i], height=6*objects['b'][i], angle=objects['theta'][i] * 180. / np.pi)\n e.set_facecolor('none')\n e.set_edgecolor('red')\n ax.add_artist(e)\n \nplt.savefig('detected_objects.png')",
"_____no_output_____"
],
[
"# aperture photometry: sum the flux in a circular aperture of radius 3 pixels around each detected source\nflux, fluxerr, flag = sep.sum_circle(data_sub, objects['x'], objects['y'], 3.0, err=bkg.globalrms, gain=1.0)",
"_____no_output_____"
],
[
"for i in range(10):\n print(\"object {:d}: flux = {:f} +/- {:f}\".format(i, flux[i], fluxerr[i]))",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a05d9e98b16c31d723a96adafca8c4d3dd15140
| 313,442 |
ipynb
|
Jupyter Notebook
|
Learning/SABRS_demo_stats1.ipynb
|
balarsen/pymc_learning
|
e4a077d492af6604a433433e64b835ce4ed0333a
|
[
"BSD-3-Clause"
] | null | null | null |
Learning/SABRS_demo_stats1.ipynb
|
balarsen/pymc_learning
|
e4a077d492af6604a433433e64b835ce4ed0333a
|
[
"BSD-3-Clause"
] | null | null | null |
Learning/SABRS_demo_stats1.ipynb
|
balarsen/pymc_learning
|
e4a077d492af6604a433433e64b835ce4ed0333a
|
[
"BSD-3-Clause"
] | 1 |
2017-05-23T16:38:55.000Z
|
2017-05-23T16:38:55.000Z
| 615.799607 | 46,186 | 0.936081 |
[
[
[
"# Brian Larsen - 28 April 2016 [email protected]\n# Overview\nThis is meant to be a proof-of-concept example of the SWx to Mission instrument prediction planned for AFTAC funding.\n\n## Abstract\nThere is currently no accepted method within SNDD for the time-dependent quantification of environmental background in these mission instruments; this work aims to provide that capability. This work provides a preliminary proof of concept of the techniques in order to both show utility and provide a baseline for improvement. The methods incorporated here are Bayesian MCMC curve fitting and Bayesian posterior sampling. The approach is to utilize SWx and mission-instrument data to find (linear) correlation coefficients using all the data. Then, utilizing the relations discovered in the correlation step, the predicted response of the mission instruments is created. \n\n## Goals\n1. Provide a quantifiable prediction of count rate in the mission instruments\n 1. Need to study which data are required to provide the best prediction\n 1. The first step works from data toward prediction; the second step is a first-principles assessment of background based on SAGE\n1. Provide an automatic method to determine a quantifiable deviation from the prediction in order to identify Events of Interest (EOI)\n1. Provide a path for future updates to the method, reassessment of its accuracy, and the addition of new data \n1. Provide mechanisms for the inclusion of the model and prediction rules into DIORAMA without DIORAMA having to run MCMC.\n\n\n\n### The Steps are\n1. Perform a correlation analysis between the two data sets, figuring out which variables make the best prediction\n1. Use the results of this correlation to predict the response of the Mission instrument from the SWx data\n1. Utilize the predicted response to determine quantifiable count-rate prediction limits\n1. Develop a technique to quantify excursions of the observed data from the predicted limits, flagging them as EOI with an interest score",
"_____no_output_____"
]
],
[
[
"!date",
"Thu Apr 28 10:38:40 MDT 2016\r\n"
],
[
"# Standard Library Python Modules\n\n# Common Python Modules\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport spacepy.plot as spp\nimport spacepy.toolbox as tb\nimport pandas as pd\nimport pymc # this is the MCMC tool\n\n# put plots into this document\n%matplotlib inline",
"This unreleased version of SpacePy is not supported by the SpacePy team.\n"
]
],
[
[
"Create simulated data that can be used in this proof of concept",
"_____no_output_____"
]
],
[
[
"# observed data\n\nfrom scipy.signal import savgol_filter\n# make a time dependent x\nt_x = np.random.uniform(0, 1, 500)\nt_x = savgol_filter(t_x, 95, 2)[95//2:-95//2]\nt_x -= t_x.min()\nt_x /= (t_x.max() - t_x.min())\nplt.plot(t_x)\nplt.xlabel('time')\nplt.ylabel('SWx data value')\n\na = 6\nb = 2\nsigma = 2.0\ny_obs = a*t_x + b + np.random.normal(0, sigma, len(t_x))\ndata = pd.DataFrame(np.array([t_x, y_obs]).T, columns=['x', 'y'])\n\nx = t_x\n\ndata.plot(x='x', y='y', kind='scatter', s=50)\nplt.xlabel('SWx inst')\nplt.ylabel('Mission Inst')",
"_____no_output_____"
],
[
"\n\n# define priors\na = pymc.Normal('slope', mu=0, tau=1.0/10**2)\nb = pymc.Normal('intercept', mu=0, tau=1.0/10**2)\ntau = pymc.Gamma(\"tau\", alpha=0.1, beta=0.1)\n\n# define likelihood\[email protected]\ndef mu(a=a, b=b, x=x):\n return a*x + b\n\n\n\ny = pymc.Normal('y', mu=mu, tau=tau, value=y_obs, observed=True)\n",
"_____no_output_____"
],
[
"\n# inference\nm = pymc.Model([a, b, tau, x, y])\nmc = pymc.MCMC(m)\n# run 6 chains\nfor i in range(6):\n mc.sample(iter=90000, burn=10000)\n\n",
" [-----------------100%-----------------] 90000 of 90000 complete in 15.8 sec"
],
[
"# plot up the data and overplot the possible fit lines\n\ndata.plot(x='x', y='y', kind='scatter', s=50)\n\nxx = np.linspace(data.x.min(), data.x.max(), 10)\nfor ii in range(0, len(mc.trace('slope', chain=None)[:]), \n len(mc.trace('slope', chain=None)[:])//400):\n yy = (xx*mc.trace('slope', chain=None)[:][ii] + \n mc.trace('intercept', chain=None)[:][ii])\n plt.plot(xx,yy, c='r')\n \n",
"_____no_output_____"
],
[
"\npymc.Matplot.plot(mc)\npymc.Matplot.summary_plot(mc)\n\n",
"Plotting slope\nPlotting tau\nPlotting intercept\n"
]
],
[
[
"Now based on the results above we can use this as a prediction of the Y-data from the X-data into the future",
"_____no_output_____"
]
],
[
[
"\n",
"_____no_output_____"
]
],
[
[
"Now that we have time-dependent data, use the results from the above correlation to predict what the Y-instrument would have seen",
"_____no_output_____"
]
],
[
[
"int_vals = mc.stats()['intercept']['95% HPD interval']\nslope_vals = mc.stats()['slope']['95% HPD interval']\nprint(int_vals, slope_vals)",
"/Users/blarsen/miniconda3/envs/python3/lib/python3.4/site-packages/numpy/core/fromnumeric.py:225: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future\n return reshape(newshape, order=order)\n"
],
[
"y_inst = np.tile(t_x, (2,1)).T * slope_vals + int_vals",
"_____no_output_____"
],
[
"plt.plot(y_inst, c='r')\nplt.xlabel('time')\nplt.ylabel('Mission inst value')\nplt.title('Major upper limit of spread')\n",
"_____no_output_____"
]
],
[
[
"Or do this smarter by sampling the posterior",
"_____no_output_____"
]
],
[
[
"pred = []\nfor v in t_x:\n pred.append(np.percentile(v * mc.trace('slope', \n chain=None)[:] + \n mc.trace('intercept', \n chain=None)[:], \n [2.5, 97.5]))\nplt.plot(pred, c='r')\nplt.xlabel('time')\nplt.ylabel('Predicted Mission inst value')",
"_____no_output_____"
]
],
[
[
"# Next steps\n1. Statistically validate and/or modify this proof of concept. \n1. Determine how to quantify the prediction capability\n1. Determine methods to identify EOI outside of the prediction (a rough flagging sketch is added below)\n# Issues\n* The overall spread in the data is not captured (enveloped) by this proof of concept",
"_____no_output_____"
],
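[
"The next cell is a rough illustrative sketch, not part of the original analysis, of one possible flagging rule for the EOI item above: mark an observation as an excursion when it falls outside the 95% posterior prediction band widened by the fitted observation noise. The 95% level and the 2-sigma widening are arbitrary assumptions, and the cell reuses `t_x`, `y_obs` and the `mc` traces defined earlier.",
"_____no_output_____"
],
[
"# hypothetical EOI flag: observations outside a widened 95% prediction band\npred = np.array([np.percentile(v * mc.trace('slope', chain=None)[:] + mc.trace('intercept', chain=None)[:], [2.5, 97.5]) for v in t_x])\nsigma_est = 1.0 / np.sqrt(mc.trace('tau', chain=None)[:].mean())\nlower, upper = pred[:, 0] - 2 * sigma_est, pred[:, 1] + 2 * sigma_est\neoi_flag = (y_obs < lower) | (y_obs > upper)\nprint('fraction of points flagged as EOI:', eoi_flag.mean())",
"_____no_output_____"
],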
[
"# Another possible method\nFor each value of X (or a small range of X), fit a distribution to the Y values (of what kind is an open question) and use that as a prediction. This is probably not as good, since the resulting prediction is unlikely to be continuous across X. A rough sketch of this binned approach is added below. ",
"_____no_output_____"
]
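,
[
"As an illustration only of the binned idea above: group the simulated data into ten bins of X and take empirical percentiles of Y within each bin as a crude prediction band. The bin count and percentile levels are arbitrary choices, and no claim is made that this matches the intended final approach.",
"_____no_output_____"
],
[
"# illustrative sketch: empirical 2.5/97.5 percentiles of Y within bins of X\nbin_edges = np.linspace(t_x.min(), t_x.max(), 11)\nbin_idx = np.clip(np.digitize(t_x, bin_edges) - 1, 0, 9)\nbands = [np.percentile(y_obs[bin_idx == b], [2.5, 97.5]) for b in np.unique(bin_idx)]\nprint(np.round(bands, 2))",
"_____no_output_____"
]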
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
4a05e3d1c85d5d8dc24c6e9f3f5b7e26cb1de915
| 38,332 |
ipynb
|
Jupyter Notebook
|
BoneSegmentation_v2.ipynb
|
TopalSolcan/BoneSegmentation_IMA
|
9fcca0e2715b5c15015dfa2873e3862cd36c6ccf
|
[
"MIT"
] | 1 |
2021-07-06T09:06:13.000Z
|
2021-07-06T09:06:13.000Z
|
BoneSegmentation_v2.ipynb
|
TopalSolcan/BoneSegmentation_IMA
|
9fcca0e2715b5c15015dfa2873e3862cd36c6ccf
|
[
"MIT"
] | null | null | null |
BoneSegmentation_v2.ipynb
|
TopalSolcan/BoneSegmentation_IMA
|
9fcca0e2715b5c15015dfa2873e3862cd36c6ccf
|
[
"MIT"
] | null | null | null | 166.66087 | 15,609 | 0.713529 |
[
[
[
"from bone_data import BoneData",
"_____no_output_____"
],
[
"bone_data = BoneData()",
"_____no_output_____"
],
[
"print(bone_data.images_data.shape, bone_data.labels_data.shape)",
"(500, 200, 200, 3) (500, 200, 200, 1)\n"
],
[
"import PIL\nfrom PIL import ImageOps, Image\nimport numpy as np\n\nimage_index = 283 - 1\nim_xray = Image.fromarray(bone_data.images_data[image_index])\nim_bone = Image.fromarray(bone_data.labels_data[image_index])\n\nim_contrast = ImageOps.autocontrast(im_bone, cutoff = 2, ignore = 2)\n \ndisplay(im_xray)\ndisplay(im_contrast)",
"_____no_output_____"
],
[
"from xnet import model as Model\nimport tensorflow as tf\nfrom tensorflow import keras\nwith tf.device(\"cpu\"):\n # Free up RAM in case the model definition cells were run multiple times\n keras.backend.clear_session()\n xnet_model = Model(input_shape=(200, 200, 3), classes=2)",
"_____no_output_____"
],
[
"xnet_model.summary()",
"_____no_output_____"
],
[
"xnet_model.compile(optimizer=\"rmsprop\", loss=\"sparse_categorical_crossentropy\")",
"_____no_output_____"
],
[
"epochs = 2\nxnet_model.fit(bone_data.data, epochs=epochs)",
"_____no_output_____"
],
[
"from keras_model import get_model\nimport tensorflow as tf\nfrom tensorflow import keras\nimg_size = (200, 200)\nnum_classes = 2\nwith tf.device(\"cpu\"):\n # Free up RAM in case the model definition cells were run multiple times\n keras.backend.clear_session()\n model = get_model(img_size, num_classes)",
"_____no_output_____"
],
[
"model.summary()",
"Model: \"model\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) [(None, 200, 200, 3) 0 \n__________________________________________________________________________________________________\nconv2d (Conv2D) (None, 100, 100, 32) 896 input_1[0][0] \n__________________________________________________________________________________________________\nbatch_normalization (BatchNorma (None, 100, 100, 32) 128 conv2d[0][0] \n__________________________________________________________________________________________________\nactivation (Activation) (None, 100, 100, 32) 0 batch_normalization[0][0] \n__________________________________________________________________________________________________\nactivation_1 (Activation) (None, 100, 100, 32) 0 activation[0][0] \n__________________________________________________________________________________________________\nseparable_conv2d (SeparableConv (None, 100, 100, 64) 2400 activation_1[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_1 (BatchNor (None, 100, 100, 64) 256 separable_conv2d[0][0] \n__________________________________________________________________________________________________\nactivation_2 (Activation) (None, 100, 100, 64) 0 batch_normalization_1[0][0] \n__________________________________________________________________________________________________\nseparable_conv2d_1 (SeparableCo (None, 100, 100, 64) 4736 activation_2[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_2 (BatchNor (None, 100, 100, 64) 256 separable_conv2d_1[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d (MaxPooling2D) (None, 50, 50, 64) 0 batch_normalization_2[0][0] \n__________________________________________________________________________________________________\nconv2d_1 (Conv2D) (None, 50, 50, 64) 2112 activation[0][0] \n__________________________________________________________________________________________________\nadd (Add) (None, 50, 50, 64) 0 max_pooling2d[0][0] \n conv2d_1[0][0] \n__________________________________________________________________________________________________\nactivation_3 (Activation) (None, 50, 50, 64) 0 add[0][0] \n__________________________________________________________________________________________________\nseparable_conv2d_2 (SeparableCo (None, 50, 50, 128) 8896 activation_3[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_3 (BatchNor (None, 50, 50, 128) 512 separable_conv2d_2[0][0] \n__________________________________________________________________________________________________\nactivation_4 (Activation) (None, 50, 50, 128) 0 batch_normalization_3[0][0] \n__________________________________________________________________________________________________\nseparable_conv2d_3 (SeparableCo (None, 50, 50, 128) 17664 activation_4[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_4 (BatchNor (None, 50, 50, 128) 512 separable_conv2d_3[0][0] 
\n__________________________________________________________________________________________________\nmax_pooling2d_1 (MaxPooling2D) (None, 25, 25, 128) 0 batch_normalization_4[0][0] \n__________________________________________________________________________________________________\nconv2d_2 (Conv2D) (None, 25, 25, 128) 8320 add[0][0] \n__________________________________________________________________________________________________\nadd_1 (Add) (None, 25, 25, 128) 0 max_pooling2d_1[0][0] \n conv2d_2[0][0] \n__________________________________________________________________________________________________\nactivation_5 (Activation) (None, 25, 25, 128) 0 add_1[0][0] \n__________________________________________________________________________________________________\nseparable_conv2d_4 (SeparableCo (None, 25, 25, 256) 34176 activation_5[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_5 (BatchNor (None, 25, 25, 256) 1024 separable_conv2d_4[0][0] \n__________________________________________________________________________________________________\nactivation_6 (Activation) (None, 25, 25, 256) 0 batch_normalization_5[0][0] \n__________________________________________________________________________________________________\nseparable_conv2d_5 (SeparableCo (None, 25, 25, 256) 68096 activation_6[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_6 (BatchNor (None, 25, 25, 256) 1024 separable_conv2d_5[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_2 (MaxPooling2D) (None, 13, 13, 256) 0 batch_normalization_6[0][0] \n__________________________________________________________________________________________________\nconv2d_3 (Conv2D) (None, 13, 13, 256) 33024 add_1[0][0] \n__________________________________________________________________________________________________\nadd_2 (Add) (None, 13, 13, 256) 0 max_pooling2d_2[0][0] \n conv2d_3[0][0] \n__________________________________________________________________________________________________\nactivation_7 (Activation) (None, 13, 13, 256) 0 add_2[0][0] \n__________________________________________________________________________________________________\nconv2d_transpose (Conv2DTranspo (None, 13, 13, 256) 590080 activation_7[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_7 (BatchNor (None, 13, 13, 256) 1024 conv2d_transpose[0][0] \n__________________________________________________________________________________________________\nactivation_8 (Activation) (None, 13, 13, 256) 0 batch_normalization_7[0][0] \n__________________________________________________________________________________________________\nconv2d_transpose_1 (Conv2DTrans (None, 13, 13, 256) 590080 activation_8[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_8 (BatchNor (None, 13, 13, 256) 1024 conv2d_transpose_1[0][0] \n__________________________________________________________________________________________________\nup_sampling2d_1 (UpSampling2D) (None, 26, 26, 256) 0 add_2[0][0] \n__________________________________________________________________________________________________\nup_sampling2d (UpSampling2D) (None, 26, 26, 256) 0 batch_normalization_8[0][0] 
\n__________________________________________________________________________________________________\nconv2d_4 (Conv2D) (None, 26, 26, 256) 65792 up_sampling2d_1[0][0] \n__________________________________________________________________________________________________\nadd_3 (Add) (None, 26, 26, 256) 0 up_sampling2d[0][0] \n conv2d_4[0][0] \n__________________________________________________________________________________________________\nactivation_9 (Activation) (None, 26, 26, 256) 0 add_3[0][0] \n__________________________________________________________________________________________________\nconv2d_transpose_2 (Conv2DTrans (None, 26, 26, 128) 295040 activation_9[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_9 (BatchNor (None, 26, 26, 128) 512 conv2d_transpose_2[0][0] \n__________________________________________________________________________________________________\nactivation_10 (Activation) (None, 26, 26, 128) 0 batch_normalization_9[0][0] \n__________________________________________________________________________________________________\nconv2d_transpose_3 (Conv2DTrans (None, 26, 26, 128) 147584 activation_10[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_10 (BatchNo (None, 26, 26, 128) 512 conv2d_transpose_3[0][0] \n__________________________________________________________________________________________________\nup_sampling2d_3 (UpSampling2D) (None, 52, 52, 256) 0 add_3[0][0] \n__________________________________________________________________________________________________\nup_sampling2d_2 (UpSampling2D) (None, 52, 52, 128) 0 batch_normalization_10[0][0] \n__________________________________________________________________________________________________\nconv2d_5 (Conv2D) (None, 52, 52, 128) 32896 up_sampling2d_3[0][0] \n__________________________________________________________________________________________________\nadd_4 (Add) (None, 52, 52, 128) 0 up_sampling2d_2[0][0] \n conv2d_5[0][0] \n__________________________________________________________________________________________________\nactivation_11 (Activation) (None, 52, 52, 128) 0 add_4[0][0] \n__________________________________________________________________________________________________\nconv2d_transpose_4 (Conv2DTrans (None, 52, 52, 64) 73792 activation_11[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_11 (BatchNo (None, 52, 52, 64) 256 conv2d_transpose_4[0][0] \n__________________________________________________________________________________________________\nactivation_12 (Activation) (None, 52, 52, 64) 0 batch_normalization_11[0][0] \n__________________________________________________________________________________________________\nconv2d_transpose_5 (Conv2DTrans (None, 52, 52, 64) 36928 activation_12[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_12 (BatchNo (None, 52, 52, 64) 256 conv2d_transpose_5[0][0] \n__________________________________________________________________________________________________\nup_sampling2d_5 (UpSampling2D) (None, 104, 104, 128 0 add_4[0][0] \n__________________________________________________________________________________________________\nup_sampling2d_4 (UpSampling2D) (None, 104, 104, 64) 0 batch_normalization_12[0][0] 
\n__________________________________________________________________________________________________\nconv2d_6 (Conv2D) (None, 104, 104, 64) 8256 up_sampling2d_5[0][0] \n__________________________________________________________________________________________________\nadd_5 (Add) (None, 104, 104, 64) 0 up_sampling2d_4[0][0] \n conv2d_6[0][0] \n__________________________________________________________________________________________________\nactivation_13 (Activation) (None, 104, 104, 64) 0 add_5[0][0] \n__________________________________________________________________________________________________\nconv2d_transpose_6 (Conv2DTrans (None, 104, 104, 32) 18464 activation_13[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_13 (BatchNo (None, 104, 104, 32) 128 conv2d_transpose_6[0][0] \n__________________________________________________________________________________________________\nactivation_14 (Activation) (None, 104, 104, 32) 0 batch_normalization_13[0][0] \n__________________________________________________________________________________________________\nconv2d_transpose_7 (Conv2DTrans (None, 104, 104, 32) 9248 activation_14[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_14 (BatchNo (None, 104, 104, 32) 128 conv2d_transpose_7[0][0] \n__________________________________________________________________________________________________\nup_sampling2d_7 (UpSampling2D) (None, 208, 208, 64) 0 add_5[0][0] \n__________________________________________________________________________________________________\nup_sampling2d_6 (UpSampling2D) (None, 208, 208, 32) 0 batch_normalization_14[0][0] \n__________________________________________________________________________________________________\nconv2d_7 (Conv2D) (None, 208, 208, 32) 2080 up_sampling2d_7[0][0] \n__________________________________________________________________________________________________\nadd_6 (Add) (None, 208, 208, 32) 0 up_sampling2d_6[0][0] \n conv2d_7[0][0] \n__________________________________________________________________________________________________\nconv2d_8 (Conv2D) (None, 208, 208, 2) 578 add_6[0][0] \n==================================================================================================\nTotal params: 2,058,690\nTrainable params: 2,054,914\nNon-trainable params: 3,776\n__________________________________________________________________________________________________\n"
],
[
"model.compile(optimizer=\"rmsprop\", loss=\"sparse_categorical_crossentropy\")\n# Train the model, doing validation at the end of each epoch.\nepochs = 2\nmodel.fit(bone_data.data, validation_data=bone_data.data, epochs=epochs)",
"Epoch 1/2\n"
],
[
"len(bone_data.data)",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a05e768f2a537abc7a72b9ee99a595e2bdde175
| 194,430 |
ipynb
|
Jupyter Notebook
|
classification/binary predict_severstal.ipynb
|
s4ke/Machine-Learning-scripts
|
65ad9e8780fdba7ba63b8fc0f35a0543844aa844
|
[
"Apache-2.0"
] | 2 |
2020-07-17T10:33:42.000Z
|
2021-11-08T09:31:27.000Z
|
classification/binary predict_severstal.ipynb
|
ashishpatel26/ML-DL-scripts
|
25f930630f6e546955ad13863d6e728c8c702d43
|
[
"Apache-2.0"
] | null | null | null |
classification/binary predict_severstal.ipynb
|
ashishpatel26/ML-DL-scripts
|
25f930630f6e546955ad13863d6e728c8c702d43
|
[
"Apache-2.0"
] | 1 |
2020-10-15T11:24:52.000Z
|
2020-10-15T11:24:52.000Z
| 161.620948 | 31,272 | 0.852651 |
[
[
[
"import numpy as np\nimport pandas as pd\n%matplotlib inline\nimport math \nfrom xgboost.sklearn import XGBClassifier\nfrom sklearn.cross_validation import cross_val_score\nfrom sklearn import cross_validation\nfrom sklearn.metrics import roc_auc_score\nfrom matplotlib import pyplot",
"/home/analyst/anaconda2/lib/python2.7/site-packages/sklearn/cross_validation.py:44: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.\n \"This module will be removed in 0.20.\", DeprecationWarning)\n"
],
[
"train = pd.read_csv(\"xtrain.csv\")\ntarget = pd.read_csv(\"ytrain.csv\")\ntest = pd.read_csv(\"xtest.csv\")",
"_____no_output_____"
],
[
"train.head()",
"_____no_output_____"
],
[
"train.describe()",
"_____no_output_____"
],
[
"target.head()",
"_____no_output_____"
],
[
"for column in train:\n print column, \": \", len(train[column].unique()) ",
"1 : 820530\n2 : 8\n3 : 14\n4 : 828052\n5 : 6\n6 : 5\n7 : 9\n8 : 796406\n9 : 11\n10 : 3\n11 : 792090\n12 : 822005\n13 : 801825\n14 : 822892\n15 : 14\n16 : 822113\n17 : 15\n18 : 14\n19 : 857587\n20 : 829600\n21 : 20\n22 : 14\n23 : 5\n24 : 17\n25 : 809401\n26 : 19\n27 : 23\n28 : 812096\n29 : 14\n30 : 7\n31 : 7\n32 : 11\n33 : 815736\n34 : 815944\n35 : 800967\n36 : 15\n37 : 831797\n38 : 816784\n39 : 812254\n40 : 786254\n41 : 832849\n42 : 844695\n43 : 820060\n44 : 5\n45 : 10\n46 : 820502\n47 : 3\n48 : 15\n49 : 803959\n50 : 7\n51 : 21\n52 : 843545\n53 : 814018\n54 : 812819\n55 : 800857\n56 : 812927\n57 : 835366\n58 : 786116\n"
],
[
"cat_features = []\nreal_features = []\n\nfor column in train:\n if len(train[column].unique()) > 21:\n real_features.append(column)\n else:\n cat_features.append(column)\n ",
"_____no_output_____"
],
[
"# plot histograms of the first 50k values for the categorical features\ntrain[cat_features].head(50000).plot.hist(bins = 100, figsize=(20, 20))\ntest[cat_features].head(50000).plot.hist(bins = 100, figsize=(20, 20))",
"_____no_output_____"
],
[
"# plot histograms of the first 50k values for the remaining (real-valued) features\ntrain[real_features].head(50000).plot.hist(bins = 100, figsize=(20, 20))\ntest[real_features].head(50000).plot.hist(bins = 100, figsize=(20, 20))\n\n# the histograms for the test and training sets match",
"_____no_output_____"
],
[
"import seaborn\nseaborn.heatmap(train[real_features].corr(), square=True)\n# the numeric features are not correlated with each other",
"_____no_output_____"
],
[
"# the data contain NaN values in every column\ntrain.isnull().sum()",
"_____no_output_____"
],
[
"# for the categorical features, replace NaN values with -1\n# for the real-valued features, replace them with the column mean\ntrain[cat_features] = train[cat_features].fillna(-1)\n ",
"_____no_output_____"
],
[
"for column in train[real_features]:\n mean_val = train[column].mean()\n train[column] = train[column].fillna(mean_val)",
"_____no_output_____"
],
[
"target.mean() # class 0 is more frequent than class 1",
"_____no_output_____"
],
[
"import xgboost as xgb\nfrom sklearn.cross_validation import train_test_split\n\nX_fit, X_eval, y_fit, y_eval= train_test_split(\n train, target, test_size=0.20, random_state=1\n)\n\nclf = xgb.XGBClassifier(missing=np.nan, max_depth=3, \n n_estimators=550, learning_rate=0.05, gamma =0.3, min_child_weight = 3,\n subsample=0.9, colsample_bytree=0.8, seed=2000,objective= 'binary:logistic')\n\nclf.fit(X_fit, y_fit, early_stopping_rounds=40, eval_metric=\"auc\", eval_set=[(X_eval, y_eval)])\n",
"_____no_output_____"
],
[
"auc_train = roc_auc_score(y_fit.x, clf.predict(X_fit))\nauc_val = roc_auc_score(y_eval.x, clf.predict(X_eval))\n\nprint 'auc_train: ', auc_train\nprint 'auc_val: ', auc_val\n\n# some overfitting is present",
"auc_train: 0.553770758732\nauc_val: 0.552502316614\n"
],
[
"eps = 1e-5\ndropped_columns = set()\nC = train.columns\n# identify constant features\nfor c in C:\n if train[c].var() < eps:\n print '.. %-30s: too low variance ... column ignored'%(c)\n dropped_columns.add(c)\n# none were found",
"_____no_output_____"
],
[
"for i, c1 in enumerate(C):\n f1 = train[c1].values\n for j, c2 in enumerate(C[i+1:]):\n f2 = train[c2].values\n if np.all(f1 == f2):\n dropped_columns.add(c2)\n print c2\n# there are no duplicate columns either",
"_____no_output_____"
],
[
"from sklearn.ensemble import RandomForestClassifier\nfrom sklearn.ensemble import ExtraTreesClassifier\nimport matplotlib.pyplot as plt\nforest = ExtraTreesClassifier(n_estimators=150,\n random_state=0)\n\nforest.fit(train.head(100000), target.head(100000).x)\nimportances = forest.feature_importances_\nstd = np.std([tree.feature_importances_ for tree in forest.estimators_],\n axis=0)\nindices = np.argsort(importances)[::-1]\n\n# Let's see which features are important according to the trees\nprint(\"Feature ranking:\")\n\nfor f in range(train.head(100000).shape[1]):\n print(\"%d. feature %d (%f)\" % (f + 1, indices[f], importances[indices[f]]))\n\n# Plot the results\nplt.figure()\nplt.title(\"Feature importances\")\nplt.bar(range(train.head(100000).shape[1]), importances[indices],\n color=\"r\", yerr=std[indices], align=\"center\")\nplt.xticks(range(train.head(100000).shape[1]), indices)\nplt.xlim([-1, train.head(100000).shape[1]])\nplt.show()",
"Feature ranking:\n1. feature 11 (0.018281)\n2. feature 33 (0.018268)\n3. feature 13 (0.018228)\n4. feature 40 (0.018156)\n5. feature 10 (0.018108)\n6. feature 52 (0.018095)\n7. feature 45 (0.018085)\n8. feature 41 (0.018078)\n9. feature 19 (0.018045)\n10. feature 51 (0.018021)\n11. feature 55 (0.018017)\n12. feature 32 (0.018017)\n13. feature 53 (0.018003)\n14. feature 54 (0.017998)\n15. feature 37 (0.017997)\n16. feature 34 (0.017996)\n17. feature 7 (0.017995)\n18. feature 27 (0.017994)\n19. feature 42 (0.017992)\n20. feature 18 (0.017987)\n21. feature 56 (0.017968)\n22. feature 38 (0.017966)\n23. feature 57 (0.017961)\n24. feature 3 (0.017956)\n25. feature 48 (0.017951)\n26. feature 24 (0.017942)\n27. feature 15 (0.017901)\n28. feature 0 (0.017894)\n29. feature 12 (0.017852)\n30. feature 39 (0.017843)\n31. feature 26 (0.017689)\n32. feature 21 (0.017592)\n33. feature 20 (0.017566)\n34. feature 35 (0.017560)\n35. feature 23 (0.017536)\n36. feature 28 (0.017486)\n37. feature 47 (0.017442)\n38. feature 31 (0.017387)\n39. feature 17 (0.017322)\n40. feature 50 (0.017236)\n41. feature 25 (0.017199)\n42. feature 2 (0.017032)\n43. feature 49 (0.016939)\n44. feature 36 (0.016865)\n45. feature 8 (0.016755)\n46. feature 14 (0.016686)\n47. feature 16 (0.016525)\n48. feature 22 (0.016472)\n49. feature 30 (0.016253)\n50. feature 44 (0.016195)\n51. feature 43 (0.016066)\n52. feature 1 (0.015586)\n53. feature 29 (0.015509)\n54. feature 5 (0.015104)\n55. feature 6 (0.015079)\n56. feature 4 (0.014163)\n57. feature 9 (0.013654)\n58. feature 46 (0.012509)\n"
],
[
"# No clear leaders or clear outsiders are visible among the features. The features are anonymized, \n# so retrain the model once more with computationally heavier hyperparameters\nfrom sklearn.cross_validation import train_test_split\nimport xgboost as xgb\n\nX_fit, X_eval, y_fit, y_eval= train_test_split(\n train, target, test_size=0.20, random_state=1\n)\n\nclf = xgb.XGBClassifier(missing=np.nan, max_depth=3, \n n_estimators=1200, learning_rate=0.05, gamma =0.3, min_child_weight = 3,\n subsample=0.9, colsample_bytree=0.8, seed=2000,objective= 'binary:logistic')\n\nclf.fit(X_fit, y_fit, early_stopping_rounds=40, eval_metric=\"auc\", eval_set=[(X_eval, y_eval)])",
"_____no_output_____"
],
[
"# build the submission file\ntest_target = clf.predict(test)\nsubmission = pd.DataFrame(test_target)\nsubmission.to_csv(\"test_target.csv\", index=False)",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a05ed216d7b8b6c9721425583e9c98e2b27a64f
| 274,279 |
ipynb
|
Jupyter Notebook
|
src/airbnb-boston.ipynb
|
pranj-sa791/Boston-Air-BNB
|
880cec3d1757416c26a4aba7aa9555441dd022bd
|
[
"MIT"
] | null | null | null |
src/airbnb-boston.ipynb
|
pranj-sa791/Boston-Air-BNB
|
880cec3d1757416c26a4aba7aa9555441dd022bd
|
[
"MIT"
] | null | null | null |
src/airbnb-boston.ipynb
|
pranj-sa791/Boston-Air-BNB
|
880cec3d1757416c26a4aba7aa9555441dd022bd
|
[
"MIT"
] | null | null | null | 137.690261 | 89,076 | 0.834471 |
[
[
[
"# Insight into AirBNB Boston Data",
"_____no_output_____"
],
[
"A quick glance at [AirBnB Boston data](https://www.kaggle.com/airbnb/boston) arouse curiosity to see if following questions can be convincingly answered using data analysis.\n\n- What are hot locations?\n- What are peak seasons?\n- Does number of properties in neighbourhood affect the occupancy?\n- What are the factors affecting overall occupancy and review ratings?\n",
"_____no_output_____"
],
[
"Load the necessary libraries and list the files in dataset",
"_____no_output_____"
]
],
[
[
"conda install -c conda-forge textblob",
"Collecting package metadata (current_repodata.json): done\nSolving environment: done\n\n# All requested packages already installed.\n\n\nNote: you may need to restart the kernel to use updated packages.\n"
],
[
"import os\nprint(os.listdir(\"input\"))",
"['reviews.csv', 'listings.csv', 'calendar.csv']\n"
],
[
"import numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\n\n\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.ensemble import AdaBoostRegressor\n\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import r2_score, mean_squared_error\nfrom sklearn.preprocessing import MinMaxScaler\n\nimport datetime\n\nfrom textblob import TextBlob\nimport nltk\nnltk.download('punkt')\n\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\n\n# Input data files are available in the \"input/\" directory.\n# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory\nimport os\nprint(os.listdir(\"input\"))",
"[nltk_data] Downloading package punkt to\n[nltk_data] /Users/pranjalkumarsadawarte/nltk_data...\n[nltk_data] Package punkt is already up-to-date!\n"
]
],
[
[
"## Data Understanding",
"_____no_output_____"
],
[
"### Listings\nTake a look at the data set describing all the different properties that are being offered at airBNB in Boston.",
"_____no_output_____"
]
],
[
[
"df_listings = pd.read_csv('input/listings.csv', parse_dates=['host_since'])\nprint(df_listings.shape)\ndf_listings.head(n=5)",
"(3585, 95)\n"
]
],
[
[
"Take a look at missing values in dataset.",
"_____no_output_____"
]
],
[
[
"(100*(df_listings.shape[0] - df_listings.count())/df_listings.shape[0]).plot(kind=\"bar\", title=\"Percentage Missing Data\", figsize=(20, 8));",
"_____no_output_____"
]
],
[
[
"**Following columns have high percentage of missing data**\n- neighbourhood_group_cleansed\n- square_feet\n- has_availability\n- license\n- jurisdiction_names",
"_____no_output_____"
],
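[
"# Added check (not part of the original notebook): exact missing-data percentages for the\n# columns called out above.\nsparse_cols = ['neighbourhood_group_cleansed', 'square_feet', 'has_availability', 'license', 'jurisdiction_names']\n(100 * df_listings[sparse_cols].isnull().mean()).round(1)",
"_____no_output_____"
],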
[
"### Calendar/Occupancy Data\n\nLoad and understand structure of calendar/occupancy data",
"_____no_output_____"
]
],
[
[
"df_calendar = pd.read_csv('input/calendar.csv')\nprint(df_calendar.shape)\ndf_calendar.head()",
"(1308890, 4)\n"
]
],
[
[
"- available is a boolean column with string values 'f' and 't' representing occupied and available status\n- price values are available only for dates when property is available and missing when property is occupied\n",
"_____no_output_____"
],
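[
"# Added sanity check (not part of the original notebook) of the structure described above:\n# price should be missing exactly when a listing is occupied (available == 'f').\noccupied = df_calendar[df_calendar['available'] == 'f']\navailable = df_calendar[df_calendar['available'] == 't']\nprint('share of missing prices while occupied :', occupied['price'].isnull().mean())\nprint('share of missing prices while available:', available['price'].isnull().mean())",
"_____no_output_____"
],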
[
"### Customer Reviews Data\n\nGlance into the customer reviews to understand the data",
"_____no_output_____"
]
],
[
[
"df_reviews = pd.read_csv('input/reviews.csv')\nprint(df_reviews.shape)\ndf_reviews.head()",
"(68275, 6)\n"
]
],
[
[
"## Data Preparation",
"_____no_output_____"
],
[
"### Define some utility functions",
"_____no_output_____"
]
],
[
[
"def amt_str_to_float(text):\n '''\n INPUT:\n text - formatted amount text to convert to float value\n OUTPUT:\n amount - A parsed float value\n \n Parses the amount values specified as \"$2,332\" into a float value 2332.00\n '''\n return float(text[1:].replace(',','')) if type(text) == str else text\n\ndef pct_str_to_float(text):\n '''\n INPUT:\n text - formatted percentage text to convert to float value\n OUTPUT:\n percentage - A parsed float value\n \n Parses the text percentage values specified as \"98%\" into a float value 98.00\n '''\n return float(text[:-1]) if type(text) == str else text\n\ndef get_sentiment_score(text):\n '''\n INPUT:\n text - input text for sentiment analysis\n OUTPUT:\n score - sentiment score asa float value\n \n Calculates the sentiment score using textblob library and returns the score\n '''\n blob = TextBlob(text)\n scores = []\n for sentence in blob.sentences:\n scores.append(sentence.sentiment.polarity)\n \n return np.mean(scores)",
"_____no_output_____"
]
],
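[
[
"# Added usage demo (not part of the original notebook) for the helper functions defined above.\nprint(amt_str_to_float('$2,332'))    # expected: 2332.0\nprint(pct_str_to_float('98%'))       # expected: 98.0\nprint(get_sentiment_score('Great host, lovely and spotless apartment!'))  # clearly positive polarity",
"_____no_output_____"
]
],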
[
[
"### Cleanup and Prepare Listings Data",
"_____no_output_____"
],
[
"- Of various features describing the property we use *description* to calculate the sentiment score and store it as *description_score*. This is done to see if property description has any effect on occupancy and review ratings.\n- Drop irrelevant columns \n- Apply other data wrangling and imputation techniques to fill the missing values.",
"_____no_output_____"
]
],
[
[
"def prepare_listings_data(df):\n dfc = df.copy()\n \n # 1. Defined calculated/derived columns\n # Extract sentiment score of the property description\n dfc['description_score'] = dfc.description.apply(get_sentiment_score)\n\n # Extract annnual proportional occupancy for the property\n dfc['occupancy'] = (1 - dfc.availability_365/365)\n \n # Extract age of property listing\n dfc['listing_age'] = (datetime.datetime(2017, 1, 1) - dfc.host_since).apply(lambda col: col.days)\n \n # 2. Drop irrelevant columns\n columns_having_same_values = ['country', 'country_code', 'state', 'experiences_offered']\n\n\n # Drop Columns that are \n # a. descriptive, \n # b. image urls\n # c. High NaN values\n # d. that have been mapped to other calculated/provided columns\n # - e.g. coarse 'neighbourhoood' is considered in analysis instead of geo locations, street etc.\n # - description is mapped to corresponding sentiment score and availability_365 is mapped to occupancy\n irrelevant_columns = ['listing_url', 'scrape_id', 'last_scraped', 'notes', 'transit', \n 'access', 'interaction', 'house_rules', 'thumbnail_url', 'medium_url', \n 'picture_url', 'xl_picture_url', 'host_url', 'host_name',\n 'host_thumbnail_url', 'host_picture_url', 'host_neighbourhood', 'host_listings_count', 'host_total_listings_count', 'host_verifications',\n 'calculated_host_listings_count', 'reviews_per_month', 'requires_license', 'license', 'jurisdiction_names',\n 'host_id', 'host_location', 'host_about', 'neighbourhood_group_cleansed', 'latitude', 'longitude',\n 'market', 'smart_location', 'street', 'square_feet', 'amenities',\n 'maximum_nights', 'calendar_updated', 'has_availability', 'availability_30', 'availability_60', 'availability_90',\n 'calendar_last_scraped', 'first_review', 'last_review', 'neighbourhood', 'neighborhood_overview',\n 'name' ,'summary' ,'space' ,'description', 'city', 'zipcode', 'availability_365', 'host_since' \n ]\n\n columns_to_drop = columns_having_same_values\n columns_to_drop.extend(irrelevant_columns)\n \n dfc.drop(columns_to_drop, axis=1, inplace=True)\n\n # 3. Convert binary columns into 0,1\n binary_columns = ['host_is_superhost', 'host_has_profile_pic', 'host_identity_verified', \n 'is_location_exact', 'instant_bookable', 'require_guest_profile_picture', 'require_guest_phone_verification'\n ]\n for col in binary_columns:\n dfc[col] = dfc[col].apply(lambda c: 1 if c == 't' else 0)\n\n # 4. Prepare numeric columns\n # Convert Amount columns to number from string\n dfc['price'] = dfc['price'].apply(amt_str_to_float)\n dfc['weekly_price'] = dfc['weekly_price'].apply(amt_str_to_float)\n dfc['monthly_price'] = dfc['monthly_price'].apply(amt_str_to_float)\n dfc['security_deposit'] = dfc['security_deposit'].apply(amt_str_to_float)\n dfc['cleaning_fee'] = dfc['cleaning_fee'].apply(amt_str_to_float)\n dfc['extra_people'] = dfc['extra_people'].apply(amt_str_to_float)\n\n # Convert String Percentage values to numeric\n dfc['host_response_rate'] = dfc['host_response_rate'].apply(pct_str_to_float)\n dfc['host_acceptance_rate'] = dfc['host_acceptance_rate'].apply(pct_str_to_float)\n\n\n # 5. 
Apply Imputation to fill missing values\n\n # security deposit and cleaning fee can be marked 0 if not specified\n dfc['security_deposit'].fillna(0, inplace=True)\n dfc['cleaning_fee'].fillna(0, inplace=True)\n\n # Weekly and Monthly prices can be filled with simply multiplication.\n dfc['weekly_price'] = np.where(np.isnan(dfc['weekly_price']), dfc['price']*7, dfc['weekly_price'])\n dfc['monthly_price'] = np.where(np.isnan(dfc['monthly_price']), dfc['price']*30, dfc['monthly_price'])\n\n # Missing Number of Bathrooms: We can assume 1 bathroom per bedroom (if bedrooms are specified)\n # Vice-versa Missing Number of Bedrooms: We can assume to be same as number of bathrooms (if specified)\n dfc['bathrooms'] = np.where(np.isnan(dfc['bathrooms']), dfc['bedrooms'], dfc['bathrooms'])\n dfc['bedrooms'] = np.where(np.isnan(dfc['bedrooms']), dfc['bathrooms'], dfc['bedrooms'])\n\n # Missing number of beds - Fill with average number of beds per bedroom * number_of_bedrooms\n average_beds_ped_bedroom = (dfc[dfc.bedrooms>0].beds/dfc[dfc.bedrooms>0].bedrooms).mean()\n dfc['beds'] = np.where(np.isnan(dfc['beds']), average_beds_ped_bedroom*dfc['bedrooms'], dfc['beds'])\n\n\n # Fill host_response_rate and host_acceptance_rate to corresponding mean values\n dfc['host_response_rate'].fillna(dfc['host_response_rate'].mean(), inplace=True)\n dfc['host_acceptance_rate'].fillna(dfc['host_acceptance_rate'].mean(), inplace=True)\n\n # Fill Categorical variables using mode()\n dfc['host_response_time'].fillna(dfc['host_response_time'].mode()[0], inplace=True)\n dfc['property_type'].fillna(dfc['property_type'].mode()[0], inplace=True)\n \n dfc.rename({'neighbourhood_cleansed':'neighbourhood'}, axis=1, inplace=True)\n return dfc\n \ndfc_listings = prepare_listings_data(df_listings)\n\ndfc_listings.head(n=5)",
"_____no_output_____"
]
],
[
[
"### Cleanup and Prepare Calendar Data",
"_____no_output_____"
]
],
[
[
"def prepare_calendar_data(df):\n monthNames = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']\n\n dfc = df.copy()\n\n # Convert available=(t,f) => status=(available/occupied)\n dfc['status'] = dfc['available'].apply(lambda col: 'available' if col == 't' else 'occupied')\n dfc.drop('available', axis=1, inplace=True)\n # Convert text price to float value\n dfc['price'] = dfc.price.apply(amt_str_to_float)\n\n # Extract Month from date column and store as Month Number (1 based index) and Month Name\n dfc['month'] = dfc.date.apply(lambda col: int(col[5:7]))\n dfc['month_name'] = dfc.date.apply(lambda col: monthNames[int(col[5:7])-1])\n return dfc\n\ndfc_calendar = prepare_calendar_data(df_calendar)\ndfc_calendar.head()",
"_____no_output_____"
]
],
[
[
"Pivot on Month and calculate occupancy percentage per property per month",
"_____no_output_____"
]
],
[
[
"df_occupancy = pd.pivot_table(dfc_calendar.groupby(['listing_id', 'status', 'month', 'month_name']).count().reset_index(), index=[\"listing_id\", \"month\", \"month_name\"], columns='status', values='date').reset_index().rename_axis(None, axis=1)\n\n# If property is fully occupied then 'available' remains NaN \n# conversely if property is fully available then 'occupied' remains NaN\n# it is apt to fill these values as 0\ndf_occupancy.fillna(0, inplace=True)\ndf_occupancy['occupancy']=100*df_occupancy['occupied']/(df_occupancy['occupied']+df_occupancy['available'])\n\ndf_occupancy.head()",
"_____no_output_____"
]
],
[
[
"### Cleanup and Prepare Review Data\n\n- Calculate the sentiment score from the review comments\n- Keep only property id and sentiment score and drop other columns",
"_____no_output_____"
]
],
[
[
"def prepare_reviews_data(df):\n dfc = df.copy()\n # Extract sentiment score from the review comments\n dfc.comments = dfc.comments.apply(lambda col: col if type(col) == str else '')\n dfc['review_score'] = dfc.comments.apply(get_sentiment_score)\n dfc.drop('reviewer_id', axis=1, inplace=True)\n dfc.drop('reviewer_name', axis=1, inplace=True)\n dfc.drop('id', axis=1, inplace=True)\n dfc.drop('date', axis=1, inplace=True)\n dfc.drop('comments', axis=1, inplace=True)\n return dfc\n\ndfc_reviews = prepare_reviews_data(df_reviews)\ndfc_reviews.head()",
"/Users/pranjalkumarsadawarte/opt/anaconda3/lib/python3.7/site-packages/numpy/core/fromnumeric.py:3257: RuntimeWarning: Mean of empty slice.\n out=out, **kwargs)\n"
]
],
[
[
"## Data Modeling\n\nAt this point we still have some missing values in ratings columns, primarily for the properties where no reviews were given.",
"_____no_output_____"
]
],
[
[
"# Drop the Listing-Id column from the regression analysis and drop other missing values\ndf_regression = pd.get_dummies(dfc_listings.drop('id', axis=1).dropna())",
"_____no_output_____"
]
],
[
[
"Fit a linear regression model for predicting occupancy rate and customer review rating",
"_____no_output_____"
]
],
[
[
"def coef_weights(model, columns):\n '''\n INPUT:\n coefficients - the coefficients of the linear model \n X_train - the training data, so the column names can be used\n OUTPUT:\n coefs_df - a dataframe holding the coefficient, estimate, and abs(estimate)\n \n Provides a dataframe that can be used to understand the most influential coefficients\n in a linear model by providing the coefficient estimates along with the name of the \n variable attached to the coefficient.\n '''\n coefs_df = pd.DataFrame()\n coefs_df['est_int'] = columns\n coefs_df['coefs'] = model.coef_\n coefs_df['abs_coefs'] = np.abs(model.coef_)\n coefs_df = coefs_df.sort_values('abs_coefs', ascending=False)\n return coefs_df\n\n\ndef get_lr_model(df, target):\n X = df.drop(target, axis=1)\n y = df[target]\n \n # Create training and test sets of data\n X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3, random_state=42)\n\n # Instantiate a LinearRegression model with normalized data\n model = LinearRegression(normalize=True)\n # Fit your model to the training data\n model = model.fit(X_train, y_train)\n # Predict the response for the training data and the test data\n y_pred = model.predict(X_test)\n # Obtain an rsquared value for both the training and test data\n train_score = r2_score(y_train, model.predict(X_train))\n test_score = r2_score(y_test, y_pred)\n features = coef_weights(model, X_train.columns)\n return model, train_score, test_score, features\n\n\n\noccupancy_results = get_lr_model(df_regression, 'occupancy')\n\nreviewscores_results = get_lr_model(df_regression, 'review_scores_rating')",
"_____no_output_____"
]
],
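[
[
"# Added check (not part of the original notebook): report the r-squared scores returned by\n# get_lr_model above, to gauge how well the two linear models fit.\nprint('occupancy      -> train r2: {:.3f}, test r2: {:.3f}'.format(occupancy_results[1], occupancy_results[2]))\nprint('review ratings -> train r2: {:.3f}, test r2: {:.3f}'.format(reviewscores_results[1], reviewscores_results[2]))",
"_____no_output_____"
]
],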
[
[
"## Evaluation\n",
"_____no_output_____"
],
[
"### Hot locations",
"_____no_output_____"
]
],
[
[
"plt_data = dfc_listings[['neighbourhood', 'occupancy']].groupby('neighbourhood').mean()\n\nax = plt_data.plot(kind=\"bar\", title=\"Occupancy Percentage by Neighbourhood\", label=\"Neighbourhood\", figsize=(18,8));\n\nfor p in ax.patches:\n ax.annotate('{:.2f}'.format(p.get_height()), (p.get_x() * 1.005, p.get_height() * 1.005))\n ",
"_____no_output_____"
]
],
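[
[
"# Added table (not part of the original notebook): the neighbourhood occupancy ranking from\n# the chart above, as numbers, highest first.\nplt_data.sort_values('occupancy', ascending=False).head(5)",
"_____no_output_____"
]
],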
[
[
"- Alston, Mission Hill and Leather Hill have high average occupancy\n- Mattapan and Roslindale have least occupancy rates",
"_____no_output_____"
],
[
"### Peak and lean seasons",
"_____no_output_____"
]
],
[
[
"df = df_occupancy[['month', 'occupancy']].groupby(['month']).mean()\n\nax = df.plot(kind=\"bar\", title=\"Occupancy Percentage by Month\", figsize=(18,8));\n\nfor p in ax.patches:\n ax.annotate('{:.1f}%'.format(p.get_height()), (p.get_x() * 1.005, p.get_height() * 1.005))",
"_____no_output_____"
]
],
[
[
"- September and October show high occupancy \n- The occupancy is around 50% for most of the remaining period.\n\n** These observations may not be conclusive as it looks at data only for one year**",
"_____no_output_____"
],
[
"## Does number of listings in neighbourhood affect the occupancy?\n\nLet us see if there are some neighbourhoods showing low occupancy rates coupled with high number of listings.\n",
"_____no_output_____"
]
],
[
[
"d1 = pd.DataFrame(dfc_listings.neighbourhood.value_counts()).rename(columns={'neighbourhood':'listing_count'})\nd2 = dfc_listings[['neighbourhood', 'occupancy']].groupby('neighbourhood').mean()\ndata1 = pd.merge(d1, d2, left_index=True, right_index=True)\n\nfig, ax1 = plt.subplots(figsize=(18,8))\n\ncolor = 'tab:blue'\nax1.set_xlabel('Neighbourhood')\nax1.set_ylabel('Number of Listing', color=color)\nax1.bar(data1.index, data1.listing_count, color=color)\nax1.tick_params(axis='y', labelcolor=color)\nax1.tick_params(axis='x',rotation=90)\nax2 = ax1.twinx() # instantiate a second axes that shares the same x-axis\n\ncolor = 'tab:green'\nax2.set_ylabel('Occupancy', color=color) # we already handled the x-label with ax1\nax2.scatter(data1.index, data1.occupancy, color=color)\nax2.tick_params(axis='y', labelcolor=color)\n\nfig.tight_layout() # otherwise the right y-label is slightly clipped\nplt.show()",
"_____no_output_____"
]
],
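[
[
"# Added check (not part of the original notebook): quantify the visual impression above with the\n# correlation between listing count and mean occupancy per neighbourhood (data1 built in the previous cell).\ndata1[['listing_count', 'occupancy']].corr()",
"_____no_output_____"
]
],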
[
[
"- Leather District has very less number of listings with high occupancy\n- Highest number of listings are in Jamaica Plain and South End. However, these neighbourhoods still have higher occupancy rates compared to many other neighbourhoods.\n\nIn summary, the plot is all over the place and there is clearly no sign of over crowding of listings in any neighbourhood.\n- ",
"_____no_output_____"
],
[
"### Features affecting the occupancy and customer ratings",
"_____no_output_____"
]
],
[
[
"# Top 10 features affecting occupancy\noccupancy_results[3].head(10)",
"_____no_output_____"
],
[
"# Top 10 features affecting occupancy\nreviewscores_results[3].head(10)",
"_____no_output_____"
]
],
[
[
"# Conclusion\n\n- There are few neighbourhoods that show higher occupancy compared to others. However, there are no standouts by great margin\n- September and October appear to be peak seasons. However, any conclusion on this needs analysis of data over years.\n- Poor correlation between number of properties in a neighbourhoood and occupancy rate. **No conclusive evidence to suggest over supply of properties in any neighbourhood**\n- Prominant features deciding occupancy\n - **Property type seems to be most important feature that renters look for with special preference to Villa and appartments**\n - **This is followed by bed type with general dislike for properties with air-beds and couches**\n- Prominant features deciding review ratings\n - **Host Response Time showed up as most important feature deciding the review ratings. Possibly due to the first impression effect**\n - **This was followed by property type and cancellation policy**",
"_____no_output_____"
]
],
[
[
"!jupyter nbconvert --to html airbnb-boston.ipynb \n!mv airbnb-boston.html ../airbnb-boston-report.html",
"[NbConvertApp] Converting notebook airbnb-boston.ipynb to html\n[NbConvertApp] Writing 459900 bytes to airbnb-boston.html\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a05f0ea2c25de6a612c6ae31d89ffdbf4fcf11a
| 302,196 |
ipynb
|
Jupyter Notebook
|
7_Data Analysis with Python/exploratory-data-analysis.ipynb
|
karunpabbi/IBM-DataScience
|
354674b426fe9a6e7697211af164cf8ae942bb78
|
[
"MIT"
] | null | null | null |
7_Data Analysis with Python/exploratory-data-analysis.ipynb
|
karunpabbi/IBM-DataScience
|
354674b426fe9a6e7697211af164cf8ae942bb78
|
[
"MIT"
] | null | null | null |
7_Data Analysis with Python/exploratory-data-analysis.ipynb
|
karunpabbi/IBM-DataScience
|
354674b426fe9a6e7697211af164cf8ae942bb78
|
[
"MIT"
] | null | null | null | 62.027094 | 23,988 | 0.683818 |
[
[
[
"<center>\n <img src=\"https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DA0101EN-SkillsNetwork/labs/Module%203/images/IDSNlogo.png\" width=\"300\" alt=\"cognitiveclass.ai logo\" />\n</center>\n\n# Data Analysis with Python\n\nEstimated time needed: **30** minutes\n\n## Objectives\n\nAfter completing this lab you will be able to:\n\n* Explore features or charecteristics to predict price of car\n",
"_____no_output_____"
],
[
"<h2>Table of Contents</h2>\n\n<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n<ol>\n <li><a href=\"https://#import_data\">Import Data from Module</a></li>\n <li><a href=\"https://#pattern_visualization\">Analyzing Individual Feature Patterns using Visualization</a></li>\n <li><a href=\"https://#discriptive_statistics\">Descriptive Statistical Analysis</a></li>\n <li><a href=\"https://#basic_grouping\">Basics of Grouping</a></li>\n <li><a href=\"https://#correlation_causation\">Correlation and Causation</a></li>\n <li><a href=\"https://#anova\">ANOVA</a></li>\n</ol>\n\n</div>\n\n<hr>\n",
"_____no_output_____"
],
[
"<h3>What are the main characteristics that have the most impact on the car price?</h3>\n",
"_____no_output_____"
],
[
"<h2 id=\"import_data\">1. Import Data from Module 2</h2>\n",
"_____no_output_____"
],
[
"<h4>Setup</h4>\n",
"_____no_output_____"
],
[
"Import libraries:\n",
"_____no_output_____"
]
],
[
[
"#install specific version of libraries used in lab\n#! mamba install pandas==1.3.3\n#! mamba install numpy=1.21.2\n#! mamba install scipy=1.7.1-y\n#! mamba install seaborn=0.9.0-y",
"_____no_output_____"
],
[
"import pandas as pd\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"Load the data and store it in dataframe `df`:\n",
"_____no_output_____"
],
[
"This dataset was hosted on IBM Cloud object. Click <a href=\"https://cocl.us/DA101EN_object_storage?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDA0101ENSkillsNetwork20235326-2021-01-01\">HERE</a> for free storage.\n",
"_____no_output_____"
]
],
[
[
"path='https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DA0101EN-SkillsNetwork/labs/Data%20files/automobileEDA.csv'\ndf = pd.read_csv(path)\ndf.head()",
"_____no_output_____"
]
],
[
[
"<h2 id=\"pattern_visualization\">2. Analyzing Individual Feature Patterns Using Visualization</h2>\n",
"_____no_output_____"
],
[
"To install Seaborn we use pip, the Python package manager.\n",
"_____no_output_____"
],
[
"Import visualization packages \"Matplotlib\" and \"Seaborn\". Don't forget about \"%matplotlib inline\" to plot in a Jupyter notebook.\n",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline ",
"_____no_output_____"
]
],
[
[
"<h4>How to choose the right visualization method?</h4>\n<p>When visualizing individual variables, it is important to first understand what type of variable you are dealing with. This will help us find the right visualization method for that variable.</p>\n",
"_____no_output_____"
]
],
[
[
"# list the data types for each column\nprint(df.dtypes)",
"symboling int64\nnormalized-losses int64\nmake object\naspiration object\nnum-of-doors object\nbody-style object\ndrive-wheels object\nengine-location object\nwheel-base float64\nlength float64\nwidth float64\nheight float64\ncurb-weight int64\nengine-type object\nnum-of-cylinders object\nengine-size int64\nfuel-system object\nbore float64\nstroke float64\ncompression-ratio float64\nhorsepower float64\npeak-rpm float64\ncity-mpg int64\nhighway-mpg int64\nprice float64\ncity-L/100km float64\nhorsepower-binned object\ndiesel int64\ngas int64\ndtype: object\n"
]
],
[
[
"<div class=\"alert alert-danger alertdanger\" style=\"margin-top: 20px\">\n<h3>Question #1:</h3>\n\n<b>What is the data type of the column \"peak-rpm\"? </b>\n\n</div>\n",
"_____no_output_____"
]
],
[
[
"# Write your code below and press Shift+Enter to execute \nprint(df['peak-rpm'].dtypes)",
"float64\n"
]
],
[
[
"<details><summary>Click here for the solution</summary>\n\n```python\nfloat64\n```\n\n</details>\n",
"_____no_output_____"
],
[
"For example, we can calculate the correlation between variables of type \"int64\" or \"float64\" using the method \"corr\":\n",
"_____no_output_____"
]
],
[
[
"df.corr()",
"_____no_output_____"
]
],
[
[
"The diagonal elements are always one; we will study correlation more precisely Pearson correlation in-depth at the end of the notebook.\n",
"_____no_output_____"
],
[
"<div class=\"alert alert-danger alertdanger\" style=\"margin-top: 20px\">\n<h3> Question #2: </h3>\n\n<p>Find the correlation between the following columns: bore, stroke, compression-ratio, and horsepower.</p>\n<p>Hint: if you would like to select those columns, use the following syntax: df[['bore','stroke','compression-ratio','horsepower']]</p>\n</div>\n",
"_____no_output_____"
]
],
[
[
"# Write your code below and press Shift+Enter to execute \ndf[['bore', 'stroke', 'compression-ratio', 'horsepower']].corr()",
"_____no_output_____"
]
],
[
[
"<details><summary>Click here for the solution</summary>\n\n```python\ndf[['bore', 'stroke', 'compression-ratio', 'horsepower']].corr()\n```\n\n</details>\n",
"_____no_output_____"
],
[
"<h2>Continuous Numerical Variables:</h2> \n\n<p>Continuous numerical variables are variables that may contain any value within some range. They can be of type \"int64\" or \"float64\". A great way to visualize these variables is by using scatterplots with fitted lines.</p>\n\n<p>In order to start understanding the (linear) relationship between an individual variable and the price, we can use \"regplot\" which plots the scatterplot plus the fitted regression line for the data.</p>\n",
"_____no_output_____"
],
[
"Let's see several examples of different linear relationships:\n",
"_____no_output_____"
],
[
"<h3>Positive Linear Relationship</h4>\n",
"_____no_output_____"
],
[
"Let's find the scatterplot of \"engine-size\" and \"price\".\n",
"_____no_output_____"
]
],
[
[
"# Engine size as potential predictor variable of price\nsns.regplot(x=\"engine-size\", y=\"price\", data=df)\nplt.ylim(0,)\nplt.show()",
"_____no_output_____"
]
],
[
[
"<p>As the engine-size goes up, the price goes up: this indicates a positive direct correlation between these two variables. Engine size seems like a pretty good predictor of price since the regression line is almost a perfect diagonal line.</p>\n",
"_____no_output_____"
],
[
"We can examine the correlation between 'engine-size' and 'price' and see that it's approximately 0.87.\n",
"_____no_output_____"
]
],
[
[
"df[[\"engine-size\", \"price\"]].corr()",
"_____no_output_____"
]
],
[
[
"Highway mpg is a potential predictor variable of price. Let's find the scatterplot of \"highway-mpg\" and \"price\".\n",
"_____no_output_____"
]
],
[
[
"sns.regplot(x=\"highway-mpg\", y=\"price\", data=df)\nplt.ylim(0,)",
"_____no_output_____"
]
],
[
[
"<p>As highway-mpg goes up, the price goes down: this indicates an inverse/negative relationship between these two variables. Highway mpg could potentially be a predictor of price.</p>\n",
"_____no_output_____"
],
[
"We can examine the correlation between 'highway-mpg' and 'price' and see it's approximately -0.704.\n",
"_____no_output_____"
]
],
[
[
"df[['highway-mpg', 'price']].corr()",
"_____no_output_____"
]
],
[
[
"<h3>Weak Linear Relationship</h3>\n",
"_____no_output_____"
],
[
"Let's see if \"peak-rpm\" is a predictor variable of \"price\".\n",
"_____no_output_____"
]
],
[
[
"sns.regplot(x=\"peak-rpm\", y=\"price\", data=df)",
"_____no_output_____"
]
],
[
[
"<p>Peak rpm does not seem like a good predictor of the price at all since the regression line is close to horizontal. Also, the data points are very scattered and far from the fitted line, showing lots of variability. Therefore, it's not a reliable variable.</p>\n",
"_____no_output_____"
],
[
"We can examine the correlation between 'peak-rpm' and 'price' and see it's approximately -0.101616.\n",
"_____no_output_____"
]
],
[
[
"df[['peak-rpm','price']].corr()",
"_____no_output_____"
]
],
[
[
" <div class=\"alert alert-danger alertdanger\" style=\"margin-top: 20px\">\n<h1> Question 3 a): </h1>\n\n<p>Find the correlation between x=\"stroke\" and y=\"price\".</p>\n<p>Hint: if you would like to select those columns, use the following syntax: df[[\"stroke\",\"price\"]]. </p>\n</div>\n",
"_____no_output_____"
]
],
[
[
"# Write your code below and press Shift+Enter to execute\n\ndf[['stroke','price']].corr()",
"_____no_output_____"
]
],
[
[
"<details><summary>Click here for the solution</summary>\n\n```python\n\n#The correlation is 0.0823, the non-diagonal elements of the table.\n\ndf[[\"stroke\",\"price\"]].corr()\n\n```\n\n</details>\n",
"_____no_output_____"
],
[
"<div class=\"alert alert-danger alertdanger\" style=\"margin-top: 20px\">\n<h1>Question 3 b):</h1>\n\n<p>Given the correlation results between \"price\" and \"stroke\", do you expect a linear relationship?</p> \n<p>Verify your results using the function \"regplot()\".</p>\n</div>\n",
"_____no_output_____"
]
],
[
[
"# Write your code below and press Shift+Enter to execute \nsns.regplot(x='stroke',y='price',data=df)\nplt.ylim(0,)\n",
"_____no_output_____"
]
],
[
[
"<details><summary>Click here for the solution</summary>\n\n```python\n\n#There is a weak correlation between the variable 'stroke' and 'price.' as such regression will not work well. We can see this using \"regplot\" to demonstrate this.\n\n#Code: \nsns.regplot(x=\"stroke\", y=\"price\", data=df)\n\n```\n\n</details>\n",
"_____no_output_____"
],
[
"<h3>Categorical Variables</h3>\n\n<p>These are variables that describe a 'characteristic' of a data unit, and are selected from a small group of categories. The categorical variables can have the type \"object\" or \"int64\". A good way to visualize categorical variables is by using boxplots.</p>\n",
"_____no_output_____"
],
[
"Let's look at the relationship between \"body-style\" and \"price\".\n",
"_____no_output_____"
]
],
[
[
"sns.boxplot(x=\"body-style\", y=\"price\", data=df)",
"_____no_output_____"
]
],
[
[
"<p>We see that the distributions of price between the different body-style categories have a significant overlap, so body-style would not be a good predictor of price. Let's examine engine \"engine-location\" and \"price\":</p>\n",
"_____no_output_____"
]
],
[
[
"sns.boxplot(x=\"engine-location\", y=\"price\", data=df)",
"_____no_output_____"
]
],
[
[
"<p>Here we see that the distribution of price between these two engine-location categories, front and rear, are distinct enough to take engine-location as a potential good predictor of price.</p>\n",
"_____no_output_____"
],
[
"Let's examine \"drive-wheels\" and \"price\".\n",
"_____no_output_____"
]
],
[
[
"# drive-wheels\nsns.boxplot(x=\"drive-wheels\", y=\"price\", data=df)",
"_____no_output_____"
]
],
[
[
"<p>Here we see that the distribution of price between the different drive-wheels categories differs. As such, drive-wheels could potentially be a predictor of price.</p>\n",
"_____no_output_____"
],
[
"<h2 id=\"discriptive_statistics\">3. Descriptive Statistical Analysis</h2>\n",
"_____no_output_____"
],
[
"<p>Let's first take a look at the variables by utilizing a description method.</p>\n\n<p>The <b>describe</b> function automatically computes basic statistics for all continuous variables. Any NaN values are automatically skipped in these statistics.</p>\n\nThis will show:\n\n<ul>\n <li>the count of that variable</li>\n <li>the mean</li>\n <li>the standard deviation (std)</li> \n <li>the minimum value</li>\n <li>the IQR (Interquartile Range: 25%, 50% and 75%)</li>\n <li>the maximum value</li>\n<ul>\n",
"_____no_output_____"
],
[
"We can apply the method \"describe\" as follows:\n",
"_____no_output_____"
]
],
[
[
"df.describe()",
"_____no_output_____"
]
],
[
[
"The default setting of \"describe\" skips variables of type object. We can apply the method \"describe\" on the variables of type 'object' as follows:\n",
"_____no_output_____"
]
],
[
[
"df.describe(include=['object'])",
"_____no_output_____"
]
],
[
[
"<h3>Value Counts</h3>\n",
"_____no_output_____"
],
[
"<p>Value counts is a good way of understanding how many units of each characteristic/variable we have. We can apply the \"value_counts\" method on the column \"drive-wheels\". Don’t forget the method \"value_counts\" only works on pandas series, not pandas dataframes. As a result, we only include one bracket <code>df['drive-wheels']</code>, not two brackets <code>df[['drive-wheels']]</code>.</p>\n",
"_____no_output_____"
]
],
[
[
"df['drive-wheels'].value_counts()",
"_____no_output_____"
]
],
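[
[
"# Added illustration (not part of the original lab): single brackets return a pandas Series,\n# double brackets return a DataFrame -- which is why value_counts() is called on df['drive-wheels'] above.\nprint(type(df['drive-wheels']))\nprint(type(df[['drive-wheels']]))",
"_____no_output_____"
]
],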
[
[
"We can convert the series to a dataframe as follows:\n",
"_____no_output_____"
]
],
[
[
"df['drive-wheels'].value_counts().to_frame()",
"_____no_output_____"
]
],
[
[
"Let's repeat the above steps but save the results to the dataframe \"drive_wheels_counts\" and rename the column 'drive-wheels' to 'value_counts'.\n",
"_____no_output_____"
]
],
[
[
"drive_wheels_counts = df['drive-wheels'].value_counts().to_frame()\ndrive_wheels_counts.rename(columns={'drive-wheels': 'value_counts'}, inplace=True)\ndrive_wheels_counts",
"_____no_output_____"
]
],
[
[
"Now let's rename the index to 'drive-wheels':\n",
"_____no_output_____"
]
],
[
[
"drive_wheels_counts.index.name = 'drive-wheels'\ndrive_wheels_counts",
"_____no_output_____"
]
],
[
[
"We can repeat the above process for the variable 'engine-location'.\n",
"_____no_output_____"
]
],
[
[
"# engine-location as variable\nengine_loc_counts = df['engine-location'].value_counts().to_frame()\nengine_loc_counts.rename(columns={'engine-location': 'value_counts'}, inplace=True)\nengine_loc_counts.index.name = 'engine-location'\nengine_loc_counts.head(10)",
"_____no_output_____"
]
],
[
[
"<p>After examining the value counts of the engine location, we see that engine location would not be a good predictor variable for the price. This is because we only have three cars with a rear engine and 198 with an engine in the front, so this result is skewed. Thus, we are not able to draw any conclusions about the engine location.</p>\n",
"_____no_output_____"
],
[
"<h2 id=\"basic_grouping\">4. Basics of Grouping</h2>\n",
"_____no_output_____"
],
[
"<p>The \"groupby\" method groups data by different categories. The data is grouped based on one or several variables, and analysis is performed on the individual groups.</p>\n\n<p>For example, let's group by the variable \"drive-wheels\". We see that there are 3 different categories of drive wheels.</p>\n",
"_____no_output_____"
]
],
[
[
"df['drive-wheels'].unique()",
"_____no_output_____"
]
],
[
[
"<p>If we want to know, on average, which type of drive wheel is most valuable, we can group \"drive-wheels\" and then average them.</p>\n\n<p>We can select the columns 'drive-wheels', 'body-style' and 'price', then assign it to the variable \"df_group_one\".</p>\n",
"_____no_output_____"
]
],
[
[
"df_group_one = df[['drive-wheels','body-style','price']]",
"_____no_output_____"
]
],
[
[
"We can then calculate the average price for each of the different categories of data.\n",
"_____no_output_____"
]
],
[
[
"# grouping results\ndf_group_one = df_group_one.groupby(['drive-wheels'],as_index=False).mean()\ndf_group_one",
"_____no_output_____"
]
],
[
[
"<p>From our data, it seems rear-wheel drive vehicles are, on average, the most expensive, while 4-wheel and front-wheel are approximately the same in price.</p>\n\n<p>You can also group by multiple variables. For example, let's group by both 'drive-wheels' and 'body-style'. This groups the dataframe by the unique combination of 'drive-wheels' and 'body-style'. We can store the results in the variable 'grouped_test1'.</p>\n",
"_____no_output_____"
]
],
[
[
"# grouping results\ndf_gptest = df[['drive-wheels','body-style','price']]\ngrouped_test1 = df_gptest.groupby(['drive-wheels','body-style'],as_index=False).mean()\ngrouped_test1",
"_____no_output_____"
]
],
[
[
"<p>This grouped data is much easier to visualize when it is made into a pivot table. A pivot table is like an Excel spreadsheet, with one variable along the column and another along the row. We can convert the dataframe to a pivot table using the method \"pivot\" to create a pivot table from the groups.</p>\n\n<p>In this case, we will leave the drive-wheels variable as the rows of the table, and pivot body-style to become the columns of the table:</p>\n",
"_____no_output_____"
]
],
[
[
"grouped_pivot = grouped_test1.pivot(index='drive-wheels',columns='body-style')\ngrouped_pivot",
"_____no_output_____"
]
],
[
[
"<p>Often, we won't have data for some of the pivot cells. We can fill these missing cells with the value 0, but any other value could potentially be used as well. It should be mentioned that missing data is quite a complex subject and is an entire course on its own.</p>\n",
"_____no_output_____"
]
],
[
[
"grouped_pivot = grouped_pivot.fillna(0) #fill missing values with 0\ngrouped_pivot",
"_____no_output_____"
]
],
[
[
"<div class=\"alert alert-danger alertdanger\" style=\"margin-top: 20px\">\n<h1>Question 4:</h1>\n\n<p>Use the \"groupby\" function to find the average \"price\" of each car based on \"body-style\".</p>\n</div>\n",
"_____no_output_____"
]
],
[
[
"# Write your code below and press Shift+Enter to execute \ngroup_data = df_gptest.groupby(['body-style'],as_index=False).mean()\ngroup_data",
"_____no_output_____"
]
],
[
[
"<details><summary>Click here for the solution</summary>\n\n```python\n# grouping results\ndf_gptest2 = df[['body-style','price']]\ngrouped_test_bodystyle = df_gptest2.groupby(['body-style'],as_index= False).mean()\ngrouped_test_bodystyle\n\n```\n\n</details>\n",
"_____no_output_____"
],
[
"If you did not import \"pyplot\", let's do it again.\n",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n%matplotlib inline ",
"_____no_output_____"
]
],
[
[
"<h4>Variables: Drive Wheels and Body Style vs. Price</h4>\n",
"_____no_output_____"
],
[
"Let's use a heat map to visualize the relationship between Body Style vs Price.\n",
"_____no_output_____"
]
],
[
[
"#use the grouped results\nplt.pcolor(grouped_pivot, cmap='RdBu')\nplt.colorbar()\nplt.show()",
"_____no_output_____"
]
],
[
[
"<p>The heatmap plots the target variable (price) proportional to colour with respect to the variables 'drive-wheel' and 'body-style' on the vertical and horizontal axis, respectively. This allows us to visualize how the price is related to 'drive-wheel' and 'body-style'.</p>\n\n<p>The default labels convey no useful information to us. Let's change that:</p>\n",
"_____no_output_____"
]
],
[
[
"fig, ax = plt.subplots()\nim = ax.pcolor(grouped_pivot, cmap='RdBu')\n\n#label names\nrow_labels = grouped_pivot.columns.levels[1]\ncol_labels = grouped_pivot.index\n\n#move ticks and labels to the center\nax.set_xticks(np.arange(grouped_pivot.shape[1]) + 0.5, minor=False)\nax.set_yticks(np.arange(grouped_pivot.shape[0]) + 0.5, minor=False)\n\n#insert labels\nax.set_xticklabels(row_labels, minor=False)\nax.set_yticklabels(col_labels, minor=False)\n\n#rotate label if too long\nplt.xticks(rotation=90)\n\nfig.colorbar(im)\nplt.show()",
"_____no_output_____"
]
],
[
[
"<p>Visualization is very important in data science, and Python visualization packages provide great freedom. We will go more in-depth in a separate Python visualizations course.</p>\n\n<p>The main question we want to answer in this module is, \"What are the main characteristics which have the most impact on the car price?\".</p>\n\n<p>To get a better measure of the important characteristics, we look at the correlation of these variables with the car price. In other words: how is the car price dependent on this variable?</p>\n",
"_____no_output_____"
],
[
"<h2 id=\"correlation_causation\">5. Correlation and Causation</h2>\n",
"_____no_output_____"
],
[
"<p><b>Correlation</b>: a measure of the extent of interdependence between variables.</p>\n\n<p><b>Causation</b>: the relationship between cause and effect between two variables.</p>\n\n<p>It is important to know the difference between these two. Correlation does not imply causation. Determining correlation is much simpler the determining causation as causation may require independent experimentation.</p>\n",
"_____no_output_____"
],
[
"<p><b>Pearson Correlation</b></p>\n<p>The Pearson Correlation measures the linear dependence between two variables X and Y.</p>\n<p>The resulting coefficient is a value between -1 and 1 inclusive, where:</p>\n<ul>\n <li><b>1</b>: Perfect positive linear correlation.</li>\n <li><b>0</b>: No linear correlation, the two variables most likely do not affect each other.</li>\n <li><b>-1</b>: Perfect negative linear correlation.</li>\n</ul>\n",
"_____no_output_____"
],
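[
"# Added illustration (not part of the original lab): Pearson correlation for a perfectly increasing\n# pair, a perfectly decreasing pair, and two unrelated random samples.\ndemo = np.arange(10)\nprint(np.corrcoef(demo, 2 * demo + 3)[0, 1])   # 1.0: perfect positive linear correlation\nprint(np.corrcoef(demo, -demo)[0, 1])          # -1.0: perfect negative linear correlation\nprint(np.corrcoef(np.random.rand(1000), np.random.rand(1000))[0, 1])  # close to 0: no linear relation",
"_____no_output_____"
],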
[
"<p>Pearson Correlation is the default method of the function \"corr\". Like before, we can calculate the Pearson Correlation of the of the 'int64' or 'float64' variables.</p>\n",
"_____no_output_____"
]
],
[
[
"df.corr()",
"_____no_output_____"
]
],
[
[
"Sometimes we would like to know the significant of the correlation estimate.\n",
"_____no_output_____"
],
[
"<b>P-value</b>\n\n<p>What is this P-value? The P-value is the probability value that the correlation between these two variables is statistically significant. Normally, we choose a significance level of 0.05, which means that we are 95% confident that the correlation between the variables is significant.</p>\n\nBy convention, when the\n\n<ul>\n <li>p-value is $<$ 0.001: we say there is strong evidence that the correlation is significant.</li>\n <li>the p-value is $<$ 0.05: there is moderate evidence that the correlation is significant.</li>\n <li>the p-value is $<$ 0.1: there is weak evidence that the correlation is significant.</li>\n <li>the p-value is $>$ 0.1: there is no evidence that the correlation is significant.</li>\n</ul>\n",
"_____no_output_____"
],
[
"We can obtain this information using \"stats\" module in the \"scipy\" library.\n",
"_____no_output_____"
]
],
[
[
"from scipy import stats",
"_____no_output_____"
]
],
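[
[
"# Added helper (not part of the original lab): encode the p-value convention described above so\n# that the test results below can be read off programmatically.\ndef evidence_of_significance(p_value):\n    if p_value < 0.001:\n        return 'strong evidence that the correlation is significant'\n    elif p_value < 0.05:\n        return 'moderate evidence that the correlation is significant'\n    elif p_value < 0.1:\n        return 'weak evidence that the correlation is significant'\n    return 'no evidence that the correlation is significant'\n\n# example: horsepower vs. price\npearson_coef, p_value = stats.pearsonr(df['horsepower'], df['price'])\nprint(evidence_of_significance(p_value))",
"_____no_output_____"
]
],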
[
[
"<h3>Wheel-Base vs. Price</h3>\n",
"_____no_output_____"
],
[
"Let's calculate the Pearson Correlation Coefficient and P-value of 'wheel-base' and 'price'.\n",
"_____no_output_____"
]
],
[
[
"pearson_coef, p_value = stats.pearsonr(df['wheel-base'], df['price'])\nprint(\"The Pearson Correlation Coefficient is\", pearson_coef, \" with a P-value of P =\", p_value) ",
"The Pearson Correlation Coefficient is 0.584641822265508 with a P-value of P = 8.076488270733218e-20\n"
]
],
[
[
"<h4>Conclusion:</h4>\n<p>Since the p-value is $<$ 0.001, the correlation between wheel-base and price is statistically significant, although the linear relationship isn't extremely strong (~0.585).</p>\n",
"_____no_output_____"
],
[
"<h3>Horsepower vs. Price</h3>\n",
"_____no_output_____"
],
[
"Let's calculate the Pearson Correlation Coefficient and P-value of 'horsepower' and 'price'.\n",
"_____no_output_____"
]
],
[
[
"pearson_coef, p_value = stats.pearsonr(df['horsepower'], df['price'])\nprint(\"The Pearson Correlation Coefficient is\", pearson_coef, \" with a P-value of P = \", p_value) ",
"The Pearson Correlation Coefficient is 0.809574567003656 with a P-value of P = 6.369057428259557e-48\n"
]
],
[
[
"<h4>Conclusion:</h4>\n\n<p>Since the p-value is $<$ 0.001, the correlation between horsepower and price is statistically significant, and the linear relationship is quite strong (~0.809, close to 1).</p>\n",
"_____no_output_____"
],
[
"<h3>Length vs. Price</h3>\n\nLet's calculate the Pearson Correlation Coefficient and P-value of 'length' and 'price'.\n",
"_____no_output_____"
]
],
[
[
"pearson_coef, p_value = stats.pearsonr(df['length'], df['price'])\nprint(\"The Pearson Correlation Coefficient is\", pearson_coef, \" with a P-value of P = \", p_value) ",
"The Pearson Correlation Coefficient is 0.690628380448364 with a P-value of P = 8.016477466158986e-30\n"
]
],
[
[
"<h4>Conclusion:</h4>\n<p>Since the p-value is $<$ 0.001, the correlation between length and price is statistically significant, and the linear relationship is moderately strong (~0.691).</p>\n",
"_____no_output_____"
],
[
"<h3>Width vs. Price</h3>\n",
"_____no_output_____"
],
[
"Let's calculate the Pearson Correlation Coefficient and P-value of 'width' and 'price':\n",
"_____no_output_____"
]
],
[
[
"pearson_coef, p_value = stats.pearsonr(df['width'], df['price'])\nprint(\"The Pearson Correlation Coefficient is\", pearson_coef, \" with a P-value of P =\", p_value ) ",
"The Pearson Correlation Coefficient is 0.7512653440522674 with a P-value of P = 9.200335510481516e-38\n"
]
],
[
[
"#### Conclusion:\n\nSince the p-value is < 0.001, the correlation between width and price is statistically significant, and the linear relationship is quite strong (\\~0.751).\n",
"_____no_output_____"
],
[
"### Curb-Weight vs. Price\n",
"_____no_output_____"
],
[
"Let's calculate the Pearson Correlation Coefficient and P-value of 'curb-weight' and 'price':\n",
"_____no_output_____"
]
],
[
[
"pearson_coef, p_value = stats.pearsonr(df['curb-weight'], df['price'])\nprint( \"The Pearson Correlation Coefficient is\", pearson_coef, \" with a P-value of P = \", p_value) ",
"The Pearson Correlation Coefficient is 0.8344145257702846 with a P-value of P = 2.1895772388936914e-53\n"
]
],
[
[
"<h4>Conclusion:</h4>\n<p>Since the p-value is $<$ 0.001, the correlation between curb-weight and price is statistically significant, and the linear relationship is quite strong (~0.834).</p>\n",
"_____no_output_____"
],
[
"<h3>Engine-Size vs. Price</h3>\n\nLet's calculate the Pearson Correlation Coefficient and P-value of 'engine-size' and 'price':\n",
"_____no_output_____"
]
],
[
[
"pearson_coef, p_value = stats.pearsonr(df['engine-size'], df['price'])\nprint(\"The Pearson Correlation Coefficient is\", pearson_coef, \" with a P-value of P =\", p_value) ",
"The Pearson Correlation Coefficient is 0.8723351674455185 with a P-value of P = 9.265491622198389e-64\n"
]
],
[
[
"<h4>Conclusion:</h4>\n\n<p>Since the p-value is $<$ 0.001, the correlation between engine-size and price is statistically significant, and the linear relationship is very strong (~0.872).</p>\n",
"_____no_output_____"
],
[
"<h3>Bore vs. Price</h3>\n",
"_____no_output_____"
],
[
"Let's calculate the Pearson Correlation Coefficient and P-value of 'bore' and 'price':\n",
"_____no_output_____"
]
],
[
[
"pearson_coef, p_value = stats.pearsonr(df['bore'], df['price'])\nprint(\"The Pearson Correlation Coefficient is\", pearson_coef, \" with a P-value of P = \", p_value ) ",
"The Pearson Correlation Coefficient is 0.5431553832626602 with a P-value of P = 8.049189483935489e-17\n"
]
],
[
[
"<h4>Conclusion:</h4>\n<p>Since the p-value is $<$ 0.001, the correlation between bore and price is statistically significant, but the linear relationship is only moderate (~0.521).</p>\n",
"_____no_output_____"
],
[
"We can relate the process for each 'city-mpg' and 'highway-mpg':\n",
"_____no_output_____"
],
[
"<h3>City-mpg vs. Price</h3>\n",
"_____no_output_____"
]
],
[
[
"pearson_coef, p_value = stats.pearsonr(df['city-mpg'], df['price'])\nprint(\"The Pearson Correlation Coefficient is\", pearson_coef, \" with a P-value of P = \", p_value) ",
"The Pearson Correlation Coefficient is -0.6865710067844677 with a P-value of P = 2.321132065567674e-29\n"
]
],
[
[
"<h4>Conclusion:</h4>\n<p>Since the p-value is $<$ 0.001, the correlation between city-mpg and price is statistically significant, and the coefficient of about -0.687 shows that the relationship is negative and moderately strong.</p>\n",
"_____no_output_____"
],
[
"<h3>Highway-mpg vs. Price</h3>\n",
"_____no_output_____"
]
],
[
[
"pearson_coef, p_value = stats.pearsonr(df['highway-mpg'], df['price'])\nprint( \"The Pearson Correlation Coefficient is\", pearson_coef, \" with a P-value of P = \", p_value ) ",
"The Pearson Correlation Coefficient is -0.7046922650589529 with a P-value of P = 1.7495471144477352e-31\n"
]
],
[
[
"#### Conclusion:\n\nSince the p-value is < 0.001, the correlation between highway-mpg and price is statistically significant, and the coefficient of about -0.705 shows that the relationship is negative and moderately strong.\n",
"_____no_output_____"
],
[
"<h2 id=\"anova\">6. ANOVA</h2>\n",
"_____no_output_____"
],
[
"<h3>ANOVA: Analysis of Variance</h3>\n<p>The Analysis of Variance (ANOVA) is a statistical method used to test whether there are significant differences between the means of two or more groups. ANOVA returns two parameters:</p>\n\n<p><b>F-test score</b>: ANOVA assumes the means of all groups are the same, calculates how much the actual means deviate from the assumption, and reports it as the F-test score. A larger score means there is a larger difference between the means.</p>\n\n<p><b>P-value</b>: P-value tells how statistically significant our calculated score value is.</p>\n\n<p>If our price variable is strongly correlated with the variable we are analyzing, we expect ANOVA to return a sizeable F-test score and a small p-value.</p>\n",
"_____no_output_____"
],
[
"<h3>Drive Wheels</h3>\n",
"_____no_output_____"
],
[
"<p>Since ANOVA analyzes the difference between different groups of the same variable, the groupby function will come in handy. Because the ANOVA algorithm averages the data automatically, we do not need to take the average before hand.</p>\n\n<p>To see if different types of 'drive-wheels' impact 'price', we group the data.</p>\n",
"_____no_output_____"
]
],
[
[
"grouped_test2=df_gptest[['drive-wheels', 'price']].groupby(['drive-wheels'])\ngrouped_test2.head(2)",
"_____no_output_____"
],
[
"df_gptest",
"_____no_output_____"
]
],
[
[
"We can obtain the values of the method group using the method \"get_group\".\n",
"_____no_output_____"
]
],
[
[
"grouped_test2.get_group('4wd')['price']",
"_____no_output_____"
]
],
[
[
"We can use the function 'f_oneway' in the module 'stats' to obtain the <b>F-test score</b> and <b>P-value</b>.\n",
"_____no_output_____"
]
],
[
[
"# ANOVA\nf_val, p_val = stats.f_oneway(grouped_test2.get_group('fwd')['price'], grouped_test2.get_group('rwd')['price'], grouped_test2.get_group('4wd')['price']) \n \nprint( \"ANOVA results: F=\", f_val, \", P =\", p_val) ",
"ANOVA results: F= 67.95406500780399 , P = 3.3945443577151245e-23\n"
]
],
[
[
"This is a great result with a large F-test score showing a strong correlation and a P-value of almost 0 implying almost certain statistical significance. But does this mean all three tested groups are all this highly correlated?\n\nLet's examine them separately.\n",
"_____no_output_____"
],
[
"#### fwd and rwd\n",
"_____no_output_____"
]
],
[
[
"f_val, p_val = stats.f_oneway(grouped_test2.get_group('fwd')['price'], grouped_test2.get_group('rwd')['price']) \n \nprint( \"ANOVA results: F=\", f_val, \", P =\", p_val )",
"ANOVA results: F= 130.5533160959111 , P = 2.2355306355677845e-23\n"
]
],
[
[
"Let's examine the other groups.\n",
"_____no_output_____"
],
[
"#### 4wd and rwd\n",
"_____no_output_____"
]
],
[
[
"f_val, p_val = stats.f_oneway(grouped_test2.get_group('4wd')['price'], grouped_test2.get_group('rwd')['price']) \n \nprint( \"ANOVA results: F=\", f_val, \", P =\", p_val) ",
"ANOVA results: F= 8.580681368924756 , P = 0.004411492211225333\n"
]
],
[
[
"<h4>4wd and fwd</h4>\n",
"_____no_output_____"
]
],
[
[
"f_val, p_val = stats.f_oneway(grouped_test2.get_group('4wd')['price'], grouped_test2.get_group('fwd')['price']) \n \nprint(\"ANOVA results: F=\", f_val, \", P =\", p_val) ",
"ANOVA results: F= 0.665465750252303 , P = 0.41620116697845666\n"
]
],
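[
[
"# Added summary (not part of the original lab): Pearson correlation with price for the continuous\n# variables short-listed in the conclusion below, strongest first.\nimportant_vars = ['length', 'width', 'curb-weight', 'engine-size', 'horsepower',\n                  'city-mpg', 'highway-mpg', 'wheel-base', 'bore']\ndf[important_vars + ['price']].corr()['price'].drop('price').sort_values(ascending=False)",
"_____no_output_____"
]
],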
[
[
"<h3>Conclusion: Important Variables</h3>\n",
"_____no_output_____"
],
[
"<p>We now have a better idea of what our data looks like and which variables are important to take into account when predicting the car price. We have narrowed it down to the following variables:</p>\n\nContinuous numerical variables:\n\n<ul>\n <li>Length</li>\n <li>Width</li>\n <li>Curb-weight</li>\n <li>Engine-size</li>\n <li>Horsepower</li>\n <li>City-mpg</li>\n <li>Highway-mpg</li>\n <li>Wheel-base</li>\n <li>Bore</li>\n</ul>\n\nCategorical variables:\n\n<ul>\n <li>Drive-wheels</li>\n</ul>\n\n<p>As we now move into building machine learning models to automate our analysis, feeding the model with variables that meaningfully affect our target variable will improve our model's prediction performance.</p>\n",
"_____no_output_____"
],
[
"### Thank you for completing this lab!\n\n## Author\n\n<a href=\"https://www.linkedin.com/in/joseph-s-50398b136/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDA0101ENSkillsNetwork20235326-2021-01-01\" target=\"_blank\">Joseph Santarcangelo</a>\n\n### Other Contributors\n\n<a href=\"https://www.linkedin.com/in/mahdi-noorian-58219234/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDA0101ENSkillsNetwork20235326-2021-01-01\" target=\"_blank\">Mahdi Noorian PhD</a>\n\nBahare Talayian\n\nEric Xiao\n\nSteven Dong\n\nParizad\n\nHima Vasudevan\n\n<a href=\"https://www.linkedin.com/in/fiorellawever/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDA0101ENSkillsNetwork20235326-2021-01-01\" target=\"_blank\">Fiorella Wenver</a>\n\n<a href=\"https:// https://www.linkedin.com/in/yi-leng-yao-84451275/ \" target=\"_blank\" >Yi Yao</a>.\n\n## Change Log\n\n| Date (YYYY-MM-DD) | Version | Changed By | Change Description |\n| ----------------- | ------- | ---------- | ---------------------------------- |\n| 2020-10-30 | 2.1 | Lakshmi | changed URL of csv |\n| 2020-08-27 | 2.0 | Lavanya | Moved lab to course repo in GitLab |\n\n<hr>\n\n## <h3 align=\"center\"> © IBM Corporation 2020. All rights reserved. <h3/>\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
4a05f643b72af97f32e94c040545d2f87659de4e
| 147,982 |
ipynb
|
Jupyter Notebook
|
notebooks/20200929-osic-baseline-lgbm-with-custom-metric-v2.ipynb
|
KFurudate/kaggle_OSIC_Pulmonary_Fibrosis_Progression
|
72f0dcfac8ccab295d0e827bd44d9bd40b0b7489
|
[
"MIT"
] | null | null | null |
notebooks/20200929-osic-baseline-lgbm-with-custom-metric-v2.ipynb
|
KFurudate/kaggle_OSIC_Pulmonary_Fibrosis_Progression
|
72f0dcfac8ccab295d0e827bd44d9bd40b0b7489
|
[
"MIT"
] | 7 |
2020-09-29T17:52:00.000Z
|
2020-09-30T00:50:01.000Z
|
notebooks/20200929-osic-baseline-lgbm-with-custom-metric-v2.ipynb
|
KFurudate/kaggle_OSIC_Pulmonary_Fibrosis_Progression
|
72f0dcfac8ccab295d0e827bd44d9bd40b0b7489
|
[
"MIT"
] | null | null | null | 38.01233 | 13,312 | 0.499297 |
[
[
[
"Thanks for:\n\nhttps://www.kaggle.com/ttahara/osic-baseline-lgbm-with-custom-metric\n\nhttps://www.kaggle.com/carlossouza/bayesian-experiments\n",
"_____no_output_____"
],
[
"## About\n\nIn this competition, participants are requiered to predict `FVC` and its **_`Confidence`_**. \nHere, I trained Lightgbm to predict them at the same time by utilizing custom metric.\n\nMost of codes in this notebook are forked from @yasufuminakama 's [lgbm baseline](https://www.kaggle.com/yasufuminakama/osic-lgb-baseline). Thanks!",
"_____no_output_____"
],
[
"## Library",
"_____no_output_____"
]
],
[
[
"import os\nimport operator\nimport typing as tp\nfrom logging import getLogger, INFO, StreamHandler, FileHandler, Formatter\nfrom functools import partial\n\n\nimport numpy as np\nimport pandas as pd\nimport pymc3 as pm\nimport random\nimport math\n\nfrom tqdm.notebook import tqdm\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nfrom sklearn.model_selection import StratifiedKFold, GroupKFold, KFold\nfrom sklearn.metrics import mean_squared_error\nfrom sklearn.preprocessing import StandardScaler,LabelEncoder\nimport category_encoders as ce\n\nfrom PIL import Image\nimport cv2\nimport pydicom\n\nimport torch\n\nimport lightgbm as lgb\nfrom sklearn.linear_model import Ridge\n\nimport warnings\nwarnings.filterwarnings(\"ignore\")",
"_____no_output_____"
]
],
[
[
"## Utils",
"_____no_output_____"
]
],
[
[
"def get_logger(filename='log'):\n logger = getLogger(__name__)\n logger.setLevel(INFO)\n handler1 = StreamHandler()\n handler1.setFormatter(Formatter(\"%(message)s\"))\n handler2 = FileHandler(filename=f\"{filename}.log\")\n handler2.setFormatter(Formatter(\"%(message)s\"))\n logger.addHandler(handler1)\n logger.addHandler(handler2)\n return logger\n\nlogger = get_logger()\n\n\ndef seed_everything(seed=777):\n random.seed(seed)\n os.environ['PYTHONHASHSEED'] = str(seed)\n np.random.seed(seed)\n torch.manual_seed(seed)\n torch.cuda.manual_seed(seed)\n torch.backends.cudnn.deterministic = True",
"_____no_output_____"
]
],
[
[
"## Config",
"_____no_output_____"
]
],
[
[
"OUTPUT_DICT = './'\n\nID = 'Patient_Week'\nTARGET = 'FVC'\nSEED = 42\nseed_everything(seed=SEED)\n\nN_FOLD = 4",
"_____no_output_____"
]
],
[
[
"# Data Loading",
"_____no_output_____"
]
],
[
[
"train = pd.read_csv('../input/osic-pulmonary-fibrosis-progression/train.csv')\ntr = train.copy()\ntrain[ID] = train['Patient'].astype(str) + '_' + train['Weeks'].astype(str)\nprint(train.shape)\ntrain.head()",
"(1549, 8)\n"
],
[
"# construct train input\n\noutput = pd.DataFrame()\ngb = train.groupby('Patient')\ntk0 = tqdm(gb, total=len(gb))\nfor _, usr_df in tk0:\n usr_output = pd.DataFrame()\n for week, tmp in usr_df.groupby('Weeks'):\n rename_cols = {'Weeks': 'base_Week', 'FVC': 'base_FVC', 'Percent': 'base_Percent', 'Age': 'base_Age'}\n tmp = tmp.drop(columns='Patient_Week').rename(columns=rename_cols)\n drop_cols = ['Age', 'Sex', 'SmokingStatus', 'Percent']\n _usr_output = usr_df.drop(columns=drop_cols).rename(columns={'Weeks': 'predict_Week'}).merge(tmp, on='Patient')\n _usr_output['Week_passed'] = _usr_output['predict_Week'] - _usr_output['base_Week']\n usr_output = pd.concat([usr_output, _usr_output])\n output = pd.concat([output, usr_output])\n \ntrain = output[output['Week_passed']!=0].reset_index(drop=True)\nprint(train.shape)\ntrain.head()",
"_____no_output_____"
],
[
"# construct test input\n\ntest = pd.read_csv('../input/osic-pulmonary-fibrosis-progression/test.csv')\nts = test.copy()",
"_____no_output_____"
]
],
[
[
"# Create test dataset with Bayesian approach\nhttps://colab.research.google.com/drive/13WTKUlpYEtN0RNhzax_j8gbf84FuU1CF?authuser=1#scrollTo=jUeafaYrv9Em",
"_____no_output_____"
]
],
[
[
"# PercentをFVCに合わせて補正\n# X * Percent / 100 = FVC\n# X = FVC * 100 / Percent\n\ndic = {}\nfor i in range(len(test)):\n X = int(test.FVC[i]*100/test.Percent[i])\n dic[test.Patient[i]] = X\ndic",
"_____no_output_____"
],
[
"tr = pd.concat([tr, ts], axis=0, ignore_index=True).drop_duplicates()\nle_id = LabelEncoder()\ntr['PatientID'] = le_id.fit_transform(tr['Patient'])",
"_____no_output_____"
],
[
"n_patients = tr['Patient'].nunique()\nFVC_obs = tr['FVC'].values\nWeeks = tr['Weeks'].values\nPatientID = tr['PatientID'].values\n\nwith pm.Model() as model_a:\n # create shared variables that can be changed later on\n FVC_obs_shared = pm.Data(\"FVC_obs_shared\", FVC_obs)\n Weeks_shared = pm.Data('Weeks_shared', Weeks)\n PatientID_shared = pm.Data('PatientID_shared', PatientID)\n \n mu_a = pm.Normal('mu_a', mu=1700., sigma=400)\n sigma_a = pm.HalfNormal('sigma_a', 1000.)\n mu_b = pm.Normal('mu_b', mu=-4., sigma=1)\n sigma_b = pm.HalfNormal('sigma_b', 5.)\n\n a = pm.Normal('a', mu=mu_a, sigma=sigma_a, shape=n_patients)\n b = pm.Normal('b', mu=mu_b, sigma=sigma_b, shape=n_patients)\n\n # Model error\n sigma = pm.HalfNormal('sigma', 150.)\n\n FVC_est = a[PatientID_shared] + b[PatientID_shared] * Weeks_shared\n\n # Data likelihood\n FVC_like = pm.Normal('FVC_like', mu=FVC_est,\n sigma=sigma, observed=FVC_obs_shared)\n \n # Fitting the model\n trace_a = pm.sample(2000, tune=2000, target_accept=.9, init=\"adapt_diag\")",
"_____no_output_____"
],
[
"pred_template = []\nfor p in ts['Patient'].unique():\n df = pd.DataFrame(columns=['PatientID', 'Weeks'])\n df['Weeks'] = np.arange(-12, 134)\n df['Patient'] = p\n pred_template.append(df)\npred_template = pd.concat(pred_template, ignore_index=True)\npred_template['PatientID'] = le_id.transform(pred_template['Patient'])\n\nwith model_a:\n pm.set_data({\n \"PatientID_shared\": pred_template['PatientID'].values.astype(int),\n \"Weeks_shared\": pred_template['Weeks'].values.astype(int),\n \"FVC_obs_shared\": np.zeros(len(pred_template)).astype(int),\n })\n post_pred = pm.sample_posterior_predictive(trace_a)",
"_____no_output_____"
],
[
"df = pd.DataFrame(columns=['Patient', 'Weeks', 'Patient_Week', 'FVC', 'Confidence'])\ndf['Patient'] = pred_template['Patient']\ndf['Weeks'] = pred_template['Weeks']\ndf['Patient_Week'] = df['Patient'] + '_' + df['Weeks'].astype(str)\ndf['FVC'] = post_pred['FVC_like'].T.mean(axis=1)\ndf['Confidence'] = post_pred['FVC_like'].T.std(axis=1)\nfinal = df[['Patient_Week', 'FVC', 'Confidence']]\nfinal.to_csv('submission.csv', index=False)\nprint(final.shape)\nfinal",
"(730, 3)\n"
],
[
"test = test.rename(columns={'Weeks': 'base_Week', 'FVC': 'base_FVC', 'Percent': 'base_Percent', 'Age': 'base_Age'})\nsubmission = pd.read_csv('../input/osic-pulmonary-fibrosis-progression/sample_submission.csv')\nsubmission['Patient'] = submission['Patient_Week'].apply(lambda x: x.split('_')[0])\nsubmission['predict_Week'] = submission['Patient_Week'].apply(lambda x: x.split('_')[1]).astype(int)\ntest = submission.drop(columns=['FVC', 'Confidence']).merge(test, on='Patient')\ntest['Week_passed'] = test['predict_Week'] - test['base_Week']\nprint(test.shape)\ntest",
"(730, 10)\n"
],
[
"test = test.drop(columns='base_FVC').merge(final[[\"Patient_Week\", \"FVC\"]], on='Patient_Week')\ntest",
"_____no_output_____"
],
[
"# Percent = FVC * 100 /X\n\nfor i in range(len(test)):\n Percent = test.FVC[i]*100 / dic[test.Patient[i]]\n test.base_Percent[i] = Percent\ntest",
"_____no_output_____"
],
[
"#getting FVC for base week and setting it as base_FVC of patient\ndef get_base_FVC(data):\n df = data.copy()\n df['min_week'] = df.groupby('Patient')['predict_Week'].transform('min')\n base = df.loc[df.predict_Week == df.min_week][['Patient','FVC']].copy()\n base.columns = ['Patient','base_FVC']\n \n base['nb']=1\n base['nb'] = base.groupby('Patient')['nb'].transform('cumsum')\n \n base = base[base.nb==1]\n base.drop('nb',axis =1,inplace=True)\n df = df.merge(base,on=\"Patient\",how='left')\n df.drop(['min_week'], axis = 1)\n return df \n\n#For Inference\n#getting Number of CT \ndef get_N_CT(data, mode=\"test\"):\n df = data.copy()\n N_CT = []\n for pt_id in df.Patient:\n if mode is \"test\":\n png_dir = os.path.join(image_folder, pt_id)\n if mode is \"train\":\n png_dir = os.path.join(data_dir, 'train', pt_id)\n files = os.listdir(png_dir)\n N_CT.append(len(files))\n df[\"N_CT\"] = N_CT\n return df",
"_____no_output_____"
],
[
"test[\"min_Weeks\"] = np.nan\ntest = get_base_FVC(test)\ntest",
"_____no_output_____"
],
[
"test = test.drop(['min_Weeks', 'min_week'], axis = 1)\ntest",
"_____no_output_____"
],
[
"submission = pd.read_csv('../input/osic-pulmonary-fibrosis-progression/sample_submission.csv')\nprint(submission.shape)\nsubmission.head()",
"(730, 3)\n"
]
],
[
[
"# Prepare folds",
"_____no_output_____"
]
],
[
[
"folds = train[[ID, 'Patient', TARGET]].copy()\n#Fold = KFold(n_splits=N_FOLD, shuffle=True, random_state=SEED)\nFold = GroupKFold(n_splits=N_FOLD)\ngroups = folds['Patient'].values\nfor n, (train_index, val_index) in enumerate(Fold.split(folds, folds[TARGET], groups)):\n folds.loc[val_index, 'fold'] = int(n)\nfolds['fold'] = folds['fold'].astype(int)\nfolds",
"_____no_output_____"
]
],
[
[
"## Custom Objective / Metric\n\nThe competition evaluation metric is:\n\n$\n\\displaystyle \\sigma_{clipped} = \\max \\left ( \\sigma, 70 \\right ) \\\\\n\\displaystyle \\Delta = \\min \\left ( \\|FVC_{ture} - FVC_{predicted}\\|, 1000 \\right ) \\\\\n\\displaystyle f_{metric} = - \\frac{\\sqrt{2} \\Delta}{\\sigma_{clipped}} - \\ln \\left( \\sqrt{2} \\sigma_{clipped} \\right) .\n$\n\nThis is too complex to directly optimize by custom metric.\nHere I use negative loglilelihood loss (_NLL_) of gaussian. \n\nLet $FVC_{ture}$ is $t$ and $FVC_{predicted}$ is $\\mu$, the _NLL_ $l$ is formulated by:\n\n$\n\\displaystyle l\\left( t, \\mu, \\sigma \\right) =\n-\\ln \\left [ \\frac{1}{\\sqrt{2 \\pi} \\sigma} \\exp \\left \\{ - \\frac{\\left(t - \\mu \\right)^2}{2 \\sigma^2} \\right \\} \\right ]\n= \\frac{\\left(t - \\mu \\right)^2}{2 \\sigma^2} + \\ln \\left( \\sqrt{2 \\pi} \\sigma \\right).\n$\n\n`grad` and `hess` are calculated as follows:\n\n$\n\\displaystyle \\frac{\\partial l}{\\partial \\mu } = -\\frac{t - \\mu}{\\sigma^2} \\ , \\ \\frac{\\partial^2 l}{\\partial \\mu^2 } = \\frac{1}{\\sigma^2}\n$\n\n$\n\\displaystyle \\frac{\\partial l}{\\partial \\sigma}\n=-\\frac{\\left(t - \\mu \\right)^2}{\\sigma^3} + \\frac{1}{\\sigma} = \\frac{1}{\\sigma} \\left\\{ 1 - \\left ( \\frac{t - \\mu}{\\sigma} \\right)^2 \\right \\}\n\\\\\n\\displaystyle \\frac{\\partial^2 l}{\\partial \\sigma^2}\n= -\\frac{1}{\\sigma^2} \\left\\{ 1 - \\left ( \\frac{t - \\mu}{\\sigma} \\right)^2 \\right \\}\n+\\frac{1}{\\sigma} \\frac{2 \\left(t - \\mu \\right)^2 }{\\sigma^3}\n= -\\frac{1}{\\sigma^2} \\left\\{ 1 - 3 \\left ( \\frac{t - \\mu}{\\sigma} \\right)^2 \\right \\}\n$",
"_____no_output_____"
],
[
"For numerical stability, I replace $\\sigma$ with $\\displaystyle \\tilde{\\sigma} := \\log\\left(1 + \\mathrm{e}^{\\sigma} \\right).$\n\n$\n\\displaystyle l'\\left( t, \\mu, \\sigma \\right)\n= \\frac{\\left(t - \\mu \\right)^2}{2 \\tilde{\\sigma}^2} + \\ln \\left( \\sqrt{2 \\pi} \\tilde{\\sigma} \\right).\n$\n\n$\n\\displaystyle \\frac{\\partial l'}{\\partial \\mu } = -\\frac{t - \\mu}{\\tilde{\\sigma}^2} \\ , \\ \\frac{\\partial^2 l}{\\partial \\mu^2 } = \\frac{1}{\\tilde{\\sigma}^2}\n$\n<br>\n\n$\n\\displaystyle \\frac{\\partial l'}{\\partial \\sigma}\n= \\frac{1}{\\tilde{\\sigma}} \\left\\{ 1 - \\left ( \\frac{t - \\mu}{\\tilde{\\sigma}} \\right)^2 \\right \\} \\frac{\\partial \\tilde{\\sigma}}{\\partial \\sigma}\n\\\\\n\\displaystyle \\frac{\\partial^2 l'}{\\partial \\sigma^2}\n= -\\frac{1}{\\tilde{\\sigma}^2} \\left\\{ 1 - 3 \\left ( \\frac{t - \\mu}{\\tilde{\\sigma}} \\right)^2 \\right \\}\n\\left( \\frac{\\partial \\tilde{\\sigma}}{\\partial \\sigma} \\right) ^2\n+\\frac{1}{\\tilde{\\sigma}} \\left\\{ 1 - \\left ( \\frac{t - \\mu}{\\tilde{\\sigma}} \\right)^2 \\right \\} \\frac{\\partial^2 \\tilde{\\sigma}}{\\partial \\sigma^2}\n$\n\n, where \n\n$\n\\displaystyle\n\\frac{\\partial \\tilde{\\sigma}}{\\partial \\sigma} = \\frac{1}{1 + \\mathrm{e}^{-\\sigma}} \\\\\n\\displaystyle\n\\frac{\\partial^2 \\tilde{\\sigma}}{\\partial^2 \\sigma} = \\frac{\\mathrm{e}^{-\\sigma}}{\\left( 1 + \\mathrm{e}^{-\\sigma} \\right)^2}\n= \\frac{\\partial \\tilde{\\sigma}}{\\partial \\sigma} \\left( 1 - \\frac{\\partial \\tilde{\\sigma}}{\\partial \\sigma} \\right)\n$",
"_____no_output_____"
]
],
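[
[
"Before wiring these formulas into LightGBM, it is worth sanity-checking them numerically. The next cell is an addition to the original notebook: it re-implements the softplus-parameterized NLL with throwaway helper names (`softplus`, `nll`, `grad_mu`, `grad_sigma` are made up for illustration) and compares the analytic gradients above against central finite differences at one arbitrary point.",
"_____no_output_____"
],
[
"import numpy as np\n\n# Quick numerical check of the analytic gradients derived above (my addition).\n# s is the raw model output; sigma_tilde = log(1 + exp(s)) as in the text.\ndef softplus(s):\n    return np.log1p(np.exp(s))\n\ndef nll(t, mu, s):\n    st = softplus(s)\n    return (t - mu) ** 2 / (2 * st ** 2) + np.log(np.sqrt(2 * np.pi) * st)\n\ndef grad_mu(t, mu, s):\n    return -(t - mu) / softplus(s) ** 2\n\ndef grad_sigma(t, mu, s):\n    st = softplus(s)\n    return (1 - ((t - mu) / st) ** 2) / st * (1 / (1 + np.exp(-s)))\n\n# arbitrary test point; analytic and numerical values should agree closely\nt, mu, s, eps = 2800.0, 2650.0, 5.0, 1e-5\nprint(grad_mu(t, mu, s), (nll(t, mu + eps, s) - nll(t, mu - eps, s)) / (2 * eps))\nprint(grad_sigma(t, mu, s), (nll(t, mu, s + eps) - nll(t, mu, s - eps)) / (2 * eps))",
"_____no_output_____"
]
],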
[
[
"class OSICLossForLGBM:\n \"\"\"\n Custom Loss for LightGBM.\n \n * Objective: return grad & hess of NLL of gaussian\n * Evaluation: return competition metric\n \"\"\"\n \n def __init__(self, epsilon: float=1) -> None:\n \"\"\"Initialize.\"\"\"\n self.name = \"osic_loss\"\n self.n_class = 2 # FVC & Confidence\n self.epsilon = epsilon\n \n def __call__(self, preds: np.ndarray, labels: np.ndarray, weight: tp.Optional[np.ndarray]=None) -> float:\n \"\"\"Calc loss.\"\"\"\n sigma_clip = np.maximum(preds[:, 1], 70)\n Delta = np.minimum(np.abs(preds[:, 0] - labels), 1000)\n loss_by_sample = - np.sqrt(2) * Delta / sigma_clip - np.log(np.sqrt(2) * sigma_clip)\n loss = np.average(loss_by_sample, weight)\n \n return loss\n \n def _calc_grad_and_hess(\n self, preds: np.ndarray, labels: np.ndarray, weight: tp.Optional[np.ndarray]=None\n ) -> tp.Tuple[np.ndarray]:\n \"\"\"Calc Grad and Hess\"\"\"\n mu = preds[:, 0]\n sigma = preds[:, 1]\n \n sigma_t = np.log(1 + np.exp(sigma))\n grad_sigma_t = 1 / (1 + np.exp(- sigma))\n hess_sigma_t = grad_sigma_t * (1 - grad_sigma_t)\n \n grad = np.zeros_like(preds)\n hess = np.zeros_like(preds)\n grad[:, 0] = - (labels - mu) / sigma_t ** 2\n hess[:, 0] = 1 / sigma_t ** 2\n \n tmp = ((labels - mu) / sigma_t) ** 2\n grad[:, 1] = 1 / sigma_t * (1 - tmp) * grad_sigma_t\n hess[:, 1] = (\n - 1 / sigma_t ** 2 * (1 - 3 * tmp) * grad_sigma_t ** 2\n + 1 / sigma_t * (1 - tmp) * hess_sigma_t\n )\n if weight is not None:\n grad = grad * weight[:, None]\n hess = hess * weight[:, None]\n return grad, hess\n \n def return_loss(self, preds: np.ndarray, data: lgb.Dataset) -> tp.Tuple[str, float, bool]:\n \"\"\"Return Loss for lightgbm\"\"\"\n labels = data.get_label()\n weight = data.get_weight()\n n_example = len(labels)\n \n # # reshape preds: (n_class * n_example,) => (n_class, n_example) => (n_example, n_class)\n preds = preds.reshape(self.n_class, n_example).T\n # # calc loss\n loss = self(preds, labels, weight)\n \n return self.name, loss, True\n \n def return_grad_and_hess(self, preds: np.ndarray, data: lgb.Dataset) -> tp.Tuple[np.ndarray]:\n \"\"\"Return Grad and Hess for lightgbm\"\"\"\n labels = data.get_label()\n weight = data.get_weight()\n n_example = len(labels)\n \n # # reshape preds: (n_class * n_example,) => (n_class, n_example) => (n_example, n_class)\n preds = preds.reshape(self.n_class, n_example).T\n # # calc grad and hess.\n grad, hess = self._calc_grad_and_hess(preds, labels, weight)\n\n # # reshape grad, hess: (n_example, n_class) => (n_class, n_example) => (n_class * n_example,) \n grad = grad.T.reshape(n_example * self.n_class)\n hess = hess.T.reshape(n_example * self.n_class)\n \n return grad, hess",
"_____no_output_____"
]
],
[
[
"## Training Utils",
"_____no_output_____"
]
],
[
[
"#===========================================================\n# model\n#===========================================================\ndef run_single_lightgbm(\n model_param, fit_param, train_df, test_df, folds, features, target,\n fold_num=0, categorical=[], my_loss=None,\n):\n trn_idx = folds[folds.fold != fold_num].index\n val_idx = folds[folds.fold == fold_num].index\n logger.info(f'len(trn_idx) : {len(trn_idx)}')\n logger.info(f'len(val_idx) : {len(val_idx)}')\n \n if categorical == []:\n trn_data = lgb.Dataset(\n train_df.iloc[trn_idx][features], label=target.iloc[trn_idx])\n val_data = lgb.Dataset(\n train_df.iloc[val_idx][features], label=target.iloc[val_idx])\n else:\n trn_data = lgb.Dataset(\n train_df.iloc[trn_idx][features], label=target.iloc[trn_idx],\n categorical_feature=categorical)\n val_data = lgb.Dataset(\n train_df.iloc[val_idx][features], label=target.iloc[val_idx],\n categorical_feature=categorical)\n\n oof = np.zeros((len(train_df), 2))\n predictions = np.zeros((len(test_df), 2))\n \n best_model_str = [\"\"]\n clf = lgb.train(\n model_param, trn_data, **fit_param,\n valid_sets=[trn_data, val_data],\n fobj=my_loss.return_grad_and_hess,\n feval=my_loss.return_loss,\n )\n oof[val_idx] = clf.predict(train_df.iloc[val_idx][features], num_iteration=clf.best_iteration)\n\n fold_importance_df = pd.DataFrame()\n fold_importance_df[\"Feature\"] = features\n fold_importance_df[\"importance\"] = clf.feature_importance(importance_type='gain')\n fold_importance_df[\"fold\"] = fold_num\n\n predictions += clf.predict(test_df[features], num_iteration=clf.best_iteration)\n \n # RMSE\n logger.info(\"fold{} RMSE score: {:<8.5f}\".format(\n fold_num, np.sqrt(mean_squared_error(target[val_idx], oof[val_idx, 0]))))\n # Competition Metric\n logger.info(\"fold{} Metric: {:<8.5f}\".format(\n fold_num, my_loss(oof[val_idx], target[val_idx])))\n \n return oof, predictions, fold_importance_df\n\n\ndef run_kfold_lightgbm(\n model_param, fit_param, train, test, folds,\n features, target, n_fold=5, categorical=[], my_loss=None,\n):\n \n logger.info(f\"================================= {n_fold}fold lightgbm =================================\")\n \n oof = np.zeros((len(train), 2))\n predictions = np.zeros((len(test), 2))\n feature_importance_df = pd.DataFrame()\n\n for fold_ in range(n_fold):\n print(\"Fold {}\".format(fold_))\n _oof, _predictions, fold_importance_df =\\\n run_single_lightgbm(\n model_param, fit_param, train, test, folds,\n features, target, fold_num=fold_, categorical=categorical, my_loss=my_loss\n )\n feature_importance_df = pd.concat([feature_importance_df, fold_importance_df], axis=0)\n oof += _oof\n predictions += _predictions / n_fold\n\n # RMSE\n logger.info(\"CV RMSE score: {:<8.5f}\".format(np.sqrt(mean_squared_error(target, oof[:, 0]))))\n # Metric\n logger.info(\"CV Metric: {:<8.5f}\".format(my_loss(oof, target)))\n \n\n logger.info(f\"=========================================================================================\")\n \n return feature_importance_df, predictions, oof\n\n \ndef show_feature_importance(feature_importance_df, name):\n cols = (feature_importance_df[[\"Feature\", \"importance\"]]\n .groupby(\"Feature\")\n .mean()\n .sort_values(by=\"importance\", ascending=False)[:50].index)\n best_features = feature_importance_df.loc[feature_importance_df.Feature.isin(cols)]\n\n #plt.figure(figsize=(8, 16))\n plt.figure(figsize=(6, 4))\n sns.barplot(x=\"importance\", y=\"Feature\", data=best_features.sort_values(by=\"importance\", ascending=False))\n 
plt.title('Features importance (averaged/folds)')\n plt.tight_layout()\n plt.savefig(OUTPUT_DICT+f'feature_importance_{name}.png')",
"_____no_output_____"
]
],
[
[
"## predict FVC & Confidence(signa)",
"_____no_output_____"
]
],
[
[
"target = train[TARGET]\ntest[TARGET] = np.nan\n\n# features\ncat_features = ['Sex', 'SmokingStatus']\nnum_features = [c for c in test.columns if (test.dtypes[c] != 'object') & (c not in cat_features)]\nfeatures = num_features + cat_features\ndrop_features = [ID, TARGET, 'predict_Week', 'base_Week']\nfeatures = [c for c in features if c not in drop_features]\n\nif cat_features:\n ce_oe = ce.OrdinalEncoder(cols=cat_features, handle_unknown='impute')\n ce_oe.fit(train)\n train = ce_oe.transform(train)\n test = ce_oe.transform(test)\n \nlgb_model_param = {\n 'num_class': 2,\n # 'objective': 'regression',\n 'metric': 'None',\n 'boosting_type': 'gbdt',\n 'learning_rate': 5e-02,\n 'seed': SEED,\n \"subsample\": 0.4,\n \"subsample_freq\": 1,\n 'max_depth': 1,\n 'verbosity': -1,\n}\nlgb_fit_param = {\n \"num_boost_round\": 10000,\n \"verbose_eval\":100,\n \"early_stopping_rounds\": 500,\n}\n\nfeature_importance_df, predictions, oof = run_kfold_lightgbm(\n lgb_model_param, lgb_fit_param, train, test,\n folds, features, target,\n n_fold=N_FOLD, categorical=cat_features, my_loss=OSICLossForLGBM())\n \nshow_feature_importance(feature_importance_df, TARGET)",
"================================= 4fold lightgbm =================================\nlen(trn_idx) : 9110\nlen(val_idx) : 3034\n"
],
[
"oof[:5, :]",
"_____no_output_____"
],
[
"predictions[:5]",
"_____no_output_____"
],
[
"train[\"FVC_pred\"] = oof[:, 0]\ntrain[\"Confidence\"] = oof[:, 1]\ntest[\"FVC_pred\"] = predictions[:, 0]\ntest[\"Confidence\"] = predictions[:, 1]",
"_____no_output_____"
]
],
[
[
"# Submission",
"_____no_output_____"
]
],
[
[
"submission.head()",
"_____no_output_____"
],
[
"sub = submission.drop(columns=['FVC', 'Confidence']).merge(test[['Patient_Week', 'FVC_pred', 'Confidence']], \n on='Patient_Week')\nsub.columns = submission.columns\nsub.to_csv('submission.csv', index=False)\nsub.head()",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4a0607a466538a2420fa8aa9ba56489e8586e21e
| 107,672 |
ipynb
|
Jupyter Notebook
|
ipython/09. Scratch Pad.ipynb
|
rah/optimal-size-selection
|
491b9c6900974e1e04b0fba8ee3d5d488f37b5a6
|
[
"MIT"
] | null | null | null |
ipython/09. Scratch Pad.ipynb
|
rah/optimal-size-selection
|
491b9c6900974e1e04b0fba8ee3d5d488f37b5a6
|
[
"MIT"
] | null | null | null |
ipython/09. Scratch Pad.ipynb
|
rah/optimal-size-selection
|
491b9c6900974e1e04b0fba8ee3d5d488f37b5a6
|
[
"MIT"
] | null | null | null | 117.932092 | 45,272 | 0.796437 |
[
[
[
"A place holder to hold work in progress. ",
"_____no_output_____"
],
[
"# Amphipod A Head Depth to Length Relationship",
"_____no_output_____"
]
],
[
[
"import csv\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nama_x = []\nama_y = []\nwith open('data/AMA_HeadDepth_Length.csv', 'rb') as csvfile:\n reader = csv.reader(csvfile, delimiter=',')\n next(reader, None)\n for row in reader:\n ama_x.append(float(row[0]))\n ama_y.append(float(row[1]))\n\n\n(m, b) = np.polyfit(ama_x, ama_y, 1)\nprint \"y = \" + str(m) + \"x \" + \"+ \" + str(b)\nyp = np.polyval([m,b], ama_x)\n\nfig = plt.figure()\nax = fig.add_subplot(111)\n\nax.set_xlabel(\"Length (mm)\")\nax.set_ylabel(\"Head Depth (mm)\")\nax.set_title(\"Amphipod AMA Length Head Depth Relationship\")\n\nax.plot(ama_x, yp, color='r')\nax.scatter(ama_x, ama_y)\nplt.show()",
"y = 0.146931205104x + 0.0899237678573\n"
]
],
[
[
"# Goby Guts Analysis",
"_____no_output_____"
]
],
[
[
"import pandas as pd\npd.set_option('display.mpl_style', 'default') # Make the graphs a bit prettier\nfigsize(15, 5)\n\ndf = pd.read_excel(\"data/GobyGuts-Jan-83.xls\", 0, parse_dates=True, index_col=0)\ndf.head()",
"_____no_output_____"
]
],
[
[
"See what happens when we group by sex",
"_____no_output_____"
]
],
[
[
"df.groupby('S').sum()",
"_____no_output_____"
]
],
[
[
"The results are interesting, but we need to know how many so that we can compare the sexes.",
"_____no_output_____"
],
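[
"One quick way to get those counts (this cell is my addition, a minimal sketch reusing the same `df` and the `S` sex column from the groupby above): count the rows in each group.",
"_____no_output_____"
],
[
"# Sketch (my addition): count how many rows there are per sex,\n# so the grouped sums above can be compared fairly.\ndf.groupby('S').size()",
"_____no_output_____"
],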
[
"So lets just try plotting the df for fun ..., really we need to modify the dataframe, but this is a good test",
"_____no_output_____"
]
],
[
[
"df['TL'].plot()",
"_____no_output_____"
],
[
"df.describe()",
"_____no_output_____"
],
[
"df['DMY'][:5]",
"_____no_output_____"
],
[
"df['TC'][:5]",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
4a061352e71acdb9026bcb63d25ba96ded585560
| 13,601 |
ipynb
|
Jupyter Notebook
|
notebooks/pipcook_image_classification.ipynb
|
10088/pipcook
|
1ee4765d8349ad96f2554c535a14cc2c5a483af3
|
[
"Apache-2.0"
] | null | null | null |
notebooks/pipcook_image_classification.ipynb
|
10088/pipcook
|
1ee4765d8349ad96f2554c535a14cc2c5a483af3
|
[
"Apache-2.0"
] | null | null | null |
notebooks/pipcook_image_classification.ipynb
|
10088/pipcook
|
1ee4765d8349ad96f2554c535a14cc2c5a483af3
|
[
"Apache-2.0"
] | null | null | null | 49.638686 | 580 | 0.602529 |
[
[
[
"## Environment Initialization\nThis cell is used to initlialize necessary environments for pipcook to run, including Node.js 12.x.",
"_____no_output_____"
]
],
[
[
"!wget -P /tmp https://nodejs.org/dist/v12.19.0/node-v12.19.0-linux-x64.tar.xz\n!rm -rf /usr/local/lib/nodejs\n!mkdir -p /usr/local/lib/nodejs\n!tar -xJf /tmp/node-v12.19.0-linux-x64.tar.xz -C /usr/local/lib/nodejs\n!sh -c 'echo \"export PATH=/usr/local/lib/nodejs/node-v12.19.0-linux-x64/bin:\\$PATH\" >> /etc/profile'\n!rm -f /usr/bin/node\n!rm -f /usr/bin/npm\n!ln -s /usr/local/lib/nodejs/node-v12.19.0-linux-x64/bin/node /usr/bin/node\n!ln -s /usr/local/lib/nodejs/node-v12.19.0-linux-x64/bin/npm /usr/bin/npm\n!npm config delete registry\n\nimport os\nPATH_ENV = os.environ['PATH']\n%env PATH=/usr/local/lib/nodejs/node-v12.19.0-linux-x64/bin:${PATH_ENV}",
"_____no_output_____"
]
],
[
[
"## install pipcook cli tool\npipcook-cli is the cli tool for pipcook for any operations later, including installing pipcook, run pipcook jobs and checking logs.",
"_____no_output_____"
]
],
[
[
"!npm install @pipcook/cli -g\n!rm -f /usr/bin/pipcook\n!ln -s /usr/local/lib/nodejs/node-v12.19.0-linux-x64/bin/pipcook /usr/bin/pipcook",
"_____no_output_____"
]
],
[
[
"# Classify images of UI components\n\n## Background\n\nHave you encountered such a scenario in the front-end business: there are some images in your hand, and you want an automatic way to identify what front-end components these images are, whether it is a button, a navigation bar, or a form? This is a typical image classification task.\n\n> The task of predicting image categories is called image classification. The purpose of training the image classification model is to identify various types of images.\n\nThis identification is very useful. You can use this identification information for code generation or automated testing.\n\nTaking code generation as an example, suppose we have a sketch design draft and the entire design draft is composed of different components. We can traverse the layers of the entire design draft. For each layer, use the model of image classification to identify what component each layer is. After that, we can replace the original design draft layer with the front-end component to generate the front-end code.\n\nAnother example is in the scenario of automated testing. We need an ability to identify the type of each layer. For the button that is recognized, we can automatically click to see if the button works. For the list component that we recognize, we can automatically track loading speed to monitor performance, etc.\n\n## Examples\n\nFor example, in the scenario where the forms are automatically generated, we need to identify which components are column charts or pie charts, as shown in the following figure:\n\n\n\n \n\nAfter the training is completed, for each picture, the model will eventually give us the prediction results we want. For example, when we enter the line chart of Figure 1, the model will give prediction results similar to the following:\n\n```\n[[0.1, 0.9]]\n```\n\nAt the same time, we will generate a labelmap during training. Labelmap is a mapping relationship between the serial number and the actual type. This generation is mainly due to the fact that our classification name is text, but before entering the model, we need to convert the text Into numbers. Here is a labelmap:\n\n```json\n{\n \"column\": 0,\n \"pie\": 1,\n}\n```\n\nFirst, why is the prediction result a two-dimensional array? First of all, the model allows prediction of multiple pictures at once. For each picture, the model will also give an array, this array describes the possibility of each classification, as shown in the labelmap, the classification is arranged in the order of column chart and pie chart, then corresponding to the prediction result of the model, We can see that the column chart has the highest confidence, which is 0.9, so this picture is predicted to be a column chart, that is, the prediction is correct.\n\n## Data Preparation\n\nWhen we are doing image classification tasks similar to this one, we need to organize our dataset in a certain format.\n\nWe need to divide our dataset into a training set (train), a validation set (validation) and a test set (test) according to a certain proportion. Among them, the training set is mainly used to train the model, and the validation set and the test set are used to evaluate the model. The validation set is mainly used to evaluate the model during the training process to facilitate viewing of the model's overfitting and convergence. 
The test set is used to perform an overall evaluation of the model after all training is completed.\n\nIn the training/validation/test set, we will organize the data according to the classification category. For example, we now have two categories, line and ring, then we can create two folders for these two category names, in the corresponding Place pictures under the folder. The overall directory structure is:\n\n- train\n - ring\n - xx.jpg\n - ...\n - line\n - xxjpg\n - ...\n - column\n - ...\n - pie\n - ...\n- validation\n - ring\n - xx.jpg\n - ...\n - line\n - xx.jpg\n - ...\n - column\n - ...\n - pie\n - ...\n- test\n - ring\n - xx.jpg\n - ...\n - line\n - xx.jpg\n - ...\n - column\n - ...\n - pie\n - ...\n\nWe have prepared such a dataset, you can download it and check it out:[Download here](http://ai-sample.oss-cn-hangzhou.aliyuncs.com/pipcook/datasets/component-recognition-image-classification/component-recognition-classification.zip).\n\n## Start Training\n\nAfter the dataset is ready, we can start training. Using Pipcook can be very convenient for the training of image classification. You only need to build the following pipeline:\n```json\n{\n \"specVersion\": \"2.0\",\n \"datasource\": \"https://cdn.jsdelivr.net/gh/imgcook/pipcook-script@fe00a8e/scripts/image-classification-mobilenet/build/datasource.js?url=http://ai-sample.oss-cn-hangzhou.aliyuncs.com/pipcook/datasets/component-recognition-image-classification/component-recognition-classification.zip\",\n \"dataflow\": [\n \"https://cdn.jsdelivr.net/gh/imgcook/pipcook-script@fe00a8e/scripts/image-classification-mobilenet/build/dataflow.js?size=224&size=224\"\n ],\n \"model\": \"https://cdn.jsdelivr.net/gh/imgcook/pipcook-script@fe00a8e/scripts/image-classification-mobilenet/build/model.js\",\n \"artifact\": [{\n \"processor\": \"[email protected]\",\n \"target\": \"/tmp/mobilenet-model.zip\"\n }],\n \"options\": {\n \"framework\": \"[email protected]\",\n \"train\": {\n \"epochs\": 15,\n \"validationRequired\": true\n }\n }\n}\n\n```\nThrough the above scripts, we can see that they are used separately:\n\n1. **datasource** This script is used to download the dataset that meets the image classification described above. Mainly, we need to provide the url parameter, and we provide the dataset address that we prepared above\n2. **dataflow** When performing image classification, we need to have some necessary operations on the original data. For example, image classification requires that all pictures are of the same size, so we use this script to resize the pictures to a uniform size\n3. **model** We use this script to define, train and evaluate and save the model.\n\n[mobilenet](https://arxiv.org/abs/1704.04861) is a lightweight model which can be trained on CPU. If you are using [resnet](https://arxiv.org/abs/1512.03385),since the model is quite large, we recommend use to train on GPU. \n\n> CUDA, short for Compute Unified Device Architecture, is a parallel computing platform and programming model founded by NVIDIA based on the GPUs (Graphics Processing Units, which can be popularly understood as graphics cards).\n\n> With CUDA, GPUs can be conveniently used for general purpose calculations (a bit like numerical calculations performed in the CPU, etc.). Before CUDA, GPUs were generally only used for graphics rendering (such as through OpenGL, DirectX).\n\nNow let's run our image-classification job!",
"_____no_output_____"
]
],
[
[
"!sudo pipcook run https://raw.githubusercontent.com/alibaba/pipcook/main/example/pipelines/databinding-image-classification-resnet.json",
"_____no_output_____"
]
],
[
[
"Often the model will converge at 10-20 epochs. Of course, it depends on the complexity of your dataset. Model convergence means that the loss (loss value) is low enough and the accuracy is high enough.\n\nAfter the training is completed, output will be generated in the current directory, which is a brand-new npm package, then we first install dependencies:",
"_____no_output_____"
]
],
[
[
"!cd output && sudo npm install --unsafe-perm\n!wget http://ai-sample.oss-cn-hangzhou.aliyuncs.com/pipcook/dsw/predict.js",
"_____no_output_____"
]
],
[
[
"Now we can predict. You can just have a try on code below to predict the image we provide. You can replace the image url with your own url to try on your own dataset. The predict result is in form of probablity of each category as we have explained before.",
"_____no_output_____"
]
],
[
[
"!node predict.js https://img.alicdn.com/tfs/TB1ekuMhQY2gK0jSZFgXXc5OFXa-400-400.jpg",
"_____no_output_____"
]
],
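[
[
"The command prints a probability for each category. The next cells are an addition to this tutorial: a minimal Python sketch of the post-processing step, using the illustrative prediction array and labelmap from the Background section above (they are not files produced by this run).",
"_____no_output_____"
],
[
"# Sketch (my addition, not produced by the pipeline): map a probability vector back to a component name.\n# Both values below are the illustrative examples from the Background section.\nprediction = [[0.1, 0.9]]\nlabelmap = {\"column\": 0, \"pie\": 1}\n\nindex_to_label = {index: label for label, index in labelmap.items()}\nbest = max(range(len(prediction[0])), key=lambda i: prediction[0][i])\n# with these example numbers the most likely class is 'pie' with probability 0.9\nprint(index_to_label[best], prediction[0][best])",
"_____no_output_____"
]
],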
[
[
"\nNote that the prediction result we give is the probability of each category. You can process this probability to the result you want.\n\n## Conclusion\n\nIn this way, the component recognition task based on the image classification model is completed. After completing the pipeline in our example, if you are interested in such tasks, you can also start preparing your own dataset for training. We have already introduced the format of the dataset in detail in the data preparation chapter. You only need to follow the file directory to easily prepare the data that matches our image classification pipeline.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4a0615e6ecb32007c74dcde40a6e2af14d0ec81e
| 10,504 |
ipynb
|
Jupyter Notebook
|
tutorial/314_tsp.ipynb
|
Potla1995/Blueqat-tutorials
|
287c00189addf7e0186957238383019d8895de6f
|
[
"Apache-2.0"
] | 1 |
2021-01-27T12:25:01.000Z
|
2021-01-27T12:25:01.000Z
|
tutorial/314_tsp.ipynb
|
Potla1995/Blueqat-tutorials
|
287c00189addf7e0186957238383019d8895de6f
|
[
"Apache-2.0"
] | null | null | null |
tutorial/314_tsp.ipynb
|
Potla1995/Blueqat-tutorials
|
287c00189addf7e0186957238383019d8895de6f
|
[
"Apache-2.0"
] | null | null | null | 33.240506 | 205 | 0.450305 |
[
[
[
"# Travelling Salesman Problem (TSP)\n\nIf we have a list of city and distance between cities, travelling salesman problem is to find out the least sum of the distance visiting all the cities only once.\n\n<img src=\"https://user-images.githubusercontent.com/5043340/45661145-2f8a7a80-bb37-11e8-99d1-42368906cfff.png\" width=\"400\">\n\nPlease prepare the blueqat first.",
"_____no_output_____"
]
],
[
[
"!pip3 install blueqat",
"_____no_output_____"
]
],
[
[
"Import libraries and make an instance",
"_____no_output_____"
]
],
[
[
"import blueqat.wq as wq\nimport numpy as np\na = wq.Opt()",
"_____no_output_____"
]
],
[
[
"\n## Example\n\nLet's see the example we have 4 cities ABCD and we have to visit these cities once. All the cities are connected each other with the distance value as below.\n\n<img src=\"https://user-images.githubusercontent.com/5043340/45661003-8ba0cf00-bb36-11e8-95fc-573e77ded327.png\" width=\"400\">\n",
"_____no_output_____"
],
[
"## Qubomatrix\n\nWe need a QUBO matrix to solve this problem on ising model.\nNow we have a cost function as this,\n\n$H = \\sum_{v=1}^n\\left( 1-\\sum_{j=1}^N x_{v,j} \\right)^2 + \\sum_{j=1}^N\\left(1-\\sum_{v=1}^Nx_{v,j} \\right)^2 + B\\sum_{(u,v)\\in E}W_{u,v}\\sum_{j=1}^N x_{u,j} x_{v,j+1}$ ・・・・・(1)\n\n\n\n$x_{vj}$ is a binary value if visit city $v$ on $j$ th order.\n\n$x_{vj} = 1$ (if visit city v on jth order)、$0$ (not visit)\n\nWe need${N}^2$×${N}^2$ of matrix for N cities.\nNow we have 4 cities, so finally we need 4*4 matrix.\n\nSimly we show $x_{vj}$ as $q_i$\n\n$x_{11}, x_{12}, x_{13}, x_{14}$ → $q_0, q_1, q_2, q_3$\n\n$x_{21}, x_{22}, x_{23}, x_{24}$ → $q_4, q_5, q_6, q_7$\n\n$x_{31}, x_{32}, x_{33}, x_{34}$ → $q_8, q_{9}, q_{10}, q_{11}$\n\n$x_{41}, x_{42}, x_{43}, x_{44}$ → $q_{12}, q_{13}, q_{14}, q_{15}$\n\nWe put number as ABCD cities as $x$1:A、2:B、3:C、4:D\n\nTo calculate the TSP we need 2 constraint term and 1 cost function\n\n* Visit just once on every city.\n* Visit just one city on jth order.\n* Minimize the total distance.\n\n",
"_____no_output_____"
],
[
"## Visit just once on every city\n\n<img src=\"https://user-images.githubusercontent.com/5043340/45663268-8a749f80-bb40-11e8-8c4a-8b2ad1dd3f35.png\" width=\"400\">\n\nIf we think about the constraint visit just once on every city, we have to think about just one qubit on every row will be 1 and others should be 0.\nたとえば、$q_0+q_1+q_2+q_3 = 1$. We think this on all of the row and we get.\n\n${(1-q_0-q_1-q_2-q_3)^2+(1-q_4-q_5-q_6-q_7)^2+(1-q_8-q_9-q_{10}-q_{11})^2+(1-q_{12}-q_{13}-q_{14}-q_{15})^2\n}$\n\n\n\n",
"_____no_output_____"
],
[
"## Visit just one city on jth order\n\nThink about the second constraint.\n<img src=\"https://user-images.githubusercontent.com/5043340/45666641-1bec0d80-bb51-11e8-87f7-0d1bb522f2e8.png\" width=\"400\">\n\nNow we have to think about the column that only one qubit on every col is 1 and others should be 0.\n\n${(1-q_0-q_4-q_8-q_{12})^2+(1-q_1-q_5-q_9-q_{13})^2+(1-q_2-q_6-q_{10}-q_{14})^2+(1-q_{3}-q_{7}-q_{11}-q_{15})^2\n}$\n\nFinally we have, \n\n${2q_0q_1 + 2q_0q_{12} + 2q_0q_2 + 2q_0q_3 + 2q_0q_4 + 2q_0q_8 - 2q_0}$ \n\n${+ 2q_1q_{13} + 2q_1q_2 + 2q_1q_3 + 2q_1q_5 + 2q_1q_9 - 2q_1}$ \n\n${ + 2q_{10}q_{11} + 2q_{10}q_{14} + 2q_{10}q_2 + 2q_{10}q_6 + 2q_{10}q_8 + 2q_{10}q_9 - 2q_{10} }$ \n\n${+ 2q_{11}q_{15} + 2q_{11}q_3 + 2q_{11}q_7 + 2q_{11}q_8 + 2q_{11}q_9 - 2q_{11}}$ \n\n${+ 2q_{12}q_{13} + 2q_{12}q_{14} + 2q_{12}q_{15} + 2q_{12}q_4 + 2q_{12}q_8 - 2q_{12} }$ \n\n${+ 2q_{13}q_{14}+ 2q_{13}q_{15} + 2q_{13}q_5 + 2q_{13}q_9 - 2q_{13} }$ \n\n${+ 2q_{14}q_{15} + 2q_{14}q_2 + 2q_{14}q_6 - 2q_{14}}$ \n\n${+ 2q_{15}q_3 + 2q_{15}q_7 - 2q_{15}}$ \n\n${+ 2q_2q_3 + 2q_2q_6 - 2q_2 + 2q_3q_7 - 2q_3 }$ \n\n${+ 2q_4q_5 + 2q_4q_6 + 2q_4q_7 + 2q_4q_8 - 2q_4 + 2q_5q_6 + 2q_5q_7 + 2q_5q_9 - 2q_5 }$ \n\n${ +2q_6q_7 - 2q_6 - 2q_7 + 2q_8q_9 - 2q_8 - 2q_9 + 8}$ \n\n\nWrite down on a QUBO matrix and we have\n\n\n<img src=\"https://user-images.githubusercontent.com/5043340/45666980-42f70f00-bb52-11e8-93a7-245e9d0f5609.png\" width=\"400\">\n",
"_____no_output_____"
],
[
"## Minimize the total distance\n\nFinally we have to think about the cost function of the total sum of distance and we get this QUBO matrix thinking about the distance between two cities as Jij on the matrix.\n\n<img src=\"https://user-images.githubusercontent.com/5043340/45667633-f3661280-bb54-11e8-9fbe-5dba63749b1d.png\" width=\"400\">\n",
"_____no_output_____"
],
[
"## Add all of the equation and calculate \n\nWe choose the parameter B=0.25 and get the final QUBO matrix which is the sum of all matrix.\n\n## Calculate\n\n\nPut the QUBO on python and start calculating.",
"_____no_output_____"
]
],
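[
[
"Before typing the 16x16 matrix in by hand, it can help to generate it programmatically. The next cell is an addition to the original tutorial: it rebuilds only the two constraint matrices from the one-hot expansions above (the distance term would be added the same way from the distance table in the figure). The hand-written version used by the tutorial follows.",
"_____no_output_____"
],
[
"import numpy as np\n\n# Sketch (my addition): build the two one-hot constraint matrices programmatically.\n# Index convention from above: q = 4*city + order.\nN = 4\nQ_constraints = np.zeros((N * N, N * N))\n\ndef add_one_hot(indices, Q):\n    # (1 - sum_i q_i)^2 = const - sum_i q_i + 2 * sum_{i<j} q_i q_j, using q_i^2 = q_i\n    for a in range(len(indices)):\n        Q[indices[a], indices[a]] += -1.0\n        for b in range(a + 1, len(indices)):\n            Q[indices[a], indices[b]] += 2.0\n\nfor v in range(N):  # visit every city exactly once (rows)\n    add_one_hot([N * v + j for j in range(N)], Q_constraints)\nfor j in range(N):  # occupy every position exactly once (columns)\n    add_one_hot([N * v + j for v in range(N)], Q_constraints)\n\n# The top-left block should match the hand-written matrix below: -2 on the diagonal, +2 above it.\nprint(Q_constraints[:4, :4])",
"_____no_output_____"
]
],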
[
[
"a.qubo=np.array([\n [-2,2,2,2,2,0,0,0,2,0,0,0,2,0,0,0],\n [0,-2,2,2,0,2,0,0,0,2,0,0,0,2,0,0],\n [0,0,-2,2,0,0,2,0,0,0,2,0,0,0,2,0],\n [0,0,0,-2,0,0,0,2,0,0,0,2,0,0,0,2],\n [0,0,0,0,-2,2,2,2,2,0,0,0,2,0,0,0],\n [0,0,0,0,0,-2,2,2,0,2,0,0,0,2,0,0],\n [0,0,0,0,0,0,-2,2,0,0,2,0,0,0,2,0],\n [0,0,0,0,0,0,0,-2,0,0,0,2,0,0,0,2],\n [0,0,0,0,0,0,0,0,-2,2,2,2,2,0,0,0],\n [0,0,0,0,0,0,0,0,0,-2,2,2,0,2,0,0],\n [0,0,0,0,0,0,0,0,0,0,-2,2,0,0,2,0],\n [0,0,0,0,0,0,0,0,0,0,0,-2,0,0,0,2],\n [0,0,0,0,0,0,0,0,0,0,0,0,-2,2,2,2],\n [0,0,0,0,0,0,0,0,0,0,0,0,0,-2,2,2],\n [0,0,0,0,0,0,0,0,0,0,0,0,0,0,-2,2],\n [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-2],\n])\n+np.array([\n [0,0,0,0,0,2,0,2,0,1,0,1,0,3,0,3],\n [0,0,0,0,2,0,2,0,1,0,1,0,3,0,3,0],\n [0,0,0,0,0,2,0,2,0,1,0,1,0,3,0,3],\n [0,0,0,0,2,0,2,0,1,0,1,0,3,0,3,0],\n [0,0,0,0,0,0,0,0,0,4,0,4,0,2,0,2],\n [0,0,0,0,0,0,0,0,4,0,4,0,2,0,2,0],\n [0,0,0,0,0,0,0,0,0,4,0,4,0,2,0,2],\n [0,0,0,0,0,0,0,0,4,0,4,0,2,0,2,0],\n [0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,2],\n [0,0,0,0,0,0,0,0,0,0,0,0,2,0,2,0],\n [0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,2],\n [0,0,0,0,0,0,0,0,0,0,0,0,2,0,2,0],\n [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],\n [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],\n [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],\n [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],\n])*0.25\nanswer = a.sa()",
"_____no_output_____"
]
],
[
[
"And now we have,",
"_____no_output_____"
]
],
[
[
"print(answer)",
"_____no_output_____"
]
],
[
[
"Result is \n\n[1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0]\n\nThis shows that the city should be visited from A→C→D→B→A",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4a061ea8b11b0aeb43939495a4c3fadc0718279d
| 11,143 |
ipynb
|
Jupyter Notebook
|
00-anaconda-jupyter-setup.ipynb
|
kjaanson/jupyter-workshop-ttu
|
1587c851497e2a278d0d68e8c1ecea64514ff8fc
|
[
"CC-BY-4.0"
] | null | null | null |
00-anaconda-jupyter-setup.ipynb
|
kjaanson/jupyter-workshop-ttu
|
1587c851497e2a278d0d68e8c1ecea64514ff8fc
|
[
"CC-BY-4.0"
] | null | null | null |
00-anaconda-jupyter-setup.ipynb
|
kjaanson/jupyter-workshop-ttu
|
1587c851497e2a278d0d68e8c1ecea64514ff8fc
|
[
"CC-BY-4.0"
] | null | null | null | 37.392617 | 294 | 0.64453 |
[
[
[
"# Mis on Jupyter / Jupyter Notebook?\n\n * Interaktiivne Pythoni (ja teiste programeerimis keelte programeerimise keskkond).\n * Põhirõhk on lihtsal eksperimenteerimisel ja katsetamisel. Samuti sellel et hiljem jääks katsetustest jälg.\n * Lisaks Pythonile toetab ka muid programeerimiskeeli mida võiks vaja minna andmeanalüüsis...",
"_____no_output_____"
],
[
"# Mida proovin tutvustada?\n * [Jupyter notebook](http://jupyter.org) keskkonda üldiselt\n * (Üli)natukene Pythonit\n * [Pandas](http://pandas.pydata.org/) teeki andmeanalüüsiks Pythonis/Jupyter notebookis.",
"_____no_output_____"
],
[
"# Mida installida?\nKõige lihtsam võimalus Pythoni ja Jupyteri installimiseks on tõmmata Anaconda Pythoni installer. Sellega tulevad kaasa kõik vajalikud teegid ja programmid (Jupyter notebook ja Pandas).\n\n * [Anaconda](https://www.continuum.io/downloads) - Kindlasti tõmmata alla Python 3.6 Anaconda installer!",
"_____no_output_____"
],
[
"# Jupyter notebook-i tutvustus",
"_____no_output_____"
],
[
" \n*[Jupyter vihik näidis Zika viiruse RNASeq-i analüüsiga](https://github.com/MaayanLab/Zika-RNAseq-Pipeline/blob/master/Zika.ipynb)*\n*Artikkel [Wang et al. 2016](https://f1000research.com/articles/5-1574/v1)*\n",
"_____no_output_____"
],
[
"### Jupyter notebook\n\nJupyter-i Notebooki saab käivitada Start menüüst otsides seda nime järgi.\n\n* Käivitub Jupyteri Notebook-i server lokaalses masinas.\n* Automaatselt avatakse browseris ka Jupyteri Notebook ise.\n\nTavaliselt avaneb Jupyter-i Notebook kõigepealt kasutaja kodukataloogis. Töö jaoks saab luua paremalt `New` menüüst eraldi kataloogi endale.`\n\n  \n *Jupyter-i vihikute töökataloog*",
"_____no_output_____"
],
[
"Et luua uus vihik vali paremalt ülevalt \"New\" menüüst \"Python 3\"\n\n  \n *Jupyter-i vihikute töökataloog*",
"_____no_output_____"
],
[
"Uus tühi vihik näeb välja selline. Vihik on jaotatud erinevateks koodi ja Markdown-i teksti lahtriteks. Vastavalt lahtrile Jupyter kas jooksutab koodi ja kuvab selle tulemusi või siis muudab Markdownis kirjutatu browseris vormindatud tekstiks. \n\n  \n *Uus tühi Jupyter-i Pythoni vihik*",
"_____no_output_____"
],
[
"\"Help\" menüü all on abimaterjalid nii Jupyter notebook-i enda kui ka erinevate Pythoni teekide jaoks (Scipy, Pandas jne.)\n\n  \n *Abimaterjalid*",
"_____no_output_____"
],
[
"## Töötamine Jupyter-i vihikuga\n\nTöö Jupyteri vihikus käib lahtri (*cell*) kaupa. Lahtreid võib olla mitut tüüpi. Tähtsamad neist on:\n * Koodi lahter (*Code cell*) - Nendesse lahtritesse kirjutataks Pythoni koodi mida siis hiljem saab analüüsi kordamisel uuesti läbi jooksutada lahter lahtri kaupa.\n * Markdown lahter (*Markdown cell*) - Lahtrid kuhu saab kirjutada Markdown vormingus teksti et oma koodi/analüüsi mõtestada.",
"_____no_output_____"
],
[
"## Koodi lahter (*Code cell*)\n \n *Koodi lahter*",
"_____no_output_____"
],
[
"Koodi lahtrisse kirjutatud koodi jooksutamiseks tuleb aktiivses lahtris vajutada `Shift-Enter`. Peale seda kuvatakse selle alla jooksutatud koodi väljund (kui seda on) ja tekitatakse uus koodi lahter. Teiste nupukombinatsioonidega saab koodi lahtrit lihtsalt jooksutada (`Ctrl-Enter`).\n * `Shift-Enter` - Jooksutab koodi ja loob selle alla uue lahtri\n * `Ctrl-Enter` - Jooksutab koodi kuid ei loo uut lahtrit\n\nNumber koodilahtri kõrval näitab jooksutatud koodi järjekorda. Liikudes mitme koodilahtri vahet ja katsetades asju on selle järgi hea vaadata millist koodi on juba jooksutatud.\n\n  \n *Jooksutatud koodi lahter*",
"_____no_output_____"
],
[
"Number koodilahtri kõrval näitab jooksutatud koodi järjekorda. Liikudes mitme koodilahtri vahet ja katsetades asju on selle järgi hea vaadata millist koodi on juba jooksutatud.\n\n  \n *Jooksutatud koodi lahtrid*",
"_____no_output_____"
],
[
"## Teksti lahter (*Markdown* cell)\n\n*Markdown* on teksti vormindamise keel mis on samal ajal ka lihtsalt loetav. Et muuta lahtrit *Markdown* lahtriks tuleb see valida \"Cell Type\" menüüst.\n\n  \n *Markdown lahter teksti kirjutamisel*",
"_____no_output_____"
],
[
"Et kuvada kirjutatud Markdown koodi siis vajutada jällegi `Shift-Enter`\n\n  \n *Kuvatud Markdowni lahter*",
"_____no_output_____"
],
[
"## Markdown-i kirjutamine\n\nTäpsema juhendi Markdown-is vormindamise jaoks saab leida lingilt https://help.github.com/articles/basic-writing-and-formatting-syntax/\n\nNäiteks teksti stiilide muutmine käib nii:\n\n\n\nNimekirjade tegemine käib nii:\n\n",
"_____no_output_____"
],
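[
"In case the screenshots above do not render, here is a tiny inline example (my addition, not part of the original notebook): `**bold**` renders as **bold**, `*italic*` renders as *italic*, and lists are made by starting lines with `-` or `1.`:\n\n- first item\n- second item",
"_____no_output_____"
],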
[
"Markdown-iga on võimalik sisestada ka lihtsamaid tabeleid. Tabelite tegemiseks läheb vaja tekst paigutada `|` ja `+` sümbolite vahele.",
"_____no_output_____"
],
[
"| Pealkiri | Teine pealkiri|\n| ------------- | ------------- |\n| Sisu | Sisu |\n| Sisu | Sisu |",
"_____no_output_____"
],
[
"Põhjalikuma juhendi leiab https://help.github.com/articles/organizing-information-with-tables/ ",
"_____no_output_____"
],
[
"## MathJax-iga valemite kirjutamine",
"_____no_output_____"
],
[
"Endale märkmeks linke siia et hiljem vihikut täiendada:\n * https://stackoverflow.com/questions/13208286/how-to-write-latex-in-ipython-notebook\n * http://jupyter-notebook.readthedocs.io/en/latest/examples/Notebook/Typesetting%20Equations.html\n * http://data-blog.udacity.com/posts/2016/10/latex-primer/",
"_____no_output_____"
],
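[
"A minimal example (my addition, since this section was left as a placeholder): inline math goes between single dollar signs, e.g. `$E = mc^2$` renders as $E = mc^2$, and display math goes between double dollar signs:\n\n```\n$$\n\\\\sigma = \\\\sqrt{\\\\frac{1}{N}\\\\sum_{i=1}^{N}(x_i - \\\\mu)^2}\n$$\n```\n\nwhich renders as\n\n$$\n\\\\sigma = \\\\sqrt{\\\\frac{1}{N}\\\\sum_{i=1}^{N}(x_i - \\\\mu)^2}\n$$",
"_____no_output_____"
],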
[
"## Kokkuvõte\n\nSelliselt koodi ja teksti kirjutades on väga mugav üles ehitada analüüsi ja seda samal ajal dokumenteerida:\n\n- On lihtsasti võimalik kirjutada ja muuta koodi samal ajal eksperimenteerides sellega.\n- Markdowni kasutamine lubab andmeid, koodi ja analüüsi järeldusi kirja panna ja annoteerida et hilisem lugemine oleks arusaadavam (nii endal kui teistel).\n- Saab kaasa panna pilte, mis on tehtud analüüsi käigus.\n- Hiljem saab kirjutatud vihikut eksportida teistesse formaatidesse (vt `File->\"Download as\"`).\n- Python ei ole ainus keel millega vihikuid saab kirjutada. Võimalik on see näiteks ka R-iga (ja paljude teiste keeltega).",
"_____no_output_____"
],
[
"### Veel näpunäiteid\n\n* Tervet vihikut otsast peale saab jooksutada menüüst `Kernel->Restart & Run All`\n* Koodi lahtrites tasub kirjutada nii, et see on ilusti ülevalt alla järjekorras. On väga lihtne juhtuma et eksperimenteerides hakkad sa kirjutama koodi vales järjekorras ja hiljem vihik ei jookse ilusti.\n* Lahtrite muutmisel on kaks erinevat olekut:\n\n * `Edit mode` - Selle jooksul kirjutad sa tavaliselt lahtrisse koodi või teksti\n * `Command mode` - Vajutades `Esc` nuppu minnakse aktiivses lahtris mode-i kus saab manipuleerida lahtritega kasutades erinevaid nupukombinatsioone.\n \n \n* Väga kasulikuks kiiremaks tööks tulevad erinevad Shortcut-id. Neid kõiki näeb menüüst `Help->Keyboard Shortcuts`. Mõned põhilisemad on:\n * `Shift-Enter` - Jooksutab lahtri ja liigub järgmisesse lahtrisse\n * `Ctrl-Enter` - Joosutab lahtri kuid jääb sama lahtri peale\n * `Alt-enter` - Jooksutab lahtri ja loob uue tühja lahtri selle alla\n * `Y` - Muudab lahtri Koodi lahtriks\n * `M` - Muudab lahtri Markdown lahtriks\n * `A` - Lisa lahter olemasoleva alla\n * `B` - Lisa lahter olemasoleva kohale\n * `D, D` - Kustuta parasjagu aktiivne lahter\n",
"_____no_output_____"
],
[
"# Viited\n\n1. [Jupyter](http://jupyter.org/)\n2. [Anaconda Pythoni allalaadimine](https://www.anaconda.com/download/)\n3. [Gallery of Interesting Jupyter Notebooks](https://github.com/jupyter/jupyter/wiki/A-gallery-of-interesting-Jupyter-Notebooks#pandas-for-data-analysis)\n4. [Jupyter vihik näidis Zika viiruse RNASeq-i analüüsiga](https://github.com/MaayanLab/Zika-RNAseq-Pipeline/blob/master/Zika.ipynb)\n5. [Markdown teksti kirjutamise süntaks](https://help.github.com/articles/basic-writing-and-formatting-syntax/)\n6. [Nature-i artikkel IPythonist/Jupyter-ist](http://www.nature.com/news/interactive-notebooks-sharing-the-code-1.16261)",
"_____no_output_____"
]
]
] |
[
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
4a0627c4ac4f6ff298deab789eb933bf053c4f4a
| 16,821 |
ipynb
|
Jupyter Notebook
|
2_0_pre_trained_networks.ipynb
|
JSJeong-me/KOSA-Pytorch
|
a6974225ee71ba1dfc7c374c51204e910dc9fdab
|
[
"MIT"
] | 2 |
2021-05-25T08:52:07.000Z
|
2021-08-13T23:49:42.000Z
|
2_0_pre_trained_networks.ipynb
|
JSJeong-me/KOSA-Pytorch
|
a6974225ee71ba1dfc7c374c51204e910dc9fdab
|
[
"MIT"
] | null | null | null |
2_0_pre_trained_networks.ipynb
|
JSJeong-me/KOSA-Pytorch
|
a6974225ee71ba1dfc7c374c51204e910dc9fdab
|
[
"MIT"
] | 2 |
2021-05-24T00:49:45.000Z
|
2021-06-11T01:30:12.000Z
| 28.704778 | 244 | 0.464122 |
[
[
[
"<a href=\"https://colab.research.google.com/github/JSJeong-me/KOSA-Pytorch/blob/main/2_0_pre_trained_networks.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"from torchvision import models",
"_____no_output_____"
],
[
"dir(models)",
"_____no_output_____"
],
[
"alexnet = models.AlexNet()",
"_____no_output_____"
],
[
"resnet = models.resnet101(pretrained=True)",
"Downloading: \"https://download.pytorch.org/models/resnet101-5d3b4d8f.pth\" to /root/.cache/torch/hub/checkpoints/resnet101-5d3b4d8f.pth\n"
],
[
"resnet",
"_____no_output_____"
],
[
"from torchvision import transforms\npreprocess = transforms.Compose([\n transforms.Resize(256),\n transforms.CenterCrop(224),\n transforms.ToTensor(),\n transforms.Normalize(\n mean=[0.485, 0.456, 0.406],\n std=[0.229, 0.224, 0.225]\n )])",
"_____no_output_____"
],
[
"from PIL import Image\nimg = Image.open(\"./bee1.jpg\")",
"_____no_output_____"
],
[
"img",
"_____no_output_____"
],
[
"img_t = preprocess(img)",
"_____no_output_____"
],
[
"img_t",
"_____no_output_____"
],
[
"import torch",
"_____no_output_____"
],
[
"batch_t = torch.unsqueeze(img_t, 0)",
"_____no_output_____"
],
[
"resnet.eval()",
"_____no_output_____"
],
[
"out = resnet(batch_t)\nout",
"_____no_output_____"
],
[
"with open('./imagenet_classes.txt') as f:\n labels = [line.strip() for line in f.readlines()]",
"_____no_output_____"
],
[
"_, index = torch.max(out, 1)",
"_____no_output_____"
],
[
"percentage = torch.nn.functional.softmax(out, dim=1)[0] * 100\nlabels[index[0]], percentage[index[0]].item()",
"_____no_output_____"
],
[
"_, indices = torch.sort(out, descending=True)\n[(labels[idx], percentage[idx].item()) for idx in indices[0][:5]]",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a06380f9011335d233e4bf8904d59e60eeaea4c
| 16,365 |
ipynb
|
Jupyter Notebook
|
Streaming - Popular movies by title.ipynb
|
gervarela/spark-101
|
1d93c227ddb6b57174c9c591ff0a9108954fe75c
|
[
"MIT"
] | 1 |
2020-02-18T09:25:17.000Z
|
2020-02-18T09:25:17.000Z
|
Streaming - Popular movies by title.ipynb
|
gervarela/spark-101
|
1d93c227ddb6b57174c9c591ff0a9108954fe75c
|
[
"MIT"
] | null | null | null |
Streaming - Popular movies by title.ipynb
|
gervarela/spark-101
|
1d93c227ddb6b57174c9c591ff0a9108954fe75c
|
[
"MIT"
] | null | null | null | 52.284345 | 1,562 | 0.584296 |
[
[
[
"# Spark on Tour\n## Ejemplo de procesamiento de datos en streaming para generar un dashboard en NRT\n\nEn este notebook vamos a ver un ejemplo completo de como se podría utilizar la API de streaming estructurado de Spark para procesar un stream de eventos de puntuación en vivo, en el tiempo real, y generar como salida un conjunto de estadísticas, o valores agregados, con los que poder construir un dashboard de visualización y monitorización en tiempo real.\n\nParticularmente vamos a simular una plataforma de vídeo bajo demanda en la que los usuarios están viendo pelítculas y puntuándolas. Tomaremos los eventos de puntuación que van entrando en streaming, y genrar, en tiempo real, estadísticas de visualización agredas por género, de forma que podamos monitorizar qué películas son las más populates en este momento.",
"_____no_output_____"
],
[
"### Importamos librerías, definimos esquemas e inicializamos la sesión Spark.",
"_____no_output_____"
]
],
[
[
"import findspark\nfindspark.init()\n\nimport pyspark\nfrom pyspark.sql.types import *\nfrom pyspark.sql import SparkSession\nimport pyspark.sql.functions as f\n\nfrom IPython.display import clear_output\nimport plotly.express as px",
"_____no_output_____"
],
[
"ratingSchema = StructType([\n StructField(\"user\", IntegerType()),\n StructField(\"movie\", IntegerType()),\n StructField(\"rating\", FloatType())\n])\n\nmovieSchema = StructType([\n StructField(\"movie\", IntegerType()),\n StructField(\"title\", StringType()),\n StructField(\"genres\", StringType())\n])",
"_____no_output_____"
],
[
"def foreach_batch_function(df, epoch_id):\n mostPopularMovies = df.limit(10).toPandas()\n clear_output()\n print(mostPopularMovies)",
"_____no_output_____"
],
[
"#setup spark session\nsparkSession = (SparkSession.builder\n .appName(\"Movie ratings streaming\")\n .master(\"local[*]\")\n .config(\"spark.scheduler.mode\", \"FAIR\")\n .getOrCreate())\nsparkSession.sparkContext.setLogLevel(\"ERROR\")",
"_____no_output_____"
]
],
[
[
"### Leemos el dataset de películas",
"_____no_output_____"
]
],
[
[
"movies = sparkSession.read.csv(\"/tmp/movielens/movies.csv\", schema=movieSchema, header=True)\nmovies.show()",
"+-----+--------------------+--------------------+\n|movie| title| genres|\n+-----+--------------------+--------------------+\n| 1| Toy Story (1995)|Adventure|Animati...|\n| 2| Jumanji (1995)|Adventure|Childre...|\n| 3|Grumpier Old Men ...| Comedy|Romance|\n| 4|Waiting to Exhale...|Comedy|Drama|Romance|\n| 5|Father of the Bri...| Comedy|\n| 6| Heat (1995)|Action|Crime|Thri...|\n| 7| Sabrina (1995)| Comedy|Romance|\n| 8| Tom and Huck (1995)| Adventure|Children|\n| 9| Sudden Death (1995)| Action|\n| 10| GoldenEye (1995)|Action|Adventure|...|\n| 11|American Presiden...|Comedy|Drama|Romance|\n| 12|Dracula: Dead and...| Comedy|Horror|\n| 13| Balto (1995)|Adventure|Animati...|\n| 14| Nixon (1995)| Drama|\n| 15|Cutthroat Island ...|Action|Adventure|...|\n| 16| Casino (1995)| Crime|Drama|\n| 17|Sense and Sensibi...| Drama|Romance|\n| 18| Four Rooms (1995)| Comedy|\n| 19|Ace Ventura: When...| Comedy|\n| 20| Money Train (1995)|Action|Comedy|Cri...|\n+-----+--------------------+--------------------+\nonly showing top 20 rows\n\n"
]
],
[
[
"### Inicializamos la carga del stream de puntuaciones desde Apache Kafka",
"_____no_output_____"
]
],
[
[
"dataset = (sparkSession\n .readStream\n .format(\"kafka\")\n .option(\"kafka.bootstrap.servers\", \"localhost:29092\")\n .option(\"subscribe\", \"ratings\")\n .load())\ndataset = dataset.selectExpr(\"CAST(value AS STRING)\")\ndataset = dataset.select(f.from_json(f.col(\"value\"), ratingSchema).alias(\"data\")).select(\"data.*\")",
"_____no_output_____"
]
],
[
[
"### Agrupamos por película y sumamos visualizaciones y media de puntuación",
"_____no_output_____"
]
],
[
[
"dataset = dataset.select(\"movie\", \"rating\") \\\n .groupBy(\"movie\") \\\n .agg(f.count(\"rating\").alias(\"num_ratings\"), f.avg(\"rating\").alias(\"avg_rating\"))",
"_____no_output_____"
]
],
[
[
"### Mezclamos con el dataset de películas para obtener el título",
"_____no_output_____"
]
],
[
[
"dataset = dataset.join(movies, dataset[\"movie\"] == movies[\"movie\"], \"left_outer\") \\\n .drop(movies[\"movie\"]) \\\n .drop(\"genres\")",
"_____no_output_____"
]
],
[
[
"### Ordenamos la salida por número de votaciones (visualizaciones)",
"_____no_output_____"
]
],
[
[
"dataset = dataset.select(\"movie\", \"title\", \"avg_rating\", \"num_ratings\") \\\n .sort(f.desc(\"num_ratings\"))",
"_____no_output_____"
]
],
[
[
"### Ejecutamos el procesamiento en streaming",
"_____no_output_____"
]
],
[
[
"query = dataset \\\n .writeStream \\\n .outputMode(\"complete\") \\\n .format(\"console\") \\\n .trigger(processingTime='5 seconds') \\\n .foreachBatch(foreach_batch_function) \\\n .start()",
"_____no_output_____"
],
[
"query.explain()\nquery.awaitTermination()",
" movie title avg_rating \\\n0 2628 Star Wars: Episode I - The Phantom Menace (1999) 3.285714 \n1 1580 Men in Black (a.k.a. MIB) (1997) 3.666667 \n2 1721 Titanic (1997) 3.833333 \n3 296 Pulp Fiction (1994) 4.333333 \n4 34 Babe (1995) 4.333333 \n5 1210 Star Wars: Episode VI - Return of the Jedi (1983) 3.500000 \n6 380 True Lies (1994) 3.333333 \n7 356 Forrest Gump (1994) 3.833333 \n8 1923 There's Something About Mary (1998) 4.333333 \n9 1784 As Good as It Gets (1997) 3.166667 \n\n num_ratings \n0 7 \n1 6 \n2 6 \n3 6 \n4 6 \n5 6 \n6 6 \n7 6 \n8 6 \n9 6 \n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4a0647fe3a987fa424f3cb5dd024e90236765a06
| 13,689 |
ipynb
|
Jupyter Notebook
|
notebook/Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb
|
kazewong/nrpytutorial
|
cc511325f37f01284b2b83584beb2a452556b3fb
|
[
"BSD-2-Clause"
] | null | null | null |
notebook/Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb
|
kazewong/nrpytutorial
|
cc511325f37f01284b2b83584beb2a452556b3fb
|
[
"BSD-2-Clause"
] | null | null | null |
notebook/Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb
|
kazewong/nrpytutorial
|
cc511325f37f01284b2b83584beb2a452556b3fb
|
[
"BSD-2-Clause"
] | null | null | null | 45.936242 | 463 | 0.588867 |
[
[
[
"<script async src=\"https://www.googletagmanager.com/gtag/js?id=UA-59152712-8\"></script>\n<script>\n window.dataLayer = window.dataLayer || [];\n function gtag(){dataLayer.push(arguments);}\n gtag('js', new Date());\n\n gtag('config', 'UA-59152712-8');\n</script>\n\n# Enforce conformal 3-metric $\\det{\\bar{\\gamma}_{ij}}=\\det{\\hat{\\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658))\n\n## Author: Zach Etienne\n### Formatting improvements courtesy Brandon Clark\n\n[comment]: <> (Abstract: TODO)\n\n**Module Status:** <font color='green'><b> Validated </b></font>\n\n**Validation Notes:** This tutorial notebook has been confirmed to be self-consistent with its corresponding NRPy+ module, as documented [below](#code_validation). In addition, its output has been \n\n### NRPy+ Source Code for this module: [BSSN/Enforce_Detgammabar_Constraint.py](../edit/BSSN/Enforce_Detgammabar_Constraint.py)\n\n## Introduction:\n[Brown](https://arxiv.org/abs/0902.3652)'s covariant Lagrangian formulation of BSSN, which we adopt, requires that $\\partial_t \\bar{\\gamma} = 0$, where $\\bar{\\gamma}=\\det \\bar{\\gamma}_{ij}$. Further, all initial data we choose satisfies $\\bar{\\gamma}=\\hat{\\gamma}$. \n\nHowever, numerical errors will cause $\\bar{\\gamma}$ to deviate from a constant in time. This actually disrupts the hyperbolicity of the PDEs, so to cure this, we adjust $\\bar{\\gamma}_{ij}$ at the end of each Runge-Kutta timestep, so that its determinant satisfies $\\bar{\\gamma}=\\hat{\\gamma}$ at all times. We adopt the following, rather standard prescription (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)):\n\n$$\n\\bar{\\gamma}_{ij} \\to \\left(\\frac{\\hat{\\gamma}}{\\bar{\\gamma}}\\right)^{1/3} \\bar{\\gamma}_{ij}.\n$$",
"_____no_output_____"
],
[
"<a id='toc'></a>\n\n# Table of Contents\n$$\\label{toc}$$\n\nThis notebook is organized as follows:\n\n1. [Step 1](#initializenrpy): Initialize needed NRPy+ modules\n1. [Step 2](#enforcegammaconstraint): Enforce the $\\det{\\bar{\\gamma}_{ij}}=\\det{\\hat{\\gamma}_{ij}}$ constraint\n1. [Step 3](#code_validation): Code Validation against `BSSN.Enforce_Detgammabar_Constraint` NRPy+ module\n1. [Step 4](#latex_pdf_output): Output this notebook to $\\LaTeX$-formatted PDF file",
"_____no_output_____"
],
[
"<a id='initializenrpy'></a>\n\n# Step 1: Initialize needed NRPy+ modules \\[Back to [top](#toc)\\]\n$$\\label{initializenrpy}$$",
"_____no_output_____"
]
],
[
[
"# Step P1: import all needed modules from NRPy+:\nfrom outputC import *\nimport NRPy_param_funcs as par\nimport grid as gri\nimport loop as lp\nimport indexedexp as ixp\nimport finite_difference as fin\nimport reference_metric as rfm\nimport BSSN.BSSN_quantities as Bq\n\n# Set spatial dimension (must be 3 for BSSN)\nDIM = 3\npar.set_parval_from_str(\"grid::DIM\",DIM)\n\n# Then we set the coordinate system for the numerical grid\npar.set_parval_from_str(\"reference_metric::CoordSystem\",\"SinhSpherical\")\nrfm.reference_metric() # Create ReU, ReDD needed for rescaling B-L initial data, generating BSSN RHSs, etc.",
"_____no_output_____"
]
],
[
[
"<a id='enforcegammaconstraint'></a>\n\n# Step 2: Enforce the $\\det{\\bar{\\gamma}_{ij}}=\\det{\\hat{\\gamma}_{ij}}$ constraint \\[Back to [top](#toc)\\]\n$$\\label{enforcegammaconstraint}$$\n\nRecall that we wish to make the replacement:\n$$\n\\bar{\\gamma}_{ij} \\to \\left(\\frac{\\hat{\\gamma}}{\\bar{\\gamma}}\\right)^{1/3} \\bar{\\gamma}_{ij}.\n$$\nNotice the expression on the right is guaranteed to have determinant equal to $\\hat{\\gamma}$.\n\n$\\bar{\\gamma}_{ij}$ is not a gridfunction, so we must rewrite the above in terms of $h_{ij}$:\n\\begin{align}\n\\left(\\frac{\\hat{\\gamma}}{\\bar{\\gamma}}\\right)^{1/3} \\bar{\\gamma}_{ij} &= \\bar{\\gamma}'_{ij} \\\\\n&= \\hat{\\gamma}_{ij} + \\varepsilon'_{ij} \\\\\n&= \\hat{\\gamma}_{ij} + \\text{Re[i][j]} h'_{ij} \\\\\n\\implies h'_{ij} &= \\left[\\left(\\frac{\\hat{\\gamma}}{\\bar{\\gamma}}\\right)^{1/3} \\bar{\\gamma}_{ij} - \\hat{\\gamma}_{ij}\\right] / \\text{Re[i][j]} \\\\\n&= \\left(\\frac{\\hat{\\gamma}}{\\bar{\\gamma}}\\right)^{1/3} \\frac{\\bar{\\gamma}_{ij}}{\\text{Re[i][j]}} - \\delta_{ij}\\\\\n&= \\left(\\frac{\\hat{\\gamma}}{\\bar{\\gamma}}\\right)^{1/3} \\frac{\\hat{\\gamma}_{ij} + \\text{Re[i][j]} h_{ij}}{\\text{Re[i][j]}} - \\delta_{ij}\\\\\n&= \\left(\\frac{\\hat{\\gamma}}{\\bar{\\gamma}}\\right)^{1/3} \\left(\\delta_{ij} + h_{ij}\\right) - \\delta_{ij}\n\\end{align}\n\nUpon inspection, when expressing $\\hat{\\gamma}$ SymPy generates expressions like `(xx0)^{4/3} = pow(xx0, 4./3.)`, which can yield $\\text{NaN}$s when `xx0 < 0` (i.e., in the `xx0` ghost zones). To prevent this, we know that $\\hat{\\gamma}\\ge 0$ for all reasonable coordinate systems, so we make the replacement $\\hat{\\gamma}\\to |\\hat{\\gamma}|$ below:",
"_____no_output_____"
]
],
[
[
"# We will need the h_{ij} quantities defined within BSSN_RHSs \n# below when we enforce the gammahat=gammabar constraint\n# Step 1: All barred quantities are defined in terms of BSSN rescaled gridfunctions,\n# which we declare here in case they haven't yet been declared elsewhere.\n\nBq.declare_BSSN_gridfunctions_if_not_declared_already()\nhDD = Bq.hDD\nBq.BSSN_basic_tensors()\ngammabarDD = Bq.gammabarDD\n\n# First define the Kronecker delta:\nKroneckerDeltaDD = ixp.zerorank2()\nfor i in range(DIM):\n KroneckerDeltaDD[i][i] = sp.sympify(1)\n\n# The detgammabar in BSSN_RHSs is set to detgammahat when BSSN_RHSs::detgbarOverdetghat_equals_one=True (default),\n# so we manually compute it here:\ndummygammabarUU, detgammabar = ixp.symm_matrix_inverter3x3(gammabarDD)\n\n# Next apply the constraint enforcement equation above.\nhprimeDD = ixp.zerorank2()\nfor i in range(DIM):\n for j in range(DIM):\n hprimeDD[i][j] = \\\n (sp.Abs(rfm.detgammahat)/detgammabar)**(sp.Rational(1,3)) * (KroneckerDeltaDD[i][j] + hDD[i][j]) \\\n - KroneckerDeltaDD[i][j]\n\nenforce_detg_constraint_vars = [ \\\n lhrh(lhs=gri.gfaccess(\"in_gfs\",\"hDD00\"),rhs=hprimeDD[0][0]),\n lhrh(lhs=gri.gfaccess(\"in_gfs\",\"hDD01\"),rhs=hprimeDD[0][1]),\n lhrh(lhs=gri.gfaccess(\"in_gfs\",\"hDD02\"),rhs=hprimeDD[0][2]),\n lhrh(lhs=gri.gfaccess(\"in_gfs\",\"hDD11\"),rhs=hprimeDD[1][1]),\n lhrh(lhs=gri.gfaccess(\"in_gfs\",\"hDD12\"),rhs=hprimeDD[1][2]),\n lhrh(lhs=gri.gfaccess(\"in_gfs\",\"hDD22\"),rhs=hprimeDD[2][2]) ]\n\nenforce_gammadet_string = fin.FD_outputC(\"returnstring\",enforce_detg_constraint_vars,\n params=\"outCverbose=False,preindent=0,includebraces=False\")\n\nwith open(\"BSSN/enforce_detgammabar_constraint.h\", \"w\") as file:\n indent = \" \"\n file.write(\"void enforce_detgammabar_constraint(const int Nxx_plus_2NGHOSTS[3],REAL *xx[3], REAL *in_gfs) {\\n\\n\")\n file.write(lp.loop([\"i2\",\"i1\",\"i0\"],[\"0\",\"0\",\"0\"],\n [\"Nxx_plus_2NGHOSTS[2]\",\"Nxx_plus_2NGHOSTS[1]\",\"Nxx_plus_2NGHOSTS[0]\"],\n [\"1\",\"1\",\"1\"],[\"#pragma omp parallel for\",\n \" const REAL xx2 = xx[2][i2];\",\n \" const REAL xx1 = xx[1][i1];\"],\"\",\n \"const REAL xx0 = xx[0][i0];\\n\"+enforce_gammadet_string))\n file.write(\"}\\n\")\nprint(\"Output C implementation of det(gammabar) constraint to file BSSN/enforce_detgammabar_constraint.h\")",
"Output C implementation of det(gammabar) constraint to file BSSN/enforce_detgammabar_constraint.h\n"
]
],
[
[
"<a id='code_validation'></a>\n\n# Step 3: Code Validation against `BSSN.Enforce_Detgammabar_Constraint` NRPy+ module \\[Back to [top](#toc)\\]\n$$\\label{code_validation}$$\n\nHere, as a code validation check, we verify agreement in the C code output between\n\n1. this tutorial and \n2. the NRPy+ [BSSN.Enforce_Detgammabar_Constraint](../edit/BSSN/Enforce_Detgammabar_Constraint.py) module.",
"_____no_output_____"
]
],
[
[
"!mv BSSN/enforce_detgammabar_constraint.h BSSN/enforce_detgammabar_constraint.h-validation\n\ngri.glb_gridfcs_list = []\n\nimport BSSN.Enforce_Detgammabar_Constraint as EGC\nEGC.output_Enforce_Detgammabar_Constraint_Ccode()\n\nimport filecmp\nfor file in [\"BSSN/enforce_detgammabar_constraint.h\"]:\n if filecmp.cmp(file,file+\"-validation\") == False:\n print(\"VALIDATION TEST FAILED on file: \"+file+\".\")\n exit(1)\n else:\n print(\"Validation test PASSED on file: \"+file)",
"Output C implementation of det(gammabar) constraint to file BSSN/enforce_detgammabar_constraint.h\nValidation test PASSED on file: BSSN/enforce_detgammabar_constraint.h\n"
]
],
[
[
"<a id='latex_pdf_output'></a>\n\n# Step 4: Output this notebook to $\\LaTeX$-formatted PDF file \\[Back to [top](#toc)\\]\n$$\\label{latex_pdf_output}$$\n\nThe following code cell converts this Jupyter notebook into a proper, clickable $\\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename\n[Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.pdf](Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)",
"_____no_output_____"
]
],
[
[
"!jupyter nbconvert --to latex --template latex_nrpy_style.tplx Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb\n!pdflatex -interaction=batchmode Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.tex\n!pdflatex -interaction=batchmode Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.tex\n!pdflatex -interaction=batchmode Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.tex\n!rm -f Tut*.out Tut*.aux Tut*.log",
"[NbConvertApp] Converting notebook Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.ipynb to latex\n[NbConvertApp] Writing 41262 bytes to Tutorial-BSSN-Enforcing_Determinant_gammabar_equals_gammahat_Constraint.tex\nThis is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)\n restricted \\write18 enabled.\nentering extended mode\nThis is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)\n restricted \\write18 enabled.\nentering extended mode\nThis is pdfTeX, Version 3.14159265-2.6-1.40.18 (TeX Live 2017/Debian) (preloaded format=pdflatex)\n restricted \\write18 enabled.\nentering extended mode\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a069f4b0a3a06b2d93fe65facf97aab3f60596b
| 4,287 |
ipynb
|
Jupyter Notebook
|
Knowledge/Python/Not Posted Things/4. Data Wrangling/act_report.ipynb
|
leeh8911/BeSuperRepo
|
688891f9b8e6336e144f635b0df0337fdbde40ea
|
[
"MIT"
] | null | null | null |
Knowledge/Python/Not Posted Things/4. Data Wrangling/act_report.ipynb
|
leeh8911/BeSuperRepo
|
688891f9b8e6336e144f635b0df0337fdbde40ea
|
[
"MIT"
] | null | null | null |
Knowledge/Python/Not Posted Things/4. Data Wrangling/act_report.ipynb
|
leeh8911/BeSuperRepo
|
688891f9b8e6336e144f635b0df0337fdbde40ea
|
[
"MIT"
] | null | null | null | 29.363014 | 666 | 0.619781 |
[
[
[
"# Act Report\nreported by Sangwon Lee\n\n## What Is In `WeRateDogs`\nIn the world, there exist many kind of dogs. People love these dogs cuase so many reason(e.g. cute, loyal, reliable, and so on). I wondered what kind of dogs were getting more love. Finally, Given the `WeRateDogs` dataset. So, after wrangling based on this data, I want to see which dogs are getting more love.\n\n## What kind of dogs were more tweeted?\nIn `WeRateDogs`, very many kind of dogs. some tweet include golden-retriever, some tweet include pug. I wonder how many dogs there are in each species. Therefore I want to know what kind of dog species in each tweet images. However there is no species data in dataset, but there is image classification result. Image classification result have classified species and confidence of classifications. Therefore, if there is one dog in each image, the confidence can be regarded as the expected value of the number of dogs of each kind in the image. The figure below is the result of adding the confidence of each tweet's image prediction value for each species.\n",
"_____no_output_____"
],
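[
"# Hedged illustrative sketch (not part of the original report): one way the per-species\n# confidence totals behind the figure below could be computed with pandas. The file name and\n# column names ('p1' for the predicted species, 'p1_conf' for its confidence) follow the\n# common WeRateDogs image-predictions schema and are assumptions here, not taken from this report.\nimport pandas as pd\n\npredictions = pd.read_csv('image_predictions.tsv', sep='\\t')\nspecies_confidence = (predictions.groupby('p1')['p1_conf']\n                      .sum()\n                      .sort_values(ascending=False))\nspecies_confidence.head(10)",
"_____no_output_____"
],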
[
"",
"_____no_output_____"
],
[
"In this image, \n* In left image, there are very many golden retriever images in twitter\n* In right image, there are very many kind of dog species which sum of confidence under 1(66% dog species are less than 1 sum of confidence).\n* Only about 33% of the dogs actually have dogs. In particular, most of them are `golden retriever`.\n\n## What kind of dogs people like?",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"* Each tweet's favorite and retweet numbers have positive relation.\n* `golden retriever` is the most of tweeted dogs, but the most retweeted species is `labrador retriever` and the most favorite species is `french bulldog`.",
"_____no_output_____"
]
]
] |
[
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
4a06a16f4b4f781f70d4651f8ddf96afb69c94a5
| 56,002 |
ipynb
|
Jupyter Notebook
|
others/mc_integration.ipynb
|
mazhengcn/scientific-computing-with-python
|
f821b99bc08b1170472433ac095296fe6039875a
|
[
"MIT"
] | null | null | null |
others/mc_integration.ipynb
|
mazhengcn/scientific-computing-with-python
|
f821b99bc08b1170472433ac095296fe6039875a
|
[
"MIT"
] | null | null | null |
others/mc_integration.ipynb
|
mazhengcn/scientific-computing-with-python
|
f821b99bc08b1170472433ac095296fe6039875a
|
[
"MIT"
] | null | null | null | 82.720827 | 18,966 | 0.838791 |
[
[
[
"%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom typing import Callable",
"_____no_output_____"
],
[
"# Define some types\n\nFunc= Callable[..., np.float64]",
"_____no_output_____"
]
],
[
[
"# Monte Carlo Integration",
"_____no_output_____"
],
[
"$$\nI(f) = \\int_{\\Omega} f(x) \\, dx, \\quad x\\in\\mathbb{R}^d\n$$",
"_____no_output_____"
],
[
"## Numerical Integration\n\nAny numerical integration method (including Monte Carlo Method) can be written in the following form:\n\n$$\nI_N(f) \\approx \\sum_{i=1}^N A_i f(x_i)\n$$",
"_____no_output_____"
]
],
[
[
"# quads = (points, weights)\n# shape: ((N,d), (N,))\ndef integrate_test(f: Func, quads: tuple[np.ndarray, np.ndarray]):\n points, weights = quads\n assert points.shape[0] == weights.shape[0]\n \n num_points = points.shape[0]\n integration = 0.0 \n for i in range(num_points):\n integration += weights[i] * f(points[i])\n\n return integration",
"_____no_output_____"
],
[
"# quads = (points, weights)\n# shape: ((N,d), (N,))\ndef integrate(f: Func, quads: tuple[np.ndarray, np.ndarray]):\n points, weights = quads\n if isinstance(weights, np.ndarray):\n assert points.shape[0] == weights.shape[0]\n # weights: (N,)\n # f(points): (N,)\n integration = np.sum(weights * f(points))\n return integration",
"_____no_output_____"
]
],
[
[
"### Example 1\n\n$$\n\\int_0^{\\frac{\\pi}{2}} \\sin x\\, dx = 1,\n$$",
"_____no_output_____"
],
[
"## Determinastic Methods\n\nTrapezoidal formula (复合梯形法)\n\n$$\nI(f) \\approx \\{\\frac{1}{2}f(x_1) + \\sum_{i=2}^{N-1} f(x_i) + \\frac{1}{2}f(x_N)\\} \\times h\n$$",
"_____no_output_____"
]
],
[
[
"def f1(x: np.ndarray):\n return np.sin(x)",
"_____no_output_____"
],
[
"N = 10000\nh = 0.5 * np.pi / (N-1)\npoints = np.linspace(0, 0.5 * np.pi, N)\nweights = np.ones((N)) * h \nweights[0] = weights[-1] = 0.5 * h",
"_____no_output_____"
],
[
"%timeit I1 = integrate_test(f1, (points, weights))",
"21.2 ms ± 460 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n"
],
[
"I2 = integrate(f1, (points, weights))",
"_____no_output_____"
],
[
"I2",
"_____no_output_____"
],
[
"errs = []\n\nfor h in [0.5, 0.25, 0.1, 0.05, 0.02, 0.01]:\n N = int(0.5 * np.pi / h) + 1\n points = np.linspace(0, 0.5 * np.pi, N)\n weights = np.ones((N)) * h \n weights[0] = weights[-1] = 0.5 * h\n I = integrate_test(f1, (points, weights))\n errs.append(np.abs(I - 1.0))",
"_____no_output_____"
],
[
"plt.loglog([0.5, 0.25, 0.1, 0.05, 0.02, 0.01], errs)",
"_____no_output_____"
]
],
[
[
"### Gauss",
"_____no_output_____"
]
],
[
[
"N = 15\na, b = 0.0, np.pi/2\npoints, weights = np.polynomial.legendre.leggauss(N)\npoints = 0.5*(points + 1)*(b - a) + a\nweights = weights * 0.5 * (b - a)",
"_____no_output_____"
],
[
"Gauss_I = integrate(f1, (points, weights))",
"_____no_output_____"
],
[
"Gauss_I",
"_____no_output_____"
]
],
[
[
"### Stochastic Method (Monte Carlo)\n\n$$\nI_N(f) \\approx \\frac{\\pi}{2N}\\sum_{i=1}^N f(X_i), \\quad X_i \\mathcal{U}[0,\\pi/2]\n$$",
"_____no_output_____"
],
[
"$$\nX_{n+1} = a X_{n} + b (\\text{mod } m)\n$$",
"_____no_output_____"
]
],
[
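[
"# Hedged illustrative sketch (not part of the original notebook): a minimal linear\n# congruential generator implementing X_{n+1} = (a*X_n + b) mod m, scaled to [0, 1).\n# The constants a, b, m below are common textbook choices, used here only as an example;\n# in practice we rely on NumPy's generators, as in the cells that follow.\ndef lcg(x0, n, a=1664525, b=1013904223, m=2**32):\n    x = x0\n    samples = []\n    for _ in range(n):\n        x = (a * x + b) % m\n        samples.append(x / m)\n    return np.asarray(samples)\n\nlcg(x0=0, n=5)",
"_____no_output_____"
],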
[
"# seed X_0\nrng = np.random.default_rng(0)",
"_____no_output_____"
],
[
"# 向量化生成随机数\nrng.uniform(0.0, np.pi/2)",
"_____no_output_____"
],
[
"# seed\n# rng = np.random.default_rng(1)\n\nN = 10000\n# sample points\nrpoints = rng.uniform(0.0, np.pi/2, N)\nweights = np.pi/2/ N\n\nMC_I = integrate(f1, (rpoints, weights))",
"_____no_output_____"
],
[
"np.linspace(100, 100000, 20)",
"_____no_output_____"
],
[
"mc_errs = []\nns = []\nfor n in np.linspace(100, 100000, 20):\n int_n = int(n)\n ns.append(int_n)\n rpoints = rng.uniform(0.0, np.pi/2, int_n)\n weights = np.pi/2/n\n MC_I = integrate(f1, (rpoints, weights))\n mc_errs.append(np.abs(MC_I - 1.0))\n ",
"_____no_output_____"
],
[
"plt.loglog(ns, mc_errs, ns, 1.0 / np.sqrt(np.asarray(ns)))",
"_____no_output_____"
]
],
[
[
"### Example 2\n\n$$\nx, y \\in [-1, 1]\n$$\n\n$$\nf(x, y) = 1, \\quad \\text{if } x^2 + y^2 < 1\n$$\n\n$$\n\\int_{[0, 1]^2} f(x, y) \\, dxdy\n$$",
"_____no_output_____"
]
],
[
[
"# z: (2, N)\ndef f2(z):\n x = z[0]\n y = z[1]\n \n return (x**2 + y**2 < 1) * 1.0\n ",
"_____no_output_____"
],
[
"x = np.linspace(-1.0, 1.0, 100)\ny = np.linspace(-1.0, 1.0, 100)\nz = np.meshgrid(x, y)",
"_____no_output_____"
],
[
"Z = f2(np.asarray(z))",
"_____no_output_____"
],
[
"Z.shape",
"_____no_output_____"
],
[
"X, Y = z\nfig, ax = plt.subplots(figsize=(8,8))\nax.contourf(X, Y, Z)",
"_____no_output_____"
]
],
[
[
"### Monte Carlo",
"_____no_output_____"
]
],
[
[
"# seed\nrng = np.random.default_rng(1)",
"_____no_output_____"
],
[
"N = 10000000\n# sample points\nrpoints = rng.uniform(-1.0, 1.0, (2, N))\nweights = 4.0 / N\n\nMC_I_2d = integrate(f2, (rpoints, weights))",
"_____no_output_____"
],
[
"MC_I_2d",
"_____no_output_____"
]
],
[
[
"### 中矩形公式",
"_____no_output_____"
]
],
[
[
"nx = ny = 1000\nh = 2.0 / nx\nx = np.arange(-1.0 + 0.5*h, 1.0, h)\ny = np.arange(-1.0 + 0.5*h, 1.0, h)\n# (2, N=nx*xy)\nxy = np.asarray(np.meshgrid(x, y))\npoints = xy.reshape(2, -1)\nweights = h**2",
"_____no_output_____"
],
[
"I3 = integrate(f2, (points, weights))",
"_____no_output_____"
],
[
"I3",
"_____no_output_____"
]
],
[
[
"### Example 3\n\n$$\n\\int_{[0, 1]^d} e^{-x} \\, dx\n$$",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
4a06ad33918ebf9516473402bd2ebc16c4421470
| 781,492 |
ipynb
|
Jupyter Notebook
|
5. Unsupervised Learning/GMM Clustering and Cluster Validation Lab.ipynb
|
Arwa-Ibrahim/ML_Nano_Projects
|
7f335b352cc4b335ae97aea2bf962188bc454204
|
[
"MIT"
] | null | null | null |
5. Unsupervised Learning/GMM Clustering and Cluster Validation Lab.ipynb
|
Arwa-Ibrahim/ML_Nano_Projects
|
7f335b352cc4b335ae97aea2bf962188bc454204
|
[
"MIT"
] | null | null | null |
5. Unsupervised Learning/GMM Clustering and Cluster Validation Lab.ipynb
|
Arwa-Ibrahim/ML_Nano_Projects
|
7f335b352cc4b335ae97aea2bf962188bc454204
|
[
"MIT"
] | null | null | null | 1,621.352697 | 203,472 | 0.95942 |
[
[
[
"## 1. KMeans vs GMM on a Generated Dataset\n\nIn the first example we'll look at, we'll generate a Gaussian dataset and attempt to cluster it and see if the clustering matches the original labels of the generated dataset.\n\nWe can use sklearn's [make_blobs](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_blobs.html) function to create a dataset of Gaussian blobs:",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn import cluster, datasets, mixture\n\n%matplotlib inline\n\nn_samples = 1000\n\nvaried = datasets.make_blobs(n_samples=n_samples,\n cluster_std=[5, 1, 0.5],\n random_state=3)\nX, y = varied[0], varied[1]\n\nplt.figure( figsize=(16,12))\nplt.scatter(X[:,0], X[:,1], c=y, edgecolor='black', lw=1.5, s=100, cmap=plt.get_cmap('viridis'))\nplt.show()",
"_____no_output_____"
]
],
[
[
"Now when we hand off this dataset to the clustering algorithms, we obviously will not pass in the labels. So let's start with KMeans and see how it does with the dataset. WIll it be to produce clusters that match the original labels?",
"_____no_output_____"
]
],
[
[
"from sklearn.cluster import KMeans\n\nkmeans = KMeans(n_clusters=3)\npred = kmeans.fit_predict(X)",
"_____no_output_____"
],
[
"plt.figure( figsize=(16,12))\nplt.scatter(X[:,0], X[:,1], c=pred, edgecolor='black', lw=1.5, s=100, cmap=plt.get_cmap('viridis'))\nplt.show()",
"_____no_output_____"
]
],
[
[
"How good of a job did KMeans do? Was it able to find clusters that match or are similar to the original labels?\n\nLet us now try clustering with [GaussianMixture](http://scikit-learn.org/stable/modules/generated/sklearn.mixture.GaussianMixture.html):",
"_____no_output_____"
]
],
[
[
"# TODO: Import GaussianMixture\nfrom sklearn.mixture import GaussianMixture\n\n# TODO: Create an instance of Gaussian Mixture with 3 components\ngmm = GaussianMixture(n_components = 3)\n\n# TODO: fit the dataset\ngmm = gmm.fit(X)\n\n# TODO: predict the clustering labels for the dataset\npred_gmm = gmm.predict(X)",
"_____no_output_____"
],
[
"# Plot the clusters\nplt.figure( figsize=(16,12))\nplt.scatter(X[:,0], X[:,1], c=pred_gmm, edgecolor='black', lw=1.5, s=100, cmap=plt.get_cmap('viridis'))\nplt.show()",
"_____no_output_____"
]
],
[
[
"By visually comparing the result of KMeans and GMM clustering, which one was better able to match the original?\n- The GMM is better than KMeans.",
"_____no_output_____"
],
[
"# 2. KMeans vs GMM on The Iris Dataset\n\nFor our second example, we'll take a dataset that has more than two features. The Iris dataset is great for this purpose since it is reasonable to assume it's distributed according to Gaussian distributions.\n\nThe Iris dataset is a labeled dataset with four features:\n",
"_____no_output_____"
]
],
[
[
"import seaborn as sns\n\niris = sns.load_dataset(\"iris\")\n\niris.head()",
"_____no_output_____"
]
],
[
[
"How do you visualize a datset with four dimensions? \n\nThere are a few ways (e.g. [PairGrid](https://seaborn.pydata.org/generated/seaborn.PairGrid.html), [t-SNE](http://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html), or [project into a lower number number dimensions using PCA](http://scikit-learn.org/stable/auto_examples/decomposition/plot_pca_iris.html#sphx-glr-auto-examples-decomposition-plot-pca-iris-py)). Let's attempt to visualize using PairGrid because it does not distort the dataset -- it merely plots every pair of features against each other in a subplot:",
"_____no_output_____"
]
],
[
[
"g = sns.PairGrid(iris, hue=\"species\", palette=sns.color_palette(\"cubehelix\", 3), vars=['sepal_length','sepal_width','petal_length','petal_width'])\ng.map(plt.scatter)\nplt.show()",
"_____no_output_____"
]
],
[
[
"If we cluster the Iris datset using KMeans, how close would the resulting clusters match the original labels?",
"_____no_output_____"
]
],
[
[
"kmeans_iris = KMeans(n_clusters=3)\npred_kmeans_iris = kmeans_iris.fit_predict(iris[['sepal_length','sepal_width','petal_length','petal_width']])",
"_____no_output_____"
],
[
"iris['kmeans_pred'] = pred_kmeans_iris\n\ng = sns.PairGrid(iris, hue=\"kmeans_pred\", palette=sns.color_palette(\"cubehelix\", 3), vars=['sepal_length','sepal_width','petal_length','petal_width'])\ng.map(plt.scatter)\nplt.show()",
"_____no_output_____"
]
],
[
[
"How do these clusters match the original labels?\n\nYou can clearly see that visual inspection is no longer useful if we're working with multiple dimensions like this. So how can we evaluate the clustering result versus the original labels? \n\nYou guessed it. We can use an external cluster validation index such as the [adjusted Rand score](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.adjusted_rand_score.html) which generates a score between -1 and 1 (where an exact match will be scored as 1).",
"_____no_output_____"
]
],
[
[
"print(pred_kmeans_iris.shape)\nprint(iris['species'].shape)",
"(150,)\n(150,)\n"
],
[
"# TODO: Import adjusted rand score\nfrom sklearn.metrics import adjusted_rand_score\n\n# TODO: calculate adjusted rand score passing in the original labels and the kmeans predicted labels \niris_kmeans_score = adjusted_rand_score(iris['species'], pred_kmeans_iris)\n\n# Print the score\niris_kmeans_score",
"_____no_output_____"
]
],
[
[
"What if we cluster using Gaussian Mixture models? Would it earn a better ARI score?",
"_____no_output_____"
]
],
[
[
"gmm_iris = GaussianMixture(n_components=3).fit(iris[['sepal_length','sepal_width','petal_length','petal_width']])\npred_gmm_iris = gmm_iris.predict(iris[['sepal_length','sepal_width','petal_length','petal_width']])",
"_____no_output_____"
],
[
"iris['gmm_pred'] = pred_gmm_iris\n\n# TODO: calculate adjusted rand score passing in the original \n# labels and the GMM predicted labels iris['species']\niris_gmm_score = adjusted_rand_score(iris['species'], pred_gmm_iris)\n\n# Print the score\niris_gmm_score",
"_____no_output_____"
]
],
[
[
"Thanks to ARI socres, we have a clear indicator which clustering result better matches the original dataset.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
4a06ae2618addb0dfd36d1a04e110c01e7885a53
| 40,938 |
ipynb
|
Jupyter Notebook
|
openfl-tutorials/interactive_api/Tensorflow_MNIST/workspace/Tensorflow_MNIST.ipynb
|
eceisik/openfl
|
050b8354b698a34b5ef01f0f55f968f52f63f84d
|
[
"Apache-2.0"
] | 1 |
2022-03-29T17:17:05.000Z
|
2022-03-29T17:17:05.000Z
|
openfl-tutorials/interactive_api/Tensorflow_MNIST/workspace/Tensorflow_MNIST.ipynb
|
eceisik/openfl
|
050b8354b698a34b5ef01f0f55f968f52f63f84d
|
[
"Apache-2.0"
] | null | null | null |
openfl-tutorials/interactive_api/Tensorflow_MNIST/workspace/Tensorflow_MNIST.ipynb
|
eceisik/openfl
|
050b8354b698a34b5ef01f0f55f968f52f63f84d
|
[
"Apache-2.0"
] | null | null | null | 51.044888 | 738 | 0.589062 |
[
[
[
"# Federated Tensorflow Mnist Tutorial\n",
"_____no_output_____"
],
[
"# Long-Living entities update\n\n* We now may have director running on another machine.\n* We use Federation API to communicate with Director.\n* Federation object should hold a Director's client (for user service)\n* Keeping in mind that several API instances may be connacted to one Director.\n\n\n* We do not think for now how we start a Director.\n* But it knows the data shape and target shape for the DataScience problem in the Federation.\n* Director holds the list of connected envoys, we do not need to specify it anymore.\n* Director and Envoys are responsible for encrypting connections, we do not need to worry about certs.\n\n\n* Yet we MUST have a cert to communicate to the Director.\n* We MUST know the FQDN of a Director.\n* Director communicates data and target shape to the Federation interface object.\n\n\n* Experiment API may use this info to construct a dummy dataset and a `shard descriptor` stub.",
"_____no_output_____"
]
],
[
[
"# Install dependencies if not already installed\n# !pip install tensorflow==2.3.1",
"Collecting tensorflow==2.3.1\n Using cached tensorflow-2.3.1-cp38-cp38-manylinux2010_x86_64.whl (320.5 MB)\nRequirement already satisfied: wheel>=0.26 in /home/amokrov/venvs/py_38/lib/python3.8/site-packages (from tensorflow==2.3.1) (0.37.0)\nCollecting h5py<2.11.0,>=2.10.0\n Using cached h5py-2.10.0-cp38-cp38-manylinux1_x86_64.whl (2.9 MB)\nCollecting gast==0.3.3\n Using cached gast-0.3.3-py2.py3-none-any.whl (9.7 kB)\nCollecting keras-preprocessing<1.2,>=1.1.1\n Using cached Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB)\nCollecting numpy<1.19.0,>=1.16.0\n Using cached numpy-1.18.5-cp38-cp38-manylinux1_x86_64.whl (20.6 MB)\nCollecting opt-einsum>=2.3.2\n Using cached opt_einsum-3.3.0-py3-none-any.whl (65 kB)\nRequirement already satisfied: protobuf>=3.9.2 in /home/amokrov/venvs/py_38/lib/python3.8/site-packages (from tensorflow==2.3.1) (3.18.1)\nCollecting tensorflow-estimator<2.4.0,>=2.3.0\n Using cached tensorflow_estimator-2.3.0-py2.py3-none-any.whl (459 kB)\nRequirement already satisfied: six>=1.12.0 in /home/amokrov/venvs/py_38/lib/python3.8/site-packages (from tensorflow==2.3.1) (1.16.0)\nRequirement already satisfied: tensorboard<3,>=2.3.0 in /home/amokrov/venvs/py_38/lib/python3.8/site-packages (from tensorflow==2.3.1) (2.6.0)\nCollecting google-pasta>=0.1.8\n Using cached google_pasta-0.2.0-py3-none-any.whl (57 kB)\nCollecting astunparse==1.6.3\n Using cached astunparse-1.6.3-py2.py3-none-any.whl (12 kB)\nRequirement already satisfied: grpcio>=1.8.6 in /home/amokrov/venvs/py_38/lib/python3.8/site-packages (from tensorflow==2.3.1) (1.34.1)\nCollecting wrapt>=1.11.1\n Downloading wrapt-1.13.1-cp38-cp38-manylinux2010_x86_64.whl (84 kB)\n\u001b[K |████████████████████████████████| 84 kB 468 kB/s eta 0:00:01\n\u001b[?25hCollecting termcolor>=1.1.0\n Using cached termcolor-1.1.0-py3-none-any.whl\nRequirement already satisfied: absl-py>=0.7.0 in /home/amokrov/venvs/py_38/lib/python3.8/site-packages (from tensorflow==2.3.1) (0.14.1)\nRequirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /home/amokrov/venvs/py_38/lib/python3.8/site-packages (from tensorboard<3,>=2.3.0->tensorflow==2.3.1) (0.4.6)\nRequirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /home/amokrov/venvs/py_38/lib/python3.8/site-packages (from tensorboard<3,>=2.3.0->tensorflow==2.3.1) (1.8.0)\nRequirement already satisfied: requests<3,>=2.21.0 in /home/amokrov/venvs/py_38/lib/python3.8/site-packages (from tensorboard<3,>=2.3.0->tensorflow==2.3.1) (2.26.0)\nRequirement already satisfied: markdown>=2.6.8 in /home/amokrov/venvs/py_38/lib/python3.8/site-packages (from tensorboard<3,>=2.3.0->tensorflow==2.3.1) (3.3.4)\nRequirement already satisfied: google-auth<2,>=1.6.3 in /home/amokrov/venvs/py_38/lib/python3.8/site-packages (from tensorboard<3,>=2.3.0->tensorflow==2.3.1) (1.35.0)\nRequirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /home/amokrov/venvs/py_38/lib/python3.8/site-packages (from tensorboard<3,>=2.3.0->tensorflow==2.3.1) (0.6.1)\nRequirement already satisfied: setuptools>=41.0.0 in /home/amokrov/venvs/py_38/lib/python3.8/site-packages (from tensorboard<3,>=2.3.0->tensorflow==2.3.1) (58.2.0)\nRequirement already satisfied: werkzeug>=0.11.15 in /home/amokrov/venvs/py_38/lib/python3.8/site-packages (from tensorboard<3,>=2.3.0->tensorflow==2.3.1) (2.0.2)\nRequirement already satisfied: pyasn1-modules>=0.2.1 in /home/amokrov/venvs/py_38/lib/python3.8/site-packages (from google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow==2.3.1) (0.2.8)\nRequirement already 
satisfied: rsa<5,>=3.1.4 in /home/amokrov/venvs/py_38/lib/python3.8/site-packages (from google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow==2.3.1) (4.7.2)\nRequirement already satisfied: cachetools<5.0,>=2.0.0 in /home/amokrov/venvs/py_38/lib/python3.8/site-packages (from google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow==2.3.1) (4.2.4)\nRequirement already satisfied: requests-oauthlib>=0.7.0 in /home/amokrov/venvs/py_38/lib/python3.8/site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<3,>=2.3.0->tensorflow==2.3.1) (1.3.0)\nRequirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /home/amokrov/venvs/py_38/lib/python3.8/site-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard<3,>=2.3.0->tensorflow==2.3.1) (0.4.8)\nRequirement already satisfied: certifi>=2017.4.17 in /home/amokrov/venvs/py_38/lib/python3.8/site-packages (from requests<3,>=2.21.0->tensorboard<3,>=2.3.0->tensorflow==2.3.1) (2021.5.30)\nRequirement already satisfied: urllib3<1.27,>=1.21.1 in /home/amokrov/venvs/py_38/lib/python3.8/site-packages (from requests<3,>=2.21.0->tensorboard<3,>=2.3.0->tensorflow==2.3.1) (1.26.7)\nRequirement already satisfied: idna<4,>=2.5 in /home/amokrov/venvs/py_38/lib/python3.8/site-packages (from requests<3,>=2.21.0->tensorboard<3,>=2.3.0->tensorflow==2.3.1) (3.2)\nRequirement already satisfied: charset-normalizer~=2.0.0 in /home/amokrov/venvs/py_38/lib/python3.8/site-packages (from requests<3,>=2.21.0->tensorboard<3,>=2.3.0->tensorflow==2.3.1) (2.0.6)\nRequirement already satisfied: oauthlib>=3.0.0 in /home/amokrov/venvs/py_38/lib/python3.8/site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<3,>=2.3.0->tensorflow==2.3.1) (3.1.1)\nInstalling collected packages: numpy, wrapt, termcolor, tensorflow-estimator, opt-einsum, keras-preprocessing, h5py, google-pasta, gast, astunparse, tensorflow\n Attempting uninstall: numpy\n Found existing installation: numpy 1.21.2\n Uninstalling numpy-1.21.2:\n Successfully uninstalled numpy-1.21.2\nSuccessfully installed astunparse-1.6.3 gast-0.3.3 google-pasta-0.2.0 h5py-2.10.0 keras-preprocessing-1.1.2 numpy-1.18.5 opt-einsum-3.3.0 tensorflow-2.3.1 tensorflow-estimator-2.3.0 termcolor-1.1.0 wrapt-1.13.1\n"
]
],
[
[
"## Connect to the Federation",
"_____no_output_____"
]
],
[
[
"# Create a federation\nfrom openfl.interface.interactive_api.federation import Federation\n\n# please use the same identificator that was used in signed certificate\nclient_id = 'api'\ncert_dir = 'cert'\ndirector_node_fqdn = 'localhost'\ndirector_port=50051\n# 1) Run with API layer - Director mTLS \n# If the user wants to enable mTLS their must provide CA root chain, and signed key pair to the federation interface\n# cert_chain = f'{cert_dir}/root_ca.crt'\n# api_certificate = f'{cert_dir}/{client_id}.crt'\n# api_private_key = f'{cert_dir}/{client_id}.key'\n\n# federation = Federation(\n# client_id=client_id,\n# director_node_fqdn=director_node_fqdn,\n# director_port=director_port,\n# cert_chain=cert_chain,\n# api_cert=api_certificate,\n# api_private_key=api_private_key\n# )\n\n# --------------------------------------------------------------------------------------------------------------------\n\n# 2) Run with TLS disabled (trusted environment)\n# Federation can also determine local fqdn automatically\nfederation = Federation(\n client_id=client_id,\n director_node_fqdn=director_node_fqdn,\n director_port=director_port, \n tls=False\n)\n",
"_____no_output_____"
],
[
"shard_registry = federation.get_shard_registry()\nshard_registry",
"_____no_output_____"
],
[
"# First, request a dummy_shard_desc that holds information about the federated dataset \ndummy_shard_desc = federation.get_dummy_shard_descriptor(size=10)\ndummy_shard_dataset = dummy_shard_desc.get_dataset('train')\nsample, target = dummy_shard_dataset[0]\nf\"Sample shape: {sample.shape}, target shape: {target.shape}\"",
"_____no_output_____"
]
],
[
[
"## Describing FL experimen",
"_____no_output_____"
]
],
[
[
"from openfl.interface.interactive_api.experiment import TaskInterface, DataInterface, ModelInterface, FLExperiment",
"2021-10-11 18:54:28.413778: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory\n2021-10-11 18:54:28.413798: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\n"
]
],
[
[
"### Register model",
"_____no_output_____"
]
],
[
[
"from layers import create_model, optimizer\nframework_adapter = 'openfl.plugins.frameworks_adapters.keras_adapter.FrameworkAdapterPlugin'\nmodel = create_model()\nMI = ModelInterface(model=model, optimizer=optimizer, framework_plugin=framework_adapter)",
"2021-10-11 18:54:33.125120: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1\n2021-10-11 18:54:33.157815: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2021-10-11 18:54:33.158223: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: \npciBusID: 0000:01:00.0 name: NVIDIA GeForce RTX 2080 Ti computeCapability: 7.5\ncoreClock: 1.545GHz coreCount: 68 deviceMemorySize: 10.76GiB deviceMemoryBandwidth: 573.69GiB/s\n2021-10-11 18:54:33.158303: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory\n2021-10-11 18:54:33.158374: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcublas.so.10'; dlerror: libcublas.so.10: cannot open shared object file: No such file or directory\n2021-10-11 18:54:33.159904: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10\n2021-10-11 18:54:33.160140: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10\n2021-10-11 18:54:33.166864: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10\n2021-10-11 18:54:33.166944: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcusparse.so.10'; dlerror: libcusparse.so.10: cannot open shared object file: No such file or directory\n2021-10-11 18:54:33.167005: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudnn.so.7'; dlerror: libcudnn.so.7: cannot open shared object file: No such file or directory\n2021-10-11 18:54:33.167013: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1753] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.\nSkipping registering GPU devices...\n2021-10-11 18:54:33.167358: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA\nTo enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\n2021-10-11 18:54:33.172625: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 2899885000 Hz\n2021-10-11 18:54:33.173076: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5545a20 initialized for platform Host (this does not guarantee that XLA will be used). Devices:\n2021-10-11 18:54:33.173091: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version\n2021-10-11 18:54:33.174084: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:\n2021-10-11 18:54:33.174095: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] \n"
]
],
[
[
"### Register dataset",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom tensorflow.keras.utils import Sequence\n\nclass DataGenerator(Sequence):\n\n def __init__(self, shard_descriptor, batch_size):\n self.shard_descriptor = shard_descriptor\n self.batch_size = batch_size\n self.indices = np.arange(len(shard_descriptor))\n self.on_epoch_end()\n\n def __len__(self):\n return len(self.indices) // self.batch_size\n\n def __getitem__(self, index):\n index = self.indices[index * self.batch_size:(index + 1) * self.batch_size]\n batch = [self.indices[k] for k in index]\n\n X, y = self.shard_descriptor[batch]\n return X, y\n\n def on_epoch_end(self):\n np.random.shuffle(self.indices)\n\n\nclass MnistFedDataset(DataInterface):\n\n def __init__(self, **kwargs):\n super().__init__(**kwargs)\n\n @property\n def shard_descriptor(self):\n return self._shard_descriptor\n\n @shard_descriptor.setter\n def shard_descriptor(self, shard_descriptor):\n \"\"\"\n Describe per-collaborator procedures or sharding.\n\n This method will be called during a collaborator initialization.\n Local shard_descriptor will be set by Envoy.\n \"\"\"\n self._shard_descriptor = shard_descriptor\n \n self.train_set = shard_descriptor.get_dataset('train')\n self.valid_set = shard_descriptor.get_dataset('val')\n\n def __getitem__(self, index):\n return self.shard_descriptor[index]\n\n def __len__(self):\n return len(self.shard_descriptor)\n\n def get_train_loader(self):\n \"\"\"\n Output of this method will be provided to tasks with optimizer in contract\n \"\"\"\n if self.kwargs['train_bs']:\n batch_size = self.kwargs['train_bs']\n else:\n batch_size = 32\n return DataGenerator(self.train_set, batch_size=batch_size)\n\n def get_valid_loader(self):\n \"\"\"\n Output of this method will be provided to tasks without optimizer in contract\n \"\"\"\n if self.kwargs['valid_bs']:\n batch_size = self.kwargs['valid_bs']\n else:\n batch_size = 32\n \n return DataGenerator(self.valid_set, batch_size=batch_size)\n\n def get_train_data_size(self):\n \"\"\"\n Information for aggregation\n \"\"\"\n \n return len(self.train_set)\n\n def get_valid_data_size(self):\n \"\"\"\n Information for aggregation\n \"\"\"\n return len(self.valid_set)",
"_____no_output_____"
]
],
[
[
"### Create Mnist federated dataset",
"_____no_output_____"
]
],
[
[
"fed_dataset = MnistFedDataset(train_bs=64, valid_bs=512)",
"_____no_output_____"
]
],
[
[
"## Define and register FL tasks",
"_____no_output_____"
]
],
[
[
"TI = TaskInterface()\n\nimport time\nimport tensorflow as tf\nfrom layers import train_acc_metric, val_acc_metric, loss_fn\n\[email protected]_fl_task(model='model', data_loader='train_dataset', \\\n device='device', optimizer='optimizer') \ndef train(model, train_dataset, optimizer, device, loss_fn=loss_fn, warmup=False):\n start_time = time.time()\n\n # Iterate over the batches of the dataset.\n for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):\n with tf.GradientTape() as tape:\n logits = model(x_batch_train, training=True)\n loss_value = loss_fn(y_batch_train, logits)\n grads = tape.gradient(loss_value, model.trainable_weights)\n optimizer.apply_gradients(zip(grads, model.trainable_weights))\n\n # Update training metric.\n train_acc_metric.update_state(y_batch_train, logits)\n\n # Log every 200 batches.\n if step % 200 == 0:\n print(\n \"Training loss (for one batch) at step %d: %.4f\"\n % (step, float(loss_value))\n )\n print(\"Seen so far: %d samples\" % ((step + 1) * 64))\n if warmup:\n break\n\n # Display metrics at the end of each epoch.\n train_acc = train_acc_metric.result()\n print(\"Training acc over epoch: %.4f\" % (float(train_acc),))\n\n # Reset training metrics at the end of each epoch\n train_acc_metric.reset_states()\n\n \n return {'train_acc': train_acc,}\n\n\[email protected]_fl_task(model='model', data_loader='val_dataset', device='device') \ndef validate(model, val_dataset, device):\n # Run a validation loop at the end of each epoch.\n for x_batch_val, y_batch_val in val_dataset:\n val_logits = model(x_batch_val, training=False)\n # Update val metrics\n val_acc_metric.update_state(y_batch_val, val_logits)\n val_acc = val_acc_metric.result()\n val_acc_metric.reset_states()\n print(\"Validation acc: %.4f\" % (float(val_acc),))\n \n return {'validation_accuracy': val_acc,}",
"_____no_output_____"
]
],
[
[
"## Time to start a federated learning experiment",
"_____no_output_____"
]
],
[
[
"# create an experimnet in federation\nexperiment_name = 'mnist_experiment'\nfl_experiment = FLExperiment(federation=federation, experiment_name=experiment_name)",
"_____no_output_____"
],
[
"# The following command zips the workspace and python requirements to be transfered to collaborator nodes\nfl_experiment.start(model_provider=MI, \n task_keeper=TI,\n data_loader=fed_dataset,\n rounds_to_train=5,\n opt_treatment='CONTINUE_GLOBAL')",
"_____no_output_____"
],
[
"fl_experiment.stream_metrics()",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
4a06b230f530226f4d2b5d05cd38958552cf45a1
| 32,177 |
ipynb
|
Jupyter Notebook
|
examples/user_guide/12-Responding_to_Events.ipynb
|
chbrandt/holoviews
|
4dcddcc3a8a278dea550147def62f46ecd3d5d1d
|
[
"BSD-3-Clause"
] | null | null | null |
examples/user_guide/12-Responding_to_Events.ipynb
|
chbrandt/holoviews
|
4dcddcc3a8a278dea550147def62f46ecd3d5d1d
|
[
"BSD-3-Clause"
] | null | null | null |
examples/user_guide/12-Responding_to_Events.ipynb
|
chbrandt/holoviews
|
4dcddcc3a8a278dea550147def62f46ecd3d5d1d
|
[
"BSD-3-Clause"
] | null | null | null | 36.07287 | 607 | 0.628896 |
[
[
[
"# Responding to Events",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport holoviews as hv\nfrom holoviews import opts\n\nhv.extension('bokeh')",
"_____no_output_____"
]
],
[
[
"In the [Live Data](./07-Live_Data.ipynb) guide we saw how ``DynamicMap`` allows us to explore high dimensional data using the widgets in the same style as ``HoloMaps``. Although suitable for unbounded exploration of large parameter spaces, the ``DynamicMaps`` described in that notebook support exactly the same mode of interaction as ``HoloMaps``. In particular, the key dimensions are used to specify a set of widgets that when manipulated apply the appopriate indexing to invoke the user-supplied callable.\n\nIn this user guide we will explore the HoloViews streams system that allows *any* sort of value to be supplied from *anywhere*. This system opens a huge set of new possible visualization types, including continuously updating plots that reflect live data as well as dynamic visualizations that can be interacted with directly, as described in the [Custom Interactivity](./13-Custom_Interactivity.ipynb) guide.\n\n<center><div class=\"alert alert-info\" role=\"alert\">To use visualize and use a <b>DynamicMap</b> you need to be running a live Jupyter server.<br>This user guide assumes that it will be run in a live notebook environment.<br>\nWhen viewed statically, DynamicMaps will only show the first available Element.<br></div></center>",
"_____no_output_____"
]
],
[
[
"# Styles and plot options used in this user guide\n\nopts.defaults(\n opts.Area(fill_color='cornsilk', line_width=2,\n line_color='black'),\n opts.Ellipse(bgcolor='white', color='black'),\n opts.HLine(color='red', line_width=2),\n opts.Image(cmap='viridis'),\n opts.Path(bgcolor='white', color='black', line_dash='dashdot',\n show_grid=False),\n opts.VLine(color='red', line_width=2))",
"_____no_output_____"
]
],
[
[
"## A simple ``DynamicMap``",
"_____no_output_____"
],
[
"Before introducing streams, let us declare a simple ``DynamicMap`` of the sort discussed in the [Live Data](07-Live_Data.ipynb) user guide. This example consists of a ``Curve`` element showing a [Lissajous curve](https://en.wikipedia.org/wiki/Lissajous_curve) with ``VLine`` and ``HLine`` annotations to form a crosshair:",
"_____no_output_____"
]
],
[
[
"lin = np.linspace(-np.pi,np.pi,300)\n\ndef lissajous(t, a=3, b=5, delta=np.pi/2.):\n return (np.sin(a * t + delta), np.sin(b * t))\n\ndef lissajous_crosshair(t, a=3, b=5, delta=np.pi/2):\n (x,y) = lissajous(t,a,b,delta)\n return hv.VLine(x) * hv.HLine(y)\n\ncrosshair = hv.DynamicMap(lissajous_crosshair, kdims='t').redim.range(t=(-3.,3.))\n\npath = hv.Path(lissajous(lin))\n\npath * crosshair",
"_____no_output_____"
]
],
[
[
"As expected, the declared key dimension (``kdims``) has turned into a slider widget that lets us move the crosshair along the curve. Now let's see how to position the crosshair using streams.",
"_____no_output_____"
],
[
"## Introducing streams\n\n",
"_____no_output_____"
],
[
"The core concept behind a stream is simple: it defines one or more parameters that can change over time that automatically refreshes code depending on those parameter values. \n\nLike all objects in HoloViews, these parameters are declared using [param](https://ioam.github.io/param) and streams are defined as a parameterized subclass of the ``holoviews.streams.Stream``. A more convenient way is to use the ``Stream.define`` classmethod:",
"_____no_output_____"
]
],
[
[
"from holoviews.streams import Stream, param\nTime = Stream.define('Time', t=0.0)",
"_____no_output_____"
]
],
[
[
"This results in a ``Time`` class with a numeric ``t`` parameter that defaults to zero. As this object is parameterized, we can use ``hv.help`` to view it's parameters:",
"_____no_output_____"
]
],
[
[
"hv.help(Time)",
"_____no_output_____"
]
],
[
[
"This parameter is a ``param.Number`` as we supplied a float, if we had supplied an integer it would have been a ``param.Integer``. Notice that there is no docstring in the help output above but we can add one by explicitly defining the parameter as follows:",
"_____no_output_____"
]
],
[
[
"Time = Stream.define('Time', t=param.Number(default=0.0, doc='A time parameter'))\nhv.help(Time)",
"_____no_output_____"
]
],
[
[
"Now we have defined this ``Time`` stream class, we can make of an instance of it and look at its parameters:",
"_____no_output_____"
]
],
[
[
"time_dflt = Time()\nprint('This Time instance has parameter t={t}'.format(t=time_dflt.t))",
"_____no_output_____"
]
],
[
[
"As with all parameterized classes, we can choose to instantiate our parameters with suitable values instead of relying on defaults.",
"_____no_output_____"
]
],
[
[
"time = Time(t=np.pi/4)\nprint('This Time instance has parameter t={t}'.format(t=time.t))",
"_____no_output_____"
]
],
[
[
"For more information on defining ``Stream`` classes this way, use ``hv.help(Stream.define)``.",
"_____no_output_____"
],
[
"### Simple streams example",
"_____no_output_____"
],
[
"We can now supply this streams object to a ``DynamicMap`` using the same ``lissajous_crosshair`` callback from above by adding it to the ``streams`` list:",
"_____no_output_____"
]
],
[
[
"dmap = hv.DynamicMap(lissajous_crosshair, streams=[time])\npath * dmap + path * lissajous_crosshair(t=np.pi/4.)",
"_____no_output_____"
]
],
[
[
"Immediately we see that the crosshair position of the ``DynamicMap`` reflects the ``t`` parameter values we set on the ``Time`` stream. This means that the ``t`` parameter was supplied as the argument to the ``lissajous_curve`` callback. As we now have no key dimensions, there is no longer a widget for the ``t`` dimensions.\n\nAlthough we have what looks like a static plot, it is in fact dynamic and can be updated in place at any time. To see this, we can call the ``event`` method on our ``DynamicMap``:\n",
"_____no_output_____"
]
],
[
[
"dmap.event(t=0.2)",
"_____no_output_____"
]
],
[
[
"Running this cell will have updated the crosshair from its original position where $t=\\frac{\\pi}{4}$ to a new position where ``t=0.2``. Try running the cell above with different values of ``t`` and watch the plot update!\n\nThis ``event`` method is the recommended way of updating the stream parameters on a ``DynamicMap`` but if you have a handle on the relevant stream instance, you can also call the ``event`` method on that:",
"_____no_output_____"
]
],
[
[
"time.event(t=-0.2)",
"_____no_output_____"
]
],
[
[
"Running the cell above also moves the crosshair to a new position. As there are no key dimensions, there is only a single valid (empty) key that can be accessed with ``dmap[()]`` or ``dmap.select()`` making ``event`` the only way to explore new parameters.\n\nWe will examine the ``event`` method and the machinery that powers streams in more detail later in the user guide after we have looked at more examples of how streams are used in practice.",
"_____no_output_____"
],
[
"### Working with multiple streams",
"_____no_output_____"
],
[
"The previous example showed a curve parameterized by a single dimension ``t``. Often you will have multiple stream parameters you would like to declare as follows:",
"_____no_output_____"
]
],
[
[
"ls = np.linspace(0, 10, 200)\nxx, yy = np.meshgrid(ls, ls)\n\nXY = Stream.define('XY',x=0.0,y=0.0)\n\ndef marker(x,y):\n return hv.VLine(x) * hv.HLine(y)\n\nimage = hv.Image(np.sin(xx)*np.cos(yy))\n\ndmap = hv.DynamicMap(marker, streams=[XY()])\n\nimage * dmap",
"_____no_output_____"
]
],
[
[
"You can update both ``x`` and ``y`` by passing multiple keywords to the ``event`` method:",
"_____no_output_____"
]
],
[
[
"dmap.event(x=-0.2, y=0.1)",
"_____no_output_____"
]
],
[
[
"Note that the definition above behaves the same as the following definition where we define separate ``X`` and ``Y`` stream classes:\n\n```python\nX = Stream.define('X',x=0.0)\nY = Stream.define('Y',y=0.0)\nhv.DynamicMap(marker, streams=[X(), Y()])\n```\n\nThe reason why you might want to list multiple streams instead of always defining a single stream containing all the required stream parameters will be made clear in the [Custom Interactivity](./13-Custom_Interactivity.ipynb) guide.",
"_____no_output_____"
],
[
"## Using Parameterized classes as a stream\n\nCreating a custom ``Stream`` class is one easy way to declare parameters, however in many cases you may have already expressed your domain knowledge on a ``Parameterized`` class. A ``DynamicMap`` can easily be linked to the parameters of the class using a so called ``Params`` stream, let's define a simple example which will let use dynamically alter the style applied to the ``Image`` from the previous example. We define a ``Style`` class with two parameters, one to control the colormap and another to vary the number of color levels:",
"_____no_output_____"
]
],
[
[
"from holoviews.streams import Params\n\nclass Style(param.Parameterized):\n\n cmap = param.ObjectSelector(default='viridis', objects=['viridis', 'plasma', 'magma'])\n\n color_levels = param.Integer(default=255, bounds=(1, 255))\n\nstyle = Style()\n\nstream = Params(style)\n\nhv.DynamicMap(image.opts, streams=[stream]).opts(colorbar=True, width=400)",
"_____no_output_____"
]
],
[
[
"Instead of providing a custom callback function we supplied the ``image.opts`` method, which applies the parameters directly as options. Unlike a regular streams class the plot will update whenever a parameter on the instance or class changes, e.g. we can update set the ``cmap`` and ``color_level`` parameters and watch the plot update in response:",
"_____no_output_____"
]
],
[
[
"style.color_levels = 10\nstyle.cmap = 'plasma'",
"_____no_output_____"
]
],
[
[
"This is a powerful pattern to link parameters to a plot, particularly when combined with the [Panel](http://panel.pyviz.org/) library, which makes it easy to generate a set of widgets from a Parameterized class. To see how this works in practice see the [Dashboards user guide](./16-Dashboards.ipynb).",
"_____no_output_____"
],
[
"## Combining streams and key dimensions\n",
"_____no_output_____"
],
[
"All the ``DynamicMap`` examples above can't be indexed with anything other than ``dmap[()]`` or ``dmap.select()`` as none of them had any key dimensions. This was to focus exclusively on the streams system at the start of the user guide and not because you can't combine key dimensions and streams:",
"_____no_output_____"
]
],
[
[
"xs = np.linspace(-3, 3, 400)\n\ndef function(xs, time):\n \"Some time varying function\"\n return np.exp(np.sin(xs+np.pi/time))\n\ndef integral(limit, time):\n curve = hv.Curve((xs, function(xs, time)))[limit:]\n area = hv.Area ((xs, function(xs, time)))[:limit]\n summed = area.dimension_values('y').sum() * 0.015 # Numeric approximation\n return (area * curve * hv.VLine(limit) * hv.Text(limit + 0.5, 2.0, '%.2f' % summed))\n\nTime = Stream.define('Time', time=1.0)\ndmap=hv.DynamicMap(integral, kdims='limit', streams=[Time()]).redim.range(limit=(-3,2))\ndmap",
"_____no_output_____"
]
],
[
[
"In this example, you can drag the slider to see a numeric approximation to the integral on the left side on the ``VLine``.\n\nAs ``'limit'`` is declared as a key dimension, it is given a normal HoloViews slider. As we have also defined a ``time`` stream, we can update the displayed curve for any time value:",
"_____no_output_____"
]
],
[
[
"dmap.event(time=8)",
"_____no_output_____"
]
],
[
[
"We now see how to control the ``time`` argument of the integral function by triggering an event with a new time value, and how to control the ``limit`` argument by moving a slider. Controlling ``limit`` with a slider this way is valid but also a little unintuitive: what if you could control ``limit`` just by hovering over the plot?\n\nIn the [Custom Interactivity](13-Custom_Interactivity.ipynb) user guide, we will see how we can do exactly this by switching to the bokeh backend and using the linked streams system.",
"_____no_output_____"
],
[
"### Matching names to arguments",
"_____no_output_____"
],
[
"Note that in the example above, the key dimension names and the stream parameter names match the arguments to the callable. This *must* be true for stream parameters but this isn't a requirement for key dimensions: if you replace the word 'radius' with 'size' in the example above after ``XY`` is defined, the example still works. \n\nHere are the rules regarding the callback argument names:\n\n* If your key dimensions and stream parameters match the callable argument names, the definition is valid.\n* If your callable accepts mandatory positional arguments and their number matches the number of key dimensions, the names don't need to match and these arguments will be passed key dimensions values.\n\nAs stream parameters always need to match the argument names, there is a method to allow them to be easily renamed. Let's say you imported a stream class as shown in [Custom_Interactivity](13-Custom_Interactivity.ipynb) or for this example, reuse the existing ``XY`` stream class. You can then use the ``rename`` method allowing the following definition:",
"_____no_output_____"
]
],
[
[
"def integral2(lim, t): \n 'Same as integral with different argument names'\n return integral(lim, t)\n\ndmap = hv.DynamicMap(integral2, kdims='limit', streams=[Time().rename(time='t')]).redim.range(limit=(-3.,3.))\ndmap",
"_____no_output_____"
]
],
[
[
"Occasionally, it is useful to suppress some of the stream parameters of a stream class, especially when using the *linked streams* described in [Custom_Interactivity](13-Custom_Interactivity.ipynb). To do this you can rename the stream parameter to ``None`` so that you no longer need to worry about it being passed as an argument to the callable. To re-enable a stream parameter, it is sufficient to either give the stream parameter it's original string name or a new string name.",
"_____no_output_____"
],
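[
"A minimal sketch of this (our own addition to the guide, reusing the ``Time`` stream class defined earlier) might look like:\n\n```python\n# Suppress the 'time' parameter so it is no longer passed to the callable\nmuted_time = Time().rename(time=None)\n\n# Re-enable it later by renaming it back to a string (its original name or a new one)\nrenamed_time = Time().rename(time='t')\n```",
"_____no_output_____"
],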
[
"## Overlapping stream and key dimensions",
"_____no_output_____"
],
[
"In the above example above, the stream parameters do not overlap with the declared key dimension. What happens if we add 'time' to the declared key dimensions?\n",
"_____no_output_____"
]
],
[
[
"dmap=hv.DynamicMap(integral, kdims=['time','limit'], streams=[Time()]).redim.range(limit=(-3.,3.))\ndmap",
"_____no_output_____"
]
],
[
[
"First you might notice that the 'time' value is now shown in the title but that there is no corresponding time slider as its value is supplied by the stream.\n\nThe 'time' parameter is now an instance of what are called 'dimensioned streams' which reenable indexing of these dimensions:",
"_____no_output_____"
]
],
[
[
"dmap[1,0] + dmap.select(time=3,limit=1.5) + dmap[None,1.5]",
"_____no_output_____"
]
],
[
[
"In **A**, we supply our own values for the 'time and 'limit' parameters. This doesn't change the values of the 'time' parameters on the stream itself but it does allow us to see what would happen when the time value is one. Note the use of ``None`` in **C** as a way of leaving an explicit value unspecified, allowing the current stream value to be used.\n\nThis is one good reason to use dimensioned streams - it restores access to convenient indexing and selecting operation as a way of exploring your visualizations. The other reason it is useful is that if you keep all your parameters dimensioned, it re-enables the ``DynamicMap`` cache described in the [Live Data](07-Live_Data.ipynb), allowing you to record your interaction with streams and allowing you to cast to ``HoloMap`` for export:",
"_____no_output_____"
]
],
[
[
"dmap.reset() # Reset the cache, we don't want the values from the cell above\n# TODO: redim the limit dimension to a default of 0\ndmap.event(time=1)\ndmap.event(time=1.5)\ndmap.event(time=2)\nhv.HoloMap(dmap)",
"_____no_output_____"
]
],
[
[
"One use of this would be to have a simulator drive a visualization forward using ``event`` in a loop. You could then stop your simulation and retain the recent history of the output as long as the allowed ``DynamicMap`` cache.",
"_____no_output_____"
],
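[
"As a rough sketch (our own illustration, assuming the dimensioned ``dmap`` from the cells above), such a simulation loop might look like:\n\n```python\n# Hypothetical simulator loop: each step computes a new time value and\n# pushes it to the DynamicMap, which re-renders in place.\nfor step in range(1, 6):\n    new_time = step * 0.5   # stand-in for a real simulation update\n    dmap.event(time=new_time)\n```",
"_____no_output_____"
],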
[
"## Generators and argument-free callables",
"_____no_output_____"
],
[
"In addition to callables, Python supports [generators](https://docs.python.org/3/glossary.html#term-generator) that can be defined with the ``yield`` keyword. Calling a function that uses yield returns a [generator iterator](https://docs.python.org/3/glossary.html#term-generator-iterator) object that accepts no arguments but returns new values when iterated or when ``next()`` is applied to it.\n\nHoloViews supports Python generators for completeness and [generator expressions](https://docs.python.org/3/glossary.html#term-generator-expression) can be a convenient way to define code inline instead of using lambda functions. As generators expressions don't accept arguments and can get 'exhausted' ***we recommend using callables with ``DynamicMap``*** - exposing the relevant arguments also exposes control over your visualization.\n\nUnlike generators, callables that have arguments allow you to re-visit portions of your parameter space instead of always being forced in one direction via calls to ``next()``. With this caveat in mind, here is an example of a generator and the corresponding generator iterator that returns a ``BoxWhisker`` element:",
"_____no_output_____"
]
],
[
[
"def sample_distributions(samples=10, tol=0.04):\n np.random.seed(42)\n while True:\n gauss1 = np.random.normal(size=samples)\n gauss2 = np.random.normal(size=samples)\n data = (['A']*samples + ['B']*samples, np.hstack([gauss1, gauss2]))\n yield hv.BoxWhisker(data, 'Group', 'Value')\n samples+=1\n \nsample_generator = sample_distributions()",
"_____no_output_____"
]
],
[
[
"This returns two box whiskers representing samples from two Gaussian distributions of 10 samples. Iterating over this generator simply resamples from these distributions using an additional sample each time.\n\nAs with a callable, we can pass our generator iterator to ``DynamicMap``:",
"_____no_output_____"
]
],
[
[
"hv.DynamicMap(sample_generator)",
"_____no_output_____"
]
],
[
[
"Without using streams, we now have a problem as there is no way to trigger the generator to view the next distribution in the sequence. We can solve this by defining a stream with no parameters:",
"_____no_output_____"
]
],
[
[
"dmap = hv.DynamicMap(sample_generator, streams=[Stream.define('Next')()])\ndmap",
"_____no_output_____"
]
],
[
[
"### Stream event update loops",
"_____no_output_____"
],
[
"Now we can simply use ``event()`` to drive the generator forward and update the plot, showing how the two Gaussian distributions converge as the number of samples increase.",
"_____no_output_____"
]
],
[
[
"for i in range(40):\n dmap.event()",
"_____no_output_____"
]
],
[
[
"Note that there is a better way to run loops that drive ``dmap.event()`` which supports a ``period`` (in seconds) between updates and a ``timeout`` argument (also in seconds):",
"_____no_output_____"
]
],
[
[
"dmap.periodic(0.1, 1000, timeout=3)",
"_____no_output_____"
]
],
[
[
"In this generator example, ``event`` does not require any arguments but you can set the ``param_fn`` argument to a callable that takes an iteration counter and returns a dictionary for setting the stream parameters. In addition you can use ``block=False`` to avoid blocking the notebook using a threaded loop. This can be very useful although it has two downsides 1. all running visualizations using non-blocking updates will be competing for computing resources 2. if you override a variable that the thread is actively using, there can be issues with maintaining consistent state in the notebook.\n\nGenerally, the ``periodic`` utility is recommended for all such event update loops and it will be used instead of explicit loops in the rest of the user guides involving streams.\n",
"_____no_output_____"
],
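[
"Here is a small sketch of that usage (our own illustration; it assumes a stream with a ``t`` parameter such as the ``Time`` stream defined near the start of this guide):\n\n```python\n# Every 0.1 seconds, derive t from the iteration counter i, run 100 iterations,\n# give up after 10 seconds and avoid blocking the notebook while doing so.\ndmap.periodic(0.1, 100, param_fn=lambda i: {'t': i * 0.1}, timeout=10, block=False)\n```",
"_____no_output_____"
],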
[
"### Using ``next()``",
"_____no_output_____"
],
[
"The approach shown above of using an empty stream works in an exactly analogous fashion for callables that take no arguments. In both cases, the ``DynamicMap`` ``next()`` method is enabled:",
"_____no_output_____"
]
],
[
[
"hv.HoloMap({i:next(dmap) for i in range(10)}, kdims='Iteration')",
"_____no_output_____"
]
],
[
[
"## Next steps",
"_____no_output_____"
],
[
"The streams system allows you to update plots in place making it possible to build live visualizations that update in response to incoming live data or any other type of event. As we have seen in this user guide, you can use streams together with key dimensions to add additional interactivity to your plots while retaining the familiar widgets.\n\nThis user guide used examples that work with either the matplotlib or bokeh backends. In the [Custom Interactivity](13-Custom_Interactivity.ipynb) user guide, you will see how you can directly interact with dynamic visualizations when using the bokeh backend.",
"_____no_output_____"
],
[
"## [Advanced] How streams work\n\n",
"_____no_output_____"
],
[
"This optional section is not necessary for users who simply want to use the streams system, but it does describe how streams actually work in more detail.\n\nA stream class is one that inherits from ``Stream`` that typically defines some new parameters. We have already seen one convenient way of defining a stream class:",
"_____no_output_____"
]
],
[
[
"defineXY = Stream.define('defineXY', x=0.0, y=0.0)",
"_____no_output_____"
]
],
[
[
"This is equivalent to the following definition which would be more appropriate in library code or for complex stream class requiring lots of parameters that need to be documented:",
"_____no_output_____"
]
],
[
[
"class XY(Stream):\n x = param.Number(default=0.0, constant=True, doc='An X position.')\n y = param.Number(default=0.0, constant=True, doc='A Y position.')",
"_____no_output_____"
]
],
[
[
"As we have already seen, we can make an instance of ``XY`` with some initial values for ``x`` and ``y``.",
"_____no_output_____"
]
],
[
[
"xy = XY(x=2,y=3)",
"_____no_output_____"
]
],
[
[
"However, trying to modify these parameters directly will result in an exception as they have been declared constant (e.g ``xy.x=4`` will throw an error). This is because there are two allowed ways of modifying these parameters, the simplest one being ``update``:",
"_____no_output_____"
]
],
[
[
"xy.update(x=4,y=50)\nxy.rename(x='xpos', y='ypos').contents",
"_____no_output_____"
]
],
[
[
"This shows how you can update the parameters and also shows the correct way to view the stream parameter values via the ``contents`` property as this will apply any necessary renaming.\n\nSo far, using ``update`` has done nothing but force us to access parameter a certain way. What makes streams work are the side-effects you can trigger when changing a value via the ``event`` method. The relevant side-effect is to invoke callables called 'subscribers'",
"_____no_output_____"
],
[
"### Subscribers",
"_____no_output_____"
],
[
"Without defining any subscribes, the ``event`` method is identical to ``update``:",
"_____no_output_____"
]
],
[
[
"xy = XY()\nxy.event(x=4,y=50)\nxy.contents",
"_____no_output_____"
]
],
[
[
"Now let's add a subscriber:",
"_____no_output_____"
]
],
[
[
"def subscriber(xpos,ypos):\n print('The subscriber received xpos={xpos} and ypos={ypos}'.format(xpos=xpos,ypos=ypos))\n\nxy = XY().rename(x='xpos', y='ypos')\nxy.add_subscriber(subscriber)\nxy.event(x=4,y=50)",
"_____no_output_____"
]
],
[
[
"As we can see, now when you call ``event``, our subscriber is called with the updated parameter values, renamed as appropriate. The ``event`` method accepts the original parameter names and the subscriber receives the new values after any renaming is applied. You can add as many subscribers as you want and you can clear them using the ``clear`` method:",
"_____no_output_____"
]
],
[
[
"xy.clear()\nxy.event(x=0,y=0)",
"_____no_output_____"
]
],
[
[
"When you define a ``DynamicMap`` using streams, the HoloViews plotting system installs the necessary callbacks as subscibers to update the plot when the stream parameters change. The above example clears all subscribers (it is equivalent to ``clear('all')``. To clear only the subscribers you define yourself use ``clear('user')`` and to clear any subscribers installed by the HoloViews plotting system use ``clear('internal')``.\n\nWhen using linked streams as described in the [Custom Interactivity](13-Custom_Interactivity.ipynb) user guide, the plotting system recognizes the stream class and registers the necessary machinery with Bokeh to update the stream values based on direct interaction with the plot.",
"_____no_output_____"
]
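,
[
"A short sketch of these policies (our own illustration, reusing the ``xy`` stream and ``subscriber`` defined above):\n\n```python\nxy.add_subscriber(subscriber)   # a subscriber we added ourselves\nxy.clear('user')                # removes only the subscribers you defined yourself\nxy.clear('internal')            # removes subscribers installed by the plotting system\nxy.clear('all')                 # removes everything; equivalent to xy.clear()\n```",
"_____no_output_____"
]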
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4a06b257e04538c5ee8ae1314f866e2225bf2a60
| 283,517 |
ipynb
|
Jupyter Notebook
|
wex1_normal_distribution.ipynb
|
arcticv/python
|
9e4abab611d268dba8fb56098112ff339d7df489
|
[
"MIT"
] | null | null | null |
wex1_normal_distribution.ipynb
|
arcticv/python
|
9e4abab611d268dba8fb56098112ff339d7df489
|
[
"MIT"
] | null | null | null |
wex1_normal_distribution.ipynb
|
arcticv/python
|
9e4abab611d268dba8fb56098112ff339d7df489
|
[
"MIT"
] | null | null | null | 428.921331 | 58,948 | 0.93466 |
[
[
[
"# random data with a normal distribution curve thrown on top\nfrom scipy.stats import norm\nx = np.random.rand(1000)\nfig, ax = plt.subplots()\nax = sns.distplot(x, fit=norm, kde=False)",
"_____no_output_____"
],
[
"# normal distribution vs. standard normal distribution\n# 3 flavors of code using numpy/scipy to get simulated normal distribution \n# scipy needed for the fit=norm in the seaborns distplot\nfrom scipy.stats import norm\n\n# flavor 1: normal distribution providing a mean and standard deviation\nmu, sigma = 0, 1 # mean and standard deviation\nx1 = np.random.normal(mu, sigma, 4000)\n\n# flavor 2: standard normal distribution with mean = 0 and standard deviation = 1\nx2 = np.random.randn(4000) \n\n# flavor 3: scipy way\nx3_x = []\nx3_y = []\nfor i in np.arange(-4,4,0.01):\n x3_x.append(i)\n x3_y.append(norm.pdf(i))\n \n\n# output 3\nfig, ax = plt.subplots()\nax = sns.distplot(x1, fit=norm, kde=False)\nax = sns.distplot(x2, fit=norm, kde=False)\nax = sns.lineplot(x=x3_x,y=x3_y, color='red',ls='--')\nplt.tight_layout()\nplt.savefig(fname='normal distribution.png', dpi=150)\nplt.show()",
"_____no_output_____"
],
[
"# Normal Distribution Example\n# https://towardsdatascience.com/understanding-the-normal-distribution-with-python-e70bb855b027\nimport numpy as np\nfrom scipy.stats import norm\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n# Starting data in inches\n# Assume that the true mean height of a person is 5 feet 6 inches \n# and the true standard deviation is 1 foot (12 inches) \n# Also define a variable called “target” of 6 feet, which is the height that our friend inquired about\nmean_height = 5.5*12\nstdev_height = 1*12\ntarget = 6*12\n",
"_____no_output_____"
],
[
"# make a 10,000 by 10 array to hold our survey results, where each row is a survey of people heights\n# Could have given the random variable function a mean and a standard deviation, \n# but manually customizing the mean and standard deviation of your random variable gives more intuition:\n# A) You can shift the central location to where you want it\n# B) You can shift the standard deviation of normally distributed random variable, by multiplying a constant\n# np.random.normal generates a random number normally distributed with mean of 0 and a standard deviation of 1\n# we customize the variable by multiplying volatility and then adding mean height to shift central location\nrandom_variable = mean_height + np.random.normal()*stdev_height\n\n# populate the array \nheight_surveys = np.zeros((10000,10))\nfor i in range(height_surveys.shape[0]):\n for j in range(height_surveys.shape[1]):\n height_surveys[i,j] = mean_height + np.random.normal()*stdev_height\n\nprint('Mean Height: ', round(np.mean(height_surveys)/12,1), ' feet')\nprint('Standard Deviation of Height: ', round(np.var(height_surveys)**0.5 / 12, 1), ' feet')\n",
"Mean Height: 5.5 feet\nStandard Deviation of Height: 1.0 feet\n"
],
[
"# Multi Plot using Seaborns subplot sns\n# f, axes = plt.subplots(1, 2)\n# sns.boxplot( y=\"b\", x= \"a\", data=df, orient='v' , ax=axes[0])\n# sns.boxplot( y=\"c\", x= \"a\", data=df, orient='v' , ax=axes[1])\nfig3 = plt.figure(figsize=(12,8))\ngs = fig3.add_gridspec(ncols=4, nrows=5)\nax1 = fig3.add_subplot(gs[0:2, :])\nax1.set_title('Plot 1: Single Sample Distribution')\nax2 = fig3.add_subplot(gs[2:5, 0:2])\nax2.set_title('Plot 2: Population Distribution')\nax3 = fig3.add_subplot(gs[2:5, -2:])\nax3.set_title('Plot 3: Sample Generated Distribution vs Population Distribution')\nplt.style.use('ggplot') # ggplot tableau-colorblind10 seaborn-dark-palette bmh default\n\n# Plot 1 - One Sample\n# randomly pick one sample and plot it out = single sample histogram analysis\n\n# sample_survey = height_surveys[1][:]\nax1.hist(sample_survey, label='People Height')\n\n# vertical line to show the mean\nax1.axvline(x=sample_survey.mean(), ymin=0, ymax=1, label=('single sample mean=' + str(round(sample_survey.mean(),2))))\nax1.set_xlabel(\"Height in Inches\",fontsize=12)\nax1.set_ylabel(\"Frequency\",fontsize=12)\n# randomly picked sample again #2\nprint('Sample Standard Deviation: ' , np.var(height_surveys[2])**0.5, '') # = 12.5\n\n\n\n# Plot 2\n# now look at all samples\n\n# histogram of all samples to show true population mean across all surveys\nsns.distplot(np.mean(height_surveys,axis=1), \n kde=False, label='People Height', color='blue', ax=ax2)\nax2.set_xlabel(\"Height in Inches\",fontsize=12)\nax2.set_ylabel(\"Frequency\",fontsize=12)\nax2.axvline(x=target, color='red', label=('target=' + str(round(target,2))))\nax2.legend()\nplt.tight_layout()\n\n\n\n\n# Plot 3\n# compare sample mean vs. true mean\n\n# plot histogram to show all means for each survey, for all surveys\nsns.distplot(np.mean(height_surveys,axis=1), \n kde=False, label='True', color='blue', ax=ax3)\nax3.set_xlabel(\"Height in Inches\",fontsize=14)\nax3.set_ylabel(\"Frequency\",fontsize=14)\nax3.axvline(x=target, color='red', label=('target=' + str(round(target,2))))\n\n# Calculate stats using single sample\nsample_mean = np.mean(height_surveys[3])\nsample_stdev = np.var(height_surveys[3])**0.5\n\n# Calculate standard error = sample std dev / sqrt(N) where N = 10 in this example\nstd_error = sample_stdev/(height_surveys[3].shape[0])**0.5\n\n########IMPORTANT#############################\n# Infer distribution using single sample into 10000 samples \ninferred_distribution = [sample_mean + np.random.normal()*\\\n std_error for i in range(10000)]\n\n# Plot histogram of inferred distribution \nsns.distplot(inferred_distribution, kde=False, label='Inferred', color='red', ax=ax3)\nax3.set_xlabel(\"Height in Inches\",fontsize=12)\nax3.set_ylabel(\"Frequency\",fontsize=12)\nax3.legend()\nplt.tight_layout()\n\n# If jupyter output starts to scroll, how to remove/undo/turn off the scroll output cell:\n# I just placed my cursor in the grey box next to the output and clicked and then all of the output was displayed.",
"Sample Standard Deviation: 9.69336629245248 \n"
],
[
"# true vs all surveys combined\nfig, ax = plt.subplots(figsize=(12,8))\n\n# True Population\nsns.distplot(np.mean(height_surveys,axis=1), kde=False, \n label='True', color='darkblue', hist_kws = {\"alpha\": 1.0}, ax=ax)\n\nax.axvline(x=target, color='red', label=('target=' + str(round(target,2))))\n\n# If you unwrap all surveys, can make distribution wider than True\nsns.distplot(height_surveys.flatten()[:height_surveys.shape[0]], \n kde=False, \n label='All Surveys Height Distribution', hist_kws = {\"alpha\": 0.2}, ax=ax)\n\nax.set_xlabel(\"Height in Inches\",fontsize=14)\nax.set_ylabel(\"Frequency\",fontsize=14)\nplt.legend()",
"_____no_output_____"
],
[
"# Variance Analysis (x axis should be smaller now)\n\n# The distribution of the sample standard deviations is also roughly normal (Plot it to prove it)\n# Remember that a sample standard deviation is the standard deviation of a single survey of 10 people)\n\n# Take each sample, calculate the standard deviation of each sample, plot it\n# Then analyze the \"Standard Deviation of Standard Deviations\"\n# How much does the standard deviation vary?\n\n\n# Calculate the standard deviation for each sample (square root of the variance)\n# np.var:\n# axis = None is default: The default is to compute the variance of the flattened array.\n# axis = 1 calculates each row (axis = 0: calculates down each of the columns)\nvolatility_dist = ( np.var(height_surveys,axis=1) )**0.5\n\n# Histogram to show distribution of 10000 sample standard deviation\nfig, ax = plt.subplots(figsize=(12,8))\nsns.set_style('dark')\nsns.distplot(volatility_dist, kde=False, label='The Distribution of the Sample Standard Deviations', color='navy',hist_kws = {\"alpha\": 0.8})\nax.set_xlabel(\"Inches\",fontsize=12)\nax.set_ylabel(\"Frequency\",fontsize=12)\nax.legend()\n\n\n###################################################################################################\n\n# Add more analysis\n# How does standard deviation distribution magnitude compare to previous plot (sample distribution of heights)?\n\n# standard error = standard deviation / sqrt(N)\nSE_dist = volatility_dist/(height_surveys.shape[1]**0.5)\nsns.distplot(SE_dist, kde=False, label='Sample Standard Error Distribution1', color='red', ax=ax)\n\nax.set_xlabel(\"Height in Inches\",fontsize=12)\nax.set_ylabel(\"Frequency\",fontsize=12)\nax.legend()\nplt.tight_layout()\n\n\n\n###################################################################################################\n\n\n# Annotate with text + Arrow\n# Label and coordinate\nax.annotate('This standard deviation is so wide!', \n xy=(5, 50), xytext=(17, 50),\n arrowprops={'arrowstyle': '<->','lw': 2, 'color': 'red'}, va='center') # Custom arrow\n\n# regular text\nax.text(20,400, 'hello', fontsize=12)\n\n# boxed text\nax.text(20, 550, 'boxed italics text in data coords', style='italic',\n bbox={'facecolor': 'red', 'alpha': 0.5, 'pad': 10})\n\n# equation text\nax.text(20, 500, r'an equation: $E=mc^2$', fontsize=15)\n\nplt.tight_layout()\n\n\n",
"_____no_output_____"
],
[
"# Increasing N observations to shrink the standard error\n\nrandom_variable = mean_height + np.random.normal()*stdev_height\n# populate the array \nheight_surveys = np.zeros((10000,30))\nfor i in range(height_surveys.shape[0]):\n for j in range(height_surveys.shape[1]):\n height_surveys[i,j] = mean_height + np.random.normal()*stdev_height\n\nprint('Mean Height: ', round(np.mean(height_surveys)/12,1), ' feet')\nprint('Standard Deviation of Height: ', round(np.var(height_surveys)**0.5 / 12, 1), ' feet')\n\n# Variance Analysis (x axis should be smaller now)\n\n# The distribution of the sample standard deviations is also roughly normal (Plot it to prove it)\n# Remember that a sample standard deviation is the standard deviation of a single survey of 10 people)\n\n# Take each sample, calculate the standard deviation of each sample, plot it\n# Then analyze the \"Standard Deviation of Standard Deviations\"\n# How much does the standard deviation vary?\n\n\n# Calculate the standard deviation for each sample (square root of the variance)\n# np.var:\n# axis = None is default: The default is to compute the variance of the flattened array.\n# axis = 1 calculates each row (axis = 0: calculates down each of the columns)\nvolatility_dist = ( np.var(height_surveys,axis=1) )**0.5\n\n# Histogram to show distribution of 10000 sample standard deviation\nfig, ax = plt.subplots(figsize=(12,8))\nsns.set_style('dark')\nsns.distplot(volatility_dist, kde=False, label='The Distribution of the Sample Standard Deviations', \n color='navy',hist_kws = {\"alpha\": 0.8})\nax.set_xlabel(\"Inches\",fontsize=12)\nax.set_ylabel(\"Frequency\",fontsize=12)\nax.legend()\n\n\n\n\n###################################################################################################\n\n# Add more analysis\n# How does standard deviation distribution magnitude compare to previous plot (sample distribution of heights)?\n\n# standard error = standard deviation / sqrt(N)\nSE_dist2 = volatility_dist/(height_surveys.shape[1]**0.5)\nsns.distplot(SE_dist2 , kde=False, label='Sample Standard Error Distribution2', color='red', ax=ax)\nsns.distplot(SE_dist , kde=False, label='Sample Standard Error Distribution1', color='orange', ax=ax)\n\nax.set_xlabel(\"Height in Inches\",fontsize=12)\nax.set_ylabel(\"Frequency\",fontsize=12)\nax.legend()\nplt.tight_layout()\n\n\n\n###################################################################################################\n\n\n# Annotate with text + Arrow\n# Label and coordinate\nax.annotate('This standard deviation is so wide!', \n xy=(5, 50), xytext=(17, 50),\n arrowprops={'arrowstyle': '<->','lw': 2, 'color': 'red'}, va='center') # Custom arrow\n\n# regular text\nax.text(20,400, 'hello', fontsize=12)\n\n# boxed text\nax.text(20, 550, 'boxed italics text in data coords', style='italic',\n bbox={'facecolor': 'red', 'alpha': 0.5, 'pad': 10})\n\n# equation text\nax.text(20, 500, r'an equation: $E=mc^2$', fontsize=15)\n\nplt.tight_layout()\n\n\n\n",
"Mean Height: 5.5 feet\nStandard Deviation of Height: 1.0 feet\n"
],
[
"# Calculate the probability of a person being 6 feet or taller\n\n\n# Method 1: using simulation generating 10000 random variables\ninferred_dist = [sample_mean + np.random.normal()*std_error for i in range(10000)]\nprobability1 = sum([1 for i in inferred_dist if i>=target])/len(inferred_dist)\nprint('The simulated probability is ', probability1)\n\n# Method 2: using cumulative distribution function \nprobability2 = 1 - norm.cdf(target, loc=sample_mean, scale=std_error)\nprint('The calculated probability is ', round(probability2,5))",
"The simulated probability is 0.0714\nThe calculated probability is 0.07386\n"
],
[
"'''\nPlot a normally distributed random variable - and samples of this process - using scipy's univariate probability distributions.\n'''\n\nfrom scipy.stats import norm\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\n# Define parameters for normal distribution.\nmu = 0\nsigma = 5\nrng = range(-30,30)\n\n# Generate normal distribution with given mean and standard deviation.\ndist = norm(mu, sigma)\n\n# Plot probability density function and of this distribution.\n# the pdf() method takes takes in a list x values and returns a list of y's.\nplt.figure(figsize=(10, 10))\nplt.subplot(311) # Creates a 3 row, 1 column grid of plots, and renders the following chart in slot 1.\nplt.plot(rng, dist.pdf(rng), 'r', linewidth=2)\nplt.title('Probability density function of normal distribution')\n\n\n# Plot probability density function and of this distribution.\nplt.subplot(312)\nplt.plot(rng, dist.cdf(rng))\nplt.title('Cumulutative distribution function of normal distribution')\n\n# Draw 1000 samples from the random variable.\nsample = dist.rvs(size=10000)\n\nprint (\"Sample descriptive statistics:\")\nprint (pd.DataFrame(sample).describe())\n\n# Plot a histogram of the samples.\nplt.subplot(313)\nplt.hist(sample, bins=50, density=True)\nplt.plot(rng, dist.pdf(rng), 'r--', linewidth=2)\nplt.title('10,000 random samples from normal distribution')\n\n\n# Show all plots.\nplt.show()",
"Sample descriptive statistics:\n 0\ncount 10000.000000\nmean 0.065080\nstd 5.008509\nmin -21.574350\n25% -3.342049\n50% 0.033659\n75% 3.421885\nmax 19.326905\n"
],
[
"print('x')",
"x\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a06c69d10a0aa8349accd6361ca271c67f65806
| 3,870 |
ipynb
|
Jupyter Notebook
|
week4/regex.ipynb
|
thomasnilsson/02805-social-graphs-2018
|
45199300e3b8ab85f5c051f3f9f473a229eb5253
|
[
"MIT"
] | 1 |
2019-02-22T18:55:16.000Z
|
2019-02-22T18:55:16.000Z
|
week4/regex.ipynb
|
thomasnilsson/02805-social-graphs-2018
|
45199300e3b8ab85f5c051f3f9f473a229eb5253
|
[
"MIT"
] | null | null | null |
week4/regex.ipynb
|
thomasnilsson/02805-social-graphs-2018
|
45199300e3b8ab85f5c051f3f9f473a229eb5253
|
[
"MIT"
] | 1 |
2020-11-07T10:00:05.000Z
|
2020-11-07T10:00:05.000Z
| 21.741573 | 194 | 0.510336 |
[
[
[
"# Week 4\n## Prelude: RegEx\n**Explain in your own words: what are regular expressions?**\nRegEx is a way of encapsulating the structure of regular languages, that is, languages which can be described with a finite state automaton. \nFor our purposes of parsing text, we can consider English a regular language, even though matching brackets cannot be achieved by a regex, and one needs a context free grammar to do so.\n\n**Provide an example of a regex to match 4 digits numbers**",
"_____no_output_____"
]
],
[
[
"import urllib2\nresponse = urllib2.urlopen('https://raw.githubusercontent.com/suneman/socialgraphs2017/master/files/test.txt')\nhtml = response.read()",
"_____no_output_____"
]
],
[
[
"The RegEx: **\\b(\\d{4})\\b** will catch any 4 digit number, delimited by two boundaries.",
"_____no_output_____"
]
],
[
[
"import re\npattern = r'\\b(\\d{4})\\b'\nre.findall(pattern, html)",
"_____no_output_____"
]
],
[
[
"**Provide an example of a regex to match words starting with \"super\".**\n\nSolution: \"\\b(super\\w*)\\b\"",
"_____no_output_____"
]
],
[
[
"pattern_super = r'\\b(super\\w*)\\b'\nre.findall(pattern_super, html)",
"_____no_output_____"
]
],
[
[
"**Wikipedia Links**",
"_____no_output_____"
]
],
[
[
"first = r'\\[\\[.*?\\]\\]'\nre.findall(first, html)",
"_____no_output_____"
],
[
"# (?:) indicates a non-capturing group",
"_____no_output_____"
],
[
"full_coverage = r'\\[\\[([^\\]]*?)(?:\\|.*?)*\\]\\]'\nre.findall(full_coverage, html)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
4a06cba97f3fc3df3b2b157f33ef549320f34749
| 28,882 |
ipynb
|
Jupyter Notebook
|
tutorials/nlp/Entity_Linking_Medical.ipynb
|
dimapihtar/NeMo
|
167aa5e80de9a20bc77ac152527b085bcc566863
|
[
"Apache-2.0"
] | null | null | null |
tutorials/nlp/Entity_Linking_Medical.ipynb
|
dimapihtar/NeMo
|
167aa5e80de9a20bc77ac152527b085bcc566863
|
[
"Apache-2.0"
] | null | null | null |
tutorials/nlp/Entity_Linking_Medical.ipynb
|
dimapihtar/NeMo
|
167aa5e80de9a20bc77ac152527b085bcc566863
|
[
"Apache-2.0"
] | null | null | null | 45.699367 | 1,020 | 0.6458 |
[
[
[
"\"\"\"\nYou can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.\n\nInstructions for setting up Colab are as follows:\n1. Open a new Python 3 notebook.\n2. Import this notebook from GitHub (File -> Upload Notebook -> \"GITHUB\" tab -> copy/paste GitHub URL)\n3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select \"GPU\" for hardware accelerator)\n4. Run this cell to set up dependencies.\n\"\"\"\n\n## Install NeMo if using google collab or if its not installed locally\nBRANCH = 'r1.7.0'\n!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[all]",
"_____no_output_____"
],
[
"## Install dependencies\n!pip install wget\n!pip install faiss-gpu",
"_____no_output_____"
],
[
"import faiss\nimport torch\nimport wget\nimport os\nimport numpy as np\nimport pandas as pd\n\nfrom omegaconf import OmegaConf\nfrom pytorch_lightning import Trainer\nfrom IPython.display import display\nfrom tqdm import tqdm\n\nfrom nemo.collections import nlp as nemo_nlp\nfrom nemo.utils.exp_manager import exp_manager",
"_____no_output_____"
]
],
[
[
"## Entity Linking",
"_____no_output_____"
],
[
"#### Task Description\n[Entity linking](https://en.wikipedia.org/wiki/Entity_linking) is the process of connecting concepts mentioned in natural language to their canonical forms stored in a knowledge base. For example, say a knowledge base contained the entity 'ID3452 influenza' and we wanted to process some natural language containing the sentence \"The patient has flu like symptoms\". An entity linking model would match the word 'flu' to the knowledge base entity 'ID3452 influenza', allowing for disambiguation and normalization of concepts referenced in text. Entity linking applications range from helping automate data ingestion to assisting in real time dialogue concept normalization. We will be focusing on entity linking in the medical domain for this demo, but the entity linking model, dataset, and training code within NVIDIA NeMo can be applied to other domains like finance and retail.\n\nWithin NeMo and this tutorial we use the entity linking approach described in Liu et. al's NAACL 2021 \"[Self-alignment Pre-training for Biomedical Entity Representations](https://arxiv.org/abs/2010.11784v2)\". The main idea behind this approach is to reshape an initial concept embedding space such that synonyms of the same concept are pulled closer together and unrelated concepts are pushed further apart. The concept embeddings from this reshaped space can then be used to build a knowledge base embedding index. This index stores concept IDs mapped to their respective concept embeddings in a format conducive to efficient nearest neighbor search. We can link query concepts to their canonical forms in the knowledge base by performing a nearest neighbor search- matching concept query embeddings to the most similar concepts embeddings in the knowledge base index. \n\nIn this tutorial we will be using the [faiss](https://github.com/facebookresearch/faiss) library to build our concept index.",
"_____no_output_____"
],
[
"#### Self Alignment Pretraining\nSelf-Alignment pretraining is a second stage pretraining of an existing encoder (called second stage because the encoder model can be further finetuned after this more general pretraining step). The dataset used during training consists of pairs of concept synonyms that map to the same ID. At each training iteration, we only select *hard* examples present in the mini batch to calculate the loss and update the model weights. In this context, a hard example is an example where a concept is closer to an unrelated concept in the mini batch than it is to the synonym concept it is paired with by some margin. I encourage you to take a look at [section 2 of the paper](https://arxiv.org/pdf/2010.11784.pdf) for a more formal and in depth description of how hard examples are selected.\n\nWe then use a [metric learning loss](https://openaccess.thecvf.com/content_CVPR_2019/papers/Wang_Multi-Similarity_Loss_With_General_Pair_Weighting_for_Deep_Metric_Learning_CVPR_2019_paper.pdf) calculated from the hard examples selected. This loss helps reshape the embedding space. The concept representation space is rearranged to be more suitable for entity matching via embedding cosine similarity. \n\nNow that we have idea of what's going on, let's get started!",
"_____no_output_____"
],
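[
"To make the idea of *hard* example selection concrete, here is a small conceptual sketch (our own simplified illustration, not NeMo's actual implementation) of how such pairs could be mined from a mini batch of concept embeddings:\n\n```python\nimport torch\nimport torch.nn.functional as F\n\ndef mine_hard_positive_pairs(embeddings, concept_ids, margin=0.2):\n    # embeddings: (n, d) float tensor, concept_ids: (n,) long tensor\n    embs = F.normalize(embeddings, dim=1)\n    sim = embs @ embs.T                      # pairwise cosine similarities\n    same_concept = concept_ids.unsqueeze(0) == concept_ids.unsqueeze(1)\n\n    hard_pairs = []\n    n = embeddings.shape[0]\n    for i in range(n):\n        neg_sims = sim[i][~same_concept[i]]  # similarities to unrelated concepts\n        hardest_neg = neg_sims.max() if len(neg_sims) > 0 else -1.0\n        for j in range(n):\n            if i == j or not same_concept[i, j]:\n                continue                     # only consider synonym (positive) pairs\n            # The pair is 'hard' if some unrelated concept is closer (within a margin)\n            # to the anchor than the synonym it is paired with\n            if hardest_neg > sim[i, j] - margin:\n                hard_pairs.append((i, j))\n    return hard_pairs\n```\n\nOnly pairs selected this way would contribute to the metric learning loss; the margin and the loss itself are defined precisely in the paper linked above.",
"_____no_output_____"
],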
[
"## Dataset Preprocessing",
"_____no_output_____"
]
],
[
[
"# Download data into project directory\nPROJECT_DIR = \".\" #Change if you don't want the current directory to be the project dir\nDATA_DIR = os.path.join(PROJECT_DIR, \"tiny_example_data\")\n\nif not os.path.isdir(os.path.join(DATA_DIR)):\n wget.download('https://dldata-public.s3.us-east-2.amazonaws.com/tiny_example_data.zip',\n os.path.join(PROJECT_DIR, \"tiny_example_data.zip\"))\n\n !unzip {PROJECT_DIR}/tiny_example_data.zip -d {PROJECT_DIR}",
"_____no_output_____"
]
],
[
[
"In this tutorial we will be using a tiny toy dataset to demonstrate how to use NeMo's entity linking model functionality. The dataset includes synonyms for 12 medical concepts. Entity phrases with the same ID are synonyms for the same concept. For example, \"*chronic kidney failure*\", \"*gradual loss of kidney function*\", and \"*CKD*\" are all synonyms of concept ID 5. Here's the dataset before preprocessing:",
"_____no_output_____"
]
],
[
[
"raw_data = pd.read_csv(os.path.join(DATA_DIR, \"tiny_example_dev_data.csv\"), names=[\"ID\", \"CONCEPT\"], index_col=False)\nprint(raw_data)",
"_____no_output_____"
]
],
[
[
"We've already paired off the concepts for this dataset with the format `ID concept_synonym1 concept_synonym2`. Here are the first ten rows:",
"_____no_output_____"
]
],
[
[
"training_data = pd.read_table(os.path.join(DATA_DIR, \"tiny_example_train_pairs.tsv\"), names=[\"ID\", \"CONCEPT_SYN1\", \"CONCEPT_SYN2\"], delimiter='\\t')\nprint(training_data.head(10))",
"_____no_output_____"
]
],
[
[
"Use the [Unified Medical Language System (UMLS)](https://www.nlm.nih.gov/research/umls/index.html) dataset for full medical domain entity linking training. The data contains over 9 million entities and is a table of medical concepts with their corresponding concept IDs (CUI). After [requesting a free license and making a UMLS Terminology Services (UTS) account](https://www.nlm.nih.gov/research/umls/index.html), the [entire UMLS dataset](https://www.nlm.nih.gov/research/umls/licensedcontent/umlsknowledgesources.html) can be downloaded from the NIH's website. If you've cloned the NeMo repo you can run the data processing script located in `examples/nlp/entity_linking/data/umls_dataset_processing.py` on the full dataset. This script will take in the initial table of UMLS concepts and produce a .tsv file with each row formatted as `CUI\\tconcept_synonym1\\tconcept_synonym2`. Once the UMLS dataset .RRF file is downloaded, the script can be run from the `examples/nlp/entity_linking` directory like so: \n```\npython data/umls_dataset_processing.py\n```",
"_____no_output_____"
],
[
"## Model Training",
"_____no_output_____"
],
[
"Second stage pretrain a BERT Base encoder on the self-alignment pretraining task (SAP) for improved entity linking. Using a GPU, the model should take 5 minutes or less to train on this example dataset and training progress will be output below the cell.",
"_____no_output_____"
]
],
[
[
"#Download config\nwget.download(\"https://raw.githubusercontent.com/vadam5/NeMo/main/examples/nlp/entity_linking/conf/tiny_example_entity_linking_config.yaml\",\n os.path.join(PROJECT_DIR, \"tiny_example_entity_linking_config.yaml\"))\n\n# Load in config file\ncfg = OmegaConf.load(os.path.join(PROJECT_DIR, \"tiny_example_entity_linking_config.yaml\"))\n\n# Set config file variables\ncfg.project_dir = PROJECT_DIR\ncfg.model.nemo_path = os.path.join(PROJECT_DIR, \"tiny_example_sap_bert_model.nemo\")\ncfg.model.train_ds.data_file = os.path.join(DATA_DIR, \"tiny_example_train_pairs.tsv\")\ncfg.model.validation_ds.data_file = os.path.join(DATA_DIR, \"tiny_example_validation_pairs.tsv\")\n\n# remove distributed training flags\ncfg.trainer.accelerator = None",
"_____no_output_____"
],
[
"# Initialize the trainer and model\ntrainer = Trainer(**cfg.trainer)\nexp_manager(trainer, cfg.get(\"exp_manager\", None))\nmodel = nemo_nlp.models.EntityLinkingModel(cfg=cfg.model, trainer=trainer)",
"_____no_output_____"
],
[
"# Train and save the model\ntrainer.fit(model)\nmodel.save_to(cfg.model.nemo_path)",
"_____no_output_____"
]
],
[
[
"You can run the script at `examples/nlp/entity_linking/self_alignment_pretraining.py` to train a model on a larger dataset. Run\n\n```\npython self_alignment_pretraining.py project_dir=.\n```\nfrom the `examples/nlp/entity_linking` directory.",
"_____no_output_____"
],
[
"## Model Evaluation\n\nLet's evaluate our freshly trained model and compare its performance with a BERT Base encoder that hasn't undergone self-alignment pretraining. We first need to restore our trained model and load our BERT Base Baseline model.",
"_____no_output_____"
]
],
[
[
"device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n\n# Restore second stage pretrained model\nsap_model_cfg = cfg\nsap_model_cfg.index.index_save_name = os.path.join(PROJECT_DIR, \"tiny_example_entity_linking_index\")\nsap_model_cfg.index.index_ds.data_file = os.path.join(DATA_DIR, \"tiny_example_index_data.tsv\")\nsap_model = nemo_nlp.models.EntityLinkingModel.restore_from(sap_model_cfg.model.nemo_path).to(device)\n\n# Load original model\nbase_model_cfg = OmegaConf.load(os.path.join(PROJECT_DIR, \"tiny_example_entity_linking_config.yaml\"))\n\n# Set train/val datasets to None to avoid loading datasets associated with training\nbase_model_cfg.model.train_ds = None\nbase_model_cfg.model.validation_ds = None\nbase_model_cfg.index.index_save_name = os.path.join(PROJECT_DIR, \"base_model_index\")\nbase_model_cfg.index.index_ds.data_file = os.path.join(DATA_DIR, \"tiny_example_index_data.tsv\")\nbase_model = nemo_nlp.models.EntityLinkingModel(base_model_cfg.model).to(device)",
"_____no_output_____"
]
],
[
[
"We are going evaluate our model on a nearest neighbor task using top 1 and top 5 accuracies as our metric. We will be using a tiny example test knowledge base and test queries. For this evaluation we are going to be comparing every test query with every concept vector in our test set knowledge base. We will rank each item in the knowledge base by its cosine similarity with the test query. We'll then compare the IDs of the predicted most similar test knowledge base concepts with our ground truth query IDs to calculate top 1 and top 5 accuracies. For this metric higher is better.",
"_____no_output_____"
]
],
[
[
"# Helper function to get data embeddings\ndef get_embeddings(model, dataloader):\n embeddings, cids = [], []\n\n with torch.no_grad():\n for batch in tqdm(dataloader):\n input_ids, token_type_ids, attention_mask, batch_cids = batch\n batch_embeddings = model.forward(input_ids=input_ids.to(device), \n token_type_ids=token_type_ids.to(device), \n attention_mask=attention_mask.to(device))\n\n # Accumulate index embeddings and their corresponding IDs\n embeddings.extend(batch_embeddings.cpu().detach().numpy())\n cids.extend(batch_cids)\n \n return embeddings, cids",
"_____no_output_____"
],
[
"def evaluate(model, test_kb, test_queries, ks):\n # Initialize knowledge base and query data loaders\n test_kb_dataloader = model.setup_dataloader(test_kb, is_index_data=True)\n test_query_dataloader = model.setup_dataloader(test_queries, is_index_data=True)\n \n # Get knowledge base and query embeddings\n test_kb_embs, test_kb_cids = get_embeddings(model, test_kb_dataloader)\n test_query_embs, test_query_cids = get_embeddings(model, test_query_dataloader)\n\n # Calculate the cosine distance between each query and knowledge base concept\n score_matrix = np.matmul(np.array(test_query_embs), np.array(test_kb_embs).T)\n accs = {k : 0 for k in ks}\n \n # Compare the knowledge base IDs of the knowledge base entities with \n # the smallest cosine distance from the query \n for query_idx in tqdm(range(len(test_query_cids))):\n query_emb = test_query_embs[query_idx]\n query_cid = test_query_cids[query_idx]\n query_scores = score_matrix[query_idx]\n\n for k in ks:\n topk_idxs = np.argpartition(query_scores, -k)[-k:]\n topk_cids = [test_kb_cids[idx] for idx in topk_idxs]\n \n # If the correct query ID is amoung the top k closest kb IDs\n # the model correctly linked the entity\n match = int(query_cid in topk_cids)\n accs[k] += match\n\n for k in ks:\n accs[k] /= len(test_query_cids)\n \n return accs",
"_____no_output_____"
],
[
"# Create configs for our test data\ntest_kb = OmegaConf.create({\n \"data_file\": os.path.join(DATA_DIR, \"tiny_example_test_kb.tsv\"),\n \"max_seq_length\": 128,\n \"batch_size\": 10,\n \"shuffle\": False,\n})\n\ntest_queries = OmegaConf.create({\n \"data_file\": os.path.join(DATA_DIR, \"tiny_example_test_queries.tsv\"),\n \"max_seq_length\": 128,\n \"batch_size\": 10,\n \"shuffle\": False,\n})\n\nks = [1, 5]\n\n# Evaluate both models on our test data\nbase_accs = evaluate(base_model, test_kb, test_queries, ks)\nbase_accs[\"Model\"] = \"BERT Base Baseline\"\n\nsap_accs = evaluate(sap_model, test_kb, test_queries, ks)\nsap_accs[\"Model\"] = \"BERT + SAP\"\n\nprint(\"Top 1 and Top 5 Accuracy Comparison:\")\nresults_df = pd.DataFrame([base_accs, sap_accs], columns=[\"Model\", 1, 5])\nresults_df = results_df.style.set_properties(**{'text-align': 'left', }).set_table_styles([dict(selector='th', props=[('text-align', 'left')])])\ndisplay(results_df)",
"_____no_output_____"
]
],
[
[
"The purpose of this section was to show an example of evaluating your entity linking model. This evaluation set contains very little data, and no serious conclusions should be drawn about model performance. Top 1 accuracy should be between 0.7 and 1.0 for both models and top 5 accuracy should be between 0.8 and 1.0. When evaluating a model trained on a larger dataset, you can use a nearest neighbors index to speed up the evaluation time.",
"_____no_output_____"
],
[
"## Building an Index",
"_____no_output_____"
],
[
"To qualitatively observe the improvement we gain from the second stage pretraining, let's build two indices. One will be built with BERT base embeddings before self-alignment pretraining and one will be built with the model we just trained. Our knowledge base in this tutorial will be in the same domain and have some overlapping concepts as the training set. This data file is formatted as `ID\\tconcept`.",
"_____no_output_____"
],
[
"The `EntityLinkingDataset` class can load the data used for training the entity linking encoder as well as for building the index if the `is_index_data` flag is set to true. ",
"_____no_output_____"
]
],
[
[
"def build_index(cfg, model):\n # Setup index dataset loader\n index_dataloader = model.setup_dataloader(cfg.index.index_ds, is_index_data=True)\n \n # Get index dataset embeddings\n embeddings, _ = get_embeddings(model, index_dataloader)\n \n # Train IVFFlat index using faiss\n embeddings = np.array(embeddings)\n quantizer = faiss.IndexFlatL2(cfg.index.dims)\n index = faiss.IndexIVFFlat(quantizer, cfg.index.dims, cfg.index.nlist)\n index = faiss.index_cpu_to_all_gpus(index)\n index.train(embeddings)\n \n # Add concept embeddings to index\n for i in tqdm(range(0, embeddings.shape[0], cfg.index.index_batch_size)):\n index.add(embeddings[i:i+cfg.index.index_batch_size])\n\n # Save index\n faiss.write_index(faiss.index_gpu_to_cpu(index), cfg.index.index_save_name)",
"_____no_output_____"
],
[
"build_index(sap_model_cfg, sap_model.to(device))\nbuild_index(base_model_cfg, base_model.to(device))",
"_____no_output_____"
]
],
[
[
"## Entity Linking via Nearest Neighbor Search",
"_____no_output_____"
],
[
"Now it's time to query our indices! We are going to query both our index built with embeddings from BERT Base, and our index with embeddings built from the SAP BERT model we trained. Our sample query phrases will be \"*high blood sugar*\" and \"*head pain*\". \n\nTo query our indices, we first need to get the embedding of each query from the corresponding encoder model. We can then pass these query embeddings into the faiss index which will perform a nearest neighbor search, using cosine distance to compare the query embedding with embeddings present in the index. Once we get a list of knowledge base index concept IDs most closely matching our query, all that is left to do is map the IDs to a representative string describing the concept. ",
"_____no_output_____"
]
],
[
[
"def query_index(cfg, model, index, queries, id2string):\n # Get query embeddings from our entity linking encoder model\n query_embs = get_query_embedding(queries, model).cpu().detach().numpy()\n \n # Use query embedding to find closest concept embedding in knowledge base\n distances, neighbors = index.search(query_embs, cfg.index.top_n)\n \n # Get the canonical strings corresponding to the IDs of the query's nearest neighbors in the kb \n neighbor_concepts = [[id2string[concept_id] for concept_id in query_neighbor] \\\n for query_neighbor in neighbors]\n \n # Display most similar concepts in the knowledge base. \n for query_idx in range(len(queries)):\n print(f\"\\nThe most similar concepts to {queries[query_idx]} are:\")\n for cid, concept, dist in zip(neighbors[query_idx], neighbor_concepts[query_idx], distances[query_idx]):\n print(cid, concept, 1 - dist)\n\n \ndef get_query_embedding(queries, model):\n # Tokenize our queries\n model_input = model.tokenizer(queries,\n add_special_tokens = True,\n padding = True,\n truncation = True,\n max_length = 512,\n return_token_type_ids = True,\n return_attention_mask = True)\n \n # Pass tokenized input into model\n query_emb = model.forward(input_ids=torch.LongTensor(model_input[\"input_ids\"]).to(device),\n token_type_ids=torch.LongTensor(model_input[\"token_type_ids\"]).to(device),\n attention_mask=torch.LongTensor(model_input[\"attention_mask\"]).to(device))\n \n return query_emb",
"_____no_output_____"
],
[
"# Load indices\nsap_index = faiss.read_index(sap_model_cfg.index.index_save_name)\nbase_index = faiss.read_index(base_model_cfg.index.index_save_name)",
"_____no_output_____"
],
[
"# Map concept IDs to one canonical string\nindex_data = open(sap_model_cfg.index.index_ds.data_file, \"r\", encoding='utf-8-sig')\nid2string = {}\n\nfor line in index_data:\n cid, concept = line.split(\"\\t\")\n id2string[int(cid) - 1] = concept.strip()",
"_____no_output_____"
],
[
"id2string",
"_____no_output_____"
],
[
"# Some sample queries\nqueries = [\"high blood sugar\", \"head pain\"]\n\n# Query BERT Base\nprint(\"BERT Base output before Self Alignment Pretraining:\")\nquery_index(base_model_cfg, base_model, base_index, queries, id2string)\nprint(\"\\n\" + \"-\" * 50 + \"\\n\")\n\n# Query SAP BERT\nprint(\"SAP BERT output after Self Alignment Pretraining:\")\nquery_index(sap_model_cfg, sap_model, sap_index, queries, id2string)\nprint(\"\\n\" + \"-\" * 50 + \"\\n\")",
"_____no_output_____"
]
],
[
[
"Even after only training on this tiny amount of data, the qualitative performance boost from self-alignment pretraining is visible. The baseline model links \"*high blood sugar*\" to the entity \"*6 diabetes*\" while our SAP BERT model accurately links \"*high blood sugar*\" to \"*Hyperinsulinemia*\". Similarly, \"*head pain*\" and \"*Myocardial infraction*\" are not the same concept, but \"*head pain*\" and \"*Headache*\" are.",
"_____no_output_____"
],
[
"For larger knowledge bases keeping the default embedding size might be too large and cause out of memory issues. You can apply PCA or some other dimensionality reduction method to your data to reduce its memory footprint. Code for creating a text file of all the UMLS entities in the correct format needed to build an index and creating a dictionary mapping concept ids to canonical concept strings can be found here `examples/nlp/entity_linking/data/umls_dataset_processing.py`. \n\nThe code for extracting knowledge base concept embeddings, training and applying a PCA transformation to the embeddings, building a faiss index and querying the index from the command line is located at `examples/nlp/entity_linking/build_index.py` and `examples/nlp/entity_linking/query_index.py`. \n\nIf you've cloned the NeMo repo, both of these steps can be run as follows on the command line from the `examples/nlp/entity_linking/` directory.\n\n```\npython data/umls_dataset_processing.py --index\npython build_index.py --restore\npython query_index.py --restore\n```\nBy default the project directory will be \".\" but can be changed by adding the flag `--project_dir=<PATH>` after each of the above commands. Intermediate steps of the index building process are saved. In the occurrence of an error, previously completed steps do not need to be rerun. ",
"_____no_output_____"
],
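[
"As a rough illustration of the dimensionality reduction idea mentioned above (a hypothetical sketch using scikit-learn PCA, not necessarily the exact transformation the NeMo scripts apply), the reduction is fit on the knowledge base embeddings and the same fitted transform is then applied to every query embedding before searching:\n\n```python\nimport numpy as np\nfrom sklearn.decomposition import PCA\n\n# Stand-in for real knowledge base concept embeddings (e.g. 768-dim BERT vectors)\nkb_embeddings = np.random.rand(10000, 768).astype(np.float32)\n\n# Fit the reduction on the knowledge base embeddings\npca = PCA(n_components=256)\nkb_reduced = pca.fit_transform(kb_embeddings)\n\n# The faiss index would then be built with dims=256 over kb_reduced, and the\n# SAME fitted transform must be applied to query embeddings at search time:\n# query_reduced = pca.transform(query_embeddings)\n```",
"_____no_output_____"
],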
[
"## Command Recap",
"_____no_output_____"
],
[
"Here is a recap of the commands and steps to repeat this process on the full UMLS dataset. \n\n1) Download the UMLS dataset file `MRCONSO.RRF` from the NIH website and place it in the `examples/nlp/entity_linking/data` directory.\n\n2) Run the following commands from the `examples/nlp/entity_linking` directory\n```\npython data/umls_dataset_processing.py\npython self_alignment_pretraining.py project_dir=. \npython data/umls_dataset_processing.py --index\npython build_index.py --restore\npython query_index.py --restore\n```\nThe model will take ~24hrs to train on two GPUs and ~48hrs to train on one GPU. By default the project directory will be \".\" but can be changed by adding the flag `--project_dir=<PATH>` after each of the above commands and changing `project_dir=<PATH>` in the `self_alignment_pretraining.py` command. If you change the project directory, you should also move the `MRCONOSO.RRF` file to a `data` sub directory within the one you've specified. ",
"_____no_output_____"
],
[
"As mentioned in the introduction, entity linking within NVIDIA NeMo is not limited to the medical domain. The same data processing and training steps can be applied to a variety of domains and use cases. You can edit the datasets used as well as training and loss function hyperparameters within your config file to better suit your domain.",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
4a06ef8f6ecab9e2a8baed5532766459480d01db
| 55,637 |
ipynb
|
Jupyter Notebook
|
01_advanced_topics_machine_learning/assignments/notebooks/1_MLP_DNN/HW1_Building_your_Deep_Neural_Network.ipynb
|
rachelfanti/data_science
|
016ae7eb4185975fc488a52a8bb1dfa6380afd63
|
[
"MIT"
] | null | null | null |
01_advanced_topics_machine_learning/assignments/notebooks/1_MLP_DNN/HW1_Building_your_Deep_Neural_Network.ipynb
|
rachelfanti/data_science
|
016ae7eb4185975fc488a52a8bb1dfa6380afd63
|
[
"MIT"
] | null | null | null |
01_advanced_topics_machine_learning/assignments/notebooks/1_MLP_DNN/HW1_Building_your_Deep_Neural_Network.ipynb
|
rachelfanti/data_science
|
016ae7eb4185975fc488a52a8bb1dfa6380afd63
|
[
"MIT"
] | null | null | null | 38.317493 | 562 | 0.515826 |
[
[
[
"# Building your Deep Neural Network: Step by Step\n\nYou will implement all the building blocks of a neural network and use these building blocks to build a neural network of any architecture you want. By completing this assignment you will:\n\n- Develop an intuition of the over all structure of a neural network.\n\n- Write functions (e.g. forward propagation, backward propagation, logistic loss, etc...) that would help you decompose your code and ease the process of building a neural network.\n\n- Initialize/update parameters according to your desired structure.\n\n<!-- - In this notebook, you will implement all the functions required to build a deep neural network. -->\n\n<!-- - In the next assignment, you will use these functions to build a deep neural network for image classification. -->\n\n**After this assignment you will be able to:**\n- Use non-linear units like ReLU to improve your model\n- Build a deeper neural network (with more than 1 hidden layer)\n- Implement an easy-to-use neural network class\n\n**Notation**:\n- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer. \n - Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.\n- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example. \n - Example: $x^{(i)}$ is the $i^{th}$ training example.\n- Lowerscript $i$ denotes the $i^{th}$ entry of a vector.\n - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations).\n\nLet's get started!",
"_____no_output_____"
],
[
"## 1 - Packages\n\nLet's first import all the packages that you will need during this assignment. \n- [numpy](www.numpy.org) is the main package for scientific computing with Python.\n- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.\n- dnn_utils provides some necessary functions for this notebook.\n- testCases provides some test cases to assess the correctness of your functions\n- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don't change the seed. ",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport h5py\nimport matplotlib.pyplot as plt\nfrom testCases_v4a import *\nfrom dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n%load_ext autoreload\n%autoreload 2\n\nnp.random.seed(1)",
"_____no_output_____"
]
],
[
[
"## 2 - Outline of the Assignment\n\nTo build your neural network, you will be implementing several \"helper functions\". These helper functions will be used in the notebook \"HW1_Part2_Deep_Neural_Network_Application\" to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will:\n\n- Initialize the parameters for a two-layer network and for an $L$-layer neural network.\n- Implement the forward propagation module (shown in purple in the figure below).\n - Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$).\n - We give you the ACTIVATION function (relu/sigmoid).\n - Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.\n - Stack the [LINEAR->RELU] forward function L-1 time (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.\n- Compute the loss.\n- Implement the backward propagation module (denoted in red in the figure below).\n - Complete the LINEAR part of a layer's backward propagation step.\n - We give you the gradient of the ACTIVATE function (relu_backward/sigmoid_backward) \n - Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function.\n - Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function\n- Finally update the parameters.\n\n<img src=\"images/final outline.png\" style=\"width:800px;height:500px;\">\n<caption><center> **Figure 1**</center></caption><br>\n\n\n**Note** that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps. ",
"_____no_output_____"
],
[
"## 3 - Initialization\n\nYou will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one will generalize this initialization process to $L$ layers.\n\n### 3.1 - 2-layer Neural Network\n\n**Exercise**: Create and initialize the parameters of the 2-layer neural network.\n\n**Instructions**:\n- The model's structure is: *LINEAR -> RELU -> LINEAR -> SIGMOID*. \n- Use random initialization for the weight matrices. Use `np.random.randn(shape)*0.01` with the correct shape.\n- Use zero initialization for the biases. Use `np.zeros(shape)`.",
"_____no_output_____"
]
],
[
[
"def initialize_parameters(n_x, n_h, n_y):\n \"\"\"\n Argument:\n n_x -- size of the input layer\n n_h -- size of the hidden layer\n n_y -- size of the output layer\n \n Returns:\n parameters -- python dictionary containing your parameters:\n W1 -- weight matrix of shape (n_h, n_x)\n b1 -- bias vector of shape (n_h, 1)\n W2 -- weight matrix of shape (n_y, n_h)\n b2 -- bias vector of shape (n_y, 1)\n \"\"\"\n \n np.random.seed(1)\n \n ### START CODE HERE ### (≈ 4 lines of code)\n W1 = np.random.randn(n_h, n_x)*0.01\n b1 = np.zeros((n_h, 1))\n W2 = np.random.randn(n_y, n_h)*0.01\n b2 = np.zeros((n_y, 1))\n ### END CODE HERE ###\n \n assert(W1.shape == (n_h, n_x))\n assert(b1.shape == (n_h, 1))\n assert(W2.shape == (n_y, n_h))\n assert(b2.shape == (n_y, 1))\n \n parameters = {\"W1\": W1,\n \"b1\": b1,\n \"W2\": W2,\n \"b2\": b2}\n \n return parameters ",
"_____no_output_____"
],
[
"parameters = initialize_parameters(3,2,1)\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))",
"W1 = [[ 0.01624345 -0.00611756 -0.00528172]\n [-0.01072969 0.00865408 -0.02301539]]\nb1 = [[0.]\n [0.]]\nW2 = [[ 0.01744812 -0.00761207]]\nb2 = [[0.]]\n"
]
],
[
[
"**Expected output**:\n \n<table style=\"width:80%\">\n <tr>\n <td> **W1** </td>\n <td> [[ 0.01624345 -0.00611756 -0.00528172]\n [-0.01072969 0.00865408 -0.02301539]] </td> \n </tr>\n\n <tr>\n <td> **b1**</td>\n <td>[[ 0.]\n [ 0.]]</td> \n </tr>\n \n <tr>\n <td>**W2**</td>\n <td> [[ 0.01744812 -0.00761207]]</td>\n </tr>\n \n <tr>\n <td> **b2** </td>\n <td> [[ 0.]] </td> \n </tr>\n \n</table>",
"_____no_output_____"
],
[
"### 3.2 - L-layer Neural Network\n\nThe initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the `initialize_parameters_deep`, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then:\n\n<table style=\"width:100%\">\n\n\n <tr>\n <td> </td> \n <td> **Shape of W** </td> \n <td> **Shape of b** </td> \n <td> **Activation** </td>\n <td> **Shape of Activation** </td> \n <tr>\n \n <tr>\n <td> **Layer 1** </td> \n <td> $(n^{[1]},12288)$ </td> \n <td> $(n^{[1]},1)$ </td> \n <td> $Z^{[1]} = W^{[1]} X + b^{[1]} $ </td> \n \n <td> $(n^{[1]},209)$ </td> \n <tr>\n \n <tr>\n <td> **Layer 2** </td> \n <td> $(n^{[2]}, n^{[1]})$ </td> \n <td> $(n^{[2]},1)$ </td> \n <td>$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td> \n <td> $(n^{[2]}, 209)$ </td> \n <tr>\n \n <tr>\n <td> $\\vdots$ </td> \n <td> $\\vdots$ </td> \n <td> $\\vdots$ </td> \n <td> $\\vdots$</td> \n <td> $\\vdots$ </td> \n <tr>\n \n <tr>\n <td> **Layer L-1** </td> \n <td> $(n^{[L-1]}, n^{[L-2]})$ </td> \n <td> $(n^{[L-1]}, 1)$ </td> \n <td>$Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td> \n <td> $(n^{[L-1]}, 209)$ </td> \n <tr>\n \n \n <tr>\n <td> **Layer L** </td> \n <td> $(n^{[L]}, n^{[L-1]})$ </td> \n <td> $(n^{[L]}, 1)$ </td>\n <td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$</td>\n <td> $(n^{[L]}, 209)$ </td> \n <tr>\n\n</table>\n\nRemember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if: \n\n$$ W = \\begin{bmatrix}\n j & k & l\\\\\n m & n & o \\\\\n p & q & r \n\\end{bmatrix}\\;\\;\\; X = \\begin{bmatrix}\n a & b & c\\\\\n d & e & f \\\\\n g & h & i \n\\end{bmatrix} \\;\\;\\; b =\\begin{bmatrix}\n s \\\\\n t \\\\\n u\n\\end{bmatrix}\\tag{2}$$\n\nThen $WX + b$ will be:\n\n$$ WX + b = \\begin{bmatrix}\n (ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\\\\\n (ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\\\\\n (pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u\n\\end{bmatrix}\\tag{3} $$",
"_____no_output_____"
],
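[
"The broadcasting behaviour described above is easy to check directly in NumPy (a small illustrative example, unrelated to the assignment data):\n\n```python\nimport numpy as np\n\nW = np.arange(9).reshape(3, 3)       # shape (3, 3)\nX = np.ones((3, 4))                  # shape (3, 4) -- four examples\nb = np.array([[1.0], [2.0], [3.0]])  # shape (3, 1) column vector\n\nZ = np.dot(W, X) + b                 # b is broadcast across the 4 columns\nprint(Z.shape)                       # (3, 4)\n```",
"_____no_output_____"
],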
[
"**Exercise**: Implement initialization for an L-layer Neural Network. \n\n**Instructions**:\n- The model's structure is *[LINEAR -> RELU] $ \\times$ (L-1) -> LINEAR -> SIGMOID*. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.\n- Use random initialization for the weight matrices. Use `np.random.randn(shape) * 0.01`.\n- Use zeros initialization for the biases. Use `np.zeros(shape)`.\n- We will store $n^{[l]}$, the number of units in different layers, in a variable `layer_dims`. For example, the `layer_dims` for the \"Multi-layer Perceptron Model\" from Lecture 04 would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. This means `W1`'s shape was (4,2), `b1` was (4,1), `W2` was (1,4) and `b2` was (1,1). Now you will generalize this to $L$ layers! \n- Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network).\n```python\n if L == 1:\n parameters[\"W\" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01\n parameters[\"b\" + str(L)] = np.zeros((layer_dims[1], 1))\n```",
"_____no_output_____"
]
],
[
[
"def initialize_parameters_deep(layer_dims):\n \"\"\"\n Arguments:\n layer_dims -- python array (list) containing the dimensions of each layer in our network\n \n Returns:\n parameters -- python dictionary containing your parameters \"W1\", \"b1\", ..., \"WL\", \"bL\":\n Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])\n bl -- bias vector of shape (layer_dims[l], 1)\n \"\"\"\n \n np.random.seed(3)\n parameters = {}\n L = len(layer_dims) # number of layers in the network\n\n for l in range(1, L):\n ### START CODE HERE ### (≈ 2 lines of code)\n parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01\n parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))\n ### END CODE HERE ###\n \n assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))\n assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))\n\n \n return parameters",
"_____no_output_____"
],
[
"parameters = initialize_parameters_deep([5,4,3])\nprint(\"W1 = \" + str(parameters[\"W1\"]))\nprint(\"b1 = \" + str(parameters[\"b1\"]))\nprint(\"W2 = \" + str(parameters[\"W2\"]))\nprint(\"b2 = \" + str(parameters[\"b2\"]))",
"W1 = [[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388]\n [-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]\n [-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034]\n [-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]\nb1 = [[0.]\n [0.]\n [0.]\n [0.]]\nW2 = [[-0.01185047 -0.0020565 0.01486148 0.00236716]\n [-0.01023785 -0.00712993 0.00625245 -0.00160513]\n [-0.00768836 -0.00230031 0.00745056 0.01976111]]\nb2 = [[0.]\n [0.]\n [0.]]\n"
]
],
[
[
"**Expected output**:\n \n<table style=\"width:80%\">\n <tr>\n <td> **W1** </td>\n <td>[[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388]\n [-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]\n [-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034]\n [-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]</td> \n </tr>\n \n <tr>\n <td>**b1** </td>\n <td>[[ 0.]\n [ 0.]\n [ 0.]\n [ 0.]]</td> \n </tr>\n \n <tr>\n <td>**W2** </td>\n <td>[[-0.01185047 -0.0020565 0.01486148 0.00236716]\n [-0.01023785 -0.00712993 0.00625245 -0.00160513]\n [-0.00768836 -0.00230031 0.00745056 0.01976111]]</td> \n </tr>\n \n <tr>\n <td>**b2** </td>\n <td>[[ 0.]\n [ 0.]\n [ 0.]]</td> \n </tr>\n \n</table>",
"_____no_output_____"
],
[
"## 4 - Forward propagation module\n\n### 4.1 - Linear Forward \nNow that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order:\n\n- LINEAR\n- LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid. \n- [LINEAR -> RELU] $\\times$ (L-1) -> LINEAR -> SIGMOID (whole model)\n\nThe linear forward module (vectorized over all the examples) computes the following equations:\n\n$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\\tag{4}$$\n\nwhere $A^{[0]} = X$. \n\n**Exercise**: Build the linear part of forward propagation.\n\n**Reminder**:\nThe mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find `np.dot()` useful. If your dimensions don't match, printing `W.shape` may help.",
"_____no_output_____"
]
],
[
[
"def linear_forward(A, W, b):\n \"\"\"\n Implement the linear part of a layer's forward propagation.\n\n Arguments:\n A -- activations from previous layer (or input data): (size of previous layer, number of examples)\n W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)\n b -- bias vector, numpy array of shape (size of the current layer, 1)\n\n Returns:\n Z -- the input of the activation function, also called pre-activation parameter \n cache -- a python tuple containing \"A\", \"W\" and \"b\" ; stored for computing the backward pass efficiently\n \"\"\"\n \n ### START CODE HERE ### (≈ 1 line of code)\n Z = np.dot(W, A) + b\n ### END CODE HERE ###\n \n assert(Z.shape == (W.shape[0], A.shape[1]))\n cache = (A, W, b)\n \n return Z, cache",
"_____no_output_____"
],
[
"A, W, b = linear_forward_test_case()\n\nZ, linear_cache = linear_forward(A, W, b)\nprint(\"Z = \" + str(Z))",
"Z = [[ 3.26295337 -1.23429987]]\n"
]
],
[
[
"**Expected output**:\n\n<table style=\"width:35%\">\n \n <tr>\n <td> **Z** </td>\n <td> [[ 3.26295337 -1.23429987]] </td> \n </tr>\n \n</table>",
"_____no_output_____"
],
[
"### 4.2 - Linear-Activation Forward\n\nIn this notebook, you will use two activation functions:\n\n- **Sigmoid**: $\\sigma(Z) = \\sigma(W A + b) = \\frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the `sigmoid` function. This function returns **two** items: the activation value \"`a`\" and a \"`cache`\" that contains \"`Z`\" (it's what we will feed in to the corresponding backward function). To use it you could just call: \n``` python\nA, activation_cache = sigmoid(Z)\n```\n\n- **ReLU**: The mathematical formula for ReLu is $A = RELU(Z) = max(0, Z)$. We have provided you with the `relu` function. This function returns **two** items: the activation value \"`A`\" and a \"`cache`\" that contains \"`Z`\" (it's what we will feed in to the corresponding backward function). To use it you could just call:\n``` python\nA, activation_cache = relu(Z)\n```",
"_____no_output_____"
],
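[
"The `sigmoid` and `relu` helpers themselves live in `dnn_utils_v2` and are not shown in this notebook. For reference, a minimal sketch of what such helpers could look like, assuming (as described above) that the cache they return is simply `Z`:\n\n```python\nimport numpy as np\n\ndef sigmoid(Z):\n    # Element-wise sigmoid activation; Z is cached for the backward pass\n    A = 1 / (1 + np.exp(-Z))\n    return A, Z\n\ndef relu(Z):\n    # Element-wise ReLU activation; Z is cached for the backward pass\n    A = np.maximum(0, Z)\n    return A, Z\n```",
"_____no_output_____"
],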
[
"For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step.\n\n**Exercise**: Implement the forward propagation of the *LINEAR->ACTIVATION* layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation \"g\" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.",
"_____no_output_____"
]
],
[
[
"def linear_activation_forward(A_prev, W, b, activation):\n \"\"\"\n Implement the forward propagation for the LINEAR->ACTIVATION layer\n\n Arguments:\n A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)\n W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)\n b -- bias vector, numpy array of shape (size of the current layer, 1)\n activation -- the activation to be used in this layer, stored as a text string: \"sigmoid\" or \"relu\"\n\n Returns:\n A -- the output of the activation function, also called the post-activation value \n cache -- a python tuple containing \"linear_cache\" and \"activation_cache\";\n stored for computing the backward pass efficiently\n \"\"\"\n \n if activation == \"sigmoid\":\n # Inputs: \"A_prev, W, b\". Outputs: \"A, activation_cache\".\n ### START CODE HERE ### (≈ 2 lines of code)\n Z, linear_cache = linear_forward(A_prev, W, b)\n A, activation_cache = sigmoid(Z)\n ### END CODE HERE ###\n \n elif activation == \"relu\":\n # Inputs: \"A_prev, W, b\". Outputs: \"A, activation_cache\".\n ### START CODE HERE ### (≈ 2 lines of code)\n Z, linear_cache = linear_forward(A_prev, W, b)\n A, activation_cache = relu(Z)\n ### END CODE HERE ###\n \n assert (A.shape == (W.shape[0], A_prev.shape[1]))\n cache = (linear_cache, activation_cache)\n\n return A, cache",
"_____no_output_____"
],
[
"A_prev, W, b = linear_activation_forward_test_case()\n\nA, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = \"sigmoid\")\nprint(\"With sigmoid: A = \" + str(A))\n\nA, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = \"relu\")\nprint(\"With ReLU: A = \" + str(A))",
"With sigmoid: A = [[0.96890023 0.11013289]]\nWith ReLU: A = [[3.43896131 0. ]]\n"
]
],
[
[
"**Expected output**:\n \n<table style=\"width:35%\">\n <tr>\n <td> **With sigmoid: A ** </td>\n <td > [[ 0.96890023 0.11013289]]</td> \n </tr>\n <tr>\n <td> **With ReLU: A ** </td>\n <td > [[ 3.43896131 0. ]]</td> \n </tr>\n</table>\n",
"_____no_output_____"
],
[
"**Note**: In deep learning, the \"[LINEAR->ACTIVATION]\" computation is counted as a single layer in the neural network, not two layers. ",
"_____no_output_____"
],
[
"### d) L-Layer Model \n\nFor even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (`linear_activation_forward` with RELU) $L-1$ times, then follows that with one `linear_activation_forward` with SIGMOID.\n\n<img src=\"images/model_architecture_kiank.png\" style=\"width:600px;height:300px;\">\n<caption><center> **Figure 2** : *[LINEAR -> RELU] $\\times$ (L-1) -> LINEAR -> SIGMOID* model</center></caption><br>\n\n**Exercise**: Implement the forward propagation of the above model.\n\n**Instruction**: In the code below, the variable `AL` will denote $A^{[L]} = \\sigma(Z^{[L]}) = \\sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called `Yhat`, i.e., this is $\\hat{Y}$.) \n\n**Tips**:\n- Use the functions you had previously written \n- Use a for loop to replicate [LINEAR->RELU] (L-1) times\n- Don't forget to keep track of the caches in the \"caches\" list. To add a new value `c` to a `list`, you can use `list.append(c)`.",
"_____no_output_____"
]
],
[
[
"# GRADED FUNCTION: L_model_forward\n\ndef L_model_forward(X, parameters):\n \"\"\"\n Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation\n \n Arguments:\n X -- data, numpy array of shape (input size, number of examples)\n parameters -- output of initialize_parameters_deep()\n \n Returns:\n AL -- last post-activation value\n caches -- list of caches containing:\n every cache of linear_activation_forward() (there are L-1 of them, indexed from 0 to L-1)\n \"\"\"\n\n caches = []\n A = X\n L = len(parameters) // 2 # number of layers in the neural network\n \n # Implement [LINEAR -> RELU]*(L-1). Add \"cache\" to the \"caches\" list.\n for l in range(1, L):\n A_prev = A \n ### START CODE HERE ### (≈ 2 lines of code)\n A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)], parameters['b' + str(l)], activation= \"relu\")\n caches.append(cache)\n ### END CODE HERE ###\n \n # Implement LINEAR -> SIGMOID. Add \"cache\" to the \"caches\" list.\n ### START CODE HERE ### (≈ 2 lines of code)\n AL, cache = linear_activation_forward(A, parameters['W' + str(l+1)], parameters['b' + str(l+1)], activation= \"sigmoid\")\n caches.append(cache)\n ### END CODE HERE ###\n \n assert(AL.shape == (1,X.shape[1]))\n \n return AL, caches",
"_____no_output_____"
],
[
"X, parameters = L_model_forward_test_case_2hidden()\nAL, caches = L_model_forward(X, parameters)\nprint(\"AL = \" + str(AL))\nprint(\"Length of caches list = \" + str(len(caches)))",
"AL = [[0.03921668 0.70498921 0.19734387 0.04728177]]\nLength of caches list = 3\n"
]
],
[
[
"<table style=\"width:50%\">\n <tr>\n <td> **AL** </td>\n <td > [[ 0.03921668 0.70498921 0.19734387 0.04728177]]</td> \n </tr>\n <tr>\n <td> **Length of caches list ** </td>\n <td > 3 </td> \n </tr>\n</table>",
"_____no_output_____"
],
[
"Great! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in \"caches\". Using $A^{[L]}$, you can compute the cost of your predictions.",
"_____no_output_____"
],
[
"## 5 - Cost function\n\nNow you will implement forward and backward propagation. You need to compute the cost, because you want to check if your model is actually learning.\n\n**Exercise**: Compute the cross-entropy cost $J$, using the following formula: $$-\\frac{1}{m} \\sum\\limits_{i = 1}^{m} (y^{(i)}\\log\\left(a^{[L] (i)}\\right) + (1-y^{(i)})\\log\\left(1- a^{[L](i)}\\right)) \\tag{7}$$\n",
"_____no_output_____"
]
],
[
[
"def compute_cost(AL, Y):\n \"\"\"\n Implement the cost function defined by equation (7).\n\n Arguments:\n AL -- probability vector corresponding to your label predictions, shape (1, number of examples)\n Y -- true \"label\" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)\n\n Returns:\n cost -- cross-entropy cost\n \"\"\"\n \n m = Y.shape[1]\n\n # Compute loss from aL and y.\n ### START CODE HERE ### (≈ 1 lines of code)\n cost = -1/m * np.sum (np.multiply(Y,np.log(AL)) + np.multiply(1-Y,np.log(1-AL)) )\n ### END CODE HERE ###\n \n cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).\n assert(cost.shape == ())\n \n return cost",
"_____no_output_____"
],
[
"Y, AL = compute_cost_test_case()\n\nprint(\"cost = \" + str(compute_cost(AL, Y)))",
"cost = 0.2797765635793422\n"
]
],
[
[
"**Expected Output**:\n\n<table>\n\n <tr>\n <td>**cost** </td>\n <td> 0.2797765635793422</td> \n </tr>\n</table>",
"_____no_output_____"
],
[
"## 6 - Backward propagation module\n\nJust like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters. \n\n**Reminder**: \n<img src=\"images/backprop_kiank.png\" style=\"width:650px;height:250px;\">\n<caption><center> **Figure 3** : Forward and Backward propagation for *LINEAR->RELU->LINEAR->SIGMOID* <br> *The purple blocks represent the forward propagation, and the red blocks represent the backward propagation.* </center></caption>\n\n<!-- \nFor those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows:\n\n$$\\frac{d \\mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \\frac{d\\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\\frac{{da^{[2]}}}{{dz^{[2]}}}\\frac{{dz^{[2]}}}{{da^{[1]}}}\\frac{{da^{[1]}}}{{dz^{[1]}}} \\tag{8} $$\n\nIn order to calculate the gradient $dW^{[1]} = \\frac{\\partial L}{\\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \\times \\frac{\\partial z^{[1]} }{\\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted.\n\nEquivalently, in order to calculate the gradient $db^{[1]} = \\frac{\\partial L}{\\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \\times \\frac{\\partial z^{[1]} }{\\partial b^{[1]}}$.\n\nThis is why we talk about **backpropagation**.\n!-->\n\nNow, similar to forward propagation, you are going to build the backward propagation in three steps:\n- LINEAR backward\n- LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation\n- [LINEAR -> RELU] $\\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model)",
"_____no_output_____"
],
[
"### 6.1 - Linear backward\n\nFor layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).\n\nSuppose you have already calculated the derivative $dZ^{[l]} = \\frac{\\partial \\mathcal{L} }{\\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$.\n\n<img src=\"images/linearback_kiank.png\" style=\"width:250px;height:300px;\">\n<caption><center> **Figure 4** </center></caption>\n\nThe three outputs $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$ are computed using the input $dZ^{[l]}$.Here are the formulas you need:\n$$ dW^{[l]} = \\frac{\\partial \\mathcal{J} }{\\partial W^{[l]}} = \\frac{1}{m} dZ^{[l]} A^{[l-1] T} \\tag{8}$$\n$$ db^{[l]} = \\frac{\\partial \\mathcal{J} }{\\partial b^{[l]}} = \\frac{1}{m} \\sum_{i = 1}^{m} dZ^{[l](i)}\\tag{9}$$\n$$ dA^{[l-1]} = \\frac{\\partial \\mathcal{L} }{\\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \\tag{10}$$\n",
"_____no_output_____"
],
[
"**Exercise**: Use the 3 formulas above to implement linear_backward().",
"_____no_output_____"
]
],
[
[
"def linear_backward(dZ, cache):\n \"\"\"\n Implement the linear portion of backward propagation for a single layer (layer l)\n\n Arguments:\n dZ -- Gradient of the cost with respect to the linear output (of current layer l)\n cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer\n\n Returns:\n dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev\n dW -- Gradient of the cost with respect to W (current layer l), same shape as W\n db -- Gradient of the cost with respect to b (current layer l), same shape as b\n \"\"\"\n A_prev, W, b = cache\n m = A_prev.shape[1]\n\n ### START CODE HERE ### (≈ 3 lines of code)\n dW = 1/m*np.dot(dZ,A_prev.T)\n db = np.sum (dZ, axis=1, keepdims=True)/m\n dA_prev = np.dot(W.T, dZ)\n ### END CODE HERE ###\n \n assert (dA_prev.shape == A_prev.shape)\n assert (dW.shape == W.shape)\n assert (db.shape == b.shape)\n \n return dA_prev, dW, db",
"_____no_output_____"
],
[
"# Set up some test inputs\ndZ, linear_cache = linear_backward_test_case()\n\ndA_prev, dW, db = linear_backward(dZ, linear_cache)\nprint (\"dA_prev = \"+ str(dA_prev))\nprint (\"dW = \" + str(dW))\nprint (\"db = \" + str(db))",
"dA_prev = [[-1.15171336 0.06718465 -0.3204696 2.09812712]\n [ 0.60345879 -3.72508701 5.81700741 -3.84326836]\n [-0.4319552 -1.30987417 1.72354705 0.05070578]\n [-0.38981415 0.60811244 -1.25938424 1.47191593]\n [-2.52214926 2.67882552 -0.67947465 1.48119548]]\ndW = [[ 0.07313866 -0.0976715 -0.87585828 0.73763362 0.00785716]\n [ 0.85508818 0.37530413 -0.59912655 0.71278189 -0.58931808]\n [ 0.97913304 -0.24376494 -0.08839671 0.55151192 -0.10290907]]\ndb = [[-0.14713786]\n [-0.11313155]\n [-0.13209101]]\n"
]
],
[
[
"** Expected Output**:\n \n```\ndA_prev = \n [[-1.15171336 0.06718465 -0.3204696 2.09812712]\n [ 0.60345879 -3.72508701 5.81700741 -3.84326836]\n [-0.4319552 -1.30987417 1.72354705 0.05070578]\n [-0.38981415 0.60811244 -1.25938424 1.47191593]\n [-2.52214926 2.67882552 -0.67947465 1.48119548]]\ndW = \n [[ 0.07313866 -0.0976715 -0.87585828 0.73763362 0.00785716]\n [ 0.85508818 0.37530413 -0.59912655 0.71278189 -0.58931808]\n [ 0.97913304 -0.24376494 -0.08839671 0.55151192 -0.10290907]]\ndb = \n [[-0.14713786]\n [-0.11313155]\n [-0.13209101]]\n```",
"_____no_output_____"
],
[
"### 6.2 - Linear-Activation backward\n\nNext, you will create a function that merges the two helper functions: **`linear_backward`** and the backward step for the activation **`linear_activation_backward`**. \n\nTo help you implement `linear_activation_backward`, we provided two backward functions:\n- **`sigmoid_backward`**: Implements the backward propagation for SIGMOID unit. You can call it as follows:\n\n```python\ndZ = sigmoid_backward(dA, activation_cache)\n```\n\n- **`relu_backward`**: Implements the backward propagation for RELU unit. You can call it as follows:\n\n```python\ndZ = relu_backward(dA, activation_cache)\n```\n\nIf $g(.)$ is the activation function, \n`sigmoid_backward` and `relu_backward` compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \\tag{11}$$. \n\n**Exercise**: Implement the backpropagation for the *LINEAR->ACTIVATION* layer.",
"_____no_output_____"
]
],
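[
[
"As with the forward activations, `relu_backward` and `sigmoid_backward` are provided by `dnn_utils_v2` and are not shown here. A minimal sketch consistent with equation (11), assuming the activation cache is simply `Z`:\n\n```python\nimport numpy as np\n\ndef relu_backward(dA, cache):\n    # dZ = dA * g'(Z), where g'(Z) is 1 for Z > 0 and 0 otherwise\n    Z = cache\n    dZ = np.array(dA, copy=True)\n    dZ[Z <= 0] = 0\n    return dZ\n\ndef sigmoid_backward(dA, cache):\n    # dZ = dA * s * (1 - s), where s = sigmoid(Z)\n    Z = cache\n    s = 1 / (1 + np.exp(-Z))\n    return dA * s * (1 - s)\n```",
"_____no_output_____"
]
],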
[
[
"def linear_activation_backward(dA, cache, activation):\n \"\"\"\n Implement the backward propagation for the LINEAR->ACTIVATION layer.\n \n Arguments:\n dA -- post-activation gradient for current layer l \n cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently\n activation -- the activation to be used in this layer, stored as a text string: \"sigmoid\" or \"relu\"\n \n Returns:\n dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev\n dW -- Gradient of the cost with respect to W (current layer l), same shape as W\n db -- Gradient of the cost with respect to b (current layer l), same shape as b\n \"\"\"\n \n linear_cache, activation_cache = cache\n # print (linear_cache)\n # print (activation_cache)\n \n if activation == \"relu\":\n ### START CODE HERE ### (≈ 2 lines of code)\n dZ = relu_backward(dA, activation_cache)\n dA_prev, dW, db = linear_backward(dZ, linear_cache)\n ### END CODE HERE ###\n \n elif activation == \"sigmoid\":\n ### START CODE HERE ### (≈ 2 lines of code)\n dZ = sigmoid_backward(dA, activation_cache)\n dA_prev, dW, db = linear_backward(dZ, linear_cache)\n ### END CODE HERE ###\n \n return dA_prev, dW, db",
"_____no_output_____"
],
[
"dAL, linear_activation_cache = linear_activation_backward_test_case()\n\ndA_prev, dW, db = linear_activation_backward(dAL, linear_activation_cache, activation = \"sigmoid\")\nprint (\"sigmoid:\")\nprint (\"dA_prev = \"+ str(dA_prev))\nprint (\"dW = \" + str(dW))\nprint (\"db = \" + str(db) + \"\\n\")\n\ndA_prev, dW, db = linear_activation_backward(dAL, linear_activation_cache, activation = \"relu\")\nprint (\"relu:\")\nprint (\"dA_prev = \"+ str(dA_prev))\nprint (\"dW = \" + str(dW))\nprint (\"db = \" + str(db))",
"sigmoid:\ndA_prev = [[ 0.11017994 0.01105339]\n [ 0.09466817 0.00949723]\n [-0.05743092 -0.00576154]]\ndW = [[ 0.10266786 0.09778551 -0.01968084]]\ndb = [[-0.05729622]]\n\nrelu:\ndA_prev = [[ 0.44090989 0. ]\n [ 0.37883606 0. ]\n [-0.2298228 0. ]]\ndW = [[ 0.44513824 0.37371418 -0.10478989]]\ndb = [[-0.20837892]]\n"
]
],
[
[
"**Expected output with sigmoid:**\n\n<table style=\"width:100%\">\n <tr>\n <td > dA_prev </td> \n <td >[[ 0.11017994 0.01105339]\n [ 0.09466817 0.00949723]\n [-0.05743092 -0.00576154]] </td> \n\n </tr> \n \n <tr>\n <td > dW </td> \n <td > [[ 0.10266786 0.09778551 -0.01968084]] </td> \n </tr> \n \n <tr>\n <td > db </td> \n <td > [[-0.05729622]] </td> \n </tr> \n</table>\n\n",
"_____no_output_____"
],
[
"**Expected output with relu:**\n\n<table style=\"width:100%\">\n <tr>\n <td > dA_prev </td> \n <td > [[ 0.44090989 0. ]\n [ 0.37883606 0. ]\n [-0.2298228 0. ]] </td> \n\n </tr> \n \n <tr>\n <td > dW </td> \n <td > [[ 0.44513824 0.37371418 -0.10478989]] </td> \n </tr> \n \n <tr>\n <td > db </td> \n <td > [[-0.20837892]] </td> \n </tr> \n</table>\n\n",
"_____no_output_____"
],
[
"### 6.3 - L-Model Backward \n\nNow you will implement the backward function for the whole network. Recall that when you implemented the `L_model_forward` function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the `L_model_backward` function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass. \n\n\n<img src=\"images/mn_backward.png\" style=\"width:450px;height:300px;\">\n<caption><center> **Figure 5** : Backward pass </center></caption>\n\n** Initializing backpropagation**:\nTo backpropagate through this network, we know that the output is, \n$A^{[L]} = \\sigma(Z^{[L]})$. Your code thus needs to compute `dAL` $= \\frac{\\partial \\mathcal{L}}{\\partial A^{[L]}}$.\nTo do so, use this formula (derived using calculus which you don't need in-depth knowledge of):\n```python\ndAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL\n```\n\nYou can then use this post-activation gradient `dAL` to keep going backward. As seen in Figure 5, you can now feed in `dAL` into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a `for` loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula : \n\n$$grads[\"dW\" + str(l)] = dW^{[l]}\\tag{15} $$\n\nFor example, for $l=3$ this would store $dW^{[l]}$ in `grads[\"dW3\"]`.\n\n**Exercise**: Implement backpropagation for the *[LINEAR->RELU] $\\times$ (L-1) -> LINEAR -> SIGMOID* model.",
"_____no_output_____"
]
],
[
[
"def L_model_backward(AL, Y, caches):\n \"\"\"\n Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group\n \n Arguments:\n AL -- probability vector, output of the forward propagation (L_model_forward())\n Y -- true \"label\" vector (containing 0 if non-cat, 1 if cat)\n caches -- list of caches containing:\n every cache of linear_activation_forward() with \"relu\" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)\n the cache of linear_activation_forward() with \"sigmoid\" (it's caches[L-1])\n \n Returns:\n grads -- A dictionary with the gradients\n grads[\"dA\" + str(l)] = ... \n grads[\"dW\" + str(l)] = ...\n grads[\"db\" + str(l)] = ... \n \"\"\"\n grads = {}\n L = len(caches) # the number of layers\n m = AL.shape[1]\n Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL\n \n # Initializing the backpropagation\n ### START CODE HERE ### (1 line of code)\n dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))\n ### END CODE HERE ###\n \n # Lth layer (SIGMOID -> LINEAR) gradients. Inputs: \"dAL, current_cache\". Outputs: \"grads[\"dAL-1\"], grads[\"dWL\"], grads[\"dbL\"]\n ### START CODE HERE ### (approx. 2 lines)\n current_cache = caches[L-1]\n grads[\"dA\" + str(L-1)], grads[\"dW\" + str(L)], grads[\"db\" + str(L)] = linear_activation_backward(dAL, current_cache, activation = \"sigmoid\") \n ### END CODE HERE ###\n \n # Loop from l=L-2 to l=0\n for l in reversed(range(L-1)):\n # lth layer: (RELU -> LINEAR) gradients.\n # Inputs: \"grads[\"dA\" + str(l + 1)], current_cache\". Outputs: \"grads[\"dA\" + str(l)] , grads[\"dW\" + str(l + 1)] , grads[\"db\" + str(l + 1)] \n ### START CODE HERE ### (approx. 5 lines)\n current_cache = caches[l]\n dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads[\"dA\" + str(l + 1)], current_cache, activation = \"relu\")\n grads[\"dA\" + str(l)] = dA_prev_temp\n grads[\"dW\" + str(l + 1)] = dW_temp\n grads[\"db\" + str(l + 1)] = db_temp\n ### END CODE HERE ###\n\n return grads",
"_____no_output_____"
],
[
"AL, Y_assess, caches = L_model_backward_test_case()\ngrads = L_model_backward(AL, Y_assess, caches)\nprint_grads(grads)",
"dW1 = [[0.41010002 0.07807203 0.13798444 0.10502167]\n [0. 0. 0. 0. ]\n [0.05283652 0.01005865 0.01777766 0.0135308 ]]\ndb1 = [[-0.22007063]\n [ 0. ]\n [-0.02835349]]\ndA1 = [[ 0.12913162 -0.44014127]\n [-0.14175655 0.48317296]\n [ 0.01663708 -0.05670698]]\n"
]
],
[
[
"**Expected Output**\n\n<table style=\"width:60%\">\n \n <tr>\n <td > dW1 </td> \n <td > [[ 0.41010002 0.07807203 0.13798444 0.10502167]\n [ 0. 0. 0. 0. ]\n [ 0.05283652 0.01005865 0.01777766 0.0135308 ]] </td> \n </tr> \n \n <tr>\n <td > db1 </td> \n <td > [[-0.22007063]\n [ 0. ]\n [-0.02835349]] </td> \n </tr> \n \n <tr>\n <td > dA1 </td> \n <td > [[ 0.12913162 -0.44014127]\n [-0.14175655 0.48317296]\n [ 0.01663708 -0.05670698]] </td> \n\n </tr> \n</table>\n\n",
"_____no_output_____"
],
[
"### 6.4 - Update Parameters\n\nIn this section you will update the parameters of the model, using gradient descent: \n\n$$ W^{[l]} = W^{[l]} - \\alpha \\text{ } dW^{[l]} \\tag{16}$$\n$$ b^{[l]} = b^{[l]} - \\alpha \\text{ } db^{[l]} \\tag{17}$$\n\nwhere $\\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary. ",
"_____no_output_____"
],
[
"**Exercise**: Implement `update_parameters()` to update your parameters using gradient descent.\n\n**Instructions**:\nUpdate parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$. \n",
"_____no_output_____"
]
],
[
[
"def update_parameters(parameters, grads, learning_rate):\n \"\"\"\n Update parameters using gradient descent\n \n Arguments:\n parameters -- python dictionary containing your parameters \n grads -- python dictionary containing your gradients, output of L_model_backward\n \n Returns:\n parameters -- python dictionary containing your updated parameters \n parameters[\"W\" + str(l)] = ... \n parameters[\"b\" + str(l)] = ...\n \"\"\"\n \n L = len(parameters) // 2 # number of layers in the neural network\n\n # Update rule for each parameter. Use a for loop.\n ### START CODE HERE ### (≈ 3 lines of code)\n for l in range(L):\n parameters[\"W\" + str(l+1)] = parameters[\"W\" + str(l+1)] - learning_rate*grads[\"dW\" + str(l+1)]\n parameters[\"b\" + str(l+1)] = parameters[\"b\" + str(l+1)] - learning_rate*grads[\"db\" + str(l+1)] \n ### END CODE HERE ###\n return parameters",
"_____no_output_____"
],
[
"parameters, grads = update_parameters_test_case()\nparameters = update_parameters(parameters, grads, 0.1)\n\nprint (\"W1 = \"+ str(parameters[\"W1\"]))\nprint (\"b1 = \"+ str(parameters[\"b1\"]))\nprint (\"W2 = \"+ str(parameters[\"W2\"]))\nprint (\"b2 = \"+ str(parameters[\"b2\"]))",
"W1 = [[-0.59562069 -0.09991781 -2.14584584 1.82662008]\n [-1.76569676 -0.80627147 0.51115557 -1.18258802]\n [-1.0535704 -0.86128581 0.68284052 2.20374577]]\nb1 = [[-0.04659241]\n [-1.28888275]\n [ 0.53405496]]\nW2 = [[-0.55569196 0.0354055 1.32964895]]\nb2 = [[-0.84610769]]\n"
]
],
[
[
"**Expected Output**:\n\n<table style=\"width:100%\"> \n <tr>\n <td > W1 </td> \n <td > [[-0.59562069 -0.09991781 -2.14584584 1.82662008]\n [-1.76569676 -0.80627147 0.51115557 -1.18258802]\n [-1.0535704 -0.86128581 0.68284052 2.20374577]] </td> \n </tr> \n \n <tr>\n <td > b1 </td> \n <td > [[-0.04659241]\n [-1.28888275]\n [ 0.53405496]] </td> \n </tr> \n <tr>\n <td > W2 </td> \n <td > [[-0.55569196 0.0354055 1.32964895]]</td> \n </tr> \n \n <tr>\n <td > b2 </td> \n <td > [[-0.84610769]] </td> \n </tr> \n</table>\n",
"_____no_output_____"
],
[
"\n<!-- ## 7 - Conclusion\n\nCongrats on implementing all the functions required for building a deep neural network! \n\nWe know it was a long assignment but going forward it will only get better. The next part of the assignment is easier. \n\nIn the next assignment you will put all these together to build two models:\n- A two-layer neural network\n- An L-layer neural network\n -->",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
4a06f4634cb34b332b11c60649d26107b68b374b
| 222,802 |
ipynb
|
Jupyter Notebook
|
notebooks/nipype_tutorial.ipynb
|
Neurita/nipype-lessons
|
5a713b09ba9a23b48ee5298cff92381e016c5919
|
[
"CC-BY-4.0"
] | null | null | null |
notebooks/nipype_tutorial.ipynb
|
Neurita/nipype-lessons
|
5a713b09ba9a23b48ee5298cff92381e016c5919
|
[
"CC-BY-4.0"
] | null | null | null |
notebooks/nipype_tutorial.ipynb
|
Neurita/nipype-lessons
|
5a713b09ba9a23b48ee5298cff92381e016c5919
|
[
"CC-BY-4.0"
] | null | null | null | 75.245525 | 36,448 | 0.785141 |
[
[
[
"# Dissecting Nipype Workflows\n\n<center>\nNipype team | contact: [email protected] | nipy.org/nipype\n<br>\n(Hit Esc to get an overview)\n</center>[Latest version][notebook] | [Latest slideshow][slideshow]\n\n[notebook]: http://nbviewer.ipython.org/urls/raw.github.com/nipy/nipype/master/examples/nipype_tutorial.ipynb\n[slideshow]: http://slideviewer.herokuapp.com/url/raw.github.com/nipy/nipype/master/examples/nipype_tutorial.ipynb",
"_____no_output_____"
],
[
"# Contributors\n\nhttp://nipy.org/nipype/about.html#code-contributors\n\n# Funding\n\n- 1R03EB008673-01 from NIBIB, Satrajit Ghosh, Susan Whitfield-Gabrieli\n- 5R01MH081909-02 from NIMH, Mark D'Esposito\n- INCF\n\n# Conflict of interest\n\n<center>\nSatrajit Ghosh: TankThink Labs, LLC\n</center>",
"_____no_output_____"
],
[
"# What is Nipype?\n\n<center>\n<img src=\"https://raw.github.com/satra/intro2nipype/master/images/nipype.png\" width=\"40%\" />\n<br>\nFigure designed and created by: Arno Klein (www.mindboggle.info)\n</center>\n",
"_____no_output_____"
],
[
"# Make life a little easier\n\n<img src=\"https://raw.github.com/satra/intro2nipype/master/images/EDC.png\" />\n\nPoline _et al._ (2012)",
"_____no_output_____"
],
[
"# Many workflow systems out there\n\n- [BioImage Suite](http://www.bioimagesuite.org/)\n- [BIRN Tools](https://wiki.birncommunity.org/x/LgFrAQ)\n- [BrainVisa](http://brainvisa.info)\n- [CambaFX](http://www-bmu.psychiatry.cam.ac.uk/software/)\n- [JIST for MIPAV](http://www.nitrc.org/projects/jist/)\n- [LONI pipeline](http://pipeline.loni.ucla.edu)\n- [MEVIS Lab](http://www.mevislab.de)\n- [PSOM](http://code.google.com/p/psom/)\n",
"_____no_output_____"
],
[
"# Solution requirements\n\nComing at it from a developer's perspective, we needed something\n\n- lightweight\n- scriptable\n- provided formal, common semantics\n- allowed interactive exploration\n- supported efficient batch processing\n- enabled rapid algorithm prototyping\n- was flexible and adaptive\n- part of an ecosystem",
"_____no_output_____"
],
[
"# Python ecosystem\n\n<table width=\"1024px\">\n<tr>\n<td colspan=\"2\"><a href=\"http://ipython.org/\"><img src=\"http://ipython.org/_static/IPy_header.png\"></a></td>\n<td colspan=\"2\"><a href=\"http://nipy.org/\"><img src=\"http://nipy.org/img/nipy.svg\"></a></td>\n</tr>\n<tr>\n<td><a href=\"http://scipy.org/\"><img src=\"http://www.scipy.org/_static/images/tutorial.png\"></a></td>\n<td><a href=\"http://numpy.org/\"><img src=\"http://www.numpy.org/_static/numpy_logo.png\"></a></td>\n<td><a href=\"http://pymvpa.org/\"><img src=\"http://www.pymvpa.org/_static/pymvpa_logo.jpg\" width=\"256\"></a></td>\n<td><a href=\"http://scikit-learn.org/\"><img src=\"http://scikit-learn.org/stable/_static/scikit-learn-logo-small.png\"></a></td>\n</tr>\n<tr>\n<td><a href=\"http://networkx.github.io/\"><img src=\"https://raw.github.com/networkx/networkx/master/doc/source/static/art1.png\" width=\"256\"></a></td>\n<td><a href=\"http://matplotlib.org/\"><img src=\"http://matplotlib.org/_static/logo2.png\" width=\"256\"></a></td>\n<td><a href=\"http://code.enthought.com/projects/mayavi/\"><img src=\"http://code.enthought.com/img/mayavi-samp.png\" width=\"256\"></a></td>\n<td><a href=\"http://neuro.debian.net/\"><img src=\"http://neuro.debian.net/_files/neurodebian_logo_posters_banner.svg\" width=\"256\"></a></td>\n</tr>\n</table>\n",
"_____no_output_____"
],
[
"# Existing technologies\n\n**shell scripting**:\n\n Can be quick to do, and powerful, but only provides application specific \n scalability, and not easy to port across different architectures.\n\n**make/CMake**:\n\n Similar in concept to workflow execution in Nipype, but again limited by the\n need for command line tools and flexibility in terms of scaling across\n hardware architectures (although see [makeflow](http://nd.edu/~ccl/software/makeflow).",
"_____no_output_____"
],
[
"# Existing technologies\n\n**Octave/MATLAB**:\n\n Integration with other tools is *ad hoc* (i.e., system call) and dataflow is\n managed at a programmatic level. However, see [PSOM](http://code.google.com/p/psom/) which offers a nice\n alternative to some aspects of Nipype for Octave/Matlab users.\n\n**Graphical options**: (e.g., [LONI Pipeline](http://pipeline.loni.ucla.edu), [VisTrails](http://www.vistrails.org/))\n\n Are easy to use but reduces flexibility relative to scripting options.",
"_____no_output_____"
],
[
"# Nipype architecture\n\n<img src=\"https://raw.github.com/satra/intro2nipype/master/images/arch.png\" width=\"100%\">",
"_____no_output_____"
],
[
"## Concepts\n\n* **Interface**: Wraps a program or function\n\n- **Node/MapNode**: Wraps an `Interface` for use in a Workflow that provides\n caching and other goodies (e.g., pseudo-sandbox)\n- **Workflow**: A *graph* or *forest of graphs* whose nodes are of type `Node`,\n `MapNode` or `Workflow` and whose edges represent data flow\n\n* **Plugin**: A component that describes how a `Workflow` should be executed",
"_____no_output_____"
],
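[
"To make these concepts concrete, here is a minimal sketch of wrapping interfaces in `Node`s and chaining them in a `Workflow` (the input path and the choice of FSL interfaces are purely illustrative):\n\n```python\nimport nipype.pipeline.engine as pe\nfrom nipype.interfaces import fsl\n\n# Wrap interfaces in Nodes\nskullstrip = pe.Node(fsl.BET(in_file='/path/to/highres001.nii.gz', mask=True),\n                     name='skullstrip')\nsmooth = pe.Node(fsl.IsotropicSmooth(fwhm=4), name='smooth')\n\n# Connect them in a Workflow: BET's out_file feeds IsotropicSmooth's in_file\nwf = pe.Workflow(name='minimal_preproc', base_dir='/tmp')\nwf.connect(skullstrip, 'out_file', smooth, 'in_file')\n\n# wf.run()  # executes with the default (serial) plugin\n```",
"_____no_output_____"
],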
[
"# Software interfaces\n\nCurrently supported (5-2-2013). [Click here for latest](http://www.mit.edu/~satra/nipype-nightly/documentation.html)\n\n<style>\n.rendered_html table{border:0px}\n.rendered_html tr{border:0px}\n.rendered_html td{border:0px}\n</style>\n\n<table>\n<tr>\n<td>\n<ul>\n<li><a href=\"http://afni.nimh.nih.gov/afni\">AFNI</a></li>\n<li><a href=\"http://www.picsl.upenn.edu/ANTS\">ANTS</a></li>\n<li><a href=\"http://www.psychiatry.uiowa.edu/mhcrc/IPLpages/BRAINS.htm\">BRAINS</a></li>\n<li><a href=\"http://www.cs.ucl.ac.uk/research/medic/camino\">Camino</a></li>\n<li><a href=\"http://www.nitrc.org/projects/camino-trackvis\">Camino-TrackVis</a></li>\n<li><a href=\"http://www.connectomeviewer.org\">ConnectomeViewerToolkit</a></li>\n<li><a href=\"http://www.cabiatl.com/mricro/mricron/dcm2nii.html\">dcm2nii</a></li>\n<li><a href=\"http://www.trackvis.org/dtk\">Diffusion Toolkit</a></li>\n</ul>\n</td>\n<td>\n<ul>\n<li><a href=\"http://freesurfer.net\">FreeSurfer</a></li>\n<li><a href=\"http://www.fmrib.ox.ac.uk/fsl\">FSL</a></li>\n<li><a href=\"http://www.brain.org.au/software/mrtrix/index.html\">MRtrx</a></li>\n<li><a href=\"http://nipy.org/nipy\">Nipy</a></li>\n<li><a href=\"http://nipy.org/nitime\">Nitime</a></li>\n<li><a href=\"http://github.com/pyxnat\">PyXNAT</a></li>\n<li><a href=\"http://www.slicer.org\">Slicer</a></li>\n<li><a href=\"http://www.fil.ion.ucl.ac.uk/spm\">SPM</a></li>\n</ul>\n</td>\n</tr>\n</table>\n\nMost used/contributed policy!\n\nNot all components of these packages are available.",
"_____no_output_____"
],
[
"# Workflows\n\n- Properties:\n\n - processing pipeline is a directed acyclic graph (DAG)\n - nodes are processes\n - edges represent data flow\n - compact represenation for any process\n - code and data separation",
"_____no_output_____"
],
[
"# Execution Plugins\n\nAllows seamless execution across many architectures\n\n - Local\n\n - Serial\n - Multicore\n\n - Clusters\n\n - HTCondor\n - PBS/Torque/SGE/LSF (native and via IPython)\n - SSH (via IPython)\n - Soma Workflow",
"_____no_output_____"
],
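[
"For a workflow object like the `wf` sketched after the Concepts slide, switching between these execution back-ends is just a matter of changing the plugin (the plugin arguments below are illustrative):\n\n```python\n# Run serially on the local machine (default)\nwf.run()\n\n# Run locally across multiple cores\nwf.run(plugin='MultiProc', plugin_args={'n_procs': 4})\n\n# Submit jobs to an SGE cluster\nwf.run(plugin='SGE', plugin_args={'qsub_args': '-q many'})\n```",
"_____no_output_____"
],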
[
"# Learn Nipype concepts in 10 easy steps\n\n\n1. Installing and testing the installation \n2. Working with interfaces\n3. Using Nipype caching\n4. Creating Nodes, MapNodes and Workflows\n5. Getting and saving data\n6. Using Iterables\n7. Function nodes\n8. Distributed computation\n9. Connecting to databases\n10. Execution configuration options",
"_____no_output_____"
],
[
"# Step 1. Installing Nipype\n\n## Python environment:\n\n* Debian/Ubuntu/Scientific Fedora\n* [Canopy from Enthought](https://www.enthought.com/products/canopy/)\n* [Anaconda from Contnuum Analytics](https://store.continuum.io/cshop/anaconda/)",
"_____no_output_____"
],
[
"## Installing Nipype:\n\n* Available from [@NeuroDebian](http://neuro.debian.net/pkgs/python-nipype.html),\n [@PyPI](http://pypi.python.org/pypi/nipype/), and\n [@GitHub](http://github.com/nipy/nipype)\n \n - pip install nipype\n - easy_install nipype\n - sudo apt-get install python-nipype\n\n* Dependencies: networkx, nibabel, numpy, scipy, traits",
"_____no_output_____"
],
[
"## Running Nipype ([Quickstart](http://nipy.org/nipype/quickstart.html)):\n\n* Ensure underlying tools are installed and accessible\n* Nipype **is a wrapper, not a substitute** for AFNI, ANTS, FreeSurfer, FSL, SPM,\n NiPy, etc.,.",
"_____no_output_____"
],
[
"# Step 2. Testing nipype",
"_____no_output_____"
],
[
"```\n$ jupyter notebook\n```",
"_____no_output_____"
]
],
[
[
"import nipype\n\n# Comment the following section to increase verbosity of output\nnipype.config.set('logging', 'workflow_level', 'CRITICAL')\nnipype.config.set('logging', 'interface_level', 'CRITICAL')\nnipype.logging.update_logging(nipype.config)\n\nnipype.test(verbose=0) # Increase verbosity parameter for more info",
"_____no_output_____"
],
[
"nipype.get_info()",
"_____no_output_____"
]
],
[
[
"# Step 3: Environment and setup",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\nimport os\n\nimport numpy as np\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"# download the files from: https://github.com/Neurita/nipype-tutorial-data/archive/master.zip\nimport os.path as op\n\ntutorial_dir = '/Users/alexandre/nipype-tutorial'\n\ndata_dir = op.join(tutorial_dir, 'ds107')\n\nrequired_files = [op.join(data_dir, 'sub001', 'BOLD', 'task001_run001', 'bold.nii.gz'),\n op.join(data_dir, 'sub001', 'BOLD', 'task001_run002', 'bold.nii.gz'),\n op.join(data_dir, 'sub044', 'BOLD', 'task001_run001', 'bold.nii.gz'),\n op.join(data_dir, 'sub044', 'BOLD', 'task001_run002', 'bold.nii.gz'),\n op.join(data_dir, 'sub001', 'anatomy', 'highres001.nii.gz'),\n op.join(data_dir, 'sub044', 'anatomy', 'highres001.nii.gz'),\n ]\n\nprint(required_files)",
"['/Users/alexandre/nipype-tutorial/ds107/sub001/BOLD/task001_run001/bold.nii.gz', '/Users/alexandre/nipype-tutorial/ds107/sub001/BOLD/task001_run002/bold.nii.gz', '/Users/alexandre/nipype-tutorial/ds107/sub044/BOLD/task001_run001/bold.nii.gz', '/Users/alexandre/nipype-tutorial/ds107/sub044/BOLD/task001_run002/bold.nii.gz', '/Users/alexandre/nipype-tutorial/ds107/sub001/anatomy/highres001.nii.gz', '/Users/alexandre/nipype-tutorial/ds107/sub044/anatomy/highres001.nii.gz']\n"
]
],
[
[
"# Step 4. Working with interfaces",
"_____no_output_____"
]
],
[
[
"import nipype.algorithms",
"_____no_output_____"
],
[
"from nipype.interfaces.fsl import DTIFit\nfrom nipype.interfaces.spm import Realign",
"_____no_output_____"
]
],
[
[
"### Finding interface inputs and outputs and examples",
"_____no_output_____"
]
],
[
[
"DTIFit.help()",
"Wraps command **dtifit**\n\nUse FSL dtifit command for fitting a diffusion tensor model at each\nvoxel\n\nExample\n-------\n\n>>> from nipype.interfaces import fsl\n>>> dti = fsl.DTIFit()\n>>> dti.inputs.dwi = 'diffusion.nii'\n>>> dti.inputs.bvecs = 'bvecs'\n>>> dti.inputs.bvals = 'bvals'\n>>> dti.inputs.base_name = 'TP'\n>>> dti.inputs.mask = 'mask.nii'\n>>> dti.cmdline\n'dtifit -k diffusion.nii -o TP -m mask.nii -r bvecs -b bvals'\n\nInputs::\n\n\t[Mandatory]\n\tbvals: (an existing file name)\n\t\tb values file\n\t\tflag: -b %s, position: 4\n\tbvecs: (an existing file name)\n\t\tb vectors file\n\t\tflag: -r %s, position: 3\n\tdwi: (an existing file name)\n\t\tdiffusion weighted image data file\n\t\tflag: -k %s, position: 0\n\tmask: (an existing file name)\n\t\tbet binary mask file\n\t\tflag: -m %s, position: 2\n\n\t[Optional]\n\targs: (a string)\n\t\tAdditional parameters to the command\n\t\tflag: %s\n\tbase_name: (a string, nipype default value: dtifit_)\n\t\tbase_name that all output files will start with\n\t\tflag: -o %s, position: 1\n\tcni: (an existing file name)\n\t\tinput counfound regressors\n\t\tflag: --cni=%s\n\tenviron: (a dictionary with keys which are a value of class 'str' and\n\t\t with values which are a value of class 'str', nipype default value:\n\t\t {})\n\t\tEnvironment variables\n\tgradnonlin: (an existing file name)\n\t\tgradient non linearities\n\t\tflag: --gradnonlin=%s\n\tignore_exception: (a boolean, nipype default value: False)\n\t\tPrint an error message instead of throwing an exception in case the\n\t\tinterface fails to run\n\tlittle_bit: (a boolean)\n\t\tonly process small area of brain\n\t\tflag: --littlebit\n\tmax_x: (an integer (int or long))\n\t\tmax x\n\t\tflag: -X %d\n\tmax_y: (an integer (int or long))\n\t\tmax y\n\t\tflag: -Y %d\n\tmax_z: (an integer (int or long))\n\t\tmax z\n\t\tflag: -Z %d\n\tmin_x: (an integer (int or long))\n\t\tmin x\n\t\tflag: -x %d\n\tmin_y: (an integer (int or long))\n\t\tmin y\n\t\tflag: -y %d\n\tmin_z: (an integer (int or long))\n\t\tmin z\n\t\tflag: -z %d\n\toutput_type: ('NIFTI_GZ' or 'NIFTI_PAIR_GZ' or 'NIFTI' or\n\t\t 'NIFTI_PAIR')\n\t\tFSL output type\n\tsave_tensor: (a boolean)\n\t\tsave the elements of the tensor\n\t\tflag: --save_tensor\n\tsse: (a boolean)\n\t\toutput sum of squared errors\n\t\tflag: --sse\n\tterminal_output: ('stream' or 'allatonce' or 'file' or 'none')\n\t\tControl terminal output: `stream` - displays to terminal immediately\n\t\t(default), `allatonce` - waits till command is finished to display\n\t\toutput, `file` - writes output to file, `none` - output is ignored\n\nOutputs::\n\n\tFA: (an existing file name)\n\t\tpath/name of file with the fractional anisotropy\n\tL1: (an existing file name)\n\t\tpath/name of file with the 1st eigenvalue\n\tL2: (an existing file name)\n\t\tpath/name of file with the 2nd eigenvalue\n\tL3: (an existing file name)\n\t\tpath/name of file with the 3rd eigenvalue\n\tMD: (an existing file name)\n\t\tpath/name of file with the mean diffusivity\n\tMO: (an existing file name)\n\t\tpath/name of file with the mode of anisotropy\n\tS0: (an existing file name)\n\t\tpath/name of file with the raw T2 signal with no diffusion weighting\n\tV1: (an existing file name)\n\t\tpath/name of file with the 1st eigenvector\n\tV2: (an existing file name)\n\t\tpath/name of file with the 2nd eigenvector\n\tV3: (an existing file name)\n\t\tpath/name of file with the 3rd eigenvector\n\ttensor: (an existing file name)\n\t\tpath/name of file with the 4D tensor volume\n\n"
],
[
"Realign.help()",
"_____no_output_____"
]
],
[
[
"### Creating a directory for running interfaces",
"_____no_output_____"
]
],
[
[
"import os\n\nlibrary_dir = os.path.join(tutorial_dir, 'results')\nif not os.path.exists(library_dir):\n os.mkdir(library_dir)\nos.chdir(library_dir)",
"_____no_output_____"
],
[
"# pick a demo file\ndemo_file = required_files[0]\nprint(demo_file)",
"/Users/alexandre/nipype-tutorial/ds107/sub001/BOLD/task001_run001/bold.nii.gz\n"
],
[
"# check the current folder\nprint(op.abspath('.'))",
"/Users/alexandre/nipype-tutorial/results\n"
]
],
[
[
"## Executing interfaces",
"_____no_output_____"
]
],
[
[
"from nipype.algorithms.misc import Gunzip\n\nconvert = Gunzip()\nconvert.inputs.in_file = demo_file\nresults = convert.run()",
"_____no_output_____"
],
[
"results.outputs",
"_____no_output_____"
]
],
[
[
"## Other ways",
"_____no_output_____"
]
],
[
[
"from nipype.algorithms.misc import Gunzip\n\nconvert = Gunzip()\nconvert.inputs.in_file = demo_file\nresults = convert.run()\n\nuzip_bold = results.outputs.out_file\nprint(uzip_bold)",
"/Users/alexandre/nipype-tutorial/results/bold.nii\n"
],
[
"convert = Gunzip()\nresults = convert.run(in_file=demo_file)\n\nprint(results.outputs)",
"\nout_file = /Users/alexandre/nipype-tutorial/results/bold.nii\n\n"
],
[
"convert.inputs",
"_____no_output_____"
]
],
[
[
"#### Look at only the defined inputs",
"_____no_output_____"
]
],
[
[
"results.inputs",
"_____no_output_____"
]
],
[
[
"### Experiment with other interfaces\n\nFor example, run realignment with SPM",
"_____no_output_____"
]
],
[
[
"from nipype.interfaces.spm import Realign\n\nrealign = Realign(in_files=uzip_bold, register_to_mean=False)\nresults1 = realign.run()\n#print(os.listdir())",
"_____no_output_____"
]
],
[
[
"And now use FSL",
"_____no_output_____"
]
],
[
[
"from nipype.interfaces.fsl import MCFLIRT\n\nmcflirt = MCFLIRT(in_file=uzip_bold, ref_vol=0, save_plots=True)\n\nresults2 = mcflirt.run()",
"_____no_output_____"
]
],
[
[
"### Now we can look at some results",
"_____no_output_____"
]
],
[
[
"print('SPM realign execution time:', results1.runtime.duration)\nprint('Flirt execution time:', results2.runtime.duration)",
"51.40525 32.69443\n"
],
[
"!ls\n!fslinfo bold.nii\n!cat bold_mcf.nii.gz.par\n!wc -l bold_mcf.nii.gz.par\n!cat rp_bold.txt\n!wc -l rp_bold.txt",
"bold.mat bold_mcf.nii.gz meanbold.nii rbold.nii\nbold.nii bold_mcf.nii.gz.par pyscript_realign.m rp_bold.txt\ndata_type INT16\ndim1 64\ndim2 64\ndim3 35\ndim4 164\ndatatype 4\npixdim1 3.000000\npixdim2 3.000000\npixdim3 3.000000\npixdim4 3.000000\ncal_max 0.0000\ncal_min 0.0000\nfile_type NIFTI-1+\n0 -0 0 0 0 0 \n-0.000648031 -6.80307e-05 0.000233637 0.0172324 -0.011007 0.0299689 \n-0.000648031 -0.000384255 0.000525166 0.0221569 -0.00310645 0.0299601 \n-0.000537742 -0.000213411 0.000675624 0.0164069 -0.00310852 0.0540617 \n-0.000485945 -0.000443529 0.000675624 0.0233628 -0.00310753 0.0515571 \n-0.000713941 -4.71689e-05 0.000675624 0.0185578 -0.0209441 0.0540576 \n-0.00134132 -0.000368914 1.40026e-05 -0.00727328 0.00322092 0.0696141 \n-0.00134132 -0.000443529 1.40026e-05 -0.00769971 -0.0215257 0.0484619 \n-0.00177979 -0.00057554 -0.00039095 -0.0185175 0.000717472 0.0782566 \n-0.00174227 -0.000602118 -0.000273836 -0.026563 -0.0231801 0.0893012 \n-0.00189562 -0.000773365 -0.000442515 -0.0220809 -0.00992513 0.0877945 \n-0.00197001 -0.000981862 -0.000452038 -0.0313417 -0.0231963 0.09697 \n-0.00197001 -0.00123776 -0.000432048 -0.0179202 0.000562458 0.103887 \n-0.0020451 -0.00123295 -0.000416925 -0.0179359 0.00868513 0.103891 \n-0.0024408 -0.00137755 -0.000473275 -0.0179408 0.000525663 0.115768 \n-0.00207883 -0.00137755 -0.00041182 -0.0179213 -0.00231937 0.121093 \n-0.00214197 -0.00137755 -0.000585361 -0.0271888 0.000529654 0.103909 \n-0.0020506 -0.00143763 -0.000685392 -0.0355295 0.000542326 0.107176 \n-0.00200531 -0.00186246 -0.000688311 -0.0345114 -0.00393885 0.137953 \n-0.00193013 -0.00186246 -0.000585361 -0.0178968 7.6944e-05 0.147711 \n-0.0024408 -0.00186246 -0.000585361 -0.0114255 0.00267362 0.136166 \n-0.0024408 -0.00224854 -0.000659947 0.00316795 3.41878e-05 0.158576 \n-0.00191346 -0.00186246 -0.000529187 -0.0178971 0.0195162 0.158586 \n-0.000867569 -0.0022732 -0.000585361 -0.0178691 -0.0173949 0.145398 \n-0.002162 -0.00200945 -0.000585361 -0.0178997 0.00181581 0.185335 \n-0.00221511 -0.00233878 -0.000585361 0.000933661 0.0185651 0.220762 \n-0.00282951 -0.00233878 -0.000262259 -0.00516718 -0.00117484 0.239309 \n-0.00298955 -0.00233646 -0.000113495 0.00404857 -0.00650937 0.25584 \n-0.00333767 -0.00264748 0.000189539 0.0295328 -0.00126543 0.277892 \n-0.00340314 -0.00276428 0.000220181 0.0342786 -0.00127839 0.293125 \n-0.00378527 -0.00276428 0.000103392 0.0395581 -0.00133553 0.271431 \n-0.00413901 -0.00265273 0.00021096 0.0342485 0.0144438 0.263382 \n-0.00407179 -0.00263205 0.000225567 0.034284 -0.00139715 0.264772 \n-0.00390828 -0.00276428 4.29208e-06 0.0342959 -0.0194981 0.239354 \n-0.00392429 -0.00276428 -0.000141989 0.0280029 -0.0066876 0.255358 \n-0.00357232 -0.00284399 -0.000488067 0.0167271 -0.019441 0.247849 \n-0.00314524 -0.00299634 9.39482e-05 0.0623041 0.0132893 0.271479 \n-0.00274308 -0.00338827 0.000122301 0.0623548 -0.00706818 0.275725 \n-0.00294514 -0.00367612 0.000225253 0.0623796 -0.0071158 0.289407 \n-0.00300618 -0.00378055 0.000194818 0.0799731 0.00666649 0.298655 \n-0.00279371 -0.00420707 1.87201e-05 0.0926068 0.0121676 0.292538 \n-0.00266179 -0.0045453 0.0002077 0.0993838 0.0158051 0.306031 \n-0.00259066 -0.00434959 0.000129676 0.109483 0.0101185 0.306057 \n-0.00283023 -0.0044853 0.000130499 0.115446 0.00662743 0.310839 \n-0.00283023 -0.00459889 0.000227268 0.120003 0.0148419 0.311047 \n-0.00283023 -0.00459033 0.000226882 0.134566 -8.62201e-05 0.306034 \n-0.00283023 -0.00485603 0.000676555 0.150112 0.00668223 0.294623 \n-0.00195744 -0.00485603 -6.13082e-05 0.114638 
0.0100787 0.30596 \n-0.00155947 -0.00485603 0.000181004 0.131432 0.010163 0.310483 \n-0.00198098 -0.00548523 0.000771018 0.159675 0.00999502 0.305853 \n-0.00195488 -0.00548523 0.000576732 0.150557 0.00777 0.323763 \n-0.00220592 -0.00574571 0.000771018 0.147351 0.00944537 0.351384 \n-0.00261081 -0.00618387 0.000771018 0.159847 0.0338023 0.38182 \n-0.00240068 -0.00625456 0.000929938 0.159876 0.0337124 0.38563 \n-0.00265803 -0.00651453 0.000796437 0.17053 0.0143757 0.391227 \n-0.00379069 -0.00670426 0.000978918 0.184039 0.0114949 0.416401 \n-0.00367708 -0.00670426 0.00124646 0.192947 0.00878098 0.413831 \n-0.003647 -0.00696073 0.00128291 0.199822 0.0116874 0.398761 \n-0.00362834 -0.00675588 0.00119059 0.192941 0.0174788 0.404381 \n-0.00398316 -0.00668414 0.00110269 0.192942 0.00580324 0.380184 \n-0.00341188 -0.00659862 0.00133978 0.192948 0.0126984 0.367414 \n-0.00309243 -0.00668414 0.00133978 0.192969 0.0127881 0.380149 \n-0.0035182 -0.00713793 0.00141465 0.218868 0.00717236 0.417076 \n-0.00343782 -0.00691114 0.00150208 0.217142 0.0230089 0.406685 \n-0.0033467 -0.00722952 0.00142065 0.216816 0.000795165 0.415164 \n-0.00361048 -0.00663299 0.00150952 0.212216 -0.00343516 0.402474 \n-0.0035182 -0.00648493 0.00142065 0.207687 0.012306 0.406218 \n-0.00314248 -0.00663299 0.00145569 0.20428 0.00025415 0.389841 \n-0.00336083 -0.0066185 0.00146733 0.210019 -0.00712772 0.389826 \n-0.00298039 -0.0067864 0.00145661 0.210655 0.00147235 0.389778 \n-0.00239394 -0.00699305 0.00144833 0.21744 0.00669908 0.389723 \n-0.00252237 -0.0070877 0.00134474 0.205386 0.00607419 0.384684 \n-0.00238337 -0.00689397 0.00117434 0.200714 -0.00681186 0.354692 \n-0.0019644 -0.00690773 0.00118008 0.203423 0.011145 0.338649 \n-0.00189691 -0.0070877 0.00110244 0.201689 0.00136635 0.36064 \n-0.00166435 -0.0070877 0.00130305 0.215144 0.0209843 0.348259 \n-0.00213316 -0.00672544 0.00153364 0.219233 -0.000649884 0.333255 \n-0.0022227 -0.00667788 0.0019414 0.224227 0.0141399 0.338702 \n-0.00235355 -0.00697026 0.00152183 0.19983 0.0204139 0.352023 \n-0.00226307 -0.007101 0.00148364 0.206423 0.0196835 0.343881 \n-0.00182946 -0.00684641 0.00148376 0.210641 0.0193717 0.33864 \n-0.00231136 -0.00734192 0.0010764 0.204039 0.0194589 0.400291 \n-0.00287107 -0.00734483 0.00145586 0.204388 0.0141879 0.399432 \n-0.00294536 -0.00781072 0.0015519 0.227363 0.0114705 0.384767 \n-0.00239196 -0.0074063 0.00152782 0.207013 0.00976923 0.35843 \n-0.00241202 -0.0075947 0.00152782 0.219165 0.0120362 0.368343 \n-0.00218109 -0.00821514 0.00161111 0.224866 0.00526447 0.39622 \n-0.00202926 -0.00808956 0.0016119 0.227456 0.00974611 0.384387 \n-0.00260759 -0.00822753 0.0015532 0.237094 0.0192098 0.39842 \n-0.00260759 -0.00862479 0.00155479 0.239931 0.0109056 0.420617 \n-0.00260759 -0.00868212 0.00145791 0.227616 0.0146772 0.397553 \n-0.00230122 -0.00869488 0.00135557 0.227615 0.0216743 0.397541 \n-0.00140152 -0.00863737 0.00135557 0.221441 0.0118992 0.372535 \n-0.0014389 -0.00903848 0.00144515 0.243114 0.0100816 0.376961 \n-0.00164553 -0.00945298 0.00176454 0.271215 0.0221545 0.365036 \n-0.00153172 -0.0094838 0.00183986 0.282898 0.0135014 0.378396 \n-0.00157856 -0.00945107 0.00233511 0.293208 0.0156069 0.36114 \n-0.00165327 -0.00905714 0.00213265 0.29312 -0.00131112 0.356394 \n-0.00139194 -0.00947915 0.00218269 0.293217 0.0139749 0.372147 \n-0.00163617 -0.00912455 0.00193269 0.282656 0.0142696 0.326976 \n-0.00183893 -0.00905714 0.00193269 0.271708 0.00792137 0.327003 \n-0.00177667 -0.00858365 0.00133168 0.256096 0.00758056 0.327147 \n-0.00193745 -0.00887837 
0.00146893 0.256958 0.0397567 0.341106 \n-0.00117881 -0.00905714 0.00147013 0.243326 0.0242137 0.310278 \n-0.000954704 -0.00905714 0.00131253 0.247695 0.0193559 0.326984 \n-0.00120443 -0.00928097 0.00180097 0.257449 0.023443 0.345306 \n-0.000870904 -0.00947919 0.00180097 0.263378 0.0119278 0.345969 \n-0.00109149 -0.00949699 0.00151546 0.273211 0.0454722 0.326851 \n-0.00110329 -0.0095865 0.00180097 0.280165 0.0136147 0.335702 \n-0.000982853 -0.00994492 0.00139505 0.273954 0.0320574 0.343208 \n-0.000818998 -0.00983353 0.00176518 0.269745 0.0101212 0.326755 \n-0.000564727 -0.00957944 0.00184016 0.271712 0.0188107 0.326834 \n0.000309882 -0.00948116 0.00133468 0.266448 0.0285431 0.340248 \n0.000443122 -0.00948116 0.00139586 0.270844 0.0596369 0.308164 \n0.000123991 -0.00948116 0.00182996 0.273475 0.0357393 0.30818 \n-0.000248592 -0.0101713 0.00182996 0.273946 0.0233993 0.306573 \n-0.000423246 -0.0100986 0.00138206 0.266373 0.0375577 0.293378 \n-0.00079007 -0.00943782 0.00207972 0.297892 0.0486114 0.284854 \n-0.00082456 -0.00909381 0.00254579 0.297874 0.0468249 0.230194 \n-0.00116096 -0.00920499 0.00239133 0.297865 0.0468187 0.223378 \n-0.00116096 -0.0093388 0.00233274 0.289851 0.0468342 0.230143 \n-0.00141434 -0.0093388 0.0022628 0.277371 0.0460361 0.243067 \n-0.00136569 -0.0093388 0.00213114 0.27885 0.0333136 0.230161 \n-0.00137393 -0.00988741 0.00222874 0.286887 0.0451126 0.241329 \n-0.00127608 -0.00968492 0.00213114 0.297918 0.0372889 0.235667 \n-0.00177693 -0.00980594 0.00266912 0.322421 0.0413373 0.23587 \n-0.00206172 -0.00991864 0.00266793 0.331628 0.0449173 0.259338 \n-0.00191706 -0.00992889 0.00292538 0.345556 0.0406334 0.247125 \n-0.00191722 -0.00964329 0.002911 0.344671 0.0516438 0.223485 \n-0.0023003 -0.00984449 0.0030926 0.361069 0.0401356 0.219025 \n-0.00189412 -0.00962236 0.00290486 0.349081 0.0296165 0.242942 \n-0.00137071 -0.00972768 0.00290486 0.348541 0.0510408 0.239752 \n-0.00181409 -0.00992889 0.00315574 0.363095 0.0487666 0.247151 \n-0.002085 -0.00962123 0.00349775 0.373536 0.050692 0.252412 \n-0.00261596 -0.00987652 0.00350957 0.392916 0.0529021 0.279393 \n-0.00261596 -0.0101768 0.003723 0.39721 0.0303516 0.309866 \n-0.00303645 -0.0104077 0.00408709 0.417733 0.0300827 0.292828 \n-0.00318552 -0.0101577 0.00408709 0.417673 0.0461926 0.329219 \n-0.00322842 -0.0104077 0.00408709 0.417733 0.0300399 0.331771 \n-0.00348728 -0.0104077 0.00366468 0.4008 0.042998 0.340314 \n-0.00331602 -0.0104077 0.00366468 0.398795 0.0302205 0.328136 \n-0.00291412 -0.0104077 0.00366468 0.38255 0.0357622 0.313314 \n-0.00315734 -0.00996599 0.00396277 0.402445 0.0189324 0.316782 \n-0.00326502 -0.00971134 0.00394798 0.402395 0.0275426 0.312763 \n-0.00313641 -0.00971134 0.0038281 0.402389 0.0340728 0.28639 \n-0.00295495 -0.00973966 0.00383539 0.402403 0.0341072 0.293122 \n-0.00324894 -0.00955393 0.00381537 0.402364 0.0340586 0.28647 \n-0.00346835 -0.00987932 0.0038854 0.402439 0.0256179 0.294511 \n-0.00333527 -0.00987932 0.00398028 0.404925 0.0243339 0.286316 \n-0.00333527 -0.00977985 0.00388976 0.402411 0.0245428 0.302844 \n-0.00362382 -0.0101217 0.00397538 0.398777 0.0482021 0.326885 \n-0.00373736 -0.00969556 0.00366469 0.377794 0.0145924 0.286436 \n-0.00373736 -0.00982597 0.00366469 0.383214 0.039209 0.281433 \n-0.00373736 -0.00981491 0.00366469 0.39158 0.039209 0.272082 \n-0.00360417 -0.00960327 0.00400733 0.398096 0.039068 0.266023 \n-0.00342957 -0.009413 0.00366469 0.411692 0.0417208 0.270922 \n-0.00298905 -0.00911278 0.00396532 0.397985 0.03919 0.261475 \n-0.00299327 -0.00927534 0.00408049 
0.39803 0.0402672 0.243958 \n-0.00288916 -0.00973863 0.00387298 0.407733 0.0516868 0.268467 \n-0.00255916 -0.00973396 0.0039936 0.409449 0.0341704 0.294032 \n-0.00237164 -0.00992527 0.00404552 0.404861 0.0342793 0.268322 \n-0.0020655 -0.00991986 0.00390318 0.410156 0.0409325 0.268296 \n-0.00230979 -0.0100951 0.00401762 0.398135 0.0408331 0.274538 \n-0.00231116 -0.00976693 0.00396935 0.386487 0.0307988 0.22874 \n 164 bold_mcf.nii.gz.par\n 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00\n -1.8807540e-02 -1.9992003e-02 4.4480285e-02 -8.2109034e-04 -3.3518355e-04 -1.7881878e-04\n -2.9379832e-02 -2.0190367e-02 4.5324498e-02 -8.6203713e-04 -3.7375069e-04 -3.2522068e-04\n -2.4402193e-02 -3.3023112e-02 6.8122160e-02 -8.8148070e-04 -4.2252073e-04 -3.8591240e-04\n -3.5961084e-02 -1.9341716e-02 6.1945148e-02 -5.8929972e-04 -3.5300384e-04 -3.9707274e-04\n -2.4014516e-02 -2.9384475e-02 8.6038695e-02 -1.1348338e-03 -1.5849435e-04 -3.3842225e-04\n 5.4464241e-03 -1.5493970e-02 1.2023794e-01 -1.7862077e-03 -6.1302822e-04 8.7691617e-05\n 6.6382560e-03 -3.9606175e-02 8.9097102e-02 -1.8654187e-03 -8.5075936e-04 1.4977574e-04\n 2.3340990e-02 -2.5221903e-02 1.2618554e-01 -2.0837932e-03 -7.6287899e-04 4.2360612e-04\n 3.9404909e-02 -4.5811915e-02 1.5014888e-01 -2.1546068e-03 -1.0316028e-03 5.1826915e-04\n 3.1434941e-02 -3.6384660e-02 1.4488414e-01 -2.2756736e-03 -1.1749888e-03 6.9192162e-04\n 3.9841822e-02 -3.0321049e-02 1.5818081e-01 -2.4514724e-03 -1.5352472e-03 6.8036931e-04\n 2.5854620e-02 -2.9924409e-02 1.6556179e-01 -2.6792226e-03 -1.7379286e-03 6.2251583e-04\n 2.1856970e-02 -6.8956971e-03 1.5189214e-01 -2.5927159e-03 -1.7018223e-03 5.5102730e-04\n 2.2333037e-02 -7.7407732e-03 1.8852138e-01 -2.8747329e-03 -1.8887033e-03 6.5497473e-04\n 2.8382915e-02 -2.2515273e-02 1.9255219e-01 -2.4117946e-03 -1.7322666e-03 6.7760308e-04\n 3.7778752e-02 -1.1778276e-02 1.8777955e-01 -3.0315937e-03 -2.1960734e-03 8.9900476e-04\n 4.6518702e-02 1.3826253e-03 1.6869449e-01 -2.3753717e-03 -2.2671741e-03 1.0469836e-03\n 4.1967150e-02 -1.6179116e-02 2.0716941e-01 -2.4579983e-03 -2.8055401e-03 1.2556207e-03\n 2.5913785e-02 -2.0362556e-02 2.2861389e-01 -2.4909774e-03 -2.8702891e-03 1.2560700e-03\n 9.7845534e-03 -9.2942552e-03 2.1488460e-01 -2.8664125e-03 -2.9488260e-03 1.1983659e-03\n -2.8047927e-03 -1.8603978e-02 2.3844398e-01 -2.9594620e-03 -3.1124519e-03 1.1691115e-03\n 1.1916049e-02 9.7471843e-03 2.2259537e-01 -2.5439256e-03 -2.8027243e-03 1.0370663e-03\n 3.3088895e-02 -1.3027673e-02 1.9860847e-01 -1.1710790e-03 -3.2074295e-03 1.2080728e-03\n 1.2811056e-02 -5.4181772e-05 2.6231564e-01 -2.5834145e-03 -3.2641809e-03 9.7150337e-04\n -6.1675055e-03 -2.4422124e-03 3.2261080e-01 -2.8148593e-03 -3.5106777e-03 7.6059715e-04\n -1.0788952e-02 -3.3917460e-02 3.5888893e-01 -3.6495257e-03 -3.5913580e-03 3.8153892e-04\n -2.7212895e-02 -4.4569137e-02 3.8033801e-01 -3.9286731e-03 -3.6643530e-03 3.8748395e-04\n -5.9434751e-02 -3.4091392e-02 4.2618621e-01 -4.3368398e-03 -4.1305989e-03 -1.6089432e-06\n -7.2334048e-02 -4.1192307e-02 4.3945258e-01 -4.4356867e-03 -4.2004888e-03 -9.7614940e-05\n -7.8372444e-02 -3.8118242e-02 4.2433558e-01 -4.8403451e-03 -4.0767812e-03 -6.3558340e-05\n -6.8473274e-02 -2.1447071e-02 4.1286462e-01 -5.1484530e-03 -4.1688236e-03 2.7673430e-05\n -6.9172465e-02 -4.9032458e-02 4.1389465e-01 -5.3862934e-03 -4.1724172e-03 -3.8432708e-05\n -5.2351884e-02 -5.5894928e-02 3.8860699e-01 -5.2200691e-03 -4.1731926e-03 3.8933512e-04\n -4.7853296e-02 -4.3734831e-02 3.9616658e-01 -4.9911901e-03 
-4.0306871e-03 3.6337869e-04\n -3.6811660e-02 -6.5063111e-02 3.9291812e-01 -4.9395472e-03 -4.4940112e-03 5.5161486e-04\n -9.9022299e-02 -1.0189973e-02 3.8785864e-01 -3.6230590e-03 -4.6682026e-03 3.6293938e-04\n -1.0738276e-01 -4.4132393e-02 3.9320538e-01 -3.4578733e-03 -5.1827078e-03 1.4580024e-04\n -1.1545457e-01 -4.9446102e-02 4.1452988e-01 -3.9468494e-03 -5.7165189e-03 2.1938177e-04\n -1.4063103e-01 -2.1007889e-02 4.2887944e-01 -3.6146190e-03 -5.8781983e-03 -2.0289218e-05\n -1.5192891e-01 -1.0300818e-02 4.0894093e-01 -3.4504780e-03 -6.3797654e-03 2.5929694e-04\n -1.6675953e-01 -1.2938625e-03 4.1544629e-01 -3.0538368e-03 -6.7561480e-03 2.4879104e-04\n -1.7952786e-01 -4.7257718e-03 4.3695695e-01 -3.0339800e-03 -6.8548907e-03 1.8621220e-04\n -1.9309207e-01 -1.5335337e-02 4.2373092e-01 -3.3110178e-03 -6.9045303e-03 1.5493168e-04\n -2.0050712e-01 -1.6166367e-02 4.1931754e-01 -3.3101042e-03 -7.3108847e-03 5.4013032e-05\n -2.2497374e-01 -1.8082890e-02 4.1765871e-01 -3.1530257e-03 -7.3697033e-03 -1.5066255e-04\n -2.4030051e-01 -2.6919686e-02 4.0088925e-01 -3.2258266e-03 -7.6022974e-03 -3.6295642e-04\n -1.6771217e-01 -1.1834073e-02 3.9181476e-01 -2.3050355e-03 -7.1201597e-03 6.3204087e-04\n -2.1079787e-01 -1.6907612e-02 3.9522408e-01 -1.6913980e-03 -7.4544043e-03 1.3786301e-04\n -2.4830863e-01 -1.3387990e-02 3.9846221e-01 -2.3886469e-03 -8.2758739e-03 -3.1822208e-04\n -2.4678878e-01 -8.0890687e-03 4.1452591e-01 -2.2671272e-03 -8.3862009e-03 -1.9801023e-04\n -2.4729296e-01 -6.7312035e-03 4.4213408e-01 -2.3400978e-03 -8.7412909e-03 -3.8330217e-04\n -2.5763908e-01 2.1406348e-02 4.9605500e-01 -2.6606118e-03 -9.4546906e-03 -3.0649762e-04\n -2.7501562e-01 2.5173797e-02 4.9297698e-01 -2.2592969e-03 -9.3154899e-03 -7.5713139e-04\n -2.9929566e-01 -1.2482286e-02 5.3111642e-01 -3.6979387e-03 -9.8717109e-03 -9.6819234e-04\n -3.1856299e-01 -1.9634141e-02 5.5010008e-01 -4.5087570e-03 -1.0056601e-02 -1.1665992e-03\n -3.3045890e-01 -2.7377462e-02 5.5941054e-01 -4.6075261e-03 -9.9823099e-03 -1.2647274e-03\n -3.4515996e-01 -1.8959714e-02 5.3670847e-01 -4.2048365e-03 -1.0248757e-02 -1.3583284e-03\n -3.3951402e-01 -1.2872135e-02 5.3599478e-01 -4.0723213e-03 -1.0386483e-02 -1.2624775e-03\n -3.3755341e-01 -2.5624888e-02 5.1575472e-01 -4.5272112e-03 -1.0160501e-02 -1.0830023e-03\n -3.2258982e-01 -3.1269129e-02 4.8818599e-01 -4.1937008e-03 -9.9869285e-03 -1.0812365e-03\n -3.4093488e-01 -2.2364739e-02 5.0470351e-01 -3.9298434e-03 -1.0142073e-02 -1.2312543e-03\n -3.6339869e-01 -2.8241074e-02 5.5228380e-01 -3.9351445e-03 -1.0543530e-02 -1.3188380e-03\n -3.7062544e-01 -1.3755169e-02 5.3665072e-01 -3.7548096e-03 -1.0319489e-02 -1.5587220e-03\n -3.6752007e-01 -4.2331576e-02 5.5297874e-01 -3.8172436e-03 -1.0534360e-02 -1.4921874e-03\n -3.6175684e-01 -4.5577152e-02 5.3972691e-01 -4.2335061e-03 -1.0045347e-02 -1.5288803e-03\n -3.5083073e-01 -3.2821446e-02 5.4024752e-01 -4.1928999e-03 -9.7196529e-03 -1.3749730e-03\n -3.4421623e-01 -4.2942656e-02 5.2568947e-01 -4.1090442e-03 -9.9771628e-03 -1.3198787e-03\n -3.4971154e-01 -4.0532139e-02 5.1302961e-01 -3.7105014e-03 -1.0125343e-02 -1.3378268e-03\n -3.5584794e-01 -3.9654894e-02 5.1425134e-01 -3.4620202e-03 -1.0101977e-02 -1.3524341e-03\n -3.6313719e-01 -4.0875098e-02 4.9475010e-01 -2.8114756e-03 -1.0307637e-02 -1.3245577e-03\n -3.5133797e-01 -3.4098212e-02 4.8173637e-01 -2.8090227e-03 -1.0490986e-02 -1.2530305e-03\n -3.2823811e-01 -3.8685805e-02 4.4275756e-01 -2.6850615e-03 -1.0061032e-02 -9.4212239e-04\n -3.3424870e-01 -1.8554361e-02 4.2245404e-01 -2.2383070e-03 -1.0300011e-02 
-9.4037199e-04\n -3.3297351e-01 -2.7812950e-02 4.3502326e-01 -1.8041934e-03 -1.0372609e-02 -9.8747359e-04\n -3.6033769e-01 -1.5915669e-04 4.1984309e-01 -1.3676600e-03 -1.0827833e-02 -1.1944310e-03\n -3.7433026e-01 -2.9159886e-02 4.1686479e-01 -2.4729287e-03 -1.0287078e-02 -1.6769357e-03\n -3.7757225e-01 -7.7871974e-03 4.3787599e-01 -2.4516106e-03 -1.0112897e-02 -1.8368242e-03\n -3.4919436e-01 -2.2661291e-02 4.4279986e-01 -2.7256141e-03 -1.0398396e-02 -1.3369418e-03\n -3.5167888e-01 4.6043689e-04 4.3598616e-01 -2.7327792e-03 -1.0698002e-02 -1.3502558e-03\n -3.5355483e-01 3.4810071e-03 4.3971026e-01 -2.1563156e-03 -1.0641377e-02 -1.2914670e-03\n -3.3780712e-01 4.1952722e-03 4.9986258e-01 -2.5480049e-03 -1.1012657e-02 -1.0173003e-03\n -3.4816327e-01 -2.0629038e-03 4.9762495e-01 -2.9637865e-03 -1.0946778e-02 -1.3419644e-03\n -3.8252060e-01 -1.9598314e-02 4.9553440e-01 -3.1548241e-03 -1.1396192e-02 -1.5555730e-03\n -3.6859638e-01 -2.3220697e-02 4.5203822e-01 -2.9769183e-03 -1.1187764e-02 -1.3796147e-03\n -3.7429515e-01 -2.4075820e-02 4.5903797e-01 -2.6656759e-03 -1.1203145e-02 -1.4866063e-03\n -3.8557878e-01 -2.7208315e-02 4.8259830e-01 -2.2955641e-03 -1.1915473e-02 -1.5638619e-03\n -3.8991454e-01 -2.5320024e-02 4.7981693e-01 -2.1820456e-03 -1.1981771e-02 -1.5570778e-03\n -4.0260241e-01 -3.9701592e-03 4.8948015e-01 -2.5530670e-03 -1.2370432e-02 -1.4635486e-03\n -4.0853769e-01 -1.1933711e-02 5.0577984e-01 -2.4633623e-03 -1.2588585e-02 -1.5499962e-03\n -3.9006863e-01 -1.1794524e-02 4.9257749e-01 -2.4808612e-03 -1.2652598e-02 -1.3441215e-03\n -3.7838023e-01 3.8243119e-03 4.7604844e-01 -1.9052674e-03 -1.2746960e-02 -1.0334737e-03\n -3.7931674e-01 -2.3060726e-02 4.5704348e-01 -1.8159156e-03 -1.2695319e-02 -1.1543786e-03\n -4.1299052e-01 -6.4797633e-03 4.3537501e-01 -1.3306102e-03 -1.3485557e-02 -1.3953630e-03\n -4.5219317e-01 2.5192257e-03 4.2445053e-01 -1.8071786e-03 -1.3758544e-02 -1.6208196e-03\n -4.7627709e-01 -2.1590396e-02 4.4196852e-01 -1.4650168e-03 -1.3844101e-02 -1.9013521e-03\n -5.0461874e-01 -1.8193600e-02 4.1954329e-01 -1.5006755e-03 -1.3974293e-02 -2.1765842e-03\n -4.8522123e-01 -3.1569833e-02 4.1946567e-01 -1.5642773e-03 -1.3640267e-02 -1.9315635e-03\n -4.8380089e-01 -2.8452926e-02 4.2531751e-01 -1.7992063e-03 -1.3779103e-02 -1.8965868e-03\n -4.7344440e-01 -1.3648858e-02 3.9397873e-01 -1.5801625e-03 -1.3525902e-02 -1.8249368e-03\n -4.4654714e-01 -2.3261417e-02 3.6964370e-01 -1.9013810e-03 -1.3102736e-02 -1.6067940e-03\n -4.3096117e-01 -2.0610847e-02 3.7394851e-01 -1.7780886e-03 -1.3241783e-02 -1.3317840e-03\n -4.3396482e-01 2.2688578e-02 3.9678751e-01 -1.7613367e-03 -1.3201003e-02 -1.3229597e-03\n -4.1781412e-01 -2.6604406e-03 3.4588988e-01 -1.0247653e-03 -1.3297541e-02 -1.2695168e-03\n -4.1710258e-01 -8.8548254e-03 3.5696995e-01 -9.4509370e-04 -1.3742158e-02 -9.9715985e-04\n -4.2965282e-01 5.1298331e-03 3.8521928e-01 -9.9410439e-04 -1.3780275e-02 -1.1834295e-03\n -4.4389097e-01 -2.1716874e-02 3.8626946e-01 -1.0321583e-03 -1.3874438e-02 -1.3285101e-03\n -4.5640108e-01 1.8721250e-02 3.6516930e-01 -9.8878625e-04 -1.4023136e-02 -1.5266508e-03\n -4.6060529e-01 -5.6976103e-03 3.6732884e-01 -8.8721532e-04 -1.4344370e-02 -1.4048668e-03\n -4.5287249e-01 2.2643676e-02 3.6526608e-01 -6.2353986e-04 -1.4224469e-02 -1.2024187e-03\n -4.4854097e-01 -1.4985517e-02 3.5266162e-01 -4.8270441e-04 -1.4155634e-02 -1.3069187e-03\n -4.5546116e-01 -1.2474263e-02 3.5085303e-01 -1.2554901e-04 -1.4276969e-02 -1.4859004e-03\n -4.3072032e-01 1.2660799e-02 3.3453553e-01 8.7797014e-04 -1.4071814e-02 -8.3104778e-04\n 
-4.4945408e-01 5.4996747e-02 2.9276249e-01 1.2070468e-03 -1.4159928e-02 -1.1749440e-03\n -4.5834737e-01 1.0603703e-02 3.0252330e-01 6.9342400e-04 -1.4052332e-02 -1.4619497e-03\n -4.5795160e-01 9.1893136e-03 3.1022183e-01 1.9205051e-04 -1.4610478e-02 -1.4174436e-03\n -4.4738083e-01 2.4118655e-02 2.9349834e-01 -2.6820757e-04 -1.4689778e-02 -1.0969259e-03\n -4.9210159e-01 2.4075445e-02 2.9002642e-01 -4.2004273e-04 -1.4136627e-02 -2.0175560e-03\n -5.1438451e-01 3.6602276e-02 2.4075940e-01 -5.4749506e-04 -1.3730696e-02 -2.5708314e-03\n -5.1158334e-01 2.7958231e-02 2.4989914e-01 -1.2895034e-03 -1.3879644e-02 -2.5205262e-03\n -4.9693818e-01 2.5610341e-02 2.5389104e-01 -1.3145280e-03 -1.4100662e-02 -2.3705574e-03\n -4.7820739e-01 1.6998149e-02 2.6699610e-01 -1.4433714e-03 -1.3995682e-02 -2.3234147e-03\n -4.8349647e-01 1.6708636e-03 2.4508660e-01 -1.7009878e-03 -1.4148665e-02 -2.3793712e-03\n -4.9666563e-01 2.5419154e-02 2.5554022e-01 -1.4958235e-03 -1.4725473e-02 -2.3402445e-03\n -5.2470730e-01 8.8944666e-03 2.5237614e-01 -1.1942107e-03 -1.4709806e-02 -2.6225336e-03\n -5.4322191e-01 3.0914009e-03 2.7343206e-01 -2.1124722e-03 -1.5034759e-02 -2.7544850e-03\n -5.6645079e-01 1.8710190e-02 2.9905156e-01 -2.3453796e-03 -1.5243215e-02 -3.0071342e-03\n -5.9093039e-01 1.4242311e-02 2.7864945e-01 -2.1214790e-03 -1.5096167e-02 -3.2968912e-03\n -5.9817519e-01 2.9144763e-02 2.5775517e-01 -2.0580314e-03 -1.4916217e-02 -3.4265171e-03\n -6.1787168e-01 1.9223071e-02 2.4981067e-01 -2.4766460e-03 -1.5158823e-02 -3.6407876e-03\n -5.9834490e-01 -7.3145330e-03 2.6450709e-01 -1.9793900e-03 -1.4584928e-02 -3.4703726e-03\n -5.9208235e-01 2.3920304e-02 2.7004730e-01 -2.0000461e-03 -1.4625030e-02 -3.2575886e-03\n -6.2816337e-01 6.4613673e-03 2.6958394e-01 -2.2973917e-03 -1.5128428e-02 -3.7148410e-03\n -6.4049682e-01 9.8211918e-03 2.9144277e-01 -2.5327771e-03 -1.4801563e-02 -3.9490075e-03\n -6.6390420e-01 1.9124711e-02 3.2802362e-01 -2.8819371e-03 -1.5147187e-02 -4.2583724e-03\n -6.8054750e-01 -1.3281189e-02 3.5881791e-01 -3.0734040e-03 -1.5620781e-02 -4.4811524e-03\n -6.9541632e-01 -6.0738959e-03 3.6848734e-01 -3.4719235e-03 -1.5150332e-02 -4.7618121e-03\n -7.0399729e-01 4.0952720e-03 3.9788174e-01 -3.5139573e-03 -1.5074923e-02 -4.8642958e-03\n -6.9940395e-01 -2.2348479e-02 4.1649143e-01 -3.8098178e-03 -1.5439251e-02 -4.8393688e-03\n -6.8530326e-01 -2.4485981e-03 4.1182196e-01 -4.0738119e-03 -1.5748803e-02 -4.5467668e-03\n -6.8611088e-01 -1.0631541e-02 3.9354215e-01 -3.7305508e-03 -1.5421763e-02 -4.5488014e-03\n -6.6075419e-01 -8.6485885e-03 3.7334128e-01 -3.2064478e-03 -1.5514723e-02 -4.4732926e-03\n -7.0391158e-01 -4.1102833e-02 3.8548140e-01 -4.0370041e-03 -1.5192737e-02 -4.8385390e-03\n -7.0249033e-01 -3.0199973e-02 3.8330792e-01 -3.9196802e-03 -1.4993987e-02 -4.9821405e-03\n -6.9573458e-01 -2.4476285e-02 3.6437878e-01 -3.8596596e-03 -1.5113234e-02 -4.8573310e-03\n -6.9527754e-01 -2.5868624e-02 3.7089318e-01 -3.8838699e-03 -1.4965748e-02 -4.9144204e-03\n -6.8679271e-01 -2.3313693e-02 3.6165445e-01 -4.0662813e-03 -1.4773626e-02 -4.8291365e-03\n -6.8455348e-01 -3.4764097e-02 3.7926884e-01 -4.5404093e-03 -1.4854772e-02 -4.7195322e-03\n -6.8910094e-01 -2.8651032e-02 3.7414186e-01 -4.1758032e-03 -1.4787118e-02 -4.8127843e-03\n -6.8639323e-01 -2.1433287e-02 3.7761260e-01 -3.8330833e-03 -1.4804110e-02 -4.7824119e-03\n -6.8232039e-01 -2.8936111e-03 4.0992104e-01 -4.3158329e-03 -1.4983446e-02 -4.8401740e-03\n -6.6624872e-01 -4.5150152e-02 3.7492902e-01 -4.7378779e-03 -1.4786437e-02 -4.6633668e-03\n -6.6061967e-01 -1.0435509e-02 
3.5495420e-01 -4.6070528e-03 -1.4898830e-02 -4.5382035e-03\n -6.7173014e-01 -5.1544630e-03 3.5908340e-01 -4.3300158e-03 -1.4797838e-02 -4.6318960e-03\n -6.8778773e-01 -1.1579424e-02 3.4218301e-01 -4.3919736e-03 -1.4749166e-02 -4.8167360e-03\n -7.0252663e-01 -8.8128804e-03 3.4201952e-01 -4.2743407e-03 -1.4180328e-02 -4.9784784e-03\n -6.8795102e-01 -1.3173718e-02 3.3526768e-01 -3.8489191e-03 -1.4022000e-02 -4.9797466e-03\n -6.8492645e-01 -7.9545991e-03 2.9303610e-01 -3.6001388e-03 -1.4352034e-02 -5.0377531e-03\n -6.9686251e-01 1.5468304e-02 3.3828742e-01 -3.1462028e-03 -1.4851026e-02 -4.8478826e-03\n -6.9963414e-01 -4.4826696e-03 3.5194033e-01 -2.9870889e-03 -1.5103165e-02 -5.0233235e-03\n -7.0699782e-01 -9.3684894e-03 3.2294465e-01 -3.1048275e-03 -1.5271920e-02 -5.0672115e-03\n -6.9445131e-01 2.5546619e-03 3.3542401e-01 -3.1198617e-03 -1.5417522e-02 -4.6675618e-03\n -6.8572500e-01 1.4077848e-02 3.2726545e-01 -2.6810437e-03 -1.5499242e-02 -4.8228385e-03\n -6.6813120e-01 -8.6150664e-04 2.7814568e-01 -2.8955272e-03 -1.4898280e-02 -4.8442769e-03\n 164 rp_bold.txt\n"
],
[
"import numpy as np\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots(2)\nax[0].plot(np.genfromtxt('bold_mcf.nii.gz.par')[:, 3:])\nax[0].set_title('FSL')\n\nax[1].plot(np.genfromtxt('rp_bold.txt')[:, :3])\nax[1].set_title('SPM')",
"_____no_output_____"
]
],
[
[
"#### if i execute the MCFLIRT line again, well, it runs again!",
"_____no_output_____"
],
[
"# Step 3. Nipype caching",
"_____no_output_____"
]
],
[
[
"from nipype.caching import Memory\n\nmem = Memory('.')",
"_____no_output_____"
]
],
[
[
"### Create `cacheable` objects",
"_____no_output_____"
]
],
[
[
"spm_realign = mem.cache(Realign)\nfsl_realign = mem.cache(MCFLIRT)",
"_____no_output_____"
]
],
[
[
"### Execute interfaces",
"_____no_output_____"
]
],
[
[
"spm_results = spm_realign(in_files='ds107.nii', register_to_mean=False)\nfsl_results = fsl_realign(in_file='ds107.nii', ref_vol=0, save_plots=True)",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(2)\nax[0].plot(np.genfromtxt(fsl_results.outputs.par_file)[:, 3:])\nax[1].plot(np.genfromtxt(spm_results.outputs.realignment_parameters)[:,:3])",
"_____no_output_____"
]
],
[
[
"# More caching",
"_____no_output_____"
]
],
[
[
"files = required_files[0:2]\nprint(files)",
"['/Users/alexandre/nipype-tutorial/ds107/sub001/BOLD/task001_run001/bold.nii.gz', '/Users/alexandre/nipype-tutorial/ds107/sub001/BOLD/task001_run002/bold.nii.gz']\n"
],
[
"converter = mem.cache(Gunzip)\nnewfiles = []\n\nfor idx, fname in enumerate(files):\n results = converter(in_file=fname)\n newfiles.append(results.outputs.out_file)\n\nprint(newfiles)",
"['/Users/alexandre/Dropbox (Personal)/Documents/projects/swcarpentry/lessons/nipype-lessons/notebooks/nipype_mem/nipype-algorithms-misc-Gunzip/95c5f9fcf1c96139e3d2f69375cf7746/bold.nii', '/Users/alexandre/Dropbox (Personal)/Documents/projects/swcarpentry/lessons/nipype-lessons/notebooks/nipype_mem/nipype-algorithms-misc-Gunzip/2f1d2b6c602a72b50f16d77126e94bed/bold.nii']\n"
],
[
"os.chdir(tutorial_dir)",
"_____no_output_____"
]
],
[
[
"# Step 4: Nodes, Mapnodes and workflows",
"_____no_output_____"
]
],
[
[
"from nipype.pipeline.engine import Node, MapNode, Workflow",
"_____no_output_____"
]
],
[
[
"**Node**:",
"_____no_output_____"
]
],
[
[
"realign_spm = Node(Realign(), name='motion_correct')",
"_____no_output_____"
]
],
[
[
"**Mapnode**:\n\n<img src=\"https://raw.github.com/satra/intro2nipype/master/images/mapnode.png\" width=\"30%\">",
"_____no_output_____"
]
],
[
[
"convert2nii = MapNode(Gunzip(), iterfield=['in_file'],\n name='convert2nii')",
"_____no_output_____"
]
],
[
[
"# \"Hello World\" of Nipype workflows",
"_____no_output_____"
]
],
[
[
"realignflow = Workflow(name='realign_with_spm')\n\n#realignflow.connect(convert2nii, 'out_file', realign_spm, 'in_files')\n\nrealignflow.connect([(convert2nii, realign_spm, [('out_file', 'in_files')]) ])",
"_____no_output_____"
],
[
"convert2nii.inputs.in_file = required_files\nrealign_spm.inputs.register_to_mean = False\n\nrealignflow.base_dir = '.'\nrealignflow.run()",
"/Users/alexandre/envs/pytre/lib/python3.5/site-packages/IPython/kernel/__init__.py:13: ShimWarning: The `IPython.kernel` package has been deprecated. You should import from ipykernel or jupyter_client instead.\n \"You should import from ipykernel or jupyter_client instead.\", ShimWarning)\n"
]
],
[
[
"# Visualize the workflow",
"_____no_output_____"
]
],
[
[
"realignflow.write_graph()",
"_____no_output_____"
],
[
"from IPython.core.display import Image\nImage('realign_with_spm/graph.dot.png')",
"_____no_output_____"
],
[
"realignflow.write_graph(graph2use='orig')\nImage('realign_with_spm/graph_detailed.dot.png')",
"_____no_output_____"
]
],
[
[
"# Step 5. Getting and saving data\n\n### Let's use *glob*",
"_____no_output_____"
]
],
[
[
"cd $tutorial_dir",
"/Users/alexandre/nipype-tutorial\n"
],
[
"from nipype.interfaces.io import DataGrabber, DataFinder\n\nds = Node(DataGrabber(infields=['subject_id'], \n outfields=['func']),\n name='datasource')\n\nds.inputs.base_directory = op.join(tutorial_dir, 'ds107')\nds.inputs.template = '%s/BOLD/task001*/bold.nii.gz'\nds.inputs.sort_filelist = True\n\nds.inputs.subject_id = 'sub001'\n\nprint(ds.run().outputs)",
"\nfunc = ['/Users/alexandre/nipype-tutorial/ds107/sub001/BOLD/task001_run001/bold.nii.gz', '/Users/alexandre/nipype-tutorial/ds107/sub001/BOLD/task001_run002/bold.nii.gz']\n\n"
],
[
"ds.inputs.subject_id = ['sub001', 'sub044']\nprint(ds.run().outputs)",
"\nfunc = [['/Users/alexandre/nipype-tutorial/ds107/sub001/BOLD/task001_run001/bold.nii.gz', '/Users/alexandre/nipype-tutorial/ds107/sub001/BOLD/task001_run002/bold.nii.gz'], ['/Users/alexandre/nipype-tutorial/ds107/sub044/BOLD/task001_run001/bold.nii.gz', '/Users/alexandre/nipype-tutorial/ds107/sub044/BOLD/task001_run002/bold.nii.gz']]\n\n"
]
],
[
[
"# Multiple files per subject",
"_____no_output_____"
]
],
[
[
"ds = Node(DataGrabber(infields=['subject_id', 'task_id'],\n outfields=['func', 'anat']),\n name='datasource')\n\nds.inputs.base_directory = os.path.abspath('ds107')\nds.inputs.template = '*'\nds.inputs.template_args = {'func': [['subject_id', 'task_id']],\n 'anat': [['subject_id']]}\n\nds.inputs.field_template = {'func': '%s/BOLD/task%03d*/bold.nii.gz',\n 'anat': '%s/anatomy/highres001.nii.gz'}\n\nds.inputs.sort_filelist = True\nds.inputs.subject_id = ['sub001', 'sub044']\nds.inputs.task_id = 1\n\nds_out = ds.run()\n\nprint(ds_out.outputs)",
"_____no_output_____"
]
],
[
[
"# Connecting to computation",
"_____no_output_____"
]
],
[
[
"convert2nii = MapNode(Gunzip(), iterfield=['in_file'],\n name='convert2nii')\n\nrealign_spm = Node(Realign(), name='motion_correct')\nrealign_spm.inputs.register_to_mean = False\n\nconnectedworkflow = Workflow(name='connectedtogether')\nconnectedworkflow.base_dir = os.path.abspath('working_dir')\n\nconnectedworkflow.connect([(ds, convert2nii, [('func', 'in_file')]),\n (convert2nii, realign_spm, [('out_file', 'in_files')]),\n ])",
"_____no_output_____"
]
],
[
[
"# Data sinking\n\n### Take output computed in a workflow out of it.",
"_____no_output_____"
]
],
[
[
"from nipype.interfaces import DataSink\n\nsinker = Node(DataSink(), name='sinker')\nsinker.inputs.base_directory = os.path.abspath('output')\n\nconnectedworkflow.connect([(realign_spm, sinker, [('realigned_files', 'realigned'),\n ('realignment_parameters', 'realigned.@parameters'),\n ]),\n ])",
"_____no_output_____"
]
],
[
[
"### How to determine output location\n\n 'base_directory/container/parameterization/destloc/filename'\n \n destloc = [@]string[[.[@]]string[[.[@]]string]...] and\n destloc = realigned.@parameters --> 'realigned'\n destloc = realigned.parameters.@1 --> 'realigned/parameters'\n destloc = realigned.parameters.@2 --> 'realigned/parameters'\n filename comes from the input to the connect statement.",
"_____no_output_____"
]
],
[
[
"connectedworkflow.run?",
"_____no_output_____"
]
],
[
[
"# Step 6: *iterables* - parametric execution\n\n**Workflow + iterables**: runs subgraph several times, attribute not input",
"_____no_output_____"
],
[
"<img src=\"https://raw.github.com/satra/intro2nipype/master/images/iterables.png\" width=\"30%\">",
"_____no_output_____"
]
],
[
[
"ds.iterables = ('subject_id', ['sub001', 'sub044'])\nconnectedworkflow.run()",
"_____no_output_____"
]
],
[
[
"# Putting it all together\n\n### iterables + MapNode + Node + Workflow + DataGrabber + DataSink",
"_____no_output_____"
]
],
[
[
"connectedworkflow.write_graph()\nImage('working_dir/connectedtogether/graph.dot.png')",
"_____no_output_____"
]
],
[
[
"# Step 7: The Function interface\n\n### The do anything you want card",
"_____no_output_____"
]
],
[
[
"from nipype.interfaces.utility import Function\n\ndef myfunc(input1, input2):\n \"\"\"Add and subtract two inputs.\"\"\"\n return input1 + input2, input1 - input2\n\ncalcfunc = Node(Function(input_names=['input1', 'input2'],\n output_names = ['sum', 'difference'],\n function=myfunc),\n name='mycalc')\n\ncalcfunc.inputs.input1 = 1\ncalcfunc.inputs.input2 = 2\n\nres = calcfunc.run()\n\nprint res.outputs",
"_____no_output_____"
]
],
[
[
"# Step 8: Distributed computing\n\n### Normally calling run executes the workflow in series",
"_____no_output_____"
]
],
[
[
"connectedworkflow.run()",
"_____no_output_____"
]
],
[
[
"### but you can scale very easily\n\nFor example, to use multiple cores on your local machine",
"_____no_output_____"
]
],
[
[
"connectedworkflow.run('MultiProc', plugin_args={'n_procs': 4})",
"_____no_output_____"
]
],
[
[
"### Or to other job managers\n\n```python\nconnectedworkflow.run('PBS', plugin_args={'qsub_args': '-q many'})\n\nconnectedworkflow.run('SGE', plugin_args={'qsub_args': '-q many'})\n\nconnectedworkflow.run('LSF', plugin_args={'qsub_args': '-q many'})\n\nconnectedworkflow.run('Condor')\n\nconnectedworkflow.run('IPython')\n```",
"_____no_output_____"
],
[
"### or submit graphs as a whole\n\n```python\nconnectedworkflow.run('PBSGraph', plugin_args={'qsub_args': '-q many'})\n\nconnectedworkflow.run('SGEGraph', plugin_args={'qsub_args': '-q many'})\n\nconnectedworkflow.run('CondorDAGMan')\n```",
"_____no_output_____"
],
[
"### Current Requirement: **SHARED FILESYSTEM**",
"_____no_output_____"
],
[
"### You can also set node specific plugin arguments",
"_____no_output_____"
],
[
"```python\nnode.plugin_args = {'qsub_args': '-l nodes=1:ppn=3', 'overwrite': True}\n```",
"_____no_output_____"
],
[
"# Step 9: Connecting to Databases",
"_____no_output_____"
]
],
[
[
"from os.path import abspath as opap\n\nfrom nipype.interfaces.io import XNATSource\nfrom nipype.pipeline.engine import Node, Workflow\nfrom nipype.interfaces.fsl import BET\n\nsubject_id = 'xnat_S00001'\n\ndg = Node(XNATSource(infields=['subject_id'],\n outfields=['struct'],\n config='/Users/satra/xnat_configs/nitrc_ir_config'),\n name='xnatsource')\n\ndg.inputs.query_template = ('/projects/fcon_1000/subjects/%s/experiments/xnat_E00001'\n '/scans/%s/resources/NIfTI/files')\ndg.inputs.query_template_args['struct'] = [['subject_id', 'anat_mprage_anonymized']]\ndg.inputs.subject_id = subject_id\n\nbet = Node(BET(), name='skull_stripper')\n\nwf = Workflow(name='testxnat')\nwf.base_dir = opap('xnattest')\n\nwf.connect(dg, 'struct', bet, 'in_file')",
"_____no_output_____"
],
[
"from nipype.interfaces.io import XNATSink\n\nds = Node(XNATSink(config='/Users/satra/xnat_configs/central_config'),\n name='xnatsink')\n\nds.inputs.project_id = 'NPTEST'\nds.inputs.subject_id = 'NPTEST_xnat_S00001'\nds.inputs.experiment_id = 'test_xnat'\nds.inputs.reconstruction_id = 'bet'\nds.inputs.share = True\n\nwf.connect(bet, 'out_file', ds, 'brain')",
"_____no_output_____"
],
[
"wf.run()",
"_____no_output_____"
]
],
[
[
"# Step 10: Configuration options\n\n[Configurable options](http://nipy.org/nipype/users/config_file.html) control workflow and node execution options\n\nAt the global level:",
"_____no_output_____"
]
],
[
[
"from nipype import config, logging\n\nconfig.enable_debug_mode()\nlogging.update_logging(config)\n\nconfig.set('execution', 'stop_on_first_crash', 'true')",
"_____no_output_____"
]
],
[
[
"At the workflow level:",
"_____no_output_____"
]
],
[
[
"wf.config['execution']['hash_method'] = 'content'",
"_____no_output_____"
]
],
[
[
"Configurations can also be set at the node level.",
"_____no_output_____"
]
],
[
[
"bet.config = {'execution': {'keep_unnecessary_outputs': 'true'}}",
"_____no_output_____"
],
[
"wf.run()",
"_____no_output_____"
]
],
[
[
"# Reusable workflows",
"_____no_output_____"
]
],
[
[
"config.set_default_config()\nlogging.update_logging(config)",
"_____no_output_____"
],
[
"from nipype.workflows.fmri.fsl.preprocess import create_susan_smooth\n\nsmooth = create_susan_smooth()\nsmooth.inputs.inputnode.in_files = opap('output/realigned/_subject_id_sub044/rbold_out.nii')\nsmooth.inputs.inputnode.fwhm = 5\nsmooth.inputs.inputnode.mask_file = 'mask.nii'\n\nsmooth.run() # Will error because mask.nii does not exist",
"_____no_output_____"
],
[
"from nipype.interfaces.fsl import BET, MeanImage, ImageMaths\nfrom nipype.pipeline.engine import Node\n\nremove_nan = Node(ImageMaths(op_string= '-nan'), name='nanremove')\nremove_nan.inputs.in_file = op.abspath('output/realigned/_subject_id_sub044/rbold_out.nii')\n\nmi = Node(MeanImage(), name='mean')\nmask = Node(BET(mask=True), name='mask')\n\nwf = Workflow('reuse')\nwf.base_dir = op.abspath(op.curdir)\n\nwf.connect([(remove_nan, mi, ['out_file', 'in_file']),\n (mi, mask, ['out_file', 'in_file']),\n (mask, smooth, ['out_file', 'inputnode.mask_file']),\n (remove_nan, smooth, ['out_file', 'inputnode.in_files']),\n ])\n\nwf.run()",
"_____no_output_____"
]
],
[
[
"## Setting internal parameters of workflows",
"_____no_output_____"
]
],
[
[
"print(smooth.list_node_names())\n\nmedian = smooth.get_node('median')\nmedian.inputs.op_string = '-k %s -p 60'",
"_____no_output_____"
],
[
"wf.run()",
"_____no_output_____"
]
],
[
[
"# Summary\n\n\n- This tutorial covers the concepts of Nipype\n\n 1. Installing and testing the installation \n 2. Working with interfaces\n 3. Using Nipype caching\n 4. Creating Nodes, MapNodes and Workflows\n 5. Getting and saving data\n 6. Using Iterables\n 7. Function nodes\n 8. Distributed computation\n 9. Connecting to databases\n 10. Execution configuration options\n\n- It will allow you to reuse and debug the various workflows available in Nipype, BIPS and CPAC\n- Please contribute new interfaces and workflows!",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
4a06fa88a4abcca8e5809658dd7442f7c12e2b91
| 58,983 |
ipynb
|
Jupyter Notebook
|
model.ipynb
|
vivekgangwar02/Abstractive-Summarization-of-Text
|
36cdf118e70e60634c2cac1e7861c9e1e613076f
|
[
"MIT"
] | 1 |
2020-04-15T13:34:20.000Z
|
2020-04-15T13:34:20.000Z
|
model.ipynb
|
vivekgangwar02/Abstractive-Summarization-of-Text
|
36cdf118e70e60634c2cac1e7861c9e1e613076f
|
[
"MIT"
] | null | null | null |
model.ipynb
|
vivekgangwar02/Abstractive-Summarization-of-Text
|
36cdf118e70e60634c2cac1e7861c9e1e613076f
|
[
"MIT"
] | null | null | null | 81.020604 | 17,379 | 0.583575 |
[
[
[
"from google.colab import drive\ndrive.mount('/content/drive')\nimport os\nprint(os.getcwd())\nos.chdir('/content/drive/My Drive/Colab Notebooks/summarization')\nprint(os.listdir())",
"Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly\n\nEnter your authorization code:\n··········\nMounted at /content/drive\n/content\n['glove.6B.100d.txt', 'glove.6B.200d.txt', 'glove.6B.50d.txt', 'summ', 'dailymail.tgz', 'cnn.tgz', 'data_read_test.ipynb', 'PrepareInput.ipynb', 'glove.6B.300d.txt', 'sent', 'testcode.ipynb', 'summ2', 's2s.h5', 'model.ipynb']\n"
],
[
"import os\nimport numpy as np\nimport pandas as pd\nimport sys\nimport os\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' \nimport tensorflow as tf\nfrom tensorflow.python.client import device_lib \nprint(device_lib.list_local_devices())\nimport keras\norig = os.getcwd()\nprint(orig)",
"[name: \"/device:CPU:0\"\ndevice_type: \"CPU\"\nmemory_limit: 268435456\nlocality {\n}\nincarnation: 15377469079644100594\n, name: \"/device:XLA_CPU:0\"\ndevice_type: \"XLA_CPU\"\nmemory_limit: 17179869184\nlocality {\n}\nincarnation: 10391440037780190724\nphysical_device_desc: \"device: XLA_CPU device\"\n]\n/content/drive/My Drive/Colab Notebooks/summarization\n"
],
[
"#Loading Data and Preparing vocab\nfrom keras.preprocessing.sequence import pad_sequences\nfrom keras.preprocessing.text import Tokenizer\ntokenizer = Tokenizer(num_words=sys.maxsize,filters ='', lower=False, oov_token = '<OOV>')",
"_____no_output_____"
],
[
"print(dir(tokenizer))",
"['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_keras_api_names', '_keras_api_names_v1', 'char_level', 'document_count', 'filters', 'fit_on_sequences', 'fit_on_texts', 'get_config', 'index_docs', 'index_word', 'lower', 'num_words', 'oov_token', 'sequences_to_matrix', 'sequences_to_texts', 'sequences_to_texts_generator', 'split', 'texts_to_matrix', 'texts_to_sequences', 'texts_to_sequences_generator', 'to_json', 'word_counts', 'word_docs', 'word_index']\n"
],
[
"#Initializing tokenizer for Vocabulary\ndata1 = open('sent').readlines()\ndata2 = open('summ2').readlines()\n\ntokenizer.fit_on_texts(data1)\ntokenizer.fit_on_texts(data2)\n\nprint(\"No. of articles and summ\",len(data2),len(data1))\n\ndictionary = tokenizer.word_index\nword2idx = {}\nidx2word = {}\nnum_encoder_tokens = len(tokenizer.word_index)+1\nnum_decoder_tokens = len(tokenizer.word_index)+1\nfor k, v in dictionary.items():\n word2idx[k] = v\n idx2word[v] = k",
"No. of articles and summ 92454 92454\n"
],
[
"#Encoding data to integers\nsent = tokenizer.texts_to_sequences(data1)\nsumm = tokenizer.texts_to_sequences(data2)",
"_____no_output_____"
],
[
"#padding sequences\n#Finding the maximum sequence length\nMAX_INPUT_LENGTH = max(len(i.split()) for i in data1)\nprint(MAX_INPUT_LENGTH)\nMAX_TARGET_LENGTH = max(len(j.split()) for j in data2)\nprint(MAX_TARGET_LENGTH)\npadded_sent = pad_sequences(sent, maxlen = MAX_INPUT_LENGTH,padding = 'post')\npadded_summ = pad_sequences(summ, maxlen = MAX_TARGET_LENGTH,padding = 'post')\nprint(padded_sent.shape,padded_summ.shape,type(padded_sent))",
"2009\n112\n(92454, 2009) (92454, 112) <class 'numpy.ndarray'>\n"
],
[
"#preparing training data\nencoder_input_data = padded_sent.copy()\ndecoder_input_data = padded_summ.copy()\n# print(decoder_input_data[0],decoder_input_data[1])\ndecoder_target_data = np.roll(decoder_input_data, -1, axis = -1)\ndecoder_target_data[:,-1] = 0\n\n# encoder_input_data.reshape(-1,1,MAX_INPUT_LENGTH)\n# decoder_input_data = decoder_input_data.reshape(-1,1,MAX_TARGET_LENGTH)\ndecoder_target_data = decoder_target_data.reshape(-1,MAX_TARGET_LENGTH,1)\n# encoder_input_data = tf.one_hot(encoder_input_data, len(tokenizer.word_index))\n# decoder_input_data = tf.one_hot(decoder_input_data, len(tokenizer.word_index))\n# decoder_target_data = tf.one_hot(decoder_target_data, len(tokenizer.word_index))\nprint(encoder_input_data.shape,decoder_input_data.shape,decoder_target_data.shape)\n\n# print(decoder_input_data[0],decoder_target_data[0])",
"(92454, 2009) (92454, 112) (92454, 112, 1)\n"
],
[
"# Preparing GloVe\nEMBEDDING_DIM = 300\nembeddings_index = {}\nf = open(os.path.join('', 'glove.6B.{}d.txt'.format(EMBEDDING_DIM)))\nfor line in f:\n values = line.split()\n word = values[0]\n coefs = np.asarray(values[1:], dtype='float32')\n embeddings_index[word] = coefs\nf.close()",
"_____no_output_____"
],
[
"\"fishtailed\" in embeddings_index",
"_____no_output_____"
],
[
"#Embedding matrix\nembedding_matrix = np.zeros((len(tokenizer.word_index)+1, EMBEDDING_DIM),dtype='float32')\nfor word,i in tokenizer.word_index.items():\n embedding_vector = embeddings_index.get(word)\n if embedding_vector is not None:\n embedding_matrix[i] = embedding_vector\nprint(embedding_matrix.shape)",
"(348009, 300)\n"
],
[
"#Creating the Bidirectional model\nfrom keras.layers import Embedding\nfrom keras.layers import Dense, LSTM, Input, concatenate\nfrom keras.models import Model\nbatch_size = 32\nepochs = 10\nHIDDEN_UNITS_ENC = 256\nnum_samples = 10000",
"_____no_output_____"
],
[
"encoder_inputs = Input(shape=(MAX_INPUT_LENGTH,), name='encoder_inputs')\nembedding_layer = Embedding(num_encoder_tokens, EMBEDDING_DIM, weights=[embedding_matrix],\n input_length=MAX_INPUT_LENGTH, trainable=False, name='embedding_layer')\n\nencoder_rnn = LSTM(units=HIDDEN_UNITS_ENC, return_state=True, dropout=0.5, recurrent_dropout=0.5,name='encoder_lstm')\nencoder_output, state_h_f, state_c_f = encoder_rnn(embedding_layer(encoder_inputs))\nencoder_rnn2 = LSTM(units=HIDDEN_UNITS_ENC, return_state=True, dropout=0.5, recurrent_dropout=0.5,\ngo_backwards=True,name='encoder_lstm_backward')\nencoder_output, state_h_b, state_c_b = encoder_rnn2(embedding_layer(encoder_inputs))\n\nstate_h = concatenate([state_h_f, state_h_b])\nstate_c = concatenate([state_c_f, state_c_b])\n\nencoder_states = [state_h, state_c]\n\ndecoder_inputs = Input(shape=(None,), name='decoder_inputs')\nembedding_layer = Embedding(num_decoder_tokens, EMBEDDING_DIM, weights=[embedding_matrix], trainable=False, name='emb_2')\ndecoder_lstm = LSTM(HIDDEN_UNITS_ENC * 2, return_sequences=True, return_state=True, dropout=0.5,\nrecurrent_dropout=0.5, name='decoder_lstm')\ndecoder_outputs, state_h, state_c = decoder_lstm(embedding_layer(decoder_inputs), initial_state=encoder_states)\n\ndecoder_dense = Dense(num_decoder_tokens, name='decoder_dense')\ndecoder_outputs = decoder_dense(decoder_outputs)\nmodel = Model([encoder_inputs, decoder_inputs], decoder_outputs)",
"_____no_output_____"
],
[
"print(model.summary())\n# visualize model structure\nfrom IPython.display import SVG\nfrom keras.utils.vis_utils import model_to_dot\n\nSVG(model_to_dot(model, show_shapes=True, show_layer_names=False, \n rankdir='TB',dpi=65).create(prog='dot', format='svg'))",
"Model: \"model_4\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\nencoder_inputs (InputLayer) (None, 2009) 0 \n__________________________________________________________________________________________________\nembedding_layer (Embedding) (None, 2009, 300) 104402700 encoder_inputs[0][0] \n encoder_inputs[0][0] \n__________________________________________________________________________________________________\ndecoder_inputs (InputLayer) (None, None) 0 \n__________________________________________________________________________________________________\nencoder_lstm (LSTM) [(None, 128), (None, 219648 embedding_layer[0][0] \n__________________________________________________________________________________________________\nencoder_lstm_backward (LSTM) [(None, 128), (None, 219648 embedding_layer[1][0] \n__________________________________________________________________________________________________\nemb_2 (Embedding) (None, None, 300) 104402700 decoder_inputs[0][0] \n__________________________________________________________________________________________________\nconcatenate_3 (Concatenate) (None, 256) 0 encoder_lstm[0][1] \n encoder_lstm_backward[0][1] \n__________________________________________________________________________________________________\nconcatenate_4 (Concatenate) (None, 256) 0 encoder_lstm[0][2] \n encoder_lstm_backward[0][2] \n__________________________________________________________________________________________________\ndecoder_lstm (LSTM) [(None, None, 256), 570368 emb_2[0][0] \n concatenate_3[0][0] \n concatenate_4[0][0] \n__________________________________________________________________________________________________\ndecoder_dense (Dense) (None, None, 348009) 89438313 decoder_lstm[0][0] \n==================================================================================================\nTotal params: 299,253,377\nTrainable params: 90,447,977\nNon-trainable params: 208,805,400\n__________________________________________________________________________________________________\nNone\n"
],
[
"model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['acc'])\nmodel.fit([encoder_input_data,decoder_input_data],decoder_target_data,batch_size = batch_size, epochs = epochs,validation_split=0.9)\nmodel.save('s2s.h5')",
"_____no_output_____"
],
[
"from keras.models import load_model\nmodel = load_model('s2s.h5')",
"_____no_output_____"
],
[
"#inference step\nencoder_model = Model(encoder_inputs, encoder_states)\n# encoder_model.summary()\n\ndecoder_state_input_h = Input(shape = (HIDDEN_UNITS_ENC*2,))\ndecoder_state_input_c = Input(shape = (HIDDEN_UNITS_ENC*2,))\ndecoder_states_inputs = [decoder_state_input_h,decoder_state_input_c]\ndecoder_output, state_h, state_c = decoder_lstm(embedding_layer(decoder_inputs), initial_state = decoder_states_inputs)\ndecoder_states = [state_h,state_c]\ndecoder_outputs = decoder_dense(decoder_output)\ndecoder_model = Model([decoder_inputs] + decoder_states_inputs, [decoder_outputs] + decoder_states)\n",
"_____no_output_____"
],
[
"decoder_model.summary()\n# visualize model structure\nfrom IPython.display import SVG\nfrom keras.utils.vis_utils import model_to_dot\n\nSVG(model_to_dot(decoder_model, show_shapes=True, show_layer_names=False, \n rankdir='TB',dpi = 70).create(prog='dot', format='svg'))",
"Model: \"model_6\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ndecoder_inputs (InputLayer) (None, None) 0 \n__________________________________________________________________________________________________\nemb_2 (Embedding) (None, None, 300) 104402700 decoder_inputs[0][0] \n__________________________________________________________________________________________________\ninput_3 (InputLayer) (None, 256) 0 \n__________________________________________________________________________________________________\ninput_4 (InputLayer) (None, 256) 0 \n__________________________________________________________________________________________________\ndecoder_lstm (LSTM) [(None, None, 256), 570368 emb_2[1][0] \n input_3[0][0] \n input_4[0][0] \n__________________________________________________________________________________________________\ndecoder_dense (Dense) (None, None, 348009) 89438313 decoder_lstm[1][0] \n==================================================================================================\nTotal params: 194,411,381\nTrainable params: 90,008,681\nNon-trainable params: 104,402,700\n__________________________________________________________________________________________________\n"
],
[
"#decoding sequences\ndef decode_sequence(input_seq):\n # Encode the input as state vectors.\n states_value = encoder_model.predict(input_seq)\n\n # Generate empty target sequence of length 1.\n target_seq = np.zeros((1,1))\n target_seq[0, 0] = tokenizer.word_index[\"<BOS>\"]\n\n # Sampling loop for a batch of sequences\n # (to simplify, here we assume a batch of size 1).\n stop_condition = False\n decoded_sentence = ''\n while not stop_condition:\n output_tokens, h, c = decoder_model.predict(\n [target_seq] + states_value)\n\n # Sample a token\n sampled_token_index = np.argmax(output_tokens[0,0])\n sampled_char = idx2word[sampled_token_index]\n # print(sampled_token_index,end=\" \")\n decoded_sentence += sampled_char + \" \"\n\n # Exit condition: either hit max length\n # or find stop character.\n if (sampled_char == '<EOS>' or\n len(decoded_sentence) > MAX_TARGET_LENGTH):\n stop_condition = True\n\n # Update the target sequence (of length 1).\n target_seq[0, 0] = sampled_token_index\n\n # Update states\n states_value = [h, c]\n\n return decoded_sentence",
"_____no_output_____"
],
[
"seq = 1\ninput_seq = encoder_input_data[seq:seq+1]\ndecoded_sentence = decode_sequence(input_seq)\nprint('-')\nprint('Article:', data1[seq].strip())\nprint('Actual Summary:', data2[seq][5:-5])\nprint('Predicted Summary:', decoded_sentence)",
"-\nArticle: brunei has become the first east asian country to adopt sharia law despite widespread condemnation from international human rights groups . the islamic criminal law is set to include punishments such as flogging dismemberment and death by stoning for crimes such as rape adultery and sodomy . the religious laws will operate alongside the existing civil penal code . during a ceremony wednesday morning the sultan of brunei hassanal bolkiah announced the commencement of the first phase of the shariabased penal code according to the government 's official website . the oilrich kingdom located on the island of borneo has a population of just 412 000 people . the country already follows a more conservative islamic rule than neighboring muslimdominated countries like indonesia and malaysia and has implemented strict religiouslymotivated laws such as the banning of the sale of alcohol . stringent laws in response to the new set of laws human rights group amnesty international said that it will `` take the country back to the dark ages . '' `` it the law makes a mockery of the country 's international human rights commitments and must be revoked immediately `` amnesty 's regional deputy director rupert abbott said in a statement released after the announcement . most parts of the new islamic code will apply to both muslims and nonmuslims affecting people from the christian and buddhist communities . around 70 percent of people in brunei are malay muslims while the remainder of the population are of chinese or other ethnic descent . the sultan who is also the prime minister first announced the law in october 2013. as per its provisions sexual offenses such as rape adultery and sodomy will be considered punishable acts for muslims . consensual sex between homosexuals will also be criminalized with death by stoning the prescribed punishment . in announcing the implementation of sharia law the government website quoted the sultan as saying that his government `` does not expect other people to accept and agree with it but that it would suffice if they just respect the nation in the same way that it also respects them . '' widespread condemnation lgbt advocacy groups in asia have voiced their opposition to brunei 's implementation of sharia law . `` it may open the floodgates for further human rights violations against women children and other people on the basis of sexual orientation and gender identity `` officials from the asia pacific coalition on male sexual health apcom and islands of south east asian network on male and transgender sexual health isean said in a joint statement released last week . the united nations has also publicly condemned the move . `` under international law stoning people to death constitutes torture or other cruel inhuman or degrading treatment or punishment and is thus clearly prohibited `` rupert colville spokesperson for the u.n. high commissioner for human rights said in a press briefing in geneva last month . antiwomen provisions he further expressed concerns about the implementation of sharia law 's impact on women . `` a number of un studies have revealed that women are more likely to be sentenced to death by stoning due to deeply entrenched discrimination and stereotyping against them . 
'' more than 40 000 people have attended briefing sessions organized by the government in the last four months to understand the provisions under the new islamic criminal law the country 's religious affairs minister said during a ceremony to mark the laws ' implementation .\nActual Summary: brunei has become the first east asian country to adopt sharia law . the shariabased penal code will eventually include death by stoning . international human rights groups have publicly condemned the move . <\nPredicted Summary: kievbased hermanos hermanos sharia4belgium 2footwide supercoach barbaresi charleyproject newscast reef.. zemlja zambales \n"
],
[
"",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a0710ef6d339df953507980bbbc9031e60c48b2
| 404,558 |
ipynb
|
Jupyter Notebook
|
.ipynb_checkpoints/3_Boxes on lanes created_now line will be fitted-checkpoint.ipynb
|
animesh-singhal/Project-2
|
d58a674eb0030fb5e998027341ef387570ac4742
|
[
"MIT"
] | null | null | null |
.ipynb_checkpoints/3_Boxes on lanes created_now line will be fitted-checkpoint.ipynb
|
animesh-singhal/Project-2
|
d58a674eb0030fb5e998027341ef387570ac4742
|
[
"MIT"
] | null | null | null |
.ipynb_checkpoints/3_Boxes on lanes created_now line will be fitted-checkpoint.ipynb
|
animesh-singhal/Project-2
|
d58a674eb0030fb5e998027341ef387570ac4742
|
[
"MIT"
] | null | null | null | 429.467091 | 365,628 | 0.927946 |
[
[
[
"## The goals / steps of this project are the following:\n\n1) Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.\n\n2) Apply a distortion correction to raw images.\n\n3) Use color transforms, gradients, etc., to create a thresholded binary image.\n\n4) Apply a perspective transform to rectify binary image (\"birds-eye view\").\n\n5) Detect lane pixels and fit to find the lane boundary.\n\n6) Determine the curvature of the lane and vehicle position with respect to center.\n\n7) Warp the detected lane boundaries back onto the original image.\n\n8) Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.",
"_____no_output_____"
],
[
"Strategy: \n - Generate and save transformation matrices to undistort images\n - Create undistort function",
"_____no_output_____"
],
[
"# 1) Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.\n\nThe images for camera calibration are stored in the folder called camera_cal. The images in test_images are for testing your pipeline on single frames. If you want to extract more test images from the videos, you can simply use an image writing method like cv2.imwrite(), i.e., you can read the video in frame by frame as usual, and for frames you want to save for later you can write to an image file.\n\n## Saves the relevant transformation matrices in a pickle file for saving time in the next set of codes",
"_____no_output_____"
]
],
[
[
"\"\"\"\nimport numpy as np\nimport cv2\nimport glob\nimport pickle\nimport matplotlib.pyplot as plt\n#%matplotlib notebook\n\n\"\"\"A: Finding image and object points\"\"\"\n\ndef undistort(test_img):\n# prepare object points (our ideal reference), like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)\n# Stores mtx and dist coefficients in a pickle file to use later\n nx=9 # Number of inner corners of our chessboard along x axis (or columns)\n ny=6 # Number of inner corners of our chessboard along y axis (or rows)\n\n objp = np.zeros((ny*nx,3), np.float32) #We have 9 corners on X axis and 6 corners on Y axis\n objp[:,:2] = np.mgrid[0:nx, 0:ny].T.reshape(-1,2) # Gives us coorinate points in pairs as a list of 54 items. It's shape will be (54,2) \n\n # Arrays to store object points and image points from all the images.\n objpoints = [] # 3d points in real world space. These are the points for our ideal chessboard which we are using as a reference. \n imgpoints = [] # 2d points in image plane. We'll extract these from the images given for caliberating the camera\n\n # Make a list of calibration images\n images = glob.glob('camera_cal/calibration*.jpg')\n\n # Step through the list and search for chessboard corners\n for idx, fname in enumerate(images):\n calib_img = cv2.imread(fname)\n gray = cv2.cvtColor(calib_img, cv2.COLOR_BGR2GRAY)\n\n # Find the chessboard corners\n # Grayscale conversion ensures an 8bit image as input.The next function needs that kind of input only. Generally color images are 24 bit images. (Refer \"Bits in images\" in notes) \n ret, corners = cv2.findChessboardCorners(gray, (nx,ny), None)\n\n # If found, add object points, image points\n if ret == True:\n objpoints.append(objp) # These will be same for caliberation image. The same points will get appended every time this fires up \n imgpoints.append(corners) # Corners \n \n # Draw and display the corners #This step can be completely skipped\n cv2.drawChessboardCorners(calib_img, (nx,ny), corners, ret)\n write_name = 'corners_found'+str(idx)+'.jpg'\n cv2.imwrite('output_files/corners_found_for_calib/'+write_name, calib_img) \n cv2.imshow(write_name, calib_img) #We dont want to see the images now so commenting out. TO see output later, un-comment these 3 lines\n cv2.waitKey(500) #Delete after testing. 
These will be used to show you images one after the other\n\n cv2.destroyAllWindows() #Delete this after testing\n \n # Test undistortion on an image\n\n test_img_size = (test_img.shape[1], test_img.shape[0])\n \n # Do camera calibration given object points and image points\n ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, test_img_size,None,None)\n \n # Use the above obtained results to undistort \n undist_img = cv2.undistort(test_img, mtx, dist, None, mtx)\n \n cv2.imwrite('output_files/test_undist.jpg',undist_img)\n \n # Save the camera calibration result for later use (we won't worry about rvecs / tvecs)\n dist_pickle = {}\n dist_pickle[\"mtx\"] = mtx\n dist_pickle[\"dist\"] = dist\n pickle.dump( dist_pickle, open( \"output_files/calib_pickle_files/dist_pickle.p\", \"wb\" ) )\n #undist_img = cv2.cvtColor(dst, cv2.COLOR_BGR2RGB)\n\n return undist_img\n \n \n\ntest_img= cv2.imread('camera_cal/calibration1.jpg') #Note: Your image will be in BGR format\noutput=undistort(test_img)\n\n\nf, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10)) #Refer subplots in python libraries\nax1.imshow(test_img)\nax1.set_title('Original Image', fontsize=30)\nax2.imshow(output)\nax2.set_title('Undistorted Image', fontsize=30)\ncv2.waitKey(500)\ncv2.destroyAllWindows()\n\"\"\"",
"_____no_output_____"
]
],
[
[
"# 2) Apply a distortion correction to raw images\n\nNow we'll use the transformation matrices stored in the pickle file above and try undistorting example images\n\nPrecaution: If you're reading colored image with cv2, convert it to RGB from BGR before using ax.imshow(). \n\nReason: It requred a RGB image if it is 3D\n\nSo I'm leaving a comment in my *\"cal_undistort function\"* to do the conversion in case you use cv2 to read frames and plan to output using ax.imshow()",
"_____no_output_____"
]
],
[
[
"import cv2\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\n\n\ndef cal_undistort(img):\n# Reads mtx and dist matrices, peforms image distortion correction and returns the undistorted image\n\n import pickle\n \n # Read in the saved matrices\n my_dist_pickle = pickle.load( open( \"output_files/calib_pickle_files/dist_pickle.p\", \"rb\" ) )\n mtx = my_dist_pickle[\"mtx\"]\n dist = my_dist_pickle[\"dist\"]\n\n img_size = (img.shape[1], img.shape[0]) \n\n undistorted_img = cv2.undistort(img, mtx, dist, None, mtx)\n #undistorted_img = cv2.cvtColor(undistorted_img, cv2.COLOR_BGR2RGB) #Use if you use cv2 to import image. ax.imshow() needs RGB image\n return undistorted_img\n\ndef draw_subplot(img1,name1,img2,name2):\n f, (ax1, ax2) = plt.subplots(1, 2, figsize=(24, 9))\n f.tight_layout()\n ax1.imshow(img1) #Needs an RGB image for 3D images. For 2D images, it auto-colors them so use cmap='gray' to get grayscale if needed\n ax1.set_title(name1, fontsize=50)\n ax2.imshow(img2)\n ax2.set_title(name2, fontsize=50)\n plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)\n\n# Read in an image\nimg = mpimg.imread('test_images/test2.jpg') # highway image\n#img = mpimg.imread('camera_cal/calibration3.jpg') # chessboard image\n\nundistorted = cal_undistort(img)\n\ndraw_subplot(img,\"OG image\",undistorted,\"Undist image\")\n\nprint(\"To note the changes, look carefully at the outer boundary of both the images\")",
"_____no_output_____"
]
],
[
[
"# 3) Use color transforms, gradients, etc., to create a thresholded binary image.\n\n\nCaution: In the thresh_img() function, we begin by coverting our color space from RGB to HLS. We need to check whether our image was RGB or BGR when it was extracted from the frame?\n\nNote: Put undistorted RGB images in this function",
"_____no_output_____"
]
],
[
[
"import cv2\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\n\ndef thresh_img(img):\n \"\"\"\n x gradient will identify lanes far away from us \n Saturation channel will help us with the lanes near us. This will help if there's a lot of light\n \"\"\" \n \n \"\"\"Starting with color channel\"\"\"\n # Convert to HLS color space and separate the S channel\n # Note: img is the undistorted image\n hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)\n s_channel = hls[:,:,2]\n h_channel = hls[:,:,0]\n # Threshold color channel\n s_thresh_min = 170\n s_thresh_max = 255\n \n h_thresh_min = 21\n h_thresh_max = 22\n \n s_binary = np.zeros_like(s_channel)\n s_binary[(s_channel >= s_thresh_min) & (s_channel <= s_thresh_max)] = 1\n\n h_binary = np.zeros_like(h_channel)\n h_binary[(h_channel >= h_thresh_min) & (h_channel <= h_thresh_max)] = 1\n\n \n \"\"\"Now handling the x gradient\"\"\"\n # Grayscale image\n gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)\n # Sobel x\n sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0) # Take the derivative in x\n abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal\n scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))\n # Threshold x gradient\n thresh_min = 20\n thresh_max = 100\n sxbinary = np.zeros_like(scaled_sobel)\n sxbinary[(scaled_sobel >= thresh_min) & (scaled_sobel <= thresh_max)] = 1\n \n # Combine the two binary thresholds\n combined_binary = np.zeros_like(sxbinary)\n combined_binary[((s_binary == 1) & (h_binary == 1)) | (sxbinary == 1)] = 1\n #Used h as well so as to reduce noise in the image\n \n out_img = np.dstack((combined_binary, combined_binary, combined_binary))*255\n \n #return combined_binary\n return out_img\n",
"_____no_output_____"
]
],
[
[
"# 4) Apply a perspective transform to rectify binary image (\"birds-eye view\")",
"_____no_output_____"
]
],
[
[
"def perspective_transform(img):\n \n # Define calibration box in source (original) and destination (desired or warped) coordinates\n \n img_size = (img.shape[1], img.shape[0])\n \"\"\"Notice the format used for img_size. Yaha bhi ulta hai. x axis aur fir y axis chahiye. \n Apne format mein rows(y axis) and columns (x axis) hain\"\"\"\n \n \n # Four source coordinates\n\n src = np.array(\n [[437*img.shape[1]/960, 331*img.shape[0]/540],\n [523*img.shape[1]/960, 331*img.shape[0]/540],\n [850*img.shape[1]/960, img.shape[0]],\n [145*img.shape[1]/960, img.shape[0]]], dtype='f')\n \n \n # Next, we'll define a desired rectangle plane for the warped image.\n # We'll choose 4 points where we want source points to end up \n # This time we'll choose our points by eyeballing a rectangle\n \n dst = np.array(\n [[290*img.shape[1]/960, 0],\n [740*img.shape[1]/960, 0],\n [740*img.shape[1]/960, img.shape[0]],\n [290*img.shape[1]/960, img.shape[0]]], dtype='f')\n \n \n #Compute the perspective transform, M, given source and destination points:\n M = cv2.getPerspectiveTransform(src, dst)\n \n #Warp an image using the perspective transform, M; using linear interpolation \n #Interpolating points is just filling in missing points as it warps an image\n # The input image for this function can be a colored image too\n warped = cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_LINEAR)\n \n return warped,src,dst\n",
"_____no_output_____"
]
],
[
[
"# Master Pipeline",
"_____no_output_____"
]
],
[
[
"import cv2\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\n\n# Import everything needed to edit/save/watch video clips\nfrom moviepy.editor import VideoFileClip\nfrom IPython.display import HTML\n\n\ndef process_image(frame):\n \n def cal_undistort(img):\n # Reads mtx and dist matrices, peforms image distortion correction and returns the undistorted image\n\n import pickle\n\n # Read in the saved matrices\n my_dist_pickle = pickle.load( open( \"output_files/calib_pickle_files/dist_pickle.p\", \"rb\" ) )\n mtx = my_dist_pickle[\"mtx\"]\n dist = my_dist_pickle[\"dist\"]\n\n img_size = (img.shape[1], img.shape[0]) \n\n undistorted_img = cv2.undistort(img, mtx, dist, None, mtx)\n #undistorted_img = cv2.cvtColor(undistorted_img, cv2.COLOR_BGR2RGB) #Use if you use cv2 to import image. ax.imshow() needs RGB image\n return undistorted_img\n\n \n def yellow_threshold(img, sxbinary):\n # Convert to HLS color space and separate the S channel\n # Note: img is the undistorted image\n hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)\n s_channel = hls[:,:,2]\n h_channel = hls[:,:,0]\n # Threshold color channel\n s_thresh_min = 100\n s_thresh_max = 255\n \n #for 360 degree, my value for yellow ranged between 35 and 50. So uska half kar diya\n h_thresh_min = 10 \n h_thresh_max = 25\n\n s_binary = np.zeros_like(s_channel)\n s_binary[(s_channel >= s_thresh_min) & (s_channel <= s_thresh_max)] = 1\n\n h_binary = np.zeros_like(h_channel)\n h_binary[(h_channel >= h_thresh_min) & (h_channel <= h_thresh_max)] = 1\n\n # Combine the two binary thresholds\n yellow_binary = np.zeros_like(s_binary)\n yellow_binary[(((s_binary == 1) | (sxbinary == 1) ) & (h_binary ==1))] = 1\n return yellow_binary\n \n def xgrad_binary(img, thresh_min=30, thresh_max=100):\n # Grayscale image\n gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)\n # Sobel x\n sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0) # Take the derivative in x\n abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal\n scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))\n # Threshold x gradient\n #thresh_min = 30 #Already given above\n #thresh_max = 100\n\n sxbinary = np.zeros_like(scaled_sobel)\n sxbinary[(scaled_sobel >= thresh_min) & (scaled_sobel <= thresh_max)] = 1\n return sxbinary\n \n def white_threshold(img, sxbinary, lower_white_thresh = 170):\n r_channel = img[:,:,0]\n g_channel = img[:,:,1]\n b_channel = img[:,:,2]\n # Threshold color channel\n r_thresh_min = lower_white_thresh\n r_thresh_max = 255\n r_binary = np.zeros_like(r_channel)\n r_binary[(r_channel >= r_thresh_min) & (r_channel <= r_thresh_max)] = 1\n \n g_thresh_min = lower_white_thresh\n g_thresh_max = 255\n g_binary = np.zeros_like(g_channel)\n g_binary[(g_channel >= g_thresh_min) & (g_channel <= g_thresh_max)] = 1\n\n b_thresh_min = lower_white_thresh\n b_thresh_max = 255\n b_binary = np.zeros_like(b_channel)\n b_binary[(b_channel >= b_thresh_min) & (b_channel <= b_thresh_max)] = 1\n\n white_binary = np.zeros_like(r_channel)\n white_binary[((r_binary ==1) & (g_binary ==1) & (b_binary ==1) & (sxbinary==1))] = 1\n return white_binary\n \n def thresh_img(img):\n \n \n #sxbinary = xgrad_binary(img, thresh_min=30, thresh_max=100)\n sxbinary = xgrad_binary(img, thresh_min=25, thresh_max=130)\n yellow_binary = yellow_threshold(img, sxbinary) #(((s) | (sx)) & (h))\n white_binary = white_threshold(img, sxbinary, lower_white_thresh = 150)\n \n # Combine the two binary thresholds\n combined_binary = np.zeros_like(sxbinary)\n 
combined_binary[((yellow_binary == 1) | (white_binary == 1))] = 1\n \n out_img = np.dstack((combined_binary, combined_binary, combined_binary))*255\n \n return out_img\n \n def perspective_transform(img):\n \n # Define calibration box in source (original) and destination (desired or warped) coordinates\n\n img_size = (img.shape[1], img.shape[0])\n \"\"\"Notice the format used for img_size. Yaha bhi ulta hai. x axis aur fir y axis chahiye. \n Apne format mein rows(y axis) and columns (x axis) hain\"\"\"\n\n\n # Four source coordinates\n # Order of points: top left, top right, bottom right, bottom left\n \n src = np.array(\n [[435*img.shape[1]/960, 350*img.shape[0]/540],\n [535*img.shape[1]/960, 350*img.shape[0]/540],\n [885*img.shape[1]/960, img.shape[0]],\n [220*img.shape[1]/960, img.shape[0]]], dtype='f')\n \n\n # Next, we'll define a desired rectangle plane for the warped image.\n # We'll choose 4 points where we want source points to end up \n # This time we'll choose our points by eyeballing a rectangle\n\n dst = np.array(\n [[290*img.shape[1]/960, 0],\n [740*img.shape[1]/960, 0],\n [740*img.shape[1]/960, img.shape[0]],\n [290*img.shape[1]/960, img.shape[0]]], dtype='f')\n\n\n #Compute the perspective transform, M, given source and destination points:\n M = cv2.getPerspectiveTransform(src, dst)\n\n #Warp an image using the perspective transform, M; using linear interpolation \n #Interpolating points is just filling in missing points as it warps an image\n # The input image for this function can be a colored image too\n warped = cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_LINEAR)\n \n return warped, src, dst \n \n def draw_polygon(img1, img2, src, dst):\n src = src.astype(int) #Very important step (Pixels cannot be in decimals)\n dst = dst.astype(int)\n cv2.polylines(img1, [src], True, (255,0,0), 3)\n cv2.polylines(img2, [dst], True, (255,0,0), 3)\n \n def histogram_bottom_peaks (warped_img):\n # This will detect the bottom point of our lane lines\n \n # Take a histogram of the bottom half of the image\n bottom_half = warped_img[(warped_img.shape[0]//2):,:,0] # Collecting all pixels in the bottom half\n histogram = np.sum(bottom_half, axis=0) # Summing them along y axis (or along columns)\n # Find the peak of the left and right halves of the histogram\n # These will be the starting point for the left and right lines\n midpoint = np.int(histogram.shape[0]//2) # 1D array hai histogram toh uska bas 0th index filled hoga \n #print(np.shape(histogram)) #OUTPUT:(1280,)\n leftx_base = np.argmax(histogram[:midpoint])\n rightx_base = np.argmax(histogram[midpoint:]) + midpoint\n\n return leftx_base, rightx_base\n \n def find_lane_pixels(warped_img):\n \n leftx_base, rightx_base = histogram_bottom_peaks(warped_img)\n \n # Create an output image to draw on and visualize the result\n out_img = np.copy(warped_img)\n \n # HYPERPARAMETERS\n # Choose the number of sliding windows\n nwindows = 9\n # Set the width of the windows +/- margin. So width = 2*margin \n margin = 100\n # Set minimum number of pixels found to recenter window\n minpix = 50\n \n # Set height of windows - based on nwindows above and image shape\n window_height = np.int(warped_img.shape[0]//nwindows)\n # Identify the x and y positions of all nonzero pixels in the image\n nonzero = warped_img.nonzero() #pixel ke coordinates dega 2 seperate arrays mein\n nonzeroy = np.array(nonzero[0]) # Y coordinates milenge 1D array mein. 
They will we arranged in the order of pixels\n nonzerox = np.array(nonzero[1])\n # Current positions to be updated later for each window in nwindows\n leftx_current = leftx_base #initially set kar diya hai. For loop ke end mein change karenge\n rightx_current = rightx_base\n\n # Create empty lists to receive left and right lane pixel indices\n left_lane_inds = [] # Ismein lane-pixels ke indices collect karenge. \n # 'nonzerox' array mein index daalke coordinate mil jaayega\n right_lane_inds = [] \n\n # Step through the windows one by one\n for window in range(nwindows):\n # Identify window boundaries in x and y (and right and left)\n win_y_low = warped_img.shape[0] - (window+1)*window_height\n win_y_high = warped_img.shape[0] - window*window_height\n \"\"\"### TO-DO: Find the four below boundaries of the window ###\"\"\"\n win_xleft_low = leftx_current - margin \n win_xleft_high = leftx_current + margin \n win_xright_low = rightx_current - margin \n win_xright_high = rightx_current + margin \n\n # Draw the windows on the visualization image\n cv2.rectangle(out_img,(win_xleft_low,win_y_low),\n (win_xleft_high,win_y_high),(0,255,0), 2) \n cv2.rectangle(out_img,(win_xright_low,win_y_low),\n (win_xright_high,win_y_high),(0,255,0), 2) \n \n\n ### TO-DO: Identify the nonzero pixels in x and y within the window ###\n #Iska poora explanation seperate page mein likha hai\n good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & \n (nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]\n\n good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) & \n (nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]\n\n\n # Append these indices to the lists\n left_lane_inds.append(good_left_inds)\n right_lane_inds.append(good_right_inds)\n\n # If you found > minpix pixels, recenter next window on the mean position of the pixels in your current window (re-centre)\n if len(good_left_inds) > minpix:\n leftx_current = np.int(np.mean(nonzerox[good_left_inds]))\n if len(good_right_inds) > minpix: \n rightx_current = np.int(np.mean(nonzerox[good_right_inds]))\n \n \n # Concatenate the arrays of indices (previously was a list of lists of pixels)\n try:\n left_lane_inds = np.concatenate(left_lane_inds)\n right_lane_inds = np.concatenate(right_lane_inds)\n except ValueError:\n # Avoids an error if the above is not implemented fully\n pass\n \n # Extract left and right line pixel positions\n leftx = nonzerox[left_lane_inds]\n lefty = nonzeroy[left_lane_inds] \n rightx = nonzerox[right_lane_inds]\n righty = nonzeroy[right_lane_inds]\n\n #return leftx, lefty, rightx, righty, out_img\n \n return out_img\n\n\n \n undist_img = cal_undistort(frame)\n thresh_img = thresh_img(undist_img) # Note: This is not a binary iamge. It has been stacked already within the function\n warped_img, src, dst = perspective_transform(thresh_img)\n draw_polygon(frame, warped_img, src, dst) #the first image is the original image that you import into the system\n lane_identified = find_lane_pixels(warped_img)\n \n #return thresh_img, warped_img #3 images dekhne ke liye ye return\n return warped_img, lane_identified #video chalane ke liye ye return\n #return lane_identified\n \n",
"_____no_output_____"
]
],
[
[
"# 5) Detect lane pixels and fit to find the lane boundary.",
"_____no_output_____"
]
],
[
[
"#image = mpimg.imread(\"my_test_images/starter.JPG\")\n#image = mpimg.imread(\"my_test_images/straight_road.JPG\") #top left corner thoda right\nimage = mpimg.imread(\"my_test_images/change_road_color.JPG\") #too less data points in right lane\n#image = mpimg.imread(\"my_test_images/leaving_tree_to_road_color_change.JPG\")\n#image = mpimg.imread(\"my_test_images/tree_and_color_change.JPG\")\n#image = mpimg.imread(\"my_test_images/trees_left_lane_missing.JPG\")\n#image = mpimg.imread(\"my_test_images/trees_left_lane_missing2.JPG\")\n#image = mpimg.imread(\"my_test_images/1.JPG\")\n#image = mpimg.imread(\"my_test_images/2.JPG\") #too less data points in right lane\n#image = mpimg.imread(\"my_test_images/3.JPG\") #too less points in right lane\n#image = mpimg.imread(\"my_test_images/4.JPG\")\n\n#image = mpimg.imread(\"my_test_images/finding_hue.JPG\")\n#image = mpimg.imread(\"my_test_images/finding_hue2.JPG\") #ismein yellow bohot kam ho gaya ab\n\n\nthresh_img, warped_img=process_image(image)\n\ndef draw_subplot(img1,name1,img2,name2, img3,name3):\n f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(24, 9))\n f.tight_layout()\n \n ax1.imshow(img1) #Needs an RGB image for 3D images. For 2D images, it auto-colors them so use cmap='gray' to get grayscale if needed\n ax1.set_title(name1, fontsize=50)\n ax2.imshow(img2) #Needs an RGB image for 3D images. For 2D images, it auto-colors them so use cmap='gray' to get grayscale if needed\n ax2.set_title(name2, fontsize=50)\n ax3.imshow(img3)\n ax3.set_title(name3, fontsize=50)\n plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)\n \n#draw_subplot(image,\"OG\",output,\"lala lala image\")\ndraw_subplot(image, \"OG image\",thresh_img,\"Thresh_img\",warped_img,\"Bird eye's view\")\n",
"_____no_output_____"
]
],
[
[
"Remember to atleast stack binary images to form color images. ",
"_____no_output_____"
],
[
"## Reset\nIf your sanity checks reveal that the lane lines you've detected are problematic for some reason, you can simply assume it was a bad or difficult frame of video, retain the previous positions from the frame prior and step to the next frame to search again. If you lose the lines for several frames in a row, you should probably start searching from scratch using a histogram and sliding window, or another method, to re-establish your measurement.\n\n## Smoothing\nEven when everything is working, your line detections will jump around from frame to frame a bit and it can be preferable to smooth over the last n frames of video to obtain a cleaner result. Each time you get a new high-confidence measurement, you can append it to the list of recent measurements and then take an average over n past measurements to obtain the lane position you want to draw onto the image.",
"_____no_output_____"
],
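[
"A minimal sketch of the smoothing idea described above. It assumes each lane line's detection is reduced to an array of polynomial coefficients (e.g. from np.polyfit, which is not implemented in this notebook yet); the names N, recent_fits and smoothed_fit are hypothetical:\n\n```python\nfrom collections import deque\nimport numpy as np\n\nN = 5                          # hypothetical number of past frames to average over\nrecent_fits = deque(maxlen=N)  # automatically keeps only the last N accepted fits\n\ndef smoothed_fit(new_fit):\n    # Append the latest high-confidence fit and return the average over the buffer.\n    # On a bad frame, pass new_fit=None to skip the append and reuse the previous average.\n    if new_fit is not None:\n        recent_fits.append(new_fit)\n    return np.mean(list(recent_fits), axis=0) if recent_fits else None\n```",
"_____no_output_____"
],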
[
"# Project video",
"_____no_output_____"
]
],
[
[
"project_output = 'output_files/video_clips/project_video.mp4'\n## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video\n## To do so add .subclip(start_second,end_second) to the end of the line below\n## Where start_second and end_second are integer values representing the start and end of the subclip\n## You may also uncomment the following line for a subclip of the first 5 seconds\n\nclip1 = VideoFileClip(\"project_video.mp4\")\n#clip1 = VideoFileClip(\"project_video.mp4\").subclip(0,1)\n\nproject_clip = clip1.fl_image(process_image) #NOTE: this function expects color images! \n%time project_clip.write_videofile(project_output, audio=False)",
"t: 0%| | 2/1260 [00:00<01:18, 16.01it/s, now=None]"
],
[
"HTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(project_output))",
"_____no_output_____"
],
[
"# challenge video",
"_____no_output_____"
],
[
"challenge_output = 'output_files/video_clips/challenge_video_old.mp4'\n## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video\n## To do so add .subclip(start_second,end_second) to the end of the line below\n## Where start_second and end_second are integer values representing the start and end of the subclip\n## You may also uncomment the following line for a subclip of the first 5 seconds\n\nclip2 = VideoFileClip(\"challenge_video.mp4\")\n#clip2 = VideoFileClip(\"challenge_video.mp4\").subclip(0,1)\n\nchallenge_clip = clip2.fl_image(process_image) #NOTE: this function expects color images! \n%time challenge_clip.write_videofile(challenge_output, audio=False)",
"t: 0%| | 2/485 [00:00<00:28, 16.96it/s, now=None]"
],
[
"HTML(\"\"\"\n<video width=\"960\" height=\"540\" controls>\n <source src=\"{0}\">\n</video>\n\"\"\".format(challenge_output))",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
4a0713ee42df4e1b20cc19891fdb60e484a004ba
| 248,967 |
ipynb
|
Jupyter Notebook
|
study_roadmaps/2_transfer_learning_roadmap/6_freeze_base_network/1.1) Understand the effect of freezing base model in transfer learning - 1 - mxnet.ipynb
|
take2rohit/monk_v1
|
9c567bf2c8b571021b120d879ba9edf7751b9f92
|
[
"Apache-2.0"
] | 542 |
2019-11-10T12:09:31.000Z
|
2022-03-28T11:39:07.000Z
|
study_roadmaps/2_transfer_learning_roadmap/6_freeze_base_network/1.1) Understand the effect of freezing base model in transfer learning - 1 - mxnet.ipynb
|
take2rohit/monk_v1
|
9c567bf2c8b571021b120d879ba9edf7751b9f92
|
[
"Apache-2.0"
] | 117 |
2019-11-12T09:39:24.000Z
|
2022-03-12T00:20:41.000Z
|
study_roadmaps/2_transfer_learning_roadmap/6_freeze_base_network/1.1) Understand the effect of freezing base model in transfer learning - 1 - mxnet.ipynb
|
take2rohit/monk_v1
|
9c567bf2c8b571021b120d879ba9edf7751b9f92
|
[
"Apache-2.0"
] | 246 |
2019-11-09T21:53:24.000Z
|
2022-03-29T00:57:07.000Z
| 123.741054 | 52,484 | 0.857242 |
[
[
[
"<a href=\"https://colab.research.google.com/github/Tessellate-Imaging/monk_v1/blob/master/study_roadmaps/2_transfer_learning_roadmap/6_freeze_base_network/1.1)%20Understand%20the%20effect%20of%20freezing%20base%20model%20in%20transfer%20learning%20-%201%20-%20mxnet.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Goals\n\n\n### Understand the role of freezing models in transfer learning\n\n\n### Why freeze/unfreeze base models in transfer learning\n\n\n### Use comparison feature to appropriately set this parameter on custom dataset\n\n\n### You will be using lego bricks dataset to train the classifiers",
"_____no_output_____"
],
[
"# What is freezing base network\n\n\n - To recap you have two parts in your network\n - One that already existed, the pretrained one, the base network\n - The new sub-network or a single layer you added\n\n\n -The hyper-parameter we can see here: Freeze base network\n - Freezing base network makes the base network untrainable\n - The base network now acts as a feature extractor and only the next half is trained\n - If you do not freeze the base network the entire network is trained",
"_____no_output_____"
],
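[
"To make the idea above concrete, here is a minimal generic MXNet Gluon sketch of freezing a pretrained base network. This only illustrates the underlying mechanism and is not Monk's internal API; in Monk you simply pass the freeze_base_network flag shown later in this notebook:\n\n```python\nfrom mxnet.gluon.model_zoo import vision\n\nnet = vision.densenet121(pretrained=True)   # pretrained base + classifier head\n\n# Freeze the pretrained feature extractor: its parameters receive no gradients,\n# so during training only the classifier head (net.output) is updated.\nnet.features.collect_params().setattr('grad_req', 'null')\n```",
"_____no_output_____"
],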
[
"# Table of Contents\n\n\n## [Install](#0)\n\n\n## [Freeze Base network in densenet121 and train a classifier](#1)\n\n\n## [Unfreeze base network in densenet121 and train another classifier](#2)\n\n\n## [Compare both the experiment](#3)",
"_____no_output_____"
],
[
"<a id='0'></a>\n# Install Monk",
"_____no_output_____"
],
[
"## Using pip (Recommended)\n\n - colab (gpu) \n - All bakcends: `pip install -U monk-colab`\n \n\n - kaggle (gpu) \n - All backends: `pip install -U monk-kaggle`\n \n\n - cuda 10.2\t\n - All backends: `pip install -U monk-cuda102`\n - Gluon bakcned: `pip install -U monk-gluon-cuda102`\n\t - Pytorch backend: `pip install -U monk-pytorch-cuda102`\n - Keras backend: `pip install -U monk-keras-cuda102`\n \n\n - cuda 10.1\t\n - All backend: `pip install -U monk-cuda101`\n\t - Gluon bakcned: `pip install -U monk-gluon-cuda101`\n\t - Pytorch backend: `pip install -U monk-pytorch-cuda101`\n\t - Keras backend: `pip install -U monk-keras-cuda101`\n \n\n - cuda 10.0\t\n - All backend: `pip install -U monk-cuda100`\n\t - Gluon bakcned: `pip install -U monk-gluon-cuda100`\n\t - Pytorch backend: `pip install -U monk-pytorch-cuda100`\n\t - Keras backend: `pip install -U monk-keras-cuda100`\n \n\n - cuda 9.2\t\n - All backend: `pip install -U monk-cuda92`\n\t - Gluon bakcned: `pip install -U monk-gluon-cuda92`\n\t - Pytorch backend: `pip install -U monk-pytorch-cuda92`\n\t - Keras backend: `pip install -U monk-keras-cuda92`\n \n\n - cuda 9.0\t\n - All backend: `pip install -U monk-cuda90`\n\t - Gluon bakcned: `pip install -U monk-gluon-cuda90`\n\t - Pytorch backend: `pip install -U monk-pytorch-cuda90`\n\t - Keras backend: `pip install -U monk-keras-cuda90`\n \n\n - cpu \t\t\n - All backend: `pip install -U monk-cpu`\n\t - Gluon bakcned: `pip install -U monk-gluon-cpu`\n\t - Pytorch backend: `pip install -U monk-pytorch-cpu`\n\t - Keras backend: `pip install -U monk-keras-cpu`",
"_____no_output_____"
],
[
"## Install Monk Manually (Not recommended)\n \n### Step 1: Clone the library\n - git clone https://github.com/Tessellate-Imaging/monk_v1.git\n \n \n \n \n### Step 2: Install requirements \n - Linux\n - Cuda 9.0\n - `cd monk_v1/installation/Linux && pip install -r requirements_cu90.txt`\n - Cuda 9.2\n - `cd monk_v1/installation/Linux && pip install -r requirements_cu92.txt`\n - Cuda 10.0\n - `cd monk_v1/installation/Linux && pip install -r requirements_cu100.txt`\n - Cuda 10.1\n - `cd monk_v1/installation/Linux && pip install -r requirements_cu101.txt`\n - Cuda 10.2\n - `cd monk_v1/installation/Linux && pip install -r requirements_cu102.txt`\n - CPU (Non gpu system)\n - `cd monk_v1/installation/Linux && pip install -r requirements_cpu.txt`\n \n \n - Windows\n - Cuda 9.0 (Experimental support)\n - `cd monk_v1/installation/Windows && pip install -r requirements_cu90.txt`\n - Cuda 9.2 (Experimental support)\n - `cd monk_v1/installation/Windows && pip install -r requirements_cu92.txt`\n - Cuda 10.0 (Experimental support)\n - `cd monk_v1/installation/Windows && pip install -r requirements_cu100.txt`\n - Cuda 10.1 (Experimental support)\n - `cd monk_v1/installation/Windows && pip install -r requirements_cu101.txt`\n - Cuda 10.2 (Experimental support)\n - `cd monk_v1/installation/Windows && pip install -r requirements_cu102.txt`\n - CPU (Non gpu system)\n - `cd monk_v1/installation/Windows && pip install -r requirements_cpu.txt`\n \n \n - Mac\n - CPU (Non gpu system)\n - `cd monk_v1/installation/Mac && pip install -r requirements_cpu.txt`\n \n \n - Misc\n - Colab (GPU)\n - `cd monk_v1/installation/Misc && pip install -r requirements_colab.txt`\n - Kaggle (GPU)\n - `cd monk_v1/installation/Misc && pip install -r requirements_kaggle.txt`\n \n \n \n### Step 3: Add to system path (Required for every terminal or kernel run)\n - `import sys`\n - `sys.path.append(\"monk_v1/\");`",
"_____no_output_____"
],
[
"## Dataset - LEGO Classification\n - https://www.kaggle.com/joosthazelzet/lego-brick-images/",
"_____no_output_____"
]
],
[
[
"! wget --load-cookies /tmp/cookies.txt \"https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1RB_f2Kv3vkBXcQnCSVqCvaZFBHizQacl' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\\1\\n/p')&id=1RB_f2Kv3vkBXcQnCSVqCvaZFBHizQacl\" -O LEGO.zip && rm -rf /tmp/cookies.txt",
"_____no_output_____"
],
[
"! unzip -qq LEGO.zip",
"_____no_output_____"
],
[
"if os.path.isfile(\"LEGO/train/.DS_Store\"):\n os.system(\"rm LEGO/train/.DS_Store\");",
"_____no_output_____"
]
],
[
[
"# Imports",
"_____no_output_____"
]
],
[
[
"#Using mxnet-gluon backend \n\n# When installed using pip\nfrom monk.gluon_prototype import prototype\n\n\n# When installed manually (Uncomment the following)\n#import os\n#import sys\n#sys.path.append(\"monk_v1/\");\n#sys.path.append(\"monk_v1/monk/\");\n#from monk.gluon_prototype import prototype",
"_____no_output_____"
]
],
[
[
"<a id='1'></a>\n# Freeze Base network in densenet121 and train a classifier",
"_____no_output_____"
],
[
"## Creating and managing experiments\n - Provide project name\n - Provide experiment name\n - For a specific data create a single project\n - Inside each project multiple experiments can be created\n - Every experiment can be have diferent hyper-parameters attached to it",
"_____no_output_____"
]
],
[
[
"gtf = prototype(verbose=1);\ngtf.Prototype(\"Project\", \"Freeze_Base_Network\");",
"Mxnet Version: 1.5.0\n\nExperiment Details\n Project: Project\n Experiment: Freeze_Base_Network\n Dir: /home/abhi/Desktop/Work/tess_tool/gui/v0.3/finetune_models/Organization/development/v5.0_blocks/study_roadmap/change_post_num_layers/5_transfer_learning_params/2_freezing_base_network/workspace/Project/Freeze_Base_Network/\n\n"
]
],
[
[
"### This creates files and directories as per the following structure\n \n \n workspace\n |\n |--------Project\n |\n |\n |-----Freeze_Base_Network\n |\n |-----experiment-state.json\n |\n |-----output\n |\n |------logs (All training logs and graphs saved here)\n |\n |------models (all trained models saved here)\n ",
"_____no_output_____"
],
[
"## Set dataset and select the model",
"_____no_output_____"
],
[
"## Quick mode training\n\n - Using Default Function\n - dataset_path\n - model_name\n - freeze_base_network\n - num_epochs\n \n \n## Sample Dataset folder structure\n\n parent_directory\n |\n |\n |------cats\n |\n |------img1.jpg\n |------img2.jpg\n |------.... (and so on)\n |------dogs\n |\n |------img1.jpg\n |------img2.jpg\n |------.... (and so on) ",
"_____no_output_____"
],
[
"## Modifyable params \n - dataset_path: path to data\n - model_name: which pretrained model to use\n - freeze_base_network: Retrain already trained network or not\n - num_epochs: Number of epochs to train for",
"_____no_output_____"
]
],
[
[
"gtf.Default(dataset_path=\"LEGO/train\", \n model_name=\"densenet121\", \n \n \n \n freeze_base_network=True, # Set this param as true\n \n \n \n num_epochs=5);\n\n#Read the summary generated once you run this cell. ",
"Dataset Details\n Train path: LEGO/train\n Val path: None\n CSV train path: None\n CSV val path: None\n\nDataset Params\n Input Size: 224\n Batch Size: 4\n Data Shuffle: True\n Processors: 4\n Train-val split: 0.7\n\nPre-Composed Train Transforms\n[{'RandomHorizontalFlip': {'p': 0.8}}, {'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}]\n\nPre-Composed Val Transforms\n[{'RandomHorizontalFlip': {'p': 0.8}}, {'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}]\n\nDataset Numbers\n Num train images: 4465\n Num val images: 1914\n Num classes: 16\n\nModel Params\n Model name: densenet121\n Use Gpu: True\n Use pretrained: True\n Freeze base network: True\n\nModel Details\n Loading pretrained model\n Model Loaded on device\n Model name: densenet121\n Num of potentially trainable layers: 242\n Num of actual trainable layers: 1\n\nOptimizer\n Name: sgd\n Learning rate: 0.01\n Params: {'lr': 0.01, 'momentum': 0, 'weight_decay': 0.0001, 'momentum_dampening_rate': 0, 'clipnorm': 0.0, 'clipvalue': 0.0}\n\n\n\nLearning rate scheduler\n Name: steplr\n Params: {'step_size': 1, 'gamma': 0.98, 'last_epoch': -1}\n\nLoss\n Name: softmaxcrossentropy\n Params: {'weight': None, 'batch_axis': 0, 'axis_to_sum_over': -1, 'label_as_categories': True, 'label_smoothing': False}\n\nTraining params\n Num Epochs: 5\n\nDisplay params\n Display progress: True\n Display progress realtime: True\n Save Training logs: True\n Save Intermediate models: True\n Intermediate model prefix: intermediate_model_\n\n"
]
],
[
[
"## From the summary above\n\n - Model Params\n Model name: densenet121\n Use Gpu: True\n Use pretrained: True\n \n \n Freeze base network: True",
"_____no_output_____"
],
[
"## Another thing to notice from summary\n\n Model Details\n Loading pretrained model\n Model Loaded on device\n Model name: densenet121\n Num of potentially trainable layers: 242\n Num of actual trainable layers: 1\n \n\n### There are a total of 242 layers\n\n### Since we have freezed base network only 1 is trainable, the final layer",
"_____no_output_____"
],
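[
"If you want to double-check such a count outside of Monk, a hedged Gluon-style sketch (treating each parameter array as one potentially trainable entry; the helper name count_trainable is hypothetical):\n\n```python\ndef count_trainable(net):\n    # Parameters with grad_req='null' are frozen; the rest receive gradients during training.\n    params = list(net.collect_params().values())\n    trainable = sum(1 for p in params if p.grad_req != 'null')\n    return trainable, len(params) - trainable\n```",
"_____no_output_____"
],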
[
"## Train the classifier",
"_____no_output_____"
]
],
[
[
"#Start Training\ngtf.Train();\n\n#Read the training summary generated once you run the cell and training is completed",
"Training Start\n Epoch 1/5\n ----------\n"
]
],
[
[
"## Validating the trained classifier",
"_____no_output_____"
],
[
"## Load the experiment in validation mode\n - Set flag eval_infer as True",
"_____no_output_____"
]
],
[
[
"gtf = prototype(verbose=1);\ngtf.Prototype(\"Project\", \"Freeze_Base_Network\", eval_infer=True);",
"Mxnet Version: 1.5.1\n\nModel Details\n Loading model - workspace/Project/Freeze_Base_Network/output/models/final-symbol.json\n Model loaded!\n\nExperiment Details\n Project: Project\n Experiment: Freeze_Base_Network\n Dir: /home/abhi/Desktop/Work/tess_tool/gui/v0.3/finetune_models/Organization/development/v5.0_blocks/study_roadmap/quick_prototyping/1_intro_to_quick_prototyping/workspace/Project/Freeze_Base_Network/\n\n"
]
],
[
[
"## Load the validation dataset",
"_____no_output_____"
]
],
[
[
"gtf.Dataset_Params(dataset_path=\"LEGO/valid\");\ngtf.Dataset();",
"Dataset Details\n Test path: LEGO/valid\n CSV test path: None\n\nDataset Params\n Input Size: 224\n Processors: 4\n\nPre-Composed Test Transforms\n[{'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}]\n\nDataset Numbers\n Num test images: 6379\n Num classes: 16\n\n"
]
],
[
[
"## Run validation",
"_____no_output_____"
]
],
[
[
"accuracy, class_based_accuracy = gtf.Evaluate();",
"Testing\n"
]
],
[
[
"### Accuracy achieved - 86.063\n(You may get a different result)",
"_____no_output_____"
],
[
"<a id='2'></a>\n# Unfreeze Base network in densenet121 and train a classifier",
"_____no_output_____"
],
[
"## Creating and managing experiments\n - Provide project name\n - Provide experiment name\n - For a specific data create a single project\n - Inside each project multiple experiments can be created\n - Every experiment can be have diferent hyper-parameters attached to it",
"_____no_output_____"
]
],
[
[
"gtf = prototype(verbose=1);\ngtf.Prototype(\"Project\", \"Unfreeze_Base_Network\");",
"Mxnet Version: 1.5.0\n\nExperiment Details\n Project: Project\n Experiment: Unfreeze_Base_Network\n Dir: /home/abhi/Desktop/Work/tess_tool/gui/v0.3/finetune_models/Organization/development/v5.0_blocks/study_roadmap/change_post_num_layers/5_transfer_learning_params/2_freezing_base_network/workspace/Project/Unfreeze_Base_Network/\n\n"
]
],
[
[
"### This creates files and directories as per the following structure\n \n \n workspace\n |\n |--------Project\n |\n |\n |-----Freeze_Base_Network (Previously created)\n |\n |-----experiment-state.json\n |\n |-----output\n |\n |------logs (All training logs and graphs saved here)\n |\n |------models (all trained models saved here)\n |\n |\n |-----Unfreeze_Base_Network (Created Now)\n |\n |-----experiment-state.json\n |\n |-----output\n |\n |------logs (All training logs and graphs saved here)\n |\n |------models (all trained models saved here)",
"_____no_output_____"
],
[
"## Set dataset and select the model",
"_____no_output_____"
],
[
"## Quick mode training\n\n - Using Default Function\n - dataset_path\n - model_name\n - freeze_base_network\n - num_epochs\n \n \n## Sample Dataset folder structure\n\n parent_directory\n |\n |\n |------cats\n |\n |------img1.jpg\n |------img2.jpg\n |------.... (and so on)\n |------dogs\n |\n |------img1.jpg\n |------img2.jpg\n |------.... (and so on)",
"_____no_output_____"
],
[
"## Modifyable params \n - dataset_path: path to data\n - model_name: which pretrained model to use\n - freeze_base_network: Retrain already trained network or not\n - num_epochs: Number of epochs to train for",
"_____no_output_____"
]
],
[
[
"gtf.Default(dataset_path=\"LEGO/train\", \n model_name=\"densenet121\", \n \n \n \n freeze_base_network=False, # Set this param as false\n \n \n \n num_epochs=5);\n\n#Read the summary generated once you run this cell. ",
"Dataset Details\n Train path: LEGO/train\n Val path: None\n CSV train path: None\n CSV val path: None\n\nDataset Params\n Input Size: 224\n Batch Size: 4\n Data Shuffle: True\n Processors: 4\n Train-val split: 0.7\n\nPre-Composed Train Transforms\n[{'RandomHorizontalFlip': {'p': 0.8}}, {'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}]\n\nPre-Composed Val Transforms\n[{'RandomHorizontalFlip': {'p': 0.8}}, {'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}]\n\nDataset Numbers\n Num train images: 4465\n Num val images: 1914\n Num classes: 16\n\nModel Params\n Model name: densenet121\n Use Gpu: True\n Use pretrained: True\n Freeze base network: False\n\nModel Details\n Loading pretrained model\n Model Loaded on device\n Model name: densenet121\n Num of potentially trainable layers: 242\n Num of actual trainable layers: 242\n\nOptimizer\n Name: sgd\n Learning rate: 0.01\n Params: {'lr': 0.01, 'momentum': 0, 'weight_decay': 0.0001, 'momentum_dampening_rate': 0, 'clipnorm': 0.0, 'clipvalue': 0.0}\n\n\n\nLearning rate scheduler\n Name: steplr\n Params: {'step_size': 1, 'gamma': 0.98, 'last_epoch': -1}\n\nLoss\n Name: softmaxcrossentropy\n Params: {'weight': None, 'batch_axis': 0, 'axis_to_sum_over': -1, 'label_as_categories': True, 'label_smoothing': False}\n\nTraining params\n Num Epochs: 5\n\nDisplay params\n Display progress: True\n Display progress realtime: True\n Save Training logs: True\n Save Intermediate models: True\n Intermediate model prefix: intermediate_model_\n\n"
]
],
[
[
"## From the summary above\n\n - Model Params\n Model name: densenet121\n Use Gpu: True\n Use pretrained: True\n \n \n Freeze base network: False",
"_____no_output_____"
],
[
"## Another thing to notice from summary\n\n Model Details\n Loading pretrained model\n Model Loaded on device\n Model name: densenet121\n Num of potentially trainable layers: 242\n Num of actual trainable layers: 242\n \n\n### There are a total of 242 layers\n\n### Since we have unfreezed base network all 242 layers are trainable including the final layer",
"_____no_output_____"
],
[
"## Train the classifier",
"_____no_output_____"
]
],
[
[
"#Start Training\ngtf.Train();\n\n#Read the training summary generated once you run the cell and training is completed",
"Training Start\n Epoch 1/5\n ----------\n"
]
],
[
[
"## Validating the trained classifier",
"_____no_output_____"
],
[
"## Load the experiment in validation mode\n - Set flag eval_infer as True",
"_____no_output_____"
]
],
[
[
"gtf = prototype(verbose=1);\ngtf.Prototype(\"Project\", \"Unfreeze_Base_Network\", eval_infer=True);",
"Mxnet Version: 1.5.1\n\nModel Details\n Loading model - workspace/Project/Unfreeze_Base_Network/output/models/final-symbol.json\n Model loaded!\n\nExperiment Details\n Project: Project\n Experiment: Unfreeze_Base_Network\n Dir: /home/abhi/Desktop/Work/tess_tool/gui/v0.3/finetune_models/Organization/development/v5.0_blocks/study_roadmap/quick_prototyping/1_intro_to_quick_prototyping/workspace/Project/Unfreeze_Base_Network/\n\n"
]
],
[
[
"## Load the validation dataset",
"_____no_output_____"
]
],
[
[
"gtf.Dataset_Params(dataset_path=\"LEGO/valid\");\ngtf.Dataset();",
"Dataset Details\n Test path: LEGO/valid\n CSV test path: None\n\nDataset Params\n Input Size: 224\n Processors: 4\n\nPre-Composed Test Transforms\n[{'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}]\n\nDataset Numbers\n Num test images: 6379\n Num classes: 16\n\n"
]
],
[
[
"## Run validation",
"_____no_output_____"
]
],
[
[
"accuracy, class_based_accuracy = gtf.Evaluate();",
"Testing\n"
]
],
[
[
"### Accuracy achieved - 99.31\n(You may get a different result)",
"_____no_output_____"
],
[
"<a id='3'></a>\n# Compare both the experiment",
"_____no_output_____"
]
],
[
[
"# Invoke the comparison class\nfrom monk.compare_prototype import compare",
"_____no_output_____"
]
],
[
[
"### Creating and managing comparison experiments\n - Provide project name",
"_____no_output_____"
]
],
[
[
"# Create a project \ngtf = compare(verbose=1);\ngtf.Comparison(\"Compare-effect-of-freezing\");",
"Comparison: - Compare-effect-of-freezing\n"
]
],
[
[
"### This creates files and directories as per the following structure\n \n workspace\n |\n |--------comparison\n |\n |\n |-----Compare-effect-of-freezing\n |\n |------stats_best_val_acc.png\n |------stats_max_gpu_usage.png\n |------stats_training_time.png\n |------train_accuracy.png\n |------train_loss.png\n |------val_accuracy.png\n |------val_loss.png\n \n |\n |-----comparison.csv (Contains necessary details of all experiments)",
"_____no_output_____"
],
[
"### Add the experiments\n - First argument - Project name\n - Second argument - Experiment name",
"_____no_output_____"
]
],
[
[
"gtf.Add_Experiment(\"Project\", \"Freeze_Base_Network\");\ngtf.Add_Experiment(\"Project\", \"Unfreeze_Base_Network\");",
"Project - Project, Experiment - Freeze_Base_Network added\nProject - Project, Experiment - Unfreeze_Base_Network added\n"
]
],
[
[
"### Run Analysis",
"_____no_output_____"
]
],
[
[
"gtf.Generate_Statistics();",
"Generating statistics...\nGenerated\n\n"
]
],
[
[
"## Visualize and study comparison metrics",
"_____no_output_____"
],
[
"### Training Accuracy Curves",
"_____no_output_____"
]
],
[
[
"from IPython.display import Image\nImage(filename=\"workspace/comparison/Compare-effect-of-freezing/train_accuracy.png\") ",
"_____no_output_____"
]
],
[
[
"### Training Loss Curves",
"_____no_output_____"
]
],
[
[
"from IPython.display import Image\nImage(filename=\"workspace/comparison/Compare-effect-of-freezing/train_loss.png\") ",
"_____no_output_____"
]
],
[
[
"### Validation Accuracy Curves",
"_____no_output_____"
]
],
[
[
"from IPython.display import Image\nImage(filename=\"workspace/comparison/Compare-effect-of-freezing/val_accuracy.png\") ",
"_____no_output_____"
]
],
[
[
"### Validation loss curves",
"_____no_output_____"
]
],
[
[
"from IPython.display import Image\nImage(filename=\"workspace/comparison/Compare-effect-of-freezing/val_loss.png\") ",
"_____no_output_____"
]
],
[
[
"## Accuracies achieved on validation dataset\n\n### With freezing base network - 86.063\n### Without freezing base network - 99.31\n\n#### For this classifier, keeping the base network trainable seems to be a good option. Thus for other data it may result in overfitting the training data\n\n(You may get a different result)",
"_____no_output_____"
],
[
"# Goals Completed\n\n\n### Understand the role of freezing models in transfer learning\n\n\n### Why freeze/unfreeze base models in transfer learning\n\n\n### Use comparison feature to appropriately set this parameter on custom dataset",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
4a07202a55aa665b79231bbd03b08975eecdd0dc
| 510,956 |
ipynb
|
Jupyter Notebook
|
.ipynb_checkpoints/Pipline-checkpoint.ipynb
|
drtupe/LaneLine_Detection
|
4d3712cf4a3a5f189ada8c58ed089255fc0d0895
|
[
"MIT"
] | null | null | null |
.ipynb_checkpoints/Pipline-checkpoint.ipynb
|
drtupe/LaneLine_Detection
|
4d3712cf4a3a5f189ada8c58ed089255fc0d0895
|
[
"MIT"
] | null | null | null |
.ipynb_checkpoints/Pipline-checkpoint.ipynb
|
drtupe/LaneLine_Detection
|
4d3712cf4a3a5f189ada8c58ed089255fc0d0895
|
[
"MIT"
] | null | null | null | 828.12966 | 113,216 | 0.951612 |
[
[
[
"#importing some useful packages\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\nimport cv2\n# Import everything needed to edit/save/watch video clips\nfrom moviepy.editor import VideoFileClip\nfrom IPython.display import HTML\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"### Import the image to be processed",
"_____no_output_____"
]
],
[
[
"image = mpimg.imread('test_images/solidYellowCurve.jpg')\nprint('This image is: ',type(image), 'with dimensions: ', image.shape)\nplt.imshow(image)",
"This image is: <class 'numpy.ndarray'> with dimensions: (540, 960, 3)\n"
]
],
[
[
"### Some Helper functions :\n * this block contains multiple funtions which have been created for ease of final pipeline for this project.",
"_____no_output_____"
]
],
[
[
"def grayscale(img):\n \"\"\"Applies the Grayscale transform\n This will return an image with only one color channel\n but NOTE: to see the returned image as grayscale\n (assuming your grayscaled image is called 'gray')\n you should call plt.imshow(gray, cmap='gray')\"\"\"\n return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)\n # Or use BGR2GRAY if you read an image with cv2.imread()\n # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n \ndef canny(img, low_threshold, high_threshold):\n \"\"\"Applies the Canny transform\"\"\"\n return cv2.Canny(img, low_threshold, high_threshold)\n\ndef gaussian_blur(img, kernel_size):\n \"\"\"Applies a Gaussian Noise kernel\"\"\"\n return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)\n\ndef region_of_interest(img, vertices):\n \"\"\"\n Applies an image mask.\n \n Only keeps the region of the image defined by the polygon\n formed from `vertices`. The rest of the image is set to black.\n `vertices` should be a numpy array of integer points.\n \"\"\"\n #defining a blank mask to start with\n mask = np.zeros_like(img) \n \n #defining a 3 channel or 1 channel color to fill the mask with depending on the input image\n if len(img.shape) > 2:\n channel_count = img.shape[2] # i.e. 3 or 4 depending on your image\n ignore_mask_color = (255,) * channel_count\n else:\n ignore_mask_color = 255\n \n #filling pixels inside the polygon defined by \"vertices\" with the fill color \n cv2.fillPoly(mask, vertices, ignore_mask_color)\n \n #returning the image only where mask pixels are nonzero\n masked_image = cv2.bitwise_and(img, mask)\n return masked_image\n\ndef average_intercept_slope(lines):\n \"\"\"\n This method help us to define left lane and right lane by calculating slope and intercept of lines.\n \"\"\"\n left_line = []\n left_len = []\n right_line = []\n right_len = []\n \n for line in lines:\n for x1, y1, x2, y2 in line:\n if x2 == x1:\n continue # this will ignore a vertical line which will result in infinite slope\n m = (y2 - y1) / (x2 - x1)\n b = y1 - m * x1\n length = np.sqrt((y2 - y1)**2 + (x2 - x1)**2)\n if m < 0:\n left_line.append((m, b))\n left_len.append(length)\n else:\n right_line.append((m, b))\n right_len.append(length)\n\n \n left_lane = np.dot(left_len, left_line) / np.sum(left_len) if len(left_len) > 0 else None\n right_lane = np.dot(right_len, right_line) / np.sum(right_len) if len(right_len) > 0 else None\n\n return left_lane, right_lane\n\ndef line_points(y1, y2, line):\n \"\"\"\n Convert a line represented in slope and intercept into pixel points\n \"\"\"\n if line is None:\n return None\n \n m, b = line\n\n x1 = int((y1 - b) / m)\n x2 = int((y2 - b) / m)\n y1 = int(y1)\n y2 = int(y2)\n \n return ((x1, y1), (x2, y2))\n\n\ndef draw_lines(img, lines, color=[0, 0, 255], thickness=10):\n \"\"\"\n NOTE: this is the function you might want to use as a starting point once you want to \n average/extrapolate the line segments you detect to map out the full\n extent of the lane (going from the result shown in raw-lines-example.mp4\n to that shown in P1_example.mp4). \n \n Think about things like separating line segments by their \n slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left\n line vs. the right line. Then, you can average the position of each of \n the lines and extrapolate to the top and bottom of the lane.\n \n This function draws `lines` with `color` and `thickness`. 
\n Lines are drawn on the image inplace (mutates the image).\n If you want to make the lines semi-transparent, think about combining\n this function with the weighted_img() function below\n \"\"\"\n left_lane, right_lane = average_intercept_slope(lines)\n\n y1 = img.shape[0] # bottom of the image\n y2 = y1 * 0.6 # middle point of the image. this point is slightly lower than actual middle\n \n left_line = line_points(y1, y2, left_lane)\n right_line = line_points(y1, y2, right_lane)\n \n for line in lines:\n for x1, y1, x2, y2 in line:\n cv2.line(img, (x1, y1), (x2, y2), color, thickness)\n\ndef hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):\n \"\"\"\n `img` should be the output of a Canny transform.\n \n Returns an image with hough lines drawn.\n \"\"\"\n lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)\n line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)\n draw_lines(line_img, lines)\n return line_img\n\n# Python 3 has support for cool math symbols.\n\ndef weighted_img(img, initial_img, α=0.8, β=1., γ=0.):\n \"\"\"\n `img` is the output of the hough_lines(), An image with lines drawn on it.\n Should be a blank image (all black) with lines drawn on it.\n \n `initial_img` should be the image before any processing.\n \n The result image is computed as follows:\n \n initial_img * α + img * β + γ\n NOTE: initial_img and img must be the same shape!\n \"\"\"\n return cv2.addWeighted(initial_img, α, img, β, γ)",
"_____no_output_____"
]
],
[
[
"### Steps of pipline :\n* This below code blocks are defined individually to show what changes each section of the code makes to the code.",
"_____no_output_____"
]
],
[
[
"gray = grayscale(image)\nplt.imshow(gray, cmap='gray')\nplt.savefig('examples/gray.jpg')",
"_____no_output_____"
]
],
[
[
"* gaussian blur image.",
"_____no_output_____"
]
],
[
[
"kernal_size = 5\nblur_gray = gaussian_blur(gray, kernal_size)\nplt.imshow(blur_gray)\nplt.savefig('examples/blur_gray.jpg')",
"_____no_output_____"
]
],
[
[
"* use of canny edge detection to detect the lane line.",
"_____no_output_____"
]
],
[
[
"low_threshold = 200\nhigh_threshold = 250\nedges = canny(gray, low_threshold, high_threshold)\nplt.imshow(edges, cmap = 'Greys_r')\nplt.savefig('examples/canny_edges.jpg')",
"_____no_output_____"
]
],
[
[
"* selecting region of interest to neglact non useful data from image.",
"_____no_output_____"
]
],
[
[
"# Region of interest start\n \n # Next we'll create a masked edges image using cv2.fillPoly()\n # This time we are defining a four sided polygon to mask\nmask = np.zeros_like(edges) \nignore_mask_color = 255 \nimshape = image.shape\n # vertices = np.array([[(image.shape[0]-100,imshape[0]),(425, 300), (500, 300), (900,imshape[0])]], dtype=np.int32)\n \npoint_A = (imshape[1]*0.1, imshape[0]) # (50,imshape[0])\npoint_B = (imshape[1]*0.45, imshape[0]*0.6) # (425, 300)\npoint_C = (imshape[1]*0.55, imshape[0]*0.6) # (500, 300)\npoint_D = (imshape[1]*0.95, imshape[0]) # (900,imshape[0])\n\nvertices = np.array([[point_A,point_B, point_C, point_D]], dtype=np.int32)\n \ncv2.fillPoly(mask, vertices, ignore_mask_color)\nmasked_edges = cv2.bitwise_and(edges, mask)\nplt.imshow(masked_edges)\nplt.savefig('examples/ROI.jpg')\n # End of region of intrest",
"_____no_output_____"
]
],
[
[
"* Drawing lines using houghs line method and then combining those details with the color image to display lane line of colored image.",
"_____no_output_____"
]
],
[
[
" # From this part Hough transform paramenters starts\nrho = 1\ntheta = np.pi/180\nthreshold = 50\nmin_line_length = 100\nmax_line_gap = 160\nline_image = np.copy(image)*0 # For creating a blank to draw lines on\n \n # masked edges is the output image of region of intrest\n \nlines = hough_lines(masked_edges, rho, theta, threshold, min_line_length, max_line_gap)\n\n #color_edges = np.dstack((masked_edges, masked_edges, masked_edges))\n\ncombo = weighted_img(lines, image, 0.8, 1, 0)\n\nplt.imshow(combo)\nplt.savefig('examples/lane_lines.jpg')",
"_____no_output_____"
]
],
[
[
"### Pipline :\n* Following code has process been piplined to be used in video processing.",
"_____no_output_____"
]
],
[
[
"def process_image(image):\n # TODO: put your pipeline here, you should return the final output (image where lines are drawn on lanes)\n\n # gray contains the gray scale version of image\n gray = grayscale(image)\n #plt.imshow(gray)\n\n kernal_size = 5\n blur_gray = gaussian_blur(gray, kernal_size)\n plt.imshow(blur_gray)\n\n # Now this gray image is further processed to filter out \n # between low and high threshold and detect edges with \n # canny edge detection method\n low_threshold = 200\n high_threshold = 250\n edges = canny(gray, low_threshold, high_threshold)\n plt.imshow(edges, cmap = 'Greys_r')\n \n # Region of interest start\n \n # Next we'll create a masked edges image using cv2.fillPoly()\n # This time we are defining a four sided polygon to mask\n mask = np.zeros_like(edges) \n ignore_mask_color = 255 \n imshape = image.shape\n # vertices = np.array([[(image.shape[0]-100,imshape[0]),(425, 300), (500, 300), (900,imshape[0])]], dtype=np.int32)\n \n point_A = (imshape[1]*0.1, imshape[0]) # (50,imshape[0])\n point_B = (imshape[1]*0.45, imshape[0]*0.6) # (425, 300)\n point_C = (imshape[1]*0.55, imshape[0]*0.6) # (500, 300)\n point_D = (imshape[1]*0.95, imshape[0]) # (900,imshape[0])\n\n vertices = np.array([[point_A,point_B, point_C, point_D]], dtype=np.int32)\n \n cv2.fillPoly(mask, vertices, ignore_mask_color)\n masked_edges = cv2.bitwise_and(edges, mask)\n \n # End of region of intrest\n\n # From this part Hough transform paramenters starts\n rho = 1\n theta = np.pi/180\n threshold = 50\n min_line_length = 100\n max_line_gap = 160\n line_image = np.copy(image)*0 # For creating a blank to draw lines on\n \n # masked edges is the output image of region of intrest\n \n lines = hough_lines(masked_edges, rho, theta, threshold, min_line_length, max_line_gap)\n\n #color_edges = np.dstack((masked_edges, masked_edges, masked_edges))\n\n combo = weighted_img(lines, image, 0.8, 1, 0)\n\n plt.imshow(combo)\n result = combo\n return result",
"_____no_output_____"
]
],
[
[
"### Video to be processed :\n* Initially white image output is created where the processed frames from the video can be saved.",
"_____no_output_____"
]
],
[
[
"white_output = 'test_videos_output/solidWhiteRight.mp4'\n## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video\n## To do so add .subclip(start_second,end_second) to the end of the line below\n## Where start_second and end_second are integer values representing the start and end of the subclip\n## You may also uncomment the following line for a subclip of the first 5 seconds\n##clip1 = VideoFileClip(\"test_videos/solidWhiteRight.mp4\").subclip(0,5)\nclip1 = VideoFileClip(\"test_videos/solidWhiteRight.mp4\").subclip(0,10)\nwhite_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!\n%time white_clip.write_videofile(white_output, audio=False)",
"\rt: 0%| | 0/250 [00:00<?, ?it/s, now=None]"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a07352431b6a79850c460f225249a41b14352ad
| 16,584 |
ipynb
|
Jupyter Notebook
|
docs/tutorials/intro/executing_tasks.ipynb
|
erikturk/nornir
|
dce6cf5d81037c78214da24edd7aa04bb454561f
|
[
"Apache-2.0"
] | 4 |
2020-01-19T16:29:19.000Z
|
2020-12-30T19:25:14.000Z
|
docs/tutorials/intro/executing_tasks.ipynb
|
erikturk/nornir
|
dce6cf5d81037c78214da24edd7aa04bb454561f
|
[
"Apache-2.0"
] | null | null | null |
docs/tutorials/intro/executing_tasks.ipynb
|
erikturk/nornir
|
dce6cf5d81037c78214da24edd7aa04bb454561f
|
[
"Apache-2.0"
] | 3 |
2018-11-01T18:20:30.000Z
|
2020-04-10T20:12:36.000Z
| 46.453782 | 572 | 0.489086 |
[
[
[
"from nornir import InitNornir\nnr = InitNornir(config_file=\"config.yaml\")",
"_____no_output_____"
]
],
[
[
"# Executing tasks\n\nNow that you know how to initialize nornir and work with the inventory let's see how we can leverage it to run tasks on groups of hosts.\n\nNornir ships a bunch of tasks you can use directly without having to code them yourself. You can check them out [here](../../plugins/tasks/index.rst).\n\nLet's start by executing the `ls -la /tmp` command on all the device in `cmh` of type `host`:\n",
"_____no_output_____"
]
],
[
[
"from nornir.plugins.tasks import commands\nfrom nornir.plugins.functions.text import print_result\n\ncmh_hosts = nr.filter(site=\"cmh\", role=\"host\")\n\nresult = cmh_hosts.run(task=commands.remote_command,\n command=\"ls -la /tmp\")\n\nprint_result(result, vars=[\"stdout\"])",
"\u001b[1m\u001b[36mremote_command******************************************************************\u001b[0m\n\u001b[0m\u001b[1m\u001b[34m* host1.cmh ** changed : False *************************************************\u001b[0m\n\u001b[0m\u001b[1m\u001b[32mvvvv remote_command ** changed : False vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv INFO\u001b[0m\n\u001b[0mtotal 8\ndrwxrwxrwt 2 root root 4096 Oct 27 14:53 .\ndrwxr-xr-x 24 root root 4096 Oct 27 14:53 ..\n\u001b[0m\n\u001b[0m\u001b[1m\u001b[32m^^^^ END remote_command ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\u001b[0m\n\u001b[0m\u001b[1m\u001b[34m* host2.cmh ** changed : False *************************************************\u001b[0m\n\u001b[0m\u001b[1m\u001b[32mvvvv remote_command ** changed : False vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv INFO\u001b[0m\n\u001b[0mtotal 8\ndrwxrwxrwt 2 root root 4096 Oct 27 14:54 .\ndrwxr-xr-x 24 root root 4096 Oct 27 14:54 ..\n\u001b[0m\n\u001b[0m\u001b[1m\u001b[32m^^^^ END remote_command ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\u001b[0m\n\u001b[0m"
]
],
[
[
"So what have we done here? First we have imported the `commands` and `text` modules. Then we have narrowed down nornir to the hosts we want to operate on. Once we have selected the devices we wanted to operate on we have run two tasks:\n\n1. The task `commands.remote_command` which runs the specified `command` in the remote device.\n2. The function `print_result` which just prints on screen the result of an executed task or group of tasks.\n\nLet's try with another example:",
"_____no_output_____"
]
],
[
[
"from nornir.plugins.tasks import networking\n\ncmh_spines = nr.filter(site=\"bma\", role=\"spine\")\nresult = cmh_spines.run(task=networking.napalm_get,\n getters=[\"facts\"]) \nprint_result(result)",
"\u001b[1m\u001b[36mnapalm_get**********************************************************************\u001b[0m\n\u001b[0m\u001b[1m\u001b[34m* spine00.bma ** changed : False ***********************************************\u001b[0m\n\u001b[0m\u001b[1m\u001b[32mvvvv napalm_get ** changed : False vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv INFO\u001b[0m\n\u001b[0m{\u001b[0m \u001b[0m'facts'\u001b[0m: \u001b[0m{\u001b[0m \u001b[0m'fqdn'\u001b[0m: \u001b[0m'localhost'\u001b[0m,\n \u001b[0m'hostname'\u001b[0m: \u001b[0m'localhost'\u001b[0m,\n \u001b[0m'interface_list'\u001b[0m: \u001b[0m['Ethernet1', 'Ethernet2', 'Management1']\u001b[0m,\n \u001b[0m'model'\u001b[0m: \u001b[0m'vEOS'\u001b[0m,\n \u001b[0m'os_version'\u001b[0m: \u001b[0m'4.20.1F-6820520.4201F'\u001b[0m,\n \u001b[0m'serial_number'\u001b[0m: \u001b[0m''\u001b[0m,\n \u001b[0m'uptime'\u001b[0m: \u001b[0m499\u001b[0m,\n \u001b[0m'vendor'\u001b[0m: \u001b[0m'Arista'\u001b[0m}\u001b[0m}\u001b[0m\n\u001b[0m\u001b[1m\u001b[32m^^^^ END napalm_get ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\u001b[0m\n\u001b[0m\u001b[1m\u001b[34m* spine01.bma ** changed : False ***********************************************\u001b[0m\n\u001b[0m\u001b[1m\u001b[32mvvvv napalm_get ** changed : False vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv INFO\u001b[0m\n\u001b[0m{\u001b[0m \u001b[0m'facts'\u001b[0m: \u001b[0m{\u001b[0m \u001b[0m'fqdn'\u001b[0m: \u001b[0m'vsrx'\u001b[0m,\n \u001b[0m'hostname'\u001b[0m: \u001b[0m'vsrx'\u001b[0m,\n \u001b[0m'interface_list'\u001b[0m: \u001b[0m[\u001b[0m \u001b[0m\u001b[0m'ge-0/0/0'\u001b[0m,\n \u001b[0m'gr-0/0/0'\u001b[0m,\n \u001b[0m'ip-0/0/0'\u001b[0m,\n \u001b[0m'lsq-0/0/0'\u001b[0m,\n \u001b[0m'lt-0/0/0'\u001b[0m,\n \u001b[0m'mt-0/0/0'\u001b[0m,\n \u001b[0m'sp-0/0/0'\u001b[0m,\n \u001b[0m'ge-0/0/1'\u001b[0m,\n \u001b[0m'ge-0/0/2'\u001b[0m,\n \u001b[0m'.local.'\u001b[0m,\n \u001b[0m'dsc'\u001b[0m,\n \u001b[0m'gre'\u001b[0m,\n \u001b[0m'ipip'\u001b[0m,\n \u001b[0m'irb'\u001b[0m,\n \u001b[0m'lo0'\u001b[0m,\n \u001b[0m'lsi'\u001b[0m,\n \u001b[0m'mtun'\u001b[0m,\n \u001b[0m'pimd'\u001b[0m,\n \u001b[0m'pime'\u001b[0m,\n \u001b[0m'pp0'\u001b[0m,\n \u001b[0m'ppd0'\u001b[0m,\n \u001b[0m'ppe0'\u001b[0m,\n \u001b[0m'st0'\u001b[0m,\n \u001b[0m'tap'\u001b[0m,\n \u001b[0m'vlan'\u001b[0m]\u001b[0m,\n \u001b[0m'model'\u001b[0m: \u001b[0m'FIREFLY-PERIMETER'\u001b[0m,\n \u001b[0m'os_version'\u001b[0m: \u001b[0m'12.1X47-D20.7'\u001b[0m,\n \u001b[0m'serial_number'\u001b[0m: \u001b[0m'66c3cbe24e7b'\u001b[0m,\n \u001b[0m'uptime'\u001b[0m: \u001b[0m385\u001b[0m,\n \u001b[0m'vendor'\u001b[0m: \u001b[0m'Juniper'\u001b[0m}\u001b[0m}\u001b[0m\n\u001b[0m\u001b[1m\u001b[32m^^^^ END napalm_get ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\u001b[0m\n\u001b[0m"
]
],
[
[
"Pretty much the same pattern, just different task on different devices.\n\n## What is a task\n\nLet's take a look at what a task is. In it's simplest form a task is a function that takes at least a [Task](../../ref/api/task.rst#nornir.core.task.Task) object as argument. For instance:\n\n",
"_____no_output_____"
]
],
[
[
"def hi(task):\n print(f\"hi! My name is {task.host.name} and I live in {task.host['site']}\")\n \nnr.run(task=hi, num_workers=1)",
"hi! My name is host1.cmh and I live in cmh\u001b[0m\n\u001b[0mhi! My name is host2.cmh and I live in cmh\u001b[0m\n\u001b[0mhi! My name is spine00.cmh and I live in cmh\u001b[0m\n\u001b[0mhi! My name is spine01.cmh and I live in cmh\u001b[0m\n\u001b[0mhi! My name is leaf00.cmh and I live in cmh\u001b[0m\n\u001b[0mhi! My name is leaf01.cmh and I live in cmh\u001b[0m\n\u001b[0mhi! My name is host1.bma and I live in bma\u001b[0m\n\u001b[0mhi! My name is host2.bma and I live in bma\u001b[0m\n\u001b[0mhi! My name is spine00.bma and I live in bma\u001b[0m\n\u001b[0mhi! My name is spine01.bma and I live in bma\u001b[0m\n\u001b[0mhi! My name is leaf00.bma and I live in bma\u001b[0m\n\u001b[0mhi! My name is leaf01.bma and I live in bma\u001b[0m\n\u001b[0m"
]
],
[
[
"The task object has access to `nornir`, `host` and `dry_run` attributes.\n\nYou can call other tasks from within a task:",
"_____no_output_____"
]
],
[
[
"def available_resources(task):\n task.run(task=commands.remote_command,\n name=\"Available disk\",\n command=\"df -h\")\n task.run(task=commands.remote_command,\n name=\"Available memory\",\n command=\"free -m\")\n \nresult = cmh_hosts.run(task=available_resources)\n\nprint_result(result, vars=[\"stdout\"])",
"\u001b[1m\u001b[36mavailable_resources*************************************************************\u001b[0m\n\u001b[0m\u001b[1m\u001b[34m* host1.cmh ** changed : False *************************************************\u001b[0m\n\u001b[0m\u001b[1m\u001b[32mvvvv available_resources ** changed : False vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv INFO\u001b[0m\n\u001b[0m\u001b[1m\u001b[32m---- Available disk ** changed : False ----------------------------------------- INFO\u001b[0m\n\u001b[0mFilesystem Size Used Avail Use% Mounted on\n/dev/mapper/precise64-root 79G 2.2G 73G 3% /\nudev 174M 4.0K 174M 1% /dev\ntmpfs 74M 284K 73M 1% /run\nnone 5.0M 0 5.0M 0% /run/lock\nnone 183M 0 183M 0% /run/shm\n/dev/sda1 228M 25M 192M 12% /boot\nvagrant 373G 251G 122G 68% /vagrant\n\u001b[0m\n\u001b[0m\u001b[1m\u001b[32m---- Available memory ** changed : False --------------------------------------- INFO\u001b[0m\n\u001b[0m total used free shared buffers cached\nMem: 365 87 277 0 8 36\n-/+ buffers/cache: 42 322\nSwap: 767 0 767\n\u001b[0m\n\u001b[0m\u001b[1m\u001b[32m^^^^ END available_resources ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\u001b[0m\n\u001b[0m\u001b[1m\u001b[34m* host2.cmh ** changed : False *************************************************\u001b[0m\n\u001b[0m\u001b[1m\u001b[32mvvvv available_resources ** changed : False vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv INFO\u001b[0m\n\u001b[0m\u001b[1m\u001b[32m---- Available disk ** changed : False ----------------------------------------- INFO\u001b[0m\n\u001b[0mFilesystem Size Used Avail Use% Mounted on\n/dev/mapper/precise64-root 79G 2.2G 73G 3% /\nudev 174M 4.0K 174M 1% /dev\ntmpfs 74M 284K 73M 1% /run\nnone 5.0M 0 5.0M 0% /run/lock\nnone 183M 0 183M 0% /run/shm\n/dev/sda1 228M 25M 192M 12% /boot\nvagrant 373G 251G 122G 68% /vagrant\n\u001b[0m\n\u001b[0m\u001b[1m\u001b[32m---- Available memory ** changed : False --------------------------------------- INFO\u001b[0m\n\u001b[0m total used free shared buffers cached\nMem: 365 87 277 0 8 36\n-/+ buffers/cache: 42 322\nSwap: 767 0 767\n\u001b[0m\n\u001b[0m\u001b[1m\u001b[32m^^^^ END available_resources ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\u001b[0m\n\u001b[0m"
]
],
[
[
"You probably noticed in your previous example that you can name your tasks.\n\nYour task can also accept any extra arguments you may need:",
"_____no_output_____"
]
],
[
[
"def count(task, to):\n print(f\"{task.host.name}: {list(range(0, to))}\")\n \ncmh_hosts.run(task=count,\n num_workers=1,\n to=10)\ncmh_hosts.run(task=count,\n num_workers=1,\n to=20)",
"host1.cmh: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\u001b[0m\n\u001b[0mhost2.cmh: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\u001b[0m\n\u001b[0mhost1.cmh: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]\u001b[0m\n\u001b[0mhost2.cmh: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]\u001b[0m\n\u001b[0m"
]
],
[
[
"## Tasks vs Functions\n\nYou probably noticed we introduced the concept of a `function` when we talked about `print_result`. The difference between tasks and functions is that tasks are meant to be run per host while functions are helper functions meant to be run globally.",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4a07564b5c32430511aa7f5f04979611315e6107
| 316,726 |
ipynb
|
Jupyter Notebook
|
notebooks/FireDetails_exploration.ipynb
|
lmjacoby/toohottohandle
|
e2a45e8ad14671ed242030e7f579a02ccd9aaf72
|
[
"MIT"
] | null | null | null |
notebooks/FireDetails_exploration.ipynb
|
lmjacoby/toohottohandle
|
e2a45e8ad14671ed242030e7f579a02ccd9aaf72
|
[
"MIT"
] | null | null | null |
notebooks/FireDetails_exploration.ipynb
|
lmjacoby/toohottohandle
|
e2a45e8ad14671ed242030e7f579a02ccd9aaf72
|
[
"MIT"
] | null | null | null | 47.642298 | 31,536 | 0.540215 |
[
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"## Some things I've learned about the data:\n- there were fires in every state except Delaware in 2018.\n- Fire names seem to be repeated, but it's hard for me to distinguish how to parse them\n\n\nCould be cool to look at:\n- States with the most fires\n- Classes of fires and numbers\n- Human vs non-human fires",
"_____no_output_____"
]
],
[
[
"data = pd.read_csv('./2018_FireDetailsDataset.csv')",
"_____no_output_____"
],
[
"data.head(5)",
"_____no_output_____"
]
],
[
[
"## Making a dictionary of dataframes by fire size class (A-G)\n- Class A: 0.25 acres or less;\n- Class B: > 0.25 acres, < 10 acres;\n- Class C: >= 10 acres, < 100 acres;\n- Class D: >= 100 acres, < 300 acres;\n- Class E: >= 300 acres, < 1,000 acres;\n- Class F: >= 1,000 acres, < 5,000 acres;\n- Class G: >= 5,000 acres.",
"_____no_output_____"
]
],
[
[
"sizeclass = list(sorted(data['FIRE_SIZE_CLASS'].unique()))\nsizeclass",
"_____no_output_____"
],
[
"size_dic = {}\nfor size in sizeclass:\n size_dic[size] = data.loc[data['FIRE_SIZE_CLASS'] == size]",
"_____no_output_____"
],
[
"for (key, value) in size_dic.items():\n print('#{}: {}, {}: {} acres burned'.format(key, len(value), key, round(value['FIRE_SIZE'].sum())))",
"#A: 40314, A: 3820 acres burned\n#B: 31396, B: 62831 acres burned\n#C: 6865, C: 202977 acres burned\n#D: 1085, D: 173839 acres burned\n#E: 621, E: 319395 acres burned\n#F: 378, F: 864147 acres burned\n#G: 204, G: 6616913 acres burned\n"
],
[
"_,_,_ = plt.hist(x=size_dic['A']['FIRE_SIZE'], bins=50)",
"_____no_output_____"
],
[
"_,_,_ = plt.hist(x=size_dic['G']['FIRE_SIZE'], bins=50)",
"_____no_output_____"
],
[
"size_dic['G'].head()",
"_____no_output_____"
],
[
"size_dic['G'].sort_values(by=['FIRE_SIZE'], ascending=False).head()",
"_____no_output_____"
],
[
"len(size_dic['G']['FIRE_NAME'].unique())",
"_____no_output_____"
],
[
"dg = size_dic['G'][size_dic['G']['CONT_DATE'].isna()].sort_values(by=['FIRE_SIZE'], ascending=False)",
"_____no_output_____"
],
[
"size_dic['G'].sort_values(by=['FIRE_SIZE'], ascending = False).head(15)",
"_____no_output_____"
],
[
"#size_dic['G'].to_csv('./classg_2018_editedcontdate.csv', index=False)",
"_____no_output_____"
]
],
[
[
"## Cleaning up data?\nI didn't find any really great ways to clean the dataset",
"_____no_output_____"
]
],
[
[
"data[data['CONT_DATE'].isna()]",
"_____no_output_____"
],
[
"len(data)",
"_____no_output_____"
],
[
"len(data['FIRE_NAME'].unique())",
"_____no_output_____"
],
[
"data.loc[data['FIRE_NAME'] == 'NEW YEAR']",
"_____no_output_____"
],
[
"len(data['MTBS_FIRE_NAME'])",
"_____no_output_____"
],
[
"data.loc[data['FIRE_NAME'] == 'CARR']",
"_____no_output_____"
],
[
"duplicate = data[data.duplicated(['FIRE_NAME', 'STATE', 'DISCOVERY_DATE', 'NWCG_REPORTING_UNIT_ID', 'CONT_DATE'])]\nduplicate",
"_____no_output_____"
],
[
"group = data.groupby(['FIRE_NAME', 'STATE', 'DISCOVERY_DATE', 'NWCG_REPORTING_UNIT_ID', 'CONT_DATE']).size().reset_index(name='Freq')\ngroup['FIRE_NAME']",
"_____no_output_____"
],
[
"group.loc[group['FIRE_NAME'] == 'NEW YEAR']",
"_____no_output_____"
],
[
"duplicate_all = data[data.duplicated(['FIRE_NAME'], )]\nduplicate",
"_____no_output_____"
],
[
"nan = pd.isna(data['FIRE_NAME'])",
"_____no_output_____"
],
[
"nan_mtbs = pd.isna(data['MTBS_FIRE_NAME'])",
"_____no_output_____"
],
[
"nan_mtbs",
"_____no_output_____"
],
[
"len(nan_mtbs[nan_mtbs])",
"_____no_output_____"
],
[
"# 13806 NaN values for FIRE_NAME in the dataset\nlen(nan[nan])",
"_____no_output_____"
],
[
"#nullname\n#len(nullname)",
"_____no_output_____"
]
],
[
[
"## Making a dictionary of dataframes by state (2018)",
"_____no_output_____"
]
],
[
[
"states = list(sorted(data['STATE'].unique()))\n#states",
"_____no_output_____"
],
[
"state_dic = {}\nfor state in states:\n state_dic[state] = data.loc[data['STATE'] == state]",
"_____no_output_____"
],
[
"len(data.loc[data['STATE'] == 'AK'])",
"_____no_output_____"
],
[
"state_dic['AK']['FIRE_SIZE'].describe()",
"_____no_output_____"
]
],
[
[
"### Summing acres burned on a state basis",
"_____no_output_____"
]
],
[
[
"state_dic['AK']['FIRE_SIZE'].sum()",
"_____no_output_____"
],
[
"state_sum = []\nfor key, value in state_dic.items():\n tup = (key, round(value['FIRE_SIZE'].sum()))\n state_sum.append(tup)",
"_____no_output_____"
],
[
"statesum_df = pd.DataFrame(state_sum, columns = ['state', 'total_acres_burned'])",
"_____no_output_____"
],
[
"# adding data in for delaware since it didn't have any\nstatesum_df.loc[len(statesum_df.index)] = ['DE', 0]",
"_____no_output_____"
],
[
"statesum_df.sort_values(by='state', ignore_index=True, inplace=True)\n#statesum_df",
"_____no_output_____"
],
[
"#statesum_df.to_csv( './burnedacres_bystate.csv', index=False)",
"_____no_output_____"
],
[
"_,_,_ = plt.hist(x=statesum_df['total_acres_burned'] , bins=50)",
"_____no_output_____"
],
[
"#state_dic['HI']",
"_____no_output_____"
]
],
[
[
"#### Humans vs natural on a per state basis",
"_____no_output_____"
]
],
[
[
"statecause_sum = []\nfor key, value in state_dic.items():\n \n tup = (key, round(value['FIRE_SIZE'].sum()))\n state_sum.append(tup)",
"_____no_output_____"
],
[
"cause_list = data['NWCG_CAUSE_CLASSIFICATION'].unique()\ncause_list",
"_____no_output_____"
],
[
"statecause_list = []\nfor key, value in state_dic.items():\n cause_sumlist = []\n for cause in cause_list:\n statecause = value.loc[value['NWCG_CAUSE_CLASSIFICATION'] == cause]\n statecause_sum = round(statecause['FIRE_SIZE'].sum())\n \n cause_sumlist.append((cause, statecause_sum))\n \n statecause_list.append(cause_sumlist)",
"_____no_output_____"
],
[
"#statecause_list",
"_____no_output_____"
],
[
"concat_list = []\nfor df in statecause_list:\n dd = pd.DataFrame(df).T\n dd.columns = dd.iloc[0]\n dd.drop(dd.index[0], inplace=True)\n \n concat_list.append(dd)\n\nsumcause_df = pd.concat(concat_list)",
"_____no_output_____"
],
[
"sumcause_df.insert(0, 'State', list(state_dic.keys()))\nsumcause_df.reset_index(drop=True, inplace=True)",
"_____no_output_____"
],
[
"#sumcause_df",
"_____no_output_____"
]
],
[
[
"# Loading in Big Data Set\nto make sure I didn't lose anything in the transfer",
"_____no_output_____"
]
],
[
[
"big_data = pd.read_csv('./1992to2018_FireDetails.csv', parse_dates=['DISCOVERY_DATE', 'CONT_DATE'], low_memory=False)",
"_____no_output_____"
],
[
"big_data.shape",
"_____no_output_____"
]
],
[
[
"### Breakdown of causes of fires by 'Cause Classification' and the more specific 'General Cause'\nI think this can be looked at as cause classification as the broad category and general cause as the subsidiary",
"_____no_output_____"
]
],
[
[
"big_data['NWCG_CAUSE_CLASSIFICATION'].unique()",
"_____no_output_____"
],
[
"cause = (big_data['NWCG_CAUSE_CLASSIFICATION'].value_counts())",
"_____no_output_____"
],
[
"cause.sort_index()",
"_____no_output_____"
],
[
"big_data['NWCG_GENERAL_CAUSE'].unique()",
"_____no_output_____"
],
[
"gcause = big_data['NWCG_GENERAL_CAUSE'].value_counts().sort_index()\ngcause",
"_____no_output_____"
]
],
[
[
"### Year dictionary",
"_____no_output_____"
]
],
[
[
"years = list(sorted(big_data['FIRE_YEAR'].unique()))\n#years",
"_____no_output_____"
],
[
"year_dic = {}\nfor year in years:\n year_dic[year] = big_data.loc[big_data['FIRE_YEAR'] == year]",
"_____no_output_____"
],
[
"year_sum = []\nfor key, value in year_dic.items():\n tup = (key, value.shape[0],\n round(value['FIRE_SIZE'].sum()))\n year_sum.append(tup)",
"_____no_output_____"
],
[
"yearsum_df = pd.DataFrame(year_sum, columns = \n ['year', 'total_fires',\n 'total_acres_burned'])#,\n #'cause_classification',\n #'general_cause'])",
"_____no_output_____"
],
[
"yearsum_df.head()",
"_____no_output_____"
],
[
"#yearsum_df.to_csv('./firesummary_byyear.csv', index=False)",
"_____no_output_____"
]
],
[
[
"-----\n-----",
"_____no_output_____"
],
[
"### Making csv's for causes",
"_____no_output_____"
]
],
[
[
"causeclass = value['NWCG_CAUSE_CLASSIFICATION'].value_counts().sort_index()\ngencause = value['NWCG_GENERAL_CAUSE'].value_counts().sort_index()",
"_____no_output_____"
],
[
"#gcause = big_data['NWCG_GENERAL_CAUSE'].value_counts().sort_index().to_frame().T\n#gcause",
"_____no_output_____"
],
[
"classcause_sum = []\nfor key, value in year_dic.items():\n dfclass = value['NWCG_CAUSE_CLASSIFICATION'].value_counts().sort_index().to_frame().T\n \n classcause_sum.append(dfclass)",
"_____no_output_____"
],
[
"#classcause_sum",
"_____no_output_____"
],
[
"df_classcause = pd.concat(classcause_sum)",
"_____no_output_____"
],
[
"df_classcause.insert(0, 'year', list(year_dic.keys()))",
"_____no_output_____"
],
[
"df_classcause.reset_index(drop=True, inplace=True)",
"_____no_output_____"
],
[
"gencause_sum = []\nfor key, value in year_dic.items():\n dfgen = value['NWCG_GENERAL_CAUSE'].value_counts().sort_index().to_frame().T\n \n gencause_sum.append(dfgen)",
"_____no_output_____"
],
[
"#gencause_sum",
"_____no_output_____"
],
[
"df_gencause = pd.concat(gencause_sum)\n#df_gencause",
"_____no_output_____"
],
[
"#df_gencause.insert(0, 'year', list(year_dic.keys()))",
"_____no_output_____"
],
[
"df_gencause.reset_index(drop=True, inplace=True)",
"_____no_output_____"
],
[
"df_gencause['Firearms and explosives use'] = df_gencause['Firearms and explosives use'].fillna(0).astype(int)",
"_____no_output_____"
],
[
"df_classcause.head()",
"_____no_output_____"
],
[
"df_gencause.head()",
"_____no_output_____"
],
[
"df_causecombo = pd.concat([df_classcause, df_gencause], axis=1)\ndf_causecombo.head()",
"_____no_output_____"
],
[
"#df_causecombo.to_csv('./causeoffire_byyear.csv', index=False)",
"_____no_output_____"
],
[
"# create figure and axis objects with subplots()\nfig,ax = plt.subplots()\n# make a plot\n#ax.plot(yearsum_df.year, yearsum_df.total_acres_burned, color=\"red\", marker=\"o\")\n# set x-axis label\n#ax.set_xlabel(\"year\",fontsize=14)\n# set y-axis label\n#ax.set_ylabel(\"total_acres_burned\",color=\"red\",fontsize=14)\n\n# twin object for two different y-axis on the sample plot\nax2=ax.twinx()\n# make a plot with different y-axis using second axis object\nax2.plot(df_causecombo.year, df_causecombo[\"Human\"],color=\"blue\",marker=\"o\")\nax2.plot(df_causecombo.year, df_causecombo[\"Natural\"],color=\"green\",marker=\"o\")\nax2.set_ylabel(\"Human/Natural\", fontsize=14)\nplt.show()",
"_____no_output_____"
],
[
"plt.plot(yearsum_df['year'], yearsum_df['total_acres_burned'], marker = 'o', color='red')\nplt.xlabel('Year')\nplt.ylabel('Total Acres Burned')\nplt.ylim(0,10545663)\nplt.show()\n#plt.plot(df_causecombo['year'], df_causecombo['Human'])\n#plt.plot(df_causecombo['year'], df_causecombo['Natural'])",
"_____no_output_____"
]
],
[
[
"----\n----\n### Size changes over years\nChecking Class data to see if large wildfires increased over the years recorded",
"_____no_output_____"
]
],
[
[
"sizeclass_sum = []\nfor key, value in year_dic.items():\n dfsize = value['FIRE_SIZE_CLASS'].value_counts().sort_index().to_frame().T\n \n sizeclass_sum.append(dfsize)",
"_____no_output_____"
],
[
"#sizeclass_sum[0]",
"_____no_output_____"
],
[
"df_sizeclass = pd.concat(sizeclass_sum)\n#df_sizeclass",
"_____no_output_____"
],
[
"df_sizeclass.insert(0, 'year', list(year_dic.keys()))",
"_____no_output_____"
],
[
"df_sizeclass.reset_index(drop=True, inplace=True)",
"_____no_output_____"
],
[
"df_sizeclass.head()",
"_____no_output_____"
],
[
"df_sizeclass.columns",
"_____no_output_____"
],
[
"df_sizeclass2 = df_sizeclass.set_axis(['year', '#A', '#B', '#C', '#D', '#E', '#F', '#G'], axis=1) ",
"_____no_output_____"
],
[
"df_sizeclass2.head()",
"_____no_output_____"
]
],
[
[
"---",
"_____no_output_____"
]
],
[
[
"sizeclass",
"_____no_output_____"
],
[
"megalist = []\nfor key, value in year_dic.items():\n class_sumlist = []\n for size in sizeclass:\n yrclass = value.loc[value['FIRE_SIZE_CLASS'] == size]\n class_sum = round(yrclass['FIRE_SIZE'].sum())\n \n class_sumlist.append((size, class_sum))\n \n megalist.append(class_sumlist)",
"_____no_output_____"
],
[
"megalist",
"_____no_output_____"
],
[
"concat_list = []\nfor df in megalist:\n dd = pd.DataFrame(df).T\n dd.columns = dd.iloc[0]\n dd.drop(dd.index[0], inplace=True)\n \n concat_list.append(dd)\n\nsumdf = pd.concat(concat_list)",
"_____no_output_____"
],
[
"#sumdf.insert(0, 'year', list(year_dic.keys()))\nsumdf.reset_index(drop=True, inplace=True)\nsumdf.head()",
"_____no_output_____"
],
[
"sumdf2 = sumdf.set_axis(['A_acres', 'B_acres', 'C_acres', 'D_acres', 'E_acres', 'F_acres', 'G_acres'], axis=1)\nsumdf2.head()",
"_____no_output_____"
],
[
"df_sizeclasscombo = pd.concat([df_sizeclass2, sumdf2], axis=1)\ndf_sizeclasscombo.head()",
"_____no_output_____"
],
[
"#df_sizeclasscombo.to_csv('./firesizeclass_byyear.csv', index=False)",
"_____no_output_____"
],
[
"sumdf2['%G'] = (sumdf2['G_acres']/sumdf.sum(axis=1))\nsumdf2['%B'] = (sumdf2['B_acres']/sumdf.sum(axis=1))\nsumdf2['%F'] = (sumdf2['F_acres']/sumdf.sum(axis=1))\nsumdf2.head()",
"_____no_output_____"
],
[
"plt.plot(df_sizeclass2['year'], sumdf2['%B'], marker='^', color='orange', label='class B')\nplt.plot(df_sizeclass2['year'], sumdf2['%F'], marker='x', color='green', label='class F')\nplt.plot(df_sizeclass2['year'], sumdf2['%G'], marker='o', label='class G')\n\nplt.xlabel('year')\nplt.ylabel('% of total fires')\nplt.legend()\nplt.ylim(0,1)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## 2018 data check",
"_____no_output_____"
]
],
[
[
"data2018 = big_data.loc[big_data['FIRE_YEAR'] == 2018]",
"_____no_output_____"
],
[
"data2018['DISCOVERY_DATE'].head()",
"_____no_output_____"
],
[
"#data2018.sort_values(by=('DISCOVERY_DATE'))",
"_____no_output_____"
],
[
"data2018.columns",
"_____no_output_____"
],
[
"col_diff = list(set(list(data2018.columns)) - set(list(data.columns)))\ncol_diff",
"_____no_output_____"
],
[
"datadrop = data2018.drop(col_diff, axis=1)",
"_____no_output_____"
],
[
"datadrop",
"_____no_output_____"
],
[
"datadrop.loc[datadrop['FIRE_NAME'] == 'SPRING CREEK']",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a0757ac3c793139ec5fdb7e8f4e3de238545f31
| 3,614 |
ipynb
|
Jupyter Notebook
|
interviewq_exercises/q055_pandas_food_category_apply.ipynb
|
gentimouton/dives
|
5441379f592c7055a086db6426dbb367072848c6
|
[
"Unlicense"
] | 1 |
2020-02-28T17:08:43.000Z
|
2020-02-28T17:08:43.000Z
|
interviewq_exercises/q055_pandas_food_category_apply.ipynb
|
gentimouton/dives
|
5441379f592c7055a086db6426dbb367072848c6
|
[
"Unlicense"
] | null | null | null |
interviewq_exercises/q055_pandas_food_category_apply.ipynb
|
gentimouton/dives
|
5441379f592c7055a086db6426dbb367072848c6
|
[
"Unlicense"
] | null | null | null | 25.450704 | 131 | 0.415053 |
[
[
[
"# Question 55 - Categorizing foods\n\nYou are given the following dataframe and are asked to cateogrize each food into 1 of 3 categories: meat, fruit, or other.\n```\n food \tpounds\n0 \tbacon \t4.0\n1 \tSTRAWBERRIES \t3.5\n2 \tBacon \t7.0\n3 \tSTRAWBERRIES \t3.0\n4 \tBACON \t6.0\n5 \tstrawberries \t9.0\n6 \tStrawberries \t1.0\n7 \tpecans \t3.0\n```\n\nCan you add a new column containing the foods' categories to this dataframe using python?",
"_____no_output_____"
]
],
[
[
"import pandas as pd\ndf = pd.DataFrame({'food':['bacon','STRAWBERRIES','pecans','strawberries'], 'pounds':[1.0,1.2,5.3,4.9]})\n\nfood_dict = {\n 'strawberries': 'fruit',\n 'pecans': 'fruit',\n 'bacon': 'meat'\n}\n \ndf['category'] = df.apply(lambda row: food_dict.get(row['food'].lower(), 'other'), axis=1)\ndf",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
]
] |
4a075df843107f15eef1588f9ff3173ee8cba511
| 49,807 |
ipynb
|
Jupyter Notebook
|
Restaurant_Reviews_Sentiment_Analysis.ipynb
|
Spidy-Coder/Restuarant_reviews_Sentiment_Analysis
|
5c1ce395ed8c25b9c753ca8a26a320d85670b784
|
[
"MIT"
] | null | null | null |
Restaurant_Reviews_Sentiment_Analysis.ipynb
|
Spidy-Coder/Restuarant_reviews_Sentiment_Analysis
|
5c1ce395ed8c25b9c753ca8a26a320d85670b784
|
[
"MIT"
] | null | null | null |
Restaurant_Reviews_Sentiment_Analysis.ipynb
|
Spidy-Coder/Restuarant_reviews_Sentiment_Analysis
|
5c1ce395ed8c25b9c753ca8a26a320d85670b784
|
[
"MIT"
] | null | null | null | 93.271536 | 36,277 | 0.649367 |
[
[
[
"### Importing Libraries",
"_____no_output_____"
]
],
[
[
"import numpy as np\r\nimport pandas as pd\r\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"### Import Dataset",
"_____no_output_____"
]
],
[
[
"dataset = pd.read_csv(\"Restaurant_Reviews.tsv\", delimiter=\"\\t\",quoting=3)",
"_____no_output_____"
]
],
[
[
"### Cleaning the Texts",
"_____no_output_____"
]
],
[
[
"import re #simplify reviews\r\nimport nltk #for NLP,it allows us to download ensemble-> stop words\r\nnltk.download(\"stopwords\")\r\n\r\nfrom nltk.corpus import stopwords #for importing stopwords\r\n#stopwords are used to remove article (ex:-an,a,the,etc)\r\n#which are not imp and does not give any hint for reviews we want\r\n\r\nfrom nltk.stem.porter import PorterStemmer #to apply stemming over useful words\r\n\"\"\"stemming consists of taking only the root of a word\r\n that indicates enough about what this word means\r\n example:-i loved it OR i love it.\r\n here loved will be stemmed to love so just to simplify \r\n the process. It helps to reduce matrix size of sparse matrix\"\"\"\r\n\r\ncorpus = [] #empty list that will contain all cleaned data\r\n\r\nfor i in range(0,1000): #for loop is used to filter for the cleaned data and store them in corpus=[] empty list\r\n review = re.sub(\"[^a-zA-Z]\",\" \",dataset[\"Review\"][i]) #convert punctuations,etc with spaces\r\n review = review.lower() #convert all to lower case\r\n review = review.split() #split the reviews into separate words so as to apply stemming on each of them\r\n ps = PorterStemmer() #variable storing PorterStemmer() class\r\n all_stopwords = stopwords.words('english')\r\n all_stopwords.remove('not')\r\n review = [ps.stem(word) for word in review if not word in set(all_stopwords)]\r\n review = ' '.join(review)\r\n corpus.append(review)\r\n",
"[nltk_data] Downloading package stopwords to /root/nltk_data...\n[nltk_data] Package stopwords is already up-to-date!\n"
],
[
"print(corpus)",
"['wow love place', 'crust not good', 'not tasti textur nasti', 'stop late may bank holiday rick steve recommend love', 'select menu great price', 'get angri want damn pho', 'honeslti tast fresh', 'potato like rubber could tell made ahead time kept warmer', 'fri great', 'great touch', 'servic prompt', 'would not go back', 'cashier care ever say still end wayyy overpr', 'tri cape cod ravoli chicken cranberri mmmm', 'disgust pretti sure human hair', 'shock sign indic cash', 'highli recommend', 'waitress littl slow servic', 'place not worth time let alon vega', 'not like', 'burritto blah', 'food amaz', 'servic also cute', 'could care less interior beauti', 'perform', 'right red velvet cake ohhh stuff good', 'never brought salad ask', 'hole wall great mexican street taco friendli staff', 'took hour get food tabl restaur food luke warm sever run around like total overwhelm', 'worst salmon sashimi', 'also combo like burger fri beer decent deal', 'like final blow', 'found place accid could not happier', 'seem like good quick place grab bite familiar pub food favor look elsewher', 'overal like place lot', 'redeem qualiti restaur inexpens', 'ampl portion good price', 'poor servic waiter made feel like stupid everi time came tabl', 'first visit hiro delight', 'servic suck', 'shrimp tender moist', 'not deal good enough would drag establish', 'hard judg whether side good gross melt styrofoam want eat fear get sick', 'posit note server attent provid great servic', 'frozen puck disgust worst peopl behind regist', 'thing like prime rib dessert section', 'bad food damn gener', 'burger good beef cook right', 'want sandwich go firehous', 'side greek salad greek dress tasti pita hummu refresh', 'order duck rare pink tender insid nice char outsid', 'came run us realiz husband left sunglass tabl', 'chow mein good', 'horribl attitud toward custom talk one custom enjoy food', 'portion huge', 'love friendli server great food wonder imagin menu', 'heart attack grill downtown vega absolut flat line excus restaur', 'not much seafood like string pasta bottom', 'salad right amount sauc not power scallop perfectli cook', 'rip banana not rip petrifi tasteless', 'least think refil water struggl wave minut', 'place receiv star appet', 'cocktail handmad delici', 'definit go back', 'glad found place', 'great food servic huge portion give militari discount', 'alway great time do gringo', 'updat went back second time still amaz', 'got food appar never heard salt batter fish chewi', 'great way finish great', 'deal includ tast drink jeff went beyond expect', 'realli realli good rice time', 'servic meh', 'took min get milkshak noth chocol milk', 'guess known place would suck insid excalibur use common sens', 'scallop dish quit appal valu well', 'time bad custom servic', 'sweet potato fri good season well', 'today second time lunch buffet pretti good', 'much good food vega feel cheat wast eat opportun go rice compani', 'come like experienc underwhelm relationship parti wait person ask break', 'walk place smell like old greas trap other eat', 'turkey roast beef bland', 'place', 'pan cake everyon rave tast like sugari disast tailor palat six year old', 'love pho spring roll oh yummi tri', 'poor batter meat ratio made chicken tender unsatisfi', 'say food amaz', 'omelet die', 'everyth fresh delici', 'summari larg disappoint dine experi', 'like realli sexi parti mouth outrag flirt hottest person parti', 'never hard rock casino never ever step forward', 'best breakfast buffet', 'say bye bye tip ladi', 'never go', 'back', 'food arriv 
quickli', 'not good', 'side cafe serv realli good food', 'server fantast found wife love roast garlic bone marrow ad extra meal anoth marrow go', 'good thing waiter help kept bloddi mari come', 'best buffet town price cannot beat', 'love mussel cook wine reduct duck tender potato dish delici', 'one better buffet', 'went tigerlilli fantast afternoon', 'food delici bartend attent person got great deal', 'ambienc wonder music play', 'go back next trip', 'sooooo good', 'real sushi lover let honest yama not good', 'least min pass us order food arriv busi', 'realli fantast thai restaur definit worth visit', 'nice spici tender', 'good price', 'check', 'pretti gross', 'better atmospher', 'kind hard mess steak', 'although much like look sound place actual experi bit disappoint', 'know place manag serv blandest food ever eaten prepar indian cuisin', 'worst servic boot least worri', 'servic fine waitress friendli', 'guy steak steak love son steak best worst place said best steak ever eaten', 'thought ventur away get good sushi place realli hit spot night', 'host staff lack better word bitch', 'bland not like place number reason want wast time bad review leav', 'phenomen food servic ambianc', 'return', 'definit worth ventur strip pork belli return next time vega', 'place way overpr mediocr food', 'penn vodka excel', 'good select food includ massiv meatloaf sandwich crispi chicken wrap delish tuna melt tasti burger', 'manag rude', 'delici nyc bagel good select cream chees real lox caper even', 'great subway fact good come everi subway not meet expect', 'serious solid breakfast', 'one best bar food vega', 'extrem rude realli mani restaur would love dine weekend vega', 'drink never empti made realli great menu suggest', '', 'waiter help friendli rare check us', 'husband ate lunch disappoint food servic', 'red curri much bamboo shoot tasti', 'nice blanket moz top feel like done cover subpar food', 'bathroom clean place well decor', 'menu alway chang food qualiti go servic extrem slow', 'servic littl slow consid serv peopl server food come slow pace', 'give thumb', 'watch waiter pay lot attent tabl ignor us', 'fianc came middl day greet seat right away', 'great restaur mandalay bay', 'wait forti five minut vain', 'crostini came salad stale', 'highlight great qualiti nigiri', 'staff friendli joint alway clean', 'differ cut piec day still wonder tender well well flavor', 'order voodoo pasta first time realli excel pasta sinc go gluten free sever year ago', 'place good', 'unfortun must hit bakeri leftov day everyth order stale', 'came back today sinc reloc still not impress', 'seat immedi', 'menu divers reason price', 'avoid cost', 'restaur alway full never wait', 'delici', 'place hand one best place eat phoenix metro area', 'go look good food', 'never treat bad', 'bacon hella salti', 'also order spinach avocado salad ingredi sad dress liter zero tast', 'realli vega fine dine use right menu hand ladi price list', 'waitress friendli', 'lordi khao soi dish not miss curri lover', 'everyth menu terrif also thrill made amaz accommod vegetarian daughter', 'perhap caught night judg review not inspir go back', 'servic leav lot desir', 'atmospher modern hip maintain touch cozi', 'not weekli haunt definit place come back everi', 'liter sat minut one ask take order', 'burger absolut flavor meat total bland burger overcook charcoal flavor', 'also decid not send back waitress look like verg heart attack', 'dress treat rude', 'probabl dirt', 'love place hit spot want someth healthi not lack quantiti flavor', 'order lemon 
raspberri ice cocktail also incred', 'food suck expect suck could imagin', 'interest decor', 'realli like crepe station', 'also serv hot bread butter home made potato chip bacon bit top origin good', 'watch prepar delici food', 'egg roll fantast', 'order arriv one gyro miss', 'salad wing ice cream dessert left feel quit satisfi', 'not realli sure joey vote best hot dog valley reader phoenix magazin', 'best place go tasti bowl pho', 'live music friday total blow', 'never insult felt disrespect', 'friendli staff', 'worth drive', 'heard good thing place exceed everi hope could dream', 'food great serivc', 'warm beer help', 'great brunch spot', 'servic friendli invit', 'good lunch spot', 'live sinc first last time step foot place', 'worst experi ever', 'must night place', 'side delish mix mushroom yukon gold pure white corn beateou', 'bug never show would given sure side wall bug climb kitchen', 'minut wait salad realiz come time soon', 'friend love salmon tartar', 'go back', 'extrem tasti', 'waitress good though', 'soggi not good', 'jamaican mojito delici', 'small not worth price', 'food rich order accordingli', 'shower area outsid rins not take full shower unless mind nude everyon see', 'servic bit lack', 'lobster bisqu bussel sprout risotto filet need salt pepper cours none tabl', 'hope bode go busi someon cook come', 'either cold not enough flavor bad', 'love bacon wrap date', 'unbeliev bargain', 'folk otto alway make us feel welcom special', 'main also uninspir', 'place first pho amaz', 'wonder experi made place must stop whenev town', 'food bad enough enjoy deal world worst annoy drunk peopl', 'fun chef', 'order doubl cheeseburg got singl patti fall apart pictur upload yeah still suck', 'great place coupl drink watch sport event wall cover tv', 'possibl give zero star', 'descript said yum yum sauc anoth said eel sauc yet anoth said spici mayo well none roll sauc', 'say would hardest decis honestli dish tast suppos tast amaz', 'not roll eye may stay not sure go back tri', 'everyon attent provid excel custom servic', 'horribl wast time money', 'dish quit flavour', 'time side restaur almost empti excus', 'busi either also build freez cold', 'like review said pay eat place', 'drink took close minut come one point', 'serious flavor delight folk', 'much better ayc sushi place went vega', 'light dark enough set mood', 'base sub par servic receiv effort show gratitud busi go back', 'owner realli great peopl', 'noth privileg work eat', 'greek dress creami flavor', 'overal think would take parent place made similar complaint silent felt', 'pizza good peanut sauc tasti', 'tabl servic pretti fast', 'fantast servic', 'well would given godfath zero star possibl', 'know make', 'tough short flavor', 'hope place stick around', 'bar vega not ever recal charg tap water', 'restaur atmospher exquisit', 'good servic clean inexpens boot', 'seafood fresh gener portion', 'plu buck', 'servic not par either', 'thu far visit twice food absolut delici time', 'good year ago', 'self proclaim coffe cafe wildli disappoint', 'veggitarian platter world', 'cant go wrong food', 'beat', 'stop place madison ironman friendli kind staff', 'chef friendli good job', 'better not dedic boba tea spot even jenni pho', 'like patio servic outstand', 'goat taco skimp meat wow flavor', 'think not', 'mac salad pretti bland not get', 'went bachi burger friend recommend not disappoint', 'servic stink', 'wait wait', 'place not qualiti sushi not qualiti restaur', 'would definit recommend wing well pizza', 'great pizza salad', 'thing went 
wrong burn saganaki', 'wait hour breakfast could done time better home', 'place amaz', 'hate disagre fellow yelper husband disappoint place', 'wait hour never got either pizza mani around us came later', 'know slow', 'staff great food delish incred beer select', 'live neighborhood disappoint back conveni locat', 'know pull pork could soooo delici', 'get incred fresh fish prepar care', 'go gave star rate pleas know third time eat bachi burger write review', 'love fact everyth menu worth', 'never dine place', 'food excel servic good', 'good beer drink select good food select', 'pleas stay away shrimp stir fri noodl', 'potato chip order sad could probabl count mani chip box probabl around', 'food realli bore', 'good servic check', 'greedi corpor never see anoth dime', 'never ever go back', 'much like go back get pass atroci servic never return', 'summer dine charm outdoor patio delight', 'not expect good', 'fantast food', 'order toast english muffin came untoast', 'food good', 'never go back', 'great food price high qualiti hous made', 'bu boy hand rude', 'point friend basic figur place joke mind make publicli loudli known', 'back good bbq lighter fare reason price tell public back old way', 'consid two us left full happi go wrong', 'bread made hous', 'downsid servic', 'also fri without doubt worst fri ever', 'servic except food good review', 'coupl month later return amaz meal', 'favorit place town shawarrrrrrma', 'black eye pea sweet potato unreal', 'disappoint', 'could serv vinaigrett may make better overal dish still good', 'go far mani place never seen restaur serv egg breakfast especi', 'mom got home immedi got sick bite salad', 'server not pleasant deal alway honor pizza hut coupon', 'truli unbeliev good glad went back', 'fantast servic pleas atmospher', 'everyth gross', 'love place', 'great servic food', 'first bathroom locat dirti seat cover not replenish plain yucki', 'burger got gold standard burger kind disappoint', 'omg food delicioso', 'noth authent place', 'spaghetti noth special whatsoev', 'dish salmon best great', 'veget fresh sauc feel like authent thai', 'worth drive tucson', 'select probabl worst seen vega none', 'pretti good beer select', 'place like chipotl better', 'classi warm atmospher fun fresh appet succul steak basebal steak', 'star brick oven bread app', 'eaten multipl time time food delici', 'sat anoth ten minut final gave left', 'terribl', 'everyon treat equal special', 'take min pancak egg', 'delici', 'good side staff genuin pleasant enthusiast real treat', 'sadli gordon ramsey steak place shall sharpli avoid next trip vega', 'alway even wonder food delici', 'best fish ever life', 'bathroom next door nice', 'buffet small food offer bland', 'outstand littl restaur best food ever tast', 'pretti cool would say', 'definit turn doubt back unless someon els buy', 'server great job handl larg rowdi tabl', 'find wast food despic food', 'wife lobster bisqu soup lukewarm', 'would come back sushi crave vega', 'staff great ambianc great', 'deserv star', 'left stomach ach felt sick rest day', 'drop ball', 'dine space tini elegantli decor comfort', 'custom order way like usual eggplant green bean stir fri love', 'bean rice mediocr best', 'best taco town far', 'took back money got outta', 'interest part town place amaz', 'rude inconsider manag', 'staff not friendli wait time serv horribl one even say hi first minut', 'back', 'great dinner', 'servic outshin definit recommend halibut', 'food terribl', 'never ever go back told mani peopl happen', 'recommend unless car break 
front starv', 'come back everi time vega', 'place deserv one star food', 'disgrac', 'def come back bowl next time', 'want healthi authent ethic food tri place', 'continu come ladi night andddd date night highli recommend place anyon area', 'sever time past experi alway great', 'walk away stuf happi first vega buffet experi', 'servic excel price pretti reason consid vega locat insid crystal shop mall aria', 'summar food incred nay transcend noth bring joy quit like memori pneumat condiment dispens', 'probabl one peopl ever go ian not like', 'kid pizza alway hit lot great side dish option kiddo', 'servic perfect famili atmospher nice see', 'cook perfect servic impecc', 'one simpli disappoint', 'overal disappoint qualiti food bouchon', 'account know get screw', 'great place eat remind littl mom pop shop san francisco bay area', 'today first tast buldogi gourmet hot dog tell ever thought possibl', 'left frustrat', 'definit soon', 'food realli good got full petti fast', 'servic fantast', 'total wast time', 'know kind best ice tea', 'come hungri leav happi stuf', 'servic give star', 'assur disappoint', 'take littl bad servic food suck', 'gave tri eat crust teeth still sore', 'complet gross', 'realli enjoy eat', 'first time go think quickli becom regular', 'server nice even though look littl overwhelm need stay profession friendli end', 'dinner companion told everyth fresh nice textur tast', 'ground right next tabl larg smear step track everywher pile green bird poop', 'furthermor even find hour oper websit', 'tri like place time think done', 'mistak', 'complaint', 'serious good pizza expert connisseur topic', 'waiter jerk', 'strike want rush', 'nicest restaur owner ever come across', 'never come', 'love biscuit', 'servic quick friendli', 'order appet took minut pizza anoth minut', 'absolutley fantast', 'huge awkward lb piec cow th gristl fat', 'definit come back', 'like steiner dark feel like bar', 'wow spici delici', 'not familiar check', 'take busi dinner dollar elsewher', 'love go back', 'anyway fs restaur wonder breakfast lunch', 'noth special', 'day week differ deal delici', 'not mention combin pear almond bacon big winner', 'not back', 'sauc tasteless', 'food delici spici enough sure ask spicier prefer way', 'ribey steak cook perfectli great mesquit flavor', 'think go back anytim soon', 'food gooodd', 'far sushi connoisseur definit tell differ good food bad food certainli bad food', 'insult', 'last time lunch bad', 'chicken wing contain driest chicken meat ever eaten', 'food good enjoy everi mouth enjoy relax venu coupl small famili group etc', 'nargil think great', 'best tater tot southwest', 'love place', 'definit not worth paid', 'vanilla ice cream creami smooth profiterol choux pastri fresh enough', 'im az time new spot', 'manag worst', 'insid realli quit nice clean', 'food outstand price reason', 'think run back carli anytim soon food', 'due fact took minut acknowledg anoth minut get food kept forget thing', 'love margarita', 'first vega buffet not disappoint', 'good though', 'one note ventil could use upgrad', 'great pork sandwich', 'wast time', 'total letdown would much rather go camelback flower shop cartel coffe', 'third chees friend burger cold', 'enjoy pizza brunch', 'steak well trim also perfectli cook', 'group claim would handl us beauti', 'love', 'ask bill leav without eat bring either', 'place jewel la vega exactli hope find nearli ten year live', 'seafood limit boil shrimp crab leg crab leg definit not tast fresh', 'select food not best', 'delici absolut back', 'small 
famili restaur fine dine establish', 'toro tartar cavier extraordinari like thinli slice wagyu white truffl', 'dont think back long time', 'attach ga station rare good sign', 'awesom', 'back mani time soon', 'menu much good stuff could not decid', 'wors humili worker right front bunch horribl name call', 'conclus fill meal', 'daili special alway hit group', 'tragedi struck', 'pancak also realli good pretti larg', 'first crawfish experi delici', 'monster chicken fri steak egg time favorit', 'waitress sweet funni', 'also tast mom multi grain pumpkin pancak pecan butter amaz fluffi delici', 'rather eat airlin food serious', 'cant say enough good thing place', 'ambianc incred', 'waitress manag friendli', 'would not recommend place', 'overal impress noca', 'gyro basic lettuc', 'terribl servic', 'thoroughli disappoint', 'much pasta love homemad hand made pasta thin pizza', 'give tri happi', 'far best cheesecurd ever', 'reason price also', 'everyth perfect night', 'food good typic bar food', 'drive get', 'first glanc love bakeri cafe nice ambianc clean friendli staff', 'anyway not think go back', 'point finger item menu order disappoint', 'oh thing beauti restaur', 'gone go', 'greasi unhealthi meal', 'first time might last', 'burger amaz', 'similarli deliveri man not say word apolog food minut late', 'way expens', 'sure order dessert even need pack go tiramisu cannoli die', 'first time wait next', 'bartend also nice', 'everyth good tasti', 'place two thumb way', 'best place vega breakfast check sat sun', 'love authent mexican food want whole bunch interest yet delici meat choos need tri place', 'terribl manag', 'excel new restaur experienc frenchman', 'zero star would give zero star', 'great steak great side great wine amaz dessert', 'worst martini ever', 'steak shrimp opinion best entre gc', 'opportun today sampl amaz pizza', 'wait thirti minut seat although vacant tabl folk wait', 'yellowtail carpaccio melt mouth fresh', 'tri go back even empti', 'go eat potato found stranger hair', 'spici enough perfect actual', 'last night second time dine happi decid go back', 'not even hello right', 'dessert bit strang', 'boyfriend came first time recent trip vega could not pleas qualiti food servic', 'realli recommend place go wrong donut place', 'nice ambianc', 'would recommend save room', 'guess mayb went night disgrac', 'howev recent experi particular locat not good', 'know not like restaur someth', 'avoid establish', 'think restaur suffer not tri hard enough', 'tapa dish delici', 'heart place', 'salad bland vinegrett babi green heart palm', 'two felt disgust', 'good time', 'believ place great stop huge belli hanker sushi', 'gener portion great tast', 'never go back place never ever recommend place anyon', 'server went back forth sever time not even much help', 'food delici', 'hour serious', 'consid theft', 'eew locat need complet overhaul', 'recent wit poor qualiti manag toward guest well', 'wait wait wait', 'also came back check us regularli excel servic', 'server super nice check us mani time', 'pizza tast old super chewi not good way', 'swung give tri deepli disappoint', 'servic good compani better', 'staff also friendli effici', 'servic fan quick serv nice folk', 'boy sucker dri', 'rate', 'look authent thai food go els', 'steak recommend', 'pull car wait anoth minut acknowledg', 'great food great servic clean friendli set', 'assur back', 'hate thing much cheap qualiti black oliv', 'breakfast perpar great beauti present giant slice toast lightli dust powder sugar', 'kid play area nasti', 'great 
place fo take eat', 'waitress friendli happi accomod vegan veggi option', 'omg felt like never eaten thai food dish', 'extrem crumbi pretti tasteless', 'pale color instead nice char flavor', 'crouton also tast homemad extra plu', 'got home see driest damn wing ever', 'regular stop trip phoenix', 'realli enjoy crema caf expand even told friend best breakfast', 'not good money', 'miss wish one philadelphia', 'got sit fairli fast end wait minut place order anoth minut food arriv', 'also best chees crisp town', 'good valu great food great servic', 'ask satisfi meal', 'food good', 'awesom', 'want leav', 'made drive way north scottsdal not one bit disappoint', 'not eat', 'owner realli realli need quit soooooo cheap let wrap freak sandwich two paper not one', 'check place coupl year ago not impress', 'chicken got definit reheat ok wedg cold soggi', 'sorri not get food anytim soon', 'absolut must visit', 'cow tongu cheek taco amaz', 'friend not like bloodi mari', 'despit hard rate busi actual rare give star', 'realli want make experi good one', 'not return', 'chicken pho tast bland', 'disappoint', 'grill chicken tender yellow saffron season', 'drive thru mean not want wait around half hour food somehow end go make us wait wait', 'pretti awesom place', 'ambienc perfect', 'best luck rude non custom servic focus new manag', 'grandmoth make roast chicken better one', 'ask multipl time wine list time ignor went hostess got one', 'staff alway super friendli help especi cool bring two small boy babi', 'four star food guy blue shirt great vibe still let us eat', 'roast beef sandwich tast realli good', 'even drastic sick', 'high qualiti chicken chicken caesar salad', 'order burger rare came done', 'promptli greet seat', 'tri go lunch madhous', 'proven dead wrong sushi bar not qualiti great servic fast food impecc', 'wait hour seat not greatest mood', 'good joint', 'macaron insan good', 'not eat', 'waiter attent friendli inform', 'mayb cold would somewhat edibl', 'place lot promis fail deliv', 'bad experi', 'mistak', 'food averag best', 'great food', 'go back anytim soon', 'disappoint order big bay plater', 'great place relax awesom burger beer', 'perfect sit famili meal get togeth friend', 'not much flavor poorli construct', 'patio seat comfort', 'fri rice dri well', 'hand favorit italian restaur', 'scream legit book somethat also pretti rare vega', 'not fun experi', 'atmospher great love duo violinist play song request', 'person love hummu pita baklava falafel baba ganoush amaz eggplant', 'conveni sinc stay mgm', 'owner super friendli staff courteou', 'great', 'eclect select', 'sweet potato tot good onion ring perfect close', 'staff attent', 'chef gener time even came around twice take pictur', 'owner use work nobu place realli similar half price', 'googl mediocr imagin smashburg pop', 'dont go', 'promis disappoint', 'sushi lover avoid place mean', 'great doubl cheeseburg', 'awesom servic food', 'fantast neighborhood gem', 'wait go back', 'plantain worst ever tast', 'great place highli recommend', 'servic slow not attent', 'gave star give star', 'staff spend time talk', 'dessert panna cotta amaz', 'good food great atmospher', 'damn good steak', 'total brunch fail', 'price reason flavor spot sauc home made slaw not drench mayo', 'decor nice piano music soundtrack pleasant', 'steak amaz rge fillet relleno best seafood plate ever', 'good food good servic', 'absolut amaz', 'probabl back honest', 'definit back', 'sergeant pepper beef sandwich auju sauc excel sandwich well', 'hawaiian breez mango magic pineappl 
delight smoothi tri far good', 'went lunch servic slow', 'much say place walk expect amaz quickli disappoint', 'mortifi', 'needless say never back', 'anyway food definit not fill price pay expect', 'chip came drip greas mostli not edibl', 'realli impress strip steak', 'go sinc everi meal awesom', 'server nice attent serv staff', 'cashier friendli even brought food', 'work hospit industri paradis valley refrain recommend cibo longer', 'atmospher fun', 'would not recommend other', 'servic quick even go order like like', 'mean realli get famou fish chip terribl', 'said mouth belli still quit pleas', 'not thing', 'thumb', 'read pleas go', 'love grill pizza remind legit italian pizza', 'pro larg seat area nice bar area great simpl drink menu best brick oven pizza homemad dough', 'realli nice atmospher', 'tonight elk filet special suck', 'one bite hook', 'order old classic new dish go time sore disappoint everyth', 'cute quaint simpl honest', 'chicken delici season perfect fri outsid moist chicken insid', 'food great alway compliment chef', 'special thank dylan recommend order yummi tummi', 'awesom select beer', 'great food awesom servic', 'one nice thing ad gratuiti bill sinc parti larger expect tip', 'fli appl juic fli', 'han nan chicken also tasti', 'servic thought good', 'food bare lukewarm must sit wait server bring us', 'ryan bar definit one edinburgh establish revisit', 'nicest chines restaur', 'overal like food servic', 'also serv indian naan bread hummu spici pine nut sauc world', 'probabl never come back recommend', 'friend pasta also bad bare touch', 'tri airport experi tasti food speedi friendli servic', 'love decor chines calligraphi wall paper', 'never anyth complain', 'restaur clean famili restaur feel', 'way fri', 'not sure long stood long enough begin feel awkwardli place', 'open sandwich impress not good way', 'not back', 'warm feel servic felt like guest special treat', 'extens menu provid lot option breakfast', 'alway order vegetarian menu dinner wide array option choos', 'watch price inflat portion get smaller manag attitud grow rapidli', 'wonder lil tapa ambienc made feel warm fuzzi insid', 'got enjoy seafood salad fabul vinegrett', 'wonton thin not thick chewi almost melt mouth', 'level spici perfect spice whelm soup', 'sat right time server get go fantast', 'main thing enjoy crowd older crowd around mid', 'side town definit spot hit', 'wait minut get drink longer get arepa', 'great place eat', 'jalapeno bacon soooo good', 'servic poor that nice', 'food good servic good price good', 'place not clean food oh stale', 'chicken dish ok beef like shoe leather', 'servic beyond bad', 'happi', 'tast like dirt', 'one place phoenix would defin go back', 'block amaz', 'close hous low key non fanci afford price good food', 'hot sour egg flower soup absolut star', 'sashimi poor qualiti soggi tasteless', 'great time famili dinner sunday night', 'food not tasti not say real tradit hunan style', 'bother slow servic', 'flair bartend absolut amaz', 'frozen margarita way sugari tast', 'good order twice', 'nutshel restaraunt smell like combin dirti fish market sewer', 'girlfriend veal bad', 'unfortun not good', 'pretti satifi experi', 'join club get awesom offer via email', 'perfect someon like beer ice cold case even colder', 'bland flavorless good way describ bare tepid meat', 'chain fan beat place easili', 'nacho must', 'not come back', 'mani word say place everyth pretti well', 'staff super nice quick even crazi crowd downtown juri lawyer court staff', 'great atmospher friendli fast 
servic', 'receiv pita huge lot meat thumb', 'food arriv meh', 'pay hot dog fri look like came kid meal wienerschnitzel not idea good meal', 'classic main lobster roll fantast', 'brother law work mall ate day guess sick night', 'good go review place twice herea tribut place tribut event held last night', 'chip salsa realli good salsa fresh', 'place great', 'mediocr food', 'get insid impress place', 'super pissd', 'servic super friendli', 'sad littl veget overcook', 'place nice surpris', 'golden crispi delici', 'high hope place sinc burger cook charcoal grill unfortun tast fell flat way flat', 'could eat bruschetta day devin', 'not singl employe came see ok even need water refil final serv us food', 'lastli mozzarella stick best thing order', 'first time ever came amaz experi still tell peopl awesom duck', 'server neglig need made us feel unwelcom would not suggest place', 'servic terribl though', 'place overpr not consist boba realli overpr', 'pack', 'love place', 'say dessert yummi', 'food terribl', 'season fruit fresh white peach pure', 'kept get wors wors offici done', 'place honestli blown', 'definit would not eat', 'not wast money', 'love put food nice plastic contain oppos cram littl paper takeout box', 'cr pe delic thin moist', 'aw servic', 'ever go', 'food qualiti horribl', 'price think place would much rather gone', 'servic fair best', 'love sushi found kabuki price hip servic', 'favor stay away dish', 'poor servic', 'one tabl thought food averag worth wait', 'best servic food ever maria server good friendli made day', 'excel', 'paid bill not tip felt server terribl job', 'lunch great experi', 'never bland food surpris consid articl read focus much spice flavor', 'food way overpr portion fuck small', 'recent tri caballero back everi week sinc', 'buck head realli expect better food', 'food came good pace', 'ate twice last visit especi enjoy salmon salad', 'back', 'could not believ dirti oyster', 'place deserv star', 'would not recommend place', 'fact go round star awesom', 'disbelief dish qualifi worst version food ever tast', 'bad day not low toler rude custom servic peopl job nice polit wash dish otherwis', 'potato great biscuit', 'probabl would not go', 'flavor perfect amount heat', 'price reason servic great', 'wife hate meal coconut shrimp friend realli not enjoy meal either', 'fella got huevo ranchero look appeal', 'went happi hour great list wine', 'may say buffet pricey think get pay place get quit lot', 'probabl come back', 'worst food servic', 'place pretti good nice littl vibe restaur', 'talk great custom servic cours back', 'hot dish not hot cold dish close room temp watch staff prepar food bare hand glove everyth deep fri oil', 'love fri bean', 'alway pleasur deal', 'plethora salad sandwich everyth tri get seal approv', 'place awesom want someth light healthi summer', 'sushi strip place go', 'servic great even manag came help tabl', 'feel dine room colleg cook cours high class dine servic slow best', 'start review two star edit give one', 'worst sushi ever eat besid costco', 'excel restaur highlight great servic uniqu menu beauti set', 'boyfriend sat bar complet delight experi', 'weird vibe owner', 'hardli meat', 'better bagel groceri store', 'go place gyro', 'love owner chef one authent japanes cool dude', 'burger good pizza use amaz doughi flavorless', 'found six inch long piec wire salsa', 'servic terribl food mediocr', 'defin enjoy', 'order albondiga soup warm tast like tomato soup frozen meatbal', 'three differ occas ask well done medium well three time got 
bloodiest piec meat plate', 'two bite refus eat anymor', 'servic extrem slow', 'minut wait got tabl', 'serious killer hot chai latt', 'allergi warn menu waitress absolut clue meal not contain peanut', 'boyfriend tri mediterranean chicken salad fell love', 'rotat beer tap also highlight place', 'price bit concern mellow mushroom', 'worst thai ever', 'stay vega must get breakfast least', 'want first say server great perfect servic', 'pizza select good', 'strawberri tea good', 'highli unprofession rude loyal patron', 'overal great experi', 'spend money elsewher', 'regular toast bread equal satisfi occasion pat butter mmmm', 'buffet bellagio far anticip', 'drink weak peopl', 'order not correct', 'also feel like chip bought not made hous', 'disappoint dinner went elsewher dessert', 'chip sal amaz', 'return', 'new fav vega buffet spot', 'serious cannot believ owner mani unexperienc employe run around like chicken head cut', 'sad', 'felt insult disrespect could talk judg anoth human like', 'call steakhous properli cook steak understand', 'not impress concept food', 'thing crazi guacamol like pur ed', 'realli noth postino hope experi better', 'got food poison buffet', 'brought fresh batch fri think yay someth warm', 'hilari yummi christma eve dinner rememb biggest fail entir trip us', 'needless say go back anytim soon', 'place disgust', 'everi time eat see care teamwork profession degre', 'ri style calamari joke', 'howev much garlic fondu bare edibl', 'could bare stomach meal complain busi lunch', 'bad lost heart finish', 'also took forev bring us check ask', 'one make scene restaur get definit lost love one', 'disappoint experi', 'food par denni say not good', 'want wait mediocr food downright terribl servic place', 'waaaaaayyyyyyyyyi rate say', 'go back', 'place fairli clean food simpli worth', 'place lack style', 'sangria half glass wine full ridicul', 'bother come', 'meat pretti dri slice brisket pull pork', 'build seem pretti neat bathroom pretti trippi eat', 'equal aw', 'probabl not hurri go back', 'slow seat even reserv', 'not good stretch imagin', 'cashew cream sauc bland veget undercook', 'chipolt ranch dip saus tasteless seem thin water heat', 'bit sweet not realli spici enough lack flavor', 'disappoint', 'place horribl way overpr', 'mayb vegetarian fare twice thought averag best', 'busi know', 'tabl outsid also dirti lot time worker not alway friendli help menu', 'ambianc not feel like buffet set douchey indoor garden tea biscuit', 'con spotti servic', 'fri not hot neither burger', 'came back cold', 'food came disappoint ensu', 'real disappoint waiter', 'husband said rude not even apolog bad food anyth', 'reason eat would fill night bing drink get carb stomach', 'insult profound deuchebaggeri go outsid smoke break serv solidifi', 'someon order two taco think may part custom servic ask combo ala cart', 'quit disappoint although blame need place door', 'rave review wait eat disappoint', 'del taco pretti nasti avoid possibl', 'not hard make decent hamburg', 'like', 'hell go back', 'gotten much better servic pizza place next door servic receiv restaur', 'know big deal place back ya', 'immedi said want talk manag not want talk guy shot firebal behind bar', 'ambianc much better', 'unfortun set us disapppoint entre', 'food good', 'server suck wait correct server heimer suck', 'happen next pretti put', 'bad caus know famili own realli want like place', 'overpr get', 'vomit bathroom mid lunch', 'kept look time soon becom minut yet still food', 'place eat circumst would ever return top list', 
'start tuna sashimi brownish color obvious fresh', 'food averag', 'sure beat nacho movi would expect littl bit come restaur', 'ha long bay bit flop', 'problem charg sandwich bigger subway sub offer better amount veget', 'shrimp unwrap live mile brushfir liter ice cold', 'lack flavor seem undercook dri', 'realli impress place close', 'would avoid place stay mirag', 'refri bean came meal dri crusti food bland', 'spend money time place els', 'ladi tabl next us found live green caterpillar salad', 'present food aw', 'tell disappoint', 'think food flavor textur lack', 'appetit instantli gone', 'overal not impress would not go back', 'whole experi underwhelm think go ninja sushi next time', 'wast enough life pour salt wound draw time took bring check']\n"
]
],
[
[
"## Creating the Bag of model",
"_____no_output_____"
]
],
[
[
"from sklearn.feature_extraction.text import CountVectorizer\r\ncv = CountVectorizer(max_features = 1500) #Most frequent coming words will be removed from the sparse matrixx \r\nx = cv.fit_transform(corpus).toarray()\r\ny = dataset.iloc[:,-1].values",
"_____no_output_____"
],
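[
"# Added for illustration (not in the original notebook): a quick look at the\r\n# vocabulary the CountVectorizer above actually kept. It only uses the `cv` and `x`\r\n# objects defined above; `vocabulary_` is a plain dict attribute, so this does not\r\n# depend on the scikit-learn version.\r\nprint(len(cv.vocabulary_)) # number of retained tokens (at most 1500)\r\nprint(sorted(cv.vocabulary_)[:20]) # a few of the retained tokens, alphabetically\r\nprint(x.shape) # (number of reviews, number of features)",
"_____no_output_____"
],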
[
"len(x[0])",
"_____no_output_____"
]
],
[
[
"## Splitting the Dataset into the training set and test set",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split\r\nx_train,x_test,y_train,y_test = train_test_split(x,y,test_size=0.2,random_state = 0)\r\n",
"_____no_output_____"
]
],
[
[
"## Training the Naive Bayes Model on the Training set",
"_____no_output_____"
]
],
[
[
"from sklearn.naive_bayes import GaussianNB\r\nclassifier = GaussianNB()\r\nclassifier.fit(x_train,y_train)",
"_____no_output_____"
],
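[
"# Optional aside, added for illustration and not part of the original notebook:\r\n# for bag-of-words count features, MultinomialNB is often a better match than\r\n# GaussianNB. A minimal sketch using the same train/test split as above; the\r\n# variable name nb_alt is illustration-only.\r\nfrom sklearn.naive_bayes import MultinomialNB\r\nnb_alt = MultinomialNB()\r\nnb_alt.fit(x_train, y_train)\r\nprint(nb_alt.score(x_test, y_test)) # accuracy of the alternative model on the test set",
"_____no_output_____"
],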
[
"y_pred = classifier.predict(x_test)\r\nprint(np.concatenate((y_pred.reshape(len(y_pred),1),y_test.reshape(len(y_test),1)),1))",
"[[1 0]\n [1 0]\n [1 0]\n [0 0]\n [0 0]\n [1 0]\n [1 1]\n [1 0]\n [1 0]\n [1 1]\n [1 1]\n [1 1]\n [1 0]\n [1 1]\n [1 1]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 1]\n [1 1]\n [1 0]\n [1 0]\n [0 1]\n [1 1]\n [1 1]\n [1 1]\n [0 0]\n [1 1]\n [1 1]\n [1 1]\n [1 1]\n [1 1]\n [0 0]\n [1 0]\n [0 0]\n [1 0]\n [1 1]\n [1 1]\n [1 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [1 0]\n [1 0]\n [0 0]\n [0 0]\n [1 1]\n [1 1]\n [1 1]\n [1 1]\n [1 0]\n [0 0]\n [1 1]\n [1 1]\n [0 0]\n [1 1]\n [1 0]\n [0 0]\n [1 0]\n [1 0]\n [1 1]\n [0 0]\n [1 1]\n [1 1]\n [1 1]\n [1 0]\n [1 1]\n [1 1]\n [1 1]\n [1 1]\n [0 0]\n [1 0]\n [1 1]\n [0 1]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [1 1]\n [0 0]\n [1 1]\n [1 1]\n [1 0]\n [0 0]\n [1 1]\n [1 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [1 0]\n [1 1]\n [1 0]\n [1 1]\n [1 1]\n [1 0]\n [0 1]\n [1 1]\n [1 1]\n [1 0]\n [0 1]\n [1 0]\n [1 1]\n [1 1]\n [0 0]\n [0 1]\n [0 1]\n [1 1]\n [0 0]\n [1 0]\n [1 1]\n [0 0]\n [1 1]\n [1 1]\n [1 1]\n [1 1]\n [1 1]\n [0 0]\n [1 1]\n [1 0]\n [0 0]\n [0 0]\n [1 1]\n [1 0]\n [0 0]\n [1 1]\n [1 0]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [1 1]\n [1 1]\n [1 1]\n [1 1]\n [1 0]\n [0 1]\n [1 1]\n [1 1]\n [0 0]\n [1 0]\n [0 0]\n [1 0]\n [1 1]\n [1 1]\n [1 1]\n [1 1]\n [0 1]\n [1 1]\n [1 1]\n [1 0]\n [0 0]\n [1 1]\n [1 1]\n [1 1]\n [1 0]\n [1 0]\n [0 0]\n [0 1]\n [1 1]\n [0 0]\n [0 0]\n [1 0]\n [0 0]\n [0 0]\n [0 1]\n [0 0]\n [1 1]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [0 0]\n [0 1]\n [1 1]\n [0 0]\n [0 0]\n [1 0]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [1 1]\n [0 0]\n [1 0]\n [1 0]\n [1 1]\n [0 0]\n [1 1]\n [1 1]\n [1 0]\n [1 1]]\n"
]
],
[
[
"## Making the confusion matrix",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import confusion_matrix, accuracy_score\r\ncn = confusion_matrix(y_test,y_pred)\r\nprint(cn)\r\naccuracy_score(y_test,y_pred)",
"[[55 42]\n [12 91]]\n"
],
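[
"# Added for illustration: precision, recall and F1 for the positive class,\r\n# computed from the same y_test / y_pred used for the confusion matrix above.\r\nfrom sklearn.metrics import precision_score, recall_score, f1_score\r\nprint('precision:', precision_score(y_test, y_pred))\r\nprint('recall:', recall_score(y_test, y_pred))\r\nprint('f1:', f1_score(y_test, y_pred))",
"_____no_output_____"
],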
[
"",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4a075faf1f3a1df4d8080015028df1f739d72ecc
| 419,025 |
ipynb
|
Jupyter Notebook
|
p8_notebook02.ipynb
|
nalron/project_electric_cars_france2040
|
21d340ab96c3a853758544b5575b443bf675c634
|
[
"MIT"
] | 1 |
2020-08-25T08:15:08.000Z
|
2020-08-25T08:15:08.000Z
|
p8_notebook02.ipynb
|
nalron/project_electric_cars_france2040
|
21d340ab96c3a853758544b5575b443bf675c634
|
[
"MIT"
] | null | null | null |
p8_notebook02.ipynb
|
nalron/project_electric_cars_france2040
|
21d340ab96c3a853758544b5575b443bf675c634
|
[
"MIT"
] | 2 |
2022-03-12T12:30:49.000Z
|
2022-03-12T12:36:16.000Z
| 52.014027 | 44,620 | 0.577853 |
[
[
[
"# 2040 le cap des 100% de voitures électriques \n*Etude data - Projet 8 - @Nalron (août 2020)*\\\n*Traitement des données sur Jupyter Notebook (Distribution Anaconda)*\\\n*Etude réalisée en langage Python*\n\nVisualisation des Tableaux de bord: [Tableau Public](https://public.tableau.com/profile/nalron#!/vizhome/ElectricCarsFrance2040/Vuedensemble)",
"_____no_output_____"
],
[
"---",
"_____no_output_____"
],
[
"# Rappel des missions\n\n\n### [Mission 1 : Positionnement de la voiture électrique en France](https://github.com/nalron/project_electric_cars_france2040/blob/french_version/p8_notebook01.ipynb)\nÉvolution du parc automobile électrique à 2 ans.<br>\nIdentification et classification des inégalités locales des voitures électriques.<br>\nAutonomie et consommation moyenne d'une voiture électrique.\n\n### [Mission 2 : Besoin des déploiements en IRVE](https://github.com/nalron/project_electric_cars_france2040/blob/french_version/p8_notebook02.ipynb)\nÉvolution du nombre de points de recharge disponibles ouverts au public.<br>\nAnalyse de la répartition par borne de recharge, type de prise et catégorie d’aménageur.<br>\nUtilisation des ratios pour le dimensionnement d'un maillage de taille optimale.<br>\nPrévision du nombre de PDC à horizon 2025.<br>\n\n### [Mission 3 : Appel de charge au réseau électrique](https://github.com/nalron/project_electric_cars_france2040/blob/french_version/p8_notebook03.ipynb)\nAnalyse de la consommation d'électricité en France et des filières de production.<br>\nProfiler un pic d’utilisation des bornes de recharge.<br>\nCourbe de charge réseau électrique pour répondre aux nouveaux modes de consommation.",
"_____no_output_____"
],
[
"---",
"_____no_output_____"
]
],
[
[
"#Import des principales librairies Python\nimport pandas as pd\nimport plotly.figure_factory as ff\nimport requests\nimport seaborn as sns\n%pylab inline",
"Populating the interactive namespace from numpy and matplotlib\n"
]
],
[
[
"## Mission 2 : Besoin des déploiements en IRVE<a id=\"borne\">",
"_____no_output_____"
],
[
"__`Traitement des données sur les points de charge par typologie`__",
"_____no_output_____"
],
[
"Ce jeu de données présente le nombre total de points de charge en France continentale.\n\nLes points de charge sont matérialisés par un socle de prise sur lequel un véhicule électrique peut potentiellement se brancher. Une borne de recharge peut comporter un ou plusieurs points de charge. Les données présentées segmentent les points de charge en trois typologies :\n\n- Les points de charge « accessible au public » correspondent aux points de charge accessibles dans les commerces (supermarché, concession automobile…), parking, sites publics ou stations en voirie.\n- Les points de charge « particulier » sont des points de charges privés localisés dans le résidentiel collectif (immeubles, copropriétés…) ou individuel (pavillons).\n- Les points de charge « société » sont des points de charge privés localisés dans les sociétés et réservés à l’activité de la société ou à la recharge des véhicules électriques des employés.\n\nLe jeu de données a été élaboré par Enedis à partir de ses données propres combinées avec certaines données externes, issues des sociétés Girève et AAA Data. Les données sur les points de charge « particulier » et « société » sont une reconstitution de l’existant construite par Enedis sur la base d’hypothèses. Ces hypothèses s’appuient sur l’évolution du marché du véhicule électrique.",
"_____no_output_____"
]
],
[
[
"#Chargement du jeu de données \"nombre-de-points-de-charge-par-typologie.csv\"\nirve_type = pd.read_csv('p8_data/nombre-de-points-de-charge-par-typologie.csv', sep=';')\ndisplay(irve_type.shape)\ndisplay(irve_type.head())",
"_____no_output_____"
],
[
"#Analyse des valeurs de la variable 'Nombre'\nirve_type['Nombre'].unique()",
"_____no_output_____"
],
[
"#plt.figure(figsize=(12,3))\nirve_type.boxplot(column= 'Nombre', by='Année')\nplt.show()",
"_____no_output_____"
]
],
[
[
"Il ne semble pas avoir de valeur aberrante dans les valeurs de la variable 'Nombre'. Pour rappel, ici nous avons les points de charge électriques quantifiés par année et trimestre.",
"_____no_output_____"
]
],
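[
[
"#Sketch added for illustration (not in the original notebook): a numeric summary per year\n#to back up the boxplot reading above. At this point irve_type is still in its original\n#long format, with the 'Année' and 'Nombre' columns.\nirve_type.groupby('Année')['Nombre'].describe()",
"_____no_output_____"
]
],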
[
[
"#Mise en forme plus logique des données selon l'année et le trimestre\nirve_type = irve_type.pivot_table(index=['Année', 'Trimestre'], \n columns='Typologie', \n values='Nombre').reset_index()\nirve_type.columns.name = None\nirve_type",
"_____no_output_____"
],
[
"#Calcul des évolutions en % entre chaque trimestre\nfor i, row in irve_type.iterrows():\n if i+1 < len(irve_type):\n number_public = ((irve_type.loc[i+1, 'Accessible au public'] \n - irve_type.loc[i, 'Accessible au public']) / (irve_type.loc[i, 'Accessible au public'])*100)\n irve_type.loc[i+1, '%Public'] = round(number_public, 2)\n if i+1 < len(irve_type):\n number_particulier = ((irve_type.loc[i+1, 'Particulier'] \n - irve_type.loc[i, 'Particulier']) / (irve_type.loc[i, 'Particulier'])*100)\n irve_type.loc[i+1, '%Particulier'] = round(number_particulier, 2)\n if i+1 < len(irve_type):\n number_societe = ((irve_type.loc[i+1, 'Société'] \n - irve_type.loc[i, 'Société']) / (irve_type.loc[i, 'Société'])*100)\n irve_type.loc[i+1, '%Société'] = round(number_societe, 2)\n else :\n irve_type.fillna(0, inplace=True)\n pass\n",
"_____no_output_____"
],
[
"#Modification des Trimestres pour obtenir un Time Series\nirve_type.replace({'T1' : '31-03',\n 'T2' : '30-06',\n 'T3' : '30-09',\n 'T4' : '31-12'},\n inplace=True)\n\nirve_type['Time'] = irve_type['Année'].astype(str)+ str(\"-\")+irve_type['Trimestre']\nirve_type['Time'] = pd.to_datetime(irve_type['Time'], format=\"%Y-%d-%m\")",
"_____no_output_____"
],
[
"#Affichage du dataframe enrichi\nirve_type",
"_____no_output_____"
],
[
"#Affichage des types de données /Variables\nirve_type.dtypes",
"_____no_output_____"
],
[
"#Sauvegarde \nirve_type.to_csv('p8_datatable/irve_type.csv')",
"_____no_output_____"
],
[
"#Analyse des valeurs manquantes du jeu de données \nirve_type.isna().any()",
"_____no_output_____"
],
[
"#Analyse des valeurs doublons du jeu de données \nirve_type.duplicated().any()",
"_____no_output_____"
],
[
"#Années traitées dans ce jeu de données list\nlist(irve_type['Année'].unique())",
"_____no_output_____"
]
],
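[
[
"#Sketch added for illustration: the quarter-on-quarter evolution computed above with an\n#explicit loop can also be obtained with pandas' pct_change(). The first row comes out as\n#NaN here instead of 0, which is the only difference with the loop version.\nirve_type[['Accessible au public', 'Particulier', 'Société']].pct_change().mul(100).round(2).head()",
"_____no_output_____"
]
],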
[
[
"__`Traitement des données sur les bornes de recharge pour vehicules electriques (IRVE)`__",
"_____no_output_____"
],
[
"Ce fichier est une version consolidée des sources suivantes: Stations Tesla, Bornes de la Métropole de Rennes, Bornes dans les Concessions Renault, Bornes Autolib', Plus de Bornes, opérateur en Provence, Compagnie Nationale du Rhône, Magasins E.Leclerc\n\nDonnées ajoutées en décembre 2014: Vincipark/Sodetrel, Grand Lyon, Morbihan Energies\n\nDonnées ajoutées en octobre 2015: Magasins AUCHAN, Concessions NISSAN, Réseau ALTERBASE, SyDEV, Freshmile, EFFIA\n\nDonnées ajoutées en mai 2016: SDE18, SDE24, SDE28, SDE32, MOVeasy, Seine Aval, SIEML, SDESM, Vienne",
"_____no_output_____"
]
],
[
[
"#Chargement du jeu de données \"fichier-consolide-des-bornes-de-recharge-pour-vehicules-electriques-irve\"\nirve = pd.read_csv('p8_data/fichier-consolide-des-bornes-de-recharge-pour-vehicules-electriques-irve.csv', \n sep=';')\ndisplay(irve.shape)\ndisplay(irve.head())",
"_____no_output_____"
]
],
[
[
"Le premier point de contrôle passe par la recherche d'éventuels doublons. Notons que le contexte métier nécessite de la rigueur dans l'interprétation de certaines variables, l'amalgame entre station, borne et point de charge est régulièrement rencontré. Donc, \"id_station\" n'est pas le sous-ensemble le plus approprié à l'identification de doublons, une station de recharge peut avoir plusieurs points de charge, et l'identifiant ne tient pas compte du point de charge. Notons que \"id_pdc\" permet d'obtenir des identifiants uniques pouvant cette fois-ci être pris comme sous-ensemble.",
"_____no_output_____"
]
],
[
[
"#Test de recherche des éventuels doublons à partir de la variable 'id_pdc'\nirve.duplicated(subset='id_pdc').sum()",
"_____no_output_____"
]
],
[
[
"Notons que le fichier mis à disposition sur le site data.gouv.fr annonce plusieurs consolidations selon les années 2014 à 2016 et 2018. Attention, quelques opérateurs comme Tesla, Nissan, Auchan, etc… ne sont plus observés dans la version de juin 2020 et même depuis plusieurs mois. Non pas parce que ces stations de recharge ont été retirées, mais par logique d'uniformisation selon une charte d'utilisation \"Fichiers à destination des aménageurs et opérateurs publics et privés d'infrastructures de recharge pour véhicules électriques\" consultable sur [data.gouv.fr](https://www.data.gouv.fr/fr/datasets/fichiers-pour-les-infrastructures-de-recharge-de-vehicules-electriques/)\n\n<em>Le décret 2017-26 du 12 janvier 2017 fixe les exigences requises pour la configuration des points de recharge à publier sur un nouveau fichier désormais en CSV. L'aménageur, ou l'opérateur désigné le cas échéant, prend les mesures appropriées pour que ces données soient en permanence tenues à jour et rendues publiques sur data.gouv.fr</em>\n\n<u>Dans le cadre de l'étude, les opérateurs (ou principaux opérateurs) identifiés comme manquants seront réintégrés dans l'échantillon.</u>",
"_____no_output_____"
]
],
[
[
"#Combien de stations de recharge (en anglais Charging Station Pool) à Juin 2020?\nirve.id_station.nunique()",
"_____no_output_____"
],
[
"#Combien de bornes de recharge (en anglais Charging Station) à Juin 2020?\nirve.id_pdc.nunique()",
"_____no_output_____"
]
],
[
[
"**Combien de points de charge (en anglais Charging Point ou EVSE) à Juin 2020?**\nSelon la définition de l'AFIREV, le point de charge représente le nombre d'emplacement individuel permettant le stationnement du véhicule pendant le temps de charge, donc le nombre de prises de la borne. Le jeu de données `irve` ne permet pas de le quantifier directement, malgré la présence d'une variable 'nbre_pdc' qui ne représente que la borne et non le nombre de prises. Notons qu'il est nécessaire d'enrichir les données par une estimation des prises de chacune des bornes, ce calcul pourra être réalisé à l'aide de la variable 'type_prise'. <u>Cet enrichissement sera fait plus tard après intégration des opérateurs manquants.</u>",
"_____no_output_____"
],
[
"### Exploitation des opérateurs et aménageurs manquants",
"_____no_output_____"
]
],
[
[
"#Chargement du jeu de données de l'enseigne \"Mobive\"\n#https://www.data.gouv.fr/fr/datasets/infrastructures-de-recharge-pour-vehicules-electriques-mobive-1/\nmobive = pd.read_csv('p8_data/irve-mobive-20200331.csv', sep=';', decimal=\",\")\ndisplay(mobive.shape)\ndisplay(mobive.head())",
"_____no_output_____"
],
[
"#Test de matching des variables avant concaténation\ndisplay(irve.columns)\ndisplay(mobive.columns)",
"_____no_output_____"
],
[
"#Chargement du jeu de données de la grande distribution LECLERC\n#https://www.data.gouv.fr/fr/datasets/localisation-des-bornes-de-recharge-\n#pour-vehicules-electriques-dans-les-magasins-e-leclerc/\nleclerc = pd.read_csv('p8_data/leclerc.csv', sep=';', decimal=\",\")\ndisplay(leclerc.shape)\ndisplay(leclerc.head())",
"_____no_output_____"
],
[
"#Test de matching des variables avant concaténation\ndisplay(irve.columns)\ndisplay(leclerc.columns)",
"_____no_output_____"
],
[
"#Divergences à traiter avant concaténation des données\nleclerc.rename(columns={\n 'nom_station': 'n_amenageur',\n 'nom_porteur': 'n_enseigne',\n 'ID_station': 'id_station',\n 'adresse_station': 'ad_station',\n 'longitude_WSG84': 'Xlongitude',\n 'latitude_WSG84': 'Ylatitude',\n 'type_connecteur': 'type_prise',\n 'type_charge': 'puiss_max'\n }, inplace=True)",
"_____no_output_____"
],
[
"#Remplacement des modalités de la variable 'puiss_max'\nleclerc['puiss_max'] = 22",
"_____no_output_____"
],
[
"#Chargement du jeu de données des bornes de la grande distribution AUCHAN\n#https://www.data.gouv.fr/fr/datasets/reseau-bornes-de-recharge-rapide-auchan/\nauchan = pd.read_csv('p8_data/auchan.csv', sep=';')\ndisplay(auchan.shape)\ndisplay(auchan.head())",
"_____no_output_____"
],
[
"#Fusion des variables relatives à l'adresse de la station\nauchan['ad_station'] = auchan['ADRESSE'] + str(' ') + auchan['CP'].astype(str) + str(' ') + auchan['Unnamed: 5']",
"_____no_output_____"
],
[
"#Renommage des variables à traiter avant concaténation des données\nauchan.rename(columns={\n 'LIEU': 'n_amenageur',\n 'Latitude': 'Ylatitude',\n 'Longitude': 'Xlongitude'\n}, inplace=True)\n\nauchan.drop(columns=['N°', 'ADRESSE', 'CP', 'LIEN CHARGEMAP', 'Dept', \n 'Unnamed: 5', 'Unnamed: 9', 'Unnamed: 10'], inplace=True)",
"_____no_output_____"
],
[
"#Intégration d'une variable 'puiss_max' représentatif de la puissance maximale \n#disponible dans plus de 90% des centres commerciaux AUCHAN\nauchan['puiss_max'] = 50",
"_____no_output_____"
],
[
"#Chargement du jeu de données des bornes des parkings EFFIA\n#https://www.data.gouv.fr/fr/datasets/bornes-de-recharge-pour-vehicules-electriques-parking-effia/\neffia = pd.read_csv('p8_data/effia.csv', sep=';')\ndisplay(effia.shape)\ndisplay(effia.head())",
"_____no_output_____"
],
[
"#Renommage des variables à traiter avant concaténation des données\neffia.rename(columns={\n 'nom_station': 'n_amenageur',\n 'adresse_station': 'ad_station',\n 'latitude_WSG84': 'Ylatitude',\n 'longitude_WSG84': 'Xlongitude', \n 'type_connecteur': 'type_prise',\n 'type_charge': 'puiss_max',\n 'nom_porteur': 'n_enseigne'\n}, inplace=True)\n\neffia.drop(index=0, inplace=True)\neffia.drop(columns=['ID_station', 'observations', 'Unnamed: 11', 'Unnamed: 12', 'Unnamed: 13','Unnamed: 14'], \n inplace=True)",
"_____no_output_____"
],
[
"#Changement de modalité de la variable 'puiss_max'\neffia['puiss_max'] = 3.7",
"_____no_output_____"
],
[
"#Chargement du jeu de données des bornes des parkings VINCI\nvinci = pd.read_csv('p8_data/vincipark.csv', sep=';')\ndisplay(vinci.shape)\ndisplay(vinci.head())",
"_____no_output_____"
],
[
"#Renommage des variables à traiter avant concaténation des données\nvinci.rename(columns={\n 'nom_station': 'n_station',\n 'adresse_station': 'ad_station',\n 'latitude': 'Ylatitude',\n 'longitude': 'Xlongitude',\n 'nom_porteur': 'n_enseigne',\n 'type_connecteur': 'type_prise',\n}, inplace=True)\n\nvinci.drop(columns=['ID_station', 'type_charge'], inplace=True)",
"_____no_output_____"
],
[
"#Chargement du jeu de données des bornes TESLA connecteur Recharge à destination\n#https://www.data.gouv.fr/fr/datasets/recharge-a-destination-tesla/\ntesla = pd.read_csv('p8_data/irve-tesla-destination-charging-20181130.csv', sep=';')\ndisplay(tesla.shape)\ndisplay(tesla.head())",
"_____no_output_____"
],
[
"#Changement de modalité pour la variable 'type_prise'\ntesla['type_prise'] = \"Tesla Type 2\"",
"_____no_output_____"
],
[
"#Remplacement de la modalité 'A Cheda' par 'Tesla'\ntesla['n_amenageur'].replace('A Cheda', 'Tesla', inplace=True)",
"_____no_output_____"
],
[
"#Renommage des variables à traiter avant concaténation des données\ntesla.rename(columns={'Xlatitude': 'Ylatitude'}, inplace=True)\ntesla.drop(columns=['ID_station', 'ID_pdc'], inplace=True)",
"_____no_output_____"
],
[
"#Chargement du jeu de données des bornes TESLA Supercharger\n#https://www.data.gouv.fr/fr/datasets/stations-supercharger-tesla/\ntesla_supercharger = pd.read_csv('p8_data/irve-tesla-supercharger-20181130.csv', sep=';')\ndisplay(tesla_supercharger.shape)\ndisplay(tesla_supercharger.head())",
"_____no_output_____"
],
[
"#Renommage d'une variable à traiter avant concaténation des données\ntesla_supercharger.rename(columns={'accessibilite' : 'accessibilité'}, inplace=True)",
"_____no_output_____"
],
[
"#Changement de modalité pour la variable 'type_prise'\ntesla_supercharger['type_prise'] = \"Tesla Supercharger\"",
"_____no_output_____"
],
[
"#Chargement du jeu de données des bornes des Concessionnaires NISSAN\n#https://www.data.gouv.fr/fr/datasets/reseau-bornes-de-recharge-rapide-concessions-nissan/\nnissan = pd.read_csv('p8_data/nissan.csv', sep=';')\ndisplay(nissan.shape)\ndisplay(nissan.head())",
"_____no_output_____"
],
[
"#Suppression d'une observation NaN\nnissan.drop(index=58, inplace= True)",
"_____no_output_____"
],
[
"#Adaptation de la variable représentative des modalités de l'adresse\nnissan['ad_station'] = nissan['ADRESSE'] + str(' ') + nissan['CP'].astype(str) + str(' ') + nissan['VILLE']",
"_____no_output_____"
],
[
"#Renommage des variables à traiter avant concaténation des données\nnissan.rename(columns={\n 'LIEU': 'n_enseigne',\n 'Type': 'type_prise',\n 'Latitude': 'Ylatitude',\n 'Longitude': 'Xlongitude'\n}, inplace=True)\n\nnissan.drop(columns=['ADRESSE', 'CP', 'Dept', 'VILLE', 'Code concession', 'Unnamed: 8', 'Téléphone', \n 'Directeur Concession Nissan', 'Unnamed: 13', 'Unnamed: 14', 'LIEN CHARGEMAP'], inplace=True)",
"_____no_output_____"
],
[
"#Chargement du jeu de données des bornes des Concessionnaires RENAULT\n#https://www.data.gouv.fr/fr/datasets/reseau-bornes-de-recharge-rapide-concessions-nissan/\nrenault = pd.read_csv('p8_data/renault.csv', sep=';', decimal=\",\")\ndisplay(renault.shape)\ndisplay(renault.head())",
"_____no_output_____"
],
[
"#Renommage des variables à traiter avant concaténation des données\nrenault.rename(columns={\n 'nom_station': 'n_station',\n 'longitude_WSG84': 'Ylatitude',\n 'latitude_WSG84': 'Xlongitude',\n 'nom_porteur': 'n_enseigne',\n 'type_connecteur': 'type_prise',\n 'type_charge':'puiss_max',\n 'observation': 'observations'\n}, inplace=True)\n\nrenault.drop(columns=['ID_station', 'adresse_station', 'nbre_pdc'], inplace=True)",
"_____no_output_____"
],
[
"#Intégration d'une variable 'puiss_max'\nrenault['puiss_max'] = 22",
"_____no_output_____"
],
[
"#Concaténation des jeux de données\nirvePlus = pd.concat([irve, mobive, leclerc, auchan, effia, vinci, tesla, tesla_supercharger, nissan, renault], \n sort=False).reset_index(drop=True)",
"_____no_output_____"
],
[
"#Affichage des 5 premières observations\nirvePlus.head()",
"_____no_output_____"
],
[
"#Affichage du nombre d'observations\n#Ici une observation représente une Borne de recharge\nlen(irvePlus)",
"_____no_output_____"
],
[
"#Analyse des valeurs manquantes\nirvePlus.isna().sum()",
"_____no_output_____"
]
],
[
[
"Les précédentes manipulations font que des valeurs et modalités doivent être manquantes, visibles ci-dessus. Notons que dans le contexte de l'étude, il n'est pas nécessaire d'avoir 100% des données suivant les observations, voyons comment traiter ces NaN. ",
"_____no_output_____"
],
[
"#### Traitement NaN des variables n_amenageur, n_operateur et n_enseigne",
"_____no_output_____"
]
],
[
[
"#Traitement des NaN relatifs aux aménageurs selon l'enseigne\nirvePlus[irvePlus['n_amenageur'].isna()]['n_enseigne'].unique()",
"_____no_output_____"
],
[
"#Boucle permettant le remplacement des valeurs manquantes des aménageurs selon condition\nfor i, row in irvePlus.iterrows():\n if row['n_enseigne'] == 'SIPLEC':\n irvePlus.loc[i, 'n_amenageur'] = 'LECLERC' \n elif row['n_enseigne'] == 'EFFIA':\n irvePlus.loc[i, 'n_amenageur'] = 'EFFIA' \n elif row['n_enseigne'] == 'Sodetrel':\n irvePlus.loc[i, 'n_amenageur'] = 'IZIVIA'\n elif row['n_enseigne'] == 'Concession NISSAN' or row['n_enseigne'] == 'NISSAN WEST EUROPE TRAINING' or row['n_enseigne'] == 'Siège NISSAN France':\n irvePlus.loc[i, 'n_amenageur'] = 'NISSAN'\n elif row['n_enseigne'] == 'Renault':\n irvePlus.loc[i, 'n_amenageur'] = 'RENAULT' \n else :\n pass",
"_____no_output_____"
],
[
"#Traitement des NaN relatifs aux opérateurs selon l'aménageur\nirvePlus[irvePlus['n_operateur'].isna()]['n_amenageur'].unique()",
"_____no_output_____"
],
[
"#Boucle permettant le remplacement des valeurs manquantes des aménageurs selon condition\nfor i, row in irvePlus.iterrows():\n if row['n_amenageur'] == 'LECLERC':\n irvePlus.loc[i, 'n_operateur'] = 'LECLERC'\n elif row['n_amenageur'] == 'AUCHAN ':\n irvePlus.loc[i, 'n_operateur'] = 'AUCHAN'\n elif row['n_amenageur'] == 'EFFIA':\n irvePlus.loc[i, 'n_operateur'] = 'EFFIA'\n elif row['n_amenageur'] == 'IZIVIA':\n irvePlus.loc[i, 'n_operateur'] = 'IZIVIA' \n elif row['n_amenageur'] == 'NISSAN':\n irvePlus.loc[i, 'n_operateur'] = 'NISSAN'\n elif row['n_amenageur'] == 'RENAULT':\n irvePlus.loc[i, 'n_operateur'] = 'RENAULT' \n else :\n pass",
"_____no_output_____"
],
[
"#Traitement des NaN relatifs aux enseignes selon l'opérateur\nirvePlus[irvePlus['n_enseigne'].isna()]['n_operateur'].unique()",
"_____no_output_____"
],
[
"#Boucle permettant le remplacement des valeurs manquantes des enseignes selon condition\nfor i, row in irvePlus.iterrows():\n if row['n_operateur'] == 'CITEOS/FRESHMILE':\n irvePlus.loc[i, 'n_enseigne'] = 'Scame'\n elif row['n_operateur'] == 'New motion':\n irvePlus.loc[i, 'n_enseigne'] = 'New motion'\n elif row['n_operateur'] == 'MOUVELECVAR':\n irvePlus.loc[i, 'n_enseigne'] = 'MOUVELECVAR'\n elif row['n_operateur'] == 'SAINT-LOUIS':\n irvePlus.loc[i, 'n_enseigne'] = 'SAINT-LOUIS' \n elif row['n_operateur'] == 'SPIE':\n irvePlus.loc[i, 'n_enseigne'] = 'SDEY'\n elif row['n_operateur'] == 'AUCHAN':\n irvePlus.loc[i, 'n_enseigne'] = 'AUCHAN' \n else :\n pass",
"_____no_output_____"
]
],
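[
[
"#Sketch added for illustration: the same kind of NaN filling as the loops above, written with\n#a mapping dict and Series.map / fillna instead of iterrows. Shown for n_amenageur only, with\n#a partial (hypothetical) mapping; nothing is written back to irvePlus here.\namenageur_by_enseigne = {'SIPLEC': 'LECLERC', 'EFFIA': 'EFFIA', 'Sodetrel': 'IZIVIA'}\nfilled_preview = irvePlus['n_amenageur'].fillna(irvePlus['n_enseigne'].map(amenageur_by_enseigne))\nfilled_preview.isna().sum()",
"_____no_output_____"
]
],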
[
[
"#### Traitement NaN des variables Xlongitude et Ylatitude",
"_____no_output_____"
]
],
[
[
"#Traitement des deux NaN 'Xlongitude' et 'Ylatitude'\nirvePlus[irvePlus['Xlongitude'].isna()]",
"_____no_output_____"
],
[
"#Intégration manuelle des 4 valeurs \nirvePlus['Xlongitude'][2188] = 4.0811882 \nirvePlus['Xlongitude'][2189] = 4.0811882 \nirvePlus['Ylatitude'][2188] = 46.0822754 \nirvePlus['Ylatitude'][2189] = 46.0822754 ",
"_____no_output_____"
]
],
[
[
"#### Traitement NaN de la variable 'type_prise'",
"_____no_output_____"
]
],
[
[
"#Traitement des valeurs NaN identifiées pour le type de prise\nirvePlus[irvePlus['type_prise'].isna()]['n_operateur'].unique()",
"_____no_output_____"
]
],
[
[
"Globalement le groupe Auchan à équipé ses parkings de bornes Type 2 + CHAdeMO, notons que l'échantillon sera donc complété selon cette hypothèse, hypothèse retenue et préférable devant les NaN.",
"_____no_output_____"
]
],
[
[
"#Boucle permettant le remplacement des valeurs manquantes identifiées ci-dessus\nfor i, row in irvePlus.iterrows():\n if row['n_operateur'] == 'AUCHAN':\n irvePlus.loc[i, 'type_prise'] = 'Type 2 + CHAdeMO'",
"_____no_output_____"
]
],
[
[
"#### Traitement NaN de la variable 'puiss_max'",
"_____no_output_____"
]
],
[
[
"#Traitement des valeurs NaN identifiées pour la puissance max.\n#Le type de prise permet de pouvoir intervenir sur ces valeurs manquantes\nirvePlus[irvePlus['puiss_max'].isna()]['type_prise'].unique()",
"_____no_output_____"
],
[
"#Boucle permettant le remplacement des valeurs manquantes des puissances max. identifiées ci-dessus\nfor i, row in irvePlus.iterrows():\n if row['type_prise'] == 'TE-T3':\n irvePlus.loc[i, 'puiss_max'] = 22\n elif row['type_prise'] == 'TE-T2': \n irvePlus.loc[i, 'puiss_max'] = 22\n elif row['type_prise'] == 'DC Chademo - 44 kWh': \n irvePlus.loc[i, 'puiss_max'] = 44\n elif row['type_prise'] == 'DC Chademo - 44 kWh + \\nAC Type 3 - 43 kWh': \n irvePlus.loc[i, 'puiss_max'] = 44\n else:\n pass",
"_____no_output_____"
],
[
"#Nouvelle situation après ce traitement NaN\nirvePlus.isna().sum()",
"_____no_output_____"
]
],
[
[
"L'enrichissement de l'échantillon de départ `irve` rend l'exploitation de la variable 'id_pdc' obsolète. En effet, la concaténation avec les autres sources de données permet un recensement plus complet du réseau, mais sans pouvoir obtenir une charte d'utilisation commune et complète. En l'occurrence les id des points de charge ne sont plus complets, notons donc qu'il est nécessaire d'intégrer un identifiant unique à cet usage.",
"_____no_output_____"
]
],
[
[
"#Intégration d'un identifiant unique par Point de charge\nirvePlus['id_borne']= np.arange(1, len(irvePlus)+1)",
"_____no_output_____"
]
],
[
[
"Il n'est pas nécessaire de traiter toutes les valeurs NaN, dans le contexte de l'étude les précédents traitements semblent être suffisants. Voyons immédiatement comment enrichir et optimiser ce qui peut l'être, comme par exemple les puissances et les types de prise.",
"_____no_output_____"
],
[
"#### Traitement à des fins d'uniformisation des modalités / valeurs de la variable 'puiss_max'",
"_____no_output_____"
]
],
[
[
"#Affichage des modalités et valeurs de la variable 'puiss_max'\nirvePlus.puiss_max.unique()",
"_____no_output_____"
]
],
[
[
"Difficilement exploitable, on peut comprendre que chaque \"acteur\" à l'origine des fichiers ait pu nommer les puissances selon ses propres codes ou habitudes, mais il est nécessaire de pouvoir clarifier le tout. Notons que l'étude menée est porteuse d'un message plus perceptible quelque soit l'interlocuteur, voyons comment mettre en place un classement des puissances. ",
"_____no_output_____"
]
],
[
[
"#Boucle pour éliminer les modalités dont l'unité 'kva' est mentionnée\nfor x in irvePlus['puiss_max']:\n if x == '36kva':\n irvePlus['puiss_max'].replace(x, '36', inplace=True)\n elif x == '22kva':\n irvePlus['puiss_max'].replace(x, '22', inplace=True)\n elif x == '48kva':\n irvePlus['puiss_max'].replace(x, '48', inplace=True)\n elif x == '43-50':\n irvePlus['puiss_max'].replace(x, '50', inplace=True)\n else:\n pass",
"_____no_output_____"
],
[
"#Recherche des valeurs '0', '0.0' et 60.000\nirvePlus[(irvePlus.puiss_max == '0') | (irvePlus.puiss_max == '0.0') | (irvePlus.puiss_max == '60.000')]",
"_____no_output_____"
],
[
"#Remplacement des valeurs '0', '0.0' et '60.000'\nirvePlus['puiss_max'].replace('0', 22, inplace=True)\nirvePlus['puiss_max'].replace('0.0', 22, inplace=True) \nirvePlus['puiss_max'].replace('60.000', 22, inplace=True)",
"_____no_output_____"
],
[
"#Changement du type de donnée variable 'puiss_max' afin de faciliter son traitement\nirvePlus['puiss_max'] = irvePlus.puiss_max.astype(float)",
"_____no_output_____"
],
[
"#Classification des puissances via une boucle sous condition\nclass_puiss = []\nfor value in irvePlus.puiss_max:\n if value <= 3.7 :\n class_puiss.append('Recharge normale 3,7 kVA')\n elif value > 3.7 and value <=20 :\n class_puiss.append('Recharge accélérée de 3,7 à 20 kVA')\n elif value == 22 :\n class_puiss.append('Recharge accélérée 22 kVA')\n elif value >= 43 and value <= 50 :\n class_puiss.append('Recharge rapide 43 à 50 kVA')\n else :\n class_puiss.append('Recharge haute puissance 100 à 350 kVA')\n\n#Intégration d'une nouvelle variable 'class_puiss'\nirvePlus['class_puiss'] = class_puiss",
"_____no_output_____"
]
],
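[
[
"#Added for illustration: distribution of the power classes built above. A vectorized\n#alternative to the loop would be pd.cut on 'puiss_max', but the explicit loop keeps the\n#business rules (the exact 22 kVA class and the 43-50 kVA range) easier to read.\nirvePlus['class_puiss'].value_counts()",
"_____no_output_____"
]
],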
[
[
"#### Traitement à des fins d'uniformisation des modalités de la variable 'type_prise'",
"_____no_output_____"
]
],
[
[
"irvePlus.type_prise.unique()",
"_____no_output_____"
]
],
[
[
"Le constat reste le même que pour les puissances, les modalités listées ci-dessus sont difficilement exploitables en l'état. Notons qu'il est nécessaire de pouvoir classifier correctement et de manière lisible les connecteurs des bornes.",
"_____no_output_____"
]
],
[
[
"#Création de listes groupant les diverses typologies rencontrées dans l'échantillon \nlist_ef_x1 = ['EF', 'E/F', 'E', 'AC socket']\n\nlist_tesla_supercharger_x1 = ['Tesla Supercharger']\n\nlist_chademo_x1 = ['CHADEMO', 'CAHDEMO', 'CHAdeMO', 'Chademo', 'chademo', 'DC Chademo - 44 kWh', 'CHAdeMO-EU']\n\nlist_t2_x1 = ['T2', 'T2 câble attaché', 'Borne PULSE QC-50 de chez LAFON, Recharge Rapide sur prise T2', \n 'semi-rapide', 'AC plug', 'Tesla Type 2', '22', '23']\n\nlist_t3_x1 = ['T3', 'Type 3c', 'DC Chademo - 44 kWh + \\nAC Type 3 - 43 kWh'] \n\nlist_combo_x1 = ['COMBO', 'Combo 2', 'Combo2', 'COMBO 2', 'combo2', 'combo', 'Borne LAFON - recharge rapide 43AC-50DC']\n\nlist_combo_ccs350_x1 = ['CCS350-CCS350-CCS350-CCS350', 'CCS350-CCS350-CCS350-CCS350-CCS350-CCS350']\n\nlist_t2_ef_x2 = ['EF - T2', 'T2 - E/F', 'E/F-T2', 'T2 - EF', 'T2/EF', 'T2-EF', 'T2-AC Triphasé', 'T2/TE', 'E/F - T2',\n 'E/F + T2', 'EF/T2', 'T2-E/F', 'TE-T2', 'T2S-E/F', 'EF-T2', 'EF - T2', 'Type 2 - E/F', 'T2 – E/F',\n 'Borne SESAME de chez Sobem / Recharge de Type C , recharge accélérée, 2 prises sur chaque PDC : E/F et T2',\n 'Borne SESAME de chez Sobem / Recharge de Type C , recharge acc?l?r?e, 2 prises sur chaque PDC : E/F et T2',\n 'E/F-T5', 'E/F-T7', 'E/F + T4', 'T2*E']\n\nlist_t3_ef_x2 = ['EF - T3', 'T3 - EF', 'E/F + T3', 'EF/T3', 'TE-T3', 'T3 et EF', 'Type 3 - E/F', 'T3-EF',\n 'EF-T3', 'E/F-T3', 'T3-E/F']\n\nlist_t2_chademo_x2 = ['T2-CHAdeMO', 'Type 2 + CHAdeMO']\n\nlist_chademo_combo_x2 = ['CHADEMO - COMBO', 'CHAdeMO-Combo', 'Combo-Chademo', 'Combo2-CHAdeMO', 'CHAdeMo-Combo']\n\nlist_combo_ccs350_chademo_t2_x3 = ['CCS350-CCS350-CCS50-CHAdeMO - T2', \n 'CCS350-CCS350-CCS350-CCS350-CCS50-CHAdeMO - T2']\n\nlist_t2_t3_ef__x3 = ['EF - T2 - T3', 'EF - T2 - t3', 'T2-T3-EF', 'T3-EF-T2', 'T2-T2-EF']\n\nlist_chademo_combo_ef_x3 = ['A/C - Combo - CHAdeMO']\n\nlist_t2_combo_chademo_x3 = ['T2-Combo2-CHAdeMO', 'T2 Combo Chademo', 'Combo-ChaDeMo-T2', 'CHADEMO - COMBO -T2',\n 'CHAdeMO-Combo-T2 câble attaché']\n\n#Intégration des colonnes booléennes \nirvePlus['EF'] = False\nirvePlus['Type 2'] = False\nirvePlus['Type 3'] = False\nirvePlus['Combo'] = False\nirvePlus['Combo CCS350'] = False \nirvePlus['Chademo'] = False\nirvePlus['Tesla Supercharger'] = False\n\n#Boucle itérative selon liste condition\nfor i, row in irvePlus.iterrows():\n if row['type_prise'] in list_ef_x1:\n irvePlus.loc[i, 'EF'] = True \n elif row['type_prise'] in list_t2_x1:\n irvePlus.loc[i, 'Type 2'] = True \n elif row['type_prise'] in list_t3_x1:\n irvePlus.loc[i, 'Type 3'] = True \n elif row['type_prise'] in list_combo_x1:\n irvePlus.loc[i, 'Combo'] = True \n elif row['type_prise'] in list_combo_ccs350_x1:\n irvePlus.loc[i, 'Combo CCS350'] = True \n elif row['type_prise'] in list_chademo_x1:\n irvePlus.loc[i, 'Chademo'] = True \n elif row['type_prise'] in list_tesla_supercharger_x1:\n irvePlus.loc[i, 'Tesla Supercharger'] = True\n elif row['type_prise'] in list_t2_ef_x2:\n irvePlus.loc[i, 'Type 2'] = True\n irvePlus.loc[i, 'EF'] = True\n elif row['type_prise'] in list_t3_ef_x2:\n irvePlus.loc[i, 'Type 3'] = True\n irvePlus.loc[i, 'EF'] = True\n elif row['type_prise'] in list_t2_chademo_x2:\n irvePlus.loc[i, 'Type 2'] = True\n irvePlus.loc[i, 'Chademo'] = True\n elif row['type_prise'] in list_chademo_combo_x2:\n irvePlus.loc[i, 'Chademo'] = True\n irvePlus.loc[i, 'Combo'] = True \n elif row['type_prise'] in list_combo_ccs350_chademo_t2_x3:\n irvePlus.loc[i, 'Type 2'] = True\n irvePlus.loc[i, 'Chademo'] = True \n irvePlus.loc[i, 'Combo CCS350'] = True \n elif 
row['type_prise'] in list_t2_t3_ef__x3:\n irvePlus.loc[i, 'Type 2'] = True\n irvePlus.loc[i, 'Type 3'] = True \n irvePlus.loc[i, 'EF'] = True \n elif row['type_prise'] in list_chademo_combo_ef_x3:\n irvePlus.loc[i, 'Chademo'] = True\n irvePlus.loc[i, 'Combo'] = True \n irvePlus.loc[i, 'EF'] = True \n elif row['type_prise'] in list_t2_combo_chademo_x3:\n irvePlus.loc[i, 'Type 2'] = True\n irvePlus.loc[i, 'Chademo'] = True \n irvePlus.loc[i, 'Combo'] = True \n else:\n pass\n",
"_____no_output_____"
]
],
[
[
"#### Traitement des valeurs manquantes identifiées dans le comptage des points de charge",
"_____no_output_____"
]
],
[
[
"#Identification des aménageurs concernés par le nombre de pdc manquant\nirvePlus[irvePlus.nbre_pdc.isna()]['n_amenageur'].unique()",
"_____no_output_____"
]
],
[
[
"Notons que la diversité ci-dessus n'apporte aucune solution pour pouvoir identifier les 'nbre_pdc' manquants. L'option choisie ici, sera de comptabiliser les connecteurs (booléens) sous condition que la valeur de 'nbre_pdc' soit inconnue, dans le cas contraire la valeur d'origine sera conservée.",
"_____no_output_____"
]
],
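[
[
"A vectorised cross-check (sketch): the same fill can be written without an explicit loop. It assumes the boolean connector columns created above already exist on irvePlus; the column names are the ones defined earlier in this notebook.\n\n```python\nconnector_cols = ['EF', 'Type 2', 'Type 3', 'Chademo', 'Combo', 'Combo CCS350', 'Tesla Supercharger']\n# one charging point per connector flagged True on the row\nfallback = irvePlus[connector_cols].sum(axis=1).astype(float)\n# keep the declared value when it is known, otherwise use the connector count\nirvePlus['nbre_pdc'] = irvePlus['nbre_pdc'].fillna(fallback)\n```",
"_____no_output_____"
]
],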
[
[
"#Remplacement des valeurs manquantes par une valeur flottante 0.0\nirvePlus.nbre_pdc.fillna(0.0, inplace=True)",
"_____no_output_____"
],
[
"#Remplacement des valeurs 0.0 par la somme des True Values correspondant aux connecteurs EF, Type 2, etc…\nfor i, row in irvePlus.iterrows():\n if row['nbre_pdc'] == 0.0:\n number = sum(irvePlus[['EF', 'Type 2', 'Type 3', 'Chademo', 'Combo', 'Combo CCS350', 'Tesla Supercharger']], \n axis=1)\n irvePlus.loc[i, 'nbre_pdc'] = number[i]",
"_____no_output_____"
],
[
"#Comptage des connecteurs suivant le type \ndisplay(irvePlus['EF'].value_counts())\ndisplay(irvePlus['Type 2'].value_counts())\ndisplay(irvePlus['Type 3'].value_counts())\ndisplay(irvePlus['Chademo'].value_counts())\ndisplay(irvePlus['Combo'].value_counts())\ndisplay(irvePlus['Combo CCS350'].value_counts())\ndisplay(irvePlus['Tesla Supercharger'].value_counts())",
"_____no_output_____"
]
],
[
[
"#### Enrichissement de l'échantillon en intégrant une catégorisation des aménageurs \nCette étape permettra de pouvoir obtenir une vision plus explicite de qui sont les aménageurs IRVE sur notre territoire. Il semble pertinent de pouvoir mieux comprendre comment s'organise l'implantation des bornes.",
"_____no_output_____"
]
],
[
[
"#Aperçu de la diversité des aménageurs à l'origine de l'implantation des bornes en France\nirvePlus.n_amenageur.unique()[:30]",
"_____no_output_____"
],
[
"#Liste des catégories pouvant rassembler les aménageurs identifiées dans l'échantillon\n\n#Collectivités territoriales\nlist_c_t = ['Aix-Marseille-Provence', 'BREST METROPOLE', 'CAPG', 'CAPL', 'CARF', 'CC VITRY CHAMPAGNE ET DER', \n 'CC de la Côtičre', 'CCPA', 'CCPHVA', 'CCVBA', 'CELLIEU', 'CGLE', 'CHARLIEU','CHAUSSON MATERIAUX', \n 'CHAZELLES SUR LYON', 'CNR', 'COMMELLE VERNAY',\"Communauté Urbaine d'Arras\", 'CANTAL', 'Aéroports de Paris SA', \n \"Communauté d'Agglomération Douaisis Agglo\",\"Communauté d'Agglomération Maubeuge Val de Sambre\", 'SODETREL ',\n \"Communauté d'Agglomération Valenciennes Métropole\", \"Communauté d'Agglomération du Boulonnais\", 'SMOYS', \n \"Communauté d'Agglomération du Pays de Saint Omer\", 'Communauté de Communes Flandre-Lys', 'SMEG 30', \n 'Communauté de Communes de la Haute Vallée de Chevreuse', \"Communauté de Communes du Coeur d'Ostrevent\",\n 'Communauté de Communes du Haut-Pays Montreuillois', \"Communauté de Communes du Pays d'Opale\",\n 'Communauté de Communes du Pays de Lumbres', \"Commune d'Eguisheim\",'FDEL 46', 'FDEL 46', 'FEURS',\n 'FONTANÈS', 'FRAISSES', 'GENILAC', 'GOLF CLUB DE LYON', 'GPSO-MEUDON', 'Grenoble-Alpes Métropole', \n 'Hauts-de-France', 'Herault Energies 34', 'ISTRES', \"L'ETRAT\", \"L'HORME\", 'LA FOUILLOUSE', 'LA GRAND CROIX', \n 'LA PACAUDIÈRE', 'LA RICAMARIE', 'LA TALAUDIÈRE', 'LA VALLA EN GIER', 'LE COTEAU', 'LORETTE','Le Pont du Gard', \n 'MABLY', 'MARLHES', 'MONTAGNY', 'MONTBRISON', 'MOUVELECVAR', 'MRN', 'Modulo (Mobilité Locale Durable)',\n 'Montpellier Mediterranee Metropole', 'Métropole Européenne de Lille', 'NEULISE', 'ORLEANS METROPOLE',\n 'PANISSIERES', 'PARIGNY', 'PERREUX','REGNY', 'RENAISON', 'RIORGES', 'ROANNE', 'ROCHE LA MOLIÈRE',\n 'SABLE SUR SARTHE', \"SAINT ANDRÉ D'APCHON\", 'SAINT ANDRÉ LE PUY', 'SAINT BONNET LE CHÂTEAU', \n 'SAINT CHRISTO EN JAREZ', 'SAINT CYR', 'SAINT ETIENNE ROCHETAILLÉE', 'SAINT ETIENNE SAINT VICTOR SUR LOIRE', \n 'SAINT GALMIER', 'SAINT GENEST LERPT', 'SAINT HÉAND', 'SAINT JUST SAINT RAMBERT', 'SAINT LÉGER SUR ROANNE', \n 'SAINT MARCELLIN EN FOREZ', 'SAINT MARTIN LA PLAINE', 'SAINT MAURICE EN GOURGOIS', 'SAINT PAUL EN JAREZ', \n 'SAINT ROMAIN EN JAREZ', 'SAINT ROMAIN LES ATHEUX', 'SAINT SAUVEUR EN RUE', 'SAINT SYMPHORIEN DE LAY', 'SAINT-LOUIS', 'SAINTE CROIX EN JAREZ', \n 'SALVIZINET', 'SAVIGNEUX', 'SDE 18', 'SDE 23', 'SDE 56', 'SDE 65', 'SDE07', 'SDE09', 'SDE29', 'SDE65', 'SDE76', \n 'SDEA10', 'SDED', 'SDEE48 48', 'SDESM', 'SDET 81', 'SDEY', \"SDEY Syndicat Departemental d'Energies de l'Yonne\", \n 'SE60', 'SEDI', 'SIDELC', 'SIED70', 'SIEDA 12', 'SIEEEN', 'SIEGE 27', 'SIEIL37', 'SIEML 49', 'SIPPEREC', \n 'SMA PNR Gatinais', 'SMED 13', 'SORBIERS', 'SOREGIES', 'SURY LE COMTAL', 'SYADEN 11', 'SYANE', 'SYDED', \n 'SYDEEL66 66', 'SYDESL', 'SYDEV 85', 'SYME05', 'Se 61', 'TE 53', \"TERRITOIRE D'ENERGIE 90\", 'Séolis', 'S‚olis',\n \"Syndicat Départemental d'Énergie de Loire-Atlantique (SYDELA)\", 'FDEE 19', 'SDEPA 64', 'SDEG 16', \n \"Syndicat Départemental d'Énergies d'Eure et Loir (SDE28)\", 'SDEE 47', 'SDEER 17', 'SYDEC 40',\n \"Syndicat Intercommunal de Distribution d'Electricité de Loir-et-Cher (SIDELC41)\", 'SDE 24', 'SDEEG 33',\n \"Syndicat de l'Énergie de l'Orne (TE61)\", 'Toulouse Metropole', 'UNIEUX', 'USEDA', 'USSON EN FOREZ',\n 'VEAUCHE', 'VILLARS', 'VILLE DE CAVAILLON', 'VILLE DE GAP', 'VILLE DE ROSHEIM', 'VILLEREST', \"Ville d'Hazebrouck\",\n 'Ville de Garches', 'Ville de Montrouge', 'Ville de Revel', 'Ville de Saverne', 'Ville de Viriat', \n 'Arcs 1950 Le Village - Parking', 
'B&B Hôtel Lyon Eurexpo Chassieu', \"Bastide Selva - Maison d'Hôtes\",\n 'Baumanière les Baux de Provence', 'Belle Isle sur Risle','Benvengudo Hôtel Restaurant', \n 'Best Western Amarys Rambouillet', 'Best Western Golf Hôtel Lacanau','Best Western Grand Hôtel de Bordeaux',\n 'Best Western Hotel Alexandra', 'Best Western le Lavarin', 'Best Western Plus - Hôtel de la Paix',\n 'Best Western Plus - Hôtel de la Régate', 'Best Western Plus Cannes Riviera & spa',\n 'Best Western Plus Excelsior Chamonix', 'Best Western Plus Santa Maria', 'Brasserie des Eclusiers',\n 'Buffalo Grill de Foix', 'Caffe Mazzo', 'Camping BelleRive', 'Camping du Domaine de Massereau', \n \"Camping Ecolodge de l'Etoile d'Argens\", 'Camping La Fontaine du Hallate en Morbihan', \"Camping La Roche d'Ully\", \n 'Camping Le Brasilia', 'Camping Palmira Beach', 'Camping Sunêlia Berrua', 'Camping Sunêlia Le Fief *****', \n \"Casino d'Évian - Evian Resort\", \"Casino d'Andernos - Le Miami\", 'Casino De Plombières-Les-Bains',\n 'Casino de Pornichet', 'Casino Joa Antibes La Siesta', 'Casino JOA Le Boulou', 'Casino Le Domaine de Forges',\n 'Casino Partouche de Boulogne-sur-Mer', 'Casino Partouche de Palavas Les FLots','Castel Camping Le Brévedent', \n 'Castel Maintenon']\n\n#Constructeurs Auto\nlist_auto = ['IONITY', 'Tesla', 'A Cheda', 'NISSAN', 'RENAULT']\n\n#Parkings \nlist_parking = ['EFFIA', 'Alyse Parc Auto', 'Parking Bodin', 'Parking François 1er Interparking', 'TM _Parking']\n\n#Centres commerciaux\nlist_centres_commerciaux = ['Centre commercial Grand Var', 'GEMO', 'Sičge Intermarché', 'Supermarchés COLRUYT', 'LECLERC', 'AUCHAN ', 'LECLERC',\n 'Centre Commercial Carrefour Villiers en Bière', 'Centre commercial Les Eléis', 'Centre Commercial Parly 2',\n 'Centre Commercial Waves Actisud', 'E-Leclerc Paray-le-Monial', 'Hyper U Sierentz', \"Intermarché l'Isle sur le Doubs\",\n 'Intermarché Mont près Chambord', 'Intermarché Ramonville', 'intermarché verneuil', \n 'Parc Commercial Les Portes de Soissons', 'Usines Center', 'CASA']\n\n#Opérateurs privés\nlist_op_prive = ['SODETREL', 'IZIVIA', 'ELECTRIC 55 CHARGING', 'PLUS DE BORNES', 'BE TROM', 'BOEN', 'DOCUWORLD']\n\n#Entreprises diverses\nlist_entreprise_diverse = [\"Cattin - Grands Vins & Crémants d'Alsace\", 'Caves Carrière', 'Champagne Bergere', 'Champagne Drappier', \n 'Champagne J de Telmont', 'Champagne Paul Dethune', 'Champagne Pertois-Moriset',\n 'Domaine Viticole Château de Chamirey', 'Dopff au Moulin', 'Jet Systems Hélicoptères Services']\n\n#Hotels, restaurants, tourisme\nlist_tourisme = [\"A L'Ecole Buissonière\", 'Aa Saint-Omer Golf Club', 'Abbaye de Bussiere sur Ouche ', 'Abbaye de Talloires',\n 'Aigle des Neiges Hotel', 'Altapura', 'Aparthotel Adagio Genève Saint Genis Pouilly', 'Atmosphères Hôtel', \n 'Au Grès des Ouches', 'Au Pont Tournant', 'Auberge Bienvenue', 'Auberge Bressane de Buellas', \n 'Auberge de Cassagne & Spa ', 'Auberge de la Petite Reine', 'Auberge du Lac', 'Auberge du Mehrbächel', \n 'Auberge du Vieux Puits', 'Auberge Edelweiss', 'Auberge Ostapé', 'Auberge Sundgovienne', 'Aux Terrasses', \n 'Avancher Hôtel & Lodge, Restaurant & Bar', 'Château Beauregard', \"Château d'Audrieu\", \"Château d'Igé****\", \n \"Château d'Isenbourg Hôtel Restaurant\", 'Château Dauzac', 'Château de Beaulieu', 'Château de Belmesnil', \n 'Château de Challanges', 'Château de Chapeau Cornu', 'Château de Chenonceau', 'Château de Clérac', \n 'Château de Germigney R&C Port-Lesney', 'Château de Gilly', \"Château de l'Hoste\", \"Château de l'Ile\", \n 'Château de la Presle', 'Château 
de la Treyne - Relais & Château', 'Château de Locguénolé', \n 'Château de Massillan', 'Château de Nazelles', 'Château de Noirieux', 'Château de Quesmy', \n 'Château de Riell - Relais & Châteaux', 'Château de Sacy', 'Château de Sissi', 'Château de St Paul', \n 'Château de Valmer', 'Château de Vault-de-Lugny', 'Château des Ducs de Joyeuse', 'Château du Galoupet', \n 'Château Fombrauge', 'Château Guiraud', 'Château Hôtel le Boisniard', 'Château Hourtin-Ducasse', \n 'Château La Coste', 'Château La Fleunie Hôtel/Restaurant', 'Château La Tour Carnet', 'Château Laborde Saint-Martin',\n 'Château Pape Clément', 'Château Sainte Sabine', 'Château Soutard', 'Château Talluy', 'Château Vignelaure', \n 'Châteaux de la Messardiere',\"Chalet L'Orignal\", 'Chalet M la Plagne', 'Chalet Marano Hôtel Restaurant & Spa', \n \"Chalet-Hôtel Le Chamois d'Or\", \"Chambre d'hôtes Le Crot Foulot\", 'Charmhotel Au Bois le Sire', \n 'Chateau de Courban & Spa Nuxe', 'Château des Demoiselles', 'Chateau MontPlaisir', 'Chateau Prieuré Marquet', \n 'Circuit Paul Ricard', 'Circuits Automobiles LFG', 'Clos des Sens', 'Clos Marcamps', 'Club Les Ormes', 'CosyCamp', \n 'Courtyard Paris Roissy CDG', 'Crowne Plaza Montpellier Corum', 'Domaine Château du Faucon', \n \"Domaine d'Auriac - Relais & Châteaux\", \"Domaine d'Essendiéras\", 'Domaine de Barive', 'Domaine de Barres', \n 'Domaine de Bournel', 'Domaine de Cabasse', 'Domaine de Crécy', 'Domaine de Divonne', \"Domaine de l'Hostreiere\", \n 'Domaine de la Corniche', \"Domaine de la Forêt d'Orient - Hôtel Golf & Spa\", 'Domaine de la Poignardiere', \n 'Domaine de la Tortinière', 'Domaine de la Tour', 'Domaine de Manville', 'Domaine de Mialaret', \n 'Domaine de Rochevilaine', 'Domaine de Saint-Géry', 'Domaine de Vaugouard', 'Domaine de Verchant', \n 'Domaine des Andéols', 'Domaine des Etangs', 'Domaine des Séquoias', 'Domaine du Bailli', \n 'Domaine du Château de Meursault', 'Domaine du Clos Fleuri', 'Domaine du Moulin', 'Domaine du Prieuré', \n 'Domaine du Revermont', 'Domaine Lafage', 'Domaine Selosse - Hôtel Les Avisés', 'Emerald Stay Apartments Morzine', \n 'Espace Montagne Grenoble', 'Eurotel', 'Evian Resort Golf Club', 'Ferme de la Rançonnière', 'Flocons de Sel', \n 'Gîte des Prés de Garnes', 'Gîte La Mystérieuse Ponts sur Seulles', 'Gîtes Bon Air Chalets Piscine Spa', \n 'Golden Tulip Le Grand Bé Saint Malo', 'Golden Tulip Sophia Antipolis', 'Golf Cap Malo', 'Golf Club Omaha Beach', \n 'Golf de Barbaroux - Open Golf Club', 'Golf de la Prée la Rochelle', 'Golf de la Sainte Baume - Open Golf Club', \n 'Golf de Marseille la Salette - Open Golf Club', 'Golf de Servanes - Open Golf Club', \n 'Golf du Touquet - Open Golf Club', 'Golf Hôtel Restaurant du Kempferhof', 'Golf International de Grenoble', \n 'Golf Les Gets', 'Grand Hôtel des Alpes', 'Grand Hôtel des Thermes', 'Grand Hotel La Cloche', \n 'Grand Parc du Puy du Fou', 'Hôtel-Restaurant & SPA Les Gentianettes', 'Hôtel-Restaurant Kleiber', \n 'Hôtel-Restaurant Le Grand Turc', 'Hôtel-Restaurant Le Mas du Terme', 'Hôtel & Spa Best Western Plus - Chassieu', \n \"Hôtel & Spa L'Equipe\", 'Hôtel & Spa Les Violettes', 'Hôtel 202', 'Hôtel A Madonetta', 'Hôtel Akena', \n 'Hôtel AKENA de Saint-Witz', 'Hôtel Akena Dol de Bretagne', 'Hôtel Ampère', 'Hôtel Atena', \n 'Hôtel Au Coeur du Village', 'Hôtel B&B Colmar Expo', 'Hôtel Barrière - le Grand Hôtel Dinard', \n 'Hôtel Barrière Le Normandy Deauville', 'Hôtel Barrière Le Westminster', 'Hôtel Best Western Plus Metz Technopôle', \n 'Hôtel Cézanne', 'Hôtel Cala Di Greco', 'Hôtel Cap-Estel', 
'Hôtel Capao', 'Hôtel Castel Burgond', \n 'Hôtel Castel Mouisson', 'Hôtel Cayrons', 'Hôtel Château de la Begude - Golf Opio Valbonne', \n 'Hôtel Château de la marlière', 'Hôtel Chais Monnet', 'Hôtel Champs Fleuris', 'Hôtel Chapelle et Parc', \n 'Hôtel Chez Camillou - Restaurant Cyril ATTRAZIC', 'Hôtel Cour des Loges', \"Hôtel d'Angleterre\", \n 'Hôtel Daumesnil-Vincennes', 'Hôtel de France', 'Hôtel de Greuze', 'Hôtel de la Cité', 'Hôtel des Dunes', \n 'Hôtel des Princes', 'Hôtel Diana Restaurant & Spa', 'Hôtel du Bois Blanc', 'Hôtel du Cap-Eden-Roc', \n 'Hôtel du Palais', 'Hôtel Escapade', 'Hôtel Fleur de Sel', 'Hôtel Golf Château de Chailly', 'Hôtel Ha(a)ïtza', \n 'Hôtel Husseren-les-Châteaux', 'Hôtel ibis Besançon Centre Ville', 'Hôtel Juana', \n 'Hôtel Kyriad Prestige Clermont-Ferrand', 'Hôtel Kyriad Prestige Lyon Saint-Priest Eurexpo', \n 'Hôtel Kyriad Prestige Strasbourg Nord', 'Hôtel Kyriad Prestige Vannes', \"Hôtel l'Angleterre\", \n \"Hôtel L'Estelle en Camargue \", 'Hôtel La Chaumière', 'Hôtel La Ferme', \"Hôtel La Ferme D'Augustin\", \n 'Hôtel La Sivolière', 'Hôtel La Villa', 'Hôtel La Villa Douce', 'Hôtel la Villa K', 'Hôtel Le Bellevue', \n 'Hôtel Le Bristol Paris', 'Hôtel Le Burdigala', 'Hôtel le Cèdre', 'Hôtel Le Capricorne', 'Hôtel Le Cep', \n 'Hôtel le Clos', 'Hôtel le M de Megève', 'Hôtel Le Mas des Herbes Blanches', 'Hôtel Le Morgane', \n 'Hôtel le Pic Blanc', 'Hôtel Le Relais des Champs', 'Hôtel Le Rivage', 'Hôtel Le Royal Barrière Deauville', \n 'Hôtel Le Vallon de Valrugues & Spa', 'Hôtel Les Airelles', 'Hôtel Les Bartavelles & SPA', 'Hôtel Les Bories & Spa',\n 'Hôtel Les Bouis', 'Hôtel Les Colonnes', 'Hôtel Les Esclargies', 'Hôtel Les Glycines et Spa', 'Hôtel Les Gravades', \n 'Hôtel Les Maritonnes Parc & Vignoble', 'Hôtel Les Trésoms', 'Hôtel Lodges Ste Victoire & Restaurant St-Estève', \n 'Hôtel Logis Châteaudun', 'Hôtel Lyon Métropole', 'Hôtel Marriott Roissy Charles de Gaulle Airport', \n 'Hôtel Mercure Côte Ouest Thalasso & Spa', 'Hôtel Mercure Caen Centre', 'Hôtel Mercure Epinal Centre', \n 'Hôtel Mercure Omaha Beach', 'Hôtel Mercure Reims Centre Cathedrale', 'Hôtel Miramar', 'Hôtel Mont-Blanc', \n 'Hôtel Negrecoste', 'Hôtel Parc Beaumont ', 'Hôtel Parc Victoria', 'Hôtel Parkest', 'Hôtel Radisson Blu 1835', \n 'Hôtel Radisson Blu Biarritz', \"Hôtel Restaurant A l'Etoile\", 'Hôtel Restaurant Alliance Couvent des Minimes', \n 'Hôtel Restaurant Au Boeuf Rouge', 'Hôtel Restaurant de la Tabletterie', 'Hôtel Restaurant des Bains', \n 'Hôtel Restaurant Edward 1er', 'Hôtel Restaurant Kyriad Montauban', 'Hôtel Restaurant La Ferme de Cupelin', \n 'Hôtel Restaurant Le Beauregard', 'Hôtel Restaurant Le Cerf', 'Hôtel Restaurant Le Noirlac', \n 'Hôtel Restaurant Le Tropicana', 'Hôtel Restaurant Les Oliviers', 'Hôtel Royal - Evian Resort', \n 'Hôtel Sezz Saint-Tropez - Restaurant Colette', 'Hôtel Stella', 'Hôtel U Capu Biancu', \n 'Hôtel, Restaurant Le Belvedere', 'Holiday Inn Blois centre ', 'Holiday Inn Express Paris - Velizy', \n 'Holiday Inn Lyon - Vaise', 'Honfleur Normandy Outlet', 'Hostellerie de la Pointe Saint Mathieu', \n 'Hostellerie de Levernois', 'Hostellerie La Briqueterie', 'Hostellerie La Farandole', 'Hostellerie Le Cèdre',\n 'Hotel & Spa Le Dahu', 'Hotel Alpen Roc', 'Hotel Bel Air - Brasserie La Terrasse', 'Hotel Castelbrac', \n 'Hotel du Clocher Villa Savoy ***', 'Hotel Ibis Manosque Cadarache', 'Hotel ibis Saint Brieuc Yffiniac', \n 'Hotel Imperial Garoupe', 'Hotel Koh-I Nor', \"Hotel L'Alta Peyra\", 'Hotel Le Club de Cavalière & Spa', \n 'Hotel Le Kaïla', 
'Hotel le Manoir Saint Michel', 'Hotel Le Mans Country Club', 'Hotel le Montrachet', \n 'Hotel Le Pigonnet', 'Hotel Le Tillau', 'Hotel Les Bains de Cabourg - Thalazur', 'Hotel Maison Bras', \n 'Hotel Marina Corsica Porto Vecchio', 'Hotel Mercure Bordeaux Château Chartrons', 'Hotel Normandie', \n 'Hotel Restaurant de la poste', 'Hotel Restaurant Ferme Blanche', 'Hotel Restaurant Le Viscos', \n 'Hotel Restaurant Spa Le Rabelais', 'Hotel Royal Riviera', 'hotel Taj-I Mah*****', \n 'Hotel The Originals Domaine de La Groirie', 'Hotel The Originals Nantes Ouest Agora', \n 'Hotel-Restaurant Au Chêne Vert', 'Hyatt Paris Madeleine', 'Ibis Cergy Pontoise Le Port', 'Ibis La Roche sur Yon', \n 'Ibis Roanne', 'Ibis Styles - Mulsanne', 'Ibis Styles Mâcon Centre', 'Ibis Styles Paris Mairie de Clichy', \n 'Ibis Styles Tours Sud', 'Inter Hotel Acadie tremblay en france', 'Inter-Hôtel Alteora site du Futuroscope', \n 'Inter-Hôtel de la Chaussairie', 'Inter-Hôtel Le Cap', 'Inter-Hôtel Roanne Hélios', 'Inter-Hotel Albi le Cantepau', \n 'Inter-Hôtel du Lac', 'Inter-Hotel Ecoparc Montpellier Est', 'Inter-Hotel Saint Martial', \n 'Isulella Hôtel & Restaurant', 'Jiva Hill Resort', \"Jum'Hôtel - Restaurant Atelier Grill\", \n 'Kon Tiki - Riviera Villages ', 'Kube Hôtel Saint-Tropez', 'Kyriad Clermont-Ferrand Centre', \n \"L'Apogée Courchevel\", \"L'Assiette Champenoise\", \"L'Atelier\", \"L'atelier d'Edmond\", \n \"L'Enclos Béarnais Maison d'hôtes\", \"L'Impérial Palace\", \"L'Oustalet Gigondas\", \"l'Oustau de Baumanière\", \n 'La Bastide de Gordes', 'La Bastide de Tourtour Hôtel & Spa ', 'La Côte Saint Jacques & Spa', \n 'La Cheneaudière & Spa - Relais & Châteaux', 'La Coquillade Provence Village', 'La Ferme du Chozal', \n 'La Gentilhommiere', 'La Grande Maison de Bernard Magrez ', 'La Grande Terrasse Hôtel & Spa Mgallery', \n 'La Guitoune', 'La Jasoupe', 'La Maison de Rhodes', 'La Malouiniere des Longchamps', 'La Pinède Plage', \n 'La Pyramide Patrick Henriroux', 'La Réserve', 'La Réserve des Prés Verts Massages & Spa', 'La Réserve Ramatuelle', \n 'La Signoria - Relais & Châteaux', 'La Tannerie de Montreuil', 'La Vaucouleurs Golf Club', 'Lagardère Paris Racing',\n 'Le Barn', 'Le Beau Rivage', 'Le Binjamin', 'Le Bois Joli', 'Le Brittany & Spa', 'Le Château de la Tour', \n 'Le Chambard Relais & Châteaux', 'Le Clos de la Ribaudiere', 'Le Clos de Serre', 'Le Clos des Délices', \n 'Le Clos Saint Vincent', 'Le Clos Saint-Martin Hôtel & Spa', \"Le Couvent des Minimes Hotel &SPA L'Occitane\", \n 'Le Domaine de Montjoie', 'Le Domaine des Prés Verts Massages & Spa', \"Le Fouquet's\", 'Le Gîte de Garbay ', \n 'Le Grand Aigle Hôtel & Spa', \"Le Grand Casino d'Annemasse \", 'Le Grand Hôtel Cannes', \n \"Le Grand Hôtel de l'Espérance\", 'Le grand Monarque', 'Le Hameau Albert 1er', 'Le Hommet', \n 'Le Majestic Barrière Cannes', 'Le Manoir de Kerbot', 'Le Manoir des Impressionnistes', \n 'Le Mas Candille, Relais & Châteaux', 'Le Moulin de Vernègues', 'Le Palace de Menthon', 'Le Petit Nice Passedat', \n 'Le Phebus & Spa', 'Le Pigeonnier du Perron', 'Le Prieuré', 'Le Prieuré des Sources', \n 'Le Refuge des Près Verts Massages & Spa', 'Le Relais Bernard Loiseau', 'Le Relais du Boisniard', 'Le Richelieu', \n 'Le Saint-Barnabé Hôtel et Spa ', 'Le Saint-James', 'Les Châtaigniers de Florac', 'Les Cures Marines', \n 'Les Etangs de Corot', 'Les Fermes de Marie', 'Les Hôtels de Beauval', 'Les Haras Hôtel ', 'Les Hauts de Loire', \n 'Les Maisons de Bricourt', 'Les Manoirs Tourgeville', 'Les Orangeries', \"Les Prés d'Eugénie - Michel 
Guérard\", \n 'Les Prairies de la Mer', 'Les Sources de Caudalie', 'Les Terrasses du Port', \"Les Vignobles de l'Escarelle\", \n 'Logis Aigue Marine Hôtel', \"Logis Au Comté D'Ornon\", 'Logis Auberge de la Diège', 'Logis Auberge de la Tour', \n 'Logis Château de la Motte-Liessies', 'Logis Château de Labro', 'Logis Domaine du Relais de Vincey', \n 'Logis Grand Hôtels des Bains', 'Logis Hôtel & Spa Marina Adelphia', 'Logis Hôtel Acotel', \"Logis Hôtel AR Milin'\", \n 'Logis Hôtel Arcombelle', 'Logis Hôtel Bellevue', 'Logis Hôtel Center Brest', 'Logis Hôtel de la Clape', \n 'Logis Hôtel des Châteaux', 'Logis Hôtel des Elmes - Restaurant la Littorine', 'Logis Hôtel du Cheval Blanc', \n 'Logis Hôtel Le Prince Noir', 'Logis Hôtel Le Régent', 'Logis Hôtel le Régina', 'Logis Hôtel le Vernay', \n 'Logis Hôtel les 2 Rives', 'Logis Hôtel Les Pierres Dorées', 'Logis Hôtel Murtel', \n 'Logis Hôtel Restaurant Au cheval blanc', 'Logis Hôtel Restaurant La Brèche de Roland', \n 'Logis Hôtel Restaurant Spa Les Peupliers', 'Logis Hôtel Taillard', 'Logis Hostellerie du Périgord Vert', \n 'Logis Hostellerie Saint Vincent ', 'Logis Hotel le Céans', 'Logis Hotel Restaurant des Acacias', \n \"Logis L'Abreuvoir Hôtel Restaurant\", \"Logis L'Hôtel D'Arc\", \"Logis L'Orée du Bois\", 'Logis La Résidence', \n 'Logis La Source du Mont', 'Logis Lacotel', 'Logis Le Moulin de la Coudre', \n 'Logis Le Moulin des Gardelles Hôtel-Restaurant', 'Logis Le Relais des Dix Crus', 'Logis Les Hauts de Montreuil', \n 'Logis Mas de la Feniere', 'Logis Relais du Gué de Selle', 'Lorraine Hôtel', \n 'M Gallery - La Cour des Consuls Hotel & Spa', 'Maison Addama', 'Maison Cazes', \"Maison d'Hotes La Cimentelle\", \n 'Maison des Algues', 'Maison Lameloise', 'Maison Pic', 'Mama Shelter', 'Mama Shelter Lyon', \n 'Mama Shelter Marseille', 'Manoir de Gressy', 'Manoir de la Poterie & SPA', 'Manoir de Pancemont', \n 'Manoir de Surville', 'Manoir Plessis Bellevue', 'Mas de Chastelas', 'Mas de la Crémaillère', \n 'Mas de la Grenouillère', 'Mas la Jaina', 'Mercure Bourges Hôtel de Bourbon', 'Mercure Cherbourg Centre Port', \n 'Mercure Grand Hotel des Thermes', 'Mercure Lille Centre Vieux Lille', 'Mercure Lyon Genas Eurexpo', \n 'Mineral Lodge', 'Misincu', 'MOB Hotel Lyon', 'Monte Carlo Beach Hôtel', 'Musée Würth France Erstein', \n 'Najeti Hôtel Château Tilques', \"Najeti Hôtel de l'Univers\", 'Najeti Hôtel La Magnaneraie', \n 'New Cottage & Spa de nage', 'Nouvel Hôtel', 'Novotel Chartres', 'Novotel La Rochelle Centre', \n 'Novotel Marseille Centre Prado Vélodrome', 'Novotel Noisy Marne la Vallée', 'Novotel Spa Rennes Centre Gare', \n 'Novotel Thalassa Dinard', 'Orée de Chartres', \"Pêche de Vigne Spa et Maison d'Hôtes\", 'Parc zoologique Cerza', \n 'Paris International Golf Club', 'Petit Hôtel Confidentiel', 'Pierre et Vacances Premium Le Crotoy', \n \"Pierre et Vacances Premium Les Terrasses d'Eos\", \"Pierre et Vacances Premium Presqu'Ile de la Touques\", \n 'Pizza Del Arte', 'Plaza Madeleine', 'Punta Lara', \"Qualys Hôtel d'Alsace\", \"Qualys Hôtel du Golf de l'Ailette\", \n 'Qualys-Hotel Grand Hôtel Saint Pierre', 'Résidence de France', 'Résidence Le Balamina', 'Radisson Blu Hôtel Nice', \n 'Relais & Châteaux - La Ferme Saint Siméon', 'Relais & Châteaux Georges Blanc Parc & Spa', 'Relais Christine', \n 'Relais du Silence - Château de Perreux', 'Relais du Silence - Le Mas de Guilles', \n 'Relais du Silence Domaine du Normandoux', 'Relais du Silence Ker Moor Préférence', \n 'Relais du Silence La Mainaz Hôtel Restaurant', 'Relais du Silence Les 
Vignes de la Chapelle', \n 'Relais du Silence Manoir de la Roche Torin', 'Relais Thalasso Chateau des Tourelles', \n 'Relais Thalasso Hotel Atalante', 'Renaissance Arc de Triomphe', 'Resort Barrière Lille', \n 'Resort Barrière Ribeauvillé', 'Resort Résidence Pierre', 'Restaurant Del Arte', 'Restaurant DEL ARTE Ploërmel', \n 'Restaurant La Chaudanne', 'Restaurant La Ferme Saint Michel', \"Restaurant La Grande Cascade - L'Auberge du Bonheur\", \n 'Restaurant Les Amis du Lac', 'Ristorante Del Arte', 'Saint Charles Hôtel & Spa', 'Saint James Paris ', \n 'SAS Louis Moreau', 'Shangri-La Hotel Paris', 'SNIP Yachting', 'Splendid Hôtel & Spa', 'Stiletto Cabaret', \n 'Stras Kart', 'Sunélia Aluna Vacances', 'Sunêlia Camping du Ranc Davaine', 'Sunêlia Domaine de la Dragonnière', \n 'Sunêlia Domaine Les Ranchisses', 'Sunêlia La Ribeyre', 'Sunêlia Les 3 Vallées', \n 'Sunêlia Perla di Mare camping restaurant', 'Télécabine du Mont-Chéry', 'Terre Blanche Hotel Spa Golf Resort', \n 'Territoires Charente - ZAC Montagnes Ouest', \"Toison d'Or\", 'Valthoparc', 'Vichy Célestins Spa Hôtel', \n 'Villa Duflot', 'Villa Florentine - Restaurant Les Terrasses de Lyon', 'Villa Garbo Cannes', 'Villa La Coste', \n 'Villa Maïa', 'Villa Magnolia Parc', 'Villa Mas St Jean', 'Villa Morelia', 'Villa Regalido', 'Villa René Lalique', \n 'Village Les Armaillis', 'Vincent Cuisinier de Campagne', 'Yelloh Village Camping Le Sérignan-Plage', \n 'Yelloh Village Les Grands Pins', 'Yelloh Village Les Tournels']\n\n\n#Intégration d'une nouvelle variable 'categ_amenageur' selon condition\nirvePlus['categ_amenageur'] = irvePlus['n_amenageur'].copy()\n\nfor x in irvePlus['categ_amenageur']:\n if x in list_c_t:\n irvePlus['categ_amenageur'].replace(x, 'Collectivités territoriales', inplace=True)\n elif x in list_auto:\n irvePlus['categ_amenageur'].replace(x, 'Constructeurs Automobiles', inplace=True)\n elif x in list_parking:\n irvePlus['categ_amenageur'].replace(x, 'Sociétés de Parking', inplace=True) \n elif x in list_centres_commerciaux:\n irvePlus['categ_amenageur'].replace(x, 'Centres commerciaux', inplace=True) \n elif x in list_op_prive:\n irvePlus['categ_amenageur'].replace(x, 'Opérateurs privés', inplace=True) \n elif x in list_entreprise_diverse:\n irvePlus['categ_amenageur'].replace(x, 'Entreprises diverses', inplace=True) \n elif x in list_tourisme:\n irvePlus['categ_amenageur'].replace(x, 'Hôtels, Restaurants…', inplace=True) \n else:\n pass\n ",
"_____no_output_____"
]
],
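[
[
"A faster equivalent of the replace loop above (sketch): build a single lookup dictionary from the lists and map it in one pass. It assumes the list_* variables defined in the previous cell; unknown operators keep their original name, as in the loop.\n\n```python\ncategory_by_list = {\n    'Collectivités territoriales': list_c_t,\n    'Constructeurs Automobiles': list_auto,\n    'Sociétés de Parking': list_parking,\n    'Centres commerciaux': list_centres_commerciaux,\n    'Opérateurs privés': list_op_prive,\n    'Entreprises diverses': list_entreprise_diverse,\n    'Hôtels, Restaurants…': list_tourisme,\n}\n# flatten into one name -> category dictionary\nlookup = {name: categ for categ, names in category_by_list.items() for name in names}\nirvePlus['categ_amenageur'] = irvePlus['n_amenageur'].map(lookup).fillna(irvePlus['n_amenageur'])\n```",
"_____no_output_____"
]
],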
[
[
"#### Enrichissement de l'échantillon en intégrant code département, département et région\nL'API Google Géocoding a été utilisée de manière à pouvoir extraire les données de géolocalisation attendues. Après quelques essais, plusieurs coordonnées 'Latitude' et 'Longitude' ont pu être identifiées comme non conformes (inversion de coordonnées, problème de format, etc…), un traitement au cas par cas de ces anomalies a été fait pour pouvoir utiliser l'API.",
"_____no_output_____"
]
],
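[
[
"Before calling the API, a rough screening can flag coordinates that are unlikely to be valid for metropolitan France. The bounding box below is approximate, deliberately excludes the overseas territories, and is only a heuristic sketch for spotting swapped or badly formatted values.\n\n```python\nlat = pd.to_numeric(irvePlus['Ylatitude'], errors='coerce')\nlon = pd.to_numeric(irvePlus['Xlongitude'], errors='coerce')\nin_metropole = lat.between(41.0, 51.5) & lon.between(-5.5, 10.0)\nsuspect = irvePlus.loc[~in_metropole, ['n_amenageur', 'Ylatitude', 'Xlongitude']]\nprint(len(suspect), 'rows to review (bad format, swapped coordinates or overseas)')\n```",
"_____no_output_____"
]
],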
[
[
"#Intervention sur quelques coordonnées atypiques \nirvePlus['Ylatitude'].replace(\"43*96228900\", 43.96228900, inplace=True)\nirvePlus['Xlongitude'].replace('6?07\\'44.1\"E', 6.07441, inplace=True) \nirvePlus['Xlongitude'].replace('6›09\\'34.8\"E', 6.09348, inplace=True)",
"_____no_output_____"
],
[
"#Changement du type de données sur les variables Latitude et Longitude\nirvePlus['Ylatitude'] = irvePlus['Ylatitude'].astype(float)\nirvePlus['Xlongitude'] = irvePlus['Xlongitude'].astype(float)",
"_____no_output_____"
],
[
"#Traitement des observations en anomalie après avoir effectué quelques tentatives\nirvePlus.loc[1442, 'Ylatitude'] = 43.279831\nirvePlus.loc[1442, 'Xlongitude'] = 6.577639\nirvePlus.loc[1477, 'Ylatitude'] = 43.279831\nirvePlus.loc[1477, 'Xlongitude'] = 6.577639\nirvePlus.loc[1505, 'Ylatitude'] = 43.279831\nirvePlus.loc[1505, 'Xlongitude'] = 6.577639\nirvePlus.loc[2059, 'Ylatitude'] = 45.889087\nirvePlus.loc[2059, 'Xlongitude'] = 4.893406\nirvePlus.loc[2078, 'Ylatitude'] = 47.031041\nirvePlus.loc[2078, 'Xlongitude'] = 5.108918\nirvePlus.loc[8527, 'Ylatitude'] = 43.608195\nirvePlus.loc[8527, 'Xlongitude'] = 5.003735\nirvePlus.loc[8543, 'Ylatitude'] = 43.608195\nirvePlus.loc[8543, 'Xlongitude'] = 5.003735\nirvePlus.loc[10071, 'Ylatitude'] = 46.3026926\nirvePlus.loc[10071, 'Xlongitude'] = 4.8321937\nirvePlus.loc[10072, 'Ylatitude'] = 46.3027089\nirvePlus.loc[10072, 'Xlongitude'] = 4.8234389\nirvePlus.loc[10073, 'Ylatitude'] = 46.3026926\nirvePlus.loc[10073, 'Xlongitude'] = 4.8321937\nirvePlus.loc[10074, 'Ylatitude'] = 46.276451\nirvePlus.loc[10074, 'Xlongitude'] = 4.038723\nirvePlus.loc[10075, 'Ylatitude'] = 46.276451\nirvePlus.loc[10075, 'Xlongitude'] = 4.038723\nirvePlus.loc[10076, 'Ylatitude'] = 46.3027089\nirvePlus.loc[10076, 'Xlongitude'] = 4.8234389\nirvePlus.loc[13671, 'Ylatitude'] = 45.271378\nirvePlus.loc[13671, 'Xlongitude'] = 0.043441\nirvePlus.loc[13672, 'Ylatitude'] = 45.271378\nirvePlus.loc[13672, 'Xlongitude'] = 0.043441\nirvePlus.loc[13683, 'Ylatitude'] = 45.886326\nirvePlus.loc[13683, 'Xlongitude'] = 0.582253\nirvePlus.loc[13684, 'Ylatitude'] = 45.886326\nirvePlus.loc[13684, 'Xlongitude'] = 0.582253",
"_____no_output_____"
]
],
[
[
"#### Attention !\n__Le code suivant nécessite une clé d'API Google Geocode, non mis à disposition.\nLa variable \"list_cp\" a été sauvegardée pour éviter de lancer le script à chaque \nexécution du Notebook, +/- 1 heure de temps.__",
"_____no_output_____"
]
],
[
[
"%%time\n#Code permettant de préciser les codes postaux des bornes de recharge de l'échantillon\nfrom urllib.request import urlopen\nimport sys\nimport json\n\nfrom sys import stdout\nfrom time import sleep\n\nlist_cp = []\nfor i, row in irvePlus.iterrows():\n key = \"*********************************\"\n url = \"https://maps.googleapis.com/maps/api/geocode/json?\"\n url += \"latlng=%s,%s&sensor=false&key=%s\" % (row['Ylatitude'], row['Xlongitude'], key)\n v = urlopen(url).read()\n j = json.loads(v)\n components = j['results'][0]['address_components'] \n \n for c in components:\n if \"postal_code\" in c['types']:\n cp = c['long_name']\n list_cp.append(cp) \n else:\n pass\n \n sys.stdout.write('\\r' \"Progress. \"+ str(i+1) + \"/\" +str(len(irvePlus)) + \" >>>>>>> \")\n sys.stdout.flush()",
"Progress. 16112/16112 CPU times: user 4min 42s, sys: 24.6 s, total: 5min 7s\nWall time: 1h 15min 19s\n"
]
],
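[
[
"One fragility of the loop above: when the geocoder returns no postal_code component, nothing is appended, so list_cp can fall out of alignment with the DataFrame index. A defensive variant (sketch) appends exactly one entry per row; the helper name is hypothetical.\n\n```python\ndef postal_code_from(components):\n    # return the postal code if present, otherwise None, so exactly one value is appended per row\n    for c in components:\n        if 'postal_code' in c['types']:\n            return c['long_name']\n    return None\n\n# inside the loop: list_cp.append(postal_code_from(components))\n```",
"_____no_output_____"
]
],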
[
[
"A partir de la liste 'list_cp' on peut modifier les données de manière à obtenir les codes des départements, et donc enrichir l'échantillon d'une localisation selon les départements du pays.",
"_____no_output_____"
]
],
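[
[
"Taking the first two characters works for most of the sample, but there are known edge cases: Corsican codes 2A/2B share the 20 postal prefix and the overseas departments use three-digit codes (971 to 976). A helper that makes those rules explicit (sketch; the 2A/2B split still has to be resolved from the coordinates, as done below):\n\n```python\ndef dept_from_postal_code(cp):\n    cp = str(cp).zfill(5)\n    if cp.startswith('97') or cp.startswith('98'):\n        return cp[:3]   # overseas departments and collectivities\n    if cp.startswith('20'):\n        return '2A'     # Corsica: 2A vs 2B cannot be read from the postal code alone\n    return cp[:2]\n```",
"_____no_output_____"
]
],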
[
[
"#Sauvegarde de la variable \nimport pickle\n#pickle.dump(list_cp, open('p8_datatable/list_cp.pickle', 'wb'))",
"_____no_output_____"
],
[
"with open('p8_datatable/list_cp.pickle', 'rb') as f:\n list_cp = pickle.load(f)",
"_____no_output_____"
],
[
"#Création d'une liste propre aux codes des départements\ncd = []\nfor c in list_cp.astype(str):\n cd.append(c[:2])\n \n#Intégration des nouvelles variables dans l'échantillon\nirvePlus['code_postal'] = list_cp\nirvePlus['code_dpt'] = cd",
"_____no_output_____"
],
[
"#Visualisation rapide de quelques observations \nirvePlus[6000:6005]",
"_____no_output_____"
],
[
"#Visualisation des codes départements\nirvePlus.code_dpt.unique()",
"_____no_output_____"
],
[
"#Modification de quelques codes pour pouvoir ensuite effectuer une jointure sans défaut\ncode_modif = ['01', '02', '03', '04', '05', '06', '07', '08', '09' ]\nfor x in irvePlus['code_dpt']:\n if x in code_modif:\n irvePlus['code_dpt'].replace(x, x[1:], inplace=True)",
"_____no_output_____"
],
[
"#Précision apportée à la Corse avec différenciation entre 2A et 2B\nirvePlus.code_dpt.replace('20', '2A', inplace=True)\n\ncode_dpt_2b = [14106, 14107, 14662, 14663, 15070, 15071, 15377, 15378, 15379, 15561, 15562, 15799, 15800]\nfor i, row in irvePlus.iterrows():\n if i in code_dpt_2b:\n irvePlus.loc[i, \"code_dpt\"] = '2B'",
"_____no_output_____"
],
[
"#Enrichement des départements et régions via le fichier 'departements-francais.csv'\n#Source : https://www.regions-et-departements.fr/departements-francais\ndpt_fr = pd.read_csv('p8_data/departements-francais.csv', sep=';')\ndpt_fr.rename(columns={'NUMÉRO': 'code_dpt', 'NOM': 'dpt', 'REGION': 'region',\n 'SUPERFICIE (km²)': 'superficie_km2', 'POPULATION': 'nbre_habitant'}, inplace=True)\ndpt_fr.head()",
"_____no_output_____"
],
[
"#Jointure entre l'échantillon et le référentiel des départements et régions\nirvePlus = pd.merge(irvePlus, dpt_fr[['code_dpt', 'dpt', 'region', 'superficie_km2', 'nbre_habitant']], \n how='left', on = \"code_dpt\")",
"_____no_output_____"
],
[
"#Visualisation des 5 dernières lignes \nirvePlus.tail()",
"_____no_output_____"
],
[
"#Estimation du nombre de stations de recharge (en anglais Charging Station Pool)\nirvePlus.id_station.nunique()",
"_____no_output_____"
],
[
"#Estimation du nombre de bornes de recharge (en anglais Charging Station)\nirvePlus.id_borne.nunique()",
"_____no_output_____"
],
[
"len(irvePlus.n_station.unique())",
"_____no_output_____"
],
[
"#Estimation du nombre de points de recharge (en anglais Charging Point)\nirvePlus.nbre_pdc.sum()",
"_____no_output_____"
]
],
[
[
"Notons que selon les études la répartition établie ci-dessus diverge. Parfois par abus de langage entre borne de recharge et point de charge. Ici, il n'est pas réalisable d'avoir une granularité plus fine qui pourrait prendre en compte l'état de service de la borne. ",
"_____no_output_____"
]
],
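[
[
"With the enrichment in place, the three levels (station pools, charging stations, charging points) can be summarised per region. A sketch assuming the columns created above:\n\n```python\nsummary = (irvePlus\n           .groupby('region')\n           .agg(stations=('id_station', 'nunique'),\n                bornes=('id_borne', 'nunique'),\n                points_de_charge=('nbre_pdc', 'sum'))\n           .sort_values('points_de_charge', ascending=False))\nsummary.head(10)\n```",
"_____no_output_____"
]
],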
[
[
"#Sauvegarde \nirvePlus.to_csv('p8_datatable/irvePlus.csv')",
"_____no_output_____"
]
],
[
[
"### Prévision du nombre de Points de charge à 5 ans\nA partir de l'échantillon 'irve_type' basé sur des chiffres trimestriels, l'échantillon sera re-calibré par mois afin d'avoir une granularité plus fine des données.",
"_____no_output_____"
]
],
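[
[
"The cells below redistribute each quarterly total over its months with random draws. A deterministic alternative (sketch) is to interpolate the quarterly series linearly at a monthly frequency; it assumes irve_type holds one count per quarter in the 'Accessible au public' column with a datetime 'Time' column.\n\n```python\nmonthly_interp = (irve_type[['Time', 'Accessible au public']]\n                  .set_index('Time')\n                  .resample('M')\n                  .interpolate('linear')\n                  .reset_index())\n```",
"_____no_output_____"
]
],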
[
[
"#Rappel de l'échantillon 'irve_type' vu en début de Mission 2\nirve_type",
"_____no_output_____"
],
[
"#Création d'un échantillon spécifique à la prévision\nirve_type_month = irve_type.copy()\nirve_type_month = irve_type_month[['Time', 'Accessible au public']].set_index('Time')\nirve_type_month = irve_type_month.resample('M').sum().reset_index()",
"_____no_output_____"
],
[
"#Intégration de deux lignes d'observations manquantes\nirve_type_month.loc[58] = ['2015-01-31 00:00:00', 0]\nirve_type_month.loc[59] = ['2015-02-28 00:00:00', 0]",
"_____no_output_____"
],
[
"#Mise en forme de l'échantillon\nirve_type_month['Time'] = pd.to_datetime(irve_type_month['Time'])\nirve_type_month = irve_type_month.sort_values(by='Time').reset_index(drop=True)",
"_____no_output_____"
],
[
"#Ventilation des valeurs trimestrielles /Mois\nseed(1)\nfor i, row in irve_type_month.iterrows():\n if row['Time'] < pd.Timestamp('2015-03-31') :\n irve_type_month.loc[i, 'Accessible au public'] = randint(5000, 8478)\n elif (row['Time'] > pd.Timestamp('2015-03-31')) & (row['Time'] < pd.Timestamp('2015-06-30')):\n irve_type_month.loc[i, 'Accessible au public'] = randint(8478, 10086)\n elif (row['Time'] > pd.Timestamp('2015-06-30')) & (row['Time'] < pd.Timestamp('2015-09-30')):\n irve_type_month.loc[i, 'Accessible au public'] = randint(10086, 10928)\n elif (row['Time'] > pd.Timestamp('2015-09-30')) & (row['Time'] < pd.Timestamp('2015-12-31')):\n irve_type_month.loc[i, 'Accessible au public'] = randint(10928, 11113) \n elif (row['Time'] > pd.Timestamp('2015-12-31')) & (row['Time'] < pd.Timestamp('2016-03-31')):\n irve_type_month.loc[i, 'Accessible au public'] = randint(11113, 12830) \n elif (row['Time'] > pd.Timestamp('2016-03-31')) & (row['Time'] < pd.Timestamp('2016-06-30')):\n irve_type_month.loc[i, 'Accessible au public'] = randint(12830, 13861) \n elif (row['Time'] > pd.Timestamp('2016-06-30')) & (row['Time'] < pd.Timestamp('2016-09-30')):\n irve_type_month.loc[i, 'Accessible au public'] = randint(12859, 13861) \n elif (row['Time'] > pd.Timestamp('2016-09-30')) & (row['Time'] < pd.Timestamp('2016-12-31')):\n irve_type_month.loc[i, 'Accessible au public'] = randint(13861, 16220) \n elif (row['Time'] > pd.Timestamp('2016-12-31')) & (row['Time'] < pd.Timestamp('2017-03-31')):\n irve_type_month.loc[i, 'Accessible au public'] = randint(16220, 17423) \n elif (row['Time'] > pd.Timestamp('2017-03-31')) & (row['Time'] < pd.Timestamp('2017-06-30')):\n irve_type_month.loc[i, 'Accessible au public'] = randint(17423, 19750) \n elif (row['Time'] > pd.Timestamp('2017-06-30')) & (row['Time'] < pd.Timestamp('2017-09-30')):\n irve_type_month.loc[i, 'Accessible au public'] = randint(19750, 20688)\n elif (row['Time'] > pd.Timestamp('2017-09-30')) & (row['Time'] < pd.Timestamp('2017-12-31')):\n irve_type_month.loc[i, 'Accessible au public'] = randint(19309, 20688) \n elif (row['Time'] > pd.Timestamp('2017-12-31')) & (row['Time'] < pd.Timestamp('2018-03-31')):\n irve_type_month.loc[i, 'Accessible au public'] = randint(19309, 26370) \n elif (row['Time'] > pd.Timestamp('2018-03-31')) & (row['Time'] < pd.Timestamp('2018-06-30')):\n irve_type_month.loc[i, 'Accessible au public'] = randint(22283, 26370) \n elif (row['Time'] > pd.Timestamp('2018-06-30')) & (row['Time'] < pd.Timestamp('2018-09-30')):\n irve_type_month.loc[i, 'Accessible au public'] = randint(22283, 24362) \n elif (row['Time'] > pd.Timestamp('2018-09-30')) & (row['Time'] < pd.Timestamp('2018-12-31')):\n irve_type_month.loc[i, 'Accessible au public'] = randint(24362, 26297) \n elif (row['Time'] > pd.Timestamp('2018-12-31')) & (row['Time'] < pd.Timestamp('2019-03-31')):\n irve_type_month.loc[i, 'Accessible au public'] = randint(26297, 27446) \n elif (row['Time'] > pd.Timestamp('2019-03-31')) & (row['Time'] < pd.Timestamp('2019-06-30')):\n irve_type_month.loc[i, 'Accessible au public'] = randint(27446, 28910) \n elif (row['Time'] > pd.Timestamp('2019-06-30')) & (row['Time'] < pd.Timestamp('2019-09-30')):\n irve_type_month.loc[i, 'Accessible au public'] = randint(28910, 31461) \n elif (row['Time'] > pd.Timestamp('2019-09-30')) & (row['Time'] < pd.Timestamp('2019-12-31')):\n irve_type_month.loc[i, 'Accessible au public'] = randint(30110, 31461) \n else :\n pass",
"_____no_output_____"
],
[
"#Affichage de l'échantillon \nirve_type_month",
"_____no_output_____"
],
[
"#Sauvegarde \nirve_type_month.to_csv('p8_datatable/irve_type_month.csv')",
"_____no_output_____"
],
[
"#Mise en oeuvre de l'algorithme Prophet (Facebook)\nfrom fbprophet import Prophet\n\npdc_forecast_prophet = irve_type_month.copy()\npdc_forecast_prophet = pdc_forecast_prophet[['Time', 'Accessible au public']]\npdc_forecast_prophet.rename(columns={'Time': 'ds', 'Accessible au public': 'y'}, inplace=True)\npdc_forecast_prophet.tail()",
"_____no_output_____"
],
[
"#Sauvegarde \npdc_forecast_prophet.to_csv('p8_datatable/pdc_forecast_prophet.csv')",
"_____no_output_____"
],
[
"#Instanciation et entrainement du modèle\nmodel = Prophet(yearly_seasonality=True, weekly_seasonality=False, daily_seasonality=False)\nmodel.fit(pdc_forecast_prophet)",
"_____no_output_____"
],
[
"#Prévision du nombre de Points de charge à 5 ans\nfuture = model.make_future_dataframe(periods=60, freq='M')\nforecast = model.predict(future)\nfig = model.plot(forecast)\nfig.savefig('p8_img/forecast_prophet_pdc.png')",
"_____no_output_____"
],
[
"#Affichage des 5 derniers mois de prévision\nforecast_pdc = model.predict(future)\nforecast_pdc[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()",
"_____no_output_____"
],
[
"#Sauvegarde \nforecast_pdc.to_csv('p8_datatable/forecast_pdc.csv')",
"_____no_output_____"
]
],
[
[
"D'ici fin 2024 le maillage de Points de charge pourrait être étendu à environ 56 000 connecteurs, selon la prédiction de l'algorithme Prophet.",
"_____no_output_____"
]
],
[
[
"#Préparation des données (observations + prévisions) pour Test statistique\nmetric_forecast_pdc = forecast_pdc.set_index('ds')[['yhat']].join(pdc_forecast_prophet.set_index('ds').y).reset_index()\nmetric_forecast_pdc.dropna(inplace=True)\nmetric_forecast_pdc",
"_____no_output_____"
],
[
"#Mesures statistiques permettant d'évaluer le modèle\nprint(\"R2 = \" + str(r2_score(metric_forecast_pdc['y'], metric_forecast_pdc['yhat'])))\nprint(\"MSE = \" + str(mean_squared_error(metric_forecast_pdc['y'], metric_forecast_pdc['yhat'])))\nprint(\"RMSE = \" + str(math.sqrt(mean_squared_error(metric_forecast_pdc['y'], metric_forecast_pdc['yhat']))))\nprint(\"MAE = \" + str(mean_absolute_error(metric_forecast_pdc['y'], metric_forecast_pdc['yhat'])))",
"R2 = 0.9847791336028472\nMSE = 797887.9309059462\nRMSE = 893.245728176713\nMAE = 740.1299762404773\n"
]
],
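[
[
"The metrics above are computed in-sample (the model is scored on the same points it was fitted on), which flatters the fit. Prophet ships a rolling-origin cross-validation utility that gives a more honest error estimate; a sketch with illustrative window sizes, assuming the fitted model above:\n\n```python\nfrom fbprophet.diagnostics import cross_validation, performance_metrics\n\ndf_cv = cross_validation(model, initial='730 days', period='90 days', horizon='365 days')\nperformance_metrics(df_cv)[['horizon', 'rmse', 'mae']].tail()\n```",
"_____no_output_____"
]
],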
[
[
"Les coefficients statistiques sont plus optimistes que ceux des précédentes prévisions. Le coefficient de détermination reste proche de 1, seulement les autres métriques d'écart sont assez élevées. En d'autres termes la robustesse du modèle n'est pas très satisfaisante.",
"_____no_output_____"
],
[
"<u>A des fins de comparaison, la méthode de Holt-winters est également exploitée.</u>",
"_____no_output_____"
]
],
[
[
"#Préparation des données\nirve_forecast_hw = irve_type_month.copy()\nirve_forecast_hw['Time'] = pd.to_datetime(irve_forecast_hw['Time'])\nirve_forecast_hw.set_index('Time', inplace=True)",
"_____no_output_____"
],
[
"#Méthode ExponentialSmoothing de statsmodels est utilisée pour la modélisation d'Holt-Winters.\nfrom statsmodels.tsa.api import ExponentialSmoothing\n\ny = np.array(irve_forecast_hw['Accessible au public'])\nhw = ExponentialSmoothing(y, seasonal_periods=12, trend='add', seasonal='add').fit()\nhw_pred = hw.forecast(60)",
"_____no_output_____"
],
[
"#Visualisation de la prévision à 5 ans par Holt-Winters\nplt.figure(figsize(16, 8))\nplt.plot(irve_forecast_hw['Accessible au public'], label='PDC')\nplt.plot(pd.date_range(irve_forecast_hw.index[len(y)-1], periods=60, freq='M'), \n hw_pred, label='Prévision Holt-Winters')\n\nplt.title(\"Points de charge ouverts au public en France d'ici 2024\")\n\nfig.savefig('p8_img/holtwinters_pdc.png')\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"#Affichage des valeurs prédites\nhw_pred",
"_____no_output_____"
]
],
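[
[
"To compare the two approaches on the same footing, the in-sample metrics used for Prophet can also be computed for Holt-Winters. A sketch assuming hw and y from the cells above and the sklearn/math imports made earlier in the notebook:\n\n```python\nhw_fitted = hw.fittedvalues\nprint('R2 =', r2_score(y, hw_fitted))\nprint('RMSE =', math.sqrt(mean_squared_error(y, hw_fitted)))\nprint('MAE =', mean_absolute_error(y, hw_fitted))\n```",
"_____no_output_____"
]
],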
[
[
"**Après ces deux modélisations, on peut conclure à un développement du réseau des points de charge (PDC ou Charging Point) entre 55 000 et 60 000 connecteurs d'ici fin 2024.** ",
"_____no_output_____"
],
[
"[Retour vers la page notebook précédente (Positionnement de la voiture électrique de 2010 à 2019 et prévision à 2 ans)](https://github.com/nalron/project_electric_cars_france2040/blob/french_version/p8_notebook01.ipynb)\n\n[Voir la suite du projet : Appel de charge au réseau électrique (Profilage d'un pic de consommation en 2040, etc…)](https://github.com/nalron/project_electric_cars_france2040/blob/french_version/p8_notebook03.ipynb)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
4a0766d0da30892dda19b0a4e49420bf71cd16f3
| 1,878 |
ipynb
|
Jupyter Notebook
|
Model booster.ipynb
|
shomerthesec/Machine-Learning-Models
|
94e988c6aaa6862034b0c285c32749f91056f080
|
[
"Unlicense"
] | null | null | null |
Model booster.ipynb
|
shomerthesec/Machine-Learning-Models
|
94e988c6aaa6862034b0c285c32749f91056f080
|
[
"Unlicense"
] | null | null | null |
Model booster.ipynb
|
shomerthesec/Machine-Learning-Models
|
94e988c6aaa6862034b0c285c32749f91056f080
|
[
"Unlicense"
] | null | null | null | 22.357143 | 94 | 0.555911 |
[
[
[
"# to use grid search for deciding the params\n\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.metrics import make_scorer, f1_score\n\nparameters = {'kernel':['poly', 'rbf'],'C':[0.1, 1, 10]} #decide the available params\n#create a score variable to store the score we want f1\nscorer = make_scorer(f1_score)\n# Create the object.\ngrid_obj = GridSearchCV(clf, parameters, scoring=scorer)\n# Fit the data\ngrid_fit = grid_obj.fit(X, y)\n\nbest_clf = grid_fit.best_estimator_",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code"
]
] |
4a079a061523a677ca819cb8dc220c6d5d92efcd
| 41,637 |
ipynb
|
Jupyter Notebook
|
02_Align.ipynb
|
tjtsai/SegmentalDTW
|
308e650a88d0f98e821154ae5b4c781911451ded
|
[
"MIT"
] | null | null | null |
02_Align.ipynb
|
tjtsai/SegmentalDTW
|
308e650a88d0f98e821154ae5b4c781911451ded
|
[
"MIT"
] | null | null | null |
02_Align.ipynb
|
tjtsai/SegmentalDTW
|
308e650a88d0f98e821154ae5b4c781911451ded
|
[
"MIT"
] | null | null | null | 36.395979 | 288 | 0.523693 |
[
[
[
"# Alignment",
"_____no_output_____"
],
[
"The goal of this notebook is to align files using DTW, weakly-ordered Segmental DTW, or strictly-ordered Segmental DTW.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n%load_ext Cython",
"_____no_output_____"
],
[
"import numpy as np\nimport matplotlib.pyplot as plt\nimport librosa as lb\nimport os.path\nfrom pathlib import Path\nimport pickle\nimport multiprocessing\nimport time\nimport gc",
"_____no_output_____"
]
],
[
[
"### Align with DTW",
"_____no_output_____"
],
[
"The following cell contains a cython implementation of basic DTW.",
"_____no_output_____"
]
],
[
[
"%%cython\nimport numpy as np\ncimport numpy as np\ncimport cython\n\nimport sys\nimport time\n\n\nDTYPE_INT32 = np.int32\nctypedef np.int32_t DTYPE_INT32_t\n\nDTYPE_FLOAT = np.float64\nctypedef np.float64_t DTYPE_FLOAT_t\n\ncdef DTYPE_FLOAT_t MAX_FLOAT = float('inf')\n\n# careful, without bounds checking can mess up memory - also can't use negative indices I think (like x[-1])\[email protected](False) # turn off bounds-checking for entire function\ndef DTW_Cost_To_AccumCostAndSteps(Cin, parameter):\n '''\n Inputs\n C: The cost Matrix\n '''\n\n\n '''\n Section for checking and catching errors in the inputs\n '''\n\n cdef np.ndarray[DTYPE_FLOAT_t, ndim=2] C\n try:\n C = np.array(Cin, dtype=DTYPE_FLOAT)\n except TypeError:\n print(bcolors.FAIL + \"FAILURE: The type of the cost matrix is wrong - please pass in a 2-d numpy array\" + bcolors.ENDC)\n return [-1, -1, -1]\n except ValueError:\n print(bcolors.FAIL + \"FAILURE: The type of the elements in the cost matrix is wrong - please have each element be a float (perhaps you passed in a matrix of ints?)\" + bcolors.ENDC)\n return [-1, -1, -1]\n\n cdef np.ndarray[np.uint32_t, ndim=1] dn\n cdef np.ndarray[np.uint32_t, ndim=1] dm\n cdef np.ndarray[DTYPE_FLOAT_t, ndim=1] dw\n # make sure dn, dm, and dw are setup\n # dn loading and exception handling\n if ('dn' in parameter.keys()):\n try:\n\n dn = np.array(parameter['dn'], dtype=np.uint32)\n except TypeError:\n print(bcolors.FAIL + \"FAILURE: The type of dn (row steps) is wrong - please pass in a 1-d numpy array that holds uint32s\" + bcolors.ENDC)\n return [-1, -1, -1]\n except ValueError:\n print(bcolors.FAIL + \"The type of the elements in dn (row steps) is wrong - please have each element be a uint32 (perhaps you passed a long?). You can specify this when making a numpy array like: np.array([1,2,3],dtype=np.uint32)\" + bcolors.ENDC)\n return [-1, -1, -1]\n else:\n dn = np.array([1, 1, 0], dtype=np.uint32)\n # dm loading and exception handling\n if 'dm' in parameter.keys():\n try:\n dm = np.array(parameter['dm'], dtype=np.uint32)\n except TypeError:\n print(bcolors.FAIL + \"FAILURE: The type of dm (col steps) is wrong - please pass in a 1-d numpy array that holds uint32s\" + bcolors.ENDC)\n return [-1, -1, -1]\n except ValueError:\n print(bcolors.FAIL + \"FAILURE: The type of the elements in dm (col steps) is wrong - please have each element be a uint32 (perhaps you passed a long?). You can specify this when making a numpy array like: np.array([1,2,3],dtype=np.uint32)\" + bcolors.ENDC)\n return [-1, -1, -1]\n else:\n print(bcolors.FAIL + \"dm (col steps) was not passed in (gave default value [1,0,1]) \" + bcolors.ENDC)\n dm = np.array([1, 0, 1], dtype=np.uint32)\n # dw loading and exception handling\n if 'dw' in parameter.keys():\n try:\n dw = np.array(parameter['dw'], dtype=DTYPE_FLOAT)\n except TypeError:\n print(bcolors.FAIL + \"FAILURE: The type of dw (step weights) is wrong - please pass in a 1-d numpy array that holds floats\" + bcolors.ENDC)\n return [-1, -1, -1]\n except ValueError:\n print(bcolors.FAIL + \"FAILURE:The type of the elements in dw (step weights) is wrong - please have each element be a float (perhaps you passed ints or a long?). 
You can specify this when making a numpy array like: np.array([1,2,3],dtype=np.float64)\" + bcolors.ENDC)\n return [-1, -1, -1]\n else:\n dw = np.array([1, 1, 1], dtype=DTYPE_FLOAT)\n print(bcolors.FAIL + \"dw (step weights) was not passed in (gave default value [1,1,1]) \" + bcolors.ENDC)\n\n \n '''\n Section where types are given to the variables we're going to use \n '''\n # create matrices to store our results (D and E)\n cdef DTYPE_INT32_t numRows = C.shape[0] # only works with np arrays, use np.shape(x) will work on lists? want to force to use np though?\n cdef DTYPE_INT32_t numCols = C.shape[1]\n cdef DTYPE_INT32_t numDifSteps = np.size(dw)\n\n cdef unsigned int maxRowStep = max(dn)\n cdef unsigned int maxColStep = max(dm)\n\n cdef np.ndarray[np.uint32_t, ndim=2] steps = np.zeros((numRows,numCols), dtype=np.uint32)\n cdef np.ndarray[DTYPE_FLOAT_t, ndim=2] accumCost = np.ones((maxRowStep + numRows, maxColStep + numCols), dtype=DTYPE_FLOAT) * MAX_FLOAT\n\n cdef DTYPE_FLOAT_t bestCost\n cdef DTYPE_INT32_t bestCostIndex\n cdef DTYPE_FLOAT_t costForStep\n cdef unsigned int row, col\n cdef unsigned int stepIndex\n\n '''\n The start of the actual algorithm, now that all our variables are set up\n '''\n # initializing the cost matrix - depends on whether its subsequence DTW\n # essentially allow us to hop on the bottom anywhere (so could start partway through one of the signals)\n if parameter['SubSequence']:\n for col in range(numCols):\n accumCost[maxRowStep, col + maxColStep] = C[0, col]\n else:\n accumCost[maxRowStep, maxColStep] = C[0,0]\n\n # filling the accumulated cost matrix\n for row in range(maxRowStep, numRows + maxRowStep, 1):\n for col in range(maxColStep, numCols + maxColStep, 1):\n bestCost = accumCost[<unsigned int>row, <unsigned int>col] # initialize with what's there - so if is an entry point, then can start low\n bestCostIndex = 0\n # go through each step, find the best one\n for stepIndex in range(numDifSteps):\n #costForStep = accumCost[<unsigned int>(row - dn[<unsigned int>(stepIndex)]), <unsigned int>(col - dm[<unsigned int>(stepIndex)])] + dw[<unsigned int>(stepIndex)] * C[<unsigned int>(row - maxRowStep), <unsigned int>(col - maxColStep)]\n costForStep = accumCost[<unsigned int>((row - dn[(stepIndex)])), <unsigned int>((col - dm[(stepIndex)]))] + dw[stepIndex] * C[<unsigned int>(row - maxRowStep), <unsigned int>(col - maxColStep)]\n if costForStep < bestCost:\n bestCost = costForStep\n bestCostIndex = stepIndex\n # save the best cost and best cost index\n accumCost[row, col] = bestCost\n steps[<unsigned int>(row - maxRowStep), <unsigned int>(col - maxColStep)] = bestCostIndex\n\n # return the accumulated cost along with the matrix of steps taken to achieve that cost\n return [accumCost[maxRowStep:, maxColStep:], steps]\n\[email protected](False) # turn off bounds-checking for entire function\ndef DTW_GetPath(np.ndarray[DTYPE_FLOAT_t, ndim=2] accumCost, np.ndarray[np.uint32_t, ndim=2] stepsForCost, parameter):\n '''\n\n Parameter should have: 'dn', 'dm', 'dw', 'SubSequence'\n '''\n\n cdef np.ndarray[unsigned int, ndim=1] dn\n cdef np.ndarray[unsigned int, ndim=1] dm\n cdef np.uint8_t subseq\n cdef np.int32_t startCol # added\n # make sure dn, dm, and dw are setup\n if ('dn' in parameter.keys()):\n dn = parameter['dn']\n else:\n dn = np.array([1, 1, 0], dtype=DTYPE_INT32)\n if 'dm' in parameter.keys():\n dm = parameter['dm']\n else:\n dm = np.array([1, 0, 1], dtype=DTYPE_INT32)\n if 'SubSequence' in parameter.keys():\n subseq = parameter['SubSequence']\n else:\n 
subseq = 0\n \n # added START\n if 'startCol' in parameter.keys(): \n startCol = parameter['startCol']\n else:\n startCol = -1\n # added END\n\n cdef np.uint32_t numRows\n cdef np.uint32_t numCols\n cdef np.uint32_t curRow\n cdef np.uint32_t curCol\n cdef np.uint32_t endCol\n cdef DTYPE_FLOAT_t endCost\n\n numRows = accumCost.shape[0]\n numCols = accumCost.shape[1]\n\n # either start at the far corner (non sub-sequence)\n # or start at the lowest cost entry in the last row (sub-sequence)\n # where all of the signal along the row has been used, but only a \n # sub-sequence of the signal along the columns has to be used\n curRow = numRows - 1\n if subseq:\n curCol = np.argmin(accumCost[numRows - 1, :])\n else:\n curCol = numCols - 1\n \n # added - if specified, overrides above\n if startCol >= 0:\n curCol = startCol\n\n endCol = curCol\n endCost = accumCost[curRow, curCol]\n\n cdef np.uint32_t curRowStep\n cdef np.uint32_t curColStep\n cdef np.uint32_t curStepIndex\n\n\n cdef np.ndarray[np.uint32_t, ndim=2] path = np.zeros((2, numRows + numCols), dtype=np.uint32) # make as large as could need, then chop at the end\n path[0, 0] = curRow\n path[1, 0] = curCol\n\n cdef np.uint32_t stepsInPath = 1 # starts at one, we add in one before looping\n cdef np.uint32_t stepIndex = 0\n cdef np.int8_t done = (subseq and curRow == 0) or (curRow == 0 and curCol == 0)\n while not done:\n if accumCost[curRow, curCol] == MAX_FLOAT:\n print('A path is not possible')\n break\n\n # you're done if you've made it to the bottom left (non sub-sequence)\n # or just the bottom (sub-sequence)\n # find the step size\n curStepIndex = stepsForCost[curRow, curCol]\n curRowStep = dn[curStepIndex]\n curColStep = dm[curStepIndex]\n # backtrack by 1 step\n curRow = curRow - curRowStep\n curCol = curCol - curColStep\n # add your new location onto the path\n path[0, stepsInPath] = curRow\n path[1, stepsInPath] = curCol\n stepsInPath = stepsInPath + 1\n # check to see if you're done\n done = (subseq and curRow == 0) or (curRow == 0 and curCol == 0)\n\n # reverse the path (a matrix with two rows) and return it\n return [np.fliplr(path[:, 0:stepsInPath]), endCol, endCost]\n\nclass bcolors:\n HEADER = '\\033[95m'\n OKBLUE = '\\033[94m'\n OKGREEN = '\\033[92m'\n WARNING = '\\033[93m'\n FAIL = '\\033[91m'\n ENDC = '\\033[0m'\n BOLD = '\\033[1m'\n UNDERLINE = '\\033[4m'",
"_____no_output_____"
],
[
"def alignDTW(featfile1, featfile2, steps, weights, downsample, outfile = None, profile = False):\n \n F1 = np.load(featfile1) # 12 x N\n F2 = np.load(featfile2) # 12 x M\n if max(F1.shape[1], F2.shape[1]) / min(F1.shape[1], F2.shape[1]) >= 2: # no valid path possible\n if outfile:\n pickle.dump(None, open(outfile, 'wb'))\n return None\n times = []\n times.append(time.time())\n C = 1 - F1[:,0::downsample].T @ F2[:,0::downsample] # cos distance metric\n times.append(time.time())\n\n dn = steps[:,0].astype(np.uint32)\n dm = steps[:,1].astype(np.uint32)\n parameters = {'dn': dn, 'dm': dm, 'dw': weights, 'SubSequence': False}\n [D, s] = DTW_Cost_To_AccumCostAndSteps(C, parameters)\n times.append(time.time())\n [wp, endCol, endCost] = DTW_GetPath(D, s, parameters)\n times.append(time.time())\n if outfile:\n pickle.dump(wp, open(outfile, 'wb'))\n \n if profile:\n return wp, np.diff(times)\n else:\n return wp",
"_____no_output_____"
],
[
"def alignDTW_batch(querylist, featdir1, featdir2, outdir, n_cores, steps, weights, downsample):\n \n outdir.mkdir(parents=True, exist_ok=True)\n \n # prep inputs for parallelization\n inputs = []\n with open(querylist, 'r') as f:\n for line in f:\n parts = line.strip().split(' ')\n assert len(parts) == 2\n featfile1 = (featdir1 / parts[0]).with_suffix('.npy')\n featfile2 = (featdir2 / parts[1]).with_suffix('.npy')\n queryid = os.path.basename(parts[0]) + '__' + os.path.basename(parts[1])\n outfile = (outdir / queryid).with_suffix('.pkl')\n if os.path.exists(outfile):\n print(f\"Skipping {outfile}\")\n else:\n inputs.append((featfile1, featfile2, steps, weights, downsample, outfile))\n\n # process files in parallel\n pool = multiprocessing.Pool(processes = n_cores)\n pool.starmap(alignDTW, inputs)\n \n return",
"_____no_output_____"
]
],
[
[
"Align a single pair of audio files",
"_____no_output_____"
]
],
[
[
"featfile1 = 'features/clean/Chopin_Op068No3/Chopin_Op068No3_Tomsic-1995_pid9190-11.npy'\nfeatfile2 = 'features/clean/Chopin_Op068No3/Chopin_Op068No3_Cortot-1951_pid9066b-19.npy'\nsteps = np.array([1,1,1,2,2,1]).reshape((-1,2))\nweights = np.array([2,3,3])\ndownsample = 1\nwp = alignDTW(featfile1, featfile2, steps, weights, downsample)",
"_____no_output_____"
]
],
[
[
"Align all pairs of audio files",
"_____no_output_____"
]
],
[
[
"query_list = 'cfg_files/query.test.list'\nfeatdir1 = Path('features/clean')\nfeatdir2 = Path('features/clean') # in case you want to align clean vs noisy\noutdir = Path('experiments_test/dtw_clean')\nn_cores = 1\nsteps = np.array([1,1,1,2,2,1]).reshape((-1,2))\nweights = np.array([2,3,3])\ndownsample = 1\ninputs = alignDTW_batch(query_list, featdir1, featdir2, outdir, n_cores, steps, weights, downsample)",
"_____no_output_____"
]
],
[
[
"### Align with WSDTW",
"_____no_output_____"
],
[
"Align with weakly-ordered Segmental DTW.",
"_____no_output_____"
]
],
[
[
"def alignWSDTW(featfile1, featfile2, steps, weights, downsample, numSegments, outfile = None, profile = False):\n \n # compute cost matrix\n F1 = np.load(featfile1) # 12 x N\n F2 = np.load(featfile2) # 12 x M\n if max(F1.shape[1], F2.shape[1]) / min(F1.shape[1], F2.shape[1]) >= 2: # no valid path possible\n if outfile:\n pickle.dump(None, open(outfile, 'wb'))\n return None\n times = []\n times.append(time.time())\n C = 1 - F1[:,0::downsample].T @ F2[:,0::downsample] # cos distance metric\n times.append(time.time())\n\n # run subseqDTW on chunks\n seglen = int(np.ceil(C.shape[0] / numSegments))\n dn1 = steps[:,0].astype(np.uint32)\n dm1 = steps[:,1].astype(np.uint32)\n dw1 = weights\n params1 = {'dn': dn1, 'dm': dm1, 'dw': dw1, 'SubSequence': True}\n Dparts = []\n Bparts = []\n for i in range(numSegments):\n Cpart = C[i*seglen : min((i+1)*seglen, C.shape[0]), :]\n [D, B] = DTW_Cost_To_AccumCostAndSteps(Cpart, params1)\n Dparts.append(D)\n Bparts.append(B)\n times.append(time.time())\n\n # run segment-level DP\n Cseg = np.zeros((numSegments+1, C.shape[1]))\n for i in range(len(Dparts)):\n Cseg[i+1,:] = Dparts[i][-1,:]\n dn2 = np.array([0, 1], dtype=np.uint32)\n dm2 = np.array([1, seglen//np.max(steps[:,0])], dtype=np.uint32)\n dw2 = np.array([0, 1])\n params2 = {'dn': dn2, 'dm': dm2, 'dw': dw2, 'SubSequence': False}\n [Dseg, Bseg] = DTW_Cost_To_AccumCostAndSteps(Cseg, params2)\n times.append(time.time())\n [wpseg, _, _] = DTW_GetPath(Dseg, Bseg, params2)\n\n # backtrace\n segmentEndIdxs = getSegmentEndingLocs(wpseg)\n times.append(time.time())\n wps = []\n for i, endidx in enumerate(segmentEndIdxs):\n params3 = {'dn': dn1, 'dm': dm1, 'dw': dw1, 'SubSequence': True, 'startCol': endidx}\n [wpchunk, _, _] = DTW_GetPath(Dparts[i], Bparts[i], params3)\n wpchunk[0,:] = wpchunk[0,:] + i*seglen # account for relative offset\n wps.append(wpchunk.copy())\n wp_merged = np.hstack(wps)\n times.append(time.time())\n\n if outfile:\n pickle.dump(wp_merged, open(outfile, 'wb'))\n\n if profile:\n return wp_merged, np.diff(times)\n else:\n return wp_merged",
"_____no_output_____"
],
[
"def getSegmentEndingLocs(wp):\n prevLoc = wp[:,0] # [r,c]\n endLocs = []\n for i in range(wp.shape[1]):\n curLoc = wp[:,i]\n if curLoc[0] != prevLoc[0]: # if row changes\n endLocs.append(curLoc[1])\n prevLoc = curLoc\n \n return endLocs",
"_____no_output_____"
],
[
"def alignSegmentalDTW_batch(querylist, featdir1, featdir2, outdir, n_cores, steps, weights, downsample, numSegments, fn):\n\n outdir.mkdir(parents=True, exist_ok=True)\n \n # prep inputs for parallelization\n inputs = []\n with open(querylist, 'r') as f:\n for line in f:\n parts = line.strip().split(' ')\n assert len(parts) == 2\n featfile1 = (featdir1 / parts[0]).with_suffix('.npy')\n featfile2 = (featdir2 / parts[1]).with_suffix('.npy')\n queryid = os.path.basename(parts[0]) + '__' + os.path.basename(parts[1])\n outfile = (outdir / queryid).with_suffix('.pkl')\n if os.path.exists(outfile):\n print(f\"Skipping {outfile}\")\n else:\n inputs.append((featfile1, featfile2, steps, weights, downsample, numSegments, outfile))\n\n # process files in parallel\n pool = multiprocessing.Pool(processes = n_cores)\n pool.starmap(fn, inputs)\n\n return",
"_____no_output_____"
]
],
[
[
"Align a single pair of audio files",
"_____no_output_____"
]
],
[
[
"featfile1 = 'features/clean/Chopin_Op017No4/Chopin_Op017No4_Afanassiev-2001_pid9130-01.npy'\nfeatfile2 = 'features/clean/Chopin_Op017No4/Chopin_Op017No4_Ben-Or-1989_pid9161-12.npy'\nsteps = np.array([1,1,1,2,2,1]).reshape((-1,2))\nweights = np.array([1,1,2])\ndownsample = 1\nnumSegments = 5\nwp = alignWSDTW(featfile1, featfile2, steps, weights, downsample, numSegments)",
"_____no_output_____"
]
],
[
[
"Align all pairs of audio files",
"_____no_output_____"
]
],
[
[
"query_list = 'cfg_files/query.test.list'\nfeatdir1 = Path('features/clean')\nfeatdir2 = Path('features/clean') # in case you want to align clean vs noisy\nn_cores = 1\nsteps = np.array([1,1,1,2,2,1]).reshape((-1,2))\nweights = np.array([1,1,2])\ndownsample = 1\nsegmentVals = [2, 4, 8, 16, 32] \nfor numSegments in segmentVals:\n outdir = Path(f'experiments_test/wsdtw_{numSegments}_clean')\n alignSegmentalDTW_batch(query_list, featdir1, featdir2, outdir, n_cores, steps, weights, downsample, numSegments, alignWSDTW)",
"_____no_output_____"
]
],
[
[
"### Align with SSDTW",
"_____no_output_____"
],
[
"Align with strictly-ordered Segmental DTW",
"_____no_output_____"
]
],
[
[
"%%cython\nimport numpy as np\ncimport numpy as np\ncimport cython\n\nimport sys\nimport time\n\n\nDTYPE_INT32 = np.int32\nctypedef np.int32_t DTYPE_INT32_t\n\nDTYPE_FLOAT = np.float64\nctypedef np.float64_t DTYPE_FLOAT_t\n\ncdef DTYPE_FLOAT_t MAX_FLOAT = float('inf')\n\n# careful, without bounds checking can mess up memory - also can't use negative indices I think (like x[-1])\[email protected](False) # turn off bounds-checking for entire function\ndef Segment_DP(np.ndarray[DTYPE_FLOAT_t, ndim=2] C, np.ndarray[np.int32_t, ndim=2] T):\n\n cdef DTYPE_INT32_t numRows = C.shape[0]\n cdef DTYPE_INT32_t numCols = C.shape[1] \n cdef np.ndarray[np.int32_t, ndim=2] steps = np.zeros((numRows+1,numCols), dtype=np.int32)\n cdef np.ndarray[DTYPE_FLOAT_t, ndim=2] accumCost = np.ones((numRows+1, numCols), dtype=DTYPE_FLOAT) * MAX_FLOAT\n\n cdef unsigned int row, col\n cdef DTYPE_FLOAT_t skipCost\n cdef np.int32_t jumpStartCol\n cdef DTYPE_FLOAT_t jumpCost\n\n # initialize\n for row in range(numRows+1):\n for col in range(numCols):\n steps[row, col] = -1 # skip by default\n for col in range(numCols):\n accumCost[0, col] = 0 # all inf except first row\n \n # dynamic programming\n for row in range(1, numRows+1):\n for col in range(numCols):\n \n # skip transition\n if col == 0:\n skipCost = MAX_FLOAT\n else:\n skipCost = accumCost[row, col-1]\n accumCost[row, col] = skipCost\n # best step is skip by default, so don't need to assign\n \n # jump transition\n jumpStartCol = T[row-1, col]\n if jumpStartCol >= 0: # valid subsequence path\n jumpCost = accumCost[row-1, jumpStartCol] + C[row-1, col]\n if jumpCost < skipCost:\n accumCost[row, col] = jumpCost\n steps[row, col] = jumpStartCol\n\n return [accumCost, steps]\n\[email protected](False) # turn off bounds-checking for entire function\ndef Segment_Backtrace(np.ndarray[DTYPE_FLOAT_t, ndim=2] accumCost, np.ndarray[np.int32_t, ndim=2] steps):\n\n cdef np.uint32_t numRows = accumCost.shape[0]\n cdef np.uint32_t numCols = accumCost.shape[1]\n cdef np.uint32_t curRow = numRows - 1\n cdef np.uint32_t curCol = numCols - 1\n cdef np.int32_t jump\n cdef np.ndarray[np.uint32_t, ndim=1] path = np.zeros(numRows-1, dtype=np.uint32)\n cdef np.uint32_t pathElems = 0\n\n while curRow > 0:\n if accumCost[curRow, curCol] == MAX_FLOAT:\n print('A path is not possible')\n break\n\n jump = steps[curRow, curCol]\n if jump < 0: # skip\n curCol = curCol - 1\n else: # jump\n path[pathElems] = curCol\n pathElems = pathElems + 1\n curRow = curRow - 1\n curCol = jump\n\n return path[::-1]\n\[email protected](False) # turn off bounds-checking for entire function\ndef calc_Tseg(np.ndarray[DTYPE_FLOAT_t, ndim=2] accumCost, np.ndarray[np.uint32_t, ndim=2] stepsForCost, parameter):\n '''\n\n Parameter should have: 'dn', 'dm'\n '''\n\n cdef np.ndarray[unsigned int, ndim=1] dn\n cdef np.ndarray[unsigned int, ndim=1] dm\n cdef np.uint32_t numRows = accumCost.shape[0]\n cdef np.uint32_t numCols = accumCost.shape[1]\n cdef np.ndarray[np.int32_t, ndim=1] startLocs = np.zeros(numCols, dtype=np.int32)\n cdef np.uint32_t endCol\n cdef np.uint32_t curRow\n cdef np.uint32_t curCol\n cdef np.uint32_t curStepIndex\n\n # get step transitions\n if ('dn' in parameter.keys()):\n dn = parameter['dn']\n else:\n dn = np.array([1, 1, 0], dtype=DTYPE_INT32)\n if 'dm' in parameter.keys():\n dm = parameter['dm']\n else:\n dm = np.array([1, 0, 1], dtype=DTYPE_INT32)\n\n # backtrace from every location\n for endCol in range(numCols):\n curCol = endCol\n curRow = numRows - 1\n while curRow > 0:\n if accumCost[curRow, 
curCol] == MAX_FLOAT: # no valid path\n startLocs[curCol] = -1\n break\n\n curStepIndex = stepsForCost[curRow, curCol]\n curRow = curRow - dn[curStepIndex]\n curCol = curCol - dm[curStepIndex]\n if curRow == 0:\n startLocs[endCol] = curCol\n \n return startLocs\n\nclass bcolors:\n HEADER = '\\033[95m'\n OKBLUE = '\\033[94m'\n OKGREEN = '\\033[92m'\n WARNING = '\\033[93m'\n FAIL = '\\033[91m'\n ENDC = '\\033[0m'\n BOLD = '\\033[1m'\n UNDERLINE = '\\033[4m'",
"_____no_output_____"
],
[
"def alignSSDTW(featfile1, featfile2, steps, weights, downsample, numSegments, outfile = None, profile = False):\n \n # compute cost matrix\n F1 = np.load(featfile1) # 12 x N\n F2 = np.load(featfile2) # 12 x M\n swap = (F1.shape[1] > F2.shape[1])\n if swap:\n F1, F2 = F2, F1 # make the shorter sequence the query\n if max(F1.shape[1], F2.shape[1]) / min(F1.shape[1], F2.shape[1]) >= 2: # no valid path possible\n if outfile:\n pickle.dump(None, open(outfile, 'wb'))\n return None\n times = []\n times.append(time.time())\n C = 1 - F1[:,0::downsample].T @ F2[:,0::downsample] # cos distance metric\n times.append(time.time())\n \n # run subseqDTW on chunks\n seglen = int(np.ceil(F1.shape[1] / numSegments))\n dn = steps[:,0].astype(np.uint32)\n dm = steps[:,1].astype(np.uint32)\n dw = weights\n params1 = {'dn': dn, 'dm': dm, 'dw': dw, 'SubSequence': True}\n Dparts = []\n Bparts = []\n for i in range(numSegments):\n Cpart = C[i*seglen : min((i+1)*seglen, F1.shape[1]), :]\n [D, B] = DTW_Cost_To_AccumCostAndSteps(Cpart, params1)\n Dparts.append(D)\n Bparts.append(B)\n times.append(time.time())\n \n # construct Cseg, Tseg\n Cseg = np.zeros((numSegments, F2.shape[1]))\n Tseg = np.zeros((numSegments, F2.shape[1]), dtype=np.int32)\n for i, Dpart in enumerate(Dparts):\n Cseg[i,:] = Dpart[-1,:]\n Tseg[i,:] = calc_Tseg(Dpart, Bparts[i], params1)\n times.append(time.time())\n \n # segment-level DP\n [Dseg, Bseg] = Segment_DP(Cseg, Tseg)\n times.append(time.time())\n segmentEndIdxs = Segment_Backtrace(Dseg, Bseg)\n times.append(time.time())\n \n # backtrace on chunks\n wps = []\n for i, endidx in enumerate(segmentEndIdxs):\n params2 = {'dn': dn, 'dm': dm, 'dw': dw, 'SubSequence': True, 'startCol': endidx}\n [wpchunk, _, _] = DTW_GetPath(Dparts[i], Bparts[i], params2)\n wpchunk[0,:] = wpchunk[0,:] + i*seglen # account for relative offset\n wps.append(wpchunk.copy())\n wp_merged = np.hstack(wps)\n times.append(time.time())\n \n if swap:\n wp_merged = np.flipud(wp_merged) # undo swap\n \n if outfile:\n pickle.dump(wp_merged, open(outfile, 'wb'))\n \n if profile:\n return wp_merged, np.diff(times)\n else:\n return wp_merged",
"_____no_output_____"
]
],
[
[
"Align a single pair of audio files",
"_____no_output_____"
]
],
[
[
"featfile1 = 'features/clean/Chopin_Op017No4/Chopin_Op017No4_Afanassiev-2001_pid9130-01.npy'\nfeatfile2 = 'features/clean/Chopin_Op017No4/Chopin_Op017No4_Ben-Or-1989_pid9161-12.npy'\nsteps = np.array([1,1,1,2,2,1]).reshape((-1,2))\nweights = np.array([1,1,2])\ndownsample = 1\nnumSegments = 5\nwp = alignSSDTW(featfile1, featfile2, steps, weights, downsample, numSegments)",
"_____no_output_____"
]
],
[
[
"Align all pairs of audio files",
"_____no_output_____"
]
],
[
[
"query_list = 'cfg_files/query.test.list'\nfeatdir1 = Path('features/clean')\nfeatdir2 = Path('features/clean') # in case you want to align clean vs noisy\nn_cores = 1\nsteps = np.array([1,1,1,2,2,1]).reshape((-1,2))\nweights = np.array([1,1,2])\ndownsample = 1\nsegmentVals = [2, 4, 8, 16, 32]\nfor numSegments in segmentVals:\n outdir = Path(f'experiments_test/ssdtw_{numSegments}_clean')\n alignSegmentalDTW_batch(query_list, featdir1, featdir2, outdir, n_cores, steps, weights, downsample, numSegments, alignSSDTW)",
"_____no_output_____"
]
],
[
[
"### Runtime Profiling",
"_____no_output_____"
],
[
"Measure runtime of different DTW variants on cost matrices of varying sizes.",
"_____no_output_____"
]
],
[
[
"def saveRandomFeatureMatrices(sizes, outdir):\n \n if not os.path.exists(outdir):\n os.mkdir(outdir)\n \n np.random.seed(0)\n for sz in sizes:\n F = np.random.rand(12, sz)\n outfile = outdir + f'/F_{sz}.npy'\n np.save(outfile, F)\n \n return",
"_____no_output_____"
],
[
"sizes = [1000, 2000, 5000, 10000, 20000, 50000]\nrand_feat_dir = 'features/random'\nsaveRandomFeatureMatrices(sizes, rand_feat_dir)",
"_____no_output_____"
]
],
[
[
"Profiling DTW",
"_____no_output_____"
]
],
[
[
"# DTW\noutfile = 'dtw_prof.pkl'\nsteps = np.array([1,1,1,2,2,1]).reshape((-1,2))\nweights = np.array([2,3,3])\ndownsample = 1\nsizes = [50000, 20000, 10000, 5000, 2000, 1000]\nN = 10\ndurs = np.zeros((len(sizes), N, 3)) # DTW runtime is broken into 3 parts\nfor i in range(len(sizes)):\n sz = sizes[i]\n print(f'Running size = {sz} ', end='')\n featfile = rand_feat_dir + f'/F_{sz}.npy'\n for j in range(N):\n print('.', end='')\n gc.collect()\n _, times = alignDTW(featfile, featfile, steps, weights, downsample, profile=True)\n durs[i,j,:] = np.array(times)\n print('')\npickle.dump([durs, sizes], open(outfile, 'wb'))",
"_____no_output_____"
]
],
[
[
"Profiling WSDTW",
"_____no_output_____"
]
],
[
[
"# WSDTW\noutfile = 'wsdtw_prof.pkl'\nsteps = np.array([1,1,1,2,2,1]).reshape((-1,2))\nweights = np.array([1,1,2])\ndownsample = 1\nsegmentVals = [2, 4, 8, 16, 32]\nsizes = [50000, 20000, 10000, 5000, 2000, 1000]\nN = 10\ndurs = np.zeros((len(segmentVals), len(sizes), N, 5)) # WSDTW runtime is broken into 5 parts\nfor i, numSegments in enumerate(segmentVals):\n print(f'Running numSegments = {numSegments} ', end='')\n for j, sz in enumerate(sizes):\n print('|', end='')\n featfile = rand_feat_dir + f'/F_{sz}.npy'\n for k in range(N):\n print('.', end='')\n gc.collect()\n _, times = alignWSDTW(featfile, featfile, steps, weights, downsample, numSegments, profile=True)\n durs[i,j,k,:] = np.array(times)\n print('')\npickle.dump([durs, segmentVals, sizes], open(outfile, 'wb'))",
"_____no_output_____"
]
],
[
[
"Profiling SSDTW",
"_____no_output_____"
]
],
[
[
"# SSDTW\noutfile = 'ssdtw_prof.pkl'\nsteps = np.array([1,1,1,2,2,1]).reshape((-1,2))\nweights = np.array([1,1,2])\ndownsample = 1\nsegmentVals = [2, 4, 8, 16, 32]\nsizes = [50000, 20000, 10000, 5000, 2000, 1000]\nN = 10\ndurs = np.zeros((len(segmentVals), len(sizes), N, 6)) # SSDTW runtime is broken into 6 parts\nfor i, numSegments in enumerate(segmentVals):\n print(f'Running numSegments = {numSegments} ', end='')\n for j, sz in enumerate(sizes):\n print('|', end='')\n featfile = rand_feat_dir + f'/F_{sz}.npy'\n for k in range(N):\n print('.', end='')\n gc.collect()\n _, times = alignSSDTW(featfile, featfile, steps, weights, downsample, numSegments, profile=True)\n durs[i,j,k,:] = np.array(times)\n print('')\npickle.dump([durs, segmentVals, sizes], open(outfile, 'wb'))",
"_____no_output_____"
]
],
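[
[
"The profiling cells above only save the raw timings to pickle files. As a hedged sketch (an addition, not part of the original experiments), the saved DTW profile can be summarized by reporting the median total runtime for each cost-matrix size; the file name `dtw_prof.pkl` and the `[durs, sizes]` layout follow the cell above.\n\n```python\nimport pickle\nimport numpy as np\n\n# durs has shape (len(sizes), N, 3): sizes x repetitions x timed stages\ndurs, sizes = pickle.load(open('dtw_prof.pkl', 'rb'))\ntotal = durs.sum(axis=2)  # total runtime per repetition\nfor sz, med in zip(sizes, np.median(total, axis=1)):\n    print(f'size {sz}: median runtime {med:.2f} s')\n```",
"_____no_output_____"
]
],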
[
[
"### Comparing Alignments on Random Data",
"_____no_output_____"
],
[
"See how closely Segmental DTW variants match DTW alignments on random cost matrices.",
"_____no_output_____"
]
],
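[
[
"One hedged way (an added sketch, not from the original notebook) to quantify how closely the segmental warping paths `wp2` and `wp3` computed below track the full-DTW path `wp` is to interpolate each path's column index onto a common grid of row indices and average the absolute deviation in frames.\n\n```python\nimport numpy as np\n\ndef path_deviation(ref, est):\n    # both paths are 2 x L arrays of (row, column) indices with rows increasing\n    rows = np.arange(ref[0, :].min(), ref[0, :].max() + 1)\n    ref_cols = np.interp(rows, ref[0, :], ref[1, :])\n    est_cols = np.interp(rows, est[0, :], est[1, :])\n    return np.mean(np.abs(ref_cols - est_cols))\n\n# after the cells below have defined wp, wp2, wp3:\n# print('WSDTW vs DTW:', path_deviation(wp, wp2))\n# print('SSDTW vs DTW:', path_deviation(wp, wp3))\n```",
"_____no_output_____"
]
],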
[
[
"def saveRandomFeatureMatrices2(sz, N, outdir):\n \n if not os.path.exists(outdir):\n os.mkdir(outdir)\n \n np.random.seed(0)\n for i in range(N):\n F = np.random.rand(12, sz)\n norm_factor = np.sqrt(np.sum(F*F, axis=0))\n F = F / norm_factor\n outfile = outdir + f'/F_{sz}_{i}.npy'\n np.save(outfile, F)\n \n return",
"_____no_output_____"
],
[
"sz = 10000\nN = 10\nrand_feat_dir = 'features/random'\nsaveRandomFeatureMatrices2(sz, N, rand_feat_dir)",
"_____no_output_____"
],
[
"featfile1 = 'features/random/F_10000_0.npy'\nfeatfile2 = 'features/random/F_10000_6.npy'\n\n# DTW\nsteps = np.array([1,1,1,2,2,1]).reshape((-1,2))\nweights = np.array([2,3,3])\ndownsample = 1\nwp = alignDTW(featfile1, featfile2, steps, weights, downsample)\n\n# Segmental DTW variants\nsteps = np.array([1,1,1,2,2,1]).reshape((-1,2))\nweights = np.array([1,1,2])\nnumSegments = 16\nwp2 = alignWSDTW(featfile1, featfile2, steps, weights, downsample, numSegments)\nwp3 = alignSSDTW(featfile1, featfile2, steps, weights, downsample, numSegments)",
"_____no_output_____"
],
[
"plt.plot(wp[0,:], wp[1,:], 'k')\nplt.plot(wp2[0,:], wp2[1,:], 'r')\nplt.plot(wp3[0,:], wp3[1,:], 'b')",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
4a079e64f3537c95bd3ce10446acd59511eee526
| 24,741 |
ipynb
|
Jupyter Notebook
|
notebooks/.ipynb_checkpoints/dynestylinreg-checkpoint.ipynb
|
ssagear/metallicity
|
62a301d9c6bb3b37a395dc6f8c3130e836f9e095
|
[
"MIT"
] | 2 |
2021-03-04T02:50:36.000Z
|
2021-11-14T10:55:44.000Z
|
notebooks/.ipynb_checkpoints/dynestylinreg-checkpoint.ipynb
|
ssagear/metallicity
|
62a301d9c6bb3b37a395dc6f8c3130e836f9e095
|
[
"MIT"
] | null | null | null |
notebooks/.ipynb_checkpoints/dynestylinreg-checkpoint.ipynb
|
ssagear/metallicity
|
62a301d9c6bb3b37a395dc6f8c3130e836f9e095
|
[
"MIT"
] | null | null | null | 96.268482 | 18,096 | 0.848308 |
[
[
[
"# Linear Regression",
"_____no_output_____"
],
[
"## Setup",
"_____no_output_____"
],
[
"First, let's set up some environmental dependencies. These just make the numerics easier and adjust some of the plotting defaults to make things more legible.",
"_____no_output_____"
]
],
[
[
"# Python 3 compatability\nfrom __future__ import division, print_function\nfrom six.moves import range\n\n# system functions that are always useful to have\nimport time, sys, os\n\n# basic numeric setup\nimport numpy as np\n\n# inline plotting\n%matplotlib inline\n\n# plotting\nimport matplotlib\nfrom matplotlib import pyplot as plt\n\n# seed the random number generator\nnp.random.seed(56101)",
"_____no_output_____"
],
[
"# re-defining plotting defaults\nfrom matplotlib import rcParams\nrcParams.update({'xtick.major.pad': '7.0'})\nrcParams.update({'xtick.major.size': '7.5'})\nrcParams.update({'xtick.major.width': '1.5'})\nrcParams.update({'xtick.minor.pad': '7.0'})\nrcParams.update({'xtick.minor.size': '3.5'})\nrcParams.update({'xtick.minor.width': '1.0'})\nrcParams.update({'ytick.major.pad': '7.0'})\nrcParams.update({'ytick.major.size': '7.5'})\nrcParams.update({'ytick.major.width': '1.5'})\nrcParams.update({'ytick.minor.pad': '7.0'})\nrcParams.update({'ytick.minor.size': '3.5'})\nrcParams.update({'ytick.minor.width': '1.0'})\nrcParams.update({'font.size': 30})",
"_____no_output_____"
],
[
"import dynesty",
"_____no_output_____"
]
],
[
[
"Linear regression is ubiquitous in research. In this example we'll fit a line \n\n$$ y=mx+b $$ \n\nto data where the error bars have been underestimated and need to be inflated by a factor $f$. This example is taken from the [emcee documentation](http://dan.iel.fm/emcee/current/user/line/).",
"_____no_output_____"
]
],
[
[
"# truth\nm_true = -0.9594\nb_true = 4.294\nf_true = 0.534\n\n# generate mock data\nN = 50\nx = np.sort(10 * np.random.rand(N))\nyerr = 0.1 + 0.5 * np.random.rand(N)\ny_true = m_true * x + b_true\ny = y_true + np.abs(f_true * y_true) * np.random.randn(N)\ny += yerr * np.random.randn(N)\n\n# plot results\nplt.figure(figsize=(10, 5))\nplt.errorbar(x, y, yerr=yerr, fmt='ko', ecolor='red')\nplt.plot(x, y_true, color='blue', lw=3)\nplt.xlabel(r'$X$')\nplt.ylabel(r'$Y$')\nplt.tight_layout()",
"_____no_output_____"
]
],
[
[
"We will assume the errors are Normal and impose uniform priors on $(m, b, \\ln f)$.",
"_____no_output_____"
]
],
[
[
"# log-likelihood\ndef loglike(theta):\n m, b, lnf = theta\n model = m * x + b\n inv_sigma2 = 1.0 / (yerr**2 + model**2 * np.exp(2 * lnf))\n \n return -0.5 * (np.sum((y-model)**2 * inv_sigma2 - np.log(inv_sigma2)))\n\n# prior transform\ndef prior_transform(utheta):\n um, ub, ulf = utheta\n m = 5.5 * um - 5.\n b = 10. * ub\n lnf = 11. * ulf - 10.\n \n return m, b, lnf",
"_____no_output_____"
]
],
[
[
"Let's sample from this distribution using multiple bounding ellipsoids and random \"staggers\" (and alternative to random walks).",
"_____no_output_____"
]
],
[
[
"dsampler = dynesty.DynamicNestedSampler(loglike, prior_transform, ndim=3,\n bound='multi', sample='rstagger')\ndsampler.run_nested()\ndres = dsampler.results",
"13136it [00:58, 576.57it/s, batch: 5 | bound: 122 | nc: 25 | ncall: 289231 | eff(%): 4.484 | loglstar: -38.542 < -34.180 < -34.488 | logz: -44.138 +/- 0.183 | stop: 1.607] "
]
],
[
[
"Let's see how we did.",
"_____no_output_____"
]
],
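[
[
"As a quick numerical check to go with the plots below (a hedged sketch; the helper names follow the dynesty documentation), the importance weights can be used to compute the posterior mean and compare it against the injected truths.\n\n```python\nimport numpy as np\nfrom dynesty import utils as dyfunc\n\nsamples = dres.samples                             # posterior samples\nweights = np.exp(dres.logwt - dres.logz[-1])       # normalized importance weights\nmean, cov = dyfunc.mean_and_cov(samples, weights)  # weighted mean and covariance\nprint('posterior mean:', mean)\nprint('truth:         ', [m_true, b_true, np.log(f_true)])\n```",
"_____no_output_____"
]
],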
[
[
"from dynesty import plotting as dyplot\n\ntruths = [m_true, b_true, np.log(f_true)]\nlabels = [r'$m$', r'$b$', r'$\\ln f$']\nfig, axes = dyplot.traceplot(dsampler.results, truths=truths, labels=labels,\n fig=plt.subplots(3, 2, figsize=(16, 12)))\nfig.tight_layout()",
"_____no_output_____"
],
[
"fig, axes = dyplot.cornerplot(dres, truths=truths, show_titles=True, \n title_kwargs={'y': 1.04}, labels=labels,\n fig=plt.subplots(3, 3, figsize=(15, 15)))",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4a07adb75b31ad3f19dd37b916948e73cb6dacc0
| 130,233 |
ipynb
|
Jupyter Notebook
|
engr1330jb/_build/jupyter_execute/lessons/lesson11/lesson11.ipynb
|
dustykat/engr-1330-psuedo-course
|
3e7e31a32a1896fcb1fd82b573daa5248e465a36
|
[
"CC0-1.0"
] | null | null | null |
engr1330jb/_build/jupyter_execute/lessons/lesson11/lesson11.ipynb
|
dustykat/engr-1330-psuedo-course
|
3e7e31a32a1896fcb1fd82b573daa5248e465a36
|
[
"CC0-1.0"
] | null | null | null |
engr1330jb/_build/jupyter_execute/lessons/lesson11/lesson11.ipynb
|
dustykat/engr-1330-psuedo-course
|
3e7e31a32a1896fcb1fd82b573daa5248e465a36
|
[
"CC0-1.0"
] | null | null | null | 28.435153 | 599 | 0.406518 |
[
[
[
"<div class=\"alert alert-block alert-info\">\n <b><h1>ENGR 1330 Computational Thinking with Data Science </h1></b> \n</div> \n\nCopyright © 2021 Theodore G. Cleveland and Farhang Forghanparast\n\nLast GitHub Commit Date: \n \n# 11: Databases\n- Fundamental Concepts\n- Dataframes\n- Read/Write to from files \n",
"_____no_output_____"
],
[
"---\n## Objectives\n1. To understand the **dataframe abstraction** as implemented in the Pandas library(module).\n 1. To be able to access and manipulate data within a dataframe\n 2. To be able to obtain basic statistical measures of data within a dataframe\n2. Read/Write from/to files\n 1. MS Excel-type files (.xls,.xlsx,.csv) (LibreOffice files use the MS .xml standard)\n 2. Ordinary ASCII (.txt) files \n",
"_____no_output_____"
],
[
"---\n\n## Computational Thinking Concepts\n\nThe CT concepts expressed within Databases include:\n\n- `Decomposition` : Break a problem down into smaller pieces; Collections of data are decomposed into their smallest usable part\n- `Abstraction` : A database is an abstraction of a collection of data\n\nSuppose we need to store a car as a collection of its parts (implying dissassembly each time we park it), such decompotion would produce a situation like the image.\n\n<img src=\"https://miro.medium.com/max/590/1*Xfxl8HoQqqg_KtpEsEcskw.jpeg\" width=\"400\">\n\nIf the location of each part is recorded, then we can determine if something is missing as in \n\n<img src=\"http://54.243.252.9/engr-1330-webroot/engr1330jb/lessons/lesson11/modifiedCar.jpeg\" width=\"400\">\n\nIn the image, the two missing parts are pretty large, and would be evident on a fully assembled car (missing front corner panel, and right rear tire. Smaller parts would be harder to track on the fully assembled object. However if we had two fully assembled cars, and when we moved them heard the tink-tink-tink of a ball bearing bouncing on the floor, we would know something is missing - a query of the database to find where all the balls are supposed to be will help us figure out which car is incomplete.\n\nIn other contexts you wouldn’t want to have to take your car apart and store every piece separately whenever you park it in the garage. In that case, you would want to store your car as a single entry in the database (garage), and access its pieces through it (used car parts are usually sourced from fully assembled cars. \n\nThe U.S. Airforce keeps a lot of otherwise broken aircraft for parts replacement. As a part is removed it is entered into a database \"a transaction\" so they know that part is no longer in the broken aircraft lot but in service somewhere. So the database may locate a part in a bin in a hangar or a part that is residing in an assembled aircraft. In either case, the hangar (and parts bin) as well as the broken aircarft are both within the database schema - an abstraction.\n\n<img src=\"http://54.243.252.9/engr-1330-webroot/engr1330jb/lessons/lesson11/boneyard.png\" width = \"500\" > \n\nAnd occassionally they grab a whole airframe \n\n<img src=\"http://54.243.252.9/engr-1330-webroot/engr1330jb/lessons/lesson11/B52WhereRU.png\" width = \"500\" >",
"_____no_output_____"
],
[
"---\n\n### Databases\nDatabases are containers for data. A public library stores books, hence we could legitimately state that a library is a database of books. But strictly defined, databases are computer structures that save, organize, protect, and deliver data. A system that contains and manipulates databases is called a database management system, or DBMS. \n\nA database can be thought of as a kind of electronic filing cabinet; it contains digitized information (“data”), which is kept in persistent storage of some kind. Users can insert new information into the database, and delete, change, or retrieve existing information in the database, by issuing requests or commands to the software that manages the database—which is to say, the database management system (DBMS). \n\nIn practice those user requests to the DBMS can be formulated in a variety of different ways (e.g., by pointing and clicking with a mouse). For our purposes, however, it’s more convenient to assume they’re expressed in the form of simple text strings in some formal language. Given a human resources database, for example, we might write:",
"_____no_output_____"
],
[
"```\nEMP WHERE JOB = 'Programmer'\n```",
"_____no_output_____"
],
[
"And this expression represents a retrieval request—more usually known as a `query` for employee information for employees whose job title is ‘Programmer’. A query submission and responce is called a transaction.\n\n---\n#### Database Types\n\nThe simplest form of databases is a text database. When data are organized in a text file in rows and columns, it can be used to store, organize, protect, and retrieve data. Saving a list of names in a file, starting with first name and followed by last name, would be a simple database. Each row of the file represents a record. You can update records by changing specific names, you can remove rows by deleting lines, and you can add new rows by adding new lines. The term \"flat-file\" database is a typical analog.\n\nDesktop database programs are another type of database that's more complex than a flat-file text database yet still intended for a single user. A Microsoft Excel spreadsheet or Microsoft Access database are good examples of desktop database programs. These programs allow users to enter data, store it, protect it, and retrieve it when needed. The benefit of desktop database programs over text databases is the speed of changing data, and the ability to store comparatively large amounts of data while keeping performance of the system manageable.\n\nRelational databases are the most common database systems. A relational database contains multiple tables of data with rows and columns that relate to each other through special key fields. These databases are more flexible than flat file structures, and provide functionality for reading, creating, updating, and deleting data. Relational databases use variations of Structured Query Language (SQL) - a standard user application that provides an easy programming interface for database interaction.\n\nSome examples of Relational Database Management Systems (RDMS) are SQL Server, Oracle Database, Sybase, Informix, and MySQL. The relational database management systems (RDMS) exhibit superior performance for managing large collections of for multiple users (even thousands!) to work with the data at the same time, with elaborate security to protect the data. RDBMS systems still store data in columns and rows, which in turn make up tables. A table in RDBMS is like a spreadsheet, or any other flat-file structure. A set of tables makes up a schema. A number of schemas create a database. \n\nEmergent structures for storing data today are NoSQL and object-oriented databases. These do not follow the table/row/column approach of RDBMS. Instead, they build bookshelves of elements and allow access per bookshelf. So, instead of tracking individual words in books, NoSQL and object-oriented databases narrow down the data you are looking for by pointing you to the bookshelf, then a mechanical assistant works with the books to identify the exact word you are looking for. \n\n---\n\n#### Relational Database Concepts\n\nThe figure below shows sample values for a typical database, having to do with suppliers, parts, and shipments (of parts by suppliers).\n\n<img src=\"http://54.243.252.9/engr-1330-webroot/engr1330jb/lessons/lesson11/PartsAreParts.png\" width=\"500\">\n\nObserve that this database contains three files, or tables. The tables are named S, P, and SP, respectively, and since they’re tables they’re made up of rows and columns (in conventional file terms, the rows correspond to records of the file in question and the columns to fields). 
They’re meant to be understood as follows:\n\n> Table S represents suppliers under contract. Each supplier has one supplier number (SNO), unique to that supplier; one name (SNAME), not necessarily unique (though the sample values shown in Figure 1-1 do happen to be unique); one status value (STATUS); and one location (CITY). Note: In the rest of this book I’ll abbreviate “suppliers under contract,” most of the time, to just suppliers.\n\n> Table P represents kinds of parts. Each kind of part has one part number (PNO), which is unique; one name (PNAME); one color (COLOR); one weight (WEIGHT); and one location where parts of that kind are stored (CITY). Note: In the rest of this book I’ll abbreviate “kinds of parts,” most of the time, to just parts.\n\n> Table SP represents shipments—it shows which parts are shipped, or supplied, by which suppliers. Each shipment has one supplier number (SNO); one part number (PNO); and one quantity (QTY). Also, there’s at most one shipment at any given time for a given supplier and given part, and so the combination of supplier number and part number is unique to any given shipment.\n\nReal databases tend to be much more complicated than this “toy” example. However we can make useful observations; these three tables are our schema (our framework for lack of a better word), and at this point is also our only schema, hence it is the `PartsIsParts` database (we have just named the database here)",
"_____no_output_____"
],
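[
"As a small aside (an added sketch, not part of the original lesson), the same kind of retrieval request can be issued to an actual relational engine; here Python's built-in `sqlite3` module stands in for a full RDBMS, and the table and column names follow the figure above.\n\n```python\nimport sqlite3\n\nconn = sqlite3.connect(':memory:')   # a throw-away, in-memory database\nconn.execute('CREATE TABLE S (SNO TEXT, SNAME TEXT, STATUS INTEGER, CITY TEXT)')\nconn.executemany('INSERT INTO S VALUES (?,?,?,?)',\n                 [('S1', 'Smith', 20, 'London'), ('S2', 'Jones', 10, 'Paris')])\nfor row in conn.execute(\"SELECT SNO, SNAME FROM S WHERE CITY = 'London'\"):\n    print(row)   # -> ('S1', 'Smith')\n```",
"_____no_output_____"
],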
[
"### Dataframe-type Structure using primative python\n\nFirst lets construct a dataframe like objects using python primatives, and the *PartsIsParts* database schema\n",
"_____no_output_____"
]
],
[
[
"parts = [['PNO','PNAME','COLOR','WEIGHT','CITY'],\n ['P1','Nut','Red',12.0,'London'],\n ['P2','Bolt','Green',17.0,'Paris'],\n ['P3','Screw','Blue',17.0,'Oslo'],\n ['P4','Screw','Red',14.0,'London'],\n ['P5','Cam','Blue',12.0,'Paris'],\n ['P6','Cog','Red',19.0,'London'],]\nsuppliers = [['SNO','SNAME','STATUS','CITY'],\n ['S1','Smith',20,'London'],\n ['S2','Jones',10,'Paris'],\n ['S3','Blake',30,'Paris'],\n ['S4','Clark',20,'London'],\n ['S5','Adams',30,'Athens']]\nshipments = [['SNO','PNO','QTY'],\n ['S1','P1',300],\n ['S1','P2',200],\n ['S1','P3',400],\n ['S1','P4',200],\n ['S1','P5',100],\n ['S1','P6',100],\n ['S2','P1',300],\n ['S2','P2',400],\n ['S3','P2',200],\n ['S4','P2',200],\n ['S4','P4',300],\n ['S4','P5',400]]",
"_____no_output_____"
]
],
[
[
"Lets examine some things:\n\nIn each table there are columns, these are called fields. There are also rows, these are called records. Hidden from view is a unique record identifier for each record, each table.\n\n\nNow lets query our database, lets list all parts whose weight is less than 13 - how do we proceede?\n\n- We have to select the right table\n- We have to construct a search to find all instances of parts with weight less than 13\n- Print the result\n\nFor the toy problem not too hard",
"_____no_output_____"
]
],
[
[
"for i in range(1,len(parts)):\n if parts[i][3] < 13.0 :\n print(parts[i])",
"['P1', 'Nut', 'Red', 12.0, 'London']\n['P5', 'Cam', 'Blue', 12.0, 'Paris']\n"
]
],
[
[
"Now lets query our database, lets list all parts whose weight is less than 13 - but only list the part number, color, and city\n\n- We have to select the right table\n- We have to construct a search to find all instances of parts with weight less than 13\n- Print the list slice with the requesite information\n\nFor the toy problem still not too hard, but immediately we see if this keeps up its going to get kind of tricky fast!; Also it would be nice to be able to refer to a column by its name.",
"_____no_output_____"
]
],
[
[
"for i in range(1,len(parts)):\n if parts[i][3] < 13.0 :\n print(parts[i][0],parts[i][2],parts[i][4]) # slice the sublist",
"P1 Red London\nP5 Blue Paris\n"
]
],
[
[
"Now lets modify contents of a table. Lets delete all instances of suppliers with status 10. Then for remaining suppliers elevate their status by 5.\n\nAgain \n- We have to select the right table\n- We have to construct a search to find all instances of status equal to 10\n- If not equal to 10, copy the row, otherwise skip\n- Delete original table, and rename the temporary table",
"_____no_output_____"
]
],
[
[
"temp=[]\nfor i in range(len(suppliers)):\n print(suppliers[i])\nfor i in range(0,len(suppliers)):\n if suppliers[i][2] == 10 :\n continue\n else:\n temp.append(suppliers[i]) # slice the sublist\nsuppliers = temp # attempt to rewrite the original\nfor i in range(len(suppliers)):\n print(suppliers[i])",
"['SNO', 'SNAME', 'STATUS', 'CITY']\n['S1', 'Smith', 20, 'London']\n['S2', 'Jones', 10, 'Paris']\n['S3', 'Blake', 30, 'Paris']\n['S4', 'Clark', 20, 'London']\n['S5', 'Adams', 30, 'Athens']\n['SNO', 'SNAME', 'STATUS', 'CITY']\n['S1', 'Smith', 20, 'London']\n['S3', 'Blake', 30, 'Paris']\n['S4', 'Clark', 20, 'London']\n['S5', 'Adams', 30, 'Athens']\n"
]
],
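[
[
"The cell above only performs the deletion; the status increase promised in the prose is sketched here (an added example using the same primitive-list representation).\n\n```python\n# raise STATUS by 5 for every remaining supplier (index 0 is the header row)\nfor i in range(1, len(suppliers)):\n    suppliers[i][2] = suppliers[i][2] + 5\nfor row in suppliers:\n    print(row)\n```",
"_____no_output_____"
]
],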
[
[
"Now suppose we want to find how many parts are coming from London, our query gets more complex, but still manageable.",
"_____no_output_____"
]
],
[
[
"temp=[]\nfor i in range(0,len(suppliers)):\n if suppliers[i][3] == 'London' :\n temp.append(suppliers[i][0]) # get supplier code from london\n else:\n continue\n\nhowmany = 0 # keep count \nfor i in range(0,len(shipments)):\n for j in range(len(temp)):\n if shipments[i][0] == temp[j]:\n howmany = howmany + shipments[i][2]\n else:\n continue\n\nprint(howmany)",
"2200\n"
]
],
[
[
"Instead of writing all our own scripts, unique to each database the python community created a module called `Pandas`, so named because most things in the world are made in China, and their national critter is a Panda Bear (actually the name is a contraction of **PAN**el **DA**ta **S**tructure' or something close to that.\n\nSo to build these queries in an easier fashion - lets examine `pandas`.\n\n---\n\n## The `pandas` module \n- About Pandas\n- How to install\n - Anaconda\n - JupyterHub/Lab (on Linux)\n - JupyterHub/Lab (on MacOS)\n - JupyterHub/Lab (on Windoze)\n- The Dataframe\n - Primatives\n - Using Pandas\n - Create, Modify, Delete datagrames\n - Slice Dataframes\n - Conditional Selection\n - Synthetic Programming (Symbolic Function Application)\n - Files\n- Access Files from a remote Web Server\n - Get file contents\n - Get the actual file\n - Adaptations for encrypted servers (future semester)\n",
"_____no_output_____"
],
[
"---\n\n### About Pandas: \nPandas is the core library for dataframe manipulation in Python. It provides a high-performance multidimensional array object, and tools for working with these arrays. The library’s name is derived from the term ‘Panel Data’. \nIf you are curious about Pandas, this cheat sheet is recommended: [https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf](https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf)\n\n#### Data Structure \nThe Primary data structure is called a dataframe. It is an **abstraction** where data are represented as a 2-dimensional mutable and heterogenous tabular data structure; much like a Worksheet in MS Excel. The structure itself is popular among statisticians and data scientists and business executives. \n\nAccording to the marketing department \n*\"Pandas Provides rich data structures and functions designed to make working with data fast, easy, and expressive. It is useful in data manipulation, cleaning, and analysis; Pandas excels in performance and productivity \"*",
"_____no_output_____"
],
[
"---\n\n## The Dataframe\n\nA data table is called a `DataFrame` in pandas (and other programming environments too). The figure below from [https://pandas.pydata.org/docs/getting_started/index.html](https://pandas.pydata.org/docs/getting_started/index.html) illustrates a dataframe model:\n\n \n\nEach **column** and each **row** in a dataframe is called a series, the header row, and index column are special. \nLike MS Excel we can query the dataframe to find the contents of a particular `cell` using its **row name** and **column name**, or operate on entire **rows** and **columns**\n\nTo use pandas, we need to import the module.",
"_____no_output_____"
],
[
"---\n\n## Computational Thinking Concepts\n\nThe CT concepts expressed within Pandas include:\n\n- `Decomposition` : Data interpretation, manipulation, and analysis of Pandas dataframes is an act of decomposition -- although the dataframes can be quite complex.\n- `Abstraction` : The dataframe is a data representation abstraction that allows for placeholder operations, later substituted with specific contents for a problem; enhances reuse and readability. We leverage the principle of algebraic replacement using these abstractions.\n- `Algorithms` : Data interpretation, manipulation, and analysis of dataframes are generally implemented as part of a supervisory algorithm.",
"_____no_output_____"
],
[
"---\n\n## Module Set-Up\n\nIn principle, Pandas should be available in a default Anaconda install \n- You should not have to do any extra installation steps to install the library in Python\n- You do have to **import** the library in your scripts\n\nHow to check\n- Simply open a code cell and run `import pandas` if the notebook does not protest (i.e. pink block of error), the youis good to go.",
"_____no_output_____"
]
],
[
[
"import pandas",
"_____no_output_____"
]
],
[
[
"If you do get an error, that means that you will have to install using `conda` or `pip`; you are on-your-own here! On the **content server** the process is:\n\n1. Open a new terminal from the launcher\n2. Change to root user `su` then enter the root password\n3. `sudo -H /opt/jupyterhib/bin/python3 -m pip install pandas`\n4. Wait until the install is complete; for security, user `compthink` is not in the `sudo` group\n5. Verify the install by trying to execute `import pandas` as above.\n\nThe process above will be similar on a Macintosh, or Windows if you did not use an Anaconda distribution. Best is to have a sucessful anaconda install, or go to the [GoodJobUntilMyOrgansGetHarvested](https://apply.mysubwaycareer.com/us/en/). \n\nIf you have to do this kind of install, you will have to do some reading, some references I find useful are:\n1. https://jupyterlab.readthedocs.io/en/stable/user/extensions.html\n2. https://www.pugetsystems.com/labs/hpc/Note-How-To-Install-JupyterHub-on-a-Local-Server-1673/#InstallJupyterHub\n3. https://jupyterhub.readthedocs.io/en/stable/installation-guide-hard.html (This is the approach on the content server which has a functioning JupyterHub)",
"_____no_output_____"
]
],
[
[
"#%reset -f ",
"_____no_output_____"
]
],
[
[
"---\n\nNow lets repeat the example using Pandas, here we will reuse the original lists, so there is some extra work to get the structures just so",
"_____no_output_____"
]
],
[
[
"import pandas\npartsdf = pandas.DataFrame(parts)\npartsdf.set_axis(parts[0][:],axis=1,inplace=True) # label the columns\npartsdf.drop(0, axis=0, inplace = True) # remove the first row that held the column names\npartsdf",
"_____no_output_____"
],
[
"suppliersdf = pandas.DataFrame(suppliers)\nsuppliersdf.set_axis(suppliers[0][:],axis=1,inplace=True) # label the columns\nsuppliersdf.drop(0, axis=0, inplace = True) # remove the first row that held the column names\nsuppliersdf",
"_____no_output_____"
],
[
"shipmentsdf = pandas.DataFrame(shipments)\nshipmentsdf.set_axis(shipments[0][:],axis=1,inplace=True) # label the columns\nshipmentsdf.drop(0, axis=0, inplace = True) # remove the first row that held the column names\nshipmentsdf",
"_____no_output_____"
]
],
[
[
"Now lets learn about our three dataframes",
"_____no_output_____"
]
],
[
[
"partsdf.shape # this is a method to return shape, notice no argument list i.e. no ()",
"_____no_output_____"
],
[
"suppliersdf.shape",
"_____no_output_____"
],
[
"shipmentsdf.shape",
"_____no_output_____"
],
[
"partsdf['COLOR'] #Selecing a single column",
"_____no_output_____"
],
[
"partsdf[['COLOR','CITY']] #Selecing a multiple columns - note the names are supplied as a list",
"_____no_output_____"
],
[
"partsdf.loc[[5,6]] #Selecing rows based on label via loc[ ] indexer using row indices - note supplied as a list",
"_____no_output_____"
]
],
[
[
"Now lets query our dataframes, lets list all parts whose weight is less than 13,\n\nRecall from before:\n\n- We have to select the right table\n- We have to construct a search to find all instances of parts with weight less than 13\n- Print the list slice with the requesite information\n\nWe have to do these same activities, but the syntax is far more readable:",
"_____no_output_____"
]
],
[
[
"partsdf[partsdf['WEIGHT'] < 13] # from dataframe named partsdf, find all rows in column \"WEIGHT less than 13, and return these rows\"",
"_____no_output_____"
]
],
[
[
"Now lets query our dataframe, lets list all parts whose weight is less than 13 - but only list the part number, color, and city\n\n- We have to select the right table\n- We have to construct a search to find all instances of parts with weight less than 13\n- Print the list slice with the requesite information\n\nAgain a more readable syntax",
"_____no_output_____"
]
],
[
[
"partsdf[partsdf['WEIGHT'] < 13][['PNO','COLOR','CITY']] # from dataframe named partsdf, find all rows in column \"WEIGHT less than 13, and return part number, color, and city from these rows\"",
"_____no_output_____"
]
],
[
[
"---\n\n### `head` method\n\nReturns the first few rows, useful to infer structure",
"_____no_output_____"
]
],
[
[
"shipmentsdf.head() # if you supply an argument you control how many rows are shown i.e. shipmentsdf.head(3) returns first 3 rows",
"_____no_output_____"
]
],
[
[
"---\n\n### `tail` method\n\nReturns the last few rows, useful to infer structure",
"_____no_output_____"
]
],
[
[
"shipmentsdf.tail()",
"_____no_output_____"
]
],
[
[
"---\n\n### `info` method\n\nReturns the data model (data column count, names, data types)",
"_____no_output_____"
]
],
[
[
"#Info about the dataframe\n\nsuppliersdf.info()",
"<class 'pandas.core.frame.DataFrame'>\nInt64Index: 4 entries, 1 to 4\nData columns (total 4 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 SNO 4 non-null object\n 1 SNAME 4 non-null object\n 2 STATUS 4 non-null object\n 3 CITY 4 non-null object\ndtypes: object(4)\nmemory usage: 160.0+ bytes\n"
]
],
[
[
"---\n\n### `describe` method\n\nReturns summary statistics of each numeric column. \nAlso returns the minimum and maximum value in each column, and the IQR (Interquartile Range). \nAgain useful to understand structure of the columns.\n\nOur toy example contains limited numeric data, so describe is not too useful - but in general its super useful for engineering databases",
"_____no_output_____"
]
],
[
[
"#Statistics of the dataframe\n\npartsdf.describe()",
"_____no_output_____"
]
],
[
[
"---\n\n### Examples with \"numerical\" data",
"_____no_output_____"
]
],
[
[
"%reset -f",
"_____no_output_____"
],
[
"import numpy # we just reset the worksheet, so reimport the packages\nimport pandas",
"_____no_output_____"
]
],
[
[
"### Now we shall create a proper dataframe\nWe will now do the same using pandas",
"_____no_output_____"
]
],
[
[
"mydf = pandas.DataFrame(numpy.random.randint(1,100,(5,4)), ['A','B','C','D','E'], ['W','X','Y','Z'])\nmydf",
"_____no_output_____"
]
],
[
[
"---\n\n### Getting the shape of dataframes\n\nThe shape method, which is available after the dataframe is constructed, will return the row and column rank (count) of a dataframe.",
"_____no_output_____"
]
],
[
[
"mydf.shape",
"_____no_output_____"
]
],
[
[
"---\n\n### Appending new columns\nTo append a column simply assign a value to a new column name to the dataframe",
"_____no_output_____"
]
],
[
[
"mydf['new']= 'NA'",
"_____no_output_____"
],
[
"mydf",
"_____no_output_____"
]
],
[
[
"---\n### Appending new rows\nThis is sometimes a bit trickier but here is one way:\n- create a copy of a row, give it a new name. \n- concatenate it back into the dataframe.",
"_____no_output_____"
]
],
[
[
"newrow = mydf.loc[['E']].rename(index={\"E\": \"X\"}) # create a single row, rename the index\nnewtable = pandas.concat([mydf,newrow]) # concatenate the row to bottom of df - note the syntax",
"_____no_output_____"
],
[
"newtable",
"_____no_output_____"
]
],
[
[
"---\n\n### Removing Rows and Columns\n\nTo remove a column is straightforward, we use the drop method",
"_____no_output_____"
]
],
[
[
"newtable.drop('new', axis=1, inplace = True)\nnewtable",
"_____no_output_____"
]
],
[
[
"To remove a row, you really got to want to, easiest is probablty to create a new dataframe with the row removed",
"_____no_output_____"
]
],
[
[
"newtable = newtable.loc[['A','B','D','E','X']] # select all rows except C\nnewtable",
"_____no_output_____"
],
[
"# or just use drop with axis specify\nnewtable.drop('X', axis=0, inplace = True)",
"_____no_output_____"
],
[
"newtable",
"_____no_output_____"
]
],
[
[
"---\n\n## Indexing\nWe have already been indexing, but a few examples follow:",
"_____no_output_____"
]
],
[
[
"newtable['X'] #Selecing a single column",
"_____no_output_____"
],
[
"newtable[['X','W']] #Selecing a multiple columns",
"_____no_output_____"
],
[
"newtable.loc['E'] #Selecing rows based on label via loc[ ] indexer",
"_____no_output_____"
],
[
"newtable",
"_____no_output_____"
],
[
"newtable.loc[['E','D','B']] #Selecing multiple rows based on label via loc[ ] indexer",
"_____no_output_____"
],
[
"newtable.loc[['B','E','D'],['X','Y']] #Selecting elements via both rows and columns via loc[ ] indexer",
"_____no_output_____"
]
],
[
[
"---\n\n## Conditional Selection",
"_____no_output_____"
]
],
[
[
"mydf = pandas.DataFrame({'col1':[1,2,3,4,5,6,7,8],\n 'col2':[444,555,666,444,666,111,222,222],\n 'col3':['orange','apple','grape','mango','jackfruit','watermelon','banana','peach']})\nmydf",
"_____no_output_____"
],
[
"#What fruit corresponds to the number 555 in ‘col2’?\n\nmydf[mydf['col2']==555]['col3']",
"_____no_output_____"
],
[
"#What fruit corresponds to the minimum number in ‘col2’?\n\nmydf[mydf['col2']==mydf['col2'].min()]['col3']",
"_____no_output_____"
]
],
[
[
"---\n\n## Descriptor Functions",
"_____no_output_____"
]
],
[
[
"#Creating a dataframe from a dictionary\n\nmydf = pandas.DataFrame({'col1':[1,2,3,4,5,6,7,8],\n 'col2':[444,555,666,444,666,111,222,222],\n 'col3':['orange','apple','grape','mango','jackfruit','watermelon','banana','peach']})\nmydf",
"_____no_output_____"
],
[
"#Returns only the first five rows\n\nmydf.head()",
"_____no_output_____"
]
],
[
[
"---\n\n### `info` method\n\nReturns the data model (data column count, names, data types)",
"_____no_output_____"
]
],
[
[
"#Info about the dataframe\n\nmydf.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 8 entries, 0 to 7\nData columns (total 3 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 col1 8 non-null int64 \n 1 col2 8 non-null int64 \n 2 col3 8 non-null object\ndtypes: int64(2), object(1)\nmemory usage: 320.0+ bytes\n"
]
],
[
[
"---\n\n### `describe` method\n\nReturns summary statistics of each numeric column. \nAlso returns the minimum and maximum value in each column, and the IQR (Interquartile Range). \nAgain useful to understand structure of the columns.",
"_____no_output_____"
]
],
[
[
"#Statistics of the dataframe\n\nmydf.describe()",
"_____no_output_____"
]
],
[
[
"---\n\n### Counting and Sum methods\n\nThere are also methods for counts and sums by specific columns",
"_____no_output_____"
]
],
[
[
"mydf['col2'].sum() #Sum of a specified column",
"_____no_output_____"
]
],
[
[
"The `unique` method returns a list of unique values (filters out duplicates in the list, underlying dataframe is preserved)",
"_____no_output_____"
]
],
[
[
"mydf['col2'].unique() #Returns the list of unique values along the indexed column ",
"_____no_output_____"
]
],
[
[
"The `nunique` method returns a count of unique values",
"_____no_output_____"
]
],
[
[
"mydf['col2'].nunique() #Returns the total number of unique values along the indexed column ",
"_____no_output_____"
]
],
[
[
"The `value_counts()` method returns the count of each unique value (kind of like a histogram, but each value is the bin)",
"_____no_output_____"
]
],
[
[
"mydf['col2'].value_counts() #Returns the number of occurences of each unique value",
"_____no_output_____"
]
],
[
[
"---\n\n## Using functions in dataframes - symbolic apply\n\nThe power of **Pandas** is an ability to apply a function to each element of a dataframe series (or a whole frame) by a technique called symbolic (or synthetic programming) application of the function.\n\nThis employs principles of **pattern matching**, **abstraction**, and **algorithm development**; a holy trinity of Computational Thinning.\n\nIt's somewhat complicated but quite handy, best shown by an example:",
"_____no_output_____"
]
],
[
[
"def times2(x): # A prototype function to scalar multiply an object x by 2\n return(x*2)\n\nprint(mydf)\nprint('Apply the times2 function to col2')\nmydf['reallynew'] = mydf['col2'].apply(times2) #Symbolic apply the function to each element of column col2, result is another dataframe",
" col1 col2 col3\n0 1 444 orange\n1 2 555 apple\n2 3 666 grape\n3 4 444 mango\n4 5 666 jackfruit\n5 6 111 watermelon\n6 7 222 banana\n7 8 222 peach\nApply the times2 function to col2\n"
],
[
"mydf",
"_____no_output_____"
]
],
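[
[
"The cell above applies `times2` to a single series; as noted earlier, the same idea extends to a whole frame. A hedged sketch (restricted here to the numeric columns; `applymap` is the older name, newer pandas releases prefer `DataFrame.map`) is:\n\n```python\n# elementwise application across an entire (numeric) dataframe\nmydf[['col1', 'col2']].applymap(times2)\n```",
"_____no_output_____"
]
],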
[
[
"---\n\n## Sorts ",
"_____no_output_____"
]
],
[
[
"mydf.sort_values('col2', ascending = True) #Sorting based on columns ",
"_____no_output_____"
],
[
"mydf.sort_values('col3', ascending = True) #Lexiographic sort",
"_____no_output_____"
]
],
[
[
"---\n\n## Aggregating (Grouping Values) dataframe contents\n",
"_____no_output_____"
]
],
[
[
"#Creating a dataframe from a dictionary\n\ndata = {\n 'key' : ['A', 'B', 'C', 'A', 'B', 'C'],\n 'data1' : [1, 2, 3, 4, 5, 6],\n 'data2' : [10, 11, 12, 13, 14, 15],\n 'data3' : [20, 21, 22, 13, 24, 25]\n}\n\nmydf1 = pandas.DataFrame(data)\nmydf1",
"_____no_output_____"
],
[
"# Grouping and summing values in all the columns based on the column 'key'\n\nmydf1.groupby('key').sum()",
"_____no_output_____"
],
[
"# Grouping and summing values in the selected columns based on the column 'key'\n\nmydf1.groupby('key')[['data1', 'data2']].sum()",
"_____no_output_____"
]
],
[
[
"---\n\n## Filtering out missing values\n\nFiltering and *cleaning* are often used to describe the process where data that does not support a narrative is removed ;typically for maintenance of profit applications, if the data are actually missing that is common situation where cleaning is justified.",
"_____no_output_____"
]
],
[
[
"#Creating a dataframe from a dictionary\n\ndf = pandas.DataFrame({'col1':[1,2,3,4,None,6,7,None],\n 'col2':[444,555,None,444,666,111,None,222],\n 'col3':['orange','apple','grape','mango','jackfruit','watermelon','banana','peach']})\ndf",
"_____no_output_____"
]
],
[
[
"Below we drop any row that contains a `NaN` code.",
"_____no_output_____"
]
],
[
[
"df_dropped = df.dropna()\ndf_dropped",
"_____no_output_____"
]
],
[
[
"Below we replace `NaN` codes with some value, in this case 0",
"_____no_output_____"
]
],
[
[
"df_filled1 = df.fillna(0)\ndf_filled1",
"_____no_output_____"
]
],
[
[
"Below we replace `NaN` codes with some value, in this case the mean value of of the column in which the missing value code resides.",
"_____no_output_____"
]
],
[
[
"df_filled2 = df.fillna(df.mean())\ndf_filled2",
"_____no_output_____"
]
],
[
[
"---\n## Reading a File into a Dataframe\n\nPandas has methods to read common file types, such as `csv`,`xls`, and `json`. \nOrdinary text files are also quite manageable.\n\n> Specifying `engine='openpyxl'` in the read/write statement is required for the xml versions of Excel (xlsx). Default is .xls regardless of file name. If you still encounter read errors, try opening the file in Excel and saving as .xls (Excel 97-2004 Workbook) or as a CSV if the structure is appropriate.<br><br>\n> You may have to install the packages using something like <br>`sudo -H /opt/jupyterhub/bin/python3 -m pip install xlwt openpyxl xlsxwriter xlrd` from the Anaconda Prompt interface (adjust the path to your system) or something like <br>`sudo -H /opt/conda/envs/python/bin/python -m pip install xlwt openpyxl xlsxwriter xlrd` \n\nThe files in the following examples are [CSV_ReadingFile.csv](http://54.243.252.9/engr-1330-webroot/1-Lessons/Lesson11/CSV_ReadingFile.csv), [Excel_ReadingFile.xlsx](http://54.243.252.9/engr-1330-webroot/1-Lessons/Lesson11/Excel_ReadingFile.xlsx), ",
"_____no_output_____"
]
],
[
[
"readfilecsv = pandas.read_csv('CSV_ReadingFile.csv') #Reading a .csv file\nprint(readfilecsv)",
" a b c d\n0 0 1 2 3\n1 4 5 6 7\n2 8 9 10 11\n3 12 13 14 15\n"
]
],
[
[
"Similar to reading and writing .csv files, you can also read and write .xslx files as below (useful to know this)",
"_____no_output_____"
]
],
[
[
"readfileexcel = pandas.read_excel('Excel_ReadingFile.xlsx', sheet_name='Sheet1', engine='openpyxl') #Reading a .xlsx file\nprint(readfileexcel)",
" Unnamed: 0 a b c d\n0 0 0 1 2 3\n1 1 4 5 6 7\n2 2 8 9 10 11\n3 3 12 13 14 15\n"
]
],
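[
[
"The lesson also mentions ordinary ASCII (.txt) files; a minimal sketch is below (the file name `TXT_ReadingFile.txt` and the whitespace delimiter are assumptions for illustration).\n\n```python\n# read a whitespace-delimited plain-text file into a dataframe\nreadfiletxt = pandas.read_csv('TXT_ReadingFile.txt', delim_whitespace=True)\nprint(readfiletxt)\n```",
"_____no_output_____"
]
],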
[
[
"# Writing a dataframe to file",
"_____no_output_____"
]
],
[
[
"#Creating and writing to a .csv file\nreadfilecsv = pandas.read_csv('CSV_ReadingFile.csv')\nreadfilecsv.to_csv('CSV_WritingFile1.csv') # write to local directory\nreadfilecsv = pandas.read_csv('CSV_WritingFile1.csv') # read the file back\nprint(readfilecsv)",
" Unnamed: 0 a b c d\n0 0 0 1 2 3\n1 1 4 5 6 7\n2 2 8 9 10 11\n3 3 12 13 14 15\n"
],
[
"#Creating and writing to a .csv file by excluding row labels \nreadfilecsv = pandas.read_csv('CSV_ReadingFile.csv')\nreadfilecsv.to_csv('CSV_WritingFile2.csv', index = False)\nreadfilecsv = pandas.read_csv('CSV_WritingFile2.csv')\nprint(readfilecsv)",
" a b c d\n0 0 1 2 3\n1 4 5 6 7\n2 8 9 10 11\n3 12 13 14 15\n"
],
[
"#Creating and writing to a .xls file\nreadfileexcel = pandas.read_excel('Excel_ReadingFile.xlsx', sheet_name='Sheet1', engine='openpyxl')\nreadfileexcel.to_excel('Excel_WritingFile.xlsx', sheet_name='Sheet1' , index = False, engine='openpyxl')\nreadfileexcel = pandas.read_excel('Excel_WritingFile.xlsx', sheet_name='Sheet1', engine='openpyxl')\nprint(readfileexcel)",
" Unnamed: 0 a b c d\n0 0 0 1 2 3\n1 1 4 5 6 7\n2 2 8 9 10 11\n3 3 12 13 14 15\n"
]
],
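[
[
"A note on the `Unnamed: 0` column in the outputs above: it is the row-label (index) column that was written into the file earlier. As an illustrative sketch, passing `index_col=0` when reading recovers those labels as the index instead of an extra data column.",
"_____no_output_____"
]
],
[
[
"# Illustrative sketch: treat the first column as the index when re-reading the file written above\nreadfilecsv = pandas.read_csv('CSV_WritingFile1.csv', index_col=0)\nprint(readfilecsv)",
"_____no_output_____"
]
],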
[
[
"---\n\n## References\nOverland, B. (2018). Python Without Fear. Addison-Wesley \nISBN 978-0-13-468747-6. \n\nGrus, Joel (2015). Data Science from Scratch: First Principles with Python O’Reilly\nMedia. Kindle Edition.\n\nPrecord, C. (2010) wxPython 2.8 Application Development Cookbook Packt Publishing Ltd. Birmingham , B27 6PA, UK \nISBN 978-1-849511-78-0.\n\nJohnson, J. (2020). Python Numpy Tutorial (with Jupyter and Colab). Retrieved September 15, 2020, from https://cs231n.github.io/python-numpy-tutorial/ \n\nWillems, K. (2019). (Tutorial) Python NUMPY Array TUTORIAL. Retrieved September 15, 2020, from https://www.datacamp.com/community/tutorials/python-numpy-tutorial?utm_source=adwords_ppc\n\nWillems, K. (2017). NumPy Cheat Sheet: Data Analysis in Python. Retrieved September 15, 2020, from https://www.datacamp.com/community/blog/python-numpy-cheat-sheet\n\nW3resource. (2020). NumPy: Compare two given arrays. Retrieved September 15, 2020, from https://www.w3resource.com/python-exercises/numpy/python-numpy-exercise-28.php\n\nSorting https://www.programiz.com/python-programming/methods/list/sort\n\n\nhttps://www.oreilly.com/library/view/relational-theory-for/9781449365431/ch01.html\n\nhttps://realpython.com/pandas-read-write-files/#using-pandas-to-write-and-read-excel-files",
"_____no_output_____"
],
[
"---\n\n## Laboratory 11\n\n**Examine** (click) Laboratory 11 as a webpage at [Laboratory 11.html](http://54.243.252.9/engr-1330-webroot/8-Labs/Lab11/Lab11.html)\n\n**Download** (right-click, save target as ...) Laboratory 11 as a jupyterlab notebook from [Laboratory 11.ipynb](http://54.243.252.9/engr-1330-webroot/8-Labs/Lab11/Lab11.ipynb)\n",
"_____no_output_____"
],
[
"<hr><hr>\n\n## Exercise Set 11\n\n**Examine** (click) Exercise Set 11 as a webpage at [Exercise 11.html](http://54.243.252.9/engr-1330-webroot/8-Labs/Lab11/Lab11-TH.html)\n\n**Download** (right-click, save target as ...) Exercise Set 11 as a jupyterlab notebook at [Exercise Set 11.ipynb](http://54.243.252.9/engr-1330-webroot/8-Labs/Lab11/Lab11-TH.ipynb)\n\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
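[
"markdown"
],
[
"code"
],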
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
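[
"markdown"
],
[
"code"
],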
[
"markdown",
"markdown",
"markdown"
]
] |
4a07e4c66dfeb80fc735c29b1b19dd7f51ccc2a5
| 23,852 |
ipynb
|
Jupyter Notebook
|
jie/RFraserDischargeFile.ipynb
|
SalishSeaCast/analysis
|
5964628f08ca1f36121a5d8430ad5b4ae7756c7a
|
[
"Apache-2.0"
] | null | null | null |
jie/RFraserDischargeFile.ipynb
|
SalishSeaCast/analysis
|
5964628f08ca1f36121a5d8430ad5b4ae7756c7a
|
[
"Apache-2.0"
] | null | null | null |
jie/RFraserDischargeFile.ipynb
|
SalishSeaCast/analysis
|
5964628f08ca1f36121a5d8430ad5b4ae7756c7a
|
[
"Apache-2.0"
] | null | null | null | 33.127778 | 115 | 0.544273 |
[
[
[
"from __future__ import division\nfrom salishsea_tools import rivertools\nfrom salishsea_tools import nc_tools\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport netCDF4 as nc\nimport arrow\nimport numpy.ma as ma\nimport sys\nsys.path.append('/ocean/klesouef/meopar/tools/I_ForcingFiles/Rivers')\n%matplotlib inline",
"_____no_output_____"
],
[
"# Constant and data ranges etc\nyear = 2015\nsmonth = 6\nemonth = 6\nstartdate = arrow.get(year,smonth,15)\nenddate = arrow.get(year,emonth,29)\nprint (startdate, enddate)",
"2015-06-15T00:00:00+00:00 2015-06-29T00:00:00+00:00\n"
],
[
"# get Fraser Flow data\nfilename = '/data/dlatorne/SOG-projects/SOG-forcing/ECget/Fraser_flow'\nfraserflow = np.loadtxt(filename)\nprint (fraserflow)",
"[[ 2013. 10. 17. 1709.085]\n [ 2013. 10. 18. 1676.078]\n [ 2013. 10. 19. 1653.68 ]\n ..., \n [ 2015. 10. 10. 1853.233]\n [ 2015. 10. 11. 2013.845]\n [ 2015. 10. 12. 1871.939]]\n"
],
[
"#Fraser watershed\npd = rivertools.get_watershed_prop_dict_long_fraser('fraser')\ntotalfraser = (pd['Fraser1']['prop'] + pd['Fraser2']['prop'] + \n pd['Fraser3']['prop'] + pd['Fraser4']['prop'])",
"fraser has 10 rivers\n"
],
[
"# Climatology, Fraser Watershed\nfluxfile = nc.Dataset('/ocean/jieliu/research/meopar/nemo-forcing/rivers/Salish_allrivers_monthly.nc','r')\nclimFraserWaterShed = fluxfile.variables['fraser'][:]\n# Fraser River at Hope Seasonal Climatology (found in matlab using Mark's mean daily data)\nclimFraseratHope = (931, 878, 866, 1814, 4097, 6970, 5538, 3539, 2372, 1937, 1595, 1119)\nNonHope = climFraserWaterShed - climFraseratHope\notherratio = 0.016\nfraserratio = 1-otherratio\n\nnonFraser = (otherratio * climFraserWaterShed.sum()/NonHope.sum()) * NonHope\nafterHope = NonHope - nonFraser\nprint (pd['Fraser1']['i'],pd['Fraser1']['j'])",
"500 395\n"
],
[
"def calculate_daily_flow(r,criverflow):\n '''interpolate the daily values from the monthly values'''\n print (r.day, r.month)\n if r.day < 16:\n prevmonth = r.month-1\n if prevmonth == 0:\n prevmonth = 12\n nextmonth = r.month\n else:\n prevmonth = r.month\n nextmonth = r.month + 1\n if nextmonth == 13:\n nextmonth = 1\n fp = r - arrow.get(year,prevmonth,15)\n fn = arrow.get(year,nextmonth,15) - r\n ft = fp+fn\n fp = fp.days/ft.days\n fn = fn.days/ft.days\n print (ft, fp, fn)\n driverflow = fn*criverflow[prevmonth-1] + fp*criverflow[nextmonth-1]\n return driverflow",
"_____no_output_____"
],
[
"def write_file(r,flow,lat,lon,riverdepth):\n ''' given the flow and the riverdepth and the date, write the nc file'''\n directory = '.'\n # set up filename to follow NEMO conventions\n filename = 'NewRFraserCElse_y'+str(year)+'m'+'{:0=2}'.format(r.month)+'d'+'{:0=2}'.format(r.day)+'.nc'\n # print directory+'/'+filename\n nemo = nc.Dataset(directory+'/'+filename, 'w')\n nemo.description = 'Real Fraser Values, Daily Climatology for Other Rivers' \n \n # dimensions\n ymax, xmax = lat.shape\n nemo.createDimension('x', xmax)\n nemo.createDimension('y', ymax)\n nemo.createDimension('time_counter', None)\n \n # variables\n # latitude and longitude\n nav_lat = nemo.createVariable('nav_lat','float32',('y','x'),zlib=True)\n nav_lat = lat\n x = nemo.createVariable('nav_lon','float32',('y','x'),zlib=True)\n nav_lon = lon\n # time\n time_counter = nemo.createVariable('time_counter', 'float32', ('time_counter'),zlib=True)\n time_counter.units = 'non-dim'\n time_counter[0:1] = range(1,2)\n # runoff\n rorunoff = nemo.createVariable('rorunoff', 'float32', ('time_counter','y','x'), zlib=True)\n rorunoff._Fillvalue = 0.\n rorunoff._missing_value = 0.\n rorunoff._units = 'kg m-2 s-1'\n rorunoff[0,:] = flow\n # depth\n rodepth = nemo.createVariable('rodepth','float32',('y','x'),zlib=True)\n rodepth._Fillvalue = -1.\n rodepth.missing_value = -1.\n rodepth.units = 'm'\n rodepth = riverdepth\n nemo.close()\n return",
"_____no_output_____"
],
[
"def fraser_correction(pd, fraserflux, r, afterHope, NonFraser, fraserratio, otherratio,\n runoff):\n ''' for the Fraser Basin only, replace basic values with the new climatology after Hope and the\n observed values for Hope. Note, we are changing runoff only and not using/changing river\n depth '''\n for key in pd:\n if \"Fraser\" in key:\n flux = calculate_daily_flow(r,afterHope) + fraserflux\n subarea = fraserratio\n else:\n flux = calculate_daily_flow(r,NonFraser)\n subarea = otherratio\n \n river = pd[key]\n runoff = rivertools.fill_runoff_array(flux*river['prop']/subarea,river['i'],\n river['di'],river['j'],river['dj'],river['depth'],\n runoff,np.empty_like(runoff))[0]\n return runoff",
"_____no_output_____"
]
],
[
[
"* Open climatology files",
"_____no_output_____"
]
],
[
[
"##open climatolgy file with modified fresh water point source \nclim_rivers_edit = nc.Dataset('/ocean/jieliu/research/meopar/river-treatment/rivers_month_nole.nc','r' )\ncriverflow_edit = clim_rivers_edit.variables['rorunoff']\nlat = clim_rivers_edit.variables['nav_lat']\nlon = clim_rivers_edit.variables['nav_lon']\nriverdepth = clim_rivers_edit.variables['rodepth']\ncriverflow_edit[0,500,395]",
"_____no_output_____"
],
[
"for r in arrow.Arrow.range('day', startdate, enddate):\n print (r)\n driverflow = calculate_daily_flow(r, criverflow_edit)\n storeflow = calculate_daily_flow(r, criverflow_edit)\n step1 = fraserflow[fraserflow[:,0] == r.year]\n step2 = step1[step1[:,1] == r.month]\n step3 = step2[step2[:,2] == r.day]\n# print r.year, r.month, r.day, step3[0,3]\n runoff = fraser_correction(pd, step3[0,3] , r, afterHope, nonFraser, fraserratio, otherratio,\n driverflow)\n write_file(r,runoff,lat,lon,riverdepth)\nig = 500\njg = 395\nprint (criverflow_edit[7:10,500,395], driverflow[ig,jg])\nprint (storeflow[ig,jg], driverflow[ig,jg])\n#ig = 351; jg = 345\n#print storeflow[ig,jg], driverflow[ig,jg]\n#ig = 749; jg=123\n#print storeflow[ig,jg], driverflow[ig,jg]\n\n# jan 0, feb 1, mar 2, apr 3, may 4, jun 5\n# jul 6, aug 7, sep 8",
"2015-06-15T00:00:00+00:00\n15 6\n31 days, 0:00:00 1.0 0.0\n15 6\n31 days, 0:00:00 1.0 0.0\n15 6\n31 days, 0:00:00 1.0 0.0\n15 6\n31 days, 0:00:00 1.0 0.0\n15 6\n31 days, 0:00:00 1.0 0.0\n15 6\n31 days, 0:00:00 1.0 0.0\n15 6\n31 days, 0:00:00 1.0 0.0\n15 6\n31 days, 0:00:00 1.0 0.0\n15 6\n31 days, 0:00:00 1.0 0.0\n15 6\n31 days, 0:00:00 1.0 0.0\n15 6\n31 days, 0:00:00 1.0 0.0\n15 6\n31 days, 0:00:00 1.0 0.0\n2015-06-16T00:00:00+00:00\n16 6\n30 days, 0:00:00 0.03333333333333333 0.9666666666666667\n16 6\n30 days, 0:00:00 0.03333333333333333 0.9666666666666667\n16 6\n30 days, 0:00:00 0.03333333333333333 0.9666666666666667\n16 6\n30 days, 0:00:00 0.03333333333333333 0.9666666666666667\n16 6\n30 days, 0:00:00 0.03333333333333333 0.9666666666666667\n16 6\n30 days, 0:00:00 0.03333333333333333 0.9666666666666667\n16 6\n30 days, 0:00:00 0.03333333333333333 0.9666666666666667\n16 6\n30 days, 0:00:00 0.03333333333333333 0.9666666666666667\n16 6\n30 days, 0:00:00 0.03333333333333333 0.9666666666666667\n16 6\n30 days, 0:00:00 0.03333333333333333 0.9666666666666667\n16 6\n30 days, 0:00:00 0.03333333333333333 0.9666666666666667\n16 6\n30 days, 0:00:00 0.03333333333333333 0.9666666666666667\n2015-06-17T00:00:00+00:00\n17 6\n30 days, 0:00:00 0.06666666666666667 0.9333333333333333\n17 6\n30 days, 0:00:00 0.06666666666666667 0.9333333333333333\n17 6\n30 days, 0:00:00 0.06666666666666667 0.9333333333333333\n17 6\n30 days, 0:00:00 0.06666666666666667 0.9333333333333333\n17 6\n30 days, 0:00:00 0.06666666666666667 0.9333333333333333\n17 6\n30 days, 0:00:00 0.06666666666666667 0.9333333333333333\n17 6\n30 days, 0:00:00 0.06666666666666667 0.9333333333333333\n17 6\n30 days, 0:00:00 0.06666666666666667 0.9333333333333333\n17 6\n30 days, 0:00:00 0.06666666666666667 0.9333333333333333\n17 6\n30 days, 0:00:00 0.06666666666666667 0.9333333333333333\n17 6\n30 days, 0:00:00 0.06666666666666667 0.9333333333333333\n17 6\n30 days, 0:00:00 0.06666666666666667 0.9333333333333333\n2015-06-18T00:00:00+00:00\n18 6\n30 days, 0:00:00 0.1 0.9\n18 6\n30 days, 0:00:00 0.1 0.9\n18 6\n30 days, 0:00:00 0.1 0.9\n18 6\n30 days, 0:00:00 0.1 0.9\n18 6\n30 days, 0:00:00 0.1 0.9\n18 6\n30 days, 0:00:00 0.1 0.9\n18 6\n30 days, 0:00:00 0.1 0.9\n18 6\n30 days, 0:00:00 0.1 0.9\n18 6\n30 days, 0:00:00 0.1 0.9\n18 6\n30 days, 0:00:00 0.1 0.9\n18 6\n30 days, 0:00:00 0.1 0.9\n18 6\n30 days, 0:00:00 0.1 0.9\n2015-06-19T00:00:00+00:00\n19 6\n30 days, 0:00:00 0.13333333333333333 0.8666666666666667\n19 6\n30 days, 0:00:00 0.13333333333333333 0.8666666666666667\n19 6\n30 days, 0:00:00 0.13333333333333333 0.8666666666666667\n19 6\n30 days, 0:00:00 0.13333333333333333 0.8666666666666667\n19 6\n30 days, 0:00:00 0.13333333333333333 0.8666666666666667\n19 6\n30 days, 0:00:00 0.13333333333333333 0.8666666666666667\n19 6\n30 days, 0:00:00 0.13333333333333333 0.8666666666666667\n19 6\n30 days, 0:00:00 0.13333333333333333 0.8666666666666667\n19 6\n30 days, 0:00:00 0.13333333333333333 0.8666666666666667\n19 6\n30 days, 0:00:00 0.13333333333333333 0.8666666666666667\n19 6\n30 days, 0:00:00 0.13333333333333333 0.8666666666666667\n19 6\n30 days, 0:00:00 0.13333333333333333 0.8666666666666667\n2015-06-20T00:00:00+00:00\n20 6\n30 days, 0:00:00 0.16666666666666666 0.8333333333333334\n20 6\n30 days, 0:00:00 0.16666666666666666 0.8333333333333334\n20 6\n30 days, 0:00:00 0.16666666666666666 0.8333333333333334\n20 6\n30 days, 0:00:00 0.16666666666666666 0.8333333333333334\n20 6\n30 days, 0:00:00 0.16666666666666666 0.8333333333333334\n20 6\n30 days, 0:00:00 
0.16666666666666666 0.8333333333333334\n20 6\n30 days, 0:00:00 0.16666666666666666 0.8333333333333334\n20 6\n30 days, 0:00:00 0.16666666666666666 0.8333333333333334\n20 6\n30 days, 0:00:00 0.16666666666666666 0.8333333333333334\n20 6\n30 days, 0:00:00 0.16666666666666666 0.8333333333333334\n20 6\n30 days, 0:00:00 0.16666666666666666 0.8333333333333334\n20 6\n30 days, 0:00:00 0.16666666666666666 0.8333333333333334\n2015-06-21T00:00:00+00:00\n21 6\n30 days, 0:00:00 0.2 0.8\n21 6\n30 days, 0:00:00 0.2 0.8\n21 6\n30 days, 0:00:00 0.2 0.8\n21 6\n30 days, 0:00:00 0.2 0.8\n21 6\n30 days, 0:00:00 0.2 0.8\n21 6\n30 days, 0:00:00 0.2 0.8\n21 6\n30 days, 0:00:00 0.2 0.8\n21 6\n30 days, 0:00:00 0.2 0.8\n21 6\n30 days, 0:00:00 0.2 0.8\n21 6\n30 days, 0:00:00 0.2 0.8\n21 6\n30 days, 0:00:00 0.2 0.8\n21 6\n30 days, 0:00:00 0.2 0.8\n2015-06-22T00:00:00+00:00\n22 6\n30 days, 0:00:00 0.23333333333333334 0.7666666666666667\n22 6\n30 days, 0:00:00 0.23333333333333334 0.7666666666666667\n22 6\n30 days, 0:00:00 0.23333333333333334 0.7666666666666667\n22 6\n30 days, 0:00:00 0.23333333333333334 0.7666666666666667\n22 6\n30 days, 0:00:00 0.23333333333333334 0.7666666666666667\n22 6\n30 days, 0:00:00 0.23333333333333334 0.7666666666666667\n22 6\n30 days, 0:00:00 0.23333333333333334 0.7666666666666667\n22 6\n30 days, 0:00:00 0.23333333333333334 0.7666666666666667\n22 6\n30 days, 0:00:00 0.23333333333333334 0.7666666666666667\n22 6\n30 days, 0:00:00 0.23333333333333334 0.7666666666666667\n22 6\n30 days, 0:00:00 0.23333333333333334 0.7666666666666667\n22 6\n30 days, 0:00:00 0.23333333333333334 0.7666666666666667\n2015-06-23T00:00:00+00:00\n23 6\n30 days, 0:00:00 0.26666666666666666 0.7333333333333333\n23 6\n30 days, 0:00:00 0.26666666666666666 0.7333333333333333\n23 6\n30 days, 0:00:00 0.26666666666666666 0.7333333333333333\n23 6\n30 days, 0:00:00 0.26666666666666666 0.7333333333333333\n23 6\n30 days, 0:00:00 0.26666666666666666 0.7333333333333333\n23 6\n30 days, 0:00:00 0.26666666666666666 0.7333333333333333\n23 6\n30 days, 0:00:00 0.26666666666666666 0.7333333333333333\n23 6\n30 days, 0:00:00 0.26666666666666666 0.7333333333333333\n23 6\n30 days, 0:00:00 0.26666666666666666 0.7333333333333333\n23 6\n30 days, 0:00:00 0.26666666666666666 0.7333333333333333\n23 6\n30 days, 0:00:00 0.26666666666666666 0.7333333333333333\n23 6\n30 days, 0:00:00 0.26666666666666666 0.7333333333333333\n2015-06-24T00:00:00+00:00\n24 6\n30 days, 0:00:00 0.3 0.7\n24 6\n30 days, 0:00:00 0.3 0.7\n24 6\n30 days, 0:00:00 0.3 0.7\n24 6\n30 days, 0:00:00 0.3 0.7\n24 6\n30 days, 0:00:00 0.3 0.7\n24 6\n30 days, 0:00:00 0.3 0.7\n24 6\n30 days, 0:00:00 0.3 0.7\n24 6\n30 days, 0:00:00 0.3 0.7\n24 6\n30 days, 0:00:00 0.3 0.7\n24 6\n30 days, 0:00:00 0.3 0.7\n24 6\n30 days, 0:00:00 0.3 0.7\n24 6\n30 days, 0:00:00 0.3 0.7\n2015-06-25T00:00:00+00:00\n25 6\n30 days, 0:00:00 0.3333333333333333 0.6666666666666666\n25 6\n30 days, 0:00:00 0.3333333333333333 0.6666666666666666\n25 6\n30 days, 0:00:00 0.3333333333333333 0.6666666666666666\n25 6\n30 days, 0:00:00 0.3333333333333333 0.6666666666666666\n25 6\n30 days, 0:00:00 0.3333333333333333 0.6666666666666666\n25 6\n30 days, 0:00:00 0.3333333333333333 0.6666666666666666\n25 6\n30 days, 0:00:00 0.3333333333333333 0.6666666666666666\n25 6\n30 days, 0:00:00 0.3333333333333333 0.6666666666666666\n25 6\n30 days, 0:00:00 0.3333333333333333 0.6666666666666666\n25 6\n30 days, 0:00:00 0.3333333333333333 0.6666666666666666\n25 6\n30 days, 0:00:00 0.3333333333333333 0.6666666666666666\n25 6\n30 days, 0:00:00 
0.3333333333333333 0.6666666666666666\n2015-06-26T00:00:00+00:00\n26 6\n30 days, 0:00:00 0.36666666666666664 0.6333333333333333\n26 6\n30 days, 0:00:00 0.36666666666666664 0.6333333333333333\n26 6\n30 days, 0:00:00 0.36666666666666664 0.6333333333333333\n26 6\n30 days, 0:00:00 0.36666666666666664 0.6333333333333333\n26 6\n30 days, 0:00:00 0.36666666666666664 0.6333333333333333\n26 6\n30 days, 0:00:00 0.36666666666666664 0.6333333333333333\n26 6\n30 days, 0:00:00 0.36666666666666664 0.6333333333333333\n26 6\n30 days, 0:00:00 0.36666666666666664 0.6333333333333333\n26 6\n30 days, 0:00:00 0.36666666666666664 0.6333333333333333\n26 6\n30 days, 0:00:00 0.36666666666666664 0.6333333333333333\n26 6\n30 days, 0:00:00 0.36666666666666664 0.6333333333333333\n26 6\n30 days, 0:00:00 0.36666666666666664 0.6333333333333333\n2015-06-27T00:00:00+00:00\n27 6\n30 days, 0:00:00 0.4 0.6\n27 6\n30 days, 0:00:00 0.4 0.6\n27 6\n30 days, 0:00:00 0.4 0.6\n27 6\n30 days, 0:00:00 0.4 0.6\n27 6\n30 days, 0:00:00 0.4 0.6\n27 6\n30 days, 0:00:00 0.4 0.6\n27 6\n30 days, 0:00:00 0.4 0.6\n27 6\n30 days, 0:00:00 0.4 0.6\n27 6\n30 days, 0:00:00 0.4 0.6\n27 6\n30 days, 0:00:00 0.4 0.6\n27 6\n30 days, 0:00:00 0.4 0.6\n27 6\n30 days, 0:00:00 0.4 0.6\n2015-06-28T00:00:00+00:00\n28 6\n30 days, 0:00:00 0.43333333333333335 0.5666666666666667\n28 6\n30 days, 0:00:00 0.43333333333333335 0.5666666666666667\n28 6\n30 days, 0:00:00 0.43333333333333335 0.5666666666666667\n28 6\n30 days, 0:00:00 0.43333333333333335 0.5666666666666667\n28 6\n30 days, 0:00:00 0.43333333333333335 0.5666666666666667\n28 6\n30 days, 0:00:00 0.43333333333333335 0.5666666666666667\n28 6\n30 days, 0:00:00 0.43333333333333335 0.5666666666666667\n28 6\n30 days, 0:00:00 0.43333333333333335 0.5666666666666667\n28 6\n30 days, 0:00:00 0.43333333333333335 0.5666666666666667\n28 6\n30 days, 0:00:00 0.43333333333333335 0.5666666666666667\n28 6\n30 days, 0:00:00 0.43333333333333335 0.5666666666666667\n28 6\n30 days, 0:00:00 0.43333333333333335 0.5666666666666667\n2015-06-29T00:00:00+00:00\n29 6\n30 days, 0:00:00 0.4666666666666667 0.5333333333333333\n29 6\n30 days, 0:00:00 0.4666666666666667 0.5333333333333333\n29 6\n30 days, 0:00:00 0.4666666666666667 0.5333333333333333\n29 6\n30 days, 0:00:00 0.4666666666666667 0.5333333333333333\n29 6\n30 days, 0:00:00 0.4666666666666667 0.5333333333333333\n29 6\n30 days, 0:00:00 0.4666666666666667 0.5333333333333333\n29 6\n30 days, 0:00:00 0.4666666666666667 0.5333333333333333\n29 6\n30 days, 0:00:00 0.4666666666666667 0.5333333333333333\n29 6\n30 days, 0:00:00 0.4666666666666667 0.5333333333333333\n29 6\n30 days, 0:00:00 0.4666666666666667 0.5333333333333333\n29 6\n30 days, 0:00:00 0.4666666666666667 0.5333333333333333\n29 6\n30 days, 0:00:00 0.4666666666666667 0.5333333333333333\n[ 14.99801826 10.32102013 8.77157974] 20.041\n25.9405 20.041\n"
]
]
] |
[
"code",
"markdown",
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4a07e779fd2c8d1146fd33429c82f2c339bd8d22
| 39,983 |
ipynb
|
Jupyter Notebook
|
examples/Huang1991.ipynb
|
TheFranCouz/OpenSAFT.jl
|
633bba91b0795713bc0db8e51ff2f0e62408922b
|
[
"MIT"
] | null | null | null |
examples/Huang1991.ipynb
|
TheFranCouz/OpenSAFT.jl
|
633bba91b0795713bc0db8e51ff2f0e62408922b
|
[
"MIT"
] | null | null | null |
examples/Huang1991.ipynb
|
TheFranCouz/OpenSAFT.jl
|
633bba91b0795713bc0db8e51ff2f0e62408922b
|
[
"MIT"
] | null | null | null | 151.450758 | 16,297 | 0.676813 |
[
[
[
"empty"
]
]
] |
[
"empty"
] |
[
[
"empty"
]
] |
4a07ee1f0d99a9d2b6286aa033eebbac5724c9c3
| 202,916 |
ipynb
|
Jupyter Notebook
|
Quora Project/Quora Question Pairs.ipynb
|
thirasit/Real-world-projects-ML
|
6508b5162d9cf371343bbc19af632dabb173f34f
|
[
"MIT"
] | null | null | null |
Quora Project/Quora Question Pairs.ipynb
|
thirasit/Real-world-projects-ML
|
6508b5162d9cf371343bbc19af632dabb173f34f
|
[
"MIT"
] | null | null | null |
Quora Project/Quora Question Pairs.ipynb
|
thirasit/Real-world-projects-ML
|
6508b5162d9cf371343bbc19af632dabb173f34f
|
[
"MIT"
] | null | null | null | 202,916 | 202,916 | 0.896144 |
[
[
[
"# **Quora Question Pairs**",
"_____no_output_____"
],
[
"## **1. Business Problem**",
"_____no_output_____"
],
[
"### **1.1 Description**",
"_____no_output_____"
],
[
"Quora is a place to gain and share knowledge—about anything. It’s a platform to ask questions and connect with people who contribute unique insights and quality answers. This empowers people to learn from each other and to better understand the world.\n\nOver 100 million people visit Quora every month, so it's no surprise that many people ask similarly worded questions. Multiple questions with the same intent can cause seekers to spend more time finding the best answer to their question, and make writers feel they need to answer multiple versions of the same question. Quora values canonical questions because they provide a better experience to active seekers and writers, and offer more value to both of these groups in the long term.\n\n> Credits: Kaggle ",
"_____no_output_____"
],
[
"**Problem Statement**\n- Identify which questions asked on Quora are duplicates of questions that have already been asked. \n- This could be useful to instantly provide answers to questions that have already been answered. \n- We are tasked with predicting whether a pair of questions are duplicates or not. ",
"_____no_output_____"
],
[
"### **1.2 Sources/Useful Links**",
"_____no_output_____"
],
[
"- Source : https://www.kaggle.com/c/quora-question-pairs\n\nUseful Links\n- Discussions : https://www.kaggle.com/anokas/data-analysis-xgboost-starter-0-35460-lb/comments\n- Kaggle Winning Solution and other approaches: https://www.dropbox.com/sh/93968nfnrzh8bp5/AACZdtsApc1QSTQc7X0H3QZ5a?dl=0\n- Blog 1 : https://engineering.quora.com/Semantic-Question-Matching-with-Deep-Learning\n- Blog 2 : https://towardsdatascience.com/identifying-duplicate-questions-on-quora-top-12-on-kaggle-4c1cf93f1c30",
"_____no_output_____"
],
[
"### **1.3 Real world/Business Objectives and Constraints**",
"_____no_output_____"
],
[
"1. The cost of a mis-classification can be very high.\n2. You would want a probability of a pair of questions to be duplicates so that you can choose any threshold of choice.\n3. No strict latency concerns.\n4. Interpretability is partially important.",
"_____no_output_____"
],
[
"## **2. Machine Learning Probelm**",
"_____no_output_____"
],
[
"### **2.1 Data**",
"_____no_output_____"
],
[
"#### **2.1.1 Data Overview**\n\n\n- Data will be in a file Train.csv\n- Train.csv contains 5 columns : qid1, qid2, question1, question2, is_duplicate\n- Size of Train.csv - 60MB\n- Number of rows in Train.csv = 404,290",
"_____no_output_____"
],
[
"#### **2.1.2 Example Data point**\n\n<pre>\n\"id\",\"qid1\",\"qid2\",\"question1\",\"question2\",\"is_duplicate\"\n\"0\",\"1\",\"2\",\"What is the step by step guide to invest in share market in india?\",\"What is the step by step guide to invest in share market?\",\"0\"\n\"1\",\"3\",\"4\",\"What is the story of Kohinoor (Koh-i-Noor) Diamond?\",\"What would happen if the Indian government stole the Kohinoor (Koh-i-Noor) diamond back?\",\"0\"\n\"7\",\"15\",\"16\",\"How can I be a good geologist?\",\"What should I do to be a great geologist?\",\"1\"\n\"11\",\"23\",\"24\",\"How do I read and find my YouTube comments?\",\"How can I see all my Youtube comments?\",\"1\"\n</pre>",
"_____no_output_____"
],
[
"### **2.2 Mapping the real world problem to an ML problem**",
"_____no_output_____"
],
[
"#### **2.2.1 Type of Machine Leaning Problem**",
"_____no_output_____"
],
[
"It is a binary classification problem, for a given pair of questions we need to predict if they are duplicate or not.",
"_____no_output_____"
],
[
"#### **2.2.2 Performance Metric**",
"_____no_output_____"
],
[
"Source: https://www.kaggle.com/c/quora-question-pairs#evaluation\n\nMetric(s): \n* log-loss : https://www.kaggle.com/wiki/LogarithmicLoss\n* Binary Confusion Matrix",
"_____no_output_____"
],
[
"### **2.3 Train and Test Construction**",
"_____no_output_____"
],
[
"We build train and test by randomly splitting in the ratio of 70:30 or 80:20 whatever we choose as we have sufficient points to work with.",
"_____no_output_____"
],
[
"## **3. Exploratory Data Analysis**",
"_____no_output_____"
]
],
[
[
"!pip install Distance",
"Collecting Distance\n Downloading Distance-0.1.3.tar.gz (180 kB)\n\u001b[K |████████████████████████████████| 180 kB 4.6 MB/s \n\u001b[?25hBuilding wheels for collected packages: Distance\n Building wheel for Distance (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for Distance: filename=Distance-0.1.3-py3-none-any.whl size=16275 sha256=86c132410112bf2163caaaca11de2458422a1c74092c32b7b510138038fb05cb\n Stored in directory: /root/.cache/pip/wheels/b2/10/1b/96fca621a1be378e2fe104cfb0d160bb6cdf3d04a3d35266cc\nSuccessfully built Distance\nInstalling collected packages: Distance\nSuccessfully installed Distance-0.1.3\n"
],
[
"import numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom subprocess import check_output\n%matplotlib inline\nimport plotly.offline as py\npy.init_notebook_mode(connected=True)\nimport plotly.graph_objs as go\nimport plotly.tools as tls\nimport os\nimport gc\n\nimport re\nfrom nltk.corpus import stopwords\nimport distance\nfrom nltk.stem import PorterStemmer\nfrom bs4 import BeautifulSoup",
"_____no_output_____"
]
],
[
[
"### **3.1 Reading data and basic stats**",
"_____no_output_____"
]
],
[
[
"from google.colab import files\nuploaded = files.upload()",
"_____no_output_____"
],
[
"df = pd.read_csv(\"train.csv\")\n\nprint(\"Number of data points:\",df.shape[0])",
"Number of data points: 404290\n"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 404290 entries, 0 to 404289\nData columns (total 6 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 id 404290 non-null int64 \n 1 qid1 404290 non-null int64 \n 2 qid2 404290 non-null int64 \n 3 question1 404289 non-null object\n 4 question2 404288 non-null object\n 5 is_duplicate 404290 non-null int64 \ndtypes: int64(4), object(2)\nmemory usage: 18.5+ MB\n"
]
],
[
[
"We are given a minimal number of data fields here, consisting of:\n\n- id: Looks like a simple rowID\n- qid{1, 2}: The unique ID of each question in the pair\n- question{1, 2}: The actual textual contents of the questions.\n- is_duplicate: The label that we are trying to predict - whether the two questions are duplicates of each other.",
"_____no_output_____"
],
[
"#### **3.2.1 Distribution of data points among output classes**\n- Number of duplicate(smilar) and non-duplicate(non similar) questions",
"_____no_output_____"
]
],
[
[
"df.groupby(\"is_duplicate\")['id'].count().plot.bar()",
"_____no_output_____"
],
[
"print('~> Total number of question pairs for training:\\n {}'.format(len(df)))",
"~> Total number of question pairs for training:\n 404290\n"
],
[
"print('~> Question pairs are not Similar (is_duplicate = 0):\\n {}%'.format(100 - round(df['is_duplicate'].mean()*100, 2)))\nprint('\\n~> Question pairs are Similar (is_duplicate = 1):\\n {}%'.format(round(df['is_duplicate'].mean()*100, 2)))",
"~> Question pairs are not Similar (is_duplicate = 0):\n 63.08%\n\n~> Question pairs are Similar (is_duplicate = 1):\n 36.92%\n"
]
],
[
[
"#### **3.2.2 Number of unique questions**",
"_____no_output_____"
]
],
[
[
"qids = pd.Series(df['qid1'].tolist() + df['qid2'].tolist())\nunique_qs = len(np.unique(qids))\nqs_morethan_onetime = np.sum(qids.value_counts() > 1)\nprint ('Total number of Unique Questions are: {}\\n'.format(unique_qs))\n#print len(np.unique(qids))\n\nprint ('Number of unique questions that appear more than one time: {} ({}%)\\n'.format(qs_morethan_onetime,qs_morethan_onetime/unique_qs*100))\n\nprint ('Max number of times a single question is repeated: {}\\n'.format(max(qids.value_counts()))) \n\nq_vals=qids.value_counts()\n\nq_vals=q_vals.values",
"Total number of Unique Questions are: 537933\n\nNumber of unique questions that appear more than one time: 111780 (20.77953945937505%)\n\nMax number of times a single question is repeated: 157\n\n"
],
[
"x = [\"unique_questions\" , \"Repeated Questions\"]\ny = [unique_qs , qs_morethan_onetime]\n\nplt.figure(figsize=(10, 6))\nplt.title (\"Plot representing unique and repeated questions \")\nsns.barplot(x,y)\nplt.show()",
"/usr/local/lib/python3.7/dist-packages/seaborn/_decorators.py:43: FutureWarning:\n\nPass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.\n\n"
]
],
[
[
"#### **3.2.3 Checking for Duplicates**",
"_____no_output_____"
]
],
[
[
"#checking whether there are any repeated pair of questions\n\npair_duplicates = df[['qid1','qid2','is_duplicate']].groupby(['qid1','qid2']).count().reset_index()\n\nprint (\"Number of duplicate questions\",(pair_duplicates).shape[0] - df.shape[0])",
"Number of duplicate questions 0\n"
]
],
[
[
"#### **3.2.4 Number of occurrences of each question**",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(20, 10))\n\nplt.hist(qids.value_counts(), bins=160)\n\nplt.yscale('log', nonposy='clip')\n\nplt.title('Log-Histogram of question appearance counts')\n\nplt.xlabel('Number of occurences of question')\n\nplt.ylabel('Number of questions')\n\nprint ('Maximum number of times a single question is repeated: {}\\n'.format(max(qids.value_counts()))) ",
"Maximum number of times a single question is repeated: 157\n\n"
]
],
[
[
"#### **3.2.5 Checking for NULL values**",
"_____no_output_____"
]
],
[
[
"#Checking whether there are any rows with null values\nnan_rows = df[df.isnull().any(1)]\nprint (nan_rows)",
" id ... is_duplicate\n105780 105780 ... 0\n201841 201841 ... 0\n363362 363362 ... 0\n\n[3 rows x 6 columns]\n"
]
],
[
[
"- There are two rows with null values in question2 ",
"_____no_output_____"
]
],
[
[
"# Filling the null values with ' '\ndf = df.fillna('')\nnan_rows = df[df.isnull().any(1)]\nprint (nan_rows)",
"Empty DataFrame\nColumns: [id, qid1, qid2, question1, question2, is_duplicate]\nIndex: []\n"
]
],
[
[
"### **3.3 Basic Feature Extraction (before cleaning)**",
"_____no_output_____"
],
[
"Let us now construct a few features like:\n - **freq_qid1** = Frequency of qid1's\n - **freq_qid2** = Frequency of qid2's \n - **q1len** = Length of q1\n - **q2len** = Length of q2\n - **q1_n_words** = Number of words in Question 1\n - **q2_n_words** = Number of words in Question 2\n - **word_Common** = (Number of common unique words in Question 1 and Question 2)\n - **word_Total** =(Total num of words in Question 1 + Total num of words in Question 2)\n - **word_share** = (word_common)/(word_Total)\n - **freq_q1+freq_q2** = sum total of frequency of qid1 and qid2 \n - **freq_q1-freq_q2** = absolute difference of frequency of qid1 and qid2 ",
"_____no_output_____"
]
],
[
[
"if os.path.isfile('df_fe_without_preprocessing_train.csv'):\n df = pd.read_csv(\"df_fe_without_preprocessing_train.csv\",encoding='latin-1')\nelse:\n df['freq_qid1'] = df.groupby('qid1')['qid1'].transform('count') \n df['freq_qid2'] = df.groupby('qid2')['qid2'].transform('count')\n df['q1len'] = df['question1'].str.len() \n df['q2len'] = df['question2'].str.len()\n df['q1_n_words'] = df['question1'].apply(lambda row: len(row.split(\" \")))\n df['q2_n_words'] = df['question2'].apply(lambda row: len(row.split(\" \")))\n\n def normalized_word_Common(row):\n w1 = set(map(lambda word: word.lower().strip(), row['question1'].split(\" \")))\n w2 = set(map(lambda word: word.lower().strip(), row['question2'].split(\" \"))) \n return 1.0 * len(w1 & w2)\n df['word_Common'] = df.apply(normalized_word_Common, axis=1)\n\n def normalized_word_Total(row):\n w1 = set(map(lambda word: word.lower().strip(), row['question1'].split(\" \")))\n w2 = set(map(lambda word: word.lower().strip(), row['question2'].split(\" \"))) \n return 1.0 * (len(w1) + len(w2))\n df['word_Total'] = df.apply(normalized_word_Total, axis=1)\n\n def normalized_word_share(row):\n w1 = set(map(lambda word: word.lower().strip(), row['question1'].split(\" \")))\n w2 = set(map(lambda word: word.lower().strip(), row['question2'].split(\" \"))) \n return 1.0 * len(w1 & w2)/(len(w1) + len(w2))\n df['word_share'] = df.apply(normalized_word_share, axis=1)\n\n df['freq_q1+q2'] = df['freq_qid1']+df['freq_qid2']\n df['freq_q1-q2'] = abs(df['freq_qid1']-df['freq_qid2'])\n\n df.to_csv(\"df_fe_without_preprocessing_train.csv\", index=False)\n\ndf.head()",
"_____no_output_____"
]
],
[
[
"#### **3.3.1 Analysis of some of the extracted features**",
"_____no_output_____"
],
[
"- Here are some questions have only one single words.",
"_____no_output_____"
]
],
[
[
"print (\"Minimum length of the questions in question1 : \" , min(df['q1_n_words']))\n\nprint (\"Minimum length of the questions in question2 : \" , min(df['q2_n_words']))\n\nprint (\"Number of Questions with minimum length [question1] :\", df[df['q1_n_words']== 1].shape[0])\nprint (\"Number of Questions with minimum length [question2] :\", df[df['q2_n_words']== 1].shape[0])",
"Minimum length of the questions in question1 : 1\nMinimum length of the questions in question2 : 1\nNumber of Questions with minimum length [question1] : 67\nNumber of Questions with minimum length [question2] : 24\n"
]
],
[
[
"##### **3.3.1.1 Feature: word_share**",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(12, 8))\n\nplt.subplot(1,2,1)\nsns.violinplot(x = 'is_duplicate', y = 'word_share', data = df[0:])\n\nplt.subplot(1,2,2)\nsns.distplot(df[df['is_duplicate'] == 1.0]['word_share'][0:] , label = \"1\", color = 'red')\nsns.distplot(df[df['is_duplicate'] == 0.0]['word_share'][0:] , label = \"0\" , color = 'blue' )\nplt.show()",
"/usr/local/lib/python3.7/dist-packages/seaborn/distributions.py:2619: FutureWarning:\n\n`distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n\n/usr/local/lib/python3.7/dist-packages/seaborn/distributions.py:2619: FutureWarning:\n\n`distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n\n"
]
],
[
[
"- The distributions for normalized word_share have some overlap on the far right-hand side, i.e., there are quite a lot of questions with high word similarity\n- The average word share and Common no. of words of qid1 and qid2 is more when they are duplicate(Similar)",
"_____no_output_____"
],
[
"##### **3.3.1.2 Feature: word_Common**",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(12, 8))\n\nplt.subplot(1,2,1)\nsns.violinplot(x = 'is_duplicate', y = 'word_Common', data = df[0:])\n\nplt.subplot(1,2,2)\nsns.distplot(df[df['is_duplicate'] == 1.0]['word_Common'][0:] , label = \"1\", color = 'red')\nsns.distplot(df[df['is_duplicate'] == 0.0]['word_Common'][0:] , label = \"0\" , color = 'blue' )\nplt.show()",
"/usr/local/lib/python3.7/dist-packages/seaborn/distributions.py:2619: FutureWarning:\n\n`distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n\n/usr/local/lib/python3.7/dist-packages/seaborn/distributions.py:2619: FutureWarning:\n\n`distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).\n\n"
]
],
[
[
"The distributions of the word_Common feature in similar and non-similar questions are highly overlapping",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4a07f0d5426f04d469669ae30205c27a854633e6
| 30,355 |
ipynb
|
Jupyter Notebook
|
notebooks/Hydrotrend_lesson_1.ipynb
|
pymt-lab/pymt_hydrotrend
|
85289c8380d29b7881cfcfb1e927c6004768cdab
|
[
"MIT"
] | null | null | null |
notebooks/Hydrotrend_lesson_1.ipynb
|
pymt-lab/pymt_hydrotrend
|
85289c8380d29b7881cfcfb1e927c6004768cdab
|
[
"MIT"
] | 1 |
2018-10-19T21:57:15.000Z
|
2018-10-31T22:56:45.000Z
|
notebooks/Hydrotrend_lesson_1.ipynb
|
mcflugen/pymt_hydrotrend
|
5d3d74be20584d2843721f3c81fcf9df60d053a1
|
[
"MIT"
] | 1 |
2020-09-07T02:18:08.000Z
|
2020-09-07T02:18:08.000Z
| 68.988636 | 15,468 | 0.797496 |
[
[
[
"# River Sediment Supply Modeling with HydroTrend\n\nIf you have never used the CSDMS Python Modeling Toolkit (PyMT), learn how to use it here.\n\nWe are using a theoretical river basin of ~1990 km2, with 1200m of relief and a river length of\n~100 km. All parameters that are shown by default once the HydroTrend Model is loaded are based\non a present-day, temperate climate. Whereas these runs are not meant to be specific, we are\nusing parameters that are realistic for the [Waiapaoa River][map_of_waiapaoa] in New Zealand. The Waiapaoa River\nis located on North Island and receives high rain and has erodible soils, so the river sediment\nloads are exceptionally high. It has been called the *\"dirtiest small river in the world\"*.\n\nTo learn more about HydroTrend and its approach to sediment supply modeling, you can download\nthis [presentation][hydrotrend_presentation].\n\n[map_of_waiapaoa]: https://www.google.com/maps/place/Waipaoa+River/@-38.5099042,177.7668002,71814m/data=!3m1!1e3!4m5!3m4!1s0x6d65def908624859:0x2a00ef6165e1dfa0!8m2!3d-38.5392405!4d177.8843782\n[hydrotrend_presentation]: https://csdms.colorado.edu/wiki/File:SedimentSupplyModeling02_2013.ppt",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\n\nimport pymt",
"\u001b[32m✓ Sedflux3D\u001b[39;49;00m\n\u001b[32m✓ Child\u001b[39;49;00m\n\u001b[32m✓ Hydrotrend\u001b[39;49;00m\n\u001b[32m✓ OverlandFlow\u001b[39;49;00m\n\u001b[32m✓ BmiFrostNumberMethod\u001b[39;49;00m\n\u001b[32m✓ BmiKuMethod\u001b[39;49;00m\n"
],
[
"hydrotrend = pymt.plugins.Hydrotrend()",
"_____no_output_____"
]
],
[
[
"HydroTrend will now be active in the WMT. The HydroTrend Parameter list is used to set the parameters for any simulation. You can set the parameters by going through the different tabs in the parameter list. Once your input is set up, you save the information. Then, you can run it by hitting the arrow run button. This way you generate a job script that can be submitted to Beach-the CSDMS High Performance Computing System. Provide your Beach account information (i.e. user name and password) to get the run started. The status page allows you to keep track of a simulation. From the status page you can eventually download your output files. \n",
"_____no_output_____"
],
[
"## Exercise 1: Explore the base-case river simulation\n\nThe default \"base-case\" simulation for 50 years at daily time-step. This means you run Hydrotrend for 18,250 days total. ",
"_____no_output_____"
]
],
[
[
"config_file, config_folder = hydrotrend.setup()",
"_____no_output_____"
],
[
"hydrotrend.initialize(config_file, config_folder)",
"_____no_output_____"
],
[
"hydrotrend.output_var_names",
"_____no_output_____"
],
[
"hydrotrend.get_start_time(), hydrotrend.get_current_time(), hydrotrend.get_end_time(), hydrotrend.get_time_step(), hydrotrend.get_time_units()",
"_____no_output_____"
],
[
"n_days = int(hydrotrend.get_end_time())\nq = np.empty(n_days)\nqs = np.empty(n_days)\ncs = np.empty(n_days)\nqb = np.empty(n_days)\nfor i in range(n_days):\n hydrotrend.update()\n q[i] = hydrotrend.get_value(\"channel_exit_water__volume_flow_rate\")\n qs[i] = hydrotrend.get_value(\"channel_exit_water_sediment~suspended__mass_flow_rate\")\n cs[i] = hydrotrend.get_value(\"channel_exit_water_sediment~suspended__mass_concentration\")\n qb[i] = hydrotrend.get_value(\"channel_exit_water_sediment~bedload__mass_flow_rate\")",
"_____no_output_____"
],
[
"plt.plot(qs)",
"_____no_output_____"
]
],
[
[
"## Q1a: Calculate mean water discharge Q, mean suspended load Qs, mean sediment concentration Cs, and mean bedload Qb.\n\n*Note all values are reported as daily averages. What are the units?*\n\n*A1a*:",
"_____no_output_____"
]
],
[
[
"(\n (q.mean(), hydrotrend.get_var_units(\"channel_exit_water__volume_flow_rate\")),\n (cs.mean(), hydrotrend.get_var_units(\"channel_exit_water_sediment~suspended__mass_flow_rate\")),\n (qs.mean(), hydrotrend.get_var_units(\"channel_exit_water_sediment~suspended__mass_concentration\")),\n (qb.mean(), hydrotrend.get_var_units(\"channel_exit_water_sediment~bedload__mass_flow_rate\"))\n)",
"_____no_output_____"
],
[
"hydrotrend.get_var_units(\"channel_exit_water__volume_flow_rate\")",
"_____no_output_____"
]
],
[
[
"## Q1b: Identify the highest flood event for this simulation. Is this the 50-year flood? Plot the year of Q-data which includes the flood.\n\n*A1b*:",
"_____no_output_____"
]
],
[
[
"flood_day = q.argmax()\nflood_year // 365\nplt.plot(q[flood_year * 365: (flood_year + 1) * 365])",
"_____no_output_____"
],
[
"q.max()",
"_____no_output_____"
]
],
[
[
"## Q1c: Calculate the mean annual sediment load for this river system.\n\n*A1c*:",
"_____no_output_____"
]
],
[
[
"qs_by_year = qs.reshape((-1, 365))\nqs_annual = qs_by_year.sum(axis=1)\nplt.plot(qs_annual)",
"_____no_output_____"
],
[
"qs_annual.mean()",
"_____no_output_____"
]
],
[
[
"## Q1d: How does the sediment yield of this river system compare to the present-day Mississippi River?\n\n*To compare the mean annual load to other river systems you will need to calculate its sediment yield. \nSediment Yield is defined as sediment load normalized for the river drainage area; \nso it can be reported in T/km2/yr.*\n\n*A1d*:",
"_____no_output_____"
],
[
"# Exercise 2: How does a river system respond to climate change; a few simple scenarios for the coming century.\n\nNow we will look at changing climatic conditions in a small river basin. We'll change temperature and precipitation regimes and compare discharge and sediment load characteristics to the original basecase. And we will look at the are potential implications of changes in the peak events.\n\nModify the mean annual temperature T, the mean annual precipitation P, and its the variability of the yearly means through the standard deviation. You can specify trends over time, by modifying the parameter ‘change in mean annual temperature’ or ‘change in mean annual precipitation’. HydroTrend runs at daily timestep, and thus can deal with seasonal variations in temperature and precipitation for a basin. The model ingests monthly mean input values for these two climate parameters and their monthly standard deviations, ideally the values would be derived from analysis of a longterm record of daily climate data. You can adapt seasonal trends by using the monthly values.",
"_____no_output_____"
],
[
"## Q2a: What happens to discharge, suspended load and bedload if the mean annual temperature in this specific river basin increases by 4 °C over 50 years?\n\n*A2a*:",
"_____no_output_____"
],
[
"## Q2b: How much increase of discharge do you see after 50 years? How is the average suspended load affected? How does the bedload change? What happens to the peak event; look at the maximum discharge event of the last 10 years of the simulation?\n\n*A2b*:",
"_____no_output_____"
],
[
"## Q2c: In addition, climate model predictions indicate that perhaps precipitation intensity and variability could increase. How would you model this; discuss all your input settings for precipitation.\n\n*A2c*:",
"_____no_output_____"
],
[
"# Exercise 3: How do humans affect river sediment loads?\n\nHere we will look at the effect of human in a river basin. Humans can accelerate erosion\nprocesses, or reduce the sediment loads traveling through a river system. Both concepts can\nbe simulated, first run 3 simulations systematically increasing the anthropogenic factor (0.5-8.0 is the range).",
"_____no_output_____"
],
[
"## Q3a: Describe in your own words the meaning of the human-induced erosion factor, (Eh) (Syvitski & Milliman, 2007). This factor is parametrized as the “Antropogenic” factor in HydroTrend. See references for the paper.\n\n*A3a*:",
"_____no_output_____"
],
[
"Model a scenario of a drinking water supply reservoir to be planned in the coastal area of the basin. The reservoir would have 800 km 2of contributing drainage area and be 3 km long, 200m wide and 100m deep. Set up a simulation with these parameters.",
"_____no_output_____"
],
[
"## Q3b: How would such a reservoir affect the sediment load at the coast (i.e. downstream of the reservoir)?\n\n*A3b*:",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
4a07f0e77ac651a6ef8ec7cba786674ac9e77a85
| 756,779 |
ipynb
|
Jupyter Notebook
|
nbs/design-sac.ipynb
|
hududed/multifidelity-sac
|
5ff6d5fec09c2b0340abc37fd129b7a9492c15e9
|
[
"MIT"
] | null | null | null |
nbs/design-sac.ipynb
|
hududed/multifidelity-sac
|
5ff6d5fec09c2b0340abc37fd129b7a9492c15e9
|
[
"MIT"
] | null | null | null |
nbs/design-sac.ipynb
|
hududed/multifidelity-sac
|
5ff6d5fec09c2b0340abc37fd129b7a9492c15e9
|
[
"MIT"
] | null | null | null | 540.556429 | 427,049 | 0.682284 |
[
[
[
"%load_ext autoreload\n%autoreload 2\n%aiida",
"UsageError: Line magic function `%aiida` not found.\n"
]
],
[
[
"# Create structure using pymatgen",
"_____no_output_____"
]
],
[
[
"#!pip install pymatgen",
"_____no_output_____"
],
[
"import sys \nsys.path.insert(0, '../src/')\nfrom view import *\nfrom functions import *\n\nfrom pymatgen.core.structure import Structure, Lattice\n\nfrom pymatgen.transformations.advanced_transformations import CubicSupercellTransformation\n\nstruct = Structure.from_spacegroup('P6/mmm',\n Lattice.hexagonal(2.46,15),\n ['C'],\n [[1/3,2/3,1/2]])\ntfms = CubicSupercellTransformation().apply_transformation(struct)",
"_____no_output_____"
],
[
"len(tfms)",
"_____no_output_____"
],
[
"tfms = tfms*[2,2,1]\nview_top(tfms)",
"_____no_output_____"
],
[
"tfms.lattice",
"_____no_output_____"
]
],
[
[
"## Definitions for creating sac structures",
"_____no_output_____"
],
[
"### Single TM",
"_____no_output_____"
]
],
[
[
"def create_small_(element,adsorbate=True):\n sup = CubicSupercellTransformation().apply_transformation(struct)\n idx = []\n for i,coords in enumerate(sup.cart_coords):\n if coords[0] < 3 : idx.append(i)\n elif coords[0] > 14: idx.append(i)\n elif coords[1] < 3 : idx.append(i) \n elif coords[1] > 14 : idx.append(i)\n sup.remove_sites(idx)\n\n sup.remove_sites(find_corner_idx(sup))\n sup.remove_sites(mid_idx(sup))\n\n for i in around_idx(sup):\n sup[i] = 'N'\n\n for i in h_idx(sup): sup[i]='H'\n\n if adsorbate==True:\n sup.append('H',[find_x_center(sup),find_y_center(sup),9.0],coords_are_cartesian=True)\n sup.append(element,[find_x_center(sup),find_y_center(sup),7.5],coords_are_cartesian=True)\n \n return sup\n\n\ndef create_large_(element,adsorbate=True):\n sup = CubicSupercellTransformation().apply_transformation(struct)\n sup = sup*[2,2,1]\n\n idx = []\n for i,coords in enumerate(sup.cart_coords):\n if coords[0] < 1.5: idx.append(i)\n elif coords[0] > 23: idx.append(i)\n elif coords[1] < 1 : idx.append(i) \n elif coords[1] > 20 : idx.append(i)\n sup.remove_sites(idx)\n \n sup.remove_sites(find_corner_ix(sup,mid=False))\n \n sup.remove_sites(mid_idx(sup))\n for i in around_idx(sup):\n sup[i] = 'N'\n\n for i in h_idx(sup): sup[i]='H'\n for i in h_idx2(sup): sup[i]='H'\n\n if adsorbate==True:\n sup.append('H',[find_x_center(sup),find_y_center(sup),9.0],coords_are_cartesian=True)\n sup.append(element,[find_x_center(sup),find_y_center(sup),7.5],coords_are_cartesian=True)\n return sup\n\ndef create_medium_(element,adsorbate=True):\n sup = CubicSupercellTransformation().apply_transformation(struct)\n sup = sup*[2,2,1]\n\n idx = []\n for i,coords in enumerate(sup.cart_coords):\n if coords[0] < 8 : idx.append(i)\n elif coords[0] > 24.5: idx.append(i)\n elif coords[1] < 9.5 : idx.append(i) \n elif coords[1] > 24.8 : idx.append(i)\n sup.remove_sites(idx)\n\n sup.remove_sites(find_corner_ix(sup))\n\n sup.remove_sites(mid_idx(sup))\n\n for i in around_idx(sup):\n sup[i] = 'N'\n\n for i in h_idx(sup): sup[i]='H'\n\n if adsorbate==True:\n sup.append('H',[find_x_center(sup),find_y_center(sup),9.0],coords_are_cartesian=True)\n sup.append(element,[find_x_center(sup),find_y_center(sup),7.5],coords_are_cartesian=True)\n return sup",
"_____no_output_____"
]
],
[
[
"### Co-TM",
"_____no_output_____"
]
],
[
[
"def create_large_co(element1,element2,adsorbate=True):\n \n sup = CubicSupercellTransformation().apply_transformation(struct)\n sup = sup*[2,2,1]\n\n idx = []\n for i,coords in enumerate(sup.cart_coords):\n if coords[0] < 1.5: idx.append(i)\n elif coords[0] > 23: idx.append(i)\n elif coords[1] < 1 : idx.append(i) \n elif coords[1] > 20 : idx.append(i)\n sup.remove_sites(idx)\n\n sup.remove_sites(find_corner_ix(sup,mid=False))\n\n sup.remove_sites(mid_idx_co(sup))\n for i in around_idx_co(sup):\n sup[i] = 'N'\n\n for i in h_idx(sup): sup[i]='H'\n for i in h_idx2(sup): sup[i]='H'\n\n tm1 = [find_x_center_co(sup)[0],\n find_y_center_co(sup),\n 7.5]\n tm2 = [find_x_center_co(sup)[1],\n find_y_center_co(sup),\n 7.5]\n h1 = [find_x_center_co(sup)[0],\n find_y_center_co(sup),\n 9.0]\n h2 = [find_x_center_co(sup)[1],\n find_y_center_co(sup),\n 9.0]\n \n if adsorbate==True:\n sup.append('H',h1,coords_are_cartesian=True)\n sup.append('H',h2,coords_are_cartesian=True)\n\n sup.append(element1,tm1,coords_are_cartesian=True) #(11.07,12.07)\n sup.append(element2,tm2,coords_are_cartesian=True) #(13.53,12.07)\n \n return sup",
"_____no_output_____"
]
],
[
[
"## View Structures",
"_____no_output_____"
]
],
[
[
"view_top(create_large_('Sc')) # Single TM",
"_____no_output_____"
],
[
"view_top(create_large_co('Sc','V',adsorbate=True)) # Co TM",
"_____no_output_____"
]
],
[
[
"## Create list of structures",
"_____no_output_____"
]
],
[
[
"tms = ['Sc','Ti','V','Cr','Mn','Fe','Co','Ni','Cu','Zn','Zr','Nb','Mo','Tc',\\\n 'Ru','Rh','Pd','Ag','Cd','Hf','Ta','W','Re','Os','Ir','Pt','Au']\n\nsmall_total=[]\nfor tm in tms:\n small_total.append(create_small_(tm))\n \nmedium_total=[]\nfor tm in tms:\n medium_total.append(create_medium_(tm))\n\nlarge_total=[]\nfor tm in tms:\n large_total.append(create_large_(tm))\n \nsmall=[]\nfor tm in tms:\n small.append(create_small_(tm,adsorbate=False))\n \nmedium=[]\nfor tm in tms:\n medium.append(create_medium_(tm,adsorbate=False))\n\nlarge=[]\nfor tm in tms:\n large.append(create_large_(tm,adsorbate=False))",
"_____no_output_____"
],
[
"len(small), len(medium), len(large), len(small_total), len(medium_total), len(large_total)",
"_____no_output_____"
]
],
[
[
"## Write POSCAR file",
"_____no_output_____"
]
],
[
[
"from pymatgen.io.vasp.inputs import Poscar\n\nsupp = create_large_co('Sc','V',adsorbate=True)\ntest = Poscar(supp)\ntest.write_file('POSCAR_co')",
"_____no_output_____"
]
],
[
[
"## Run aiida for all LF ",
"_____no_output_____"
]
],
[
[
"from aiida import orm\nfrom aiida import plugins\nfrom aiida.plugins import DataFactory\nfrom aiida.engine import submit\nfrom aiida.orm.nodes.data.upf import get_pseudos_from_structure\n\nPwBaseWorkChain = plugins.WorkflowFactory('quantumespresso.pw.base')\n\ncode = load_code('qe-6.6-pw@arcc-msi')\n\nstructures = small\n\nStructureData = DataFactory(\"structure\")\nKpointsData = DataFactory('array.kpoints')\nkpoints = KpointsData()\nkpoints.set_kpoints_mesh([1,1,1])\n\ninputs = {\n 'pw': {\n 'code': code,\n 'parameters': orm.Dict(dict={\n 'CONTROL': {\n 'calculation':'scf',\n },\n 'SYSTEM':{\n 'ecutwfc':150.,\n 'occupations':'smearing',\n 'degauss':0.02\n },\n 'ELECTRONS':{\n 'conv_thr':1.e-6,\n }\n }),\n 'metadata':{\n 'label':'LF-smallH',\n 'options':{\n 'account':'rd-hea',\n 'resources':{\n 'num_machines':1,\n 'num_cores_per_mpiproc':32\n },\n 'max_wallclock_seconds':1*24*60*60,\n 'max_memory_kb':int(128e6)\n }\n }\n },\n 'kpoints': kpoints,\n}\n\nfor structure in structures:\n inputs['pw']['structure'] = StructureData(pymatgen_structure=structure)\n inputs['pw']['pseudos'] = get_pseudos_from_structure(StructureData(pymatgen=structure),'SSSP')\n submit(PwBaseWorkChain, **inputs)",
"_____no_output_____"
]
],
[
[
"## Run aiida for HF calcs",
"_____no_output_____"
]
],
[
[
"a = np.array([13,14,15,16,20,21,22,23,24])\nsmall_redo = [small_total[i] for i in a] # accidentally did 6-LARGE",
"_____no_output_____"
],
[
"view_top(small_redo[1])",
"_____no_output_____"
],
[
"len(small_total[613])",
"_____no_output_____"
],
[
"from aiida import orm\nfrom aiida import plugins\nfrom aiida.plugins import DataFactory, WorkflowFactory\nfrom aiida.engine import submit\nfrom aiida.orm.nodes.data.upf import get_pseudos_from_structure\n\n# PwBaseWorkChain = WorkflowFactory('quantumespresso.pw.base')\nPwRelaxWorkChain = WorkflowFactory('quantumespresso.pw.relax')\n\ncode = load_code('qe-6.6-pw@arcc-msi')\n\nstructures = small_redo\n\nStructureData = DataFactory(\"structure\")\nKpointsData = DataFactory('array.kpoints')\nkpoints = KpointsData()\nkpoints.set_kpoints_mesh([3,3,1])\n\ninputs = {\n 'base':{\n 'pw': {\n 'code': code,\n 'parameters': orm.Dict(dict={\n 'SYSTEM':{\n 'ecutwfc':300.,\n 'occupations':'smearing',\n 'degauss':0.02\n \n },\n 'ELECTRONS':{\n 'conv_thr':1.e-6,\n }\n }),\n 'metadata':{\n 'label':'HF-medium',\n 'options':{\n 'account':'rd-hea',\n 'resources':{\n 'num_machines':4,\n 'num_cores_per_mpiproc':32\n },\n 'max_wallclock_seconds':2*24*60*60,\n 'max_memory_kb':int(128e6)\n }\n }\n },\n 'kpoints': kpoints, \n },\n 'relaxation_scheme':orm.Str('relax')\n}\n\nfor structure in structures:\n inputs['structure'] = StructureData(pymatgen=structure)\n inputs['base']['pw']['pseudos'] = get_pseudos_from_structure(StructureData(pymatgen=structure),'SSSP')\n# inputs['base_final_scf']['pw']['pseudos'] = get_pseudos_from_structure(StructureData(pymatgen=structure),'SSSP')\n submit(PwRelaxWorkChain, **inputs)",
"_____no_output_____"
]
],
[
[
"## Results",
"_____no_output_____"
],
[
"### 150 eV SMALL LF",
"_____no_output_____"
]
],
[
[
"lst_out = [15025,15029,15036,15040,15047,15051,15058,15062,15069,\\\n 15073,15077,15084,15091,15095,15099,15106,15110,15117,\\\n 15121,15128,15132,15139,15143,15150,15154,15161,15165]",
"_____no_output_____"
],
[
"LF_small_out=[]\nlst_out = [15025,15029,15036,15040,15047,15051,15058,15062,15069,\\\n 15073,15077,15084,15091,15095,15099,15106,15110,15117,\\\n 15121,15128,15132,15139,15143,15150,15154,15161,15165]\nd_1 = pd.DataFrame(small_comp,columns=['comp'])\nfor i in lst_out:\n LF_small_out.append(load_node(i).outputs.output_parameters.\\\n dict.energy)\nd_1['Ecat_LF']=LF_small_out\nd_1.head()",
"_____no_output_____"
],
[
"d_1",
"_____no_output_____"
],
[
"d_1['E_LF']=d_1['EcatH_LF']-d_1['Ecat_LF']+16\nd_1.reset_index().plot(x='index', y='E_LF',kind='bar')",
"_____no_output_____"
]
],
[
[
"### 150 eV SMALL+H LF",
"_____no_output_____"
]
],
[
[
"tms = ['Sc','Ti','V','Cr','Mn','Fe','Co','Ni','Cu',\\\n 'Zn','Zr','Nb','Mo','Tc','Ru','Rh','Pd','Ag',\\\n 'Cd','Hf','Ta','W','Re','Os','Ir','Pt','Au']",
"_____no_output_____"
],
[
"import pandas as pd\nfrom aiida import orm\nfrom aiida import plugins\nfrom aiida.plugins import DataFactory\nfrom aiida.engine import submit\nfrom aiida.orm.nodes.data.upf import get_pseudos_from_structure\n\nStructureData = DataFactory(\"structure\")\nKpointsData = DataFactory('array.kpoints')\n\nsmall_comp = [StructureData(pymatgen_structure=f).get_formula() for f in small]\n# d_2 = pd.DataFrame(small_comp,columns=['comp'])\n\nLF_small_out=[]\nlst_out = [14699,14703,14710,14714,14721,14725,14732,14736,14743,\\\n 14747,14754,14758,14765,14769,14776,14780,14787,14791,\\\n 14798,14802,14809,14813,14820,14824,14831,14835,14839]\n\nfor i in lst_out:\n LF_small_out.append(load_node(i).outputs.output_parameters.\\\n dict.energy)\nd_1['EcatH_LF']=LF_small_out\nd_1.head()\nd_1.to_csv('LF_small_150.csv')",
"_____no_output_____"
]
],
[
[
"### 300 eV SMALL HF",
"_____no_output_____"
]
],
[
[
"tms = ['Sc','Ti','V','Cr','Mn','Fe','Co','Ni','Cu',\\\n 'Zn','Zr','Nb','Mo','Tc','Ru','Rh','Pd','Ag',\\\n 'Cd','Hf','Ta','W','Re','Os','Ir','Pt','Au']",
"_____no_output_____"
],
[
"import pandas as pd\nfrom aiida import orm\nfrom aiida import plugins\nfrom aiida.plugins import DataFactory\nfrom aiida.engine import submit\nfrom aiida.orm.nodes.data.upf import get_pseudos_from_structure\n\nStructureData = DataFactory(\"structure\")\nKpointsData = DataFactory('array.kpoints')\n\nsmall_comp = [StructureData(pymatgen_structure=f).get_formula() for f in small]\nd_2 = pd.DataFrame(small_comp,columns=['comp'])\n\nLF_small_out=[]\nlst_out = [12388,12396,12407,12418,12429,12440,12451,12462,12470,\\\n 12481,12492,12503,12511,12807,12815,12826,12837,12848,\\\n 12859,12867,12878,12889,12900,12908,12919,12930,12941]\n\nfor i in lst_out:\n LF_small_out.append(load_node(i).outputs.output_parameters.\\\n dict.energy)\nd_2['Ecat_HF']=LF_small_out\nd_2.head()",
"_____no_output_____"
]
],
[
[
"### 300 eV SMALL+H HF",
"_____no_output_____"
]
],
[
[
"tms = ['Sc','Ti','V','Cr','Mn','Fe','Co','Ni','Cu',\\\n 'Zn','Zr','Nb','Mo','Tc','Ru','Rh','Pd','Ag',\\\n 'Cd','Hf','Ta','W','Re','Os','Ir','Pt','Au']",
"_____no_output_____"
],
[
"import pandas as pd\nfrom aiida import orm\nfrom aiida import plugins\nfrom aiida.plugins import DataFactory\nfrom aiida.engine import submit\nfrom aiida.orm.nodes.data.upf import get_pseudos_from_structure\n\nStructureData = DataFactory(\"structure\")\nKpointsData = DataFactory('array.kpoints')\n\nsmall_comp = [StructureData(pymatgen_structure=f).get_formula() for f in small]\n# d_2 = pd.DataFrame(small_comp,columns=['comp'])\n\nLF_small_out=[]\nlst_out = [13285,13293,13304,13315,13326,13678,13905,13356,13367,\\\n 13378,13916,13697,13924,14395,14403,14414,14425,14046,\\\n 14057,14068,14436,14447,14458,14466,14477,14128,14139]\n\nfor i in lst_out:\n LF_small_out.append(load_node(i).outputs.output_parameters.\\\n dict.energy)\nd_2['EcatH_HF']=LF_small_out\nd_2.head()\nd_2.to_csv('HF_small_300.csv')",
"_____no_output_____"
],
[
"d_2['E_HF']=d_2['EcatH_HF']-d_2['Ecat_HF']+16\nd_2.reset_index().plot(x='index', y='E_HF',kind='bar')",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
4a081728a27fe8c98bf8378e5b837e9f1961daf7
| 6,946 |
ipynb
|
Jupyter Notebook
|
content/articles/website-meta/example-notebook.ipynb
|
obestwalter/obestwalter.github.io
|
d2407838dd690ad09b71b78d21130a697756be94
|
[
"MIT"
] | 2 |
2016-03-28T22:07:31.000Z
|
2018-06-02T16:50:26.000Z
|
content/articles/website-meta/example-notebook.ipynb
|
obestwalter/obestwalter.github.io
|
d2407838dd690ad09b71b78d21130a697756be94
|
[
"MIT"
] | 2 |
2017-02-24T15:44:00.000Z
|
2018-11-04T17:49:15.000Z
|
content/articles/website-meta/example-notebook.ipynb
|
obestwalter/obestwalter.github.io
|
d2407838dd690ad09b71b78d21130a697756be94
|
[
"MIT"
] | null | null | null | 57.404959 | 1,555 | 0.643248 |
[
[
[
"\nThis is a raw cell, it will stay unchanged.\n",
"_____no_output_____"
]
],
[
[
"## This is a header in a markdown cell\n\nThis is a paragraph in a markdown cell. It will be rendered and provides all kinds of fancy things.",
"_____no_output_____"
]
],
[
[
"# this is a code cell containing Python code - it can be executed\n# and the output will be shown in the notebook\nprint(\"I am the output of a code cell.\")\n2 + 5 # The evaluated result on the last line, will also be shown",
"I the output of a code cell.\n"
],
[
"from pathlib import Path\n\n# this will throw an exception\nPath(\"idontexist\").read_text()",
"_____no_output_____"
]
]
] |
[
"raw",
"markdown",
"code"
] |
[
[
"raw"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4a082a966ca0470b8aa893c43fd9738b50c1db7a
| 32,372 |
ipynb
|
Jupyter Notebook
|
notebooks/User-Guide.ipynb
|
rschmoltzi/networkit
|
e9d3d770a868d1a5f853b9bf0d5fc9c2de7848c1
|
[
"MIT"
] | null | null | null |
notebooks/User-Guide.ipynb
|
rschmoltzi/networkit
|
e9d3d770a868d1a5f853b9bf0d5fc9c2de7848c1
|
[
"MIT"
] | null | null | null |
notebooks/User-Guide.ipynb
|
rschmoltzi/networkit
|
e9d3d770a868d1a5f853b9bf0d5fc9c2de7848c1
|
[
"MIT"
] | 1 |
2019-10-16T18:10:56.000Z
|
2019-10-16T18:10:56.000Z
| 28.396491 | 707 | 0.607871 |
[
[
[
"# NetworKit User Guide",
"_____no_output_____"
],
[
"## About NetworKit",
"_____no_output_____"
],
[
"[NetworKit][networkit] is an open-source toolkit for high-performance\nnetwork analysis. Its aim is to provide tools for the analysis of large\nnetworks in the size range from thousands to billions of edges. For this\npurpose, it implements efficient graph algorithms, many of them parallel to\nutilize multicore architectures. These are meant to compute standard measures\nof network analysis, such as degree sequences, clustering coefficients and\ncentrality. In this respect, NetworKit is comparable\nto packages such as [NetworkX][networkx], albeit with a focus on parallelism \nand scalability. NetworKit is also a testbed for algorithm engineering and\ncontains a few novel algorithms from recently published research, especially\nin the area of community detection.\n\n[networkit]: http://parco.iti.kit.edu/software/networkit.shtml \n[networkx]: http://networkx.github.com/\n\n",
"_____no_output_____"
],
[
"## Introduction",
"_____no_output_____"
],
[
"This notebook provides an interactive introduction to the features of NetworKit, consisting of text and executable code. We assume that you have read the Readme and successfully built the core library and the Python module. Code cells can be run one by one (e.g. by selecting the cell and pressing `shift+enter`), or all at once (via the `Cell->Run All` command). Try running all cells now to verify that NetworKit has been properly built and installed.\n",
"_____no_output_____"
],
[
"## Preparation",
"_____no_output_____"
],
[
"This notebook creates some plots. To show them in the notebook, matplotlib must be imported and we need to activate matplotlib's inline mode:",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"NetworKit is a hybrid built from C++ and Python code: Its core functionality is implemented in C++ for performance reasons, and then wrapped for Python using the Cython toolchain. This allows us to expose high-performance parallel code as a normal Python module. On the surface, NetworKit is just that and can be imported accordingly:",
"_____no_output_____"
]
],
[
[
"import networkit as nk",
"_____no_output_____"
]
],
[
[
"## Reading and Writing Graphs",
"_____no_output_____"
],
[
"Let us start by reading a network from a file on disk: `PGPgiantcompo.graph` network. In the course of this tutorial, we are going to work on the PGPgiantcompo network, a social network/web of trust in which nodes are PGP keys and an edge represents a signature from one key on another. It is distributed with NetworKit as a good starting point.\n\nThere is a convenient function in the top namespace which tries to guess the input format and select the appropriate reader:",
"_____no_output_____"
]
],
[
[
"G = nk.readGraph(\"../input/PGPgiantcompo.graph\", nk.Format.METIS)",
"_____no_output_____"
]
],
[
[
"There is a large variety of formats for storing graph data in files. For NetworKit, the currently best supported format is the [METIS adjacency format](http://people.sc.fsu.edu/~jburkardt/data/metis_graph/metis_graph.html). Various example graphs in this format can be found [here](http://www.cc.gatech.edu/dimacs10/downloads.shtml). The `readGraph` function tries to be an intelligent wrapper for various reader classes. In this example, it uses the `METISGraphReader` which is located in the `graphio` submodule, alongside other readers. These classes can also be used explicitly:",
"_____no_output_____"
]
],
[
[
"G = nk.graphio.METISGraphReader().read(\"../input/PGPgiantcompo.graph\")\n# is the same as: readGraph(\"input/PGPgiantcompo.graph\", Format.METIS)",
"_____no_output_____"
]
],
[
[
"It is also possible to specify the format for `readGraph()` and `writeGraph()`. Supported formats can be found via `[graphio.]Format`. However, graph formats are most likely only supported as far as the NetworKit::Graph can hold and use the data. Please note, that not all graph formats are supported for reading and writing.\n\nThus, it is possible to use NetworKit to convert graphs between formats. Let's say I need the previously read PGP graph in the Graphviz format:",
"_____no_output_____"
]
],
[
[
"import os\n\nif not os.path.isdir('./output/'):\n os.makedirs('./output')\nnk.graphio.writeGraph(G,\"output/PGPgiantcompo.graphviz\", nk.Format.GraphViz)",
"_____no_output_____"
]
],
[
[
"NetworKit also provides a function to convert graphs directly:",
"_____no_output_____"
]
],
[
[
"nk.graphio.convertGraph(nk.Format.LFR, nk.Format.GML, \"../input/example.edgelist\", \"output/example.gml\")",
"_____no_output_____"
]
],
[
[
"## The Graph Object",
"_____no_output_____"
],
[
"`Graph` is the central class of NetworKit. An object of this type represents an undirected, optionally weighted network. Let us inspect several of the methods which the class provides.",
"_____no_output_____"
]
],
[
[
"print(G.numberOfNodes(), G.numberOfEdges())",
"_____no_output_____"
]
],
[
[
"Nodes are simply integer indices, and edges are pairs of such indices.",
"_____no_output_____"
]
],
[
[
"for u in G.iterNodes():\n if u > 5:\n print('...')\n break\n print(u)",
"_____no_output_____"
],
[
"i = 0\nfor u, v in G.iterEdges():\n if i > 5:\n print('...')\n break\n print(u, v)\n i += 1",
"_____no_output_____"
],
[
"i = 0\nfor u, v, w in G.iterEdgesWeights():\n if i > 5:\n print('...')\n break\n print(u, v, w)\n i += 1",
"_____no_output_____"
]
],
[
[
"This network is unweighted, meaning that each edge has the default weight of 1.",
"_____no_output_____"
]
],
[
[
"G.weight(42, 11)",
"_____no_output_____"
]
],
[
[
"## Connected Components",
"_____no_output_____"
],
[
"A connected component is a set of nodes in which each pair of nodes is connected by a path. The following function determines the connected components of a graph:",
"_____no_output_____"
]
],
[
[
"cc = nk.components.ConnectedComponents(G)\ncc.run()\nprint(\"number of components \", cc.numberOfComponents())\nv = 0\nprint(\"component of node \", v , \": \" , cc.componentOfNode(0))\nprint(\"map of component sizes: \", cc.getComponentSizes())",
"_____no_output_____"
]
],
[
[
"## Degree Distribution",
"_____no_output_____"
],
[
"Node degree, the number of edges connected to a node, is one of the most studied properties of networks. Types of networks are often characterized in terms of their distribution of node degrees. We obtain and visualize the degree distribution of our example network as follows. ",
"_____no_output_____"
]
],
[
[
"dd = sorted(nk.centrality.DegreeCentrality(G).run().scores(), reverse=True)\nplt.xscale(\"log\")\nplt.xlabel(\"degree\")\nplt.yscale(\"log\")\nplt.ylabel(\"number of nodes\")\nplt.plot(dd)\nplt.show()",
"_____no_output_____"
]
],
[
[
"We choose a logarithmic scale on both axes because a _powerlaw degree distribution_, a characteristic feature of complex networks, would show up as a straight line from the top left to the bottom right on such a plot. As we see, the degree distribution of the `PGPgiantcompo` network is definitely skewed, with few high-degree nodes and many low-degree nodes. But does the distribution actually obey a power law? In order to study this, we need to apply the [powerlaw](https://pypi.python.org/pypi/powerlaw) module. Call the following function:",
"_____no_output_____"
]
],
[
[
"try:\n import powerlaw\n fit = powerlaw.Fit(dd)\nexcept ImportError:\n print (\"Module powerlaw could not be loaded\")",
"_____no_output_____"
]
],
[
[
"The powerlaw coefficient can then be retrieved via:",
"_____no_output_____"
]
],
[
[
"try:\n import powerlaw\n fit.alpha\nexcept ImportError:\n print (\"Module powerlaw could not be loaded\")",
"_____no_output_____"
]
],
[
[
"If you further want to know how \"good\" it fits the power law distribution, you can use the the `distribution_compare`-function. From the documentation of the function: \n> R : float\n>\n> Loglikelihood ratio of the two distributions' fit to the data. If\n> greater than 0, the first distribution is preferred. If less than\n> 0, the second distribution is preferred.\n\n> p : float\n>\n> Significance of R\n",
"_____no_output_____"
]
],
[
[
"try:\n import powerlaw\n fit.distribution_compare('power_law','exponential')\nexcept ImportError:\n print (\"Module powerlaw could not be loaded\")",
"_____no_output_____"
]
],
[
[
"## Community Detection",
"_____no_output_____"
],
[
"This section demonstrates the community detection capabilities of NetworKit. Community detection is concerned with identifying groups of nodes which are significantly more densely connected to eachother than to the rest of the network.",
"_____no_output_____"
],
[
"Code for community detection is contained in the `community` module. The module provides a top-level function to quickly perform community detection with a suitable algorithm and print some stats about the result.",
"_____no_output_____"
]
],
[
[
"nk.community.detectCommunities(G)",
"_____no_output_____"
]
],
[
[
"The function prints some statistics and returns the partition object representing the communities in the network as an assignment of node to community label. Let's capture this result of the last function call.",
"_____no_output_____"
]
],
[
[
"communities = nk.community.detectCommunities(G)",
"_____no_output_____"
]
],
[
[
"*Modularity* is the primary measure for the quality of a community detection solution. The value is in the range `[-0.5,1]` and usually depends both on the performance of the algorithm and the presence of distinctive community structures in the network.",
"_____no_output_____"
]
],
[
[
"nk.community.Modularity().getQuality(communities, G)",
"_____no_output_____"
]
],
[
[
"### The Partition Data Structure",
"_____no_output_____"
],
[
"The result of community detection is a partition of the node set into disjoint subsets. It is represented by the `Partition` data strucure, which provides several methods for inspecting and manipulating a partition of a set of elements (which need not be the nodes of a graph).",
"_____no_output_____"
]
],
[
[
"type(communities)",
"_____no_output_____"
],
[
"print(\"{0} elements assigned to {1} subsets\".format(communities.numberOfElements(),\n communities.numberOfSubsets()))",
"_____no_output_____"
],
[
"print(\"the biggest subset has size {0}\".format(max(communities.subsetSizes())))",
"_____no_output_____"
]
],
[
[
"The contents of a partition object can be written to file in a simple format, in which each line *i* contains the subset id of node *i*.",
"_____no_output_____"
]
],
[
[
"nk.community.writeCommunities(communities, \"output/communties.partition\")",
"_____no_output_____"
]
],
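[
[
"# Added sketch (not in the original notebook): the partition file written above can be read back.\n# Assumes the previous cell ran and that `readCommunities` is available in this NetworKit version.\nrestored = nk.community.readCommunities(\"output/communties.partition\")\nrestored.numberOfSubsets()",
"_____no_output_____"
]
],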
[
[
"### Choice of Algorithm",
"_____no_output_____"
],
[
"The community detection function used a good default choice for an algorithm: *PLM*, our parallel implementation of the well-known Louvain method. It yields a high-quality solution at reasonably fast running times. Let us now apply a variation of this algorithm.",
"_____no_output_____"
]
],
[
[
"nk.community.detectCommunities(G, algo=nk.community.PLM(G, True))",
"_____no_output_____"
]
],
[
[
"We have switched on refinement, and we can see how modularity is slightly improved. For a small network like this, this takes only marginally longer.",
"_____no_output_____"
],
[
"### Visualizing the Result",
"_____no_output_____"
],
[
"We can easily plot the distribution of community sizes as follows. While the distribution is skewed, it does not seem to fit a power-law, as shown by a log-log plot.",
"_____no_output_____"
]
],
[
[
"sizes = communities.subsetSizes()\nsizes.sort(reverse=True)\nax1 = plt.subplot(2,1,1)\nax1.set_ylabel(\"size\")\nax1.plot(sizes)\n\nax2 = plt.subplot(2,1,2)\nax2.set_xscale(\"log\")\nax2.set_yscale(\"log\")\nax2.set_ylabel(\"size\")\nax2.plot(sizes)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Search and Shortest Paths",
"_____no_output_____"
],
[
"A simple breadth-first search from a starting node can be performed as follows:",
"_____no_output_____"
]
],
[
[
"v = 0\nbfs = nk.distance.BFS(G, v)\nbfs.run()\n\nbfsdist = bfs.getDistances(False)",
"_____no_output_____"
]
],
[
[
"The return value is a list of distances from `v` to other nodes - indexed by node id. For example, we can now calculate the mean distance from the starting node to all other nodes:",
"_____no_output_____"
]
],
[
[
"sum(bfsdist) / len(bfsdist)",
"_____no_output_____"
]
],
[
[
"Similarly, Dijkstra's algorithm yields shortest path distances from a starting node to all other nodes in a weighted graph. Because `PGPgiantcompo` is an unweighted graph, the result is the same here:",
"_____no_output_____"
]
],
[
[
"dijkstra = nk.distance.Dijkstra(G, v)\ndijkstra.run()\nspdist = dijkstra.getDistances(False)\nsum(spdist) / len(spdist)",
"_____no_output_____"
]
],
[
[
"## Centrality",
"_____no_output_____"
],
[
"[Centrality](http://en.wikipedia.org/wiki/Centrality) measures the relative importance of a node within a graph. Code for centrality analysis is grouped into the `centrality` module.",
"_____no_output_____"
],
[
"### Betweenness Centrality",
"_____no_output_____"
],
[
"We implement Brandes' algorithm for the exact calculation of betweenness centrality. While the algorithm is efficient, it still needs to calculate shortest paths between all pairs of nodes, so its scalability is limited. We demonstrate it here on the small Karate club graph. ",
"_____no_output_____"
]
],
[
[
"K = nk.readGraph(\"../input/karate.graph\", nk.Format.METIS)",
"_____no_output_____"
],
[
"bc = nk.centrality.Betweenness(K)\nbc.run()",
"_____no_output_____"
]
],
[
[
"We have now calculated centrality values for the given graph, and can retrieve them either as an ordered ranking of nodes or as a list of values indexed by node id. ",
"_____no_output_____"
]
],
[
[
"bc.ranking()[:10] # the 10 most central nodes",
"_____no_output_____"
]
],
[
[
"### Approximation of Betweenness",
"_____no_output_____"
],
[
"Since exact calculation of betweenness scores is often out of reach, NetworKit provides an approximation algorithm based on path sampling. Here we estimate betweenness centrality in `PGPgiantcompo`, with a probabilistic guarantee that the error is no larger than an additive constant $\\epsilon$.",
"_____no_output_____"
]
],
[
[
"abc = nk.centrality.ApproxBetweenness(G, epsilon=0.1)\nabc.run()",
"_____no_output_____"
]
],
[
[
"The 10 most central nodes according to betweenness are then",
"_____no_output_____"
]
],
[
[
"abc.ranking()[:10]",
"_____no_output_____"
]
],
[
[
"### Eigenvector Centrality and PageRank",
"_____no_output_____"
],
[
"Eigenvector centrality and its variant PageRank assign relative importance to nodes according to their connections, incorporating the idea that edges to high-scoring nodes contribute more. PageRank is a version of eigenvector centrality which introduces a damping factor, modeling a random web surfer which at some point stops following links and jumps to a random page. In PageRank theory, centrality is understood as the probability of such a web surfer to arrive on a certain page. Our implementation of both measures is based on parallel power iteration, a relatively simple eigensolver.",
"_____no_output_____"
]
],
[
[
"# Eigenvector centrality\nec = nk.centrality.EigenvectorCentrality(K)\nec.run()\nec.ranking()[:10] # the 10 most central nodes",
"_____no_output_____"
],
[
"# PageRank\npr = nk.centrality.PageRank(K, 1e-6)\npr.run()\npr.ranking()[:10] # the 10 most central nodes",
"_____no_output_____"
]
],
[
[
"## Core Decomposition",
"_____no_output_____"
],
[
"A $k$-core decomposition of a graph is performed by successicely peeling away nodes with degree less than $k$. The remaining nodes form the $k$-core of the graph.",
"_____no_output_____"
]
],
[
[
"K = nk.readGraph(\"../input/karate.graph\", nk.Format.METIS)\ncoreDec = nk.centrality.CoreDecomposition(K)\ncoreDec.run()",
"_____no_output_____"
]
],
[
[
"Core decomposition assigns a core number to each node, being the maximum $k$ for which a node is contained in the $k$-core. For this small graph, core numbers have the following range:",
"_____no_output_____"
]
],
[
[
"set(coreDec.scores())",
"_____no_output_____"
],
[
"nk.viztasks.drawGraph(K, node_size=[(k**2)*20 for k in coreDec.scores()])\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Subgraph",
"_____no_output_____"
],
[
"NetworKit supports the creation of Subgraphs depending on an original graph and a set of nodes. This might be useful in case you want to analyze certain communities of a graph. Let's say that community 2 of the above result is of further interest, so we want a new graph that consists of nodes and intra cluster edges of community 2.",
"_____no_output_____"
]
],
[
[
"c2 = communities.getMembers(2)\ng2 = nk.graphtools.subgraphFromNodes(G, c2)",
"_____no_output_____"
],
[
"communities.subsetSizeMap()[2]",
"_____no_output_____"
],
[
"g2.numberOfNodes()",
"_____no_output_____"
]
],
[
[
"As we can see, the number of nodes in our subgraph matches the number of nodes of community 2. The subgraph can be used like any other graph object, e.g. further community analysis:",
"_____no_output_____"
]
],
[
[
"communities2 = nk.community.detectCommunities(g2)",
"_____no_output_____"
],
[
"nk.viztasks.drawCommunityGraph(g2,communities2)\nplt.show()",
"_____no_output_____"
]
],
[
[
"## NetworkX Compatibility",
"_____no_output_____"
],
[
"[NetworkX](http://en.wikipedia.org/wiki/Centrality) is a popular Python package for network analysis. To let both packages complement each other, and to enable the adaptation of existing NetworkX-based code, we support the conversion of the respective graph data structures.",
"_____no_output_____"
]
],
[
[
"import networkx as nx\nnxG = nk.nxadapter.nk2nx(G) # convert from NetworKit.Graph to networkx.Graph\nprint(nx.degree_assortativity_coefficient(nxG))",
"_____no_output_____"
]
],
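[
[
"# Added sketch: the conversion also works in the other direction (assuming `nx2nk` is available in\n# this NetworKit version), so existing NetworkX graphs can be analyzed with NetworKit.\nG_back = nk.nxadapter.nx2nk(nxG)\nG_back.numberOfNodes()",
"_____no_output_____"
]
],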
[
[
"## Generating Graphs",
"_____no_output_____"
],
[
"An important subfield of network science is the design and analysis of generative models. A variety of generative models have been proposed with the aim of reproducing one or several of the properties we find in real-world complex networks. NetworKit includes generator algorithms for several of them.",
"_____no_output_____"
],
[
"The **Erdös-Renyi model** is the most basic random graph model, in which each edge exists with the same uniform probability. NetworKit provides an efficient generator:",
"_____no_output_____"
]
],
[
[
"ERD = nk.generators.ErdosRenyiGenerator(200, 0.2).generate()\nprint(ERD.numberOfNodes(), ERD.numberOfEdges())",
"_____no_output_____"
]
],
[
[
"## Transitivity / Clustering Coefficients",
"_____no_output_____"
],
[
"In the most general sense, transitivity measures quantify how likely it is that the relations out of which the network is built are transitive. The clustering coefficient is the most prominent of such measures. We need to distinguish between global and local clustering coefficient: The global clustering coefficient for a network gives the fraction of closed triads. The local clustering coefficient focuses on a single node and counts how many of the possible edges between neighbors of the node exist. The average of this value over all nodes is a good indicator for the degreee of transitivity and the presence of community structures in a network, and this is what the following function returns:",
"_____no_output_____"
]
],
[
[
"nk.globals.clustering(G)",
"_____no_output_____"
]
],
[
[
"A simple way to generate a **random graph with community structure** is to use the `ClusteredRandomGraphGenerator`. It uses a simple variant of the Erdös-Renyi model: The node set is partitioned into a given number of subsets. Nodes within the same subset have a higher edge probability.",
"_____no_output_____"
]
],
[
[
"CRG = nk.generators.ClusteredRandomGraphGenerator(200, 4, 0.2, 0.002).generate()\nnk.community.detectCommunities(CRG)",
"_____no_output_____"
]
],
[
[
"The **Chung-Lu model** (also called **configuration model**) generates a random graph which corresponds to a given degree sequence, i.e. has the same expected degree sequence. It can therefore be used to replicate some of the properties of a given real networks, while others are not retained, such as high clustering and the specific community structure.",
"_____no_output_____"
]
],
[
[
"degreeSequence = [CRG.degree(v) for v in CRG.nodes()]\nclgen = nk.generators.ChungLuGenerator(degreeSequence)\nCLG = clgen.generate()\nnk.community.detectCommunities(CLG)",
"_____no_output_____"
]
],
[
[
"## Settings",
"_____no_output_____"
],
[
"In this section we discuss global settings.",
"_____no_output_____"
],
[
"### Logging",
"_____no_output_____"
],
[
"When using NetworKit from the command line, the verbosity of console output can be controlled via several loglevels, from least to most verbose: `FATAL`, `ERROR`, `WARN`, `INFO`, `DEBUG` and `TRACE`. (Currently, logging is only available on the console and not visible in the IPython Notebook). ",
"_____no_output_____"
]
],
[
[
"nk.getLogLevel() # the default loglevel",
"_____no_output_____"
],
[
"nk.setLogLevel(\"TRACE\") # set to most verbose mode\nnk.setLogLevel(\"ERROR\") # set back to default",
"_____no_output_____"
]
],
[
[
"Please note, that the default build setting is optimized (`--optimize=Opt`) and thus, every LOG statement below INFO is removed. If you need DEBUG and TRACE statements, please build the extension module by appending `--optimize=Dbg` when calling the setup script.",
"_____no_output_____"
],
[
"### Parallelism",
"_____no_output_____"
],
[
"The degree of parallelism can be controlled and monitored in the following way:",
"_____no_output_____"
]
],
[
[
"nk.setNumberOfThreads(4) # set the maximum number of available threads",
"_____no_output_____"
],
[
"nk.getMaxNumberOfThreads() # see maximum number of available threads",
"_____no_output_____"
],
[
"nk.getCurrentNumberOfThreads() # the number of threads currently executing",
"_____no_output_____"
]
],
[
[
"## Support",
"_____no_output_____"
],
[
"NetworKit is an open-source project that improves with suggestions and contributions from its users. The [mailing list](https://sympa.cms.hu-berlin.de/sympa/subscribe/networkit) is the place for general discussion and questions.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
4a082ad468e02393be228f1fe8098a11fb1931df
| 43,412 |
ipynb
|
Jupyter Notebook
|
tests/tf/06_lasso_and_ridge_regression.ipynb
|
gopala-kr/ds-notebooks
|
bc35430ecdd851f2ceab8f2437eec4d77cb59423
|
[
"MIT"
] | 1 |
2019-05-10T09:16:23.000Z
|
2019-05-10T09:16:23.000Z
|
tests/tf/06_lasso_and_ridge_regression.ipynb
|
gopala-kr/ds-notebooks
|
bc35430ecdd851f2ceab8f2437eec4d77cb59423
|
[
"MIT"
] | null | null | null |
tests/tf/06_lasso_and_ridge_regression.ipynb
|
gopala-kr/ds-notebooks
|
bc35430ecdd851f2ceab8f2437eec4d77cb59423
|
[
"MIT"
] | 1 |
2019-05-10T09:17:28.000Z
|
2019-05-10T09:17:28.000Z
| 125.106628 | 20,056 | 0.87796 |
[
[
[
"# LASSO and Ridge Regression\n\nThis function shows how to use TensorFlow to solve lasso or ridge regression for $\\boldsymbol{y} = \\boldsymbol{Ax} + \\boldsymbol{b}$\n\nWe will use the iris data, specifically: $\\boldsymbol{y}$ = Sepal Length, $\\boldsymbol{x}$ = Petal Width",
"_____no_output_____"
]
],
[
[
"# import required libraries\nimport matplotlib.pyplot as plt\nimport sys\nimport numpy as np\nimport tensorflow as tf\nfrom sklearn import datasets\nfrom tensorflow.python.framework import ops",
"/usr/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6\n return f(*args, **kwds)\n"
],
[
"# Specify 'Ridge' or 'LASSO'\nregression_type = 'LASSO'",
"_____no_output_____"
],
[
"# clear out old graph\nops.reset_default_graph()\n\n# Create graph\nsess = tf.Session()",
"_____no_output_____"
]
],
[
[
"## Load iris data",
"_____no_output_____"
]
],
[
[
"# iris.data = [(Sepal Length, Sepal Width, Petal Length, Petal Width)]\niris = datasets.load_iris()\nx_vals = np.array([x[3] for x in iris.data])\ny_vals = np.array([y[0] for y in iris.data])",
"_____no_output_____"
]
],
[
[
"## Model Parameters",
"_____no_output_____"
]
],
[
[
"# Declare batch size\nbatch_size = 50\n\n# Initialize placeholders\nx_data = tf.placeholder(shape=[None, 1], dtype=tf.float32)\ny_target = tf.placeholder(shape=[None, 1], dtype=tf.float32)\n\n# make results reproducible\nseed = 13\nnp.random.seed(seed)\ntf.set_random_seed(seed)\n\n# Create variables for linear regression\nA = tf.Variable(tf.random_normal(shape=[1,1]))\nb = tf.Variable(tf.random_normal(shape=[1,1]))\n\n# Declare model operations\nmodel_output = tf.add(tf.matmul(x_data, A), b)",
"_____no_output_____"
]
],
[
[
"## Loss Functions\n",
"_____no_output_____"
]
],
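[
[
"For reference, the two losses implemented in the next cell can be written as follows (this is only a reading of that code: $A$ is the slope, and the LASSO variant replaces the usual $\\ell_1$ penalty with a steep logistic \"step\" that adds a large penalty once $A$ exceeds 0.9):\n\n$$L_{LASSO} = \\frac{1}{n}\\sum_i (y_i - \\hat{y}_i)^2 + \\frac{99}{1 + e^{-50(A - 0.9)}}$$\n\n$$L_{Ridge} = \\frac{1}{n}\\sum_i (y_i - \\hat{y}_i)^2 + 1 \\cdot A^2$$",
"_____no_output_____"
]
],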
[
[
"# Select appropriate loss function based on regression type\n\nif regression_type == 'LASSO':\n # Declare Lasso loss function\n # Lasso Loss = L2_Loss + heavyside_step,\n # Where heavyside_step ~ 0 if A < constant, otherwise ~ 99\n lasso_param = tf.constant(0.9)\n heavyside_step = tf.truediv(1., tf.add(1., tf.exp(tf.multiply(-50., tf.subtract(A, lasso_param)))))\n regularization_param = tf.multiply(heavyside_step, 99.)\n loss = tf.add(tf.reduce_mean(tf.square(y_target - model_output)), regularization_param)\n\nelif regression_type == 'Ridge':\n # Declare the Ridge loss function\n # Ridge loss = L2_loss + L2 norm of slope\n ridge_param = tf.constant(1.)\n ridge_loss = tf.reduce_mean(tf.square(A))\n loss = tf.expand_dims(tf.add(tf.reduce_mean(tf.square(y_target - model_output)), tf.multiply(ridge_param, ridge_loss)), 0)\n \nelse:\n print('Invalid regression_type parameter value',file=sys.stderr)\n",
"_____no_output_____"
]
],
[
[
"## Optimizer",
"_____no_output_____"
]
],
[
[
"# Declare optimizer\nmy_opt = tf.train.GradientDescentOptimizer(0.001)\ntrain_step = my_opt.minimize(loss)",
"_____no_output_____"
]
],
[
[
"## Run regression",
"_____no_output_____"
]
],
[
[
"# Initialize variables\ninit = tf.global_variables_initializer()\nsess.run(init)\n\n# Training loop\nloss_vec = []\nfor i in range(1500):\n rand_index = np.random.choice(len(x_vals), size=batch_size)\n rand_x = np.transpose([x_vals[rand_index]])\n rand_y = np.transpose([y_vals[rand_index]])\n sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})\n temp_loss = sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})\n loss_vec.append(temp_loss[0])\n if (i+1)%300==0:\n print('Step #' + str(i+1) + ' A = ' + str(sess.run(A)) + ' b = ' + str(sess.run(b)))\n print('Loss = ' + str(temp_loss))\n print('\\n')",
"Step #300 A = [[0.77170753]] b = [[1.8249986]]\nLoss = [[10.26473]]\n\n\nStep #600 A = [[0.7590854]] b = [[3.2220633]]\nLoss = [[3.0629203]]\n\n\nStep #900 A = [[0.74843585]] b = [[3.9975822]]\nLoss = [[1.2322046]]\n\n\nStep #1200 A = [[0.73752165]] b = [[4.429741]]\nLoss = [[0.57872057]]\n\n\nStep #1500 A = [[0.7294267]] b = [[4.672531]]\nLoss = [[0.40874988]]\n\n\n"
]
],
[
[
"## Extract regression results",
"_____no_output_____"
]
],
[
[
"# Get the optimal coefficients\n[slope] = sess.run(A)\n[y_intercept] = sess.run(b)\n\n# Get best fit line\nbest_fit = []\nfor i in x_vals:\n best_fit.append(slope*i+y_intercept)",
"_____no_output_____"
]
],
[
[
"## Plot results",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n# Plot the result\nplt.plot(x_vals, y_vals, 'o', label='Data Points')\nplt.plot(x_vals, best_fit, 'r-', label='Best fit line', linewidth=3)\nplt.legend(loc='upper left')\nplt.title('Sepal Length vs Pedal Width')\nplt.xlabel('Pedal Width')\nplt.ylabel('Sepal Length')\nplt.show()\n\n# Plot loss over time\nplt.plot(loss_vec, 'k-')\nplt.title(regression_type + ' Loss per Generation')\nplt.xlabel('Generation')\nplt.ylabel('Loss')\nplt.show()",
"_____no_output_____"
],
[
"tested; Gopal",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4a08378a258e8bdd41c682c8d991d3ffd629f305
| 64,995 |
ipynb
|
Jupyter Notebook
|
jupyter/Watson Studio Public/RunDeployedModel.ipynb
|
IBMDecisionOptimization/DO-Samples
|
3ca625a2b6790d7f1ef215b14285851fa59b932b
|
[
"Apache-2.0"
] | 16 |
2019-09-26T14:57:33.000Z
|
2022-03-24T06:26:13.000Z
|
jupyter/Watson Studio Public/RunDeployedModel.ipynb
|
IBMDecisionOptimization/DO-Samples
|
3ca625a2b6790d7f1ef215b14285851fa59b932b
|
[
"Apache-2.0"
] | 2 |
2020-12-08T12:00:22.000Z
|
2021-01-19T19:20:22.000Z
|
jupyter/Watson Studio Public/RunDeployedModel.ipynb
|
IBMDecisionOptimization/DO-Samples
|
3ca625a2b6790d7f1ef215b14285851fa59b932b
|
[
"Apache-2.0"
] | 35 |
2018-12-19T20:08:19.000Z
|
2022-03-09T18:35:42.000Z
| 218.838384 | 55,996 | 0.9033 |
[
[
[
"## Use a Decision Optimization model deployed in Watson Machine Learning\n\nThis notebook shows you how to create and monitor jobs, and get solutions using the Watson Machine Learning Python Client.\n\nThis example only applies to Decision Optimization in Watson Machine Learning Local and Cloud Pak for Data/Watson Studio Local.\n\nIn order to use this example, you must first have deployed the Diet example.\n\nA Python API is provided to submit input data, solve, and get results.\n",
"_____no_output_____"
]
],
[
[
"# Uninstall the Watson Machine Learning client Python client based on v3 APIs\n\n!pip uninstall watson-machine-learning-client -y",
"_____no_output_____"
],
[
"# Install WML client API\n\n!pip install ibm-watson-machine-learning",
"_____no_output_____"
],
[
"from ibm_watson_machine_learning import APIClient",
"_____no_output_____"
],
[
"# Instantiate a client using credentials\nwml_credentials = {\n \"apikey\": \"<API_key>\",\n \"url\": \"<instance_url>\"\n}\n\nclient = APIClient(wml_credentials)",
"_____no_output_____"
],
[
"# Find the space ID\nspace_name = '<SPACE NAME>'\n\nspace_id = [x['metadata']['id'] for x in client.spaces.get_details()['resources'] if x['entity']['name'] == space_name][0]\n\nclient.set.default_space(space_id)\n",
"_____no_output_____"
],
[
"# Import pandas library \nimport pandas as pd \n \n# initialize list of lists \ndiet_food = pd.DataFrame([ [\"Roasted Chicken\", 0.84, 0, 10],\n [\"Spaghetti W/ Sauce\", 0.78, 0, 10],\n [\"Tomato,Red,Ripe,Raw\", 0.27, 0, 10],\n [\"Apple,Raw,W/Skin\", 0.24, 0, 10],\n [\"Grapes\", 0.32, 0, 10],\n [\"Chocolate Chip Cookies\", 0.03, 0, 10],\n [\"Lowfat Milk\", 0.23, 0, 10],\n [\"Raisin Brn\", 0.34, 0, 10],\n [\"Hotdog\", 0.31, 0, 10]] , columns = [\"name\",\"unit_cost\",\"qmin\",\"qmax\"])\n\ndiet_food_nutrients = pd.DataFrame([\n [\"Spaghetti W/ Sauce\", 358.2, 80.2, 2.3, 3055.2, 11.6, 58.3, 8.2],\n [\"Roasted Chicken\", 277.4, 21.9, 1.8, 77.4, 0, 0, 42.2],\n [\"Tomato,Red,Ripe,Raw\", 25.8, 6.2, 0.6, 766.3, 1.4, 5.7, 1],\n [\"Apple,Raw,W/Skin\", 81.4, 9.7, 0.2, 73.1, 3.7, 21, 0.3],\n [\"Grapes\", 15.1, 3.4, 0.1, 24, 0.2, 4.1, 0.2],\n [\"Chocolate Chip Cookies\", 78.1, 6.2, 0.4, 101.8, 0, 9.3, 0.9],\n [\"Lowfat Milk\", 121.2, 296.7, 0.1, 500.2, 0, 11.7, 8.1],\n [\"Raisin Brn\", 115.1, 12.9, 16.8, 1250.2, 4, 27.9, 4],\n [\"Hotdog\", 242.1, 23.5, 2.3, 0, 0, 18, 10.4 ]\n ] , columns = [\"Food\",\"Calories\",\"Calcium\",\"Iron\",\"Vit_A\",\"Dietary_Fiber\",\"Carbohydrates\",\"Protein\"])\n\ndiet_nutrients = pd.DataFrame([\n [\"Calories\", 2000, 2500],\n [\"Calcium\", 800, 1600],\n [\"Iron\", 10, 30],\n [\"Vit_A\", 5000, 50000],\n [\"Dietary_Fiber\", 25, 100],\n [\"Carbohydrates\", 0, 300],\n [\"Protein\", 50, 100]\n ], columns = [\"name\",\"qmin\",\"qmax\"])",
"_____no_output_____"
]
],
[
[
"You can find the deployment ID in the Analytics deployment spaces. \nOr by listing the deployment using the API.\n\n\n",
"_____no_output_____"
]
],
[
[
"client.deployments.list()",
"_____no_output_____"
],
[
"# Get the deployment ID from the Model name.\n# Note, that there could be several deployments for one model\nmodel_name = \"diet\"\ndeployment_uid = [x['metadata']['id'] for x in client.deployments.get_details()['resources'] if x['entity']['name'] == model_name][0]\n\nprint(deployment_uid)",
"_____no_output_____"
]
],
[
[
"Create and monitor a job with inline data for your deployed model. \nCreate a payload containing inline input data.\n\nCreate a new job with this payload and the deployment.\n\nGet the job_uid.",
"_____no_output_____"
]
],
[
[
"solve_payload = {\n client.deployments.DecisionOptimizationMetaNames.INPUT_DATA: [\n {\n \"id\":\"diet_food.csv\",\n \"values\" : diet_food\n },\n {\n \"id\":\"diet_food_nutrients.csv\",\n \"values\" : diet_food_nutrients\n },\n {\n \"id\":\"diet_nutrients.csv\",\n \"values\" : diet_nutrients\n }\n ],\n client.deployments.DecisionOptimizationMetaNames.OUTPUT_DATA: [\n {\n \"id\":\".*\\.csv\"\n }\n ]\n}\n\njob_details = client.deployments.create_job(deployment_uid, solve_payload)\njob_uid = client.deployments.get_job_uid(job_details)\n\nprint( job_uid )",
"_____no_output_____"
]
],
[
[
"Display job status until it is completed.\n\nThe first job of a new deployment might take some time as a compute node must be started.",
"_____no_output_____"
]
],
[
[
"from time import sleep\n\nwhile job_details['entity']['decision_optimization']['status']['state'] not in ['completed', 'failed', 'canceled']:\n print(job_details['entity']['decision_optimization']['status']['state'] + '...')\n sleep(5)\n job_details=client.deployments.get_job_details(job_uid)\n\nprint( job_details['entity']['decision_optimization']['status']['state'])",
"_____no_output_____"
],
[
"job_details['entity']['decision_optimization']['status']",
"_____no_output_____"
]
],
[
[
"Extract and display solution. \nDisplay the output solution.\n\nDisplay the KPI Total Calories value.",
"_____no_output_____"
]
],
[
[
"solution_table=[x for x in job_details['entity']['decision_optimization']['output_data'] if x['id'] == 'solution.csv'][0]\n\n# Create a dataframe for the solution\nsolution = pd.DataFrame(solution_table['values'], \n columns = solution_table['fields'])\nsolution.head()",
"_____no_output_____"
],
[
"print( job_details['entity']['decision_optimization']['solve_state']['details']['KPI.Total Calories'] )",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4a085d1e4568dbc316a2210766a11d1a379d894f
| 334,826 |
ipynb
|
Jupyter Notebook
|
Notebooks/Tutorial/Jupyter_notebooks/GCS_data/Tutorial_notebook_GCS_data.ipynb
|
vectice/vectice-examples
|
ca06ae9002d80ad883e39d8e0f99db5b9c336431
|
[
"MIT"
] | 7 |
2021-08-23T14:15:43.000Z
|
2022-02-25T14:37:33.000Z
|
Notebooks/Tutorial/Jupyter_notebooks/GCS_data/Tutorial_notebook_GCS_data.ipynb
|
vectice/vectice-examples
|
ca06ae9002d80ad883e39d8e0f99db5b9c336431
|
[
"MIT"
] | null | null | null |
Notebooks/Tutorial/Jupyter_notebooks/GCS_data/Tutorial_notebook_GCS_data.ipynb
|
vectice/vectice-examples
|
ca06ae9002d80ad883e39d8e0f99db5b9c336431
|
[
"MIT"
] | 4 |
2021-07-12T09:56:21.000Z
|
2022-02-18T05:00:35.000Z
| 299.218945 | 298,342 | 0.920562 |
[
[
[
"## Preparation",
"_____no_output_____"
],
[
"Welcome to the Vectice tutorial notebook!\n\n\nThrough this notebook, we will be illustrating how to log the following information into Vectice using the Vectice Python library:\n- Dataset versions\n- Model versions\n- Runs and lineage\n\nFor more information on the tutorial, please refer to the \"Vectice Tutorial Page\" inside the app.\n\n",
"_____no_output_____"
],
[
"## Setup",
"_____no_output_____"
],
[
"Install Vectice",
"_____no_output_____"
]
],
[
[
"#Install Vectice Python library \n# In this tutorial we will do code versioning using github, we also support gitlab\n# and bitbucket: !pip install -q \"vectice[github, gitlab, bitbucket]\"\n!pip install --q vectice[github]",
"_____no_output_____"
],
[
"#Verify if Vectice python library was installed\n!pip3 show vectice",
"Name: vectice\nVersion: 0.21.0\nSummary: Vectice Python library\nHome-page: https://github.com/vectice/vectice-python\nAuthor: Vectice Inc.\nAuthor-email: [email protected]\nLicense: Apache License 2.0\nLocation: /opt/conda/lib/python3.7/site-packages\nRequires: python-dotenv, requests\nRequired-by: \n"
]
],
[
[
"Here, the our data is stored in GCS. We should install the following GCS packages in order to be able to get it.",
"_____no_output_____"
]
],
[
[
"## GCS packages\n!pip3 install --q fsspec\n!pip3 install --q gcsfs\n",
"_____no_output_____"
],
[
"## Import the required packages for data preparation and model training\nimport string\nfrom math import sqrt\nimport os\nimport numpy as np\nimport pandas as pd\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\n%matplotlib inline\n\n# Load scikit-learn packages\nfrom sklearn.model_selection import train_test_split # Model Selection\nfrom sklearn.metrics import mean_absolute_error, mean_squared_error # Model Evaluation\nfrom sklearn.linear_model import LinearRegression # Linear Regression\nfrom sklearn.tree import DecisionTreeRegressor, plot_tree # Decision Tree Regression\nfrom sklearn.ensemble import RandomForestRegressor # Random Forest Regression\n",
"_____no_output_____"
]
],
[
[
"### Connect and authenticate to Vectice API",
"_____no_output_____"
]
],
[
[
"#Import the Vectice library\nfrom vectice import Vectice\nfrom vectice.models import JobType\nfrom vectice.entity.model import ModelType\nimport logging\nlogging.basicConfig(level=logging.INFO)\n\n# Specify the API endpoint for Vectice.\nos.environ['VECTICE_API_ENDPOINT']= \"beta.vectice.com\"\n\n# To use the Vectice Python library, you first need to authenticate your account using an API key.\n# You can generate an API key from the Vectice UI, by going to the \"API Tokens\" tab in your workspace\n# Copy and paste your API key here\nos.environ['VECTICE_API_TOKEN'] = \"QkZWM9EJD.0XeWYNgrVy7K69jq5azA4QkZWM9EJDpBPOLMm1xbl2w8vGR03d\"\n\n# Next, you need to specify the tutorial project where you will run this notebook using a \n# \"Project Token\". You can find the \"Project Token\" under the \"Settings\" tab of your project.\n\n# Copy and paste your Project Token here\n# autocode = True enables you to track your git changes for your code automatically everytime you execute a run (see below).\nvectice = Vectice(project_token=\"BpR8Go6eh84vybzZaLWj\", autocode= True)\n",
"INFO:vectice.auth:Vectice: Refreshing token... \nINFO:vectice.auth:Success!\nINFO:vectice.auth:Vectice: Validating project token... \nINFO:vectice.auth:The entered token is OK, and allows you to work on the '[Completed Tutorial] Predicting house prices in King County' Project, part of the 'Sample - Everyone' Workspace\n"
]
],
[
[
"## Create a run",
"_____no_output_____"
],
[
"A run is an execution of a job. You can think of a job like a grouping of runs.\n\nWhen creating a run we need to specify:\n\n 1) a job name (mandatory)\n \n 2) a job type (optional)\n \n 3) a run name (optional)\n\nJob names, job types and run names are useful to group and search runs in the Vectice UI.\nYou can also specify inputs when you start your run and outputs when you end it. The inputs can be code, dataset and model versions and the outputs can be dataset and model versions.\n",
"_____no_output_____"
]
],
[
[
"vectice.create_run(\"job_name\", JobType.PREPARATION, \"run name\").with_properties([(\"run key\", \"run prop\")])\nvectice.start_run(inputs=[inputs])\nvectice.end_run(outputs=[outputs])",
"_____no_output_____"
]
],
[
[
"You can also use the Python context manager (with) to manage runs. This helps to end the run and it also marks its status as failed in the Vectice UI in case we have an error in the run.",
"_____no_output_____"
]
],
[
[
"vectice.create_run(\"job_name\", JobType.PREPARATION, \"run name\").with_properties([(\"run key\", \"run prop\")])\nwith vectice.start_run(inputs=[inputs]) as run:\n #Add your code here\n run.add_outputs(outputs=[outputs])",
"_____no_output_____"
]
],
[
[
"## Create a dataset and a dataset version",
"_____no_output_____"
],
[
"There are three ways to create a dataset in Vectice:",
"_____no_output_____"
],
[
"1- Creating a dataset without a connection",
"_____no_output_____"
]
],
[
[
"### Creating a dataset without a connection\nvectice.create_dataset(dataset_name=\"dataset name\",data_properties=[(\"key\", \"prop\"), (\"key2\", \"prop2\")])",
"INFO:Vectice:Dataset: dataset name has been successfully created.\n"
]
],
[
[
"2- Creating a dataset with a connection",
"_____no_output_____"
],
[
"Getting the list of connections in the Workspace:",
"_____no_output_____"
]
],
[
[
"vectice.list_connections()",
"_____no_output_____"
],
[
"## Creating a dataset with a connection\n vectice.create_dataset_with_connection_name(connection_name=\"connection name\",\n dataset_name=\"dataset name\",\n files=[\"gs://file_path/file_name.csv\"],\n data_properties=[(\"key\", \"prop\"), (\"key2\", \"prop2\")])\n\n## We can also use vectice.create_dataset_with_connection_id()",
"_____no_output_____"
]
],
[
[
"3- Create a dataset and a dataset version at the same time",
"_____no_output_____"
],
[
"When creating a new dataset version, if the parent dataset doesn't exist in the project, a new dataset is created automatically and it will contain the first version we created.",
"_____no_output_____"
]
],
[
[
"dataset_version = vectice.create_dataset_version().with_parent_name(\"new dataset\").with_properties([(\"key\", \"prop\")])",
"_____no_output_____"
]
],
[
[
"The Vectice library automatically detects if there have been changes to the dataset you are using. If it detects changes, it will generate a new version of your dataset automatically. Else, it's going to use the latest version of your dataset.",
"_____no_output_____"
],
[
"We can get the list of the datasets we have in the project by calling **vectice.list_datasets()**",
"_____no_output_____"
]
],
[
[
"vectice.list_datasets().list",
"_____no_output_____"
]
],
[
[
"We can also get the list of dataset versions by calling **vectice.list_dataset_versions(dataset_id)**",
"_____no_output_____"
],
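[
"# Added sketch (not executed): `dataset_id` is a placeholder for the id of one of the datasets\n# listed above.\n# vectice.list_dataset_versions(dataset_id)",
"_____no_output_____"
],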
[
"### Attach a dataset version as input or output to a run",
"_____no_output_____"
]
],
[
[
"vectice.create_run(\"job_name\", JobType.PREPARATION, \"run name\").with_properties([(\"run key\", \"run prop\")])\nvectice.start_run(inputs=[dataset_version])\nvectice.end_run",
"_____no_output_____"
]
],
[
[
"You can also use another existing dataset version by using the existing version name, number or id (if you use the id, you don't need to specify the parent dataset name or id).",
"_____no_output_____"
]
],
[
[
"dataset_version = vectice.create_dataset_version().with_parent_name(\"dataset\").with_existing_version_number(1)\nvectice.create_run(\"job_name\", JobType.PREPARATION, \"run name\").with_properties([(\"run key\", \"run prop\")])\nvectice.start_run(inputs=[dataset_version])\nvectice.end_run",
"_____no_output_____"
]
],
[
[
"## Create a code version",
"_____no_output_____"
],
[
"Vectice enables you to track your source code by creating code versions. This can be done automatically and manually.",
"_____no_output_____"
],
[
"### Creating a code version automatically",
"_____no_output_____"
],
[
"If you are using your local environment with GIT installed or JupyterLab etc... the code tracking can be automated by setting autocode=True when creating the Vectice instance.",
"_____no_output_____"
],
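[
"# Added sketch: automatic code tracking is enabled by the same call used at the top of this notebook.\n# vectice = Vectice(project_token=\"<your project token>\", autocode=True)",
"_____no_output_____"
],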
[
"### Creating a code version manually",
"_____no_output_____"
],
[
"You can create a code version manually by using:\n\n1- **vectice.create_code_version_with_github_uri()** for GitHub\n\n2- **vectice.create_code_version_with_gitlab_uri()** for GitLab\n\n3- **vectice.create_code_version_with_bitbucket_uri()** for Bitbucket",
"_____no_output_____"
]
],
[
[
"## Example for code versioning with GitHub\ncode_version = Vectice.create_code_version_with_gitlab_uri(\"https://github.com/vectice/vectice-examples\",\n \"Notebooks/Tutorial/Jupyter_notebooks/GCS_data/Tutorial_notebook_GCS_data.ipynb\")\n\nvectice.create_run(\"Job name\", JobType.PREPARATION, \"Run name\").with_properties([(\"run key\", \"run prop\")])\nvectice.start_run(inputs=[code_version])\nvectice.end_run()",
"_____no_output_____"
]
],
[
[
"## Creating models and model versions",
"_____no_output_____"
],
[
"Vectice enables you to create your models and model versions and log the metrics, hyperparameters and model properties",
"_____no_output_____"
],
[
"When creating a model version, if there is a model with the same name as the given model name in your project, a new model version is added to the given model. Else, a new model is created automatically.",
"_____no_output_____"
]
],
[
[
"Vectice.create_model_version().with_parent_name('Regressor')",
"_____no_output_____"
]
],
[
[
"You can declare your model metrics, hyperparameters, properties, type, the used algorithme and model attachments when creating a model version.",
"_____no_output_____"
]
],
[
[
"metrics = [('metric', value), ('metric 2', value)]\nproperties = [('property', value), ('property 2', value)]\nmodel_version = vectice.create_model_version()\n .with_parent_name(\"Regressor\")\n .with_algorithm(\"Decision Tree\")\n .with_type(ModelType.REGRESSION)\n .with_properties(properties)\n .with_metrics(metrics)\n .with_attachments([\"DecisionTree_6.png\"])\n .with_user_version()",
"_____no_output_____"
]
],
[
[
"Here we used with_user_version() for model versioning. You can provide a version name for your model version. An error will be thrown if the given user version already exists and if you don't provide a version name, the version name will be generated automatically.\n",
"_____no_output_____"
],
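[
"For instance, a minimal sketch of naming a model version explicitly; the assumption here is that the version name is passed directly to with_user_version(), and the name used below is purely illustrative.",
"_____no_output_____"
],
[
"# Sketch only: passing the version name directly to with_user_version() is an assumption\n# based on the description above, and \"my-version-1\" is purely illustrative.\nmodel_version = vectice.create_model_version().with_parent_name(\"Regressor\").with_user_version(\"my-version-1\")",
"_____no_output_____"
],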
[
"### Attach a model version as input or output of a run",
"_____no_output_____"
]
],
[
[
"vectice.create_run(\"job_name\", JobType.PREPARATION, \"run name\").with_properties([(\"run key\", \"run prop\")])\nvectice.start_run(inputs=[dataset_version])\nmetrics = [('metric', value), ('metric 2', value)]\nproperties = [('property', value), ('property 2', value)]\nmodel_version = vectice.create_model_version().with_user_version().with_parent_name(\"Regressor\").with_algorithm(\"Decision Tree\").with_type(ModelType.REGRESSION).with_properties(properties).with_metrics(metrics).with_attachments([\"DecisionTree_6.png\"])\n\nvectice.end_run(outputs=[model_version])",
"_____no_output_____"
]
],
[
[
"# Exercice",
"_____no_output_____"
],
[
"### Getting the data from GCS\n\nWe are going to load data stored in Google Cloud Storage, that is provided by Vectice for this tutorial.\n",
"_____no_output_____"
],
[
"You need a service account key to be able to get the data from your buckets on GCS. You can find more information about how to generate a key to access your data on GCS [here](https://doc.vectice.com/connections/google.html#google-cloud-storage).",
"_____no_output_____"
]
],
[
[
"## Provide the path to the service account JSON key file \nos.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'readerKey.json'\n\n# Once your file is loaded you can view your dataset in a Pandas dataframe.\ndf = pd.read_csv('gs://vectice_tutorial/kc_house_data_cleaned.csv')\n\n# Run head to make sure the data was loaded properly\ndf.head()\n",
"_____no_output_____"
]
],
[
[
"### Data preparation\n\nLet's split the dataset into train and test sets and save them in GCS. The GCS code has been commented out as the data has already been generated.",
"_____no_output_____"
]
],
[
[
"# The Vectice library automatically detects if there have been changes to the dataset you are using.\n# If it detects changes, it will generate a new version of your dataset automatically. Else, it's going\n# to use the latest version of your dataset.\n# You can also use another dataset version by calling .with_existing_version_name('version name')\n\ninput_ds_version = vectice.create_dataset_version().with_parent_name(\"cleaned_kc_house_data\")\n\n# For this run, we will use the job name \"80/20 Split\" and the job type \"PREPARATION\"\n# You can have multiple runs with the same job name\n# We can use the Python context manager (with) to end the run and make its status as failed\n## in the Vectice UI in case we have an error\nvectice.create_run(\"80/20 Split\", JobType.PREPARATION, \"Data preparation\")\nwith vectice.start_run(inputs=[input_ds_version]) as run:\n\n# We will use an 80/20 split to prepare the data\n test_size = 0.2\n\n# We will set the random seed so we always generate the same split.\n random_state = 42\n\n train, test = train_test_split(df, test_size = test_size, random_state = random_state)\n\n# We commented out the code to persist the training and testing test in GCS, \n# because we already generated the data for you.\n# We left the code below for convenience, in case you want to use your own credentials and GCS bucket.\n# train.to_csv (r'gs://vectice_tutorial/training_data.csv', index = False, header = True)\n# test.to_csv (r'gs://vectice_tutorial/testing_data.csv', index = False, header = True)\n\n# Generate X_train, X_test, y_train, y_test, which we will need for modeling\n X = df.drop(\"price\", axis=1).values\n y = df[\"price\"].values\n X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=random_state)\n\n\n# Let's create new versions of the training and testing dataset if the data has changed.\n# We will use the existing dataset created by Albert, so that we can append new \n# dataset versions to it.\n train_ds_version = vectice.create_dataset_version().with_parent_name(\"train_cleaned_kc_house_data\")\n test_ds_version = vectice.create_dataset_version().with_parent_name(\"test_cleaned_kc_house_data\")\n\n# Attach the output datasets to the run.\n run.add_outputs(outputs=[train_ds_version,test_ds_version])\n\n# We can preview one of our generated outputs to make sure that everything was executed properly.\nX_train\n",
"_____no_output_____"
]
],
[
[
"## Modeling",
"_____no_output_____"
],
[
"We can get the list of the models existing in the project by calling **vectice.list_models()**",
"_____no_output_____"
]
],
[
[
"vectice.list_models().list",
"_____no_output_____"
]
],
[
[
"### Decision tree model",
"_____no_output_____"
],
[
"In this section let's use the decision tree algorithm and compare the accuracy to the logistic regression algorithm. We will try different values for the tree_depth. We will log the model parameters and metrics in Vectice.",
"_____no_output_____"
]
],
[
[
"# We can do a few runs with different max depth for the tree.\n# Just change the value below and re-run this cell.\n# The model versions you created will show up in the Vectice UI as new versions \n# of the \"Regressor\" Model. You can easily compare them from there.\ntree_depth = 6\n\nvectice.create_run(\"DT-Model\", JobType.TRAINING)\n\n# We can use the Python context manager (with) to end the run and make its status as failed\n## in the Vectice UI in case we have an error\nwith vectice.start_run(inputs=[train_ds_version,test_ds_version]) as run:\n\n dtr = DecisionTreeRegressor(max_depth=tree_depth, min_samples_split=50)\n dtr.fit(X_train,y_train)\n dtr_pred = dtr.predict(X_test) \n\n data_feature_names = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors',\n 'waterfront', 'view', 'condition', 'grade', 'sqft_above',\n 'sqft_basement', 'yr_built', 'yr_renovated', 'zipcode', 'lat',\n 'long', 'sqft_living15', 'sqft_lot15']\n\n# Visualize the Decision Tree Model\n plt.figure(figsize=(25, 10))\n plot_tree(dtr, feature_names=data_feature_names, filled=True, fontsize=10)\n plt.savefig(\"DecisionTree_6.png\")\n# We save the plot in order to be able to attach to the model version.\n## We can attach the decision tree plot to the model version by using .with_attachments([Attachments])\n \n MAE = mean_absolute_error(dtr_pred, y_test)\n RMSE = sqrt(mean_squared_error(dtr_pred, y_test))\n\n print(\"Root Mean Squared Error:\", RMSE)\n print(\"Mean Absolute Error:\", MAE)\n \n# Here we use with_user_version() to create a new model version. You can provide a version name \n## for your model version. An error will be thrown if the given user version already exists and\n### if you don't provide a version name, the version name will be generated automatically.\n\n properties = [(\"Tree Depth\",str(tree_depth))]\n metrics = [(\"RMSE\", RMSE), (\"MAE\", MAE)]\n model_version = vectice.create_model_version().with_user_version().with_parent_name(\"Regressor\").with_algorithm(\"Decision Tree\").with_type(ModelType.REGRESSION).with_properties(properties).with_metrics(metrics).with_attachments([\"DecisionTree_6.png\"])\n\n## We add the created model version as output of the run\n run.add_outputs(outputs=[model_version])",
"Root Mean Squared Error: 144649.5566489278\nMean Absolute Error: 95872.63972094451\n"
]
],
[
[
"### Model versions table ",
"_____no_output_____"
],
[
"You can also get all the model versions you created in previous runs, for offline analysis and understanding in more details what's driving the models performance.",
"_____no_output_____"
]
],
[
[
"vectice.list_model_versions_dataframe(1859)",
"_____no_output_____"
]
],
[
[
"### Update your model",
"_____no_output_____"
],
[
"Vectice enables you to update your model by using **vectice.update_model()**",
"_____no_output_____"
]
],
[
[
"vectice.update_model(parent_name=\"Regressor\", model_type=ModelType.REGRESSION, description=\"Model description\")",
"INFO:Vectice:Model: 'Regressor' has been updated\n"
]
],
[
[
"Thank you and congratulations! You have succesfully completed this tutorial.\n\nIn this notebooks we have illustrated how you can capture your experiments, hyper-parameters, dataset versions and metrics using Vectice Python library. \nYou can now leverage Vectice UI for analysis, documentation and to engage a business conversation around the findings.\n\nVectice enables you to:\n1. Make your experiments more reproducible.\n2. Track the data and code that is used for each experiment and model versions.\n3. Document your projects' progress and collaborate with your team in Vectice's UI.\n4. Discover previous work and reuse your team knowledge for new projects.\n\nWe are constantly improving the Vectice Python library and the Vectice application. Let us know what improvements you would like to see in the solution and what your favorite features are after completing this tutorial. \n\nFeel free to explore more and come up with your own ideas on how to best start leveraging Vectice!\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4a085ddaa37d258d3d91509017df21575c25e4ae
| 8,879 |
ipynb
|
Jupyter Notebook
|
julia/1/day1.ipynb
|
ilanpillemer/adventofcode2019
|
f7425650c447f66f7ef56b072aed5161a650ae01
|
[
"MIT"
] | null | null | null |
julia/1/day1.ipynb
|
ilanpillemer/adventofcode2019
|
f7425650c447f66f7ef56b072aed5161a650ae01
|
[
"MIT"
] | 1 |
2019-12-01T08:43:41.000Z
|
2019-12-03T07:48:18.000Z
|
julia/1/day1.ipynb
|
ilanpillemer/adventofcode2019
|
f7425650c447f66f7ef56b072aed5161a650ae01
|
[
"MIT"
] | null | null | null | 33.254682 | 426 | 0.566393 |
[
[
[
"empty"
]
]
] |
[
"empty"
] |
[
[
"empty"
]
] |
4a08754cb8dbd32e010a7fc3d40f6467126b7d42
| 499,552 |
ipynb
|
Jupyter Notebook
|
projects/load_stringer_orientations.ipynb
|
da5nsy/course-content
|
ac44f263caaec30da3e10dee78d59c7dbde9762b
|
[
"CC-BY-4.0"
] | null | null | null |
projects/load_stringer_orientations.ipynb
|
da5nsy/course-content
|
ac44f263caaec30da3e10dee78d59c7dbde9762b
|
[
"CC-BY-4.0"
] | null | null | null |
projects/load_stringer_orientations.ipynb
|
da5nsy/course-content
|
ac44f263caaec30da3e10dee78d59c7dbde9762b
|
[
"CC-BY-4.0"
] | 1 |
2021-04-26T11:30:26.000Z
|
2021-04-26T11:30:26.000Z
| 1,504.674699 | 241,086 | 0.955977 |
[
[
[
"## Loading of Stringer orientations data\n\nincludes some visualizations",
"_____no_output_____"
]
],
[
[
"#@title Data retrieval\nimport os, requests\n\nfname = \"stringer_orientations.npy\"\nurl = \"https://osf.io/ny4ut/download\"\n\nif not os.path.isfile(fname):\n try:\n r = requests.get(url)\n except requests.ConnectionError:\n print(\"!!! Failed to download data !!!\")\n else:\n if r.status_code != requests.codes.ok:\n print(\"!!! Failed to download data !!!\")\n else:\n with open(fname, \"wb\") as fid:\n fid.write(r.content)",
"_____no_output_____"
],
[
"#@title Data loading\n\nimport numpy as np\ndat = np.load('stringer_orientations.npy', allow_pickle=True).item()\nprint(dat.keys())",
"dict_keys(['sresp', 'istim', 'stat', 'u_spont', 'v_spont', 'mean_spont', 'std_spont', 'stimtimes', 'frametimes', 'camtimes', 'run', 'info'])\n"
]
],
[
[
"dat has fields:\n* dat['sresp']: neurons by stimuli, a.k.a. the neural response data (23589 by 4598)\n* dat['run']: 1 by stimuli, a.k.a. the running speed of the animal in a.u.\n* dat['istim']: 1 by stimuli, goes from 0 to 2*np.pi, the orientations shown on each trial\n* dat['stat']: 1 by neurons, some statistics for each neuron, see Suite2p for full documentation.\n* dat['stat'][k]['med']: 1 by 2, the position of each neuron k in tissue, in pixels, at a resolution of ~2um/pix. \n* dat['u_spont']: neurons by 128, the weights for the top 128 principal components of spontaneous activity. Unit norm.\n* dat['v_spont']: 128 by 910, the timecourses for the top 128 PCs of spont activity.\n* dat['u_spont'] @ dat['v_spont']: a reconstruction of the spontaneous activity for 910 timepoints interspersed throughout the recording.",
"_____no_output_____"
]
],
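[
[
"As a quick sketch (not part of the original notebook), here is how two of the fields described above can be used: stacking each neuron's position from dat['stat'] and reconstructing the spontaneous activity from its top principal components.",
"_____no_output_____"
],
[
"# Sketch (not in the original notebook): using two of the documented fields\nimport numpy as np\n\n# position of each neuron in tissue, in pixels (~2 um/pix): one row per neuron\nneuron_positions = np.array([neuron['med'] for neuron in dat['stat']])\nprint(neuron_positions.shape)  # (23589, 2)\n\n# reconstruction of spontaneous activity from its top 128 PCs, 910 timepoints\nspont = dat['u_spont'] @ dat['v_spont']\nprint(spont.shape)  # (23589, 910)",
"_____no_output_____"
]
],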
[
[
"print(dat['sresp'].shape)\nprint(len(dat['stat']))",
"(23589, 4598)\n23589\n"
],
[
"#@title import matplotlib and set defaults\nfrom matplotlib import rcParams \nfrom matplotlib import pyplot as plt\nrcParams['figure.figsize'] = [20, 4]\nrcParams['font.size'] =15\nrcParams['axes.spines.top'] = False\nrcParams['axes.spines.right'] = False\nrcParams['figure.autolayout'] = True",
"_____no_output_____"
],
[
"#@title Basic data properties using plot, hist and scatter\nax = plt.subplot(1,5,1)\nplt.hist(dat['istim'])\nax.set(xlabel='orientations', ylabel = '# trials')\n\nax = plt.subplot(1,5,2)\nplt.scatter(dat['istim'], dat['sresp'][1000], s= 1)\nax.set(xlabel = 'orientation', ylabel = 'neural response')\n\nax = plt.subplot(1,5,3)\nplt.plot(dat['run'][:1000])\nax.set(xlabel = 'timepoints', ylabel = 'running')\n\nax = plt.subplot(1,5,4)\nplt.scatter(dat['run'], dat['sresp'][20998], s= 1)\nax.set(xlabel = 'running', ylabel = 'neural response')\n\nplt.show()",
"_____no_output_____"
],
[
"#@title take PCA after preparing data by z-score\nfrom scipy.stats import zscore\nfrom sklearn.decomposition import PCA \nZ = zscore(dat['sresp'], axis=1)\nX = PCA(n_components = 200).fit_transform(Z.T)",
"_____no_output_____"
],
[
"#@title plot PCs as function of stimulus orientation\nfor j in range(5):\n ax = plt.subplot(1,5,j+1)\n plt.scatter(dat['istim'], X[:,j], s = 1)\n ax.set(xlabel='orientation', ylabel = 'PC%d'%j)\nplt.show()",
"_____no_output_____"
],
[
"#@title run a manifold embedding algorithm (UMAP) in two or three dimensions. \n!pip install umap-learn\nfrom umap import UMAP\nncomp = 3 # try 2, then try 3\nxinit = 3 * zscore(X[:,:ncomp], axis=0)\nembed = UMAP(n_components=ncomp, init = xinit, n_neighbors = 25, \n metric = 'correlation', transform_seed = 42).fit_transform(X)",
"Requirement already satisfied: umap-learn in /usr/local/lib/python3.6/dist-packages (0.4.6)\nRequirement already satisfied: scikit-learn>=0.20 in /usr/local/lib/python3.6/dist-packages (from umap-learn) (0.22.2.post1)\nRequirement already satisfied: numba!=0.47,>=0.46 in /usr/local/lib/python3.6/dist-packages (from umap-learn) (0.48.0)\nRequirement already satisfied: scipy>=1.3.1 in /usr/local/lib/python3.6/dist-packages (from umap-learn) (1.4.1)\nRequirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.6/dist-packages (from umap-learn) (1.18.5)\nRequirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn>=0.20->umap-learn) (0.16.0)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from numba!=0.47,>=0.46->umap-learn) (49.1.0)\nRequirement already satisfied: llvmlite<0.32.0,>=0.31.0dev0 in /usr/local/lib/python3.6/dist-packages (from numba!=0.47,>=0.46->umap-learn) (0.31.0)\n"
],
[
"plt.figure(figsize=(8,8))\nfor i in range(ncomp):\n for j in range(ncomp):\n plt.subplot(ncomp,ncomp, j + ncomp*i + 1)\n if i==j:\n plt.scatter(dat['istim'], embed[:,i], s = 1)\n else:\n plt.scatter(embed[:,j], embed[:,i], s = 1, c= dat['istim'], cmap = 'hsv')\n# Is that a Mobius strip? A good project would be to try to figure out why (I don't know). ",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a08791410a352d044a1dcfa0355dbc7e3cab567
| 177,236 |
ipynb
|
Jupyter Notebook
|
deep-learning/udacity-deeplearning/tensorboard/Anna_KaRNNa_Summaries.ipynb
|
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
|
352dd6d9a785e22fde0ce53a6b0c2e56f4964950
|
[
"Apache-2.0"
] | 4,171 |
2017-01-29T23:58:50.000Z
|
2022-03-27T14:58:47.000Z
|
deep-learning/udacity-deeplearning/tensorboard/Anna_KaRNNa_Summaries.ipynb
|
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
|
352dd6d9a785e22fde0ce53a6b0c2e56f4964950
|
[
"Apache-2.0"
] | 154 |
2017-03-03T12:42:46.000Z
|
2021-07-27T18:21:10.000Z
|
deep-learning/udacity-deeplearning/tensorboard/Anna_KaRNNa_Summaries.ipynb
|
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
|
352dd6d9a785e22fde0ce53a6b0c2e56f4964950
|
[
"Apache-2.0"
] | 4,928 |
2017-01-30T05:07:08.000Z
|
2022-03-31T02:09:34.000Z
| 68.06298 | 519 | 0.660972 |
[
[
[
"# Anna KaRNNa\n\nIn this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.\n\nThis network is based off of Andrej Karpathy's [post on RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) and [implementation in Torch](https://github.com/karpathy/char-rnn). Also, some information [here at r2rt](http://r2rt.com/recurrent-neural-networks-in-tensorflow-ii.html) and from [Sherjil Ozair](https://github.com/sherjilozair/char-rnn-tensorflow) on GitHub. Below is the general architecture of the character-wise RNN.\n\n<img src=\"assets/charseq.jpeg\" width=\"500\">",
"_____no_output_____"
]
],
[
[
"import time\nfrom collections import namedtuple\n\nimport numpy as np\nimport tensorflow as tf",
"_____no_output_____"
]
],
[
[
"First we'll load the text file and convert it into integers for our network to use.",
"_____no_output_____"
]
],
[
[
"with open('anna.txt', 'r') as f:\n text=f.read()\nvocab = set(text)\nvocab_to_int = {c: i for i, c in enumerate(vocab)}\nint_to_vocab = dict(enumerate(vocab))\nchars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)",
"_____no_output_____"
],
[
"text[:100]",
"_____no_output_____"
],
[
"chars[:100]",
"_____no_output_____"
]
],
[
[
"Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.\n\nHere I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.\n\nThe idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the `split_frac` keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.",
"_____no_output_____"
]
],
[
[
"def split_data(chars, batch_size, num_steps, split_frac=0.9):\n \"\"\" \n Split character data into training and validation sets, inputs and targets for each set.\n \n Arguments\n ---------\n chars: character array\n batch_size: Size of examples in each of batch\n num_steps: Number of sequence steps to keep in the input and pass to the network\n split_frac: Fraction of batches to keep in the training set\n \n \n Returns train_x, train_y, val_x, val_y\n \"\"\"\n \n slice_size = batch_size * num_steps\n n_batches = int(len(chars) / slice_size)\n \n # Drop the last few characters to make only full batches\n x = chars[: n_batches*slice_size]\n y = chars[1: n_batches*slice_size + 1]\n \n # Split the data into batch_size slices, then stack them into a 2D matrix \n x = np.stack(np.split(x, batch_size))\n y = np.stack(np.split(y, batch_size))\n \n # Now x and y are arrays with dimensions batch_size x n_batches*num_steps\n \n # Split into training and validation sets, keep the virst split_frac batches for training\n split_idx = int(n_batches*split_frac)\n train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]\n val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]\n \n return train_x, train_y, val_x, val_y",
"_____no_output_____"
],
[
"train_x, train_y, val_x, val_y = split_data(chars, 10, 200)",
"_____no_output_____"
],
[
"train_x.shape",
"_____no_output_____"
],
[
"train_x[:,:10]",
"_____no_output_____"
]
],
[
[
"I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size `batch_size X num_steps`. For example, if we want our network to train on a sequence of 100 characters, `num_steps = 100`. For the next batch, we'll shift this window the next sequence of `num_steps` characters. In this way we can feed batches to the network and the cell states will continue through on each batch.",
"_____no_output_____"
]
],
[
[
"def get_batch(arrs, num_steps):\n batch_size, slice_size = arrs[0].shape\n \n n_batches = int(slice_size/num_steps)\n for b in range(n_batches):\n yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]",
"_____no_output_____"
],
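[
"As a quick sanity check (a sketch, not part of the original notebook), we can pull one batch out of the generator and confirm that each array is batch_size by num_steps.",
"_____no_output_____"
],
[
"# Sketch (not in the original notebook): peek at the first batch from get_batch.\n# With split_data(chars, 10, 200) above, both arrays should be (10, 200).\nx, y = next(get_batch([train_x, train_y], 200))\nprint(x.shape, y.shape)",
"_____no_output_____"
],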
[
"def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,\n learning_rate=0.001, grad_clip=5, sampling=False):\n \n if sampling == True:\n batch_size, num_steps = 1, 1\n\n tf.reset_default_graph()\n \n # Declare placeholders we'll feed into the graph\n with tf.name_scope('inputs'):\n inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')\n x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')\n \n with tf.name_scope('targets'):\n targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')\n y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')\n y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])\n \n keep_prob = tf.placeholder(tf.float32, name='keep_prob')\n \n # Build the RNN layers\n with tf.name_scope(\"RNN_cells\"):\n lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)\n \n with tf.name_scope(\"RNN_init_state\"):\n initial_state = cell.zero_state(batch_size, tf.float32)\n\n # Run the data through the RNN layers\n with tf.name_scope(\"RNN_forward\"):\n outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)\n \n final_state = state\n \n # Reshape output so it's a bunch of rows, one row for each cell output\n with tf.name_scope('sequence_reshape'):\n seq_output = tf.concat(outputs, axis=1,name='seq_output')\n output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')\n \n # Now connect the RNN outputs to a softmax layer and calculate the cost\n with tf.name_scope('logits'):\n softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),\n name='softmax_w')\n softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')\n logits = tf.matmul(output, softmax_w) + softmax_b\n tf.summary.histogram('softmax_w', softmax_w)\n tf.summary.histogram('softmax_b', softmax_b)\n\n with tf.name_scope('predictions'):\n preds = tf.nn.softmax(logits, name='predictions')\n tf.summary.histogram('predictions', preds)\n \n with tf.name_scope('cost'):\n loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')\n cost = tf.reduce_mean(loss, name='cost')\n tf.summary.scalar('cost', cost)\n\n # Optimizer for training, using gradient clipping to control exploding gradients\n with tf.name_scope('train'):\n tvars = tf.trainable_variables()\n grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)\n train_op = tf.train.AdamOptimizer(learning_rate)\n optimizer = train_op.apply_gradients(zip(grads, tvars))\n \n merged = tf.summary.merge_all()\n \n # Export the nodes \n export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',\n 'keep_prob', 'cost', 'preds', 'optimizer', 'merged']\n Graph = namedtuple('Graph', export_nodes)\n local_dict = locals()\n graph = Graph(*[local_dict[each] for each in export_nodes])\n \n return graph",
"_____no_output_____"
]
],
[
[
"## Hyperparameters\n\nHere I'm defining the hyperparameters for the network. The two you probably haven't seen before are `lstm_size` and `num_layers`. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.",
"_____no_output_____"
]
],
[
[
"batch_size = 100\nnum_steps = 100\nlstm_size = 512\nnum_layers = 2\nlearning_rate = 0.001",
"_____no_output_____"
]
],
[
[
"## Training\n\nTime for training which is is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by `save_every_n`) I calculate the validation loss and save a checkpoint.",
"_____no_output_____"
]
],
[
[
"!mkdir -p checkpoints/anna",
"_____no_output_____"
],
[
"epochs = 10\nsave_every_n = 100\ntrain_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)\n\nmodel = build_rnn(len(vocab), \n batch_size=batch_size,\n num_steps=num_steps,\n learning_rate=learning_rate,\n lstm_size=lstm_size,\n num_layers=num_layers)\n\nsaver = tf.train.Saver(max_to_keep=100)\n\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n train_writer = tf.summary.FileWriter('./logs/2/train', sess.graph)\n test_writer = tf.summary.FileWriter('./logs/2/test')\n \n # Use the line below to load a checkpoint and resume training\n #saver.restore(sess, 'checkpoints/anna20.ckpt')\n \n n_batches = int(train_x.shape[1]/num_steps)\n iterations = n_batches * epochs\n for e in range(epochs):\n \n # Train network\n new_state = sess.run(model.initial_state)\n loss = 0\n for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):\n iteration = e*n_batches + b\n start = time.time()\n feed = {model.inputs: x,\n model.targets: y,\n model.keep_prob: 0.5,\n model.initial_state: new_state}\n summary, batch_loss, new_state, _ = sess.run([model.merged, model.cost, \n model.final_state, model.optimizer], \n feed_dict=feed)\n loss += batch_loss\n end = time.time()\n print('Epoch {}/{} '.format(e+1, epochs),\n 'Iteration {}/{}'.format(iteration, iterations),\n 'Training loss: {:.4f}'.format(loss/b),\n '{:.4f} sec/batch'.format((end-start)))\n \n train_writer.add_summary(summary, iteration)\n \n if (iteration%save_every_n == 0) or (iteration == iterations):\n # Check performance, notice dropout has been set to 1\n val_loss = []\n new_state = sess.run(model.initial_state)\n for x, y in get_batch([val_x, val_y], num_steps):\n feed = {model.inputs: x,\n model.targets: y,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n summary, batch_loss, new_state = sess.run([model.merged, model.cost, \n model.final_state], feed_dict=feed)\n val_loss.append(batch_loss)\n \n test_writer.add_summary(summary, iteration)\n\n print('Validation loss:', np.mean(val_loss),\n 'Saving checkpoint!')\n #saver.save(sess, \"checkpoints/anna/i{}_l{}_{:.3f}.ckpt\".format(iteration, lstm_size, np.mean(val_loss)))",
"Epoch 1/10 Iteration 1/1780 Training loss: 4.4188 1.2876 sec/batch\nEpoch 1/10 Iteration 2/1780 Training loss: 4.3775 0.1364 sec/batch\nEpoch 1/10 Iteration 3/1780 Training loss: 4.2100 0.1310 sec/batch\nEpoch 1/10 Iteration 4/1780 Training loss: 4.5256 0.1212 sec/batch\nEpoch 1/10 Iteration 5/1780 Training loss: 4.4524 0.1271 sec/batch\nEpoch 1/10 Iteration 6/1780 Training loss: 4.3496 0.1272 sec/batch\nEpoch 1/10 Iteration 7/1780 Training loss: 4.2637 0.1260 sec/batch\nEpoch 1/10 Iteration 8/1780 Training loss: 4.1856 0.1231 sec/batch\nEpoch 1/10 Iteration 9/1780 Training loss: 4.1126 0.1210 sec/batch\nEpoch 1/10 Iteration 10/1780 Training loss: 4.0469 0.1198 sec/batch\nEpoch 1/10 Iteration 11/1780 Training loss: 3.9883 0.1211 sec/batch\nEpoch 1/10 Iteration 12/1780 Training loss: 3.9390 0.1232 sec/batch\nEpoch 1/10 Iteration 13/1780 Training loss: 3.8954 0.1352 sec/batch\nEpoch 1/10 Iteration 14/1780 Training loss: 3.8584 0.1232 sec/batch\nEpoch 1/10 Iteration 15/1780 Training loss: 3.8247 0.1217 sec/batch\nEpoch 1/10 Iteration 16/1780 Training loss: 3.7941 0.1202 sec/batch\nEpoch 1/10 Iteration 17/1780 Training loss: 3.7654 0.1205 sec/batch\nEpoch 1/10 Iteration 18/1780 Training loss: 3.7406 0.1200 sec/batch\nEpoch 1/10 Iteration 19/1780 Training loss: 3.7170 0.1227 sec/batch\nEpoch 1/10 Iteration 20/1780 Training loss: 3.6936 0.1193 sec/batch\nEpoch 1/10 Iteration 21/1780 Training loss: 3.6733 0.1201 sec/batch\nEpoch 1/10 Iteration 22/1780 Training loss: 3.6542 0.1187 sec/batch\nEpoch 1/10 Iteration 23/1780 Training loss: 3.6371 0.1194 sec/batch\nEpoch 1/10 Iteration 24/1780 Training loss: 3.6212 0.1194 sec/batch\nEpoch 1/10 Iteration 25/1780 Training loss: 3.6055 0.1203 sec/batch\nEpoch 1/10 Iteration 26/1780 Training loss: 3.5918 0.1209 sec/batch\nEpoch 1/10 Iteration 27/1780 Training loss: 3.5789 0.1189 sec/batch\nEpoch 1/10 Iteration 28/1780 Training loss: 3.5657 0.1197 sec/batch\nEpoch 1/10 Iteration 29/1780 Training loss: 3.5534 0.1242 sec/batch\nEpoch 1/10 Iteration 30/1780 Training loss: 3.5425 0.1184 sec/batch\nEpoch 1/10 Iteration 31/1780 Training loss: 3.5325 0.1204 sec/batch\nEpoch 1/10 Iteration 32/1780 Training loss: 3.5224 0.1203 sec/batch\nEpoch 1/10 Iteration 33/1780 Training loss: 3.5125 0.1236 sec/batch\nEpoch 1/10 Iteration 34/1780 Training loss: 3.5037 0.1195 sec/batch\nEpoch 1/10 Iteration 35/1780 Training loss: 3.4948 0.1202 sec/batch\nEpoch 1/10 Iteration 36/1780 Training loss: 3.4867 0.1190 sec/batch\nEpoch 1/10 Iteration 37/1780 Training loss: 3.4782 0.1226 sec/batch\nEpoch 1/10 Iteration 38/1780 Training loss: 3.4702 0.1201 sec/batch\nEpoch 1/10 Iteration 39/1780 Training loss: 3.4625 0.1223 sec/batch\nEpoch 1/10 Iteration 40/1780 Training loss: 3.4553 0.1196 sec/batch\nEpoch 1/10 Iteration 41/1780 Training loss: 3.4482 0.1200 sec/batch\nEpoch 1/10 Iteration 42/1780 Training loss: 3.4415 0.1195 sec/batch\nEpoch 1/10 Iteration 43/1780 Training loss: 3.4350 0.1209 sec/batch\nEpoch 1/10 Iteration 44/1780 Training loss: 3.4287 0.1215 sec/batch\nEpoch 1/10 Iteration 45/1780 Training loss: 3.4225 0.1255 sec/batch\nEpoch 1/10 Iteration 46/1780 Training loss: 3.4170 0.1194 sec/batch\nEpoch 1/10 Iteration 47/1780 Training loss: 3.4116 0.1194 sec/batch\nEpoch 1/10 Iteration 48/1780 Training loss: 3.4067 0.1190 sec/batch\nEpoch 1/10 Iteration 49/1780 Training loss: 3.4020 0.1215 sec/batch\nEpoch 1/10 Iteration 50/1780 Training loss: 3.3972 0.1203 sec/batch\nEpoch 1/10 Iteration 51/1780 Training loss: 3.3926 0.1199 sec/batch\nEpoch 1/10 Iteration 52/1780 Training loss: 
3.3878 0.1188 sec/batch\nEpoch 1/10 Iteration 53/1780 Training loss: 3.3836 0.1214 sec/batch\nEpoch 1/10 Iteration 54/1780 Training loss: 3.3791 0.1201 sec/batch\nEpoch 1/10 Iteration 55/1780 Training loss: 3.3750 0.1199 sec/batch\nEpoch 1/10 Iteration 56/1780 Training loss: 3.3707 0.1201 sec/batch\nEpoch 1/10 Iteration 57/1780 Training loss: 3.3667 0.1234 sec/batch\nEpoch 1/10 Iteration 58/1780 Training loss: 3.3630 0.1213 sec/batch\nEpoch 1/10 Iteration 59/1780 Training loss: 3.3592 0.1229 sec/batch\nEpoch 1/10 Iteration 60/1780 Training loss: 3.3557 0.1194 sec/batch\nEpoch 1/10 Iteration 61/1780 Training loss: 3.3522 0.1205 sec/batch\nEpoch 1/10 Iteration 62/1780 Training loss: 3.3493 0.1189 sec/batch\nEpoch 1/10 Iteration 63/1780 Training loss: 3.3464 0.1201 sec/batch\nEpoch 1/10 Iteration 64/1780 Training loss: 3.3429 0.1210 sec/batch\nEpoch 1/10 Iteration 65/1780 Training loss: 3.3396 0.1213 sec/batch\nEpoch 1/10 Iteration 66/1780 Training loss: 3.3368 0.1218 sec/batch\nEpoch 1/10 Iteration 67/1780 Training loss: 3.3340 0.1202 sec/batch\nEpoch 1/10 Iteration 68/1780 Training loss: 3.3306 0.1195 sec/batch\nEpoch 1/10 Iteration 69/1780 Training loss: 3.3276 0.1225 sec/batch\nEpoch 1/10 Iteration 70/1780 Training loss: 3.3249 0.1188 sec/batch\nEpoch 1/10 Iteration 71/1780 Training loss: 3.3221 0.1208 sec/batch\nEpoch 1/10 Iteration 72/1780 Training loss: 3.3197 0.1201 sec/batch\nEpoch 1/10 Iteration 73/1780 Training loss: 3.3170 0.1206 sec/batch\nEpoch 1/10 Iteration 74/1780 Training loss: 3.3145 0.1192 sec/batch\nEpoch 1/10 Iteration 75/1780 Training loss: 3.3122 0.1233 sec/batch\nEpoch 1/10 Iteration 76/1780 Training loss: 3.3099 0.1197 sec/batch\nEpoch 1/10 Iteration 77/1780 Training loss: 3.3076 0.1204 sec/batch\nEpoch 1/10 Iteration 78/1780 Training loss: 3.3053 0.1199 sec/batch\nEpoch 1/10 Iteration 79/1780 Training loss: 3.3029 0.1232 sec/batch\nEpoch 1/10 Iteration 80/1780 Training loss: 3.3004 0.1190 sec/batch\nEpoch 1/10 Iteration 81/1780 Training loss: 3.2982 0.1201 sec/batch\nEpoch 1/10 Iteration 82/1780 Training loss: 3.2961 0.1196 sec/batch\nEpoch 1/10 Iteration 83/1780 Training loss: 3.2940 0.1213 sec/batch\nEpoch 1/10 Iteration 84/1780 Training loss: 3.2919 0.1184 sec/batch\nEpoch 1/10 Iteration 85/1780 Training loss: 3.2899 0.1199 sec/batch\nEpoch 1/10 Iteration 86/1780 Training loss: 3.2881 0.1190 sec/batch\nEpoch 1/10 Iteration 87/1780 Training loss: 3.2862 0.1201 sec/batch\nEpoch 1/10 Iteration 88/1780 Training loss: 3.2843 0.1217 sec/batch\nEpoch 1/10 Iteration 89/1780 Training loss: 3.2826 0.1199 sec/batch\nEpoch 1/10 Iteration 90/1780 Training loss: 3.2809 0.1191 sec/batch\nEpoch 1/10 Iteration 91/1780 Training loss: 3.2791 0.1204 sec/batch\nEpoch 1/10 Iteration 92/1780 Training loss: 3.2773 0.1219 sec/batch\nEpoch 1/10 Iteration 93/1780 Training loss: 3.2756 0.1192 sec/batch\nEpoch 1/10 Iteration 94/1780 Training loss: 3.2738 0.1213 sec/batch\nEpoch 1/10 Iteration 95/1780 Training loss: 3.2721 0.1207 sec/batch\nEpoch 1/10 Iteration 96/1780 Training loss: 3.2703 0.1186 sec/batch\nEpoch 1/10 Iteration 97/1780 Training loss: 3.2688 0.1207 sec/batch\nEpoch 1/10 Iteration 98/1780 Training loss: 3.2670 0.1203 sec/batch\nEpoch 1/10 Iteration 99/1780 Training loss: 3.2654 0.1201 sec/batch\nEpoch 1/10 Iteration 100/1780 Training loss: 3.2637 0.1199 sec/batch\nValidation loss: 3.05181 Saving checkpoint!\nEpoch 1/10 Iteration 101/1780 Training loss: 3.2620 0.1184 sec/batch\nEpoch 1/10 Iteration 102/1780 Training loss: 3.2603 0.1201 sec/batch\nEpoch 1/10 Iteration 103/1780 
Training loss: 3.2587 0.1201 sec/batch\nEpoch 1/10 Iteration 104/1780 Training loss: 3.2569 0.1208 sec/batch\nEpoch 1/10 Iteration 105/1780 Training loss: 3.2553 0.1187 sec/batch\nEpoch 1/10 Iteration 106/1780 Training loss: 3.2536 0.1201 sec/batch\nEpoch 1/10 Iteration 107/1780 Training loss: 3.2519 0.1227 sec/batch\nEpoch 1/10 Iteration 108/1780 Training loss: 3.2502 0.1205 sec/batch\nEpoch 1/10 Iteration 109/1780 Training loss: 3.2487 0.1224 sec/batch\nEpoch 1/10 Iteration 110/1780 Training loss: 3.2469 0.1220 sec/batch\nEpoch 1/10 Iteration 111/1780 Training loss: 3.2453 0.1191 sec/batch\nEpoch 1/10 Iteration 112/1780 Training loss: 3.2437 0.1204 sec/batch\nEpoch 1/10 Iteration 113/1780 Training loss: 3.2421 0.1191 sec/batch\nEpoch 1/10 Iteration 114/1780 Training loss: 3.2404 0.1207 sec/batch\nEpoch 1/10 Iteration 115/1780 Training loss: 3.2387 0.1202 sec/batch\nEpoch 1/10 Iteration 116/1780 Training loss: 3.2371 0.1201 sec/batch\nEpoch 1/10 Iteration 117/1780 Training loss: 3.2354 0.1195 sec/batch\nEpoch 1/10 Iteration 118/1780 Training loss: 3.2340 0.1217 sec/batch\nEpoch 1/10 Iteration 119/1780 Training loss: 3.2325 0.1211 sec/batch\nEpoch 1/10 Iteration 120/1780 Training loss: 3.2309 0.1200 sec/batch\nEpoch 1/10 Iteration 121/1780 Training loss: 3.2295 0.1187 sec/batch\nEpoch 1/10 Iteration 122/1780 Training loss: 3.2280 0.1229 sec/batch\nEpoch 1/10 Iteration 123/1780 Training loss: 3.2264 0.1189 sec/batch\nEpoch 1/10 Iteration 124/1780 Training loss: 3.2249 0.1207 sec/batch\nEpoch 1/10 Iteration 125/1780 Training loss: 3.2232 0.1194 sec/batch\nEpoch 1/10 Iteration 126/1780 Training loss: 3.2214 0.1226 sec/batch\nEpoch 1/10 Iteration 127/1780 Training loss: 3.2197 0.1201 sec/batch\nEpoch 1/10 Iteration 128/1780 Training loss: 3.2181 0.1190 sec/batch\nEpoch 1/10 Iteration 129/1780 Training loss: 3.2164 0.1223 sec/batch\nEpoch 1/10 Iteration 130/1780 Training loss: 3.2148 0.1223 sec/batch\nEpoch 1/10 Iteration 131/1780 Training loss: 3.2132 0.1215 sec/batch\nEpoch 1/10 Iteration 132/1780 Training loss: 3.2114 0.1222 sec/batch\nEpoch 1/10 Iteration 133/1780 Training loss: 3.2097 0.1211 sec/batch\nEpoch 1/10 Iteration 134/1780 Training loss: 3.2079 0.1204 sec/batch\nEpoch 1/10 Iteration 135/1780 Training loss: 3.2059 0.1228 sec/batch\nEpoch 1/10 Iteration 136/1780 Training loss: 3.2039 0.1214 sec/batch\nEpoch 1/10 Iteration 137/1780 Training loss: 3.2020 0.1199 sec/batch\nEpoch 1/10 Iteration 138/1780 Training loss: 3.2000 0.1207 sec/batch\nEpoch 1/10 Iteration 139/1780 Training loss: 3.1982 0.1205 sec/batch\nEpoch 1/10 Iteration 140/1780 Training loss: 3.1961 0.1202 sec/batch\nEpoch 1/10 Iteration 141/1780 Training loss: 3.1941 0.1209 sec/batch\nEpoch 1/10 Iteration 142/1780 Training loss: 3.1921 0.1225 sec/batch\nEpoch 1/10 Iteration 143/1780 Training loss: 3.1901 0.1191 sec/batch\nEpoch 1/10 Iteration 144/1780 Training loss: 3.1880 0.1246 sec/batch\nEpoch 1/10 Iteration 145/1780 Training loss: 3.1860 0.1200 sec/batch\nEpoch 1/10 Iteration 146/1780 Training loss: 3.1840 0.1214 sec/batch\nEpoch 1/10 Iteration 147/1780 Training loss: 3.1820 0.1289 sec/batch\nEpoch 1/10 Iteration 148/1780 Training loss: 3.1800 0.1206 sec/batch\nEpoch 1/10 Iteration 149/1780 Training loss: 3.1778 0.1210 sec/batch\nEpoch 1/10 Iteration 150/1780 Training loss: 3.1756 0.1208 sec/batch\nEpoch 1/10 Iteration 151/1780 Training loss: 3.1736 0.1197 sec/batch\nEpoch 1/10 Iteration 152/1780 Training loss: 3.1716 0.1201 sec/batch\nEpoch 1/10 Iteration 153/1780 Training loss: 3.1694 0.1216 sec/batch\nEpoch 1/10 
Iteration 154/1780 Training loss: 3.1671 0.1206 sec/batch\nEpoch 1/10 Iteration 155/1780 Training loss: 3.1648 0.1193 sec/batch\nEpoch 1/10 Iteration 156/1780 Training loss: 3.1624 0.1201 sec/batch\nEpoch 1/10 Iteration 157/1780 Training loss: 3.1599 0.1191 sec/batch\nEpoch 1/10 Iteration 158/1780 Training loss: 3.1574 0.1211 sec/batch\nEpoch 1/10 Iteration 159/1780 Training loss: 3.1548 0.1318 sec/batch\nEpoch 1/10 Iteration 160/1780 Training loss: 3.1523 0.1204 sec/batch\nEpoch 1/10 Iteration 161/1780 Training loss: 3.1498 0.1213 sec/batch\nEpoch 1/10 Iteration 162/1780 Training loss: 3.1471 0.1204 sec/batch\nEpoch 1/10 Iteration 163/1780 Training loss: 3.1446 0.1221 sec/batch\nEpoch 1/10 Iteration 164/1780 Training loss: 3.1430 0.1203 sec/batch\nEpoch 1/10 Iteration 165/1780 Training loss: 3.1411 0.1189 sec/batch\nEpoch 1/10 Iteration 166/1780 Training loss: 3.1390 0.1221 sec/batch\nEpoch 1/10 Iteration 167/1780 Training loss: 3.1367 0.1196 sec/batch\nEpoch 1/10 Iteration 168/1780 Training loss: 3.1346 0.1224 sec/batch\nEpoch 1/10 Iteration 169/1780 Training loss: 3.1325 0.1187 sec/batch\nEpoch 1/10 Iteration 170/1780 Training loss: 3.1301 0.1226 sec/batch\nEpoch 1/10 Iteration 171/1780 Training loss: 3.1278 0.1188 sec/batch\nEpoch 1/10 Iteration 172/1780 Training loss: 3.1258 0.1196 sec/batch\nEpoch 1/10 Iteration 173/1780 Training loss: 3.1237 0.1192 sec/batch\nEpoch 1/10 Iteration 174/1780 Training loss: 3.1215 0.1223 sec/batch\nEpoch 1/10 Iteration 175/1780 Training loss: 3.1193 0.1186 sec/batch\nEpoch 1/10 Iteration 176/1780 Training loss: 3.1179 0.1208 sec/batch\nEpoch 1/10 Iteration 177/1780 Training loss: 3.1162 0.1187 sec/batch\nEpoch 1/10 Iteration 178/1780 Training loss: 3.1137 0.1232 sec/batch\nEpoch 2/10 Iteration 179/1780 Training loss: 2.6953 0.1210 sec/batch\nEpoch 2/10 Iteration 180/1780 Training loss: 2.6538 0.1232 sec/batch\nEpoch 2/10 Iteration 181/1780 Training loss: 2.6371 0.1197 sec/batch\nEpoch 2/10 Iteration 182/1780 Training loss: 2.6328 0.1235 sec/batch\nEpoch 2/10 Iteration 183/1780 Training loss: 2.6298 0.1185 sec/batch\nEpoch 2/10 Iteration 184/1780 Training loss: 2.6251 0.1227 sec/batch\nEpoch 2/10 Iteration 185/1780 Training loss: 2.6222 0.1192 sec/batch\nEpoch 2/10 Iteration 186/1780 Training loss: 2.6206 0.1228 sec/batch\nEpoch 2/10 Iteration 187/1780 Training loss: 2.6176 0.1232 sec/batch\nEpoch 2/10 Iteration 188/1780 Training loss: 2.6138 0.1206 sec/batch\nEpoch 2/10 Iteration 189/1780 Training loss: 2.6088 0.1204 sec/batch\nEpoch 2/10 Iteration 190/1780 Training loss: 2.6067 0.1209 sec/batch\nEpoch 2/10 Iteration 191/1780 Training loss: 2.6035 0.1196 sec/batch\nEpoch 2/10 Iteration 192/1780 Training loss: 2.6023 0.1203 sec/batch\nEpoch 2/10 Iteration 193/1780 Training loss: 2.5985 0.1229 sec/batch\nEpoch 2/10 Iteration 194/1780 Training loss: 2.5957 0.1262 sec/batch\nEpoch 2/10 Iteration 195/1780 Training loss: 2.5928 0.1223 sec/batch\nEpoch 2/10 Iteration 196/1780 Training loss: 2.5922 0.1223 sec/batch\nEpoch 2/10 Iteration 197/1780 Training loss: 2.5893 0.1192 sec/batch\nEpoch 2/10 Iteration 198/1780 Training loss: 2.5853 0.1222 sec/batch\nEpoch 2/10 Iteration 199/1780 Training loss: 2.5819 0.1228 sec/batch\nEpoch 2/10 Iteration 200/1780 Training loss: 2.5808 0.1213 sec/batch\nValidation loss: 2.43305 Saving checkpoint!\nEpoch 2/10 Iteration 201/1780 Training loss: 2.5788 0.1208 sec/batch\nEpoch 2/10 Iteration 202/1780 Training loss: 2.5758 0.1206 sec/batch\nEpoch 2/10 Iteration 203/1780 Training loss: 2.5726 0.1197 sec/batch\nEpoch 2/10 Iteration 
204/1780 Training loss: 2.5701 0.1203 sec/batch\nEpoch 2/10 Iteration 205/1780 Training loss: 2.5674 0.1191 sec/batch\nEpoch 2/10 Iteration 206/1780 Training loss: 2.5649 0.1218 sec/batch\nEpoch 2/10 Iteration 207/1780 Training loss: 2.5627 0.1205 sec/batch\nEpoch 2/10 Iteration 208/1780 Training loss: 2.5605 0.1194 sec/batch\nEpoch 2/10 Iteration 209/1780 Training loss: 2.5589 0.1231 sec/batch\nEpoch 2/10 Iteration 210/1780 Training loss: 2.5562 0.1208 sec/batch\nEpoch 2/10 Iteration 211/1780 Training loss: 2.5533 0.1237 sec/batch\nEpoch 2/10 Iteration 212/1780 Training loss: 2.5509 0.1243 sec/batch\nEpoch 2/10 Iteration 213/1780 Training loss: 2.5486 0.1192 sec/batch\nEpoch 2/10 Iteration 214/1780 Training loss: 2.5464 0.1218 sec/batch\nEpoch 2/10 Iteration 215/1780 Training loss: 2.5440 0.1228 sec/batch\nEpoch 2/10 Iteration 216/1780 Training loss: 2.5412 0.1224 sec/batch\nEpoch 2/10 Iteration 217/1780 Training loss: 2.5388 0.1195 sec/batch\nEpoch 2/10 Iteration 218/1780 Training loss: 2.5362 0.1229 sec/batch\nEpoch 2/10 Iteration 219/1780 Training loss: 2.5336 0.1219 sec/batch\nEpoch 2/10 Iteration 220/1780 Training loss: 2.5310 0.1241 sec/batch\nEpoch 2/10 Iteration 221/1780 Training loss: 2.5286 0.1194 sec/batch\nEpoch 2/10 Iteration 222/1780 Training loss: 2.5260 0.1209 sec/batch\nEpoch 2/10 Iteration 223/1780 Training loss: 2.5238 0.1195 sec/batch\nEpoch 2/10 Iteration 224/1780 Training loss: 2.5209 0.1212 sec/batch\nEpoch 2/10 Iteration 225/1780 Training loss: 2.5193 0.1191 sec/batch\nEpoch 2/10 Iteration 226/1780 Training loss: 2.5171 0.1196 sec/batch\nEpoch 2/10 Iteration 227/1780 Training loss: 2.5150 0.1202 sec/batch\nEpoch 2/10 Iteration 228/1780 Training loss: 2.5135 0.1234 sec/batch\nEpoch 2/10 Iteration 229/1780 Training loss: 2.5115 0.1213 sec/batch\nEpoch 2/10 Iteration 230/1780 Training loss: 2.5097 0.1203 sec/batch\nEpoch 2/10 Iteration 231/1780 Training loss: 2.5077 0.1210 sec/batch\nEpoch 2/10 Iteration 232/1780 Training loss: 2.5057 0.1202 sec/batch\nEpoch 2/10 Iteration 233/1780 Training loss: 2.5035 0.1194 sec/batch\nEpoch 2/10 Iteration 234/1780 Training loss: 2.5019 0.1208 sec/batch\nEpoch 2/10 Iteration 235/1780 Training loss: 2.5001 0.1209 sec/batch\nEpoch 2/10 Iteration 236/1780 Training loss: 2.4982 0.1326 sec/batch\nEpoch 2/10 Iteration 237/1780 Training loss: 2.4963 0.1190 sec/batch\nEpoch 2/10 Iteration 238/1780 Training loss: 2.4948 0.1222 sec/batch\nEpoch 2/10 Iteration 239/1780 Training loss: 2.4930 0.1195 sec/batch\nEpoch 2/10 Iteration 240/1780 Training loss: 2.4915 0.1190 sec/batch\nEpoch 2/10 Iteration 241/1780 Training loss: 2.4902 0.1215 sec/batch\nEpoch 2/10 Iteration 242/1780 Training loss: 2.4885 0.1208 sec/batch\nEpoch 2/10 Iteration 243/1780 Training loss: 2.4867 0.1213 sec/batch\nEpoch 2/10 Iteration 244/1780 Training loss: 2.4853 0.1208 sec/batch\nEpoch 2/10 Iteration 245/1780 Training loss: 2.4836 0.1193 sec/batch\nEpoch 2/10 Iteration 246/1780 Training loss: 2.4816 0.1196 sec/batch\nEpoch 2/10 Iteration 247/1780 Training loss: 2.4796 0.1220 sec/batch\nEpoch 2/10 Iteration 248/1780 Training loss: 2.4781 0.1227 sec/batch\nEpoch 2/10 Iteration 249/1780 Training loss: 2.4767 0.1215 sec/batch\nEpoch 2/10 Iteration 250/1780 Training loss: 2.4754 0.1240 sec/batch\nEpoch 2/10 Iteration 251/1780 Training loss: 2.4740 0.1215 sec/batch\nEpoch 2/10 Iteration 252/1780 Training loss: 2.4723 0.1198 sec/batch\nEpoch 2/10 Iteration 253/1780 Training loss: 2.4707 0.1199 sec/batch\nEpoch 2/10 Iteration 254/1780 Training loss: 2.4696 0.1210 sec/batch\nEpoch 
2/10 Iteration 255/1780 Training loss: 2.4681 0.1215 sec/batch\nEpoch 2/10 Iteration 256/1780 Training loss: 2.4667 0.1201 sec/batch\nEpoch 2/10 Iteration 257/1780 Training loss: 2.4651 0.1189 sec/batch\nEpoch 2/10 Iteration 258/1780 Training loss: 2.4635 0.1210 sec/batch\nEpoch 2/10 Iteration 259/1780 Training loss: 2.4619 0.1193 sec/batch\nEpoch 2/10 Iteration 260/1780 Training loss: 2.4604 0.1212 sec/batch\nEpoch 2/10 Iteration 261/1780 Training loss: 2.4588 0.1281 sec/batch\nEpoch 2/10 Iteration 262/1780 Training loss: 2.4575 0.1231 sec/batch\nEpoch 2/10 Iteration 263/1780 Training loss: 2.4561 0.1188 sec/batch\nEpoch 2/10 Iteration 264/1780 Training loss: 2.4546 0.1216 sec/batch\nEpoch 2/10 Iteration 265/1780 Training loss: 2.4534 0.1192 sec/batch\nEpoch 2/10 Iteration 266/1780 Training loss: 2.4521 0.1232 sec/batch\nEpoch 2/10 Iteration 267/1780 Training loss: 2.4507 0.1201 sec/batch\nEpoch 2/10 Iteration 268/1780 Training loss: 2.4495 0.1327 sec/batch\nEpoch 2/10 Iteration 269/1780 Training loss: 2.4480 0.1185 sec/batch\nEpoch 2/10 Iteration 270/1780 Training loss: 2.4466 0.1232 sec/batch\nEpoch 2/10 Iteration 271/1780 Training loss: 2.4452 0.1174 sec/batch\nEpoch 2/10 Iteration 272/1780 Training loss: 2.4437 0.1204 sec/batch\nEpoch 2/10 Iteration 273/1780 Training loss: 2.4423 0.1197 sec/batch\nEpoch 2/10 Iteration 274/1780 Training loss: 2.4408 0.1207 sec/batch\nEpoch 2/10 Iteration 275/1780 Training loss: 2.4395 0.1204 sec/batch\nEpoch 2/10 Iteration 276/1780 Training loss: 2.4380 0.1194 sec/batch\nEpoch 2/10 Iteration 277/1780 Training loss: 2.4365 0.1200 sec/batch\nEpoch 2/10 Iteration 278/1780 Training loss: 2.4351 0.1209 sec/batch\nEpoch 2/10 Iteration 279/1780 Training loss: 2.4338 0.1203 sec/batch\nEpoch 2/10 Iteration 280/1780 Training loss: 2.4325 0.1200 sec/batch\nEpoch 2/10 Iteration 281/1780 Training loss: 2.4309 0.1201 sec/batch\nEpoch 2/10 Iteration 282/1780 Training loss: 2.4294 0.1218 sec/batch\nEpoch 2/10 Iteration 283/1780 Training loss: 2.4279 0.1224 sec/batch\nEpoch 2/10 Iteration 284/1780 Training loss: 2.4266 0.1209 sec/batch\nEpoch 2/10 Iteration 285/1780 Training loss: 2.4253 0.1194 sec/batch\nEpoch 2/10 Iteration 286/1780 Training loss: 2.4242 0.1218 sec/batch\nEpoch 2/10 Iteration 287/1780 Training loss: 2.4229 0.1196 sec/batch\nEpoch 2/10 Iteration 288/1780 Training loss: 2.4215 0.1220 sec/batch\nEpoch 2/10 Iteration 289/1780 Training loss: 2.4202 0.1193 sec/batch\nEpoch 2/10 Iteration 290/1780 Training loss: 2.4189 0.1216 sec/batch\nEpoch 2/10 Iteration 291/1780 Training loss: 2.4175 0.1196 sec/batch\nEpoch 2/10 Iteration 292/1780 Training loss: 2.4160 0.1214 sec/batch\nEpoch 2/10 Iteration 293/1780 Training loss: 2.4146 0.1197 sec/batch\nEpoch 2/10 Iteration 294/1780 Training loss: 2.4130 0.1226 sec/batch\nEpoch 2/10 Iteration 295/1780 Training loss: 2.4117 0.1220 sec/batch\nEpoch 2/10 Iteration 296/1780 Training loss: 2.4103 0.1206 sec/batch\nEpoch 2/10 Iteration 297/1780 Training loss: 2.4092 0.1215 sec/batch\nEpoch 2/10 Iteration 298/1780 Training loss: 2.4080 0.1216 sec/batch\nEpoch 2/10 Iteration 299/1780 Training loss: 2.4068 0.1187 sec/batch\nEpoch 2/10 Iteration 300/1780 Training loss: 2.4054 0.1198 sec/batch\nValidation loss: 2.16109 Saving checkpoint!\nEpoch 2/10 Iteration 301/1780 Training loss: 2.4042 0.1188 sec/batch\nEpoch 2/10 Iteration 302/1780 Training loss: 2.4030 0.1222 sec/batch\nEpoch 2/10 Iteration 303/1780 Training loss: 2.4017 0.1224 sec/batch\nEpoch 2/10 Iteration 304/1780 Training loss: 2.4002 0.1229 sec/batch\nEpoch 2/10 
Iteration 305/1780 Training loss: 2.3991 0.1241 sec/batch\nEpoch 2/10 Iteration 306/1780 Training loss: 2.3979 0.1218 sec/batch\nEpoch 2/10 Iteration 307/1780 Training loss: 2.3968 0.1212 sec/batch\nEpoch 2/10 Iteration 308/1780 Training loss: 2.3956 0.1210 sec/batch\nEpoch 2/10 Iteration 309/1780 Training loss: 2.3943 0.1204 sec/batch\nEpoch 2/10 Iteration 310/1780 Training loss: 2.3929 0.1215 sec/batch\nEpoch 2/10 Iteration 311/1780 Training loss: 2.3916 0.1196 sec/batch\nEpoch 2/10 Iteration 312/1780 Training loss: 2.3905 0.1224 sec/batch\nEpoch 2/10 Iteration 313/1780 Training loss: 2.3893 0.1192 sec/batch\nEpoch 2/10 Iteration 314/1780 Training loss: 2.3881 0.1197 sec/batch\nEpoch 2/10 Iteration 315/1780 Training loss: 2.3869 0.1214 sec/batch\nEpoch 2/10 Iteration 316/1780 Training loss: 2.3857 0.1206 sec/batch\nEpoch 2/10 Iteration 317/1780 Training loss: 2.3848 0.1216 sec/batch\nEpoch 2/10 Iteration 318/1780 Training loss: 2.3835 0.1205 sec/batch\nEpoch 2/10 Iteration 319/1780 Training loss: 2.3824 0.1217 sec/batch\nEpoch 2/10 Iteration 320/1780 Training loss: 2.3811 0.1205 sec/batch\nEpoch 2/10 Iteration 321/1780 Training loss: 2.3799 0.1201 sec/batch\nEpoch 2/10 Iteration 322/1780 Training loss: 2.3787 0.1232 sec/batch\nEpoch 2/10 Iteration 323/1780 Training loss: 2.3775 0.1197 sec/batch\nEpoch 2/10 Iteration 324/1780 Training loss: 2.3765 0.1205 sec/batch\nEpoch 2/10 Iteration 325/1780 Training loss: 2.3754 0.1203 sec/batch\nEpoch 2/10 Iteration 326/1780 Training loss: 2.3744 0.1205 sec/batch\nEpoch 2/10 Iteration 327/1780 Training loss: 2.3732 0.1204 sec/batch\nEpoch 2/10 Iteration 328/1780 Training loss: 2.3720 0.1210 sec/batch\nEpoch 2/10 Iteration 329/1780 Training loss: 2.3710 0.1191 sec/batch\nEpoch 2/10 Iteration 330/1780 Training loss: 2.3701 0.1199 sec/batch\nEpoch 2/10 Iteration 331/1780 Training loss: 2.3691 0.1218 sec/batch\nEpoch 2/10 Iteration 332/1780 Training loss: 2.3680 0.1200 sec/batch\nEpoch 2/10 Iteration 333/1780 Training loss: 2.3668 0.1206 sec/batch\nEpoch 2/10 Iteration 334/1780 Training loss: 2.3656 0.1211 sec/batch\nEpoch 2/10 Iteration 335/1780 Training loss: 2.3645 0.1201 sec/batch\nEpoch 2/10 Iteration 336/1780 Training loss: 2.3633 0.1229 sec/batch\nEpoch 2/10 Iteration 337/1780 Training loss: 2.3620 0.1186 sec/batch\nEpoch 2/10 Iteration 338/1780 Training loss: 2.3610 0.1238 sec/batch\nEpoch 2/10 Iteration 339/1780 Training loss: 2.3600 0.1197 sec/batch\nEpoch 2/10 Iteration 340/1780 Training loss: 2.3588 0.1216 sec/batch\nEpoch 2/10 Iteration 341/1780 Training loss: 2.3577 0.1209 sec/batch\nEpoch 2/10 Iteration 342/1780 Training loss: 2.3566 0.1204 sec/batch\nEpoch 2/10 Iteration 343/1780 Training loss: 2.3555 0.1199 sec/batch\nEpoch 2/10 Iteration 344/1780 Training loss: 2.3544 0.1249 sec/batch\nEpoch 2/10 Iteration 345/1780 Training loss: 2.3533 0.1188 sec/batch\nEpoch 2/10 Iteration 346/1780 Training loss: 2.3524 0.1219 sec/batch\nEpoch 2/10 Iteration 347/1780 Training loss: 2.3513 0.1242 sec/batch\nEpoch 2/10 Iteration 348/1780 Training loss: 2.3501 0.1230 sec/batch\nEpoch 2/10 Iteration 349/1780 Training loss: 2.3489 0.1213 sec/batch\nEpoch 2/10 Iteration 350/1780 Training loss: 2.3479 0.1217 sec/batch\nEpoch 2/10 Iteration 351/1780 Training loss: 2.3469 0.1192 sec/batch\nEpoch 2/10 Iteration 352/1780 Training loss: 2.3459 0.1199 sec/batch\nEpoch 2/10 Iteration 353/1780 Training loss: 2.3450 0.1217 sec/batch\nEpoch 2/10 Iteration 354/1780 Training loss: 2.3439 0.1213 sec/batch\nEpoch 2/10 Iteration 355/1780 Training loss: 2.3428 0.1294 
sec/batch

[Per-iteration training log, epochs 2–8 of 10 (iterations 356–1407 of 1780), roughly 0.12 sec/batch throughout. The running average training loss decreases steadily from about 2.34 at iteration 356 to about 1.45 at iteration 1407. Validation loss at each 100-iteration checkpoint ("Saving checkpoint!"):

Iteration  400: 1.97191
Iteration  500: 1.84066
Iteration  600: 1.73093
Iteration  700: 1.65231
Iteration  800: 1.57561
Iteration  900: 1.51374
Iteration 1000: 1.46721
Iteration 1100: 1.43194
Iteration 1200: 1.37934
Iteration 1300: 1.34587
Iteration 1400: 1.3216]
1408/1780 Training loss: 1.4507 0.1235 sec/batch\nEpoch 8/10 Iteration 1409/1780 Training loss: 1.4506 0.1251 sec/batch\nEpoch 8/10 Iteration 1410/1780 Training loss: 1.4504 0.1218 sec/batch\nEpoch 8/10 Iteration 1411/1780 Training loss: 1.4502 0.1239 sec/batch\nEpoch 8/10 Iteration 1412/1780 Training loss: 1.4501 0.1212 sec/batch\nEpoch 8/10 Iteration 1413/1780 Training loss: 1.4502 0.1235 sec/batch\nEpoch 8/10 Iteration 1414/1780 Training loss: 1.4505 0.1242 sec/batch\nEpoch 8/10 Iteration 1415/1780 Training loss: 1.4504 0.1225 sec/batch\nEpoch 8/10 Iteration 1416/1780 Training loss: 1.4503 0.1239 sec/batch\nEpoch 8/10 Iteration 1417/1780 Training loss: 1.4501 0.1255 sec/batch\nEpoch 8/10 Iteration 1418/1780 Training loss: 1.4498 0.1217 sec/batch\nEpoch 8/10 Iteration 1419/1780 Training loss: 1.4499 0.1256 sec/batch\nEpoch 8/10 Iteration 1420/1780 Training loss: 1.4498 0.1245 sec/batch\nEpoch 8/10 Iteration 1421/1780 Training loss: 1.4498 0.1246 sec/batch\nEpoch 8/10 Iteration 1422/1780 Training loss: 1.4496 0.1216 sec/batch\nEpoch 8/10 Iteration 1423/1780 Training loss: 1.4493 0.1225 sec/batch\nEpoch 8/10 Iteration 1424/1780 Training loss: 1.4494 0.1255 sec/batch\nEpoch 9/10 Iteration 1425/1780 Training loss: 1.5353 0.1220 sec/batch\nEpoch 9/10 Iteration 1426/1780 Training loss: 1.4841 0.1218 sec/batch\nEpoch 9/10 Iteration 1427/1780 Training loss: 1.4645 0.1242 sec/batch\nEpoch 9/10 Iteration 1428/1780 Training loss: 1.4598 0.1219 sec/batch\nEpoch 9/10 Iteration 1429/1780 Training loss: 1.4487 0.1245 sec/batch\nEpoch 9/10 Iteration 1430/1780 Training loss: 1.4362 0.1217 sec/batch\nEpoch 9/10 Iteration 1431/1780 Training loss: 1.4347 0.1253 sec/batch\nEpoch 9/10 Iteration 1432/1780 Training loss: 1.4325 0.1219 sec/batch\nEpoch 9/10 Iteration 1433/1780 Training loss: 1.4321 0.1241 sec/batch\nEpoch 9/10 Iteration 1434/1780 Training loss: 1.4305 0.1243 sec/batch\nEpoch 9/10 Iteration 1435/1780 Training loss: 1.4266 0.1241 sec/batch\nEpoch 9/10 Iteration 1436/1780 Training loss: 1.4252 0.1217 sec/batch\nEpoch 9/10 Iteration 1437/1780 Training loss: 1.4243 0.1271 sec/batch\nEpoch 9/10 Iteration 1438/1780 Training loss: 1.4250 0.1232 sec/batch\nEpoch 9/10 Iteration 1439/1780 Training loss: 1.4237 0.1221 sec/batch\nEpoch 9/10 Iteration 1440/1780 Training loss: 1.4216 0.1235 sec/batch\nEpoch 9/10 Iteration 1441/1780 Training loss: 1.4219 0.1228 sec/batch\nEpoch 9/10 Iteration 1442/1780 Training loss: 1.4232 0.1223 sec/batch\nEpoch 9/10 Iteration 1443/1780 Training loss: 1.4232 0.1219 sec/batch\nEpoch 9/10 Iteration 1444/1780 Training loss: 1.4239 0.1281 sec/batch\nEpoch 9/10 Iteration 1445/1780 Training loss: 1.4230 0.1237 sec/batch\nEpoch 9/10 Iteration 1446/1780 Training loss: 1.4234 0.1218 sec/batch\nEpoch 9/10 Iteration 1447/1780 Training loss: 1.4223 0.1256 sec/batch\nEpoch 9/10 Iteration 1448/1780 Training loss: 1.4222 0.1235 sec/batch\nEpoch 9/10 Iteration 1449/1780 Training loss: 1.4222 0.1227 sec/batch\nEpoch 9/10 Iteration 1450/1780 Training loss: 1.4206 0.1216 sec/batch\nEpoch 9/10 Iteration 1451/1780 Training loss: 1.4192 0.1242 sec/batch\nEpoch 9/10 Iteration 1452/1780 Training loss: 1.4198 0.1221 sec/batch\nEpoch 9/10 Iteration 1453/1780 Training loss: 1.4205 0.1231 sec/batch\nEpoch 9/10 Iteration 1454/1780 Training loss: 1.4204 0.1242 sec/batch\nEpoch 9/10 Iteration 1455/1780 Training loss: 1.4202 0.1229 sec/batch\nEpoch 9/10 Iteration 1456/1780 Training loss: 1.4191 0.1219 sec/batch\nEpoch 9/10 Iteration 1457/1780 Training loss: 1.4193 0.1238 sec/batch\nEpoch 9/10 Iteration 
1458/1780 Training loss: 1.4194 0.1247 sec/batch\nEpoch 9/10 Iteration 1459/1780 Training loss: 1.4191 0.1228 sec/batch\nEpoch 9/10 Iteration 1460/1780 Training loss: 1.4188 0.1231 sec/batch\nEpoch 9/10 Iteration 1461/1780 Training loss: 1.4179 0.1235 sec/batch\nEpoch 9/10 Iteration 1462/1780 Training loss: 1.4167 0.1234 sec/batch\nEpoch 9/10 Iteration 1463/1780 Training loss: 1.4153 0.1228 sec/batch\nEpoch 9/10 Iteration 1464/1780 Training loss: 1.4146 0.1217 sec/batch\nEpoch 9/10 Iteration 1465/1780 Training loss: 1.4139 0.1258 sec/batch\nEpoch 9/10 Iteration 1466/1780 Training loss: 1.4143 0.1233 sec/batch\nEpoch 9/10 Iteration 1467/1780 Training loss: 1.4137 0.1252 sec/batch\nEpoch 9/10 Iteration 1468/1780 Training loss: 1.4128 0.1249 sec/batch\nEpoch 9/10 Iteration 1469/1780 Training loss: 1.4131 0.1238 sec/batch\nEpoch 9/10 Iteration 1470/1780 Training loss: 1.4122 0.1247 sec/batch\nEpoch 9/10 Iteration 1471/1780 Training loss: 1.4121 0.1243 sec/batch\nEpoch 9/10 Iteration 1472/1780 Training loss: 1.4117 0.1209 sec/batch\nEpoch 9/10 Iteration 1473/1780 Training loss: 1.4118 0.1266 sec/batch\nEpoch 9/10 Iteration 1474/1780 Training loss: 1.4120 0.1234 sec/batch\nEpoch 9/10 Iteration 1475/1780 Training loss: 1.4115 0.1269 sec/batch\nEpoch 9/10 Iteration 1476/1780 Training loss: 1.4123 0.1232 sec/batch\nEpoch 9/10 Iteration 1477/1780 Training loss: 1.4122 0.1286 sec/batch\nEpoch 9/10 Iteration 1478/1780 Training loss: 1.4125 0.1287 sec/batch\nEpoch 9/10 Iteration 1479/1780 Training loss: 1.4124 0.1227 sec/batch\nEpoch 9/10 Iteration 1480/1780 Training loss: 1.4124 0.1213 sec/batch\nEpoch 9/10 Iteration 1481/1780 Training loss: 1.4128 0.1257 sec/batch\nEpoch 9/10 Iteration 1482/1780 Training loss: 1.4123 0.1241 sec/batch\nEpoch 9/10 Iteration 1483/1780 Training loss: 1.4117 0.1232 sec/batch\nEpoch 9/10 Iteration 1484/1780 Training loss: 1.4122 0.1224 sec/batch\nEpoch 9/10 Iteration 1485/1780 Training loss: 1.4121 0.1224 sec/batch\nEpoch 9/10 Iteration 1486/1780 Training loss: 1.4127 0.1207 sec/batch\nEpoch 9/10 Iteration 1487/1780 Training loss: 1.4132 0.1240 sec/batch\nEpoch 9/10 Iteration 1488/1780 Training loss: 1.4131 0.1215 sec/batch\nEpoch 9/10 Iteration 1489/1780 Training loss: 1.4130 0.1224 sec/batch\nEpoch 9/10 Iteration 1490/1780 Training loss: 1.4131 0.1227 sec/batch\nEpoch 9/10 Iteration 1491/1780 Training loss: 1.4131 0.1268 sec/batch\nEpoch 9/10 Iteration 1492/1780 Training loss: 1.4127 0.1229 sec/batch\nEpoch 9/10 Iteration 1493/1780 Training loss: 1.4128 0.1231 sec/batch\nEpoch 9/10 Iteration 1494/1780 Training loss: 1.4128 0.1236 sec/batch\nEpoch 9/10 Iteration 1495/1780 Training loss: 1.4132 0.1247 sec/batch\nEpoch 9/10 Iteration 1496/1780 Training loss: 1.4134 0.1217 sec/batch\nEpoch 9/10 Iteration 1497/1780 Training loss: 1.4138 0.1244 sec/batch\nEpoch 9/10 Iteration 1498/1780 Training loss: 1.4133 0.1215 sec/batch\nEpoch 9/10 Iteration 1499/1780 Training loss: 1.4131 0.1271 sec/batch\nEpoch 9/10 Iteration 1500/1780 Training loss: 1.4132 0.1231 sec/batch\nValidation loss: 1.29403 Saving checkpoint!\nEpoch 9/10 Iteration 1501/1780 Training loss: 1.4146 0.1211 sec/batch\nEpoch 9/10 Iteration 1502/1780 Training loss: 1.4145 0.1221 sec/batch\nEpoch 9/10 Iteration 1503/1780 Training loss: 1.4140 0.1238 sec/batch\nEpoch 9/10 Iteration 1504/1780 Training loss: 1.4139 0.1218 sec/batch\nEpoch 9/10 Iteration 1505/1780 Training loss: 1.4134 0.1228 sec/batch\nEpoch 9/10 Iteration 1506/1780 Training loss: 1.4134 0.1236 sec/batch\nEpoch 9/10 Iteration 1507/1780 Training loss: 
1.4129 0.1233 sec/batch\nEpoch 9/10 Iteration 1508/1780 Training loss: 1.4128 0.1248 sec/batch\nEpoch 9/10 Iteration 1509/1780 Training loss: 1.4126 0.1222 sec/batch\nEpoch 9/10 Iteration 1510/1780 Training loss: 1.4124 0.1211 sec/batch\nEpoch 9/10 Iteration 1511/1780 Training loss: 1.4121 0.1235 sec/batch\nEpoch 9/10 Iteration 1512/1780 Training loss: 1.4119 0.1201 sec/batch\nEpoch 9/10 Iteration 1513/1780 Training loss: 1.4114 0.1221 sec/batch\nEpoch 9/10 Iteration 1514/1780 Training loss: 1.4114 0.1207 sec/batch\nEpoch 9/10 Iteration 1515/1780 Training loss: 1.4112 0.1237 sec/batch\nEpoch 9/10 Iteration 1516/1780 Training loss: 1.4110 0.1217 sec/batch\nEpoch 9/10 Iteration 1517/1780 Training loss: 1.4107 0.1230 sec/batch\nEpoch 9/10 Iteration 1518/1780 Training loss: 1.4103 0.1231 sec/batch\nEpoch 9/10 Iteration 1519/1780 Training loss: 1.4100 0.1228 sec/batch\nEpoch 9/10 Iteration 1520/1780 Training loss: 1.4100 0.1228 sec/batch\nEpoch 9/10 Iteration 1521/1780 Training loss: 1.4100 0.1254 sec/batch\nEpoch 9/10 Iteration 1522/1780 Training loss: 1.4095 0.1229 sec/batch\nEpoch 9/10 Iteration 1523/1780 Training loss: 1.4092 0.1239 sec/batch\nEpoch 9/10 Iteration 1524/1780 Training loss: 1.4087 0.1210 sec/batch\nEpoch 9/10 Iteration 1525/1780 Training loss: 1.4087 0.1227 sec/batch\nEpoch 9/10 Iteration 1526/1780 Training loss: 1.4086 0.1227 sec/batch\nEpoch 9/10 Iteration 1527/1780 Training loss: 1.4085 0.1252 sec/batch\nEpoch 9/10 Iteration 1528/1780 Training loss: 1.4083 0.1245 sec/batch\nEpoch 9/10 Iteration 1529/1780 Training loss: 1.4081 0.1335 sec/batch\nEpoch 9/10 Iteration 1530/1780 Training loss: 1.4080 0.1237 sec/batch\nEpoch 9/10 Iteration 1531/1780 Training loss: 1.4078 0.1220 sec/batch\nEpoch 9/10 Iteration 1532/1780 Training loss: 1.4078 0.1213 sec/batch\nEpoch 9/10 Iteration 1533/1780 Training loss: 1.4076 0.1215 sec/batch\nEpoch 9/10 Iteration 1534/1780 Training loss: 1.4076 0.1235 sec/batch\nEpoch 9/10 Iteration 1535/1780 Training loss: 1.4072 0.1248 sec/batch\nEpoch 9/10 Iteration 1536/1780 Training loss: 1.4071 0.1215 sec/batch\nEpoch 9/10 Iteration 1537/1780 Training loss: 1.4069 0.1226 sec/batch\nEpoch 9/10 Iteration 1538/1780 Training loss: 1.4067 0.1229 sec/batch\nEpoch 9/10 Iteration 1539/1780 Training loss: 1.4063 0.1254 sec/batch\nEpoch 9/10 Iteration 1540/1780 Training loss: 1.4059 0.1226 sec/batch\nEpoch 9/10 Iteration 1541/1780 Training loss: 1.4058 0.1251 sec/batch\nEpoch 9/10 Iteration 1542/1780 Training loss: 1.4058 0.1242 sec/batch\nEpoch 9/10 Iteration 1543/1780 Training loss: 1.4055 0.1250 sec/batch\nEpoch 9/10 Iteration 1544/1780 Training loss: 1.4055 0.1237 sec/batch\nEpoch 9/10 Iteration 1545/1780 Training loss: 1.4053 0.1282 sec/batch\nEpoch 9/10 Iteration 1546/1780 Training loss: 1.4049 0.1247 sec/batch\nEpoch 9/10 Iteration 1547/1780 Training loss: 1.4043 0.1252 sec/batch\nEpoch 9/10 Iteration 1548/1780 Training loss: 1.4042 0.1225 sec/batch\nEpoch 9/10 Iteration 1549/1780 Training loss: 1.4040 0.1220 sec/batch\nEpoch 9/10 Iteration 1550/1780 Training loss: 1.4036 0.1226 sec/batch\nEpoch 9/10 Iteration 1551/1780 Training loss: 1.4036 0.1226 sec/batch\nEpoch 9/10 Iteration 1552/1780 Training loss: 1.4035 0.1206 sec/batch\nEpoch 9/10 Iteration 1553/1780 Training loss: 1.4032 0.1232 sec/batch\nEpoch 9/10 Iteration 1554/1780 Training loss: 1.4028 0.1234 sec/batch\nEpoch 9/10 Iteration 1555/1780 Training loss: 1.4023 0.1214 sec/batch\nEpoch 9/10 Iteration 1556/1780 Training loss: 1.4020 0.1207 sec/batch\nEpoch 9/10 Iteration 1557/1780 Training loss: 
1.4020 0.1221 sec/batch\nEpoch 9/10 Iteration 1558/1780 Training loss: 1.4019 0.1221 sec/batch\nEpoch 9/10 Iteration 1559/1780 Training loss: 1.4019 0.1257 sec/batch\nEpoch 9/10 Iteration 1560/1780 Training loss: 1.4018 0.1217 sec/batch\nEpoch 9/10 Iteration 1561/1780 Training loss: 1.4020 0.1255 sec/batch\nEpoch 9/10 Iteration 1562/1780 Training loss: 1.4020 0.1208 sec/batch\nEpoch 9/10 Iteration 1563/1780 Training loss: 1.4020 0.1229 sec/batch\nEpoch 9/10 Iteration 1564/1780 Training loss: 1.4018 0.1239 sec/batch\nEpoch 9/10 Iteration 1565/1780 Training loss: 1.4021 0.1219 sec/batch\nEpoch 9/10 Iteration 1566/1780 Training loss: 1.4021 0.1223 sec/batch\nEpoch 9/10 Iteration 1567/1780 Training loss: 1.4020 0.1235 sec/batch\nEpoch 9/10 Iteration 1568/1780 Training loss: 1.4021 0.1228 sec/batch\nEpoch 9/10 Iteration 1569/1780 Training loss: 1.4019 0.1253 sec/batch\nEpoch 9/10 Iteration 1570/1780 Training loss: 1.4020 0.1220 sec/batch\nEpoch 9/10 Iteration 1571/1780 Training loss: 1.4019 0.1231 sec/batch\nEpoch 9/10 Iteration 1572/1780 Training loss: 1.4021 0.1341 sec/batch\nEpoch 9/10 Iteration 1573/1780 Training loss: 1.4021 0.1225 sec/batch\nEpoch 9/10 Iteration 1574/1780 Training loss: 1.4019 0.1217 sec/batch\nEpoch 9/10 Iteration 1575/1780 Training loss: 1.4015 0.1227 sec/batch\nEpoch 9/10 Iteration 1576/1780 Training loss: 1.4013 0.1202 sec/batch\nEpoch 9/10 Iteration 1577/1780 Training loss: 1.4013 0.1273 sec/batch\nEpoch 9/10 Iteration 1578/1780 Training loss: 1.4011 0.1213 sec/batch\nEpoch 9/10 Iteration 1579/1780 Training loss: 1.4011 0.1290 sec/batch\nEpoch 9/10 Iteration 1580/1780 Training loss: 1.4009 0.1201 sec/batch\nEpoch 9/10 Iteration 1581/1780 Training loss: 1.4009 0.1219 sec/batch\nEpoch 9/10 Iteration 1582/1780 Training loss: 1.4008 0.1235 sec/batch\nEpoch 9/10 Iteration 1583/1780 Training loss: 1.4005 0.1212 sec/batch\nEpoch 9/10 Iteration 1584/1780 Training loss: 1.4005 0.1224 sec/batch\nEpoch 9/10 Iteration 1585/1780 Training loss: 1.4006 0.1226 sec/batch\nEpoch 9/10 Iteration 1586/1780 Training loss: 1.4005 0.1227 sec/batch\nEpoch 9/10 Iteration 1587/1780 Training loss: 1.4005 0.1289 sec/batch\nEpoch 9/10 Iteration 1588/1780 Training loss: 1.4004 0.1241 sec/batch\nEpoch 9/10 Iteration 1589/1780 Training loss: 1.4003 0.1219 sec/batch\nEpoch 9/10 Iteration 1590/1780 Training loss: 1.4001 0.1237 sec/batch\nEpoch 9/10 Iteration 1591/1780 Training loss: 1.4002 0.1235 sec/batch\nEpoch 9/10 Iteration 1592/1780 Training loss: 1.4006 0.1215 sec/batch\nEpoch 9/10 Iteration 1593/1780 Training loss: 1.4005 0.1251 sec/batch\nEpoch 9/10 Iteration 1594/1780 Training loss: 1.4004 0.1221 sec/batch\nEpoch 9/10 Iteration 1595/1780 Training loss: 1.4003 0.1227 sec/batch\nEpoch 9/10 Iteration 1596/1780 Training loss: 1.4000 0.1242 sec/batch\nEpoch 9/10 Iteration 1597/1780 Training loss: 1.4001 0.1221 sec/batch\nEpoch 9/10 Iteration 1598/1780 Training loss: 1.4000 0.1211 sec/batch\nEpoch 9/10 Iteration 1599/1780 Training loss: 1.4000 0.1315 sec/batch\nEpoch 9/10 Iteration 1600/1780 Training loss: 1.3999 0.1200 sec/batch\nValidation loss: 1.27288 Saving checkpoint!\nEpoch 9/10 Iteration 1601/1780 Training loss: 1.4005 0.1202 sec/batch\nEpoch 9/10 Iteration 1602/1780 Training loss: 1.4007 0.1246 sec/batch\nEpoch 10/10 Iteration 1603/1780 Training loss: 1.5037 0.1222 sec/batch\nEpoch 10/10 Iteration 1604/1780 Training loss: 1.4527 0.1217 sec/batch\nEpoch 10/10 Iteration 1605/1780 Training loss: 1.4277 0.1252 sec/batch\nEpoch 10/10 Iteration 1606/1780 Training loss: 1.4221 0.1206 
sec/batch\nEpoch 10/10 Iteration 1607/1780 Training loss: 1.4116 0.1220 sec/batch\nEpoch 10/10 Iteration 1608/1780 Training loss: 1.3979 0.1219 sec/batch\nEpoch 10/10 Iteration 1609/1780 Training loss: 1.3973 0.1243 sec/batch\nEpoch 10/10 Iteration 1610/1780 Training loss: 1.3954 0.1233 sec/batch\nEpoch 10/10 Iteration 1611/1780 Training loss: 1.3955 0.1269 sec/batch\nEpoch 10/10 Iteration 1612/1780 Training loss: 1.3939 0.1219 sec/batch\nEpoch 10/10 Iteration 1613/1780 Training loss: 1.3906 0.1257 sec/batch\nEpoch 10/10 Iteration 1614/1780 Training loss: 1.3894 0.1240 sec/batch\nEpoch 10/10 Iteration 1615/1780 Training loss: 1.3886 0.1228 sec/batch\nEpoch 10/10 Iteration 1616/1780 Training loss: 1.3897 0.1220 sec/batch\nEpoch 10/10 Iteration 1617/1780 Training loss: 1.3882 0.1248 sec/batch\nEpoch 10/10 Iteration 1618/1780 Training loss: 1.3860 0.1238 sec/batch\nEpoch 10/10 Iteration 1619/1780 Training loss: 1.3862 0.1225 sec/batch\nEpoch 10/10 Iteration 1620/1780 Training loss: 1.3878 0.1222 sec/batch\nEpoch 10/10 Iteration 1621/1780 Training loss: 1.3873 0.1219 sec/batch\nEpoch 10/10 Iteration 1622/1780 Training loss: 1.3886 0.1232 sec/batch\nEpoch 10/10 Iteration 1623/1780 Training loss: 1.3874 0.1277 sec/batch\nEpoch 10/10 Iteration 1624/1780 Training loss: 1.3876 0.1213 sec/batch\nEpoch 10/10 Iteration 1625/1780 Training loss: 1.3860 0.1217 sec/batch\nEpoch 10/10 Iteration 1626/1780 Training loss: 1.3856 0.1226 sec/batch\nEpoch 10/10 Iteration 1627/1780 Training loss: 1.3855 0.1219 sec/batch\nEpoch 10/10 Iteration 1628/1780 Training loss: 1.3835 0.1222 sec/batch\nEpoch 10/10 Iteration 1629/1780 Training loss: 1.3821 0.1256 sec/batch\nEpoch 10/10 Iteration 1630/1780 Training loss: 1.3825 0.1217 sec/batch\nEpoch 10/10 Iteration 1631/1780 Training loss: 1.3826 0.1251 sec/batch\nEpoch 10/10 Iteration 1632/1780 Training loss: 1.3828 0.1245 sec/batch\nEpoch 10/10 Iteration 1633/1780 Training loss: 1.3823 0.1274 sec/batch\nEpoch 10/10 Iteration 1634/1780 Training loss: 1.3810 0.1231 sec/batch\nEpoch 10/10 Iteration 1635/1780 Training loss: 1.3813 0.1290 sec/batch\nEpoch 10/10 Iteration 1636/1780 Training loss: 1.3817 0.1234 sec/batch\nEpoch 10/10 Iteration 1637/1780 Training loss: 1.3814 0.1252 sec/batch\nEpoch 10/10 Iteration 1638/1780 Training loss: 1.3810 0.1226 sec/batch\nEpoch 10/10 Iteration 1639/1780 Training loss: 1.3801 0.1261 sec/batch\nEpoch 10/10 Iteration 1640/1780 Training loss: 1.3790 0.1215 sec/batch\nEpoch 10/10 Iteration 1641/1780 Training loss: 1.3775 0.1235 sec/batch\nEpoch 10/10 Iteration 1642/1780 Training loss: 1.3768 0.1250 sec/batch\nEpoch 10/10 Iteration 1643/1780 Training loss: 1.3763 0.1233 sec/batch\nEpoch 10/10 Iteration 1644/1780 Training loss: 1.3766 0.1223 sec/batch\nEpoch 10/10 Iteration 1645/1780 Training loss: 1.3763 0.1227 sec/batch\nEpoch 10/10 Iteration 1646/1780 Training loss: 1.3757 0.1231 sec/batch\nEpoch 10/10 Iteration 1647/1780 Training loss: 1.3760 0.1232 sec/batch\nEpoch 10/10 Iteration 1648/1780 Training loss: 1.3749 0.1222 sec/batch\nEpoch 10/10 Iteration 1649/1780 Training loss: 1.3745 0.1225 sec/batch\nEpoch 10/10 Iteration 1650/1780 Training loss: 1.3739 0.1226 sec/batch\nEpoch 10/10 Iteration 1651/1780 Training loss: 1.3737 0.1242 sec/batch\nEpoch 10/10 Iteration 1652/1780 Training loss: 1.3741 0.1255 sec/batch\nEpoch 10/10 Iteration 1653/1780 Training loss: 1.3734 0.1230 sec/batch\nEpoch 10/10 Iteration 1654/1780 Training loss: 1.3742 0.1211 sec/batch\nEpoch 10/10 Iteration 1655/1780 Training loss: 1.3738 0.1227 sec/batch\nEpoch 10/10 
Iteration 1656/1780 Training loss: 1.3740 0.1254 sec/batch\nEpoch 10/10 Iteration 1657/1780 Training loss: 1.3737 0.1261 sec/batch\nEpoch 10/10 Iteration 1658/1780 Training loss: 1.3739 0.1222 sec/batch\nEpoch 10/10 Iteration 1659/1780 Training loss: 1.3742 0.1254 sec/batch\nEpoch 10/10 Iteration 1660/1780 Training loss: 1.3737 0.1211 sec/batch\nEpoch 10/10 Iteration 1661/1780 Training loss: 1.3733 0.1233 sec/batch\nEpoch 10/10 Iteration 1662/1780 Training loss: 1.3740 0.1232 sec/batch\nEpoch 10/10 Iteration 1663/1780 Training loss: 1.3740 0.1230 sec/batch\nEpoch 10/10 Iteration 1664/1780 Training loss: 1.3747 0.1234 sec/batch\nEpoch 10/10 Iteration 1665/1780 Training loss: 1.3751 0.1225 sec/batch\nEpoch 10/10 Iteration 1666/1780 Training loss: 1.3752 0.1269 sec/batch\nEpoch 10/10 Iteration 1667/1780 Training loss: 1.3751 0.1221 sec/batch\nEpoch 10/10 Iteration 1668/1780 Training loss: 1.3751 0.1244 sec/batch\nEpoch 10/10 Iteration 1669/1780 Training loss: 1.3752 0.1228 sec/batch\nEpoch 10/10 Iteration 1670/1780 Training loss: 1.3748 0.1214 sec/batch\nEpoch 10/10 Iteration 1671/1780 Training loss: 1.3748 0.1236 sec/batch\nEpoch 10/10 Iteration 1672/1780 Training loss: 1.3747 0.1221 sec/batch\nEpoch 10/10 Iteration 1673/1780 Training loss: 1.3751 0.1268 sec/batch\nEpoch 10/10 Iteration 1674/1780 Training loss: 1.3753 0.1213 sec/batch\nEpoch 10/10 Iteration 1675/1780 Training loss: 1.3758 0.1251 sec/batch\nEpoch 10/10 Iteration 1676/1780 Training loss: 1.3754 0.1224 sec/batch\nEpoch 10/10 Iteration 1677/1780 Training loss: 1.3753 0.1224 sec/batch\nEpoch 10/10 Iteration 1678/1780 Training loss: 1.3753 0.1225 sec/batch\nEpoch 10/10 Iteration 1679/1780 Training loss: 1.3751 0.1225 sec/batch\nEpoch 10/10 Iteration 1680/1780 Training loss: 1.3749 0.1229 sec/batch\nEpoch 10/10 Iteration 1681/1780 Training loss: 1.3742 0.1252 sec/batch\nEpoch 10/10 Iteration 1682/1780 Training loss: 1.3741 0.1226 sec/batch\nEpoch 10/10 Iteration 1683/1780 Training loss: 1.3736 0.1255 sec/batch\nEpoch 10/10 Iteration 1684/1780 Training loss: 1.3736 0.1217 sec/batch\nEpoch 10/10 Iteration 1685/1780 Training loss: 1.3729 0.1251 sec/batch\nEpoch 10/10 Iteration 1686/1780 Training loss: 1.3728 0.1218 sec/batch\nEpoch 10/10 Iteration 1687/1780 Training loss: 1.3725 0.1235 sec/batch\nEpoch 10/10 Iteration 1688/1780 Training loss: 1.3723 0.1215 sec/batch\nEpoch 10/10 Iteration 1689/1780 Training loss: 1.3720 0.1262 sec/batch\nEpoch 10/10 Iteration 1690/1780 Training loss: 1.3716 0.1229 sec/batch\nEpoch 10/10 Iteration 1691/1780 Training loss: 1.3711 0.1232 sec/batch\nEpoch 10/10 Iteration 1692/1780 Training loss: 1.3711 0.1215 sec/batch\nEpoch 10/10 Iteration 1693/1780 Training loss: 1.3708 0.1228 sec/batch\nEpoch 10/10 Iteration 1694/1780 Training loss: 1.3705 0.1233 sec/batch\nEpoch 10/10 Iteration 1695/1780 Training loss: 1.3702 0.1253 sec/batch\nEpoch 10/10 Iteration 1696/1780 Training loss: 1.3699 0.1233 sec/batch\nEpoch 10/10 Iteration 1697/1780 Training loss: 1.3696 0.1231 sec/batch\nEpoch 10/10 Iteration 1698/1780 Training loss: 1.3695 0.1218 sec/batch\nEpoch 10/10 Iteration 1699/1780 Training loss: 1.3695 0.1242 sec/batch\nEpoch 10/10 Iteration 1700/1780 Training loss: 1.3691 0.1220 sec/batch\nValidation loss: 1.25628 Saving checkpoint!\nEpoch 10/10 Iteration 1701/1780 Training loss: 1.3703 0.1237 sec/batch\nEpoch 10/10 Iteration 1702/1780 Training loss: 1.3699 0.1257 sec/batch\nEpoch 10/10 Iteration 1703/1780 Training loss: 1.3698 0.1244 sec/batch\nEpoch 10/10 Iteration 1704/1780 Training loss: 1.3697 0.1210 
sec/batch\nEpoch 10/10 Iteration 1705/1780 Training loss: 1.3696 0.1271 sec/batch\nEpoch 10/10 Iteration 1706/1780 Training loss: 1.3695 0.1220 sec/batch\nEpoch 10/10 Iteration 1707/1780 Training loss: 1.3693 0.1230 sec/batch\nEpoch 10/10 Iteration 1708/1780 Training loss: 1.3691 0.1214 sec/batch\nEpoch 10/10 Iteration 1709/1780 Training loss: 1.3691 0.1233 sec/batch\nEpoch 10/10 Iteration 1710/1780 Training loss: 1.3690 0.1252 sec/batch\nEpoch 10/10 Iteration 1711/1780 Training loss: 1.3689 0.1254 sec/batch\nEpoch 10/10 Iteration 1712/1780 Training loss: 1.3689 0.1226 sec/batch\nEpoch 10/10 Iteration 1713/1780 Training loss: 1.3688 0.1226 sec/batch\nEpoch 10/10 Iteration 1714/1780 Training loss: 1.3686 0.1216 sec/batch\nEpoch 10/10 Iteration 1715/1780 Training loss: 1.3684 0.1223 sec/batch\nEpoch 10/10 Iteration 1716/1780 Training loss: 1.3683 0.1222 sec/batch\nEpoch 10/10 Iteration 1717/1780 Training loss: 1.3679 0.1280 sec/batch\nEpoch 10/10 Iteration 1718/1780 Training loss: 1.3676 0.1235 sec/batch\nEpoch 10/10 Iteration 1719/1780 Training loss: 1.3675 0.1218 sec/batch\nEpoch 10/10 Iteration 1720/1780 Training loss: 1.3675 0.1205 sec/batch\nEpoch 10/10 Iteration 1721/1780 Training loss: 1.3673 0.1237 sec/batch\nEpoch 10/10 Iteration 1722/1780 Training loss: 1.3672 0.1234 sec/batch\nEpoch 10/10 Iteration 1723/1780 Training loss: 1.3670 0.1233 sec/batch\nEpoch 10/10 Iteration 1724/1780 Training loss: 1.3666 0.1210 sec/batch\nEpoch 10/10 Iteration 1725/1780 Training loss: 1.3661 0.1220 sec/batch\nEpoch 10/10 Iteration 1726/1780 Training loss: 1.3661 0.1216 sec/batch\nEpoch 10/10 Iteration 1727/1780 Training loss: 1.3660 0.1231 sec/batch\nEpoch 10/10 Iteration 1728/1780 Training loss: 1.3656 0.1217 sec/batch\nEpoch 10/10 Iteration 1729/1780 Training loss: 1.3656 0.1358 sec/batch\nEpoch 10/10 Iteration 1730/1780 Training loss: 1.3655 0.1230 sec/batch\nEpoch 10/10 Iteration 1731/1780 Training loss: 1.3653 0.1226 sec/batch\nEpoch 10/10 Iteration 1732/1780 Training loss: 1.3650 0.1224 sec/batch\nEpoch 10/10 Iteration 1733/1780 Training loss: 1.3645 0.1263 sec/batch\nEpoch 10/10 Iteration 1734/1780 Training loss: 1.3642 0.1268 sec/batch\nEpoch 10/10 Iteration 1735/1780 Training loss: 1.3642 0.1247 sec/batch\nEpoch 10/10 Iteration 1736/1780 Training loss: 1.3642 0.1221 sec/batch\nEpoch 10/10 Iteration 1737/1780 Training loss: 1.3641 0.1220 sec/batch\nEpoch 10/10 Iteration 1738/1780 Training loss: 1.3641 0.1220 sec/batch\nEpoch 10/10 Iteration 1739/1780 Training loss: 1.3642 0.1242 sec/batch\nEpoch 10/10 Iteration 1740/1780 Training loss: 1.3642 0.1230 sec/batch\nEpoch 10/10 Iteration 1741/1780 Training loss: 1.3641 0.1222 sec/batch\nEpoch 10/10 Iteration 1742/1780 Training loss: 1.3641 0.1229 sec/batch\nEpoch 10/10 Iteration 1743/1780 Training loss: 1.3644 0.1305 sec/batch\nEpoch 10/10 Iteration 1744/1780 Training loss: 1.3643 0.1230 sec/batch\nEpoch 10/10 Iteration 1745/1780 Training loss: 1.3642 0.1237 sec/batch\nEpoch 10/10 Iteration 1746/1780 Training loss: 1.3644 0.1235 sec/batch\nEpoch 10/10 Iteration 1747/1780 Training loss: 1.3643 0.1240 sec/batch\nEpoch 10/10 Iteration 1748/1780 Training loss: 1.3643 0.1214 sec/batch\nEpoch 10/10 Iteration 1749/1780 Training loss: 1.3643 0.1250 sec/batch\nEpoch 10/10 Iteration 1750/1780 Training loss: 1.3644 0.1210 sec/batch\nEpoch 10/10 Iteration 1751/1780 Training loss: 1.3644 0.1213 sec/batch\nEpoch 10/10 Iteration 1752/1780 Training loss: 1.3643 0.1221 sec/batch\nEpoch 10/10 Iteration 1753/1780 Training loss: 1.3640 0.1228 sec/batch\nEpoch 10/10 
Iteration 1754/1780 Training loss: 1.3637 0.1214 sec/batch\nEpoch 10/10 Iteration 1755/1780 Training loss: 1.3637 0.1229 sec/batch\nEpoch 10/10 Iteration 1756/1780 Training loss: 1.3636 0.1205 sec/batch\nEpoch 10/10 Iteration 1757/1780 Training loss: 1.3635 0.1220 sec/batch\nEpoch 10/10 Iteration 1758/1780 Training loss: 1.3635 0.1227 sec/batch\nEpoch 10/10 Iteration 1759/1780 Training loss: 1.3634 0.1219 sec/batch\nEpoch 10/10 Iteration 1760/1780 Training loss: 1.3634 0.1237 sec/batch\nEpoch 10/10 Iteration 1761/1780 Training loss: 1.3630 0.1224 sec/batch\nEpoch 10/10 Iteration 1762/1780 Training loss: 1.3631 0.1231 sec/batch\nEpoch 10/10 Iteration 1763/1780 Training loss: 1.3633 0.1252 sec/batch\nEpoch 10/10 Iteration 1764/1780 Training loss: 1.3632 0.1230 sec/batch\nEpoch 10/10 Iteration 1765/1780 Training loss: 1.3631 0.1226 sec/batch\nEpoch 10/10 Iteration 1766/1780 Training loss: 1.3631 0.1220 sec/batch\nEpoch 10/10 Iteration 1767/1780 Training loss: 1.3630 0.1261 sec/batch\nEpoch 10/10 Iteration 1768/1780 Training loss: 1.3630 0.1215 sec/batch\nEpoch 10/10 Iteration 1769/1780 Training loss: 1.3630 0.1260 sec/batch\nEpoch 10/10 Iteration 1770/1780 Training loss: 1.3634 0.1234 sec/batch\nEpoch 10/10 Iteration 1771/1780 Training loss: 1.3633 0.1226 sec/batch\nEpoch 10/10 Iteration 1772/1780 Training loss: 1.3633 0.1212 sec/batch\nEpoch 10/10 Iteration 1773/1780 Training loss: 1.3631 0.1219 sec/batch\nEpoch 10/10 Iteration 1774/1780 Training loss: 1.3629 0.1213 sec/batch\nEpoch 10/10 Iteration 1775/1780 Training loss: 1.3630 0.1227 sec/batch\nEpoch 10/10 Iteration 1776/1780 Training loss: 1.3629 0.1212 sec/batch\nEpoch 10/10 Iteration 1777/1780 Training loss: 1.3630 0.1228 sec/batch\nEpoch 10/10 Iteration 1778/1780 Training loss: 1.3627 0.1205 sec/batch\nEpoch 10/10 Iteration 1779/1780 Training loss: 1.3625 0.1228 sec/batch\nEpoch 10/10 Iteration 1780/1780 Training loss: 1.3626 0.1239 sec/batch\nValidation loss: 1.24267 Saving checkpoint!\n"
],
[
"tf.train.get_checkpoint_state('checkpoints/anna')",
"_____no_output_____"
]
],
[
[
"## Sampling\n\nNow that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.\n\nThe network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.\n\n",
"_____no_output_____"
]
],
[
[
"def pick_top_n(preds, vocab_size, top_n=5):\n p = np.squeeze(preds)\n p[np.argsort(p)[:-top_n]] = 0\n p = p / np.sum(p)\n c = np.random.choice(vocab_size, 1, p=p)[0]\n return c",
"_____no_output_____"
],
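[
"# Hedged illustration, added for this write-up (not part of the original notebook):\n# a quick sanity check of pick_top_n on a made-up prediction vector.\n# With top_n=3, only the three most likely indices (1, 3 and 5 here) can ever be drawn.\nimport numpy as np\n\nfake_preds = np.array([[0.05, 0.4, 0.08, 0.3, 0.02, 0.15]])\nfor _ in range(5):\n    print(pick_top_n(fake_preds, vocab_size=6, top_n=3))",
"_____no_output_____"
],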
[
"def sample(checkpoint, n_samples, lstm_size, vocab_size, prime=\"The \"):\n prime = \"Far\"\n samples = [c for c in prime]\n model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)\n saver = tf.train.Saver()\n with tf.Session() as sess:\n saver.restore(sess, checkpoint)\n new_state = sess.run(model.initial_state)\n for c in prime:\n x = np.zeros((1, 1))\n x[0,0] = vocab_to_int[c]\n feed = {model.inputs: x,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n preds, new_state = sess.run([model.preds, model.final_state], \n feed_dict=feed)\n\n c = pick_top_n(preds, len(vocab))\n samples.append(int_to_vocab[c])\n\n for i in range(n_samples):\n x[0,0] = c\n feed = {model.inputs: x,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n preds, new_state = sess.run([model.preds, model.final_state], \n feed_dict=feed)\n\n c = pick_top_n(preds, len(vocab))\n samples.append(int_to_vocab[c])\n \n return ''.join(samples)",
"_____no_output_____"
],
[
"checkpoint = \"checkpoints/anna/i3560_l512_1.122.ckpt\"\nsamp = sample(checkpoint, 2000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)",
"Farlathit that if had so\nlike it that it were. He could not trouble to his wife, and there was\nanything in them of the side of his weaky in the creature at his forteren\nto him.\n\n\"What is it? I can't bread to those,\" said Stepan Arkadyevitch. \"It's not\nmy children, and there is an almost this arm, true it mays already,\nand tell you what I have say to you, and was not looking at the peasant,\nwhy is, I don't know him out, and she doesn't speak to me immediately, as\nyou would say the countess and the more frest an angelembre, and time and\nthings's silent, but I was not in my stand that is in my head. But if he\nsay, and was so feeling with his soul. A child--in his soul of his\nsoul of his soul. He should not see that any of that sense of. Here he\nhad not been so composed and to speak for as in a whole picture, but\nall the setting and her excellent and society, who had been delighted\nand see to anywing had been being troed to thousand words on them,\nwe liked him.\n\nThat set in her money at the table, he came into the party. The capable\nof his she could not be as an old composure.\n\n\"That's all something there will be down becime by throe is\nsuch a silent, as in a countess, I should state it out and divorct.\nThe discussion is not for me. I was that something was simply they are\nall three manshess of a sensitions of mind it all.\"\n\n\"No,\" he thought, shouted and lifting his soul. \"While it might see your\nhonser and she, I could burst. And I had been a midelity. And I had a\nmarnief are through the countess,\" he said, looking at him, a chosing\nwhich they had been carried out and still solied, and there was a sen that\nwas to be completely, and that this matter of all the seconds of it, and\na concipation were to her husband, who came up and conscaously, that he\nwas not the station. All his fourse she was always at the country,,\nto speak oft, and though they were to hear the delightful throom and\nwhether they came towards the morning, and his living and a coller and\nhold--the children. \n"
],
[
"checkpoint = \"checkpoints/anna/i200_l512_2.432.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)",
"Farnt him oste wha sorind thans tout thint asd an sesand an hires on thime sind thit aled, ban thand and out hore as the ter hos ton ho te that, was tis tart al the hand sostint him sore an tit an son thes, win he se ther san ther hher tas tarereng,.\n\nAnl at an ades in ond hesiln, ad hhe torers teans, wast tar arering tho this sos alten sorer has hhas an siton ther him he had sin he ard ate te anling the sosin her ans and\narins asd and ther ale te tot an tand tanginge wath and ho ald, so sot th asend sat hare sother horesinnd, he hesense wing ante her so tith tir sherinn, anded and to the toul anderin he sorit he torsith she se atere an ting ot hand and thit hhe so the te wile har\nens ont in the sersise, and we he seres tar aterer, to ato tat or has he he wan ton here won and sen heren he sosering, to to theer oo adent har herere the wosh oute, was serild ward tous hed astend..\n\nI's sint on alt in har tor tit her asd hade shithans ored he talereng an soredendere tim tot hees. Tise sor and \n"
],
[
"checkpoint = \"checkpoints/anna/i600_l512_1.750.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)",
"Fard as astice her said he celatice of to seress in the raice, and to be the some and sere allats to that said to that the sark and a cast a the wither ald the pacinesse of her had astition, he said to the sount as she west at hissele. Af the cond it he was a fact onthis astisarianing.\n\n\n\"Or a ton to to be that's a more at aspestale as the sont of anstiring as\nthours and trey.\n\nThe same wo dangring the\nraterst, who sore and somethy had ast out an of his book. \"We had's beane were that, and a morted a thay he had to tere. Then to\nher homent andertersed his his ancouted to the pirsted, the soution for of the pirsice inthirgest and stenciol, with the hard and and\na colrice of to be oneres,\nthe song to this anderssad.\nThe could ounterss the said to serom of\nsoment a carsed of sheres of she\ntorded\nhar and want in their of hould, but\nher told in that in he tad a the same to her. Serghing an her has and with the seed, and the camt ont his about of the\nsail, the her then all houg ant or to hus to \n"
],
[
"checkpoint = \"checkpoints/anna/i1000_l512_1.484.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)",
"Farrat, his felt has at it.\n\n\"When the pose ther hor exceed\nto his sheant was,\" weat a sime of his sounsed. The coment and the facily that which had began terede a marilicaly whice whether the pose of his hand, at she was alligated herself the same on she had to\ntaiking to his forthing and streath how to hand\nbegan in a lang at some at it, this he cholded not set all her. \"Wo love that is setthing. Him anstering as seen that.\"\n\n\"Yes in the man that say the mare a crances is it?\" said Sergazy Ivancatching. \"You doon think were somether is ifficult of a mone of\nthough the most at the countes that the\nmean on the come to say the most, to\nhis feesing of\na man she, whilo he\nsained and well, that he would still at to said. He wind at his for the sore in the most\nof hoss and almoved to see him. They have betine the sumper into at he his stire, and what he was that at the so steate of the\nsound, and shin should have a geest of shall feet on the conderation to she had been at that imporsing the dre\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a0886dac01acacd23ffddc0fc8fb84050f18ee4
| 630,095 |
ipynb
|
Jupyter Notebook
|
labs24_notebooks/daily_reports/Daily_Report_Calculation.ipynb
|
Lambda-School-Labs/sftma-data-analysis-ds
|
844225ab99c768506503d3817fb6077358518430
|
[
"MIT"
] | null | null | null |
labs24_notebooks/daily_reports/Daily_Report_Calculation.ipynb
|
Lambda-School-Labs/sftma-data-analysis-ds
|
844225ab99c768506503d3817fb6077358518430
|
[
"MIT"
] | 15 |
2020-05-18T15:25:54.000Z
|
2020-06-18T20:21:48.000Z
|
labs24_notebooks/daily_reports/Daily_Report_Calculation.ipynb
|
Lambda-School-Labs/sftma-data-analysis-ds
|
844225ab99c768506503d3817fb6077358518430
|
[
"MIT"
] | 3 |
2020-03-26T22:01:07.000Z
|
2020-06-26T04:06:32.000Z
| 99.478213 | 234,622 | 0.743437 |
[
[
[
"# Exploring data to calculate bunches and gaps\n\nThis first half of this notebook is the exploratory code I used to find this method in the first place. The second half will be a cleaned version that can be run on its own.",
"_____no_output_____"
],
[
"## Initialize database connection\n\nSo I can load the data I'll need",
"_____no_output_____"
]
],
[
[
"import getpass\nimport psycopg2 as pg\nimport pandas as pd\nimport pandas.io.sql as sqlio\nimport plotly.express as px",
"/usr/local/lib/python3.6/dist-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use \"pip install psycopg2-binary\" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.\n \"\"\")\n"
],
[
"# Enter database credentials. Requires you to paste in the user and\n# password so it isn't saved in the notebook file\nprint(\"Enter database username:\")\nuser = getpass.getpass()\nprint(\"Enter database password:\")\npassword = getpass.getpass()\n\ncreds = {\n 'user': user,\n 'password': password,\n 'host': \"lambdalabs24sfmta.cykkiwxbfvpg.us-east-1.rds.amazonaws.com\",\n 'dbname': \"historicalTransitData\"\n}\n\n# Set up connection to database\ncnx = pg.connect(**creds)\ncursor = cnx.cursor()\n\nprint('\\nDatabase connection successful')",
"Enter database username:\n··········\nEnter database password:\n··········\n\nDatabase connection successful\n"
]
],
[
[
"## Load route configuration and schedule data using custom classes\n\nAllows me to build on previous work",
"_____no_output_____"
]
],
[
[
"# Change this one cell and run all to change to a different route/day\nrid = \"1\"\nbegin = \"2020/6/1 07:00:00\"\nend = \"2020/6/2 07:00:00\" # non-inclusive",
"_____no_output_____"
],
[
"# Load in the other classes I wrote (I uploaded class files to colab manually for this)\nfrom schedule import Schedule\nfrom route import Route\n\n# Day is not working right. We currently have the same data for all days available, so this will still work\n# I will investigate why my class isn't pulling data right later, it worked before...\nday = \"2020-5-20\"\n\nschedule = Schedule(rid, day, cnx)\nroute = Route(rid, day, cnx)",
"WARNING: 2 unused sub-paths\n"
],
[
"# One thing I was after there\nschedule.inbound_table.head(10)",
"_____no_output_____"
],
[
"# I didn't have the stops functionality complete in my class, so I'll write it here\n# Read in route configuration data\nstops = pd.DataFrame(route.route_data['stop'])\ndirections = pd.DataFrame(route.route_data['direction'])\n\n# Change stop arrays to just the list of numbers\nfor i in range(len(directions)):\n directions.at[i, 'stop'] = [s['tag'] for s in directions.at[i, 'stop']]\n\n# Find which stops are inbound or outbound\ninbound = []\nfor stop_list in directions[directions['name'] == \"Inbound\"]['stop']:\n for stop in stop_list:\n if stop not in inbound:\n inbound.append(stop)\n\noutbound = []\nfor stop_list in directions[directions['name'] == \"Outbound\"]['stop']:\n for stop in stop_list:\n if stop not in inbound:\n outbound.append(stop)\n\n# Label each stop as inbound or outbound\nstops['direction'] = ['none'] * len(stops)\nfor i in range(len(stops)):\n if stops.at[i, 'tag'] in inbound:\n stops.at[i, 'direction'] = 'inbound'\n elif stops.at[i, 'tag'] in outbound:\n stops.at[i, 'direction'] = 'outbound'\n\n# Convert from string to float\nstops['lat'] = stops['lat'].astype(float)\nstops['lon'] = stops['lon'].astype(float)\n\nstops.head()",
"_____no_output_____"
],
[
"# Combine with the schedule class\n# Add a boolean that is true if the stop is also in the schedule table (not all stops get scheduled)\nscheduled_stops = list(schedule.inbound_table) + list(schedule.outbound_table)\nstops['scheduled'] = [(tag in scheduled_stops) for tag in stops['tag']]\n\nstops.head()",
"_____no_output_____"
]
],
[
[
"## Load bus data for the day\n\nI'm directly accessing the database for this",
"_____no_output_____"
]
],
[
[
"# Build query to select location data\nquery = f\"\"\"\n SELECT *\n FROM locations\n WHERE rid = '{rid}' AND\n timestamp > '{begin}'::TIMESTAMP AND\n timestamp < '{end}'::TIMESTAMP\n ORDER BY id;\n\"\"\"\n\n# read the query directly into pandas\nlocations = sqlio.read_sql_query(query, cnx)\nlocations.head()",
"_____no_output_____"
],
[
"# Convert those UTC timestamps to local PST by subtracting 7 hours\nlocations['timestamp'] = locations['timestamp'] - pd.Timedelta(hours=7)\nlocations.head()",
"_____no_output_____"
],
[
"locations.tail()",
"_____no_output_____"
]
],
[
[
"## Method exploration: bus trips\n\nCan we break bus location data into trips? And will it match the number of scheduled trips?\n\n### Conclusion\n\nAfter exploring with the code below, I can't use trips in the way I had hoped. There is too much inconsistency, it was actually hard to find one that showed data for the entire inbound or outbound trip. And of course the number of observed trips doesn't match the number of scheduled trips.",
"_____no_output_____"
]
],
[
[
"print(\"Number of inbound trips:\", len(schedule.inbound_table))\nprint(\"Number of outbound trips:\", len(schedule.outbound_table))",
"Number of inbound trips: 158\nNumber of outbound trips: 158\n"
],
[
"# Remove all rows where direction is empty (I may review this step later)\nlocations_cleaned = locations[~pd.isna(locations['direction'])]",
"_____no_output_____"
],
[
"# Start simple, use just 1 vid\nlocations['vid'].value_counts()[:5]",
"_____no_output_____"
],
[
"# trips will be stored as a list of numbers, row indices matching the above table\ntrips = pd.DataFrame(columns=['direction', 'vid', 'start', 'ids', 'count'])\n\n# perform search separately for each vid\nfor vid in locations_cleaned['vid'].unique():\n # filter to just that vid\n df = locations_cleaned[locations_cleaned['vid'] == vid]\n\n # Start with the first direction value\n direction = df.at[df.index[0], 'direction']\n current_trip = []\n for i in df.index:\n if df.at[i, 'direction'] == direction:\n # same direction, add to current trip\n current_trip.append(i)\n else:\n # changed direction, append current trip to the dataframe of trips\n trips = trips.append({'direction': direction, \n 'vid': vid,\n 'start': current_trip[0], \n 'ids': current_trip,\n 'count': len(current_trip)}, ignore_index=True)\n \n # and reset variables for the next one\n current_trip = [i]\n direction = df.at[i, 'direction']\n\n# Sort trips to run chronologically\ntrips = trips.sort_values('start').reset_index(drop=True)\n\nprint(trips.shape)\ntrips.head()",
"(286, 5)\n"
]
],
[
[
"I saw at the beginning of this exploration that there were 316 trips on the schedule. So 286 found doesn't cover all of it. Additionally some trips have few data points and will be cut short.",
"_____no_output_____"
]
],
[
[
"# needed to filter properly\nlocations_cleaned = locations_cleaned.reset_index()",
"_____no_output_____"
],
[
"# Plot a trip on the map to explore visually\n\n# plot the route path\nfig = px.line_mapbox(route.path_coords, lat=0, lon=1)\n\n# filter to just 1 trip\nfiltered = [i in trips.at[102, 'ids'] for i in locations_cleaned['index']]\n\n# plot the trip dots\nfig.add_trace(px.scatter_mapbox(locations_cleaned[filtered], lat='latitude', \n lon='longitude', text='timestamp').data[0])\n\nfig.update_layout(title=f\"Route {rid}\", mapbox_style=\"stamen-terrain\", mapbox_zoom=12)\nfig.show()",
"_____no_output_____"
]
],
[
[
"## Assigning Stops\n\nThe core objective here is to figure out when busses are at each stop. Since tracking this based on bus location is so inconsistent, I'm going to try turning it around.\n\nI will calculate which stop each bus is at for each location report, then I will switch viewpoints and analyse what times each stop sees a bus.",
"_____no_output_____"
]
],
[
[
"# I'll need an efficient method to get distance between lat/lon coordinates\n\n# Austie's notebooks already had this:\nfrom math import sqrt, cos\n\ndef fcc_projection(loc1, loc2):\n \"\"\"\n function to apply FCC recommended formulae\n for calculating distances on earth projected to a plane\n \n significantly faster computationally, negligible loss in accuracy\n \n Args: \n loc1 - a tuple of lat/lon\n loc2 - a tuple of lat/lon\n \"\"\"\n lat1, lat2 = loc1[0], loc2[0]\n lon1, lon2 = loc1[1], loc2[1]\n \n mean_lat = (lat1+lat2)/2\n delta_lat = lat2 - lat1\n delta_lon = lon2 - lon1\n \n k1 = 111.13209 - 0.56605*cos(2*mean_lat) + .0012*cos(4*mean_lat)\n k2 = 111.41513*cos(mean_lat) - 0.09455*cos(3*mean_lat) + 0.00012*cos(5*mean_lat)\n \n distance = sqrt((k1*delta_lat)**2 + (k2*delta_lon)**2)\n \n return distance",
"_____no_output_____"
],
[
"locations.head()",
"_____no_output_____"
],
[
"# We get data every 60 seconds, so I'll drop any that are more than\n# 60 seconds old. Those would end up as duplicates otherwise.\ndf = locations[locations['age'] < 60].copy()\n\nprint(\"Old rows removed:\", len(locations)-len(df), \"out of\", len(locations))",
"Old rows removed: 256 out of 14461\n"
],
[
"# Again, I'll remove rows with no direction so we are only tracking \n# buses that report they are going one way or the other\ndf = df[~pd.isna(df['direction'])]",
"_____no_output_____"
],
[
"# Since these reports are a few seconds old, I'm going to shift the timestamps\n# so that they match when the vehicle was actually at that location\n\ndef shift_timestamp(row):\n \"\"\" subtracts row['age'] from row['timestamp'] \"\"\"\n return row['timestamp'] - pd.Timedelta(seconds=row['age'])\n\ndf['timestamp'] = df.apply(shift_timestamp, axis=1)\ndf.head()",
"_____no_output_____"
],
[
"# For stop comparison, I'll make lists of all inbound or outbound stops\ninbound_stops = stops[stops['direction'] == 'inbound'].reset_index(drop=True)\noutbound_stops = stops[stops['direction'] == 'outbound'].reset_index(drop=True)\n\ninbound_stops.head()",
"_____no_output_____"
],
[
"%%time\n\n# initialize columns for efficiency\ndf['closestStop'] = [0] * len(df)\ndf['distance'] = [0.0] * len(df)\n\nfor i in df.index:\n if '_I_' in df.at[i, 'direction']:\n candidates = inbound_stops.copy()\n elif '_O_' in df.at[i, 'direction']:\n candidates = outbound_stops.copy()\n else:\n # Skip row if bus is not found to be either inbound or outbound\n continue\n \n bus_coord = (df.at[i, 'latitude'], df.at[i, 'longitude'])\n\n # Find closest stop within candidates\n # Assume the first stop\n closest = candidates.iloc[0]\n distance = fcc_projection(bus_coord, (closest['lat'], closest['lon']))\n\n # Check each stop after that\n for j in range(1, len(candidates)):\n # find distance to this stop\n dist = fcc_projection(bus_coord, (candidates.iloc[j]['lat'], \n candidates.iloc[j]['lon']))\n if dist < distance:\n # closer stop found, save it\n closest = candidates.iloc[j]\n distance = dist\n \n # Save the tag of the closest stop and the distance to it\n df.at[i, 'closestStop'] = closest['tag']\n df.at[i, 'distance'] = distance",
"CPU times: user 2min 49s, sys: 13.4 ms, total: 2min 49s\nWall time: 2min 49s\n"
]
],
[
[
"There are 48 stops on both the inbound and outbound routes, and 10445 location reports for the day after removing any with no direction.\n\nSo the code cell above checked 501,360 combinations of bus locations and stops.\n\nGood candidate for optimization.",
"_____no_output_____"
]
],
[
[
"print(df.shape)\ndf.head(10)",
"(10445, 12)\n"
],
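[
"# Hedged optimization sketch, added for this write-up (not part of the original notebook).\n# The closest-stop loop above compares every bus location to every candidate stop one\n# pair at a time; this version computes the distances to all candidate stops at once\n# with numpy. It reuses the same formula as fcc_projection above (including taking the\n# cosine of latitudes given in degrees), so it should reproduce the same closestStop\n# and distance columns. Assumes df, inbound_stops and outbound_stops from the cells above.\nimport numpy as np\n\ndef fcc_distances(lat, lon, lats, lons):\n    # Distance in km from one point to arrays of candidate points\n    mean_lat = (lat + lats) / 2\n    dlat = lats - lat\n    dlon = lons - lon\n    k1 = 111.13209 - 0.56605*np.cos(2*mean_lat) + 0.0012*np.cos(4*mean_lat)\n    k2 = 111.41513*np.cos(mean_lat) - 0.09455*np.cos(3*mean_lat) + 0.00012*np.cos(5*mean_lat)\n    return np.sqrt((k1*dlat)**2 + (k2*dlon)**2)\n\n# Pre-extract stop coordinates as plain arrays so the inner stop loop disappears\nin_lats, in_lons = inbound_stops['lat'].values, inbound_stops['lon'].values\nout_lats, out_lons = outbound_stops['lat'].values, outbound_stops['lon'].values\n\nfor i in df.index:\n    direction = df.at[i, 'direction']\n    if '_I_' in direction:\n        cand, lats, lons = inbound_stops, in_lats, in_lons\n    elif '_O_' in direction:\n        cand, lats, lons = outbound_stops, out_lats, out_lons\n    else:\n        continue\n    dists = fcc_distances(df.at[i, 'latitude'], df.at[i, 'longitude'], lats, lons)\n    j = int(np.argmin(dists))\n    df.at[i, 'closestStop'] = cand.at[j, 'tag']\n    df.at[i, 'distance'] = dists[j]",
"_____no_output_____"
],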
[
"import matplotlib.pyplot as plt\nimport seaborn as sns\n\nfig, ax = plt.subplots()\nfig.set_size_inches(16, 16)\n\n# All vehicles\nsns.scatterplot(x=df['timestamp'], \n y=df['closestStop'].astype(str))\n\n# Single vehicle plot\n# vid = 5798\n# sns.scatterplot(x=df[df['vid']==vid]['timestamp'], \n# y=df[df['vid']==vid]['closestStop'].astype(str), \n# hue=df[df['vid']==vid]['direction'])\n\n# sort Y axis by the order stops should visited in\nstoplist = list(inbound_stops['tag']) + list(outbound_stops['tag'])\nplt.yticks(ticks=stoplist, labels=stoplist)\n\nplt.show()",
"/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning:\n\npandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.\n\n"
]
],
[
[
"## Calculating when each bus is at each stop\n\nGeneral steps:\n\n1. Find which stop is closest to each reported location (done above)\n - This step could use improvement. Using the closest stop will be a good approximation, but we could get more accurate by taking speed, distance, or maybe heading into account.\n2. Fill in times for stops in between each location report (interpolation)\n - \n - Possible improvement: the current approach assumes equal distances between stops\n3. Save those times for each stop, to get a list of all times each stop sees a bus\n4. Analyze that list of times the stop sees a bus to get the bunches and gaps",
"_____no_output_____"
]
],
[
[
"# Slimming down to just 1 vehicle so I can test faster\ndf_full = df.copy()\ndf = df[df['vid'] == 5797]",
"_____no_output_____"
],
[
"# This is the first inbound group\nprint(df.shape)\ndf.head(12)",
"(362, 12)\n"
],
[
"# The inbound and outbound lists have stops in the correct order\ninbound[:5]",
"_____no_output_____"
],
[
"# example, index of a stop\ninbound.index('4016')",
"_____no_output_____"
],
[
"# example, interpolate the time between stops at rows 0 and 1\ndf.at[0, 'timestamp'] + (df.at[1, 'timestamp'] - df.at[0, 'timestamp'])/2",
"_____no_output_____"
],
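[
"# Hedged illustration, added for this write-up (not part of the original notebook):\n# how the interpolation in the next cell splits a time gap. If a bus is seen at stop\n# position 2 at 10:00 and next at stop position 5 at 10:03, the gap is 3 positions,\n# so the two skipped stops get evenly spaced times in between.\nimport pandas as pd\n\nt_prev = pd.Timestamp('2020-06-01 10:00:00')\nt_curr = pd.Timestamp('2020-06-01 10:03:00')\nprevious, current = 2, 5\ngap = current - previous\ndiff = (t_curr - t_prev) / gap\nfor counter, position in enumerate(range(previous + 1, current), start=1):\n    print(f'skipped stop at position {position}: {t_prev + counter * diff}')",
"_____no_output_____"
],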
[
"%%time\n\n# Initialize the data structure I will store results in\n# Dict with the stop tag as a key, and a list of timestamps\nstop_times = {}\nfor stop in inbound + outbound:\n stop_times[str(stop)] = []\n\nfor vid in df_full['vid'].unique():\n # Process the route one vehicle at a time\n df = df_full[df_full['vid'] == vid]\n\n # process 1st row on its own\n prev_row = df.loc[df.index[0]]\n stop_times[str(prev_row['closestStop'])].append(prev_row['timestamp'])\n\n # loop through the rest of the rows, comparing each to the previous one\n for i, row in df[1:].iterrows():\n if row['direction'] != prev_row['direction']:\n # changed directions, don't compare to previous row\n stop_times[str(row['closestStop'])].append(row['timestamp'])\n else:\n # same direction, compare to previous\n if '_I_' in row['direction']: # get correct stop list\n stoplist = inbound\n else:\n stoplist = outbound\n\n current = stoplist.index(str(row['closestStop']))\n previous = stoplist.index(str(prev_row['closestStop']))\n gap = current - previous\n if gap > 1: # need to interpolate\n diff = (row['timestamp'] - prev_row['timestamp'])/gap\n counter = 1\n for stop in stoplist[previous+1:current]:\n # save interpolated time\n stop_times[str(stop)].append(prev_row['timestamp'] + (counter * diff))\n # print('interpolated time for stop', stop)\n\n # increase counter for the next stop\n # example: with 2 interpolated stops, gap would be 3\n # 1st diff is 1/3, next is 2/3\n counter += 1\n \n if row['closestStop'] != prev_row['closestStop']:\n # only save time if the stop has changed, \n # otherwise the bus hasn't moved since last time\n stop_times[str(row['closestStop'])].append(row['timestamp'])\n \n # advance for next row\n prev_row = row",
"CPU times: user 2.39 s, sys: 0 ns, total: 2.39 s\nWall time: 2.39 s\n"
],
[
"# meta-analysis\ntotal = 0\nfor key in stop_times.keys():\n total += len(stop_times[key])\n print(f\"Stop {key} recorded {len(stop_times[key])} timestamps\")\n\nprint(f\"\\n{total} total timestamps were recorded, meaning {total - len(df_full)} were interpolated\")",
"Stop 4277 recorded 32 timestamps\nStop 3555 recorded 122 timestamps\nStop 3548 recorded 127 timestamps\nStop 3546 recorded 128 timestamps\nStop 3844 recorded 130 timestamps\nStop 3842 recorded 130 timestamps\nStop 3840 recorded 131 timestamps\nStop 3838 recorded 132 timestamps\nStop 3836 recorded 132 timestamps\nStop 3834 recorded 132 timestamps\nStop 3887 recorded 132 timestamps\nStop 3832 recorded 132 timestamps\nStop 3830 recorded 133 timestamps\nStop 3827 recorded 133 timestamps\nStop 3825 recorded 133 timestamps\nStop 3823 recorded 133 timestamps\nStop 3846 recorded 134 timestamps\nStop 3853 recorded 134 timestamps\nStop 3897 recorded 134 timestamps\nStop 3876 recorded 130 timestamps\nStop 3893 recorded 124 timestamps\nStop 3848 recorded 120 timestamps\nStop 3859 recorded 120 timestamps\nStop 3885 recorded 120 timestamps\nStop 6489 recorded 121 timestamps\nStop 6296 recorded 121 timestamps\nStop 6320 recorded 120 timestamps\nStop 6292 recorded 121 timestamps\nStop 6306 recorded 122 timestamps\nStop 6310 recorded 126 timestamps\nStop 4905 recorded 130 timestamps\nStop 4016 recorded 131 timestamps\nStop 4031 recorded 131 timestamps\nStop 4026 recorded 132 timestamps\nStop 4022 recorded 133 timestamps\nStop 4019 recorded 133 timestamps\nStop 4023 recorded 133 timestamps\nStop 4020 recorded 133 timestamps\nStop 4030 recorded 135 timestamps\nStop 4024 recorded 135 timestamps\nStop 4027 recorded 162 timestamps\nStop 4029 recorded 133 timestamps\nStop 4018 recorded 95 timestamps\nStop 4021 recorded 95 timestamps\nStop 4025 recorded 93 timestamps\nStop 4028 recorded 90 timestamps\nStop 4017 recorded 87 timestamps\nStop 34015 recorded 43 timestamps\nStop 4015 recorded 20 timestamps\nStop 6294 recorded 89 timestamps\nStop 6290 recorded 88 timestamps\nStop 6314 recorded 93 timestamps\nStop 6307 recorded 94 timestamps\nStop 6302 recorded 95 timestamps\nStop 6299 recorded 95 timestamps\nStop 6316 recorded 95 timestamps\nStop 6312 recorded 96 timestamps\nStop 6315 recorded 119 timestamps\nStop 6301 recorded 124 timestamps\nStop 6304 recorded 130 timestamps\nStop 6300 recorded 132 timestamps\nStop 6303 recorded 132 timestamps\nStop 6311 recorded 133 timestamps\nStop 6317 recorded 133 timestamps\nStop 6297 recorded 133 timestamps\nStop 6298 recorded 133 timestamps\nStop 6309 recorded 133 timestamps\nStop 6305 recorded 133 timestamps\nStop 6291 recorded 133 timestamps\nStop 6319 recorded 133 timestamps\nStop 6295 recorded 133 timestamps\nStop 6486 recorded 133 timestamps\nStop 3884 recorded 132 timestamps\nStop 3858 recorded 132 timestamps\nStop 3847 recorded 126 timestamps\nStop 3892 recorded 123 timestamps\nStop 3875 recorded 119 timestamps\nStop 3896 recorded 118 timestamps\nStop 3852 recorded 119 timestamps\nStop 3845 recorded 120 timestamps\nStop 3822 recorded 122 timestamps\nStop 3824 recorded 122 timestamps\nStop 7160 recorded 122 timestamps\nStop 3828 recorded 123 timestamps\nStop 3831 recorded 128 timestamps\nStop 3886 recorded 130 timestamps\nStop 3833 recorded 130 timestamps\nStop 3835 recorded 130 timestamps\nStop 3837 recorded 132 timestamps\nStop 3839 recorded 132 timestamps\nStop 3841 recorded 133 timestamps\nStop 3843 recorded 133 timestamps\nStop 3547 recorded 133 timestamps\nStop 3549 recorded 126 timestamps\nStop 3550 recorded 74 timestamps\nStop 34277 recorded 27 timestamps\n\n11481 total timestamps were recorded, meaning 1036 were interpolated\n"
]
],
[
[
"## Finding bunches and gaps\n\nNow that we have the lists of each time a bus was at each stop, we can go through those lists to check if the times were long or short enough to be bunches or gaps.",
"_____no_output_____"
]
],
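[
[
"# Editorial sketch (not part of the original notebook): how the bunch/gap thresholds\n# used below come from the scheduled headway. An interval shorter than 20% of the\n# headway counts as a bunch, and one longer than 150% counts as a gap. The 10-minute\n# headway here is only an illustration; the real code uses schedule.common_interval.\ncommon_interval = 10                             # scheduled headway in minutes\nbunch_threshold = (common_interval * 60) * 0.2   # 120 seconds\ngap_threshold = (common_interval * 60) * 1.5     # 900 seconds\nbunch_threshold, gap_threshold",
"_____no_output_____"
]
],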
[
[
"# Expected headway (in minutes)\nschedule.common_interval",
"_____no_output_____"
],
[
"%%time\n# sort every array so that the times are in order\nfor stop in stop_times.keys():\n stop_times[stop].sort()",
"CPU times: user 5.59 ms, sys: 0 ns, total: 5.59 ms\nWall time: 5.54 ms\n"
],
[
"%%time\n\n# Initialize dataframe for the bunces and gaps\nproblems = pd.DataFrame(columns=['type', 'time', 'duration', 'stop'])\ncounter = 0\n\n# Set the bunch/gap thresholds (in seconds)\nbunch_threshold = (schedule.common_interval * 60) * .2 # 20%\ngap_threshold = (schedule.common_interval * 60) * 1.5 # 150%\n\nfor stop in stop_times.keys():\n # ensure we have a time at all for this stop\n if len(stop_times[stop]) == 0:\n print(f\"Stop {stop} had no recorded times\")\n continue # go to next stop in the loop\n\n # save initial time\n prev_time = stop_times[stop][0]\n\n # loop through all others, comparing to the previous one\n for time in stop_times[stop][1:]:\n diff = (time - prev_time).seconds\n if diff <= bunch_threshold:\n # bunch found, save it\n problems.at[counter] = ['bunch', prev_time, diff, stop]\n counter += 1\n elif diff >= gap_threshold:\n problems.at[counter] = ['gap', prev_time, diff, stop]\n counter += 1\n \n prev_time = time",
"CPU times: user 2.39 s, sys: 35.6 ms, total: 2.43 s\nWall time: 2.37 s\n"
],
[
"# type - 'bunch' or 'gap'\n# time - the start time of the problem interval\n# duration - the length of the interval, in seconds\n# stop - the stop this interval was recorded at\n\nproblems[problems['type'] == 'gap']",
"_____no_output_____"
],
[
"problems[problems['type'] == 'bunch']",
"_____no_output_____"
],
[
"# how many is that?\nbunches = len(problems[problems['type'] == 'bunch'])\ngaps = len(problems[problems['type'] == 'gap'])\nintervals = total-len(stop_times)\n\nprint(f\"Out of {intervals} recorded intervals, we found {bunches} bunches and {gaps} gaps\")\nprint(f\"\\t{(bunches/intervals)*100 : .2f}% bunched\")\nprint(f\"\\t{(gaps/intervals)*100 : .2f}% gapped\")",
"Out of 11385 recorded intervals, we found 532 bunches and 1873 gaps\n\t 4.67% bunched\n\t 16.45% gapped\n"
]
],
[
[
"## On time percentage\n\nWe can also use the list of stop times to calculate on-time percentage. For each expected time, did we have a bus there on time?\n\nSFMTA defines \"on time\" as \"within four minutes late or one minute early of the scheduled arrival time\"\n\n- source: https://www.sfmta.com/reports/muni-time-performance\n\nWe don't have enough precision in our data to distinguish arrivals and departures from every specific stop, so our results may not match up exactly with SFMTA's reported results. That website reports monthly statistics, but not daily.\n\nThese approximations also do not have info on which bus is supposed to be which trip. This code does not distinguish between early or late if a scheduled stop was missed, because we do not know if the bus before or after was supposed to make that stop.",
"_____no_output_____"
]
],
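[
[
"# Editorial sketch (not part of the original notebook): the SFMTA on-time window used\n# below. An observed arrival counts as on time if it is no more than one minute before\n# and no more than four minutes after the scheduled time. The timestamps are made up.\nimport pandas as pd\n\nscheduled = pd.Timestamp('2020-06-01 08:00:00')\nearly_threshold = pd.Timedelta(minutes=1)\nlate_threshold = pd.Timedelta(minutes=4)\n\nfor observed in [pd.Timestamp('2020-06-01 07:58:30'),\n                 pd.Timestamp('2020-06-01 08:03:00'),\n                 pd.Timestamp('2020-06-01 08:05:30')]:\n    on_time = (scheduled - early_threshold) <= observed <= (scheduled + late_threshold)\n    print(observed.time(), 'on time' if on_time else 'not on time')",
"_____no_output_____"
]
],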
[
[
"# Remind myself what I'm working with\nschedule.inbound_table.head(10)",
"_____no_output_____"
],
[
"# What times did we actually see?\nstop_times['4026'][:10]",
"_____no_output_____"
],
[
"pd.to_datetime(inbound_times['4277'])",
"_____no_output_____"
],
[
"# Save schedules with timestamp data types, set date to match\nexample = df_full['timestamp'][0]\n\ninbound_times = schedule.inbound_table\nfor col in inbound_times.columns:\n inbound_times[col] = pd.to_datetime(inbound_times[col]).apply(\n lambda dt: dt.replace(year=example.year, \n month=example.month, \n day=example.day))\n\noutbound_times = schedule.outbound_table\nfor col in outbound_times.columns:\n outbound_times[col] = pd.to_datetime(outbound_times[col]).apply(\n lambda dt: dt.replace(year=example.year, \n month=example.month, \n day=example.day))",
"_____no_output_____"
],
[
"%%time\n# This performs a sequential search to find observed times that match \n# expected times. Could switch to a binary search to improve speed if needed.\n\ndef count_on_time(expected_times, observed_times):\n \"\"\" Returns the number of on-time stops found \"\"\"\n\n # set up early/late thresholds (in seconds)\n early_threshold = pd.Timedelta(seconds=1*60) # 1 minute early\n late_threshold = pd.Timedelta(seconds=4*60) # 4 minutes late\n\n count = 0\n for stop in expected_times.columns:\n for expected in expected_times[stop]:\n if pd.isna(expected):\n continue # skip NaT values in the expected schedule\n\n # for each expected time...\n # find first observed time after the early threshold\n found_time = None\n early = expected - early_threshold\n for observed in observed_times[stop]:\n if observed >= early:\n found_time = observed\n break \n\n # if found time is still None, then all observed times were too early\n # if found_time is before the late threshold then we were on time\n if (not pd.isna(found_time)) and found_time <= (expected + late_threshold):\n # found_time is within the on-time window\n count += 1\n\n return count\n\n# count times for both inbound and outbound schedules\non_time_count = (count_on_time(inbound_times, stop_times) + \n count_on_time(outbound_times, stop_times))\n\n# get total expected count\ntotal_expected = inbound_times.count().sum() + outbound_times.count().sum()\n\n# and there we have it, the on-time percentage\nprint(f\"Found {on_time_count} on time stops out of {total_expected} scheduled\")\nprint(f\"On-time percentage is {(on_time_count/total_expected)*100 : .2f}%\\n\")",
"Found 952 on time stops out of 2324 scheduled\nOn-time percentage is 40.96%\n\nCPU times: user 60.5 ms, sys: 2 ms, total: 62.5 ms\nWall time: 66.2 ms\n"
],
[
"",
"_____no_output_____"
]
],
[
[
"## Bunches and Gaps Line Graph\n",
"_____no_output_____"
]
],
[
[
"problems.head()",
"_____no_output_____"
],
[
"index = pd.DatetimeIndex(problems['time'])\ndf = problems.copy()\ndf.index = index\ndf.head()",
"_____no_output_____"
],
[
"# How to select rows between a time\ninterval = pd.Timedelta(minutes=10)\n\nselect_start = pd.to_datetime('2020/6/8 11:00:00')\nselect_end = select_start + interval\n\ndf.between_time(select_start.time(), select_end.time())",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n\ndef draw_bunch_gap_graph(problems, interval=5):\n \"\"\"\n plots a graph of the bunches and gaps throughout the day\n\n problems - the dataframe of bunches and gaps\n\n interval - the number of minutes to bin data into\n \"\"\"\n\n # generate the DatetimeIndex needed\n index = pd.DatetimeIndex(problems['time'])\n df = problems.copy()\n df.index = index\n\n # lists for graph data\n bunches = []\n gaps = []\n times = []\n\n # set the time interval and selectino times\n interval = pd.Timedelta(minutes=interval)\n start_date = problems.at[0, 'time'].replace(hour=0, minute=0, second=0)\n select_start = start_date\n select_end = date + interval\n\n while select_start.day == start_date.day:\n # get the count of each type of problem in this time interval\n count = df.between_time(select_start.time(), select_end.time())['type'].value_counts()\n\n # append the counts to the data list\n if 'bunch' in count.index:\n bunches.append(count['bunch'])\n else:\n bunches.append(0)\n \n if 'gap' in count.index:\n gaps.append(count['gap'])\n else:\n gaps.append(0)\n\n # save the start time for the x axis\n times.append(str(select_start.time())[:5])\n \n # increment the selection window\n select_start += interval\n select_end += interval\n\n # draw the graph\n fig, ax = plt.subplots()\n fig.set_size_inches(16, 6)\n\n plt.plot(times, bunches, label='Bunches')\n plt.plot(times, gaps, label='Gaps')\n\n plt.title(f\"Bunches and Gaps for route {rid} on {str(pd.to_datetime(begin).date())}\")\n ax.xaxis.set_major_locator(plt.MaxNLocator(24)) # limit number of ticks on x-axis\n plt.legend()\n plt.show()",
"_____no_output_____"
],
[
"bunch_gap_graph(problems, interval=5)",
"_____no_output_____"
],
[
"bunch_gap_graph(problems, interval=10)",
"_____no_output_____"
]
],
[
[
"# Cleaning code\n\nThe above code achieves what we're after: it finds the number of bunches and gaps in a given route on a given day, and also calculates the on-time percentage.\n\nThe code below can be run on it's own, and does not include any of the exploratory bits.",
"_____no_output_____"
],
[
"## Run Once",
"_____no_output_____"
]
],
[
[
"# Used in many places\nimport psycopg2 as pg\nimport pandas as pd\n\n# Used to enter database credentials without saving them to the notebook file\nimport getpass\n\n# Used to easily read in bus location data\nimport pandas.io.sql as sqlio\n\n# Only used in the schedule class definition\nimport numpy as np\nfrom scipy import stats\n\n# Used in the fcc_projection function to find distances\nfrom math import sqrt, cos\n\n# Used at the end, to convert the final product to JSON\nimport json",
"/usr/local/lib/python3.6/dist-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use \"pip install psycopg2-binary\" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.\n \"\"\")\n"
],
[
"# Enter database credentials. Requires you to paste in the user and\n# password so it isn't saved in the notebook file\nprint(\"Enter database username:\")\nuser = getpass.getpass()\nprint(\"Enter database password:\")\npassword = getpass.getpass()\n\ncreds = {\n 'user': user,\n 'password': password,\n 'host': \"lambdalabs24sfmta.cykkiwxbfvpg.us-east-1.rds.amazonaws.com\",\n 'dbname': \"historicalTransitData\"\n}\n\n# Set up connection to database\ncnx = pg.connect(**creds)\ncursor = cnx.cursor()\n\nprint('\\nDatabase connection successful')",
"Enter database username:\n··········\nEnter database password:\n··········\n\nDatabase connection successful\n"
],
[
"# Schedule class definition\n# Copied from previous work, has extra methods that are not all used in this notebook\n\nclass Schedule:\n def __init__(self, route_id, date, connection):\n \"\"\"\n The Schedule class loads the schedule for a particular route and day,\n and makes several accessor methods available for it.\n\n Parameters:\n\n route_id (str or int)\n - The route id to load\n\n date (str or pandas.Timestamp)\n - Which date to load\n - Converted with pandas.to_datetime so many formats are acceptable\n \"\"\"\n\n self.route_id = str(route_id)\n self.date = pd.to_datetime(date)\n\n # load the schedule for that date and route\n self.route_data = load_schedule(self.route_id, self.date, connection)\n\n # process data into a table\n self.inbound_table, self.outbound_table = extract_schedule_tables(self.route_data)\n\n # calculate the common interval values\n self.mean_interval, self.common_interval = get_common_intervals(\n [self.inbound_table, self.outbound_table])\n\n def list_stops(self):\n \"\"\"\n returns the list of all stops used by this schedule\n \"\"\"\n\n # get stops for both inbound and outbound routes\n inbound = list(self.inbound_table.columns)\n outbound = list(self.outbound_table.columns)\n\n # convert to set to ensure no duplicates,\n # then back to list for the correct output type\n return list(set(inbound + outbound))\n\n def get_specific_interval(self, stop, time, inbound=True):\n \"\"\"\n Returns the expected interval, in minutes, for a given stop and\n time of day.\n\n Parameters:\n\n stop (str or int)\n - the stop tag/id of the bus stop to check\n\n time (str or pandas.Timestamp)\n - the time of day to check, uses pandas.to_datetime to convert\n - examples that work: \"6:00\", \"3:30pm\", \"15:30\"\n\n inbound (bool, optional)\n - whether to check the inbound or outbound schedule\n - ignored unless the given stop is in both inbound and outbound\n \"\"\"\n\n # ensure correct parameter types\n stop = str(stop)\n time = pd.to_datetime(time)\n\n # check which route to use, and extract the column for the given stop\n if (stop in self.inbound_table.columns and\n stop in self.outbound_table.columns):\n # stop exists in both, use inbound parameter to decide\n if inbound:\n sched = self.inbound_table[stop]\n else:\n sched = self.outbound_table[stop]\n elif (stop in self.inbound_table.columns):\n # stop is in the inbound schedule, use that\n sched = self.inbound_table[stop]\n elif (stop in self.outbound_table.columns):\n # stop is in the outbound schedule, use that\n sched = self.outbound_table[stop]\n else:\n # stop doesn't exist in either, throw an error\n raise ValueError(f\"Stop id '{stop}' doesn't exist in either inbound or outbound schedules\")\n\n # 1: convert schedule to datetime for comparison statements\n # 2: drop any NaN values\n # 3: convert to list since pd.Series threw errors on i indexing\n sched = list(pd.to_datetime(sched).dropna())\n\n # reset the date portion of the time parameter to\n # ensure we are checking the schedule correctly\n time = time.replace(year=self.date.year, month=self.date.month,\n day=self.date.day)\n\n # iterate through that list to find where the time parameter fits\n for i in range(1, len(sched)):\n # start at 1 and move forward,\n # is the time parameter before this schedule entry?\n if(time < sched[i]):\n # return the difference between this entry and the previous one\n return (sched[i] - sched[i-1]).seconds / 60\n\n # can only reach this point if the time parameter is after all entries\n # in the schedule, return the last 
available interval\n return (sched[len(sched)-1] - sched[len(sched)-2]).seconds / 60\n\n\ndef load_schedule(route, date, connection):\n \"\"\"\n loads schedule data from the database and returns it\n\n Parameters:\n\n route (str)\n - The route id to load\n\n date (str or pd.Timestamp)\n - Which date to load\n - Converted with pandas.to_datetime so many formats are acceptable\n \"\"\"\n\n # ensure correct parameter types\n route = str(route)\n date = pd.to_datetime(date)\n\n # DB connection\n cursor = connection.cursor()\n\n # build selection query\n query = \"\"\"\n SELECT content\n FROM schedules\n WHERE rid = %s AND\n begin_date <= %s::TIMESTAMP AND\n (end_date IS NULL OR end_date >= %s::TIMESTAMP);\n \"\"\"\n\n # execute query and save the route data to a local variable\n cursor.execute(query, (route, str(date), str(date)))\n data = cursor.fetchone()[0]['route']\n\n # pd.Timestamp.dayofweek returns 0 for monday and 6 for Sunday\n # the actual serviceClass strings are defined by Nextbus\n # these are the only 3 service classes we can currently observe,\n # if others are published later then this will need to change\n if(date.dayofweek <= 4):\n serviceClass = 'wkd'\n elif(date.dayofweek == 5):\n serviceClass = 'sat'\n else:\n serviceClass = 'sun'\n\n # the schedule format has two entries for each serviceClass,\n # one each for inbound and outbound.\n\n # return each entry in the data list with the correct serviceClass\n return [sched for sched in data if (sched['serviceClass'] == serviceClass)]\n\n\ndef extract_schedule_tables(route_data):\n \"\"\"\n converts raw schedule data to two pandas dataframes\n\n columns are stops, and rows are individual trips\n\n returns inbound_df, outbound_df\n \"\"\"\n\n # assuming 2 entries, but not assuming order\n if(route_data[0]['direction'] == 'Inbound'):\n inbound = 0\n else:\n inbound = 1\n\n # extract a list of stops to act as columns\n inbound_stops = [s['tag'] for s in route_data[inbound]['header']['stop']]\n\n # initialize dataframe\n inbound_df = pd.DataFrame(columns=inbound_stops)\n\n # extract each row from the data\n if type(route_data[inbound]['tr']) == list:\n # if there are multiple trips in a day, structure will be a list\n i = 0\n for trip in route_data[inbound]['tr']:\n for stop in trip['stop']:\n # '--' indicates the bus is not going to that stop on this trip\n if stop['content'] != '--':\n inbound_df.at[i, stop['tag']] = stop['content']\n # increment for the next row\n i += 1\n else:\n # if there is only 1 trip in a day, the object is a dict and\n # must be handled slightly differently\n for stop in route_data[inbound]['tr']['stop']:\n if stop['content'] != '--':\n inbound_df.at[0, stop['tag']] = stop['content']\n\n # flip between 0 and 1\n outbound = int(not inbound)\n\n # repeat steps for the outbound schedule\n outbound_stops = [s['tag'] for s in route_data[outbound]['header']['stop']]\n outbound_df = pd.DataFrame(columns=outbound_stops)\n\n if type(route_data[outbound]['tr']) == list:\n i = 0\n for trip in route_data[outbound]['tr']:\n for stop in trip['stop']:\n if stop['content'] != '--':\n outbound_df.at[i, stop['tag']] = stop['content']\n i += 1\n else:\n for stop in route_data[outbound]['tr']['stop']:\n if stop['content'] != '--':\n outbound_df.at[0, stop['tag']] = stop['content']\n\n # return both dataframes\n return inbound_df, outbound_df\n\n\ndef get_common_intervals(df_list):\n \"\"\"\n takes route schedule tables and returns both the average interval (mean)\n and the most common interval (mode), measured in number of 
minutes\n\n takes a list of dataframes and combines them before calculating statistics\n\n intended to combine inbound and outbound schedules for a single route\n \"\"\"\n\n # ensure we have at least one dataframe\n if len(df_list) == 0:\n raise ValueError(\"Function requires at least one dataframe\")\n\n # append all dataframes in the array together\n df = df_list[0].copy()\n for i in range(1, len(df_list)):\n df.append(df_list[i].copy())\n\n # convert all values to datetime so we can get an interval easily\n for col in df.columns:\n df[col] = pd.to_datetime(df[col])\n\n # initialize a table to hold each individual interval\n intervals = pd.DataFrame(columns=df.columns)\n intervals['temp'] = range(len(df))\n\n # take each column and find the intervals in it\n for col in df.columns:\n prev_time = np.nan\n for i in range(len(df)):\n # find the first non-null value and save it to prev_time\n if pd.isnull(prev_time):\n prev_time = df.at[i, col]\n # if the current time is not null, save the interval\n elif ~pd.isnull(df.at[i, col]):\n intervals.at[i, col] = (df.at[i, col] - prev_time).seconds / 60\n prev_time = df.at[i, col]\n\n # this runs without adding a temp column, but the above loop runs 3x as\n # fast if the rows already exist\n intervals = intervals.drop('temp', axis=1)\n\n # calculate the mean of the entire table\n mean = intervals.mean().mean()\n\n # calculate the mode of the entire table, the [0][0] at the end is\n # because scipy.stats returns an entire ModeResult class\n mode = stats.mode(intervals.values.flatten())[0][0]\n\n return mean, mode",
"_____no_output_____"
],
[
"# Route class definition\n# Copied from previous work, has extra methods that are not all used in this notebook\n\nclass Route:\n def __init__(self, route_id, date, connection):\n \"\"\"\n The Route class loads the route configuration data for a particular\n route, and makes several accessor methods available for it.\n\n Parameters:\n\n route_id (str or int)\n - The route id to load\n\n date (str or pandas.Timestamp)\n - Which date to load\n - Converted with pandas.to_datetime so many formats are acceptable\n \"\"\"\n\n self.route_id = str(route_id)\n self.date = pd.to_datetime(date)\n\n # load the route data\n self.route_data, self.route_type, self.route_name = load_route(self.route_id, self.date, connection)\n\n # extract stops info and rearrange columns to be more human readable\n # note: the stop tag is what was used in the schedule data, not stopId\n self.stops_table = pd.DataFrame(self.route_data['stop'])\n self.stops_table = self.stops_table[['stopId', 'tag', 'title', 'lat', 'lon']]\n\n # extract route path, list of (lat, lon) pairs\n self.path_coords = extract_path(self.route_data)\n\n # extract stops table\n self.stops_table, self.inbound, self.outbound = extract_stops(self.route_data)\n\n\ndef load_route(route, date, connection):\n \"\"\"\n loads raw route data from the database\n\n Parameters:\n\n route (str or int)\n - The route id to load\n\n date (str or pd.Timestamp)\n - Which date to load\n - Converted with pandas.to_datetime so many formats are acceptable\n \n Returns route_data (dict), route_type (str), route_name (str)\n \"\"\"\n\n # ensure correct parameter types\n route = str(route)\n date = pd.to_datetime(date)\n\n # DB connection\n cursor = connection.cursor()\n\n # build selection query\n query = \"\"\"\n SELECT route_name, route_type, content\n FROM routes\n WHERE rid = %s AND\n begin_date <= %s::TIMESTAMP AND\n (end_date IS NULL OR end_date > %s::TIMESTAMP);\n \"\"\"\n\n # execute query and return the route data\n cursor.execute(query, (route, str(date), str(date)))\n result = cursor.fetchone()\n return result[2]['route'], result[1], result[0]\n\n\ndef extract_path(route_data):\n \"\"\"\n Extracts the list of path coordinates for a route.\n\n The raw data stores this as an unordered list of sub-routes, so this\n function deciphers the order they should go in and returns a single list.\n \"\"\"\n\n # KNOWN BUG\n # this approach assumed all routes were either a line or a loop.\n # routes that have multiple sub-paths meeting at a point break this,\n # route 24 is a current example.\n # I'm committing this now to get the rest of the code out there\n\n # extract the list of subpaths as just (lat,lon) coordinates\n # also converts from string to float (raw data has strings)\n path = []\n for sub_path in route_data['path']:\n path.append([(float(p['lat']), float(p['lon'])) \n for p in sub_path['point']])\n\n # start with the first element, remove it from path\n final = path[0]\n path.pop(0)\n\n # loop until the first and last coordinates in final match\n counter = len(path)\n done = True\n while final[0] != final[-1]:\n # loop through the sub-paths that we haven't yet moved to final\n for i in range(len(path)):\n # check if the last coordinate in final matches the first \n # coordinate of another sub-path\n if final[-1] == path[i][0]:\n # match found, move it to final\n # leave out the first coordinate to avoid duplicates\n final = final + path[i][1:]\n path.pop(i)\n break # break the for loop\n \n # protection against infinite loops, if the path never closes\n counter 
-= 1\n if counter < 0:\n done = False\n break\n\n if not done:\n # route did not connect in a loop, perform same steps backwards \n # to get the rest of the line\n for _ in range(len(path)):\n # loop through the sub-paths that we haven't yet moved to final\n for i in range(len(path)):\n # check if the first coordinate in final matches the last \n # coordinate of another sub-path\n if final[0] == path[i][-1]:\n # match found, move it to final\n # leave out the last coordinate to avoid duplicates\n final = path[i][:-1] + final\n path.pop(i)\n break # break the for loop\n\n # some routes may have un-used sub-paths\n # Route 1 for example has two sub-paths that are almost identical, with the \n # same start and end points\n # if len(path) > 0:\n # print(f\"WARNING: {len(path)} unused sub-paths\")\n\n # return the final result\n return final\n\n\ndef extract_stops(route_data):\n \"\"\"\n Extracts a dataframe of stops info\n\n Returns the main stops dataframe, and a list of inbound and outbound stops \n in the order they are intended to be on the route\n \"\"\"\n\n stops = pd.DataFrame(route_data['stop'])\n directions = pd.DataFrame(route_data['direction'])\n\n # Change stop arrays to just the list of numbers\n for i in range(len(directions)):\n directions.at[i, 'stop'] = [s['tag'] for s in directions.at[i, 'stop']]\n\n # Find which stops are inbound or outbound\n inbound = []\n for stop_list in directions[directions['name'] == \"Inbound\"]['stop']:\n for stop in stop_list:\n if stop not in inbound:\n inbound.append(stop)\n\n outbound = []\n for stop_list in directions[directions['name'] == \"Outbound\"]['stop']:\n for stop in stop_list:\n if stop not in inbound:\n outbound.append(stop)\n\n # Label each stop as inbound or outbound\n stops['direction'] = ['none'] * len(stops)\n for i in range(len(stops)):\n if stops.at[i, 'tag'] in inbound:\n stops.at[i, 'direction'] = 'inbound'\n elif stops.at[i, 'tag'] in outbound:\n stops.at[i, 'direction'] = 'outbound'\n\n # Convert from string to float\n stops['lat'] = stops['lat'].astype(float)\n stops['lon'] = stops['lon'].astype(float)\n\n return stops, inbound, outbound",
"_____no_output_____"
],
[
"def get_location_data(rid, begin, end, connection):\n # Build query to select location data\n query = f\"\"\"\n SELECT *\n FROM locations\n WHERE rid = '{rid}' AND\n timestamp > '{begin}'::TIMESTAMP AND\n timestamp < '{end}'::TIMESTAMP\n ORDER BY id;\n \"\"\"\n\n # read the query directly into pandas\n locations = sqlio.read_sql_query(query, connection)\n\n # Convert those UTC timestamps to local PST by subtracting 7 hours\n locations['timestamp'] = locations['timestamp'] - pd.Timedelta(hours=7)\n\n # return the result\n return locations\n",
"_____no_output_____"
],
[
"# Written by Austie\ndef fcc_projection(loc1, loc2):\n \"\"\"\n function to apply FCC recommended formulae\n for calculating distances on earth projected to a plane\n \n significantly faster computationally, negligible loss in accuracy\n \n Args: \n loc1 - a tuple of lat/lon\n loc2 - a tuple of lat/lon\n \"\"\"\n lat1, lat2 = loc1[0], loc2[0]\n lon1, lon2 = loc1[1], loc2[1]\n \n mean_lat = (lat1+lat2)/2\n delta_lat = lat2 - lat1\n delta_lon = lon2 - lon1\n \n k1 = 111.13209 - 0.56605*cos(2*mean_lat) + .0012*cos(4*mean_lat)\n k2 = 111.41513*cos(mean_lat) - 0.09455*cos(3*mean_lat) + 0.00012*cos(5*mean_lat)\n \n distance = sqrt((k1*delta_lat)**2 + (k2*delta_lon)**2)\n \n return distance\n",
"_____no_output_____"
],
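[
"# Editorial sketch (not part of the original notebook): example call to fcc_projection\n# with two made-up coordinates near downtown San Francisco. The result is the\n# approximate planar distance in kilometers.\nfcc_projection((37.7749, -122.4194), (37.7793, -122.4143))",
"_____no_output_____"
],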
[
"def clean_locations(locations, stops):\n \"\"\"\n takes a dataframe of bus locations and a dataframe of \n\n returns the locations dataframe with nearest stop added\n \"\"\"\n \n # remove old location reports that would be duplicates\n df = locations[locations['age'] < 60].copy()\n\n # remove rows with no direction value\n df = df[~pd.isna(df['direction'])]\n\n # shift timestamps according to the age column\n df['timestamp'] = df.apply(shift_timestamp, axis=1)\n\n # Make lists of all inbound or outbound stops\n inbound_stops = stops[stops['direction'] == 'inbound'].reset_index(drop=True)\n outbound_stops = stops[stops['direction'] == 'outbound'].reset_index(drop=True)\n\n # initialize new columns for efficiency\n df['closestStop'] = [0] * len(df)\n df['distance'] = [0.0] * len(df)\n\n for i in df.index:\n if '_I_' in df.at[i, 'direction']:\n candidates = inbound_stops\n elif '_O_' in df.at[i, 'direction']:\n candidates = outbound_stops\n else:\n # Skip row if bus is not found to be either inbound or outbound\n continue\n \n bus_coord = (df.at[i, 'latitude'], df.at[i, 'longitude'])\n\n # Find closest stop within candidates\n # Assume the first stop\n closest = candidates.iloc[0]\n distance = fcc_projection(bus_coord, (closest['lat'], closest['lon']))\n\n # Check each stop after that\n for _, row in candidates[1:].iterrows():\n # find distance to this stop\n dist = fcc_projection(bus_coord, (row['lat'], row['lon']))\n if dist < distance:\n # closer stop found, save it\n closest = row\n distance = dist\n \n # Save the tag of the closest stop and the distance to it\n df.at[i, 'closestStop'] = closest['tag']\n df.at[i, 'distance'] = distance\n\n return df\n\n\ndef shift_timestamp(row):\n \"\"\" subtracts row['age'] from row['timestamp'] \"\"\"\n return row['timestamp'] - pd.Timedelta(seconds=row['age'])",
"_____no_output_____"
],
[
"def get_stop_times(locations, route):\n \"\"\"\n returns a dict, keys are stop tags and values are lists of timestamps \n that describe every time a bus was seen at that stop\n \"\"\"\n\n # Initialize the data structure I will store results in\n stop_times = {}\n for stop in route.inbound + route.outbound:\n stop_times[str(stop)] = []\n\n for vid in locations['vid'].unique():\n # Process the route one vehicle at a time\n df = locations[locations['vid'] == vid]\n\n # process 1st row on its own\n prev_row = df.loc[df.index[0]]\n stop_times[str(prev_row['closestStop'])].append(prev_row['timestamp'])\n\n # loop through the rest of the rows, comparing each to the previous one\n for i, row in df[1:].iterrows():\n if row['direction'] != prev_row['direction']:\n # changed directions, don't compare to previous row\n stop_times[str(row['closestStop'])].append(row['timestamp'])\n else:\n # same direction, compare to previous row\n if '_I_' in row['direction']: # get correct stop list\n stoplist = route.inbound\n else:\n stoplist = route.outbound\n\n current = stoplist.index(str(row['closestStop']))\n previous = stoplist.index(str(prev_row['closestStop']))\n gap = current - previous\n if gap > 1: # need to interpolate\n diff = (row['timestamp'] - prev_row['timestamp'])/gap\n counter = 1\n for stop in stoplist[previous+1:current]:\n # save interpolated time\n stop_times[str(stop)].append(prev_row['timestamp'] + (counter * diff))\n\n # increase counter for the next stop\n # example: with 2 interpolated stops, gap would be 3\n # 1st diff is 1/3, next is 2/3\n counter += 1\n \n if row['closestStop'] != prev_row['closestStop']:\n # only save time if the stop has changed, \n # otherwise the bus hasn't moved since last time\n stop_times[str(row['closestStop'])].append(row['timestamp'])\n \n # advance for next row\n prev_row = row\n \n # Sort each list before returning\n for stop in stop_times.keys():\n stop_times[stop].sort()\n\n return stop_times\n",
"_____no_output_____"
],
[
"def get_bunches_gaps(stop_times, schedule, bunch_threshold=.2, gap_threshold=1.5):\n \"\"\"\n returns a dataframe of all bunches and gaps found\n\n default thresholds define a bunch as 20% and a gap as 150% of scheduled headway\n \"\"\"\n\n # Initialize dataframe for the bunces and gaps\n problems = pd.DataFrame(columns=['type', 'time', 'duration', 'stop'])\n counter = 0\n\n # Set the bunch/gap thresholds (in seconds)\n bunch_threshold = (schedule.common_interval * 60) * bunch_threshold\n gap_threshold = (schedule.common_interval * 60) * gap_threshold\n\n for stop in stop_times.keys():\n # ensure we have any times at all for this stop\n if len(stop_times[stop]) == 0:\n #print(f\"Stop {stop} had no recorded times\")\n continue # go to next stop in the loop\n\n # save initial time\n prev_time = stop_times[stop][0]\n\n # loop through all others, comparing to the previous one\n for time in stop_times[stop][1:]:\n diff = (time - prev_time).seconds\n if diff <= bunch_threshold:\n # bunch found, save it\n problems.at[counter] = ['bunch', prev_time, diff, stop]\n counter += 1\n elif diff >= gap_threshold:\n problems.at[counter] = ['gap', prev_time, diff, stop]\n counter += 1\n \n prev_time = time\n \n return problems\n",
"_____no_output_____"
],
[
"# this uses sequential search, could speed up with binary search if needed,\n# but it currently uses hardly any time in comparison to other steps\ndef helper_count(expected_times, observed_times):\n \"\"\" Returns the number of on-time stops found \"\"\"\n\n # set up early/late thresholds (in seconds)\n early_threshold = pd.Timedelta(seconds=1*60) # 1 minute early\n late_threshold = pd.Timedelta(seconds=4*60) # 4 minutes late\n\n count = 0\n for stop in expected_times.columns:\n for expected in expected_times[stop]:\n if pd.isna(expected):\n continue # skip NaN values in the expected schedule\n\n # for each expected time...\n # find first observed time after the early threshold\n found_time = None\n early = expected - early_threshold\n\n # BUG: some schedule data may have stop tags that are not in the inbound\n # or outbound definitions for a route. That would throw a key error here.\n # Example: stop 14148 on route 24\n # current solution is to ignore those stops with the try/except statement\n try:\n for observed in observed_times[stop]:\n if observed >= early:\n found_time = observed\n break\n except:\n continue\n\n # if found time is still None, then all observed times were too early\n # if found_time is before the late threshold then we were on time\n if (not pd.isna(found_time)) and found_time <= (expected + late_threshold):\n # found_time is within the on-time window\n count += 1\n\n return count\n\ndef calculate_ontime(stop_times, schedule):\n \"\"\" Returns the on-time percentage and total scheduled stops for this route \"\"\"\n\n # Save schedules with timestamp data types, set date to match\n inbound_times = schedule.inbound_table\n for col in inbound_times.columns:\n inbound_times[col] = pd.to_datetime(inbound_times[col]).apply(\n lambda dt: dt.replace(year=schedule.date.year, \n month=schedule.date.month, \n day=schedule.date.day))\n\n outbound_times = schedule.outbound_table\n for col in outbound_times.columns:\n outbound_times[col] = pd.to_datetime(outbound_times[col]).apply(\n lambda dt: dt.replace(year=schedule.date.year, \n month=schedule.date.month, \n day=schedule.date.day))\n \n # count times for both inbound and outbound schedules\n on_time_count = (helper_count(inbound_times, stop_times) +\n helper_count(outbound_times, stop_times))\n \n # get total expected count\n total_expected = inbound_times.count().sum() + outbound_times.count().sum()\n\n # return on-time percentage\n return (on_time_count / total_expected), total_expected\n",
"_____no_output_____"
],
[
"def bunch_gap_graph(problems, interval=10):\n \"\"\"\n returns data for a graph of the bunches and gaps throughout the day\n\n problems - the dataframe of bunches and gaps\n\n interval - the number of minutes to bin data into\n\n returns\n {\n \"times\": [time values (x)],\n \"bunches\": [bunch counts (y1)],\n \"gaps\": [gap counts (y2)]\n }\n \"\"\"\n\n # set the time interval\n interval = pd.Timedelta(minutes=interval)\n\n # rest of code doesn't work if there are no bunches or gaps\n # return the empty graph manually\n if len(problems) == 0:\n # generate list of times according to the interval\n start = pd.Timestamp('today').replace(hour=0, minute=0, second=0)\n t = start\n times = []\n while t.day == start.day:\n times.append(str(t.time())[:5])\n t += interval\n\n return {\n \"times\": times,\n \"bunches\": [0] * len(times),\n \"gaps\": [0] * len(times)\n }\n\n # generate the DatetimeIndex needed\n index = pd.DatetimeIndex(problems['time'])\n df = problems.copy()\n df.index = index\n\n # lists for graph data\n bunches = []\n gaps = []\n times = []\n \n # set selection times\n start_date = problems.at[0, 'time'].replace(hour=0, minute=0, second=0)\n select_start = start_date\n select_end = select_start + interval\n\n while select_start.day == start_date.day:\n # get the count of each type of problem in this time interval\n count = df.between_time(select_start.time(), select_end.time())['type'].value_counts()\n\n # append the counts to the data list\n if 'bunch' in count.index:\n bunches.append(int(count['bunch']))\n else:\n bunches.append(0)\n \n if 'gap' in count.index:\n gaps.append(int(count['gap']))\n else:\n gaps.append(0)\n\n # save the start time for the x axis\n times.append(str(select_start.time())[:5])\n \n # increment the selection window\n select_start += interval\n select_end += interval\n \n return {\n \"times\": times,\n \"bunches\": bunches,\n \"gaps\": gaps\n }",
"_____no_output_____"
],
[
"def create_simple_geojson(bunches, rid):\n\n geojson = {'type': 'FeatureCollection',\n 'bunches': create_geojson_features(bunches, rid)}\n\n return geojson\n\ndef create_geojson_features(df, rid):\n \"\"\"\n function to generate list of geojson features\n for plotting vehicle locations on timestamped map\n\n Expects a dataframe containing lat/lon, vid, timestamp\n returns list of basic geojson formatted features:\n\n {\n type: Feature\n geometry: {\n type: Point,\n coordinates:[lat, lon]\n },\n properties: {\n route_id: rid\n time: timestamp\n }\n }\n \"\"\"\n # initializing empty features list\n features = []\n\n # iterating through df to pull coords, vid, timestamp\n # and format for json\n for index, row in df.iterrows():\n feature = {\n 'type': 'Feature',\n 'geometry': {\n 'type':'Point', \n 'coordinates':[row.lon, row.lat]\n },\n 'properties': {\n 'time': row.time.__str__(),\n 'stop': {'stopId': row.stopId.__str__(),\n 'stopTitle': row.title.__str__()},\n 'direction': row.direction.__str__()\n }\n }\n features.append(feature) # adding point to features list\n return features",
"_____no_output_____"
]
],
[
[
"## Run each time to get a new report",
"_____no_output_____"
],
[
"### Timing breakdown",
"_____no_output_____"
]
],
[
[
"# Change this one cell to change to a different route/day\n# Uses 7am to account for the UTC to PST conversion\nrid = \"1\"\nbegin = \"2020/6/1 07:00:00\"\nend = \"2020/6/2 07:00:00\"",
"_____no_output_____"
],
[
"%%time\n# Most time in this cell is waiting on the database responses\n\n# Load schedule and route data\nschedule = Schedule(rid, begin, cnx)\nroute = Route(rid, begin, cnx)\n\n# Load bus location data\nlocations = get_location_data(rid, begin, end, cnx)",
"CPU times: user 674 ms, sys: 34.2 ms, total: 708 ms\nWall time: 4.43 s\n"
],
[
"%%time\n# Apply cleaning function (this usually takes 1-2 minutes)\nlocations = clean_locations(locations, route.stops_table)",
"CPU times: user 1min 3s, sys: 27.8 ms, total: 1min 3s\nWall time: 1min 3s\n"
],
[
"%%time\n# Calculate all times a bus was at each stop\nstop_times = get_stop_times(locations, route)",
"CPU times: user 2.48 s, sys: 1.99 ms, total: 2.48 s\nWall time: 2.49 s\n"
],
[
"%%time\n# Find all bunches and gaps\nproblems = get_bunches_gaps(stop_times, schedule)",
"CPU times: user 2.28 s, sys: 4.04 ms, total: 2.28 s\nWall time: 2.28 s\n"
],
[
"%%time\n# Calculate on-time percentage\non_time, total_scheduled = calculate_ontime(stop_times, schedule)",
"CPU times: user 228 ms, sys: 1.01 ms, total: 229 ms\nWall time: 232 ms\n"
],
[
"%%time\n# Get the bunch/gap graph\nbg_graph = bunch_gap_graph(problems, interval=10)",
"CPU times: user 236 ms, sys: 2.01 ms, total: 238 ms\nWall time: 244 ms\n"
],
[
"%%time\n# Generate the geojson object\nbunch_df = problems[problems.type.eq('bunch')]\nbunch_df = bunch_df.merge(route.stops_table, left_on='stop', right_on='tag', how='left')\n\n# Creating GeoJSON of bunch times / locations\ngeojson = create_simple_geojson(bunch_df, rid)",
"CPU times: user 110 ms, sys: 1.01 ms, total: 111 ms\nWall time: 121 ms\n"
],
[
"# Print results\nprint(f\"--- Report for route {rid} on {str(pd.to_datetime(begin).date())} ---\")\n\ntotal = 0\nfor key in stop_times.keys():\n total += len(stop_times[key])\n\nintervals = total-len(stop_times)\nbunches = len(problems[problems['type'] == 'bunch'])\ngaps = len(problems[problems['type'] == 'gap'])\n\nprint(f\"\\nOut of {intervals} recorded intervals, we found {bunches} bunches and {gaps} gaps\")\nprint(f\"\\t{(bunches/intervals)*100 : .2f}% bunched\")\nprint(f\"\\t{(gaps/intervals)*100 : .2f}% gapped\")\n\nprint(f\"\\nFound {int(on_time * total_scheduled + .5)} on-time stops out of {total_scheduled} scheduled\")\nprint(f\"On-time percentage is {(on_time)*100 : .2f}%\")",
"--- Report for route 1 on 2020-06-01 ---\n\nOut of 11385 recorded intervals, we found 532 bunches and 1873 gaps\n\t 4.67% bunched\n\t 16.45% gapped\n\nFound 952 on-time stops out of 2324 scheduled\nOn-time percentage is 40.96%\n"
]
],
[
[
"### All in one function",
"_____no_output_____"
]
],
[
[
"# Expect this cell to take 60-90 seconds to run\n\n# Change this to change to a different route/day\n# Uses 7am to account for the UTC to PST conversion\n# rid = \"LBUS\"\n# begin = \"2020/6/8 07:00:00\"\n# end = \"2020/6/9 07:00:00\"\n\ndef print_report(rid, date):\n \"\"\"\n Prints a daily report for the given rid and date\n\n rid : (str)\n the route id to generate a report for\n \n date : (str or pd.Datetime)\n the date to generate a report for\n \"\"\"\n\n # get begin and end timestamps for the date\n begin = pd.to_datetime(date).replace(hour=7)\n end = begin + pd.Timedelta(days=1)\n\n # Load schedule and route data\n schedule = Schedule(rid, begin, cnx)\n route = Route(rid, begin, cnx)\n\n # Load bus location data\n locations = get_location_data(rid, begin, end, cnx)\n\n # Apply cleaning function (this usually takes 1-2 minutes)\n locations = clean_locations(locations, route.stops_table)\n\n # Calculate all times a bus was at each stop\n stop_times = get_stop_times(locations, route)\n\n # Find all bunches and gaps\n problems = get_bunches_gaps(stop_times, schedule)\n\n # Calculate on-time percentage\n on_time, total_scheduled = calculate_ontime(stop_times, schedule)\n\n # Print results\n print(f\"--- Report for route {rid} on {str(pd.to_datetime(begin).date())} ---\")\n\n total = 0\n for key in stop_times.keys():\n total += len(stop_times[key])\n\n intervals = total-len(stop_times)\n bunches = len(problems[problems['type'] == 'bunch'])\n gaps = len(problems[problems['type'] == 'gap'])\n\n print(f\"\\nOut of {intervals} recorded intervals, we found {bunches} bunches and {gaps} gaps\")\n print(f\"\\t{(bunches/intervals)*100 :.2f}% bunched\")\n print(f\"\\t{(gaps/intervals)*100 :.2f}% gapped\")\n\n print(f\"\\nFound {int(on_time * total_scheduled + .5)} on-time stops out of {total_scheduled} scheduled\")\n print(f\"On-time percentage is {(on_time)*100 :.2f}%\")\n\n coverage = (total_scheduled * on_time + bunches) / total_scheduled\n print(f\"\\nCoverage is {coverage * 100 :.2f}%\")",
"_____no_output_____"
],
[
"print_report(rid='1', date='2020/6/2')",
"--- Report for route 1 on 2020-06-02 ---\n\nOut of 12348 recorded intervals, we found 731 bunches and 1713 gaps\n\t5.92% bunched\n\t13.87% gapped\n\nFound 1066 on-time stops out of 2324 scheduled\nOn-time percentage is 45.87%\n\nCoverage is 77.32%\n"
],
[
"print_report(rid='5', date='2020/6/2')",
"--- Report for route 5 on 2020-06-02 ---\n\nOut of 3266 recorded intervals, we found 111 bunches and 467 gaps\n\t3.40% bunched\n\t14.30% gapped\n\nFound 528 on-time stops out of 1327 scheduled\nOn-time percentage is 39.79%\n\nCoverage is 48.15%\n"
]
],
[
[
"## Generating report JSON",
"_____no_output_____"
]
],
[
[
"def generate_report(rid, date):\n \"\"\"\n Generates a daily report for the given rid and date\n\n rid : (str)\n the route id to generate a report for\n \n date : (str or pd.Datetime)\n the date to generate a report for\n\n returns a dict of the report info\n \"\"\"\n\n # get begin and end timestamps for the date\n begin = pd.to_datetime(date).replace(hour=7)\n end = begin + pd.Timedelta(days=1)\n # Load schedule and route data\n schedule = Schedule(rid, begin, cnx)\n \n route = Route(rid, begin, cnx)\n \n # Load bus location data\n locations = get_location_data(rid, begin, end, cnx)\n \n # Apply cleaning function (this usually takes 1-2 minutes)\n locations = clean_locations(locations, route.stops_table)\n \n # Calculate all times a bus was at each stop\n stop_times = get_stop_times(locations, route)\n\n # Find all bunches and gaps\n problems = get_bunches_gaps(stop_times, schedule)\n\n # Calculate on-time percentage\n on_time, total_scheduled = calculate_ontime(stop_times, schedule)\n\n # Build result dict\n count_times = 0\n for key in stop_times.keys():\n count_times += len(stop_times[key])\n\n # Number of recorded intervals ( sum(len(each list of time)) - number or lists of times)\n intervals = count_times-len(stop_times)\n\n bunches = len(problems[problems['type'] == 'bunch'])\n gaps = len(problems[problems['type'] == 'gap'])\n\n coverage = (total_scheduled * on_time + bunches) / total_scheduled\n\n # Isolating bunches, merging with stops to assign locations to bunches\n bunch_df = problems[problems.type.eq('bunch')]\n bunch_df = bunch_df.merge(route.stops_table, left_on='stop', right_on='tag', how='left')\n\n # Creating GeoJSON of bunch times / locations\n geojson = create_simple_geojson(bunch_df, rid)\n \n # int/float conversions are because the json library doesn't work with numpy types\n result = {\n 'route_id': rid,\n 'route_name': route.route_name,\n 'route_type': route.route_type,\n 'date': str(pd.to_datetime(date)),\n 'overall_health': 0, # TODO: implement this\n 'num_bunches': bunches,\n 'num_gaps': gaps,\n 'total_intervals': intervals,\n 'on_time_percentage': float(round(on_time * 100, 2)),\n 'scheduled_stops': int(total_scheduled),\n 'coverage': float(round(coverage * 100, 2)),\n # line_chart contains all data needed to generate the line chart\n 'line_chart': bunch_gap_graph(problems, interval=10),\n # route_table is an array of all rows that should show up in the table\n # it will be filled in after all reports are generated\n 'route_table': [\n {\n 'route_id': rid,\n 'route_name': route.route_name,\n 'bunches': bunches,\n 'gaps': gaps,\n 'on-time': float(round(on_time * 100, 2)),\n 'coverage': float(round(coverage * 100, 2))\n }\n ],\n 'map_data': geojson\n }\n\n return result",
"_____no_output_____"
],
[
"# Route 1 usage example\nreport_1 = generate_report(rid='1', date='2020/6/1')",
"_____no_output_____"
],
[
"report_1.keys()",
"_____no_output_____"
],
[
"# Route 714 usage example\nreport_714 = generate_report(rid='714', date='2020/6/1')",
"_____no_output_____"
],
[
"print('Route:', report_714['route_name'])\nprint('Bunches:', report_714['num_bunches'])\nprint('Gaps:', report_714['num_gaps'])\nprint('On-time:', report_714['on_time_percentage'])\nprint('Coverage:', report_714['coverage'])",
"Route: 714-Bart Early Bird\nBunches: 0\nGaps: 0\nOn-time: 55.56\nCoverage: 55.56\n"
]
],
[
[
"# Generating report for all routes",
"_____no_output_____"
]
],
[
[
"def get_active_routes(date):\n \"\"\"\n returns a list of all active route id's for the given date\n \"\"\"\n\n query = \"\"\"\n SELECT DISTINCT rid\n FROM routes\n WHERE begin_date <= %s ::TIMESTAMP AND\n (end_date IS NULL OR end_date > %s ::TIMESTAMP);\n \"\"\"\n\n cursor.execute(query, (date, date))\n return [result[0] for result in cursor.fetchall()]",
"_____no_output_____"
],
[
"%%time\n# since this is not optimized yet, this takes about 20 minutes\n\n# choose a day\ndate = '2020-6-1'\n\n# get all active routes \nroute_ids = get_active_routes(date)\nroute_ids.sort()\n\n# get the report for all routes\nall_reports = []\nfor rid in route_ids:\n try:\n all_reports.append(generate_report(rid, date))\n print(\"Generated report for route\", rid)\n except: # in case any particular route throws an error\n print(f\"Route {rid} failed\")",
"Generated report for route 1\nGenerated report for route 12\nGenerated report for route 14\nGenerated report for route 14R\nGenerated report for route 19\nGenerated report for route 22\nGenerated report for route 24\nGenerated report for route 25\nGenerated report for route 28\nGenerated report for route 29\nGenerated report for route 38\nGenerated report for route 38R\nGenerated report for route 44\nGenerated report for route 49\nGenerated report for route 5\nGenerated report for route 54\nGenerated report for route 714\nGenerated report for route 8\nGenerated report for route 9\nGenerated report for route 90\nGenerated report for route 91\nGenerated report for route 9R\nGenerated report for route LBUS\nGenerated report for route L_OWL\nGenerated report for route MBUS\nGenerated report for route NBUS\nGenerated report for route N_OWL\nGenerated report for route TBUS\nCPU times: user 20min 16s, sys: 1.16 s, total: 20min 17s\nWall time: 20min 57s\n"
],
[
"len(all_reports)",
"_____no_output_____"
],
[
"# generate aggregate reports\n\n# read existing reports into a dataframe to work with them easily\ndf = pd.DataFrame(all_reports)\n\n# for each aggregate type\ntypes = ['All'] + list(df['route_type'].unique())\ncounter = 0\nfor t in types:\n # filter df to the routes we are adding up\n if t == 'All':\n filtered = df\n else:\n filtered = df[df['route_type'] == t]\n\n # on-time percentage: sum([all on-time stops]) / sum([all scheduled stops])\n count_on_time = (filtered['on_time_percentage'] * filtered['scheduled_stops']).sum()\n on_time_perc = count_on_time / filtered['scheduled_stops'].sum()\n\n # coverage: (sum([all on-time stops]) + sum([all bunches])) / sum([all scheduled stops])\n coverage = (count_on_time + filtered['num_bunches'].sum()) / filtered['scheduled_stops'].sum()\n\n # aggregate the graph object\n # x-axis is same for all\n first = filtered.index[0]\n times = filtered.at[first, 'line_chart']['times']\n\n # sum up all y-axis values\n bunches = pd.Series(filtered.at[first, 'line_chart']['bunches'])\n gaps = pd.Series(filtered.at[first, 'line_chart']['gaps'])\n\n # same pattern for the geojson list\n geojson = filtered.at[first, 'map_data']['bunches']\n\n for i, report in filtered[1:].iterrows():\n # pd.Series adds all values in the lists together\n bunches += pd.Series(report['line_chart']['bunches'])\n gaps += pd.Series(report['line_chart']['gaps'])\n\n # lists concatenate together\n geojson += report['map_data']['bunches']\n\n # save a new report object\n new_report = {\n 'route_id': t,\n 'route_name': t,\n 'route_type': t,\n 'date': all_reports[0]['date'],\n 'overall_health': 0, # TODO, implement. Either avg of all routes, or recalculate using the aggregate statistics\n 'num_bunches': int(filtered['num_bunches'].sum()),\n 'num_gaps': int(filtered['num_gaps'].sum()),\n 'total_intervals': int(filtered['total_intervals'].sum()),\n 'on_time_percentage': float(round(on_time_perc, 2)),\n 'scheduled_stops': int(filtered['scheduled_stops'].sum()),\n 'coverage': float(round(coverage, 2)),\n 'line_chart': {\n 'times': times,\n 'bunches': list(bunches),\n 'gaps': list(gaps)\n },\n 'route_table': [\n {\n 'route_id': t,\n 'route_name': t,\n 'bunches': int(filtered['num_bunches'].sum()),\n 'gaps': int(filtered['num_gaps'].sum()),\n 'on-time': float(round(on_time_perc, 2)),\n 'coverage': float(round(coverage, 2))\n }\n ],\n 'map_data': {\n # 'type': 'FeatureCollection',\n # 'bunches': geojson\n }\n }\n \n # put aggregate reports at the beginning of the list\n all_reports.insert(counter, new_report)\n counter += 1\n\n# Add route_table rows to the aggregate report\n# Set up a dict to hold each aggregate table\ntables = {}\nfor t in types:\n tables[t] = []\n\n# Add rows from each report\nfor report in all_reports:\n # add to the route type's table\n tables[report['route_type']].append(report['route_table'][0])\n\n # also add to all routes table\n if report['route_id'] != \"All\":\n # if statement needed to not duplicate the \"All\" row twice\n tables['All'].append(report['route_table'][0])\n\n\n# find matching report and set the table there\nfor key in tables.keys():\n for report in all_reports:\n if report['route_id'] == key:\n # override it because the new table includes the row that was already there\n report['route_table'] = tables[key]\n break # only 1 report needs each aggregate table",
"_____no_output_____"
],
[
"len(all_reports)",
"_____no_output_____"
],
[
"# save the all_reports object to a file so I can download it\nwith open(f'report_{date}_slimmed.json', 'w') as outfile:\n json.dump(all_reports, outfile)",
"_____no_output_____"
],
[
"# rollback code if needed for testing: remove all aggregate reports\nfor i in reversed(range(len(all_reports))):\n if all_reports[i]['route_id'] in types:\n del(all_reports[i])",
"_____no_output_____"
],
[
"# save report in the database\nquery = \"\"\"\n INSERT INTO reports (date, report)\n VALUES (%s, %s);\n\"\"\"\ncursor.execute(query, (date, json.dumps(all_reports)))\ncnx.commit()",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a088813dbb7d17547fe11bd2dbf4f86e6904cbb
| 236,008 |
ipynb
|
Jupyter Notebook
|
feature-engineering-Transformations/Box-Cox_Transform.ipynb
|
DheerajKumar97/Feature-Engineering-Techniques---Python--ML
|
12a13eecc3838bff57ed76131d07328eaf9e1d79
|
[
"Apache-2.0"
] | 3 |
2020-04-29T18:13:41.000Z
|
2020-05-02T16:23:23.000Z
|
feature-engineering-Transformations/Box-Cox_Transform.ipynb
|
DheerajKumar97/Feature-Engineering-Techniques---Python--ML
|
12a13eecc3838bff57ed76131d07328eaf9e1d79
|
[
"Apache-2.0"
] | null | null | null |
feature-engineering-Transformations/Box-Cox_Transform.ipynb
|
DheerajKumar97/Feature-Engineering-Techniques---Python--ML
|
12a13eecc3838bff57ed76131d07328eaf9e1d79
|
[
"Apache-2.0"
] | 1 |
2020-06-01T02:25:30.000Z
|
2020-06-01T02:25:30.000Z
| 131.62744 | 89,857 | 0.814078 |
[
[
[
"import json\nimport pandas as pd\nfrom scipy import stats\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib notebook\nsns.set_style('whitegrid')",
"_____no_output_____"
],
[
"biz_f = open('data/yelp/v6/yelp_dataset_challenge_academic_dataset/yelp_academic_dataset_business.json')\nbiz_df = pd.DataFrame([json.loads(x) for x in biz_f.readlines()])\nbiz_f.close()",
"_____no_output_____"
],
[
"# Box-Cox transform assumes that input data is positive. \n# Check the min to make sure.\nbiz_df['review_count'].min()",
"_____no_output_____"
],
[
"# Setting input parameter lmbda to 0 gives us the log transform (without constant offset)\nrc_log = stats.boxcox(biz_df['review_count'], lmbda=0)\n# By default, the scipy implementation of Box-Cox transform finds the lmbda parameter\n# that will make the output the closest to a normal distribution\nrc_bc, bc_params = stats.boxcox(biz_df['review_count'])\nbc_params",
"_____no_output_____"
],
[
"biz_df['rc_bc'] = rc_bc\nbiz_df['rc_log'] = rc_log",
"_____no_output_____"
],
[
"fig, (ax1, ax2, ax3) = plt.subplots(3,1)\n# original review count histogram\nbiz_df['review_count'].hist(ax=ax1, bins=100)\nax1.set_yscale('log')\nax1.tick_params(labelsize=14)\nax1.set_title('Review Counts Histogram', fontsize=14)\nax1.set_xlabel('')\nax1.set_ylabel('Occurrence', fontsize=14)\n# review count after log transform\nbiz_df['rc_log'].hist(ax=ax2, bins=100)\nax2.set_yscale('log')\nax2.tick_params(labelsize=14)\nax2.set_title('Log Transformed Counts Histogram', fontsize=14)\nax2.set_xlabel('')\nax2.set_ylabel('Occurrence', fontsize=14)# review count after optimal Box-Cox transform\nbiz_df['rc_bc'].hist(ax=ax3, bins=100)\nax3.set_yscale('log')\nax3.tick_params(labelsize=14)\nax3.set_title('Box-Cox Transformed Counts Histogram', fontsize=14)\nax3.set_xlabel('')\nax3.set_ylabel('Occurrence', fontsize=14)",
"_____no_output_____"
],
[
"fig.savefig('box-cox-hist.jpg')",
"_____no_output_____"
],
[
"fig2, (ax1, ax2, ax3) = plt.subplots(3,1)\nprob1 = stats.probplot(biz_df['review_count'], dist=stats.norm, plot=ax1)\nax1.set_xlabel('')\nax1.set_title('Probplot against normal distribution')\nprob2 = stats.probplot(biz_df['rc_log'], dist=stats.norm, plot=ax2)\nax2.set_xlabel('')\nax2.set_title('Probplot after log transform')\nprob3 = stats.probplot(biz_df['rc_bc'], dist=stats.norm, plot=ax3)\nax3.set_xlabel('Theoretical quantiles')\nax3.set_title('Probplot after Box-Cox transform')",
"_____no_output_____"
],
[
"fig2.savefig('box-cox-probplot.jpg')",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a088e203351e1169868a4470a6365806074a4ea
| 31,766 |
ipynb
|
Jupyter Notebook
|
module4-logistic-regression/Copy_of_LS_DS_214_assignment.ipynb
|
mkhalil7625/DS-Unit-2-Linear-Models
|
93331db75429da09a271b7fb1c6ad58aff2e7dd4
|
[
"MIT"
] | null | null | null |
module4-logistic-regression/Copy_of_LS_DS_214_assignment.ipynb
|
mkhalil7625/DS-Unit-2-Linear-Models
|
93331db75429da09a271b7fb1c6ad58aff2e7dd4
|
[
"MIT"
] | null | null | null |
module4-logistic-regression/Copy_of_LS_DS_214_assignment.ipynb
|
mkhalil7625/DS-Unit-2-Linear-Models
|
93331db75429da09a271b7fb1c6ad58aff2e7dd4
|
[
"MIT"
] | null | null | null | 34.083691 | 290 | 0.321161 |
[
[
[
"<a href=\"https://colab.research.google.com/github/mkhalil7625/DS-Unit-2-Linear-Models/blob/master/module4-logistic-regression/Copy_of_LS_DS_214_assignment.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"Lambda School Data Science\n\n*Unit 2, Sprint 1, Module 4*\n\n---",
"_____no_output_____"
],
[
"# Logistic Regression\n\n\n## Assignment 🌯\n\nYou'll use a [**dataset of 400+ burrito reviews**](https://srcole.github.io/100burritos/). How accurately can you predict whether a burrito is rated 'Great'?\n\n> We have developed a 10-dimensional system for rating the burritos in San Diego. ... Generate models for what makes a burrito great and investigate correlations in its dimensions.\n\n- [ ] Do train/validate/test split. Train on reviews from 2016 & earlier. Validate on 2017. Test on 2018 & later.\n- [ ] Begin with baselines for classification.\n- [ ] Use scikit-learn for logistic regression.\n- [ ] Get your model's validation accuracy. (Multiple times if you try multiple iterations.)\n- [ ] Get your model's test accuracy. (One time, at the end.)\n- [ ] Commit your notebook to your fork of the GitHub repo.\n\n\n## Stretch Goals\n\n- [ ] Add your own stretch goal(s) !\n- [ ] Make exploratory visualizations.\n- [ ] Do one-hot encoding.\n- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).\n- [ ] Get and plot your coefficients.\n- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).",
"_____no_output_____"
]
],
[
[
"%%capture\nimport sys\n\n# If you're on Colab:\nif 'google.colab' in sys.modules:\n DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/'\n !pip install category_encoders==2.*\n\n# If you're working locally:\nelse:\n DATA_PATH = '../data/'",
"_____no_output_____"
],
[
"# Load data downloaded from https://srcole.github.io/100burritos/\nimport pandas as pd\ndf = pd.read_csv(DATA_PATH+'burritos/burritos.csv')",
"_____no_output_____"
],
[
"# Derive binary classification target:\n# We define a 'Great' burrito as having an\n# overall rating of 4 or higher, on a 5 point scale.\n# Drop unrated burritos.\ndf = df.dropna(subset=['overall'])\ndf['Great'] = df['overall'] >= 4",
"_____no_output_____"
],
[
"# Clean/combine the Burrito categories\ndf['Burrito'] = df['Burrito'].str.lower()\n\ncalifornia = df['Burrito'].str.contains('california')\nasada = df['Burrito'].str.contains('asada')\nsurf = df['Burrito'].str.contains('surf')\ncarnitas = df['Burrito'].str.contains('carnitas')\n\ndf.loc[california, 'Burrito'] = 'California'\ndf.loc[asada, 'Burrito'] = 'Asada'\ndf.loc[surf, 'Burrito'] = 'Surf & Turf'\ndf.loc[carnitas, 'Burrito'] = 'Carnitas'\ndf.loc[~california & ~asada & ~surf & ~carnitas, 'Burrito'] = 'Other'",
"_____no_output_____"
],
[
"# Drop some high cardinality categoricals\ndf = df.drop(columns=['Notes', 'Location', 'Reviewer', 'Address', 'URL', 'Neighborhood'])",
"_____no_output_____"
],
[
"# Drop some columns to prevent \"leakage\"\ndf = df.drop(columns=['Rec', 'overall'])",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df['Date']=pd.to_datetime(df['Date'],infer_datetime_format=True)",
"_____no_output_____"
],
[
"train = df[df['Date'].dt.year <= 2016]\nval = df[df['Date'].dt.year == 2017]\ntest = df[df['Date'].dt.year >2017]",
"_____no_output_____"
],
[
"train['Date'].dt.year.value_counts()",
"_____no_output_____"
],
[
"#baseline (majority class)\ntarget = 'Great'\ny_train = train[target]\ny_val = val[target]\ny_test = test[target]\ny_train.value_counts(normalize=True)\n",
"_____no_output_____"
],
[
"features = ['Fillings','Meat','Meat:filling','Uniformity',\t'Salsa','Tortilla','Temp','Cost']\nX_train = train[features]\nX_val = val[features]\nX_test = test[features]",
"_____no_output_____"
],
[
"X_train.head()",
"_____no_output_____"
],
[
"from sklearn.impute import SimpleImputer\n\nimputer = SimpleImputer(strategy='mean')\nX_train_imputed = imputer.fit_transform(X_train)\nX_val_imputed = imputer.transform(X_val)\nX_test_imputed = imputer.transform(X_test)",
"_____no_output_____"
],
[
"from sklearn.preprocessing import StandardScaler\nscaler = StandardScaler()\nX_train_scaled = scaler.fit_transform(X_train_imputed)\nX_val_scaled = scaler.transform(X_val_imputed)\nX_test_scaled = scaler.transform(X_test_imputed)",
"_____no_output_____"
],
[
"from sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import accuracy_score\n\nmodel = LogisticRegression(solver='lbfgs')\nmodel.fit(X_train_scaled,y_train)\ny_pred = model.predict(X_test_scaled)\n\nprint('Validation Accuracy', model.score(X_val_scaled, y_val))\nprint('Test Accuracy', accuracy_score(y_test, y_pred))",
"Validation Accuracy 0.8588235294117647\nTest Accuracy 0.7894736842105263\n"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a0891ddd4ab778d228a7d3c3452d324fa48fa0a
| 843,162 |
ipynb
|
Jupyter Notebook
|
Tensorflow_Projects/Classification_Tasks/Fruits_Classification/Fruits_360_test_1.ipynb
|
JagadeeshaSV/Computer_Vision_Projects
|
0447757ec514461ecff0e926a462fb0493a6ca13
|
[
"MIT"
] | null | null | null |
Tensorflow_Projects/Classification_Tasks/Fruits_Classification/Fruits_360_test_1.ipynb
|
JagadeeshaSV/Computer_Vision_Projects
|
0447757ec514461ecff0e926a462fb0493a6ca13
|
[
"MIT"
] | null | null | null |
Tensorflow_Projects/Classification_Tasks/Fruits_Classification/Fruits_360_test_1.ipynb
|
JagadeeshaSV/Computer_Vision_Projects
|
0447757ec514461ecff0e926a462fb0493a6ca13
|
[
"MIT"
] | null | null | null | 1,426.670051 | 392,048 | 0.954709 |
[
[
[
"# Datset source\n# https://www.kaggle.com/moltean/fruits",
"_____no_output_____"
],
[
"# Problem Statement: Multiclass classification problem for 131 categories of fruits and vegetables",
"_____no_output_____"
],
[
"# Created using the following tutorial template\n# https://keras.io/examples/vision/image_classification_from_scratch/",
"_____no_output_____"
],
[
"# import required libraries\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport os\nimport PIL\nimport tensorflow as tf\n\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\nfrom tensorflow.keras.models import Sequential",
"_____no_output_____"
],
[
"# Define dataset paths and image properties\n\nbatch_size = 32\nimg_height = 100\nimg_width = 100\nimage_size = (img_height, img_width)\ntrain_data_dir = 'dataset/fruits-360/Training/'\ntest_data_dir = 'dataset/fruits-360/Test/'",
"_____no_output_____"
],
[
"# Create a train dataset\n\ntrain_ds = tf.keras.preprocessing.image_dataset_from_directory(\n train_data_dir,\n seed=2,\n image_size=(img_height, img_width),\n batch_size=batch_size)",
"Found 67692 files belonging to 131 classes.\n"
],
[
"# Create a test dataset\n\ntest_ds = tf.keras.preprocessing.image_dataset_from_directory(\n test_data_dir,\n seed=2,\n image_size=(img_height, img_width),\n batch_size=batch_size)",
"Found 22689 files belonging to 131 classes.\n"
],
[
"# Print train data class names\n\ntrain_class_names = train_ds.class_names\nprint(train_class_names)",
"['Apple Braeburn', 'Apple Crimson Snow', 'Apple Golden 1', 'Apple Golden 2', 'Apple Golden 3', 'Apple Granny Smith', 'Apple Pink Lady', 'Apple Red 1', 'Apple Red 2', 'Apple Red 3', 'Apple Red Delicious', 'Apple Red Yellow 1', 'Apple Red Yellow 2', 'Apricot', 'Avocado', 'Avocado ripe', 'Banana', 'Banana Lady Finger', 'Banana Red', 'Beetroot', 'Blueberry', 'Cactus fruit', 'Cantaloupe 1', 'Cantaloupe 2', 'Carambula', 'Cauliflower', 'Cherry 1', 'Cherry 2', 'Cherry Rainier', 'Cherry Wax Black', 'Cherry Wax Red', 'Cherry Wax Yellow', 'Chestnut', 'Clementine', 'Cocos', 'Corn', 'Corn Husk', 'Cucumber Ripe', 'Cucumber Ripe 2', 'Dates', 'Eggplant', 'Fig', 'Ginger Root', 'Granadilla', 'Grape Blue', 'Grape Pink', 'Grape White', 'Grape White 2', 'Grape White 3', 'Grape White 4', 'Grapefruit Pink', 'Grapefruit White', 'Guava', 'Hazelnut', 'Huckleberry', 'Kaki', 'Kiwi', 'Kohlrabi', 'Kumquats', 'Lemon', 'Lemon Meyer', 'Limes', 'Lychee', 'Mandarine', 'Mango', 'Mango Red', 'Mangostan', 'Maracuja', 'Melon Piel de Sapo', 'Mulberry', 'Nectarine', 'Nectarine Flat', 'Nut Forest', 'Nut Pecan', 'Onion Red', 'Onion Red Peeled', 'Onion White', 'Orange', 'Papaya', 'Passion Fruit', 'Peach', 'Peach 2', 'Peach Flat', 'Pear', 'Pear 2', 'Pear Abate', 'Pear Forelle', 'Pear Kaiser', 'Pear Monster', 'Pear Red', 'Pear Stone', 'Pear Williams', 'Pepino', 'Pepper Green', 'Pepper Orange', 'Pepper Red', 'Pepper Yellow', 'Physalis', 'Physalis with Husk', 'Pineapple', 'Pineapple Mini', 'Pitahaya Red', 'Plum', 'Plum 2', 'Plum 3', 'Pomegranate', 'Pomelo Sweetie', 'Potato Red', 'Potato Red Washed', 'Potato Sweet', 'Potato White', 'Quince', 'Rambutan', 'Raspberry', 'Redcurrant', 'Salak', 'Strawberry', 'Strawberry Wedge', 'Tamarillo', 'Tangelo', 'Tomato 1', 'Tomato 2', 'Tomato 3', 'Tomato 4', 'Tomato Cherry Red', 'Tomato Heart', 'Tomato Maroon', 'Tomato Yellow', 'Tomato not Ripened', 'Walnut', 'Watermelon']\n"
],
[
"# Print test data class names\n\ntest_class_names = train_ds.class_names\nprint(test_class_names)",
"['Apple Braeburn', 'Apple Crimson Snow', 'Apple Golden 1', 'Apple Golden 2', 'Apple Golden 3', 'Apple Granny Smith', 'Apple Pink Lady', 'Apple Red 1', 'Apple Red 2', 'Apple Red 3', 'Apple Red Delicious', 'Apple Red Yellow 1', 'Apple Red Yellow 2', 'Apricot', 'Avocado', 'Avocado ripe', 'Banana', 'Banana Lady Finger', 'Banana Red', 'Beetroot', 'Blueberry', 'Cactus fruit', 'Cantaloupe 1', 'Cantaloupe 2', 'Carambula', 'Cauliflower', 'Cherry 1', 'Cherry 2', 'Cherry Rainier', 'Cherry Wax Black', 'Cherry Wax Red', 'Cherry Wax Yellow', 'Chestnut', 'Clementine', 'Cocos', 'Corn', 'Corn Husk', 'Cucumber Ripe', 'Cucumber Ripe 2', 'Dates', 'Eggplant', 'Fig', 'Ginger Root', 'Granadilla', 'Grape Blue', 'Grape Pink', 'Grape White', 'Grape White 2', 'Grape White 3', 'Grape White 4', 'Grapefruit Pink', 'Grapefruit White', 'Guava', 'Hazelnut', 'Huckleberry', 'Kaki', 'Kiwi', 'Kohlrabi', 'Kumquats', 'Lemon', 'Lemon Meyer', 'Limes', 'Lychee', 'Mandarine', 'Mango', 'Mango Red', 'Mangostan', 'Maracuja', 'Melon Piel de Sapo', 'Mulberry', 'Nectarine', 'Nectarine Flat', 'Nut Forest', 'Nut Pecan', 'Onion Red', 'Onion Red Peeled', 'Onion White', 'Orange', 'Papaya', 'Passion Fruit', 'Peach', 'Peach 2', 'Peach Flat', 'Pear', 'Pear 2', 'Pear Abate', 'Pear Forelle', 'Pear Kaiser', 'Pear Monster', 'Pear Red', 'Pear Stone', 'Pear Williams', 'Pepino', 'Pepper Green', 'Pepper Orange', 'Pepper Red', 'Pepper Yellow', 'Physalis', 'Physalis with Husk', 'Pineapple', 'Pineapple Mini', 'Pitahaya Red', 'Plum', 'Plum 2', 'Plum 3', 'Pomegranate', 'Pomelo Sweetie', 'Potato Red', 'Potato Red Washed', 'Potato Sweet', 'Potato White', 'Quince', 'Rambutan', 'Raspberry', 'Redcurrant', 'Salak', 'Strawberry', 'Strawberry Wedge', 'Tamarillo', 'Tangelo', 'Tomato 1', 'Tomato 2', 'Tomato 3', 'Tomato 4', 'Tomato Cherry Red', 'Tomato Heart', 'Tomato Maroon', 'Tomato Yellow', 'Tomato not Ripened', 'Walnut', 'Watermelon']\n"
],
[
"# Print a few images\n\nimport matplotlib.pyplot as plt\n\nplt.figure(figsize=(10, 10))\nfor images, labels in train_ds.take(1):\n for i in range(9):\n ax = plt.subplot(3, 3, i + 1)\n plt.imshow(images[i].numpy().astype(\"uint8\"))\n plt.title(train_class_names[labels[i]])\n plt.axis(\"off\")",
"_____no_output_____"
],
[
"# Use data augmentation\n\ndata_augmentation = keras.Sequential(\n [\n layers.experimental.preprocessing.RandomFlip(\"horizontal\"),\n layers.experimental.preprocessing.RandomRotation(0.1),\n ]\n)",
"_____no_output_____"
],
[
"# Plot an image after data augmentation\n\nplt.figure(figsize=(10, 10))\nfor images, _ in train_ds.take(1):\n for i in range(9):\n augmented_images = data_augmentation(images)\n ax = plt.subplot(3, 3, i + 1)\n plt.imshow(augmented_images[0].numpy().astype(\"uint8\"))\n plt.axis(\"off\")",
"_____no_output_____"
],
[
"# Buffered prefetching to mitigate I/O blocking\n\ntrain_ds = train_ds.prefetch(buffer_size=32)\ntest_ds = test_ds.prefetch(buffer_size=32)",
"_____no_output_____"
],
[
"# Use this for tf 2.3 gpu \n# !pip install numpy==1.19.5",
"_____no_output_____"
],
[
"# Small version of Xception network\n\ndef make_model(input_shape, num_classes):\n inputs = keras.Input(shape=input_shape)\n # Image augmentation block\n x = data_augmentation(inputs)\n\n # Entry block\n x = layers.experimental.preprocessing.Rescaling(1.0 / 255)(x)\n x = layers.Conv2D(32, 3, strides=2, padding=\"same\")(x)\n x = layers.BatchNormalization()(x)\n x = layers.Activation(\"relu\")(x)\n\n x = layers.Conv2D(64, 3, padding=\"same\")(x)\n x = layers.BatchNormalization()(x)\n x = layers.Activation(\"relu\")(x)\n\n previous_block_activation = x # Set aside residual\n\n for size in [128, 256, 512, 728]:\n x = layers.Activation(\"relu\")(x)\n x = layers.SeparableConv2D(size, 3, padding=\"same\")(x)\n x = layers.BatchNormalization()(x)\n\n x = layers.Activation(\"relu\")(x)\n x = layers.SeparableConv2D(size, 3, padding=\"same\")(x)\n x = layers.BatchNormalization()(x)\n\n x = layers.MaxPooling2D(3, strides=2, padding=\"same\")(x)\n\n # Project residual\n residual = layers.Conv2D(size, 1, strides=2, padding=\"same\")(\n previous_block_activation\n )\n x = layers.add([x, residual]) # Add back residual\n previous_block_activation = x # Set aside next residual\n\n x = layers.SeparableConv2D(1024, 3, padding=\"same\")(x)\n x = layers.BatchNormalization()(x)\n x = layers.Activation(\"relu\")(x)\n\n x = layers.GlobalAveragePooling2D()(x)\n if num_classes == 2:\n activation = \"sigmoid\"\n units = 1\n else:\n activation = \"softmax\"\n units = num_classes\n\n x = layers.Dropout(0.5)(x)\n outputs = layers.Dense(units, activation=activation)(x)\n return keras.Model(inputs, outputs)\n\n\nmodel = make_model(input_shape=image_size + (3,), num_classes=131)\nkeras.utils.plot_model(model, show_shapes=True)\n",
"('Failed to import pydot. You must `pip install pydot` and install graphviz (https://graphviz.gitlab.io/download/), ', 'for `pydotprint` to work.')\n"
],
[
"model.summary()",
"Model: \"model\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) [(None, 100, 100, 3) 0 \n__________________________________________________________________________________________________\nsequential (Sequential) (None, 100, 100, 3) 0 input_1[0][0] \n__________________________________________________________________________________________________\nrescaling (Rescaling) (None, 100, 100, 3) 0 sequential[0][0] \n__________________________________________________________________________________________________\nconv2d (Conv2D) (None, 50, 50, 32) 896 rescaling[0][0] \n__________________________________________________________________________________________________\nbatch_normalization (BatchNorma (None, 50, 50, 32) 128 conv2d[0][0] \n__________________________________________________________________________________________________\nactivation (Activation) (None, 50, 50, 32) 0 batch_normalization[0][0] \n__________________________________________________________________________________________________\nconv2d_1 (Conv2D) (None, 50, 50, 64) 18496 activation[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_1 (BatchNor (None, 50, 50, 64) 256 conv2d_1[0][0] \n__________________________________________________________________________________________________\nactivation_1 (Activation) (None, 50, 50, 64) 0 batch_normalization_1[0][0] \n__________________________________________________________________________________________________\nactivation_2 (Activation) (None, 50, 50, 64) 0 activation_1[0][0] \n__________________________________________________________________________________________________\nseparable_conv2d (SeparableConv (None, 50, 50, 128) 8896 activation_2[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_2 (BatchNor (None, 50, 50, 128) 512 separable_conv2d[0][0] \n__________________________________________________________________________________________________\nactivation_3 (Activation) (None, 50, 50, 128) 0 batch_normalization_2[0][0] \n__________________________________________________________________________________________________\nseparable_conv2d_1 (SeparableCo (None, 50, 50, 128) 17664 activation_3[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_3 (BatchNor (None, 50, 50, 128) 512 separable_conv2d_1[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d (MaxPooling2D) (None, 25, 25, 128) 0 batch_normalization_3[0][0] \n__________________________________________________________________________________________________\nconv2d_2 (Conv2D) (None, 25, 25, 128) 8320 activation_1[0][0] \n__________________________________________________________________________________________________\nadd (Add) (None, 25, 25, 128) 0 max_pooling2d[0][0] \n conv2d_2[0][0] \n__________________________________________________________________________________________________\nactivation_4 (Activation) (None, 25, 25, 128) 0 add[0][0] \n__________________________________________________________________________________________________\nseparable_conv2d_2 (SeparableCo (None, 25, 25, 256) 34176 
activation_4[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_4 (BatchNor (None, 25, 25, 256) 1024 separable_conv2d_2[0][0] \n__________________________________________________________________________________________________\nactivation_5 (Activation) (None, 25, 25, 256) 0 batch_normalization_4[0][0] \n__________________________________________________________________________________________________\nseparable_conv2d_3 (SeparableCo (None, 25, 25, 256) 68096 activation_5[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_5 (BatchNor (None, 25, 25, 256) 1024 separable_conv2d_3[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_1 (MaxPooling2D) (None, 13, 13, 256) 0 batch_normalization_5[0][0] \n__________________________________________________________________________________________________\nconv2d_3 (Conv2D) (None, 13, 13, 256) 33024 add[0][0] \n__________________________________________________________________________________________________\nadd_1 (Add) (None, 13, 13, 256) 0 max_pooling2d_1[0][0] \n conv2d_3[0][0] \n__________________________________________________________________________________________________\nactivation_6 (Activation) (None, 13, 13, 256) 0 add_1[0][0] \n__________________________________________________________________________________________________\nseparable_conv2d_4 (SeparableCo (None, 13, 13, 512) 133888 activation_6[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_6 (BatchNor (None, 13, 13, 512) 2048 separable_conv2d_4[0][0] \n__________________________________________________________________________________________________\nactivation_7 (Activation) (None, 13, 13, 512) 0 batch_normalization_6[0][0] \n__________________________________________________________________________________________________\nseparable_conv2d_5 (SeparableCo (None, 13, 13, 512) 267264 activation_7[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_7 (BatchNor (None, 13, 13, 512) 2048 separable_conv2d_5[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_2 (MaxPooling2D) (None, 7, 7, 512) 0 batch_normalization_7[0][0] \n__________________________________________________________________________________________________\nconv2d_4 (Conv2D) (None, 7, 7, 512) 131584 add_1[0][0] \n__________________________________________________________________________________________________\nadd_2 (Add) (None, 7, 7, 512) 0 max_pooling2d_2[0][0] \n conv2d_4[0][0] \n__________________________________________________________________________________________________\nactivation_8 (Activation) (None, 7, 7, 512) 0 add_2[0][0] \n__________________________________________________________________________________________________\nseparable_conv2d_6 (SeparableCo (None, 7, 7, 728) 378072 activation_8[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_8 (BatchNor (None, 7, 7, 728) 2912 separable_conv2d_6[0][0] \n__________________________________________________________________________________________________\nactivation_9 (Activation) (None, 7, 7, 728) 0 batch_normalization_8[0][0] 
\n__________________________________________________________________________________________________\nseparable_conv2d_7 (SeparableCo (None, 7, 7, 728) 537264 activation_9[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_9 (BatchNor (None, 7, 7, 728) 2912 separable_conv2d_7[0][0] \n__________________________________________________________________________________________________\nmax_pooling2d_3 (MaxPooling2D) (None, 4, 4, 728) 0 batch_normalization_9[0][0] \n__________________________________________________________________________________________________\nconv2d_5 (Conv2D) (None, 4, 4, 728) 373464 add_2[0][0] \n__________________________________________________________________________________________________\nadd_3 (Add) (None, 4, 4, 728) 0 max_pooling2d_3[0][0] \n conv2d_5[0][0] \n__________________________________________________________________________________________________\nseparable_conv2d_8 (SeparableCo (None, 4, 4, 1024) 753048 add_3[0][0] \n__________________________________________________________________________________________________\nbatch_normalization_10 (BatchNo (None, 4, 4, 1024) 4096 separable_conv2d_8[0][0] \n__________________________________________________________________________________________________\nactivation_10 (Activation) (None, 4, 4, 1024) 0 batch_normalization_10[0][0] \n__________________________________________________________________________________________________\nglobal_average_pooling2d (Globa (None, 1024) 0 activation_10[0][0] \n__________________________________________________________________________________________________\ndropout (Dropout) (None, 1024) 0 global_average_pooling2d[0][0] \n__________________________________________________________________________________________________\ndense (Dense) (None, 131) 134275 dropout[0][0] \n==================================================================================================\nTotal params: 2,915,899\nTrainable params: 2,907,163\nNon-trainable params: 8,736\n__________________________________________________________________________________________________\n"
],
[
"# Train the model\n\nepochs = 5\n\ncallbacks = [\n keras.callbacks.ModelCheckpoint(\"save_at_{epoch}.h5\"),\n]\nmodel.compile(\n optimizer=keras.optimizers.Adam(1e-3),\n loss=\"SparseCategoricalCrossentropy\",\n metrics=[\"accuracy\"],\n)\nmodel.fit(\n train_ds, epochs=epochs, callbacks=callbacks, validation_data=test_ds,\n)\n",
"Epoch 1/5\n2116/2116 [==============================] - 103s 47ms/step - loss: 1.1721 - accuracy: 0.6887 - val_loss: 0.9164 - val_accuracy: 0.7475\nEpoch 2/5\n2116/2116 [==============================] - 101s 48ms/step - loss: 0.1290 - accuracy: 0.9587 - val_loss: 7.3700 - val_accuracy: 0.2734\nEpoch 3/5\n2116/2116 [==============================] - 101s 48ms/step - loss: 0.0895 - accuracy: 0.9717 - val_loss: 0.1336 - val_accuracy: 0.9636\nEpoch 4/5\n2116/2116 [==============================] - 101s 47ms/step - loss: 0.0746 - accuracy: 0.9767 - val_loss: 0.2967 - val_accuracy: 0.9385\nEpoch 5/5\n2116/2116 [==============================] - 101s 47ms/step - loss: 0.0443 - accuracy: 0.9870 - val_loss: 1.4121 - val_accuracy: 0.7358\n"
],
[
"# Test the model\n\nimport matplotlib.image as mpimg\n\nimg = keras.preprocessing.image.load_img(\n \"dataset/fruits-360/Test/Banana/100_100.jpg\", target_size=image_size\n)\nimg_array = keras.preprocessing.image.img_to_array(img)\nimg_array = tf.expand_dims(img_array, 0) # Create batch axis\npredictions = model.predict(img_array)\nscore = predictions[0]\nimg = mpimg.imread('dataset/fruits-360/Test/Banana/100_100.jpg')\nimgplot = plt.imshow(img)\nplt.title(\"Predicted Fruit: \" + train_class_names[score.tolist().index(max(score.tolist()))])\nplt.axis(\"off\")\nplt.show()",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a08aa6a51ef6d9b5a69c38fdc8884a1f0df76f6
| 118,348 |
ipynb
|
Jupyter Notebook
|
week_01/Notebook_week_1.ipynb
|
ebelingbarros/Spiced-academy-other-projects
|
725e03ad5e69f0ae443daa617ccc9c555fbf4352
|
[
"CC0-1.0"
] | null | null | null |
week_01/Notebook_week_1.ipynb
|
ebelingbarros/Spiced-academy-other-projects
|
725e03ad5e69f0ae443daa617ccc9c555fbf4352
|
[
"CC0-1.0"
] | null | null | null |
week_01/Notebook_week_1.ipynb
|
ebelingbarros/Spiced-academy-other-projects
|
725e03ad5e69f0ae443daa617ccc9c555fbf4352
|
[
"CC0-1.0"
] | null | null | null | 40.950865 | 17,368 | 0.430933 |
[
[
[
"# Project Week 1: Creating an animated scatterplot with Python",
"_____no_output_____"
]
],
[
[
"#Import libraries\nimport pandas as pd\nimport imageio\nimport seaborn as sns\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"### Step 1: Read in data",
"_____no_output_____"
]
],
[
[
"gdppercapitagrowth = pd.read_csv('data/gdp_per_capita_growth.csv', index_col=0)\ngdppercapitagrowth",
"_____no_output_____"
],
[
"industry = pd.read_csv('data/industry_value_added_growth.csv', index_col=0)\nindustry",
"_____no_output_____"
],
[
"gdp = pd.read_csv('data/gdp_per_capita.csv', index_col=0)\ngdp",
"_____no_output_____"
]
],
[
[
"### Step 2: some preliminary data exploration",
"_____no_output_____"
]
],
[
[
"print(gdppercapitagrowth.shape)",
"(266, 61)\n"
],
[
"print(industry.shape)",
"(266, 61)\n"
],
[
"print(gdp.shape)",
"(266, 61)\n"
],
[
"gdppercapitagrowth.columns",
"_____no_output_____"
],
[
"industry.columns",
"_____no_output_____"
],
[
"gdp.columns",
"_____no_output_____"
],
[
"gdppercapitagrowth.index",
"_____no_output_____"
],
[
"industry.index",
"_____no_output_____"
],
[
"gdp.index",
"_____no_output_____"
]
],
[
[
"### Step 3: some preliminary data transformation ",
"_____no_output_____"
]
],
[
[
"gdppercapitagrowth.columns = gdppercapitagrowth.columns.astype(int)\nindustry.columns = industry.columns.astype(int)\ngdp.columns = gdp.columns.astype(int)",
"_____no_output_____"
],
[
"gdppercapitagrowth.index.name = 'country'",
"_____no_output_____"
],
[
"gdppercapitagrowth = gdppercapitagrowth.reset_index()",
"_____no_output_____"
],
[
"gdppercapitagrowth = gdppercapitagrowth.melt(id_vars='country', var_name='year', value_name='gdppercapitagrowth')",
"_____no_output_____"
],
[
"industry.index.name = 'country'",
"_____no_output_____"
],
[
"industry = industry.reset_index()",
"_____no_output_____"
],
[
"industry = industry.melt(id_vars='country', var_name='year', value_name='industry')\n",
"_____no_output_____"
],
[
"gdp.index.name = 'country'",
"_____no_output_____"
],
[
"gdp = gdp.reset_index()",
"_____no_output_____"
],
[
"gdp = gdp.melt(id_vars='country', var_name='year', value_name='gdp')\n",
"_____no_output_____"
]
],
[
[
"### Step 4: merging the dataframes",
"_____no_output_____"
]
],
[
[
"df = gdppercapitagrowth.merge(industry)\ndf",
"_____no_output_____"
],
[
"continents = pd.read_csv('data/continents.csv', index_col=0)\ncontinents",
"_____no_output_____"
],
[
"df = pd.merge(df, continents, how = \"left\", on = \"country\" )\ndf",
"_____no_output_____"
],
[
"df = df.reset_index()\ngdp = gdp.reset_index()\ngdp",
"_____no_output_____"
],
[
"df = pd.merge(df, gdp, how = \"outer\", on='index')\ndf",
"_____no_output_____"
],
[
"# Retrieving different values from \"Continent\" column\ndf['Continent'].value_counts()",
"_____no_output_____"
],
[
"indexNames = df[df['Continent'] == \"0\"].index\ndf.drop(indexNames, inplace=True)\ndf.dropna()\ndf",
"_____no_output_____"
],
[
"column_a = df[\"gdppercapitagrowth\"]\nxmax = column_a.max()\nxmin = column_a.min()\ncolumn_b = df[\"industry\"]\nymax = column_b.max()\nymin = column_b.min()\n",
"_____no_output_____"
],
[
"df_subset = df.loc[df['year_x'] == 2000]\nsns.scatterplot(x='gdppercapitagrowth', y='industry', \n data=df_subset, alpha=0.6)\nplt.show()\n",
"_____no_output_____"
]
],
[
[
"### Step 5: creating a version of the visualisation with outliers",
"_____no_output_____"
]
],
[
[
"year_list = list(range(1965, 2020))",
"_____no_output_____"
],
[
"for a in (year_list):\n df_subset = df.loc[df['year_x'] == a]\n sns.scatterplot(x='gdppercapitagrowth', y='industry', size='gdp', hue='gdp', data=df_subset, alpha=0.8, sizes=(60, 200), legend = False)\n plt.axis((xmin, xmax, ymin, ymax))\n plt.xlabel(\"GDP per capita growth\")\n plt.ylabel(\"Industrial value added growth\")\n plt.title(str(a)) \n plt.savefig('growth_'+str(a)+'.png', dpi=300)\n plt.close()\n\nimages = []\n\nfor a in year_list:\n filename = 'growth_{}.png'.format(a)\n images.append(imageio.imread(filename))\n\nimageio.mimsave('output1.gif', images, fps=0.9)",
"_____no_output_____"
]
],
[
[
"<img width=\"60%\" height=\"60%\" align=\"left\" src=\"output1.gif\"> ",
"_____no_output_____"
],
[
"### Step 6: creating a version of the visualisation without outliers",
"_____no_output_____"
]
],
[
[
"for a in (year_list):\n df_subset = df.loc[df['year_x'] == a]\n sns.scatterplot(x='gdppercapitagrowth', y='industry', size='gdp', sizes=(60, 200),\n data=df_subset, alpha=0.75, hue='gdp', legend = False)\n plt.axis((-15, 20, -15, 20))\n plt.xlabel(\"GDP per capita growth\")\n plt.ylabel(\"Industrial value added growth\")\n plt.title(str(a)) \n plt.savefig('growthb_'+str(a)+'.png', dpi=300)\n plt.close()\n\nimages = []\n\nfor a in year_list:\n filename = 'growthb_{}.png'.format(a)\n images.append(imageio.imread(filename))\n\nimageio.mimsave('output2.gif', images, fps=0.9)",
"_____no_output_____"
]
],
[
[
"<img width=\"60%\" height=\"60%\" align=\"left\" src=\"output2.gif\"> ",
"_____no_output_____"
],
[
"### Step 7: creating a version of the visualisation with selected continents are hues",
"_____no_output_____"
]
],
[
[
"df['continents'] = \"\"\ndf.loc[df['Continent'] == \"Asia\", \"continents\"] = \"Asia\"\ndf.loc[df['Continent'] == \"South America\", \"continents\"] = \"South America\"\n\ncolors = [\"#c0c0c0\",\"#ff6600\",'#0080ff']\n\nfor a in (year_list):\n df_subset = df.loc[df['year_x'] == a]\n g = sns.scatterplot(x='gdppercapitagrowth', y='industry', sizes=(50,1500), size=\"gdp\", palette=colors,\n data=df_subset, alpha=0.6, hue='continents', legend=\"brief\")\n h,l = g.get_legend_handles_labels()\n plt.axis((-20, 20, -20, 20))\n plt.xlabel(\"GDP per capita growth\")\n plt.ylabel(\"Industrial value added growth\")\n sns.despine()\n plt.legend(h[0:3],l[0:3], fontsize=8, loc='lower right', bbox_to_anchor=(1, 0.1))\n plt.text(0.5, -19, '*the larger the bubble, the larger the GDP per capita',\n verticalalignment='bottom', horizontalalignment='center', color='black', fontsize=6)\n plt.text(-17.5, 19, str(a),\n verticalalignment='top', horizontalalignment='left', color='grey', fontsize=10)\n plt.title(\"Does industrial growth drive GDP per capita growth?\") \n plt.savefig('growthc_'+str(a)+'.png', dpi=300)\n plt.close()\n \nimages = []\n\nfor a in year_list:\n filename = 'growthc_{}.png'.format(a)\n images.append(imageio.imread(filename))\n\nimageio.mimsave('output3.gif', images, fps=1.2)",
"_____no_output_____"
]
],
[
[
"<img width=\"80%\" height=\"80%\" align=\"left\" src=\"output3.gif\"> ",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4a08bd200de7123e15cc3914f723bc27c24cda4e
| 10,009 |
ipynb
|
Jupyter Notebook
|
Control_Structure.ipynb
|
PhilippeJustine/CPEN-21A-ECE-2-1
|
4c238e0924c642fd52fc4b089eb5ca3017cc1413
|
[
"Apache-2.0"
] | null | null | null |
Control_Structure.ipynb
|
PhilippeJustine/CPEN-21A-ECE-2-1
|
4c238e0924c642fd52fc4b089eb5ca3017cc1413
|
[
"Apache-2.0"
] | null | null | null |
Control_Structure.ipynb
|
PhilippeJustine/CPEN-21A-ECE-2-1
|
4c238e0924c642fd52fc4b089eb5ca3017cc1413
|
[
"Apache-2.0"
] | null | null | null | 22.291759 | 246 | 0.398142 |
[
[
[
"<a href=\"https://colab.research.google.com/github/PhilippeJustine/CPEN-21A-ECE-2-1/blob/main/Control_Structure.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"##If Statement",
"_____no_output_____"
]
],
[
[
"a = 12\nb = 100\nif b>a:\n print(\"b is greater than a\")",
"b is greater than a\n"
]
],
[
[
"##Elif Statement",
"_____no_output_____"
]
],
[
[
"a = 12\nb = 13\n\nif b>a:\n print(\"b is greater than a\")\nelif b==a:\n print(\"b is equal to a\")",
"b is greater than a\n"
]
],
[
[
"##Else Statement",
"_____no_output_____"
]
],
[
[
"a = 30\nb = 30\nif a>b:\n print(\"a is greater than b\")\nelif b>a:\n print(\"b is greater than a\")\nelse:\n print(\"a is equal to b\")",
"a is equal to b\n"
]
],
[
[
"##Short Hand If Statement",
"_____no_output_____"
]
],
[
[
"a = 12\nb = 6\nif a>b: print(\"a is greater than b\")",
"a is greater than b\n"
]
],
[
[
"##Short Hand If...Else Statement",
"_____no_output_____"
]
],
[
[
"a = 7\nb = 14\nprint(\"a is greater than b\")if a>b else print(\"b is greater than a\")",
"b is greater than a\n"
]
],
[
[
"And logical condition",
"_____no_output_____"
]
],
[
[
"a = 200\nb = 300\nc = 500\n\nif a>b and c>a:\n print(\"Both conditions are True\")\nelse:\n print(\"Evaluated as False\")",
"Evaluated as False\n"
]
],
[
[
"Or logical condition",
"_____no_output_____"
]
],
[
[
"a = 200\nb = 300\nc = 500\n\nif a>b or c>a:\n print(\"Evaluated as True\")\nelse:\n print(\"Evaluated as False\")",
"Evaluated as True\n"
]
],
[
[
"##Nested If.. Else Statement",
"_____no_output_____"
]
],
[
[
"x = 20\n\nif x>10:\n print(\"Above ten\")\n if x>20:\n print(\"Above twenty\")\n else:\n print(\"Above ten but Not above twenty\")\nelse:\n print(\"Not above 10\")",
"Above ten\nAbove ten but Not above twenty\n"
]
],
[
[
"Example 1",
"_____no_output_____"
]
],
[
[
"# The qualifying age to vote\nage = int(input(\"Enter your age:\"))\n\nif age>=18:\n print(\"You are qualified to vote\")\nelse:\n print(\"You are not qualified to vote\")",
"Enter your age:15\nYou are not qualified to vote\n"
]
],
[
[
"Example 2",
"_____no_output_____"
]
],
[
[
"num = int(input(\"Enter a number:\"))\nif num==0:\n print(\"Zero\")\nelif num>0:\n print(\"Positive\")\nelse:\n print(\"Negative\")",
"Enter a number:-125\nNegative\n"
]
],
[
[
"Example 3",
"_____no_output_____"
]
],
[
[
"grade = float(input(\"Enter your grade:\"))\nif grade>=75:\n print(\"Passed\")\nelif grade<74:\n print(\"Failed\")\nelse:\n print(\"Remedial\")",
"Enter your grade:74.9\nRemedial\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a08e107d0ae2146245d3b1d1dca0b6129a60816
| 473,881 |
ipynb
|
Jupyter Notebook
|
devel/Try to Find Yield Strength Automatically.ipynb
|
snesnehne/MatPy
|
291debff0796124c34ddfad2270976dcd2f445e7
|
[
"MIT"
] | 3 |
2018-05-18T10:18:58.000Z
|
2020-08-05T13:52:32.000Z
|
devel/Try to Find Yield Strength Automatically.ipynb
|
snesnehne/MatPy
|
291debff0796124c34ddfad2270976dcd2f445e7
|
[
"MIT"
] | 2 |
2016-08-05T19:36:16.000Z
|
2021-09-14T13:51:28.000Z
|
devel/Try to Find Yield Strength Automatically.ipynb
|
snesnehne/MatPy
|
291debff0796124c34ddfad2270976dcd2f445e7
|
[
"MIT"
] | 7 |
2017-05-21T16:43:34.000Z
|
2021-09-22T07:34:16.000Z
| 106.898489 | 64,221 | 0.767819 |
[
[
[
"%load_ext autoreload\n%autoreload 2 \"\"\"Reloads all functions automatically\"\"\"\n%matplotlib notebook\n\nfrom irreversible_stressstrain import StressStrain as strainmodel\nimport test_suite as suite\nimport graph_suite as plot\nimport numpy as np\n\nmodel = strainmodel('ref/HSRS/22').get_experimental_data()\n\nslopes = suite.get_slopes(model)\nsecond_deriv_slopes = suite.get_slopes(suite.combine_data(model[:-1,0],slopes))\n\n# -- we think that yield occurs where the standard deviation is decreasing AND the slopes are mostly negative\ndef findYieldInterval(slopes, numberofsections):\n \n def numneg(val):\n return sum((val<0).astype(int))\n \n # -- divide into ten intervals and save stddev of each\n splitslopes = np.array_split(slopes,numberofsections)\n splitseconds = np.array_split(second_deriv_slopes,numberofsections)\n \n # -- displays the number of negative values in a range (USEFUL!!!)\n for section in splitslopes:\n print numneg(section), len(section)\n \n print \"-------------------------------\" \n \n for section in splitseconds:\n print numneg(section), len(section)\n\n divs = [np.std(vals) for vals in splitslopes]\n \n # -- stddev of the whole thing\n stdev = np.std(slopes)\n \n interval = 0\n \n slopesect = splitslopes[interval]\n secondsect = splitseconds[interval]\n \n print divs, stdev\n \n # -- the proportion of slope values in an interval that must be negative to determine that material yields\n cutoff = 3./4.\n \n while numneg(slopesect)<len(slopesect)*cutoff and numneg(secondsect)<len(secondsect)*cutoff:\n \n interval = interval + 1\n \n \"\"\"Guard against going out of bounds\"\"\"\n if interval==len(splitslopes): break\n \n slopesect = splitslopes[interval]\n secondsect = splitseconds[interval] \n \n print \n print interval\n return interval\n\nnumberofsections = 15\ninterval_length = len(model)/numberofsections\n\n\"\"\"\nMiddle of selected interval\n\nGuard against going out of bounds\n\"\"\"\nyield_interval = findYieldInterval(slopes,numberofsections)\nyield_index = min(yield_interval*interval_length + interval_length/2,len(model[:])-1) \nyield_value = np.array(model[yield_index])[None,:]\n\nprint \nprint yield_value",
"9 18\n9 18\n6 18\n13 18\n6 18\n3 18\n10 17\n3 17\n2 17\n6 17\n8 17\n10 17\n5 17\n8 17\n9 17\n-------------------------------\n9 18\n10 18\n11 18\n7 18\n8 18\n9 17\n7 17\n8 17\n9 17\n7 17\n9 17\n9 17\n9 17\n10 17\n8 17\n[5541.7766300054009, 1130.1050382995807, 206.15448932657284, 1715.0729506094001, 1006.5319683583111, 211.84733851967869, 48.696494230800624, 29.03240086835461, 36.499604939329146, 29.665834653640491, 26.478056069311805, 38.349696543136389, 30.852118972258211, 47.043996580356165, 68.012199348965737] 1589.11298719\n\n15\n\n[[ 16.68366237 1080.26033278]]\n"
]
],
[
[
"## Make these estimates more reliable and robust",
"_____no_output_____"
]
],
[
[
"model = strainmodel('ref/HSRS/326').get_experimental_data()\n\nstrain = model[:,0]\nstress = model[:,1]\n\nslopes = suite.get_slopes(model)\nsecond_deriv = suite.get_slopes(suite.combine_data(model[:-1,0],slopes))\n\n\n\"\"\"Now what if we have strain vs slope\"\"\"\nstrainvslope = suite.combine_data(strain,slopes)\nstrainvsecond = suite.combine_data(strain,second_deriv)\nplot.plot2D(strainvsecond,'Strain','Slope',marker=\"ro\")\nplot.plot2D(model,'Strain','Stress',marker=\"ro\")",
"_____no_output_____"
],
[
"model = strainmodel('ref/HSRS/326').get_experimental_data()\n\nstrain = model[:,0]\nstress = model[:,1]\n\nslopes = suite.get_slopes(model)\nsecond_deriv = suite.get_slopes(suite.combine_data(model[:-1,0],slopes))\n\nnum_intervals = 80\n\ninterval_length = len(second_deriv)/num_intervals\nsplit_2nd_derivs = np.array_split(second_deriv,num_intervals)\n\nprint np.mean(second_deriv)\ndown_index = 0\n\nfor index, section in enumerate(split_2nd_derivs):\n if sum(section)<np.mean(slopes):\n down_index = index\n break\n \nyield_index = down_index*interval_length\n\nprint strain[yield_index], stress[yield_index]",
"-6666.04436529\n0.0 5.145343902\n"
],
[
"model = strainmodel('ref/HSRS/326').get_experimental_data()\n\nstrain = model[:,0]\nstress = model[:,1]\n\nfirst_deriv = suite.get_slopes(model)\nsecond_deriv = suite.get_slopes(suite.combine_data(model[:-1,0],first_deriv))\n\nplot1 = suite.combine_data(strain,first_deriv)\nplot2 = suite.combine_data(strain,second_deriv)\n\nplot.plot2D(model)\nplot.plot2D(plot1)\nplot.plot2D(plot2)",
"_____no_output_____"
]
],
[
[
"### See when standard deviation of second derivative begins to decrease",
"_____no_output_____"
]
],
[
[
"model = strainmodel('ref/HSRS/222').get_experimental_data()\n\nstrain = model[:,0]\nstress = model[:,1]\n\nfirst_deriv = suite.get_slopes(model)\nsecond_deriv = suite.get_slopes(suite.combine_data(model[:-1,0],first_deriv))\n\n\nave_deviation = np.std(second_deriv)\ndeviation_second = [np.std(val) for val in np.array_split(second_deriv,30)]\n\nyielding = 0\n\n\nfor index,value in enumerate(deviation_second):\n \n if value != 0.0 and value<ave_deviation and index!=0:\n yielding = index\n break\n \nprint second_deriv\n#print \"It seems to yield at index:\", yielding\n \n#print \"These are all of the standard deviations, by section:\", deviation_second, \"\\n\"\n#print \"The overall standard deviation of the second derivative is:\", ave_deviation",
"[ 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. -6776.87625563\n 296.23291549 -156.19202143 -352.28396043 754.53060214 -1037.24226547\n 479.31111184 -534.81673959 105.7008789 106.46757059 -962.86676321\n 1680.77526221 -1569.19091298 851.45993862 -41.64822174 -155.29739017\n 486.40934245 -557.29866406 594.9654457 -255.82293098 433.16858797\n 62.63117338 -657.43822297 690.76546859 -675.77767438 608.15611117\n -361.3962951 -132.15071279 345.1779598 -614.91620366 544.79027965\n -697.9904423 342.22531992 -226.3045817 -40.0599308 515.76046675\n -908.16854263 772.87596143 238.22005663 -585.27638344 482.28368143\n -351.35111924 581.70455112 -740.66411342 814.99913529 -501.9481535\n 99.41460147 151.32849157 -263.22282944 223.04433555 -427.7673028\n 338.27040208 -82.13561266 -100.90748705 340.79621481 -677.41733036\n 519.69635306 -251.60747017 364.91173493 -277.15712286 -504.08614187\n 1216.31663555 -1315.08334567 841.81869564 -409.06209002 142.21101519\n 214.81709502 -646.67613677 1079.64715155 -666.68742294 -222.60041479\n 802.78937805 -576.97985332 448.69343929 -437.98386767 226.68527686\n -325.77772433 101.88306366 376.46160256 -734.19731114 890.32975968\n -858.36964398 643.73227405 -385.10602314 -134.89175399 662.48274562\n -828.27543591 1050.83381943 -923.78138391 184.51439376 355.71132778\n -372.00232167 374.81072846 -331.08647805 64.61386587 98.09189437\n -158.80476326 -34.14178375 51.69262139 326.62490304 -499.92737765\n 538.32301097 -209.03889562 -35.09787137 254.99758787 -608.18950424\n 611.73382014 -342.72264437 -57.68562555 373.92308928 -820.2360959\n 1057.40910536 -1170.06759126 957.55988151 -432.64162636 -151.14946246\n 409.04834469 -27.8563003 359.91698621 -967.47989682 874.02979184\n -577.01305852 234.67101707 376.46472787 -734.08333712 498.40127068\n -605.01855362 454.57554048 133.75910127 -645.13309903 990.51129439\n -621.17057106 581.9148706 -854.60253926 488.98331365 49.52018043\n -278.03710337 420.16975052 -646.13662661]\n"
]
],
[
[
"## The actual yield values are as follows (These are approximate):\n### ref/HSRS/22: Index 106 [1.3912797535, 900.2614980977]\n### ref/HSRS/222: Index 119 [0, 904.6702299]\n### ref/HSRS/326: Index 150 [6.772314989, 906.275032]",
"_____no_output_____"
],
[
"### Index of max standard deviation of the curve",
"_____no_output_____"
]
],
[
[
"model = strainmodel('ref/HSRS/22').get_experimental_data()\n\nstrain = model[:,0]\nstress = model[:,1]\n\nfirst_deriv = suite.get_slopes(model)\nsecond_deriv = suite.get_slopes(suite.combine_data(model[:-1,0],first_deriv))\n\nprint second_deriv;\nreturn;\n\nchunks = 20\nint_length = len(model[:])/chunks\n\nderiv2spl = np.array_split(second_deriv,chunks)\ndeviation_second = [abs(np.mean(val)) for val in deriv2spl]\n\ndel(deviation_second[0])\nprint deviation_second\nprint np.argmax(deviation_second)\n#print \"The standard deviation of all the second derivatives is\", np.std(second_deriv)",
"[ -7.76721827e+05 -7.11985507e+05 5.81422420e+06 -4.71471809e+06\n 5.38641028e+06 -4.56214023e+06 -1.47978263e+04 3.87732270e+05\n -9.93015554e+04 -9.29494291e+06 1.60701493e+07 -1.20516718e+07\n 1.13624590e+08 1.64835742e+08 -5.06672773e+06 2.29989414e+07\n 1.09368986e+06 2.07889789e+06 -4.41745241e+06 2.01049954e+06\n -3.30783576e+06 1.68017207e+06 4.12439919e+04 -4.01957378e+05\n -1.20699492e+05 -1.23165837e+05 -2.66198835e+05 3.31631221e+04\n 8.02841749e+04 -3.96416985e+05 4.51099456e+05 -1.57914399e+05\n -5.91717842e+04 1.61009661e+05 -1.88063949e+05 1.38439177e+05\n -9.52232980e+04 5.24562224e+04 -1.10151968e+04 -3.97584349e+04\n 1.17486078e+04 -1.14553167e+04 6.00884875e+03 2.35124595e+03\n -1.04245584e+04 6.40640497e+03 -3.25928214e+03 1.11984469e+04\n -2.95013722e+03 -4.86681203e+03 -8.97662427e+03 2.67297076e+04\n -6.03511177e+03 -9.23829988e+03 3.72043629e+04 7.57754313e+03\n 1.58171294e+05 6.19987302e+05 -6.93158411e+06 -1.38450374e+06\n -2.27835562e+05 2.39881150e+04 -4.03095706e+04 4.22609257e+03\n 1.85298004e+04 -2.94149153e+04 1.76433177e+04 -9.43774287e+03\n 1.71419794e+04 -2.60654495e+04 8.21317190e+03 6.30343064e+03\n 4.32566924e+03 -4.27857519e+03 -2.61264652e+01 4.18478310e+03\n 4.04954236e+04 -2.50366281e+05 1.30812286e+04 7.03907200e+02\n -3.12684305e+04 7.10534411e+03 -1.04200433e+04 6.49543986e+03\n 4.97619162e+03 1.37593048e+03 1.50345871e+04 -1.10839200e+04\n -2.76635216e+03 -7.71236119e+03 -1.91443005e+03 7.29230057e+02\n -4.07370440e+03 2.43172349e+03 -5.52794803e+03 1.93977627e+03\n -2.42056983e+03 1.95111854e+02 -2.21146202e+03 1.11135688e+02\n -8.63398695e+02 -3.16283117e+02 2.57358574e+02 1.19480775e+02\n -7.34935916e+02 6.23416230e+02 -2.03521830e+02 -1.79099120e+02\n -8.86149685e+01 1.94905385e+02 -4.99737204e+02 4.70378527e+02\n -2.80449954e+01 1.33298159e+01 1.18421037e+02 1.04740249e+02\n -2.85806315e+02 3.18456681e+02 8.09280259e+02 -7.62946745e+02\n 3.53964453e+02 2.86949271e+02 -4.75045057e+02 3.12975910e+02\n 1.24724171e+02 6.22434828e+01 -4.15304527e+02 5.21381872e+01\n -7.30183962e+01 -5.37745061e+01 3.42871347e+02 -1.60804035e+02\n -4.77217483e+02 2.29096174e+02 -2.32442880e+02 1.84911850e+02\n 8.25145310e+01 -2.75254147e+02 4.60332268e+02 -3.32159241e+02\n 3.50070116e+02 -2.37492948e+02 -1.33562078e+02 1.95298331e+01\n 1.04000816e+03 -2.65515331e+02 -5.65585340e+02 -7.70094672e+01\n 1.23055931e+02 -7.02176493e+02 3.17081930e+02 6.11170533e+02\n -2.51551888e+02 3.30980008e+02 -5.15161578e+02 8.51794828e+01\n -1.34764112e+02 1.09103089e+02 -4.35700217e+02 3.90611626e+02\n -1.40873732e+01 6.30726202e+02 -8.87370312e+02 5.92823975e+02\n 1.32909271e+02 -8.41671357e+02 2.13925145e+02 1.70176435e+02\n 1.44669023e+02 3.30717200e+02 -3.59031971e+02 -2.79619592e+02\n 7.90005493e+01 3.62907663e+02 -4.07413926e+02 -2.58625698e+02\n 5.85895679e+02 -4.94082291e+02 6.06522400e+02 9.26195917e+01\n -6.49111916e+02 6.63189733e+01 9.07544268e+01 -9.11950413e+01\n -1.33371503e+02 8.63984661e+02 -6.85372230e+02 1.93539389e+02\n -2.57400628e+02 4.90324979e+02 -2.99425821e+02 -1.24554525e+02\n -1.06320535e+02 3.60566191e+02 -2.56317159e+02 5.23998726e+02\n -9.59563302e+02 7.69359713e+02 -4.56371738e+02 1.05749478e-01\n 2.57866848e+02 2.16239920e+02 -4.08454749e+02 -4.00837435e+02\n 1.19162144e+03 -9.10081525e+02 -4.17843719e+02 1.44167661e+03\n -1.28041419e+03 4.13260640e+02 1.82012022e+02 -2.35206652e+01\n 4.45252694e+02 -4.70952816e+02 -4.90716585e+01 -1.88190705e+02\n 4.13543840e+02 -2.91996630e+02 -4.39167105e+02 6.95632312e+02\n 3.02849839e+02 -1.21640091e+03 
6.61534552e+02 -1.78212491e+02\n -2.81376458e+02 1.00540767e+03 -2.85703278e+02 -1.06963116e+02\n -6.50324062e+02 9.89847465e+02 -5.99041518e+02 5.36053716e+01\n 4.32673439e+02 1.12606707e+02 -4.39183213e+02 -6.53283835e+02\n -6.63708632e+01 1.68820763e+03 -1.35949073e+03 -8.14011521e+02\n 1.61585646e+03 -9.39137654e+02 1.57473958e+03 -1.12996963e+03\n -1.53984482e+03 2.78894897e+03 -1.65299079e+03 7.49186257e+01\n 2.04458515e+02 4.76328834e+02 -6.22179334e+01 -1.88131663e+02\n -1.28877360e+03 7.59932613e+02 1.10385740e+03 4.26797858e+02\n -4.60088965e+03 4.06866352e+03 -8.76160314e+02 3.50457241e+03]\n"
]
],
[
[
"### If our data dips, we can attempt to find local maxima",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\n# -- climbs a discrete dataset to find local max\ndef hillclimber(data, guessindex = 0):\n \n x = data[:,0]\n y = data[:,1]\n \n curx = x[guessindex]\n cury = y[guessindex]\n \n guessleft = max(0,guessindex-1)\n guessright = min(len(x)-1,guessindex+1)\n \n done = False\n \n while not done:\n \n left = y[guessleft]\n right = y[guessright]\n\n difleft = left-cury\n difright = right-cury\n\n if difleft<0 and difright<0 or (difleft==0 and difright==0):\n done = True\n elif difleft>difright:\n cur = left\n guessindex = guessleft\n elif difright>difleft or difright==difleft:\n cur = right\n guessindex = guessright\n \n return guessindex\n\nfunc = lambda x: x**2\nxs = np.linspace(0.,10.,5)\nys = func(xs)\n\ndata = suite.combine_data(xs,ys)\nprint hillclimber(data)\n ",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a08e21419acd7fe6dfab675129580949a25d24f
| 3,493 |
ipynb
|
Jupyter Notebook
|
3x3Gridworld/gridworld3x3.ipynb
|
jerryzenghao/ReinformanceLearning
|
41da6bcf14bb588a0a29abbb576d71970b41a771
|
[
"MIT"
] | null | null | null |
3x3Gridworld/gridworld3x3.ipynb
|
jerryzenghao/ReinformanceLearning
|
41da6bcf14bb588a0a29abbb576d71970b41a771
|
[
"MIT"
] | null | null | null |
3x3Gridworld/gridworld3x3.ipynb
|
jerryzenghao/ReinformanceLearning
|
41da6bcf14bb588a0a29abbb576d71970b41a771
|
[
"MIT"
] | null | null | null | 30.911504 | 1,047 | 0.539651 |
[
[
[
"from gridworld.environment import *\nfrom gridworld.agent import *\n\ngrid_size = (3,3)\nstate_reward = {(0,0):10}\njump = {(0,0):(2,2)}\n\nenv = GridWorld(grid_size, state_reward,jump)\ndp_agt = DPAgent(env) \nqlearning_agt = QlearningAgent(env)\ntd_agt = TDAgent(env)\n \n \nprint(\"DP state value:\\n\", dp_agt.policy_evaluation())\nprint(\"TD state value:\\n\", td_agt.temporal_difference(0.01))\nprint(\"policy iteration:\\n\", dp_agt.policy_iteration())\nprint(\"Q learning:\\n\", qlearning_agt.Q_learning(0.01))\n\n",
"DP state value:\n [[ 8.84585576 2.48708754 -0.07860397]\n [ 2.48708754 0.91441864 -0.45308389]\n [-0.07860397 -0.45308389 -1.28140137]]\nTD state value:\n [[ 8.80414207 2.41183136 -0.17575528]\n [ 2.52581862 1.0226199 -0.51408979]\n [-0.18863601 -0.39410003 -1.31197813]]\npolicy iteration:\n (array([[24.41663445, 21.97478124, 19.77712021],\n [21.97478124, 19.77712021, 17.79928298],\n [19.77712021, 17.79928298, 16.01935457]]), [[[0.25, 0.25, 0.25, 0.25], [0.0, 0.0, 0.0, 1.0], [0.0, 0.0, 0.0, 1.0]], [[1.0, 0.0, 0.0, 0.0], [0.5, 0.0, 0.0, 0.5], [0.5, 0.0, 0.0, 0.5]], [[1.0, 0.0, 0.0, 0.0], [0.5, 0.0, 0.0, 0.5], [0.5, 0.0, 0.0, 0.5]]])\nQ learning:\n (array([[24.4194281 , 21.97748529, 19.77973676],\n [21.97748529, 19.77973676, 17.80176308],\n [19.77973676, 17.80176308, 16.02158677]]), [[[0.25, 0.25, 0.25, 0.25], [0.0, 0.0, 0.0, 1.0], [0.0, 0.0, 0.0, 1.0]], [[1.0, 0.0, 0.0, 0.0], [0.5, 0.0, 0.0, 0.5], [0.5, 0.0, 0.0, 0.5]], [[1.0, 0.0, 0.0, 0.0], [0.5, 0.0, 0.0, 0.5], [0.5, 0.0, 0.0, 0.5]]])\n"
]
]
] |
[
"code"
] |
[
[
"code"
]
] |
4a08ef7b27738cdd4ad5f32e4af1158a8cd75dd2
| 22,222 |
ipynb
|
Jupyter Notebook
|
example.ipynb
|
adavidzh/scinum
|
b0bc2f6992605247c9011118da129962fe86cd2f
|
[
"BSD-3-Clause"
] | 17 |
2017-09-11T14:41:08.000Z
|
2021-11-26T17:17:34.000Z
|
example.ipynb
|
adavidzh/scinum
|
b0bc2f6992605247c9011118da129962fe86cd2f
|
[
"BSD-3-Clause"
] | 11 |
2017-09-10T18:32:41.000Z
|
2022-03-07T13:22:52.000Z
|
example.ipynb
|
adavidzh/scinum
|
b0bc2f6992605247c9011118da129962fe86cd2f
|
[
"BSD-3-Clause"
] | 2 |
2018-02-21T08:54:42.000Z
|
2021-11-16T20:47:07.000Z
| 25.109605 | 544 | 0.52502 |
[
[
[
"# `scinum` example",
"_____no_output_____"
]
],
[
[
"from scinum import Number, Correlation, NOMINAL, UP, DOWN, ABS, REL",
"_____no_output_____"
]
],
[
[
"The examples below demonstrate\n\n- [Numbers and formatting](#Numbers-and-formatting)\n- [Defining uncertainties](#Defining-uncertainties)\n- [Multiple uncertainties](#Multiple-uncertainties)\n- [Configuration of correlations](#Configuration-of-correlations)\n- [Automatic uncertainty propagation](#Automatic-uncertainty-propagation)",
"_____no_output_____"
],
[
"### Numbers and formatting",
"_____no_output_____"
]
],
[
[
"n = Number(1.234, 0.2)\nn",
"_____no_output_____"
]
],
[
[
"The uncertainty definition is absolute. See the examples with [multiple uncertainties](#Multiple-uncertainties) for relative uncertainty definitions.\n\nThe representation of numbers (`repr`) in jupyter notebooks uses latex-style formatting. Internally, [`Number.str()`](https://scinum.readthedocs.io/en/latest/#scinum.Number.str) is called, which - among others - accepts a `format` argument, defaulting to `\"%s\"` (configurable globally or per instance via [`Number.default_format`](https://scinum.readthedocs.io/en/latest/#scinum.Number.default_format)). Let's change the format for this notebook:",
"_____no_output_____"
]
],
[
[
"Number.default_format = \"%.2f\"\nn",
"_____no_output_____"
],
[
"# or\nn.str(\"%.3f\")",
"_____no_output_____"
]
],
[
[
"### Defining uncertainties",
"_____no_output_____"
],
[
"Above, `n` is defined with a single, symmetric uncertainty. Here are some basic examples to access and play it:",
"_____no_output_____"
]
],
[
[
"# nominal value\nprint(n.nominal)\nprint(type(n.nominal))",
"1.234\n<class 'float'>\n"
],
[
"# get the uncertainty\nprint(n.get_uncertainty())\nprint(n.get_uncertainty(direction=UP))\nprint(n.get_uncertainty(direction=DOWN))",
"(0.2, 0.2)\n0.2\n0.2\n"
],
[
"# get the nominal value, shifted by the uncertainty\nprint(n.get()) # nominal value\nprint(n.get(UP)) # up variation\nprint(n.get(DOWN)) # down variation",
"1.234\n1.434\n1.034\n"
],
[
"# some more advanved use-cases:\n\n# 1. get the multiplicative factor that would scale the nomninal value to the UP/DOWN varied ones\nprint(\"absolute factors:\")\nprint(n.get(UP, factor=True))\nprint(n.get(DOWN, factor=True))\n\n# 2. get the factor to obtain the uncertainty only (i.e., the relative unceratinty)\n# (this is, of course, more useful in case of multiple uncertainties, see below)\nprint(\"\\nrelative factors:\")\nprint(n.get(UP, factor=True, diff=True))\nprint(n.get(DOWN, factor=True, diff=True))",
"absolute factors:\n1.1620745542949757\n0.8379254457050244\n\nrelative factors:\n0.1620745542949757\n0.1620745542949757\n"
]
],
[
[
"There are also a few shorthands for the above methods:",
"_____no_output_____"
]
],
[
[
"# __call__ is forwarded to get()\nprint(n())\nprint(n(UP))\n\n# u() is forwarded to get_uncertainty()\nprint(n.u())\nprint(n.u(direction=UP))",
"1.234\n1.434\n(0.2, 0.2)\n0.2\n"
]
],
[
[
"### Multiple uncertainties",
"_____no_output_____"
],
[
"Let's create a number that has two uncertainties: `\"stat\"` and `\"syst\"`. The `\"stat\"` uncertainty is asymmetric, and the `\"syst\"` uncertainty is relative.",
"_____no_output_____"
]
],
[
[
"n = Number(8848, {\n \"stat\": (30, 20), # absolute +30-20 uncertainty\n \"syst\": (REL, 0.5), # relative +-50% uncertainty\n})\nn",
"_____no_output_____"
]
],
[
[
"Similar to above, we can access the uncertainties and shifted values with [`get()`](https://scinum.readthedocs.io/en/latest/#scinum.Number.get) (or `__call__`) and [`get_uncertainty()`](https://scinum.readthedocs.io/en/latest/#scinum.Number.get_uncertainty) (or [`u()`](https://scinum.readthedocs.io/en/latest/#scinum.Number.u)). But this time, we can distinguish between the combined (in quadrature) value or the particular uncertainty sources:",
"_____no_output_____"
]
],
[
[
"# nominal value as before\nprint(n.nominal)\n\n# get all uncertainties (stored absolute internally)\nprint(n.uncertainties)",
"8848.0\n{'stat': (30.0, 20.0), 'syst': (4424.0, 4424.0)}\n"
],
[
"# get particular uncertainties\nprint(n.u(\"syst\"))\nprint(n.u(\"stat\"))\nprint(n.u(\"stat\", direction=UP))",
"(4424.0, 4424.0)\n(30.0, 20.0)\n30.0\n"
],
[
"# get the nominal value, shifted by particular uncertainties\nprint(n(UP, \"stat\"))\nprint(n(DOWN, \"syst\"))\n\n# compute the shifted value for both uncertainties, added in quadrature without correlation (default but configurable)\nprint(n(UP))",
"8878.0\n4424.0\n13272.101716733014\n"
]
],
[
[
"As before, we can also access certain aspects of the uncertainties:",
"_____no_output_____"
]
],
[
[
"print(\"factors for particular uncertainties:\")\nprint(n.get(UP, \"stat\", factor=True))\nprint(n.get(DOWN, \"syst\", factor=True))\n\nprint(\"\\nfactors for the combined uncertainty:\")\nprint(n.get(UP, factor=True))\nprint(n.get(DOWN, factor=True))",
"factors for particular uncertainties:\n1.0033905967450272\n0.5\n\nfactors for the combined uncertainty:\n1.500011496014129\n0.49999489062775576\n"
]
],
[
[
"We can also apply some nice formatting:",
"_____no_output_____"
]
],
[
[
"print(n.str())\nprint(n.str(\"%.2f\"))\nprint(n.str(\"%.2f\", unit=\"m\"))\nprint(n.str(\"%.2f\", unit=\"m\", force_asymmetric=True))\nprint(n.str(\"%.2f\", unit=\"m\", scientific=True))\nprint(n.str(\"%.2f\", unit=\"m\", si=True))\nprint(n.str(\"%.2f\", unit=\"m\", style=\"root\"))",
"8848.00 +30.00-20.00 (stat) +- 4424.00 (syst)\n8848.00 +30.00-20.00 (stat) +- 4424.00 (syst)\n8848.00 +30.00-20.00 (stat) +- 4424.00 (syst) m\n8848.00 +30.00-20.00 (stat) +4424.00-4424.00 (syst) m\n8.85 +0.03-0.02 (stat) +- 4.42 (syst) x 1E3 m\n8.85 +0.03-0.02 (stat) +- 4.42 (syst) km\n8848.00 ^{+30.00}_{-20.00} #left(stat#right) #pm 4424.00 #left(syst#right) m\n"
]
],
[
[
"### Configuration of correlations",
"_____no_output_____"
],
[
"Let's assume that we have a second measurement for the quantity `n` we defined above,",
"_____no_output_____"
]
],
[
[
"n",
"_____no_output_____"
]
],
[
[
"and we measured it with the same sources of uncertainty,",
"_____no_output_____"
]
],
[
[
"n2 = Number(8920, {\n \"stat\": (35, 15), # absolute +35-15 uncertainty\n \"syst\": (REL, 0.3), # relative +-30% uncertainty\n})\nn2",
"_____no_output_____"
]
],
[
[
" Now, we want to compute the average measurement, including correct error propagation under consideration of sensible correlations. For more info on automatic uncertainty propagation, see the [subsequent section](#Automatic-uncertainty-propagation).\n \nIn this example, we want to fully correlate the *systematic* uncertainty, whereas we can treat *statistical* effects as uncorrelated. However, just wirting `(n + n2) / 2` will consider equally named uncertainty sources to be 100% correlated, i.e., both `syst` and `stat` uncertainties will be simply averaged. This is the default behavior in scinum as it is not possible (nor wise) to *guesstimate* the meaning of an uncertainty from its name.\n\nWhile this approach is certainly correct for `syst`, we don't achieve the correct treatment for `stat`:",
"_____no_output_____"
]
],
[
[
"(n + n2) / 2",
"_____no_output_____"
]
],
[
[
"Instead, we need to define the correlation specifically for `stat`. This can be achieved in multiple ways, but the most pythonic way is to use a [`Correlation`](https://scinum.readthedocs.io/en/latest/#correlation) object.",
"_____no_output_____"
]
],
[
[
"(n @ Correlation(stat=0) + n2) / 2",
"_____no_output_____"
]
],
[
[
"**Note** that the statistical uncertainty decreased as desired, whereas the systematic one remained the same.\n`Correlation` objects have a default value that can be set as the first positional, yet optional parameter, and itself defaults to one.\n\nInternally, the operation `n @ Correlation(stat=0)` (or `n * Correlation(stat=0)` in Python 2) is evaluated prior to the addition of `n2` and generates a so-called [`DeferredResult`](https://scinum.readthedocs.io/en/latest/#deferredresult). This object carries the information of `n` and the correlation over to the next operation, at which point the uncertainty propagation is eventually resolved. As usual, in situations where the operator precedence might seem unclear, it is recommended to use parentheses to structure the expression.",
"_____no_output_____"
],
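[
"# Added illustration (not part of the original example): the explicit parentheses below\n# are equivalent to the expression used above, since `@` binds tighter than `+`,\n# but they make the intended grouping of the correlation obvious.\n((n @ Correlation(stat=0)) + n2) / 2",
"_____no_output_____"
],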
[
"### Automatic uncertainty propagation",
"_____no_output_____"
],
[
"Let's continue working with the number `n` from above.\n\nUncertainty propagation works in a pythonic way:",
"_____no_output_____"
]
],
[
[
"n + 200",
"_____no_output_____"
],
[
"n / 2",
"_____no_output_____"
],
[
"n**0.5",
"_____no_output_____"
]
],
[
[
"In cases such as the last one, formatting makes a lot of sense ...",
"_____no_output_____"
]
],
[
[
"(n**0.5).str(\"%.2f\")",
"_____no_output_____"
]
],
[
[
"More complex operations such as `exp`, `log`, `sin`, etc, are provided on the `ops` object, which mimics Python's `math` module. The benefit of the `ops` object is that all its operations are aware of Gaussian error propagation rules.",
"_____no_output_____"
]
],
[
[
"from scinum import ops\n\n# change the default format for convenience\nNumber.default_format = \"%.3f\"\n\n# compute the log of n\nops.log(n)",
"_____no_output_____"
]
],
[
[
"The propagation is actually performed simultaneously per uncertainty source.",
"_____no_output_____"
]
],
[
[
"m = Number(5000, {\"syst\": 1000})\n\nn + m",
"_____no_output_____"
],
[
"n / m",
"_____no_output_____"
]
],
[
[
"As described [above](#Configuration-of-correlations), equally named uncertainty sources are assumed to be fully correlated. You can configure the correlation in operations through `Correlation` objects, or by using explicit methods on the number object.",
"_____no_output_____"
]
],
[
[
"# n.add(m, rho=0.5, inplace=False)\n\n# same as\nn @ Correlation(0.5) + m",
"_____no_output_____"
]
],
[
[
"When you set `inplace` to `True` (the default), `n` is updated inplace.",
"_____no_output_____"
]
],
[
[
"n.add(m, rho=0.5)\nn",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a08f2ba5819ee74a4b92abbd5f7f032841da100
| 9,641 |
ipynb
|
Jupyter Notebook
|
Briefly.ipynb
|
Devika2902/Briefly
|
dad9fb02183d6e68f550929d7e4ebcea5d582a09
|
[
"MIT"
] | null | null | null |
Briefly.ipynb
|
Devika2902/Briefly
|
dad9fb02183d6e68f550929d7e4ebcea5d582a09
|
[
"MIT"
] | null | null | null |
Briefly.ipynb
|
Devika2902/Briefly
|
dad9fb02183d6e68f550929d7e4ebcea5d582a09
|
[
"MIT"
] | null | null | null | 28.523669 | 324 | 0.528783 |
[
[
[
"## Briefly\n\n\n### __ Problem Statement __\n- Obtain news from google news articles\n- Sammarize the articles within 60 words\n- Obtain keywords from the articles\n\n\n\n\n\n\n\n\n\n\n##### Importing all the necessary libraries required to run the following code ",
"_____no_output_____"
]
],
[
[
"from gnewsclient import gnewsclient # for fetching google news\nfrom newspaper import Article # to obtain text from news articles\nfrom transformers import pipeline # to summarize text\nimport spacy # for named entity recognition",
"_____no_output_____"
]
],
[
[
"##### Load sshleifer/distilbart-cnn-12-6 model",
"_____no_output_____"
]
],
[
[
"def load_model(): \n model = pipeline('summarization')\n return model\ndata = gnewsclient.NewsClient(max_results=0)\nnlp = spacy.load(\"en_core_web_lg\") ",
"_____no_output_____"
]
],
[
[
"##### Obtain urls and it's content",
"_____no_output_____"
]
],
[
[
"def getNews(topic,location): \n count=0\n contents=[]\n titles=[]\n authors=[]\n urls=[]\n data = gnewsclient.NewsClient(language='english',location=location,topic=topic,max_results=10) \n news = data.get_news() \n for item in news:\n url=item['link']\n article = Article(url)\n try:\n article.download()\n article.parse()\n temp=item['title'][::-1]\n index=temp.find(\"-\")\n temp=temp[:index-1][::-1]\n urls.append(url)\n contents.append(article.text)\n titles.append(item['title'][:-index-1]) \n authors.append(temp)\n count+=1\n if(count==5):\n break\n except:\n continue \n return contents,titles,authors,urls ",
"_____no_output_____"
]
],
[
[
"##### Summarizes the content- minimum word limit 30 and maximum 60",
"_____no_output_____"
]
],
[
[
"def getNewsSummary(contents,summarizer): \n summaries=[] \n for content in contents:\n minimum=len(content.split())\n summaries.append(summarizer(content,max_length=60,min_length=min(30,minimum),do_sample=False,truncation=True)[0]['summary_text']) \n return summaries",
"_____no_output_____"
]
],
[
[
"##### Named Entity Recognition",
"_____no_output_____"
]
],
[
[
"# Obtain 4 keywords from content (person,organisation or geopolitical entity) \ndef generateKeyword(contents): \n keywords=[]\n words=[] \n labels=[\"PERSON\",\"ORG\",\"GPE\"]\n for content in contents:\n doc=nlp(content)\n keys=[]\n limit=0\n for ent in doc.ents:\n key=ent.text.upper()\n label=ent.label_\n if(key not in words and key not in keywords and label in labels): \n keys.append(key)\n limit+=1\n for element in key.split():\n words.append(element)\n if(limit==4):\n keywords.append(keys)\n break \n return keywords\n ",
"_____no_output_____"
]
],
[
[
"##### Displaying keywords ",
"_____no_output_____"
]
],
[
[
"def printKeywords(keywords):\n for keyword in keywords:\n print(keyword)",
"_____no_output_____"
]
],
[
[
"##### Displaying the Summary with keywords in it highlighted",
"_____no_output_____"
]
],
[
[
"def printSummary(summaries,titles):\n for summary,title in zip(summaries,titles):\n print(title.upper(),'\\n')\n print(summary)\n print(\"\\n\\n\")",
"_____no_output_____"
],
[
"summarizer=load_model() ",
"No model was supplied, defaulted to sshleifer/distilbart-cnn-12-6 (https://huggingface.co/sshleifer/distilbart-cnn-12-6)\n"
],
[
"contents,titles,authors,urls=getNews(\"Sports\",\"India\")",
"_____no_output_____"
],
[
"summaries=getNewsSummary(contents,summarizer)",
"_____no_output_____"
],
[
"keywords=generateKeyword(contents)",
"_____no_output_____"
],
[
"printKeywords(keywords)",
"['INDIA', 'SCOTLAND', 'SUPER 12', 'DUBAI']\n[\"VIRAT KOHLI'S\", 'TEAM INDIA', 'DHONI', 'UAE']\n['AUSTRALIA', 'AFGHANISTAN', 'CRICKET AUSTRALIA', 'CRICBUZZ STAFF •']\n['GARY STEAD', 'TRENT BOULT', 'COLIN DE GRANDHOMME', 'BLACKCAPS']\n['DWAYNE BRAVO', 'SRI LANKA', 'ICC', 'THE WEST INDIES']\n"
],
[
"printSummary(summaries,titles)",
"T20 WORLD CUP 2021, IND VS SCO PREVIEW: INDIA FACE SCOTLAND, EYE ANOTHER BIG WIN \n\n India take on Scotland in a Super 12 clash of the 2021 T20 World Cup in Dubai on Friday . Virat Kohli-led side beat Afghanistan by 66 runs in Abu Dhabi on Wednesday . India must win their remaining two games while maintaining high run rates and hope for New Zealand to\n\n\n\n‘THERE ARE MANY CANDIDATES BUT HE’S THE BEST': SEHWAG PICKS NEXT INDIA CAPTAIN AFTER KOHLI STEPS DOWN AT END OF T20 WC \n\n Virat Kohli set to step down as T20I captain after this World Cup in UAE and Oman . Many experts are anticipating his deputy Rohit Sharma to fill up the position . Former India opener Virender Sehwag backed Rohit as the ideal candidate .\n\n\n\nONE-OFF TEST VS AFGHANISTAN POSTPONED, CONFIRMS CRICKET AUSTRALIA | CRICBUZZ.COM - CRICBUZZ \n\n Cricket Australia's one-off Test against Afghanistan has officially been postponed . The historic Test has been hanging in the balance since the CA revealed that they wouldn't support the Taliban government's stance against the inclusion of women in sports . Instead of cancelling the Test match, CA has vowed to\n\n\n\nNEW ZEALAND INCLUDE FIVE SPINNERS FOR INDIA TOUR, TRENT BOULT OPTS OUT CITING BUBBLE FATIGUE \n\n New Zealand name five spinners in 15-man squad for two-Test series against India . Senior pacer Trent Boult and fast-bowling all-rounder Colin de Grandhomme will miss tour due to bio-bubble fatigue . Ajaz Patel, Will Somerville and\n\n\n\nT20 WORLD CUP 2021: WEST INDIES AND CHENNAI SUPER KINGS ALL-ROUNDER DWAYNE BRAVO TO RETIRE AFTER SHOWPIECE... \n\n West Indies all-rounder Dwayne Bravo will hang his boots at the end of the ICC T20 World Cup 2021 . Bravo told ICC on the post-match Facebook Live show that he will be drawing the curtains on his international career . West Indies lost to Sri Lanka by 20 runs in\n\n\n\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a0902dbcaeadaa15107be52a85027dfa008faa4
| 43,690 |
ipynb
|
Jupyter Notebook
|
Copy_of_Black_Friday.ipynb
|
SamH3pn3r/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
|
0ee3119f59d45633f5cff518d551a265f30e71fd
|
[
"MIT"
] | null | null | null |
Copy_of_Black_Friday.ipynb
|
SamH3pn3r/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
|
0ee3119f59d45633f5cff518d551a265f30e71fd
|
[
"MIT"
] | null | null | null |
Copy_of_Black_Friday.ipynb
|
SamH3pn3r/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling
|
0ee3119f59d45633f5cff518d551a265f30e71fd
|
[
"MIT"
] | null | null | null | 32.726592 | 279 | 0.301785 |
[
[
[
"<a href=\"https://colab.research.google.com/github/SamH3pn3r/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling/blob/master/Copy_of_Black_Friday.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"# Imports\nimport pandas as pd",
"_____no_output_____"
],
[
"# Url\nblack_friday_csv_link = \"https://raw.githubusercontent.com/pierretd/datasets-1/master/BlackFriday.csv\"\n# Reading in csv\nblack_friday_df = pd.read_csv(black_friday_csv_link)",
"_____no_output_____"
],
[
"black_friday_df.head(10)",
"_____no_output_____"
]
],
[
[
"### The above dataset has already been loaded in for you, answer the following questions. If you get done quickly, move onto stretch goals and super stretch goals. Work on improving your model until 8:40.",
"_____no_output_____"
],
[
"#### 1) Clean the data set and drop the Null/NaN values. Rename Product_Category_1-3 columns with an actual Product.",
"_____no_output_____"
]
],
[
[
"black_friday_df = black_friday_df.dropna()",
"_____no_output_____"
],
[
"black_friday_df.isnull().sum()",
"_____no_output_____"
],
[
"black_friday_df = black_friday_df.rename(index = str, columns={\"Product_Category_1\": \"Widscreen Tv\", \"Product_Category_2\": \"Nintendo Wii\", \"Product_Category_3\": \"Electronic Drumset\"})",
"_____no_output_____"
],
[
"black_friday_df.head()",
"_____no_output_____"
]
],
[
[
"#### 2) How many unique user_ids does the data set contain?",
"_____no_output_____"
]
],
[
[
"unique_values = black_friday_df['User_ID'].value_counts()",
"_____no_output_____"
],
[
"print(len(unique_values))",
"5868\n"
]
],
[
[
"#### 3) How many unique age brackets are in the dataset. Which Age bracket has the most entries? Which has the least?",
"_____no_output_____"
]
],
[
[
"unique_ages = black_friday_df['Age'].value_counts()",
"_____no_output_____"
],
[
"len(unique_ages)",
"_____no_output_____"
],
[
"unique_ages",
"_____no_output_____"
]
],
[
[
"#### 4) Transform the Gender categorical variable into a numerical variable. Then transform that numerical value into a Boolean.",
"_____no_output_____"
]
],
[
[
"black_friday_df['Gender'] = black_friday_df['Gender'].replace(\"M\",0)\nblack_friday_df['Gender'] = black_friday_df['Gender'].replace(\"F\",1)",
"_____no_output_____"
],
[
"black_friday_df.head(10)",
"_____no_output_____"
],
[
"black_friday_df['Gender'] = black_friday_df['Gender'].replace(0,\"F\")\nblack_friday_df['Gender'] = black_friday_df['Gender'].replace(1,\"T\")",
"_____no_output_____"
],
[
"black_friday_df.head()",
"_____no_output_____"
]
],
[
[
"#### 5) What is the average Occupation score? What is the Standard Deviation? What is the maximum and minimum value?",
"_____no_output_____"
]
],
[
[
"black_friday_df['Occupation'].describe()",
"_____no_output_____"
]
],
[
[
"#### 6) Group Age by Gender and print out a cross tab with age as the y axis",
"_____no_output_____"
]
],
[
[
"pd.crosstab(black_friday_df['Age'], black_friday_df['Gender'])",
"_____no_output_____"
]
],
[
[
"### Stretch Goal:",
"_____no_output_____"
],
[
"#### Build a linear regression model to predict the purchase amount given the other features in the data set with scikit learn.",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
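[
[
"A possible sketch for the stretch goal above (added for illustration, not part of the original notebook). It assumes the standard Black Friday column names `Purchase`, `User_ID` and `Product_ID`, one-hot encodes the remaining categorical columns, and fits scikit-learn's `LinearRegression`.",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LinearRegression\nfrom sklearn.model_selection import train_test_split\n\n# drop identifiers and the target, then encode the categorical columns numerically\nfeatures = pd.get_dummies(black_friday_df.drop(columns=['Purchase', 'User_ID', 'Product_ID']))\ntarget = black_friday_df['Purchase']\n\nX_train, X_test, y_train, y_test = train_test_split(features, target, random_state=42)\n\nmodel = LinearRegression()\nmodel.fit(X_train, y_train)\npurchase_pred = model.predict(X_test)",
"_____no_output_____"
]
],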
[
[
"### Super Stretch Goals: ",
"_____no_output_____"
],
[
"#### Plot the actual values vs the predicted values.",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
],
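[
[
"One way to approach the super stretch goal above (added for illustration), assuming `y_test` and `purchase_pred` from the regression sketch: scatter actual against predicted values and draw the perfect-prediction diagonal.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\n\nplt.scatter(y_test, purchase_pred, alpha=0.3)\nplt.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], color='red')  # perfect-prediction line\nplt.xlabel('Actual purchase')\nplt.ylabel('Predicted purchase')\nplt.show()",
"_____no_output_____"
]
],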
[
[
"#### Find a good way to measure your model's predictive power.\n",
"_____no_output_____"
]
],
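[
[
"A minimal way to quantify the model's predictive power (added for illustration), again assuming `y_test` and `purchase_pred` from the sketch above: report the root-mean-squared error together with the R-squared score.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom sklearn import metrics\n\nrmse = np.sqrt(metrics.mean_squared_error(y_test, purchase_pred))\nr2 = metrics.r2_score(y_test, purchase_pred)\nprint('RMSE:', rmse)\nprint('R^2:', r2)",
"_____no_output_____"
]
],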
[
[
"",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a0906ee26890ed18e5efdcb65a33f324c7fbc4a
| 39,612 |
ipynb
|
Jupyter Notebook
|
keras/one_char_to_one_char.ipynb
|
hywel1994/mac-workspace
|
10c20555104ce6ebba77657c7605ce2b7fa2fc34
|
[
"MIT"
] | null | null | null |
keras/one_char_to_one_char.ipynb
|
hywel1994/mac-workspace
|
10c20555104ce6ebba77657c7605ce2b7fa2fc34
|
[
"MIT"
] | null | null | null |
keras/one_char_to_one_char.ipynb
|
hywel1994/mac-workspace
|
10c20555104ce6ebba77657c7605ce2b7fa2fc34
|
[
"MIT"
] | null | null | null | 33.769821 | 260 | 0.455897 |
[
[
[
"import numpy\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.layers import LSTM\nfrom keras.utils import np_utils",
"/home/hywel/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\nUsing TensorFlow backend.\n"
],
[
"# define the raw dataset\nalphabet = \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\n# create mapping of characters to integers (0-25) and the reverse\nchar_to_int = dict((c, i) for i, c in enumerate(alphabet))\nint_to_char = dict((i, c) for i, c in enumerate(alphabet))",
"_____no_output_____"
],
[
"# prepare the dataset of input to output pairs encoded as integers\nseq_length = 1\ndataX = []\ndataY = []\nfor i in range(0, len(alphabet) - seq_length, 1):\n seq_in = alphabet[i:i + seq_length]\n seq_out = alphabet[i + seq_length]\n dataX.append([char_to_int[char] for char in seq_in])\n dataY.append(char_to_int[seq_out])\n print (seq_in, '->', seq_out)",
"A -> B\nB -> C\nC -> D\nD -> E\nE -> F\nF -> G\nG -> H\nH -> I\nI -> J\nJ -> K\nK -> L\nL -> M\nM -> N\nN -> O\nO -> P\nP -> Q\nQ -> R\nR -> S\nS -> T\nT -> U\nU -> V\nV -> W\nW -> X\nX -> Y\nY -> Z\n"
],
[
"# reshape X to be [samples, time steps, features]\nX = numpy.reshape(dataX, (len(dataX), seq_length, 1))\ny = np_utils.to_categorical(dataY)\n# create and fit the model\nmodel = Sequential()\nmodel.add(LSTM(32, input_shape=(X.shape[1], X.shape[2])))\nmodel.add(Dense(y.shape[1], activation='softmax'))\nmodel.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])\nmodel.fit(X, y, nb_epoch=500, batch_size=1, verbose=2)\n",
"/home/hywel/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:9: UserWarning: The `nb_epoch` argument in `fit` has been renamed `epochs`.\n if __name__ == '__main__':\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code"
]
] |
4a09079c986a3a3574c0727800a5d809c53e1346
| 95,318 |
ipynb
|
Jupyter Notebook
|
Sell Prediction.ipynb
|
bolaram/Cross-Sell-Prediction
|
55fcc652c888b86751277d063d04f32bde4ef5fb
|
[
"MIT"
] | null | null | null |
Sell Prediction.ipynb
|
bolaram/Cross-Sell-Prediction
|
55fcc652c888b86751277d063d04f32bde4ef5fb
|
[
"MIT"
] | null | null | null |
Sell Prediction.ipynb
|
bolaram/Cross-Sell-Prediction
|
55fcc652c888b86751277d063d04f32bde4ef5fb
|
[
"MIT"
] | null | null | null | 88.012927 | 66,584 | 0.777408 |
[
[
[
"# ♠ Sell Prediction ♠",
"_____no_output_____"
],
[
"Importing necessary files",
"_____no_output_____"
]
],
[
[
"# Importing pandas to read file\nimport pandas as pd",
"_____no_output_____"
],
[
"# Reading csv file directly from url\ndata = pd.read_csv(\"http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv\", index_col = 0)\n\n# Display data\ndata",
"_____no_output_____"
],
[
"# Checking the shape of data (Rows, Column)\ndata.shape",
"_____no_output_____"
]
],
[
[
"### About the data ->>\n\nTV: advertising dollars spent on TV for a single product in a given market (in thousands of dollars)\nRadio: advertising dollars spent on Radio\nNewspaper: advertising dollars spent on Newspaper\n \nResponse:\nSales: sales of a single product in a given market (in thousands of items)",
"_____no_output_____"
]
],
[
[
"# Display the relationship of sales to TV, Radio, Newspaper",
"_____no_output_____"
]
],
[
[
"# Importing seaborn for data visualization\nimport seaborn as sns\n\n# for plotting within notebook\n% matplotlib inline",
"_____no_output_____"
],
[
"sns.pairplot(data, x_vars=('TV','radio','newspaper'), y_vars='sales', height = 5, aspect = .8, kind='reg');\n\n# This is just a warning that in future this module will moderate.",
"C:\\Users\\NILOY\\Anaconda3\\lib\\site-packages\\scipy\\stats\\stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval\n"
]
],
[
[
"# Storing data separately",
"_____no_output_____"
]
],
[
[
"# Create a python list\nfeature_cols = [\"TV\", \"radio\", \"newspaper\"]\nresponse_col = [\"sales\"]\n\n# Storing values into X\nX = data[feature_cols]\n\n# Display X\nX.head()",
"_____no_output_____"
],
[
"# Display the shape of features\nX.shape",
"_____no_output_____"
],
[
"# Storing response_col in y\ny = data[response_col]\n\n# Display the sales column\ny.head()",
"_____no_output_____"
],
[
"# Display the shape of response column\ny.shape",
"_____no_output_____"
]
],
[
[
"# Splitting X and y into train and test",
"_____no_output_____"
]
],
[
[
"from sklearn.cross_validation import train_test_split\nsplit = .30\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 42, test_size = split)",
"_____no_output_____"
],
[
"# See if the split works or not\nprint(\"X_Train: {0}\".format(X_train.shape))\nprint(\"X_test: {0}\".format(X_test.shape))\nprint(\"y_train: {0}\".format(y_train.shape))\nprint(\"y_test: {0}\".format(y_test.shape))",
"X_Train: (140, 2)\nX_test: (60, 2)\ny_train: (140, 1)\ny_test: (60, 1)\n"
]
],
[
[
"# Import Linear Regression",
"_____no_output_____"
]
],
[
[
"# Import model\nfrom sklearn.linear_model import LinearRegression\n\n# Store into a classifier\nclf = LinearRegression()",
"_____no_output_____"
],
[
"# Fit the model by X_train, y_test\nclf.fit(X_train, y_train)",
"_____no_output_____"
],
[
"# Prediction\ny_pred = clf.predict(X_test)",
"_____no_output_____"
]
],
[
[
"# Accuracy",
"_____no_output_____"
]
],
[
[
"# Error_evaluation\nimport numpy as np\nfrom sklearn import metrics\n\nprint(\"Error: {0}\".format(np.sqrt(metrics.mean_squared_error(y_test, y_pred))))",
"Error: 1.9154756731764258\n"
]
],
[
[
"# We can also minimize this error by removing newspaper column. As newspaper has week linear relation to sales.",
"_____no_output_____"
]
],
[
[
"# Feature columns\nf_col = [\"TV\", \"radio\"]\nr_col = [\"sales\"]\n\n# Store into X and y\nX = data[f_col]\ny = data[r_col]\n\n# Split the data\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 1)\n\n# Fit the model \nclf.fit(X_train, y_train)\n\n# Prediction\ny_pred = clf.predict(X_test)\n\n# Error evaluation\nprint(\"New Error: {0}\".format(np.sqrt(metrics.mean_squared_error(y_test, y_pred))))",
"New Error: 1.3879034699382886\n"
]
],
[
[
"So as you can see that our error has decreased 1.92 to 1.39. And that is a good news for our model. So if we spent more money in TV and radio, instead of newspaper then sells will go high.",
"_____no_output_____"
],
[
"# Thank You",
"_____no_output_____"
],
[
"© NELOY CHANDRA BARDHAN",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"raw",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"raw"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
4a090cc88d6f92afa027354b2d906a56bccb9926
| 15,811 |
ipynb
|
Jupyter Notebook
|
features/magic-feature-v2.ipynb
|
Syhen/quora-question-pairs
|
377dda3debeefdaacc5a1a5aaee94b93fbc1106b
|
[
"MIT"
] | null | null | null |
features/magic-feature-v2.ipynb
|
Syhen/quora-question-pairs
|
377dda3debeefdaacc5a1a5aaee94b93fbc1106b
|
[
"MIT"
] | null | null | null |
features/magic-feature-v2.ipynb
|
Syhen/quora-question-pairs
|
377dda3debeefdaacc5a1a5aaee94b93fbc1106b
|
[
"MIT"
] | null | null | null | 29.608614 | 140 | 0.423692 |
[
[
[
"from collections import defaultdict\n\nimport numpy as np\nimport pandas as pd",
"_____no_output_____"
],
[
"train_orig = pd.read_csv('../datasets/train.csv')\ntest_orig = pd.read_csv('../datasets/test_unique.csv')",
"_____no_output_____"
],
[
"ques = pd.concat([train_orig[['question1', 'question2']], test_orig[['question1', 'question2']]], axis=0).reset_index(drop='index')\nques.shape",
"_____no_output_____"
],
[
"q_dict = defaultdict(set)\nfor i in range(ques.shape[0]):\n q_dict[ques.question1[i]].add(ques.question2[i])\n q_dict[ques.question2[i]].add(ques.question1[i])",
"_____no_output_____"
],
[
"def q1_q2_intersect(row):\n return(len(set(q_dict[row['question1']]).intersection(set(q_dict[row['question2']]))))",
"_____no_output_____"
],
[
"train_orig['q1_q2_intersect'] = train_orig.apply(q1_q2_intersect, axis=1, raw=True)\ntest_orig['q1_q2_intersect'] = test_orig.apply(q1_q2_intersect, axis=1, raw=True)",
"_____no_output_____"
],
[
"train_orig.corr()",
"_____no_output_____"
],
[
"train_orig.head()",
"_____no_output_____"
],
[
"test_orig.head()",
"_____no_output_____"
],
[
"columns = ['test_id', 'q1_q2_intersect']",
"_____no_output_____"
],
[
"train_orig[['id', 'q1_q2_intersect']].to_csv(\"../datasets/train_magic_feature_v2.csv\", index=False)\ntest_orig[columns].to_csv(\"../datasets/test_magic_feature_v2.csv\", index=False)",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a09376e13e4fe69bae290f5f9ca1bbf72a155aa
| 97,150 |
ipynb
|
Jupyter Notebook
|
notebooks/chapter5.ipynb
|
ihiroky/statistics_for_marketing
|
84ec751dda819e80dbc3dab71ffae6b03a917b81
|
[
"MIT"
] | null | null | null |
notebooks/chapter5.ipynb
|
ihiroky/statistics_for_marketing
|
84ec751dda819e80dbc3dab71ffae6b03a917b81
|
[
"MIT"
] | null | null | null |
notebooks/chapter5.ipynb
|
ihiroky/statistics_for_marketing
|
84ec751dda819e80dbc3dab71ffae6b03a917b81
|
[
"MIT"
] | null | null | null | 50.180785 | 20,020 | 0.620731 |
[
[
[
"import pandas as pd\nimport matplotlib.pyplot as plt\nfrom sklearn.linear_model import LinearRegression\n\n%matplotlib inline",
"_____no_output_____"
],
[
"df_id_pos = pd.read_excel('978-4-274-22101-9.xlsx', 'ID付きPOSデータ(POSデータ)')\ndf_id_pos.head()",
"_____no_output_____"
]
],
[
[
"# 5 売り場の評価",
"_____no_output_____"
],
[
"## 5.1 集計による売上の評価",
"_____no_output_____"
]
],
[
[
"df7 = df_id_pos[df_id_pos['日'] <= 7][['大カテゴリ名', '日']]",
"_____no_output_____"
],
[
"# 表5.1 日別・大カテゴリ名別販売個数\ntable_5_1 = df7.groupby(['大カテゴリ名', '日']).size().unstack()\ntable_5_1 = table_5_1.fillna(0)\ntable_5_1['合計'] = table_5_1.sum(axis=1)\ntable_5_1.loc['合計'] = table_5_1.sum(axis=0)\ntable_5_1",
"_____no_output_____"
],
[
"# 表5.2 大カテゴリの変動係数(標準偏差/算術平均)\ntable_5_2 = table_5_1.copy()\ntable_5_2 = table_5_2.drop(['合計'], axis=1).drop(['合計'], axis=0)\ntable_5_2['変動係数'] = table_5_2.std(axis=1) / table_5_2.mean(axis=1)\ntable_5_2 = table_5_2.drop([i for i in range(1, 8)], axis=1)\ntable_5_2",
"_____no_output_____"
],
[
"# 表5.3 即席食品のドリルダウン\ndf7c = df[(df['日'] <= 7) & (df['大カテゴリ名'] == '即席食品')][['中カテゴリ名', '日']]\ntable_5_3 = df7c.groupby(['中カテゴリ名', '日']).size().unstack()\ntable_5_3 = table_5_3.fillna(0)\ntable_5_3",
"_____no_output_____"
]
],
[
[
"## 5.2 売り場の計数管理",
"_____no_output_____"
]
],
[
[
"df.head()",
"_____no_output_____"
],
[
"# 表5.4 売上の因数分解\n\npd.options.display.float_format = '{:.2f}'.format\n\n\ndef add_sales_factorize (df, col, df0):\n sales = df0[['税抜価格']].sum().values[0]\n values = [\n sales,\n df0['顧客ID'].nunique(),\n sales / unique_customer,\n df0['レシートNo'].nunique() / unique_customer,\n sales / df0['レシートNo'].nunique(),\n df0.groupby(['レシートNo']).size().mean(),\n sales / len(df0)\n ]\n print(values)\n df[col] = values\n\ntable_5_4 = pd.DataFrame(index=[\n '売上(円)',\n 'ユニーク顧客(人)',\n '1人あたり購買金額(円)',\n '来店頻度(回)',\n '1回あたり客単価(円)',\n '買上点数(点)',\n '買上商品単価(円)'\n])\ndf_former_15 = df_id_pos[df_id_pos['日'] <= 15]\ndf_later_15 = df_id_pos[df_id_pos['日'] > 15]\nadd_sales_factorize(table_5_4, '1〜15日', df_former_15)\nadd_sales_factorize(table_5_4, '16〜30日', df_later_15)\ntable_5_4",
"[6953035, 877, 7928.204104903079, 3.9566704675028506, 2003.7564841498559, 10.057925072046109, 199.22165553995586]\n[6845840, 871, 7805.974914481186, 4.022805017103763, 1940.4308390022675, 9.687641723356009, 200.2996079349289]\n"
],
[
"# 表5.5 点数PI値上位20小カテゴリ\ncustomers = df['顧客ID'].nunique()\ntable_5_5_item = df_id_pos.groupby(['小カテゴリ名']).size() / customers * 1000\ntable_5_5_item = table_5_5_item.sort_values(ascending=False)\ntable_5_5_item = table_5_5_item.iloc[:20]\nprint('点数PI上位20位\\n', table_5_5_item)\n\nprint('')\n\n# 表5.5 金額PI値上位20小カテゴリ\ntable_5_5_price = df_id_pos.groupby(['小カテゴリ名'])['税抜価格'].sum() / customers * 1000\ntable_5_5_price = table_5_5_price.sort_values(ascending=False)\ntable_5_5_price = table_5_5_price.iloc[:20]\nprint('金額PI上位20位\\n', table_5_5_price)",
"点数PI上位20位\n\n金額PI上位20位\n 小カテゴリ名\nブランド豚 414025.00\nうるち米 402690.00\n寿司惣菜 383485.00\nパン惣菜 364365.00\n揚物惣菜 319205.00\nその他刺身 279675.00\n水産塩干し 210620.00\n牛乳 205075.00\n漬物 182335.00\nひき肉 179075.00\n茶系飲料 176380.00\n新ジャンル 171235.00\nヨーグルト 169200.00\n鶏卵 166000.00\n国産鶏 160760.00\n菓子パン 157760.00\n米飯惣菜 156605.00\nサラダ惣菜 155195.00\n弁当 152900.00\nビール 147500.00\nName: 税抜価格, dtype: float64\n"
]
],
[
[
"## 5.3 ABC分析による重要カテゴリの評価",
"_____no_output_____"
],
[
"### 5.3.1 ABC分析による分析の方法",
"_____no_output_____"
]
],
[
[
"df_id_pos.head()",
"_____no_output_____"
],
[
"# 表5.6 大カテゴリのABC分析\ndef rank(s):\n if s <= 60:\n return 'A'\n elif s <= 90:\n return 'B'\n else:\n return 'C'\n \ntotal = len(df_id_pos)\ntable_5_6 = df_id_pos.groupby(['大カテゴリ名']).size()\ntable_5_6 = table_5_6.sort_values(ascending=False)\ntable_5_6 = pd.DataFrame(table_5_6, columns = ['販売個数'])\ntable_5_6['構成比率'] = table_5_6['販売個数'] / len(df_id_pos) * 100\ntable_5_6['累積構成比率'] = table_5_6['構成比率'].cumsum()\ntable_5_6['ランク'] = table_5_6['累積構成比率'].map(rank)\ntable_5_6",
"_____no_output_____"
],
[
"# 図5.4 大カテゴリのパレート図\ngraph_5_4 = table_5_6\nax = graph_5_4['構成比率'].plot.bar(color='green')\ngraph_5_4['累積構成比率'].plot(ax=ax)",
"_____no_output_____"
],
[
"# 図5.5 顧客別購買金額のパレート図\n\npd.options.display.float_format = '{:.3f}'.format\n\ngraph_5_5 = df_id_pos.groupby(['顧客ID'])['税抜価格'].sum()\ngraph_5_5 = graph_5_5.sort_values(ascending=False)\ngraph_5_5 = pd.DataFrame(graph_5_5)\ngraph_5_5['構成比率'] = graph_5_5['税抜価格'] / graph_5_5['税抜価格'].sum() * 100\ngraph_5_5['累積構成比率'] = graph_5_5['構成比率'].cumsum()\ngraph_5_5['順位'] = range(1, df_id_pos['顧客ID'].nunique() + 1)\ngraph_5_5 = graph_5_5.set_index('順位')\n\nxticks=[1, 300, 600, 900, 1200]\nax = graph_5_5['構成比率'].plot(color='green', xticks=xticks, legend=True)\ngraph_5_5['累積構成比率'].plot(ax=ax.twinx(), xticks=xticks, legend=True)",
"_____no_output_____"
]
],
[
[
"### 5.3.2 Gini係数",
"_____no_output_____"
]
],
[
[
"# 表 5.4のGini係数\ncumsum = graph_5_4['累積構成比率'] / 100\ncumsum = cumsum.values.tolist()\ncumsum.insert(0, 0) # 左端は0スタート\nn = len(cumsum)\nspan = 1 / (n - 1) # 植木算\ns = 0\nfor i in range(1, n):\n s += (cumsum[i] + cumsum[i - 1] ) * span / 2\ngini = (s - 0.5) / 0.5\ngini",
"_____no_output_____"
]
],
[
[
"## 5.4 吸引力モデルによる商圏分析",
"_____no_output_____"
],
[
"吸引力モデルの一種で商圏評価のモデルであるバフモデルにおいては、顧客$i$が店舗$j$を選択確率する確率$p_{ij}$は店舗の面積$S_j$と顧客$i$と店舗$j$の距離$d_{ij}$を用いて\n$$\np_{ij} = \\frac{ \\frac{S_j}{d^\\lambda_{ij}} }{ \\sum^m_{k=1} \\frac{S_k}{d^\\lambda_{ik}} }\n$$\nと表される。$\\lambda$は交通抵抗パラメータとよばれる。",
"_____no_output_____"
]
],
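[
[
"A small illustrative sketch (added, not part of the original notebook) of the choice probability above with an explicit $\\lambda$; the worked example that follows uses squared distances, i.e. $\\lambda = 2$. The store areas reuse the values defined below, while the customer distances are made-up numbers.",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\ndef huff_probabilities(S, d, lam=2.0):\n    # S: floor areas of the m candidate stores, d: distances from one customer to each store\n    attraction = S / d ** lam\n    return attraction / attraction.sum()\n\n# made-up distances from one customer to the three stores used below\nprint(huff_probabilities(np.array([3000., 5000., 10000.]), np.array([4.2, 6.7, 8.1])))",
"_____no_output_____"
]
],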
[
[
"# 店舗の座標と面積\nshops = pd.DataFrame([[-3, -3, 3000], [0, 3, 5000], [5, -5, 10000]],\n index=['店舗A', '店舗B', '店舗C'], columns=['x', 'y', 'S'])\nshops",
"_____no_output_____"
],
[
"# 住宅区域の座標と人数\nregidents = pd.DataFrame([\n [-8, 5, 1000],\n [-7, -8, 1000],\n [-3, 4, 3000],\n [3, -3, 5000],\n [7, 7, 4000],\n [7, 0, 6000]\n], index=['地域1', '地域2', '地域3', '地域4', '地域5', '地域6'],\n columns=['x', 'y', 'N'])\nregidents",
"_____no_output_____"
],
[
"# 店舗選択確率の比から期待集客数を求める\nhuff = pd.DataFrame(index=regidents.index, columns=shops.index)\n\n# ハフモデル\nfor s in huff.columns:\n huff[s] = shops.loc[s].S / ((regidents.x - shops.loc[s].x) ** 2 + (regidents.y - shops.loc[s].y) ** 2)\n# 地域毎に割合化\nfor s in huff.index:\n huff.loc[s] = huff.loc[s] / huff.loc[s].sum() #\nhuff['N'] = regidents.N\nhuff",
"_____no_output_____"
],
[
"print('期待集客数')\nfor c in shops.index:\n # 地域の人数 * 店舗に来る割合\n \n print(c, (huff.N * huff[c]).sum())",
"期待集客数\n店舗A 1985.345750497359\n店舗B 6506.338254264321\n店舗C 11508.315995238321\n"
]
],
[
[
"## 5.5 回帰分析による売上予測",
"_____no_output_____"
],
[
"### 5.5.1 単回帰分析",
"_____no_output_____"
]
],
[
[
"# 日々の商品単価を説明変数、来店客数を目的変数とした回帰分析\navg_unit_price = df_id_pos.groupby('日')['税抜単価'].mean().values.reshape(-1, 1)\nvisits = df_id_pos.groupby('日')['顧客ID'].nunique().values\n\nmodel = LinearRegression()\nmodel.fit(avg_unit_price, visits)\nprint('回帰係数', model.coef_)\nprint('切片', model.intercept_)\nprint('決定係数', model.score(avg_unit_price, visits)) # 小さすぎない?\n\nplt.scatter(avg_unit_price, visits)\nplt.plot(avg_unit_price, model.predict(avg_unit_price), color='red')",
"回帰係数 [-0.54664566]\n切片 332.82098664551256\n決定係数 0.08758324438358511\n"
]
],
[
[
"### 5.5.2 重回帰分析",
"_____no_output_____"
]
],
[
[
"# 日々の商品単価、曜日(日 mod 7)を説明変数、来店客数を目的変数とした回帰分析\nX = pd.DataFrame()\nX['税抜単価'] = df_id_pos.groupby('日')['税抜単価'].mean()\nX['曜日'] = df_id_pos['日'].unique() % 7\nX = X.loc[:].values\ny = df_id_pos.groupby('日')['顧客ID'].nunique().values\n\nmodel = LinearRegression()\nmodel.fit(X, y)\n\nprint('回帰係数', model.coef_)\nprint('切片', model.intercept_)\nprint('決定係数', model.score(X, y)) # 小さすぎない?",
"回帰係数 [-0.72728059 -4.10522186]\n切片 377.62310321494215\n決定係数 0.23450838289377596\n"
],
[
"1",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4a093e06f0d391a7a667a8be2d9d8bb86897e7a6
| 41,574 |
ipynb
|
Jupyter Notebook
|
docs_jupyter/peppro.ipynb
|
mr-c/bulker
|
ef851f484248374692fa546164879499f32933b3
|
[
"BSD-2-Clause"
] | 14 |
2019-09-04T22:16:25.000Z
|
2022-02-22T08:28:37.000Z
|
docs_jupyter/peppro.ipynb
|
mr-c/bulker
|
ef851f484248374692fa546164879499f32933b3
|
[
"BSD-2-Clause"
] | 65 |
2019-08-02T14:58:55.000Z
|
2021-12-08T17:35:52.000Z
|
docs_jupyter/peppro.ipynb
|
mr-c/bulker
|
ef851f484248374692fa546164879499f32933b3
|
[
"BSD-2-Clause"
] | 2 |
2019-08-16T18:50:06.000Z
|
2020-07-23T15:12:44.000Z
| 47.731343 | 516 | 0.62046 |
[
[
[
"empty"
]
]
] |
[
"empty"
] |
[
[
"empty"
]
] |
4a093e12dc7fbae202167f4d4f47117753279359
| 34,398 |
ipynb
|
Jupyter Notebook
|
mlmodels/model_dev/nlp_tfflow/text-similarity/5.transformer-crossentropy.ipynb
|
gitter-badger/mlmodels
|
f08cc9b6ec202d4ad25ecdda2f44487da387569d
|
[
"MIT"
] | 16 |
2020-07-21T17:24:55.000Z
|
2021-11-25T00:26:49.000Z
|
text-similarity/5.transformer-crossentropy.ipynb
|
eridgd/NLP-Models-Tensorflow
|
d46e746cd038f25e8ee2df434facbe12e31576a1
|
[
"MIT"
] | null | null | null |
text-similarity/5.transformer-crossentropy.ipynb
|
eridgd/NLP-Models-Tensorflow
|
d46e746cd038f25e8ee2df434facbe12e31576a1
|
[
"MIT"
] | 27 |
2020-06-04T13:51:29.000Z
|
2022-03-22T19:44:06.000Z
| 38.262514 | 386 | 0.517298 |
[
[
[
"# !wget http://qim.fs.quoracdn.net/quora_duplicate_questions.tsv",
"_____no_output_____"
],
[
"import tensorflow as tf\nimport re\nimport numpy as np\nimport pandas as pd\nfrom tqdm import tqdm\nimport collections\nfrom unidecode import unidecode\nfrom sklearn.cross_validation import train_test_split",
"/home/jupyter/.local/lib/python3.6/site-packages/sklearn/cross_validation.py:41: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.\n \"This module will be removed in 0.20.\", DeprecationWarning)\n"
],
[
"def build_dataset(words, n_words):\n count = [['PAD', 0], ['GO', 1], ['EOS', 2], ['UNK', 3], ['SEPARATOR', 4]]\n count.extend(collections.Counter(words).most_common(n_words - 1))\n dictionary = dict()\n for word, _ in count:\n dictionary[word] = len(dictionary)\n data = list()\n unk_count = 0\n for word in words:\n index = dictionary.get(word, 0)\n if index == 0:\n unk_count += 1\n data.append(index)\n count[0][1] = unk_count\n reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))\n return data, count, dictionary, reversed_dictionary\n\ndef str_idx(corpus, dic, maxlen, UNK=3):\n X = np.zeros((len(corpus),maxlen))\n for i in range(len(corpus)):\n for no, k in enumerate(corpus[i][:maxlen][::-1]):\n val = dic[k] if k in dic else UNK\n X[i,-1 - no]= val\n return X\n\ndef cleaning(string):\n string = unidecode(string).replace('.', ' . ').replace(',', ' , ')\n string = re.sub('[^A-Za-z\\- ]+', ' ', string)\n string = re.sub(r'[ ]+', ' ', string).strip()\n return string.lower()",
"_____no_output_____"
],
[
"df = pd.read_csv('quora_duplicate_questions.tsv', delimiter='\\t').dropna()\ndf.head()",
"_____no_output_____"
],
[
"left, right, label = df['question1'].tolist(), df['question2'].tolist(), df['is_duplicate'].tolist()",
"_____no_output_____"
],
[
"np.unique(label, return_counts = True)",
"_____no_output_____"
],
[
"for i in tqdm(range(len(left))):\n left[i] = cleaning(left[i])\n right[i] = cleaning(right[i])\n left[i] = left[i] + ' SEPARATOR ' + right[i]",
"100%|██████████| 404287/404287 [00:07<00:00, 52786.23it/s]\n"
],
[
"concat = ' '.join(left).split()\nvocabulary_size = len(list(set(concat)))\ndata, count, dictionary, rev_dictionary = build_dataset(concat, vocabulary_size)\nprint('vocab from size: %d'%(vocabulary_size))\nprint('Most common words', count[4:10])\nprint('Sample data', data[:10], [rev_dictionary[i] for i in data[:10]])",
"vocab from size: 87662\nMost common words [['SEPARATOR', 4], ('SEPARATOR', 404287), ('the', 377593), ('what', 324635), ('is', 269934), ('i', 223893)]\nSample data [6, 7, 5, 1286, 63, 1286, 2502, 11, 565, 12] ['what', 'is', 'the', 'step', 'by', 'step', 'guide', 'to', 'invest', 'in']\n"
],
[
"def position_encoding(inputs):\n T = tf.shape(inputs)[1]\n repr_dim = inputs.get_shape()[-1].value\n pos = tf.reshape(tf.range(0.0, tf.to_float(T), dtype=tf.float32), [-1, 1])\n i = np.arange(0, repr_dim, 2, np.float32)\n denom = np.reshape(np.power(10000.0, i / repr_dim), [1, -1])\n enc = tf.expand_dims(tf.concat([tf.sin(pos / denom), tf.cos(pos / denom)], 1), 0)\n return tf.tile(enc, [tf.shape(inputs)[0], 1, 1])\n\ndef layer_norm(inputs, epsilon=1e-8):\n mean, variance = tf.nn.moments(inputs, [-1], keep_dims=True)\n normalized = (inputs - mean) / (tf.sqrt(variance + epsilon))\n params_shape = inputs.get_shape()[-1:]\n gamma = tf.get_variable('gamma', params_shape, tf.float32, tf.ones_initializer())\n beta = tf.get_variable('beta', params_shape, tf.float32, tf.zeros_initializer())\n return gamma * normalized + beta\n\ndef self_attention(inputs, is_training, num_units, num_heads = 8, activation=None):\n T_q = T_k = tf.shape(inputs)[1]\n Q_K_V = tf.layers.dense(inputs, 3*num_units, activation)\n Q, K, V = tf.split(Q_K_V, 3, -1)\n Q_ = tf.concat(tf.split(Q, num_heads, axis=2), 0)\n K_ = tf.concat(tf.split(K, num_heads, axis=2), 0)\n V_ = tf.concat(tf.split(V, num_heads, axis=2), 0)\n align = tf.matmul(Q_, K_, transpose_b=True)\n align *= tf.rsqrt(tf.to_float(K_.get_shape()[-1].value))\n paddings = tf.fill(tf.shape(align), float('-inf'))\n lower_tri = tf.ones([T_q, T_k])\n lower_tri = tf.linalg.LinearOperatorLowerTriangular(lower_tri).to_dense()\n masks = tf.tile(tf.expand_dims(lower_tri,0), [tf.shape(align)[0],1,1])\n align = tf.where(tf.equal(masks, 0), paddings, align)\n align = tf.nn.softmax(align)\n align = tf.layers.dropout(align, 0.1, training=is_training) \n x = tf.matmul(align, V_)\n x = tf.concat(tf.split(x, num_heads, axis=0), 2)\n x += inputs\n x = layer_norm(x)\n return x\n\ndef ffn(inputs, hidden_dim, activation=tf.nn.relu):\n x = tf.layers.conv1d(inputs, 4* hidden_dim, 1, activation=activation) \n x = tf.layers.conv1d(x, hidden_dim, 1, activation=None)\n x += inputs\n x = layer_norm(x)\n return x\n\nclass Model:\n def __init__(self, size_layer, num_layers, embedded_size,\n dict_size, learning_rate, dropout, kernel_size = 5):\n \n def cnn(x, scope):\n x += position_encoding(x)\n with tf.variable_scope(scope, reuse = tf.AUTO_REUSE):\n for n in range(num_layers):\n with tf.variable_scope('attn_%d'%i,reuse=tf.AUTO_REUSE):\n x = self_attention(x, True, size_layer)\n with tf.variable_scope('ffn_%d'%i, reuse=tf.AUTO_REUSE):\n x = ffn(x, size_layer)\n \n with tf.variable_scope('logits', reuse=tf.AUTO_REUSE):\n return tf.layers.dense(x, 2)[:, -1]\n \n self.X = tf.placeholder(tf.int32, [None, None])\n self.Y = tf.placeholder(tf.int32, [None])\n encoder_embeddings = tf.Variable(tf.random_uniform([dict_size, embedded_size], -1, 1))\n embedded_left = tf.nn.embedding_lookup(encoder_embeddings, self.X)\n \n self.logits = cnn(embedded_left, 'left')\n self.cost = tf.reduce_mean(\n tf.nn.sparse_softmax_cross_entropy_with_logits(\n logits = self.logits, labels = self.Y\n )\n )\n \n self.optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(self.cost)\n correct_pred = tf.equal(\n tf.argmax(self.logits, 1, output_type = tf.int32), self.Y\n )\n self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))",
"_____no_output_____"
],
[
"size_layer = 128\nnum_layers = 4\nembedded_size = 128\nlearning_rate = 1e-4\nmaxlen = 50\nbatch_size = 128\ndropout = 0.8",
"_____no_output_____"
],
[
"from sklearn.cross_validation import train_test_split\n\nvectors = str_idx(left, dictionary, maxlen)\ntrain_X, test_X, train_Y, test_Y = train_test_split(vectors, label, test_size = 0.2)",
"_____no_output_____"
],
[
"tf.reset_default_graph()\nsess = tf.InteractiveSession()\nmodel = Model(size_layer,num_layers,embedded_size,len(dictionary),learning_rate,dropout)\nsess.run(tf.global_variables_initializer())",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nColocations handled automatically by placer.\nWARNING:tensorflow:From <ipython-input-9-3c97a48061fd>:4: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.cast instead.\nWARNING:tensorflow:From <ipython-input-9-3c97a48061fd>:20: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse keras.layers.dense instead.\nWARNING:tensorflow:From <ipython-input-9-3c97a48061fd>:33: dropout (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse keras.layers.dropout instead.\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/core.py:143: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.\nInstructions for updating:\nPlease use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.\nWARNING:tensorflow:From <ipython-input-9-3c97a48061fd>:41: conv1d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse keras.layers.conv1d instead.\nWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.cast instead.\n"
],
[
"import time\n\nEARLY_STOPPING, CURRENT_CHECKPOINT, CURRENT_ACC, EPOCH = 3, 0, 0, 0\n\nwhile True:\n lasttime = time.time()\n if CURRENT_CHECKPOINT == EARLY_STOPPING:\n print('break epoch:%d\\n' % (EPOCH))\n break\n\n train_acc, train_loss, test_acc, test_loss = 0, 0, 0, 0\n pbar = tqdm(range(0, len(train_X), batch_size), desc='train minibatch loop')\n for i in pbar:\n batch_x = train_X[i:min(i+batch_size,train_X.shape[0])]\n batch_y = train_Y[i:min(i+batch_size,train_X.shape[0])]\n acc, loss, _ = sess.run([model.accuracy, model.cost, model.optimizer], \n feed_dict = {model.X : batch_x,\n model.Y : batch_y})\n assert not np.isnan(loss)\n train_loss += loss\n train_acc += acc\n pbar.set_postfix(cost=loss, accuracy = acc)\n \n pbar = tqdm(range(0, len(test_X), batch_size), desc='test minibatch loop')\n for i in pbar:\n batch_x = test_X[i:min(i+batch_size,test_X.shape[0])]\n batch_y = test_Y[i:min(i+batch_size,test_X.shape[0])]\n acc, loss = sess.run([model.accuracy, model.cost], \n feed_dict = {model.X : batch_x,\n model.Y : batch_y})\n test_loss += loss\n test_acc += acc\n pbar.set_postfix(cost=loss, accuracy = acc)\n \n train_loss /= (len(train_X) / batch_size)\n train_acc /= (len(train_X) / batch_size)\n test_loss /= (len(test_X) / batch_size)\n test_acc /= (len(test_X) / batch_size)\n \n if test_acc > CURRENT_ACC:\n print(\n 'epoch: %d, pass acc: %f, current acc: %f'\n % (EPOCH, CURRENT_ACC, test_acc)\n )\n CURRENT_ACC = test_acc\n CURRENT_CHECKPOINT = 0\n else:\n CURRENT_CHECKPOINT += 1\n \n print('time taken:', time.time()-lasttime)\n print('epoch: %d, training loss: %f, training acc: %f, valid loss: %f, valid acc: %f\\n'%(EPOCH,train_loss,\n train_acc,test_loss,\n test_acc))",
"train minibatch loop: 100%|██████████| 2527/2527 [00:54<00:00, 46.20it/s, accuracy=0.663, cost=0.652]\ntest minibatch loop: 100%|██████████| 632/632 [00:05<00:00, 110.07it/s, accuracy=0.644, cost=0.674]\ntrain minibatch loop: 0%| | 5/2527 [00:00<00:54, 46.61it/s, accuracy=0.648, cost=0.617]"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a094625878076e8d0a57002cebfafb1263b4434
| 423,590 |
ipynb
|
Jupyter Notebook
|
scripts/optest.ipynb
|
ProtolabSBRE/PyOpenPose
|
83a406fffeec7bd6ac0fdafefa256594c0ab5037
|
[
"BSD-3-Clause"
] | 300 |
2017-08-03T12:20:17.000Z
|
2021-12-20T13:23:44.000Z
|
scripts/optest.ipynb
|
ProtolabSBRE/PyOpenPose
|
83a406fffeec7bd6ac0fdafefa256594c0ab5037
|
[
"BSD-3-Clause"
] | 83 |
2017-08-16T07:24:11.000Z
|
2021-07-27T15:57:45.000Z
|
scripts/optest.ipynb
|
ProtolabSBRE/PyOpenPose
|
83a406fffeec7bd6ac0fdafefa256594c0ab5037
|
[
"BSD-3-Clause"
] | 69 |
2017-08-09T19:37:38.000Z
|
2022-03-14T08:30:27.000Z
| 2,161.173469 | 418,854 | 0.950159 |
[
[
[
"%pylab inline\nimport cv2\nimport os\nimport PyOpenPose as OP\nOPENPOSE_ROOT = os.environ[\"OPENPOSE_ROOT\"]",
"Populating the interactive namespace from numpy and matplotlib\n"
],
[
"img = cv2.imread(\"galloping knights.jpg\")",
"_____no_output_____"
],
[
"op = OP.OpenPose((656, 368), (368, 368), (1280, 720), \"COCO\", OPENPOSE_ROOT + os.sep + \"models\" + os.sep, 0, False)",
"_____no_output_____"
],
[
"op.detectPose(img)",
"_____no_output_____"
],
[
"viz = op.render(img)\nplt.imshow(viz[:,:,::-1])",
"_____no_output_____"
],
[
"# getKeypoints returns an array of matrices\n# When POSE is requested the return array contains one entry with all persons.\npersons = op.getKeypoints(op.KeypointType.POSE)[0]\n\nprint \"Found\", persons.shape[0],\"persons.\"\nprint \"Result info:\",persons.shape, persons.dtype",
"Found 7 persons.\nResult info: (7, 18, 3) float32\n"
],
[
"persons[0]",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a095fa92f80648d4ee27727580a0daf06267def
| 77,696 |
ipynb
|
Jupyter Notebook
|
examples/Plotting Examples.ipynb
|
albertanis/BioCRNPyler
|
d3a37fcc44166ef23d89ab0091276a655cd4d594
|
[
"BSD-3-Clause"
] | null | null | null |
examples/Plotting Examples.ipynb
|
albertanis/BioCRNPyler
|
d3a37fcc44166ef23d89ab0091276a655cd4d594
|
[
"BSD-3-Clause"
] | 1 |
2020-06-15T20:09:25.000Z
|
2020-06-15T20:52:26.000Z
|
examples/Plotting Examples.ipynb
|
albertanis/BioCRNPyler
|
d3a37fcc44166ef23d89ab0091276a655cd4d594
|
[
"BSD-3-Clause"
] | null | null | null | 121.971743 | 23,679 | 0.60841 |
[
[
[
"## Plotting Installation:\n\nThe plotting package allows you to make an interactive CRN plot. Plotting requires the [Bokeh](https://docs.bokeh.org/en/latest/docs/installation.html) and [ForceAtlas2](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0098679) libraries to be installed on your machine. Bokeh is used for plotting, ForceAtlas2 is used for graph layout. To install, type the following into the consol:\n\n Conda install bokeh\n pip install fa2\n\nAlternatively, follow the installation instructions in the links above.\n\n\n",
"_____no_output_____"
],
[
"## Plotting Example\nThe CRN plot has a default representation:\n* species are circles\n - orange circles are RNA species\n - blue circles are complexes\n - green circles are proteins\n - grey circles are DNA\n -there's always a purple circle that represents nothing. If your CRN doesn't have reactions that go to nothing, then it won't have any connections.\n \n* reactions are squares\n\nClick on a node (either reaction or species) and all arrows including that node will be highlighted\nMouse over a node (either reaction or species) and a tooltip will tell you what it is.",
"_____no_output_____"
],
[
"### Create a BioCRNpyler Model\nHere we model a DNA assembly with a regulated promoter in a TxTl mixture.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\n\nfrom biocrnpyler import *\nimport numpy as np\n\ntxtl = CRNLab(\"GFP\")\ntxtl.mixture(\"mixture1\", extract = \"TxTlExtract\", mixture_volume = 1e-6, mixture_parameters = 'BasicExtract.tsv')\ndna = DNAassembly(\"mydna\",promoter=RegulatedPromoter(\"plac\",[\"laci\"]),rbs=\"UTR1\",protein=\"GFP\")\ntxtl.add_dna(name = \"G1\", promoter = \"pBest\", rbs = \"BCD2\", protein = \"GFP\", initial_conc = 10, volume = 1e-7)\ncrn1 = txtl.get_model()\nprint(crn1.pretty_print())",
"Species (9) = {0. protein[RNAP], 1. protein[Ribo], 2. protein[RNAase], 3. complexprotein[RNAase]:rna[G1]], 4. complexprotein[Ribo]:rna[G1]], 5. rna[G1], 6. complexdna[G1]:protein[RNAP]], 7. protein[GFP], 8. dna[G1]}\nReactions (6) = [\n0. dna[G1] + protein[RNAP] <--> complexdna[G1]:protein[RNAP]] \n massaction: k_f(dna[G1],protein[RNAP])=100.0*dna[G1]*protein[RNAP]\n k_r(complexdna[G1]:protein[RNAP]])=10.0*complexdna[G1]:protein[RNAP]]\n1. complexdna[G1]:protein[RNAP]] --> dna[G1] + rna[G1] + protein[RNAP] \n massaction: k_f(complexdna[G1]:protein[RNAP]])=3.0*complexdna[G1]:protein[RNAP]]\n2. rna[G1] + protein[Ribo] <--> complexprotein[Ribo]:rna[G1]] \n massaction: k_f(rna[G1],protein[Ribo])=10.0*rna[G1]*protein[Ribo]\n k_r(complexprotein[Ribo]:rna[G1]])=0.25*complexprotein[Ribo]:rna[G1]]\n3. complexprotein[Ribo]:rna[G1]] --> rna[G1] + protein[GFP] + protein[Ribo] \n massaction: k_f(complexprotein[Ribo]:rna[G1]])=2.0*complexprotein[Ribo]:rna[G1]]\n4. rna[G1] + protein[RNAase] <--> complexprotein[RNAase]:rna[G1]] \n massaction: k_f(rna[G1],protein[RNAase])=10.0*rna[G1]*protein[RNAase]\n k_r(complexprotein[RNAase]:rna[G1]])=0.5*complexprotein[RNAase]:rna[G1]]\n5. complexprotein[RNAase]:rna[G1]] --> protein[RNAase] \n massaction: k_f(complexprotein[RNAase]:rna[G1]])=1.0*complexprotein[RNAase]:rna[G1]]\n]\n"
]
],
[
[
"## Plotting a reaction graph\nFirst we import bokeh in order to plot an interactive graph then use the graphPlot function and the BioCRNpyler.plotting.generate_networkx_graph(crn) function to produce a graph. Mouseover the circles to see which species they identify and the squares to see reactions.",
"_____no_output_____"
]
],
[
[
"from bokeh.models import (Plot , Range1d)\nimport bokeh.plotting\nimport bokeh.io\nbokeh.io.output_notebook() #this makes the graph appear in line with the notebook\nDG, DGspec, DGrxn = generate_networkx_graph(crn1) #this creates the networkx objects\nplot = Plot(plot_width=500, plot_height=500, x_range=Range1d(-500, 500), y_range=Range1d(-500, 500)) #this generates a bokeh plot\ngraphPlot(DG,DGspec,DGrxn,plot,layout=\"force\",posscale=1) #now you draw the network on the plot. Layout \"force\" is the default. \n#\"posscale\" scales the entire graph. This mostly just affects the size of the arrows relative to everything else\nbokeh.io.show(plot) #if you don't type this the plot won't show\n\n#mouse over nodes to get a tooltip telling you what they are\n#mouse over single lines to outline them\n#click on nodes to highlight all edges that touch them\n#use the magnifying glass symbol to zoom in and out\n#click and drag to move around\n\n#NOTE: this function is not deterministic in how it displays the network, because it uses randomness to push the nodes around",
"_____no_output_____"
]
],
[
[
"### Advanced options for generate_networkx_graph\nThe following options are useful changing how plotting works.\n\nTo get better control over the way reactions and species text is display using the following keywords (default values shown below):\n\n use_pretty_print=False\n pp_show_rates=True\n pp_show_attributes=True\n pp_show_material=True\n\nTo get better control over the colors of different nodes, use the keywords to set a species' color.\nThe higher keywords will take precedence.\n\n repr(species): \"color\"\n species.name: \"color\"\n (species.material_type, tuple(specie.attributes)): \"color\"\n species.material_type: \"color\"\n tuple(species.attributes): \"color\"\n",
"_____no_output_____"
]
],
[
[
"#this demonstrates the \"circle\" layout. reactions are in the middle with species on the outside.\n#also, the pretty_print text display style\n\ncolordict = {\n \"G1\":\"red\", #will only effect the species dna_G1 and rna_G1\n \"protein_GFP\": \"green\", #will only effect the species protein_GFP\n \"protein\": \"blue\" #All protein species, protein_Ribo, protein_RNAase, and protein_RNAP will be blue\n #All other species will be grey by default. This will include all complexes.\n}\nDG, DGspec, DGrxn = generate_networkx_graph(crn1,\n colordict = colordict,\n use_pretty_print=True, #uses pretty print\n pp_show_rates=False, #this would put the reaction rates in the reaction name. It's already listed seperately in the tool tip\n pp_show_attributes=False, \n pp_show_material = True #this lists the material of the species being displayed\n)\n\nplot = Plot(plot_width=500, plot_height=500, x_range=Range1d(-500, 500), y_range=Range1d(-500, 500)) \ngraphPlot(DG,DGspec,DGrxn,plot,layout=\"circle\",posscale=1) \nbokeh.io.show(plot) ",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a096a740c7cdaeab9602b316594b320b8ef9a33
| 69,316 |
ipynb
|
Jupyter Notebook
|
homework04-python-scientific-ecosystem/homework04_karnerk.ipynb
|
barbarakalous/homework-scientific-computing
|
6fa7cb00b751c1a03431943f3a851a696b5d371a
|
[
"MIT"
] | null | null | null |
homework04-python-scientific-ecosystem/homework04_karnerk.ipynb
|
barbarakalous/homework-scientific-computing
|
6fa7cb00b751c1a03431943f3a851a696b5d371a
|
[
"MIT"
] | 1 |
2020-03-24T19:30:51.000Z
|
2020-03-24T19:30:51.000Z
|
homework04-python-scientific-ecosystem/homework04_karnerk.ipynb
|
barbarakalous/homework-scientific-computing
|
6fa7cb00b751c1a03431943f3a851a696b5d371a
|
[
"MIT"
] | null | null | null | 193.620112 | 19,264 | 0.90011 |
[
[
[
"# Homework 04 - Numpy",
"_____no_output_____"
],
[
"### Exercise 1 - Terminology\n\nDescribe the following terms with your own words:\n\n***numpy array:*** a grid of values of a single data type provided by the numpy library; numpy functions operate on these arrays much faster than on plain Python lists\n\n***broadcasting:*** the rules numpy uses to stretch arrays (or scalars) of different but compatible shapes so that element-wise operations can be applied without explicit loops",
"_____no_output_____"
],
[
"Answer the following questions:\n\n***What is the difference between a Python list and a Numpy array?*** the numpy library can only handle Numpy arrays and is way faster, and arrays need to be symmetrical in Numpy (i.e. each row needs to have the same number of columns)\n\n\n***How can you avoid using loops or list comprehensions when working with Numpy?*** Numpy can combine arrays and numbers arithmetically while this is not possible for Python lists where you would need loops or list comprehensions: e.g. np.array([2,4,5]) + 10 results in array([12,14,15]) while this would cause a type error with Python lists; here you would need to do: [i+10 for i in list]\n\n\n***Give different examples of usages of square brackets `[]` in Python and Numpy? Describe at least two completely different ones!*** in Python: creating lists and accessing elements of a list, e.g. l[1]; in Numpy to access elements of the array e.g. ar[1,1]\n\n\n***Give different examples of usages of round brackets `()` in Python and Numpy? Describe at least two completely different ones! (Bonus: give a third example not covered in the lecture until now!)*** to get the length of the list or numpy array, e.g. len(); in Numpy to create arrays; i.e. np.array([x,y,z])\n\n",
"_____no_output_____"
],
[
"### Exercise 2 - rotate and plot points in 2D\n\nPlot the 5 points in 2D defined in the array `points`, then rotate the points by 90 degrees by performing a matrix multiplication with a [rotation matrix](https://en.wikipedia.org/wiki/Rotation_matrix) by using `rotation_matrix @ points` and plot the result in the same plot. The rotation angle needs to be converted to radians before it is passed to `np.cos()` and `np.sin()`, use `np.radians(90)` to do so.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\n\npoints = np.array([[0, 0],\n [1, 1],\n [-1, -1],\n [0, 1],\n [0, 0.7],\n ]).T\n\nangle = np.radians(90)\ncosinus_thet = np.cos(angle)\nsinus_thet = np.sin(angle)\nrotation_matrix = np.array([[cosinus_thet, -sinus_thet],\n [sinus_thet,cosinus_thet]])\n\npoints_rotated = rotation_matrix @ points",
"_____no_output_____"
]
],
[
[
"The result should like like this:",
"_____no_output_____"
]
],
[
[
"plt.plot(*points, 'o', label='original points')\nplt.plot(*points_rotated, 'o', label='rotated points')\nplt.gca().set_aspect('equal'),\nplt.legend()",
"_____no_output_____"
]
],
[
[
"### Exercise 3 - Flatten the curve\n\nCopy the function `new_infections(t, k)` from last week's homework (exercise 3) and re-do the exercise using Numpy arrays instead of Python lists.\n\nWhat needs to be changed in the function `new_infections(t, k)` to make this work?",
"_____no_output_____"
]
],
[
[
"import math\nimport matplotlib.pyplot as plt \nimport numpy as np\n\ndef new_infections(t,k,P,i0):\n result = (((np.exp(-k*P*t))*k*(P**2)*(-1+(P/i0)))/(1+(np.exp(-k*P*t))*(-1+(P/i0)))**2) \n return result\n \n\ntime=np.arange(250)\nPop=1000000\nk= 3/(Pop*10)\ninf=new_infections(time,k,Pop,1) \n\ncap=[10000]*len(time)\nplt.plot(time,cap,label=\"health sysetm capacity\")\nplt.xlabel(\"Time\")\nplt.ylabel(\"Number of new infections\")\n\nplt.plot(time,inf,label=\"new infections,k=3/(10P)\")\n\nplt.legend()",
"_____no_output_____"
]
],
[
[
"### Exercise 4 - Mean of random numbers\n\nGenerate 100 random values between 0 and 1 (uniformly distributed) and plot them. Then calculate the mean value of the first i values for $i=1,\\ldots,100$ and plot this list too.\n\nTo solve the exercise find out how to generate random values with Numpy! How did you find an answer? Which possible ways are there? List at least ***2 to 5 different ways*** to look up what a numpy function does!\n\nNote: To solve this exercise, a list comprehension is necessary. Pure Numpy is faster, but probably not a good idea here.",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\nrandom_1 = np.random.uniform(0,1,100)\n#np.random.random_sample((100,))\n\nplt.plot(random_1,'o',label=\"100 random numbers\")\nplt.title(\"100 random numbers\")",
"_____no_output_____"
],
[
"#calc mean \nmean_1 = [np.mean(random_1[:i+1]) for i in np.arange(100)]\n#comparing if it worked correctly:\nprint(mean_1[19]) \nprint(np.mean(random_1[:20]))\n#plot\nplt.plot(mean_1,'o',label=\"Mean of x numbers\",color=\"red\")\nlabel1 = f'Mean of all 100 numbers = {round(mean_1[99],5)}'\nplt.plot(99,mean_1[99],'o',label=label1, color=\"blue\")\nlabel2 = f'Mean of first 50 numbers = {round(mean_1[49],5)}'\nplt.plot(49,mean_1[49],'o',label=label2, color=\"green\")\n\nplt.title(\"Mean of x random numbers\")\nplt.xlabel(\"x\")\nplt.ylabel(\"Mean\")\nplt.legend()",
"0.5003380095402897\n0.5003380095402897\n"
],
[
"#other ways to calculate random numbers\n#np.random?\n\nnp.random.rand(100)\nnp.random.randn(100)\nnp.random.ranf(100)\nnp.random.uniform(0,1,100)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
4a0972c1cd4aaa668b1502dbcb966b642dda187c
| 492,165 |
ipynb
|
Jupyter Notebook
|
examples/od_vae_cifar10.ipynb
|
mbrner/alibi-detect
|
8f42897befe85a5836bf781bc5b5c957103776e2
|
[
"Apache-2.0"
] | 1 |
2021-01-19T09:13:12.000Z
|
2021-01-19T09:13:12.000Z
|
examples/od_vae_cifar10.ipynb
|
mbrner/alibi-detect
|
8f42897befe85a5836bf781bc5b5c957103776e2
|
[
"Apache-2.0"
] | null | null | null |
examples/od_vae_cifar10.ipynb
|
mbrner/alibi-detect
|
8f42897befe85a5836bf781bc5b5c957103776e2
|
[
"Apache-2.0"
] | null | null | null | 688.342657 | 212,460 | 0.948641 |
[
[
[
"# VAE outlier detection on CIFAR10\n\n## Method\n\nThe Variational Auto-Encoder ([VAE](https://arxiv.org/abs/1312.6114)) outlier detector is first trained on a batch of unlabeled, but normal (*inlier*) data. Unsupervised training is desireable since labeled data is often scarce. The VAE detector tries to reconstruct the input it receives. If the input data cannot be reconstructed well, the reconstruction error is high and the data can be flagged as an outlier. The reconstruction error is either measured as the mean squared error (MSE) between the input and the reconstructed instance or as the probability that both the input and the reconstructed instance are generated by the same process.\n\n## Dataset\n\n[CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) consists of 60,000 32 by 32 RGB images equally distributed over 10 classes.",
"_____no_output_____"
]
],
[
[
"import logging\nimport os  # needed below for os.path.join\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport tensorflow as tf\ntf.keras.backend.clear_session()\nfrom tensorflow.keras.layers import Conv2D, Conv2DTranspose, Dense, Layer, Reshape, InputLayer\nfrom tqdm import tqdm\n\nfrom alibi_detect.models.losses import elbo\nfrom alibi_detect.od import OutlierVAE\nfrom alibi_detect.utils.fetching import fetch_detector\nfrom alibi_detect.utils.perturbation import apply_mask\nfrom alibi_detect.utils.saving import save_detector, load_detector\nfrom alibi_detect.utils.visualize import plot_instance_score, plot_feature_outlier_image\n\nlogger = tf.get_logger()\nlogger.setLevel(logging.ERROR)",
"_____no_output_____"
]
],
[
[
"## Load CIFAR10 data",
"_____no_output_____"
]
],
[
[
"train, test = tf.keras.datasets.cifar10.load_data()\nX_train, y_train = train\nX_test, y_test = test\n\nX_train = X_train.astype('float32') / 255\nX_test = X_test.astype('float32') / 255\nprint(X_train.shape, y_train.shape, X_test.shape, y_test.shape)",
"(50000, 32, 32, 3) (50000, 1) (10000, 32, 32, 3) (10000, 1)\n"
]
],
[
[
"## Load or define outlier detector\n\nThe pretrained outlier and adversarial detectors used in the example notebooks can be found [here](https://console.cloud.google.com/storage/browser/seldon-models/alibi-detect). You can use the built-in ```fetch_detector``` function which saves the pre-trained models in a local directory ```filepath``` and loads the detector. Alternatively, you can train a detector from scratch:",
"_____no_output_____"
]
],
[
[
"load_outlier_detector = True",
"_____no_output_____"
],
[
"filepath = 'my_path' # change to directory where model is downloaded\nif load_outlier_detector: # load pretrained outlier detector\n detector_type = 'outlier'\n dataset = 'cifar10'\n detector_name = 'OutlierVAE'\n od = fetch_detector(filepath, detector_type, dataset, detector_name)\n filepath = os.path.join(filepath, detector_name)\nelse: # define model, initialize, train and save outlier detector\n latent_dim = 1024\n \n encoder_net = tf.keras.Sequential(\n [\n InputLayer(input_shape=(32, 32, 3)),\n Conv2D(64, 4, strides=2, padding='same', activation=tf.nn.relu),\n Conv2D(128, 4, strides=2, padding='same', activation=tf.nn.relu),\n Conv2D(512, 4, strides=2, padding='same', activation=tf.nn.relu)\n ])\n\n decoder_net = tf.keras.Sequential(\n [\n InputLayer(input_shape=(latent_dim,)),\n Dense(4*4*128),\n Reshape(target_shape=(4, 4, 128)),\n Conv2DTranspose(256, 4, strides=2, padding='same', activation=tf.nn.relu),\n Conv2DTranspose(64, 4, strides=2, padding='same', activation=tf.nn.relu),\n Conv2DTranspose(3, 4, strides=2, padding='same', activation='sigmoid')\n ])\n \n # initialize outlier detector\n od = OutlierVAE(threshold=.015, # threshold for outlier score\n score_type='mse', # use MSE of reconstruction error for outlier detection\n encoder_net=encoder_net, # can also pass VAE model instead\n decoder_net=decoder_net, # of separate encoder and decoder\n latent_dim=latent_dim,\n samples=2)\n # train\n od.fit(X_train, \n loss_fn=elbo,\n cov_elbo=dict(sim=.05),\n epochs=50,\n verbose=False)\n \n # save the trained outlier detector\n save_detector(od, filepath)",
"_____no_output_____"
]
],
[
[
"## Check quality VAE model",
"_____no_output_____"
]
],
[
[
"idx = 8\nX = X_train[idx].reshape(1, 32, 32, 3)\nX_recon = od.vae(X)",
"_____no_output_____"
],
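[
"# Added illustration (not part of the original notebook): a hand-rolled MSE reconstruction-error score,\n# assuming X and X_recon from the cell above; a sketch of the scoring idea, not the detector's internal implementation.\nx_np = np.asarray(X, dtype=np.float32)\nrecon_np = X_recon.numpy() if hasattr(X_recon, 'numpy') else np.asarray(X_recon, dtype=np.float32)\nmse_score = np.mean((x_np - recon_np) ** 2, axis=(1, 2, 3)) # mean over pixels and channels -> one score per instance\nprint('manual MSE reconstruction error:', mse_score)",
"_____no_output_____"
],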
[
"plt.imshow(X.reshape(32, 32, 3))\nplt.axis('off')\nplt.show()",
"_____no_output_____"
],
[
"plt.imshow(X_recon.numpy().reshape(32, 32, 3))\nplt.axis('off')\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Check outliers on original CIFAR images",
"_____no_output_____"
]
],
[
[
"X = X_train[:500]\nprint(X.shape)",
"(500, 32, 32, 3)\n"
],
[
"od_preds = od.predict(X,\n outlier_type='instance', # use 'feature' or 'instance' level\n return_feature_score=True, # scores used to determine outliers\n return_instance_score=True)\nprint(list(od_preds['data'].keys()))",
"['instance_score', 'feature_score', 'is_outlier']\n"
]
],
[
[
"### Plot instance level outlier scores",
"_____no_output_____"
]
],
[
[
"target = np.zeros(X.shape[0],).astype(int) # all normal CIFAR10 training instances\nlabels = ['normal', 'outlier']\nplot_instance_score(od_preds, target, labels, od.threshold)",
"_____no_output_____"
]
],
[
[
"### Visualize predictions",
"_____no_output_____"
]
],
[
[
"X_recon = od.vae(X).numpy()\nplot_feature_outlier_image(od_preds, \n X, \n X_recon=X_recon,\n instance_ids=[8, 60, 100, 330], # pass a list with indices of instances to display\n max_instances=5, # max nb of instances to display\n outliers_only=False) # only show outlier predictions",
"_____no_output_____"
]
],
[
[
"## Predict outliers on perturbed CIFAR images",
"_____no_output_____"
],
[
"We perturb CIFAR images by adding random noise to patches (masks) of the image. For each mask size in `n_mask_sizes`, sample `n_masks` and apply those to each of the `n_imgs` images. Then we predict outliers on the masked instances: ",
"_____no_output_____"
]
],
[
[
"# nb of predictions per image: n_masks * n_mask_sizes \nn_mask_sizes = 10\nn_masks = 20\nn_imgs = 50",
"_____no_output_____"
]
],
[
[
"Define masks and get images:",
"_____no_output_____"
]
],
[
[
"mask_sizes = [(2*n,2*n) for n in range(1,n_mask_sizes+1)]\nprint(mask_sizes)\nimg_ids = np.arange(n_imgs)\nX_orig = X[img_ids].reshape(img_ids.shape[0], 32, 32, 3)\nprint(X_orig.shape)",
"[(2, 2), (4, 4), (6, 6), (8, 8), (10, 10), (12, 12), (14, 14), (16, 16), (18, 18), (20, 20)]\n(50, 32, 32, 3)\n"
]
],
[
[
"Calculate instance level outlier scores:",
"_____no_output_____"
]
],
[
[
"all_img_scores = []\nfor i in tqdm(range(X_orig.shape[0])):\n img_scores = np.zeros((len(mask_sizes),))\n for j, mask_size in enumerate(mask_sizes):\n # create masked instances\n X_mask, mask = apply_mask(X_orig[i].reshape(1, 32, 32, 3),\n mask_size=mask_size,\n n_masks=n_masks,\n channels=[0,1,2],\n mask_type='normal',\n noise_distr=(0,1),\n clip_rng=(0,1))\n # predict outliers\n od_preds_mask = od.predict(X_mask)\n score = od_preds_mask['data']['instance_score']\n # store average score over `n_masks` for a given mask size\n img_scores[j] = np.mean(score)\n all_img_scores.append(img_scores)",
"100%|██████████| 50/50 [00:39<00:00, 1.26it/s]\n"
]
],
[
[
"### Visualize outlier scores vs. mask sizes",
"_____no_output_____"
]
],
[
[
"x_plt = [mask[0] for mask in mask_sizes]",
"_____no_output_____"
],
[
"for ais in all_img_scores:\n plt.plot(x_plt, ais)\n plt.xticks(x_plt)\nplt.title('Outlier Score All Images for Increasing Mask Size')\nplt.xlabel('Mask size')\nplt.ylabel('Outlier Score')\nplt.show()",
"_____no_output_____"
],
[
"ais_np = np.zeros((len(all_img_scores), all_img_scores[0].shape[0]))\nfor i, ais in enumerate(all_img_scores):\n ais_np[i, :] = ais\nais_mean = np.mean(ais_np, axis=0)\nplt.title('Mean Outlier Score All Images for Increasing Mask Size')\nplt.xlabel('Mask size')\nplt.ylabel('Outlier score')\nplt.plot(x_plt, ais_mean)\nplt.xticks(x_plt)\nplt.show()",
"_____no_output_____"
]
],
[
[
"### Investigate instance level outlier",
"_____no_output_____"
]
],
[
[
"i = 8 # index of instance to look at",
"_____no_output_____"
],
[
"plt.plot(x_plt, all_img_scores[i])\nplt.xticks(x_plt)\nplt.title('Outlier Scores Image {} for Increasing Mask Size'.format(i))\nplt.xlabel('Mask size')\nplt.ylabel('Outlier score')\nplt.show()",
"_____no_output_____"
]
],
[
[
"Reconstruction of masked images and outlier scores per channel:",
"_____no_output_____"
]
],
[
[
"all_X_mask = []\nX_i = X_orig[i].reshape(1, 32, 32, 3)\nall_X_mask.append(X_i)\n# apply masks\nfor j, mask_size in enumerate(mask_sizes):\n # create masked instances\n X_mask, mask = apply_mask(X_i,\n mask_size=mask_size,\n n_masks=1, # just 1 for visualization purposes\n channels=[0,1,2],\n mask_type='normal',\n noise_distr=(0,1),\n clip_rng=(0,1))\n all_X_mask.append(X_mask)\nall_X_mask = np.concatenate(all_X_mask, axis=0)\nall_X_recon = od.vae(all_X_mask).numpy()\nod_preds = od.predict(all_X_mask)",
"_____no_output_____"
]
],
[
[
"Visualize:",
"_____no_output_____"
]
],
[
[
"plot_feature_outlier_image(od_preds, \n all_X_mask, \n X_recon=all_X_recon, \n max_instances=all_X_mask.shape[0], \n n_channels=3)",
"_____no_output_____"
]
],
[
[
"## Predict outliers on a subset of features\n\nThe sensitivity of the outlier detector can not only be controlled via the `threshold`, but also by selecting the percentage of the features used for the instance level outlier score computation. For instance, we might want to flag outliers if 40% of the features (pixels for images) have an average outlier score above the threshold. This is possible via the `outlier_perc` argument in the `predict` function. It specifies the percentage of the features that are used for outlier detection, sorted in descending outlier score order. ",
"_____no_output_____"
]
],
[
[
"perc_list = [20, 40, 60, 80, 100]\n\nall_perc_scores = []\nfor perc in perc_list:\n od_preds_perc = od.predict(all_X_mask, outlier_perc=perc)\n iscore = od_preds_perc['data']['instance_score']\n all_perc_scores.append(iscore)",
"_____no_output_____"
]
],
[
[
"Visualize outlier scores vs. mask sizes and percentage of features used:",
"_____no_output_____"
]
],
[
[
"x_plt = [0] + x_plt\nfor aps in all_perc_scores:\n plt.plot(x_plt, aps)\n plt.xticks(x_plt)\nplt.legend(perc_list)\nplt.title('Outlier Score for Increasing Mask Size and Different Feature Subsets')\nplt.xlabel('Mask Size')\nplt.ylabel('Outlier Score')\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Infer outlier threshold value\n\nFinding good threshold values can be tricky since they are typically not easy to interpret. The `infer_threshold` method helps finding a sensible value. We need to pass a batch of instances `X` and specify what percentage of those we consider to be normal via `threshold_perc`.",
"_____no_output_____"
]
],
[
[
"print('Current threshold: {}'.format(od.threshold))\nod.infer_threshold(X, threshold_perc=99) # assume 1% of the training data are outliers\nprint('New threshold: {}'.format(od.threshold))",
"Current threshold: 0.015\nNew threshold: 0.010383214280009267\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
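"code",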
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a09780fd1f71ddc078e8cebd48eb480f997851c
| 2,336 |
ipynb
|
Jupyter Notebook
|
interviewq_exercises/q073_stats_1sample_ttest_selenium_concentration.ipynb
|
gentimouton/dives
|
5441379f592c7055a086db6426dbb367072848c6
|
[
"Unlicense"
] | 1 |
2020-02-28T17:08:43.000Z
|
2020-02-28T17:08:43.000Z
|
interviewq_exercises/q073_stats_1sample_ttest_selenium_concentration.ipynb
|
gentimouton/dives
|
5441379f592c7055a086db6426dbb367072848c6
|
[
"Unlicense"
] | null | null | null |
interviewq_exercises/q073_stats_1sample_ttest_selenium_concentration.ipynb
|
gentimouton/dives
|
5441379f592c7055a086db6426dbb367072848c6
|
[
"Unlicense"
] | null | null | null | 24.333333 | 177 | 0.558647 |
[
[
[
"# Question 73 - Testing the toxicity of water\n\nSuppose you're trying to measure the Selenium toxicity in your tap water, and obtain the following values for each day:\n\n```\nday \tselenium\n1 \t0.051\n2 \t0.0505\n3 \t0.049\n4 \t0.0516\n5 \t0.052\n6 \t0.0508\n7 \t0.0506\n```\n\nThe maximum level for safe drinking water is 0.05 mg/L -- using this as your alpha, does the selenium tap level exceed the legal limit? Hint: you can use a t-test here \n",
"_____no_output_____"
],
[
"TLDR: Do not drink the water! \nIt's above the safety limit 6 out of 7 days, regardless of significance.\n\nNow, safety and legality are two different things. \nIs the water selenium concentration significantly above the legal threshold of .05mg/L?\n",
"_____no_output_____"
]
],
[
[
"from scipy.stats import ttest_1samp\nimport numpy as np\n\nsample = [.051, .0505, .049, .0516, .052, .0508, .0506]\nthreshold = .05\ntvalue, pvalue = ttest_1samp(sample, threshold)\nprint('delta:', np.mean(sample)-threshold, 'mg/L; pvalue:', pvalue)",
"delta: 0.0007857142857142854 mg/L; pvalue: 0.07271011867964246\n"
]
],
[
[
"The p-value is < .1, which is suggestive (but not conclusive) evidence that the mean concentration exceeds the threshold; at the conventional .05 significance level we cannot reject the null hypothesis.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4a097e96cf0c5432f041b010e39f214787c85c82
| 61,329 |
ipynb
|
Jupyter Notebook
|
1-Lessons/Lesson05/dev_src/Lab5/Lab5-FullNarrative.ipynb
|
dustykat/engr-1330-psuedo-course
|
3e7e31a32a1896fcb1fd82b573daa5248e465a36
|
[
"CC0-1.0"
] | null | null | null |
1-Lessons/Lesson05/dev_src/Lab5/Lab5-FullNarrative.ipynb
|
dustykat/engr-1330-psuedo-course
|
3e7e31a32a1896fcb1fd82b573daa5248e465a36
|
[
"CC0-1.0"
] | null | null | null |
1-Lessons/Lesson05/dev_src/Lab5/Lab5-FullNarrative.ipynb
|
dustykat/engr-1330-psuedo-course
|
3e7e31a32a1896fcb1fd82b573daa5248e465a36
|
[
"CC0-1.0"
] | null | null | null | 66.589577 | 16,752 | 0.786903 |
[
[
[
"## Full name: Farhang Forghanparast\n## R#: 321654987\n## HEX: 0x132c10cb\n## Title of the notebook\n## Date: 9/3/2020",
"_____no_output_____"
],
[
"# Laboratory 5 Functions\nFunctions are simply pre-written code fragments that perform a certain task.\nIn older procedural languages functions and subroutines are similar, but a function returns a value whereas\na subroutine operates on data. \nThe difference is subtle but important. \n\nMore recent thinking has functions being able to operate on data (they always could) and the value returned may be simply an exit code.\nAn analogy are the functions in *MS Excel*. \nTo add numbers, we can use the sum(range) function and type `=sum(A1:A5)` instead of typing `=A1+A2+A3+A4+A5`\n## Special Notes for ENGR 1330\n1. This notebook has graphics dependencies, remember to download the three .png files included in the web directory. CoCalc (free) does not allow the notebook to do the downloads automatically, so download to your local machine, then upload to CoCalc.\n2. There is a python module dependency on a separate file named `mylibrary.py`, as with the .png files download this file and upload to CoCalc to run this notebook without errors. If you forget, its easy enough to create in realtime and that will be demonstrated below.\n\n## Calling the Function\nWe call a function simply by typing the name of the function or by using the dot notation.\nWhether we can use the dot notation or not depends on how the function is written, whether it is part of a class, and how it is imported into a program.\n\nSome functions expect us to pass data to them to perform their tasks. \nThese data are known as parameters( older terminology is arguments, or argument list) and we pass them to the function by enclosing their values in parenthesis ( ) separated by commas. \n\nFor instance, the `print()` function for displaying text on the screen is \\called\" by typing `print('Hello World')` where print is the name of the function and the literal (a string) 'Hello World' is the argument.\n\n## Program flow\nA function, whether built-in, or added must be defined *before* it is called, otherwise the script will fail. Certain built-in functions \"self define\" upon start (such as `print()` and `type()` and we need not worry about those funtions). The diagram below illustrates the requesite flow control for functions that need to be defined before use.\n\n\n\nAn example below will illustrate, change the cell to code and run it, you should get an error.\nThen fix the indicated line (remove the leading \"#\" in the import math ... line) and rerun, should get a functioning script.",
"_____no_output_____"
]
],
[
[
"# reset the notebook using a magic function in JupyterLab\n%reset -f \n# An example, run once as is then activate indicated line, run again - what happens?\nx= 4.\nsqrt_by_arithmetic = x**0.5\nprint('Using arithmetic square root of ', x, ' is ',sqrt_by_arithmetic )\nimport math # import the math package ## activate and rerun\nsqrt_by_math = math.sqrt(x) # note the dot notation\nprint('Using math package square root of ', x,' is ',sqrt_by_arithmetic)",
"Using arithmetic square root of 4.0 is 2.0\nUsing math package square root of 4.0 is 2.0\n"
]
],
[
[
"An alternate way to load just the sqrt() function is shown below, either way is fine.",
"_____no_output_____"
]
],
[
[
"# reset the notebook using a magic function in JupyterLab\n%reset -f \n# An example, run once as is then activate indicated line, run again - what happens?\nx= 4.\nsqrt_by_arithmetic = x**0.5\nprint('Using arithmetic square root of ', x, ' is ',sqrt_by_arithmetic )\nfrom math import sqrt # import sqrt from the math package ## activate and rerun\nsqrt_by_math = sqrt(x) # note the notation\nprint('Using math package square root of ', x,' is ',sqrt_by_arithmetic)",
"Using arithmetic square root of 4.0 is 2.0\nUsing math package square root of 4.0 is 2.0\n"
]
],
[
[
"## Built-In in Primitive Python (Base install)\n\nThe base Python functions and types built into it that are always available, the figure below lists those functions.\n\n\n\nNotice all have the structure of `function_name()`, except `__import__()` which has a constructor type structure, and is not intended for routine use. We will learn about constructors later.\n",
"_____no_output_____"
],
[
"## Added-In using External Packages/Modules and Libaries (e.g. math)\n\nPython is also distributed with a large number of external functions. \nThese functions are saved\nin files known as modules. \nTo use the built-in codes in Python modules, we have to import\nthem into our programs first. We do that by using the import keyword. \nThere are three\nways to import:\n1. Import the entire module by writing import moduleName; For instance, to import the random module, we write import random. To use the randrange() function in the random module, we write random.randrange( 1, 10);28\n2. Import and rename the module by writing import random as r (where r is any name of your choice). Now to use the randrange() function, you simply write r.randrange(1, 10); and\n3. Import specific functions from the module by writing from moduleName import name1[,name2[, ... nameN]]. For instance, to import the randrange() function from the random module, we write from random import randrange. To import multiple functions, we separate them with a comma. To import the randrange() and randint() functions, we write from random import randrange, randint. To use the function now, we do not have to use the dot notation anymore. Just write randrange( 1, 10).",
"_____no_output_____"
]
],
[
[
"# Example 1 of import\n%reset -f \nimport random\nlow = 1 ; high = 10\nrandom.randrange(low,high) #generate random number in range low to high",
"_____no_output_____"
],
[
"# Example 2 of import\n%reset -f \nimport random as r\nlow = 1 ; high = 10\nr.randrange(low,high)",
"_____no_output_____"
],
[
"# Example 3 of import\n%reset -f \nfrom random import randrange \nlow = 1 ; high = 10\nrandrange(low,high)",
"_____no_output_____"
]
],
[
[
"The modules that come with Python are extensive and listed at \nhttps://docs.python.org/3/py-modindex.html.\nThere are also other modules that can be downloaded and used\n(just like user defined modules below). \nIn these labs we are building primitive codes to learn how to code and how to create algorithms. \nFor many practical cases you will want to load a well-tested package to accomplish the tasks. \n\nThat exercise is saved for the end of the document.\n\n## User-Built \nWe can define our own functions in Python and reuse them throughout the program.\nThe syntax for defining a function is:\n\n def functionName( argument ):\n code detailing what the function should do\n note the colon above and indentation\n ...\n ...\n return [expression]\n \nThe keyword `def` tells the program that the indented code from the next line onwards is\npart of the function. \nThe keyword `return `tells the program to return an answer from the\nfunction. \nThere can be multiple return statements in a function. \nOnce the function executes\na return statement, the program exits the function and continues with *its* next executable\nstatement. \nIf the function does not need to return any value, you can omit the return\nstatement.\n\nFunctions can be pretty elaborate; they can search for things in a list, determine variable\ntypes, open and close files, read and write to files. \n\nTo get started we will build a few really\nsimple mathematical functions; we will need this skill in the future anyway, especially in\nscientific programming contexts.",
"_____no_output_____"
],
[
"### User-built within a Code Block\nFor our first function we will code $$f(x) = x\\sqrt{1 + x}$$ into a function named `dusty()`.\n\nWhen you run the next cell, all it does is prototype the function (defines it), nothing happens until we use the function.",
"_____no_output_____"
]
],
[
[
"def dusty(x) :\n temp = x * ((1.0+x)**(0.5)) # don't need the math package\n return temp\n# the function should make the evaluation\n# store in the local variable temp\n# return contents of temp",
"_____no_output_____"
],
[
"# wrapper to run the dusty function\nyes = 0\nwhile yes == 0:\n xvalue = input('enter a numeric value')\n try:\n xvalue = float(xvalue)\n yes = 1\n except:\n print('enter a bloody number! Try again \\n')\n# call the function, get value , write output\nyvalue = dusty(xvalue)\nprint('f(',xvalue,') = ',yvalue) # and we are done ",
"enter a numeric value 1\n"
]
],
[
[
"## Example\n\nCreate the AVERAGE function for three values and test it for these values:\n- 3,4,5\n- 10,100,1000\n- -5,15,5",
"_____no_output_____"
],
[
"## Example\n\nCreate the FC function to convert Fahrenheit to Celsius and test it for these values:\n- 32\n- 15\n- 100\n\n*hint: formula: (°F − 32) × 5/9 = °C*",
"_____no_output_____"
],
[
"## Exercise 1\n\nCreate the function $$f(x) = e^x - 10 cos(x) - 100$$ as a function (i.e. use the `def` keyword)\n\n def name(parameters) :\n operations on parameters\n ...\n ...\n return (value, or null)\n\nThen apply your function to the value.\n\nUse your function to complete the table below:\n\n| x | f(x) |\n|---:|---:|\n| 0.0 | |\n| 1.50 | |\n| 2.00 | |\n| 2.25 | |\n| 3.0 | |\n| 4.25 | |\n",
"_____no_output_____"
],
[
"## Variable Scope\nAn important concept when defining a function is the concept of variable scope. \nVariables defined inside a function are treated differently from variables defined outside. \nFirstly, any variable declared within a function is only accessible within the function. \nThese are known as local variables. \n\nIn the `dusty()` function, the variables `x` and `temp` are local to the function.\nAny variable declared outside a function in a main program is known as a program variable\nand is accessible anywhere in the program. \n\nIn the example, the variables `xvalue` and `yvalue` are program variables (global to the program; if they are addressed within a function, they could be operated on.)\nGenerally we want to protect the program variables from the function unless the intent is to change their values. \nThe way the function is written in the example, the function cannot damage `xvalue` or `yvalue`.\n\nIf a local variable shares the same name as a program variable, any code inside the function is\naccessing the local variable. Any code outside is accessing the program variable",
"_____no_output_____"
],
[
"### As Separate Module/File\n\nIn this section we will invent the `neko()` function, export it to a file, so we can reuse it in later notebooks without having to retype or cut-and-paste. The `neko()` function evaluates:\n\n$$f(x) = x\\sqrt{|(1 + x)|}$$\n\nIts the same as the dusty() function, except operates on the absolute value in the wadical.\n\n1. Create a text file named \"mylibrary.txt\"\n2. Copy the neko() function script below into that file.\n\n def neko(input_argument) :\n import math #ok to import into a function\n local_variable = input_argument * math.sqrt(abs(1.0+input_argument))\n return local_variable\n\n\n4. rename mylibrary.txt to mylibrary.py\n5. modify the wrapper script to use the neko function as an external module",
"_____no_output_____"
]
],
[
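[
"# Added illustration (not part of the original lab): a minimal sketch of the variable scope discussion above.\n# The names program_variable, scope_demo, and local_variable are invented for this example only.\nprogram_variable = 10.0 # a program (global) variable\n\ndef scope_demo(x) :\n local_variable = x + program_variable # the function can read the program variable\n return local_variable # local_variable exists only inside scope_demo\n\nprint('scope_demo(5.0) returns ', scope_demo(5.0))\nprint('program_variable is still ', program_variable)\n# print(local_variable) # uncommenting this raises NameError because local_variable is local to the function",
"_____no_output_____"
],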
[
"# wrapper to run the neko function\nimport mylibrary\nyes = 0\nwhile yes == 0:\n xvalue = input('enter a numeric value')\n try:\n xvalue = float(xvalue)\n yes = 1\n except:\n print('enter a bloody number! Try again \\n')\n# call the function, get value , write output\nyvalue = mylibrary.neko(xvalue)\nprint('f(',xvalue,') = ',yvalue) # and we are done ",
"enter a numeric value 1\n"
]
],
[
[
"In JupyterHub environments, you may discover that changes you make to your external python file are not reflected when you re-run your script; you need to restart the kernel to get the changes to actually update. The figure below depicts the notebook, external file relatonship\n\n\n\n\n* Future version - explain absolute path",
"_____no_output_____"
],
[
"## Rudimentary Graphics\n\nGraphing values is part of the broader field of data visualization, which has two main\ngoals:\n\n 1. To explore data, and\n 2. To communicate data.\n\nIn this subsection we will concentrate on introducing skills to start exploring data and to\nproduce meaningful visualizations we can use throughout the rest of this notebook. \nData visualization is a rich field of study that fills entire books.\nThe reason to start visualization here instead of elsewhere is that with functions plotting\nis a natural activity and we have to import the matplotlib module to make the plots.\n\nThe example below is code adapted from Grus (2015) that illustrates simple generic\nplots. I added a single line (label the x-axis), and corrected some transcription\nerrors (not the original author's mistake, just the consequence of how the API handled the\ncut-and-paste), but otherwise the code is unchanged.",
"_____no_output_____"
]
],
[
[
"# python script to illustrate plotting\n# CODE BELOW IS ADAPTED FROM:\n# Grus, Joel (2015-04-14). Data Science from Scratch: First Principles with Python\n# (Kindle Locations 1190-1191). O'Reilly Media. Kindle Edition. \n#\nfrom matplotlib import pyplot as plt # import the plotting library from matplotlibplt.show()\n\nyears = [1950, 1960, 1970, 1980, 1990, 2000, 2010] # define one list for years\ngdp = [300.2, 543.3, 1075.9, 2862.5, 5979.6, 10289.7, 14958.3] # and another one for Gross Domestic Product (GDP)\nplt.plot( years, gdp, color ='green', marker ='o', linestyle ='solid') # create a line chart, years on x-axis, gdp on y-axis\n # what if \"^\", \"P\", \"*\" for marker?\n # what if \"red\" for color? \n # what if \"dashdot\", '--' for linestyle? \n\n\nplt.title(\"Nominal GDP\")# add a title\nplt.ylabel(\"Billions of $\")# add a label to the x and y-axes\nplt.xlabel(\"Year\")\nplt.show() # display the plot",
"_____no_output_____"
]
],
[
[
"Now lets put the plotting script into a function so we can make line charts of any two numeric lists",
"_____no_output_____"
]
],
[
[
"def plotAline(list1,list2,strx,stry,strtitle): # plot list1 on x, list2 on y, xlabel, ylabel, title\n from matplotlib import pyplot as plt # import the plotting library from matplotlibplt.show()\n plt.plot( list1, list2, color ='green', marker ='o', linestyle ='solid') # create a line chart, years on x-axis, gdp on y-axis\n plt.title(strtitle)# add a title\n plt.ylabel(stry)# add a label to the x and y-axes\n plt.xlabel(strx)\n plt.show() # display the plot\n return #null return",
"_____no_output_____"
],
[
"# wrapper\nyears = [1950, 1960, 1970, 1980, 1990, 2000, 2010] # define two lists years and gdp\ngdp = [300.2, 543.3, 1075.9, 2862.5, 5979.6, 10289.7, 14958.3]\nprint(type(years[0]))\nprint(type(gdp[0]))\nplotAline(years,gdp,\"Year\",\"Billions of $\",\"Nominal GDP\")",
"<class 'int'>\n<class 'float'>\n"
]
],
[
[
"## Example \nUse the plotting script and create a function that draws a straight line between two points.",
"_____no_output_____"
],
[
"## Example - Let's have some fun! \nCopy the wrapper script for the `plotAline()` function, and modify the copy to create a plot of\n$$ x = 16sin^3(t) $$\n$$ y = 13cos(t) - 5cos(2t) - 2cos(3t) - cos(4t) $$\nfor t ranging from [0,2$\\pi$] (inclusive).\n\nLabel the plot and the plot axes.\n",
"_____no_output_____"
],
[
"## Exercise 2\nCopy the wrapper script for the `plotAline()` function, and modify the copy to create a plot of\n$$ y = x^2 $$\nfor x ranging from 0 to 9 (inclusive) in steps of 1.\n\nLabel the plot and the plot axes.\n",
"_____no_output_____"
],
[
"## Exercise 3 \nUse your function from Exercise 1. \n\n$$f(x) = e^x - 10 cos(x) - 100$$ \n\nAnd make a plot where $x$ ranges from 0 to 15 in increments of 0.25. Label the plot and the plot axes.",
"_____no_output_____"
],
[
"## References\n\n1. Grus, Joel (2015-04-14). Data Science from Scratch: First Principles with Python\n(Kindle Locations 1190-1191). O'Reilly Media. Kindle Edition. \n\n2. Call Expressions in \"Adhikari, A. and DeNero, J. Computational and Inferential Thinking The Foundations of Data Science\" https://www.inferentialthinking.com/chapters/03/3/Calls.html\n\n3. Functions and Tables in \"Adhikari, A. and DeNero, J. Computational and Inferential Thinking The Foundations of Data Science\" https://www.inferentialthinking.com/chapters/08/Functions_and_Tables.html\n\n4. Visualization in \"Adhikari, A. and DeNero, J. Computational and Inferential Thinking The Foundations of Data Science\" https://www.inferentialthinking.com/chapters/07/Visualization.html",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
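"code",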
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
4a098cb1e46e20aab221131fe2cc2ba13082c176
| 11,760 |
ipynb
|
Jupyter Notebook
|
Sources/libosp/python/osp/monitor.ipynb
|
nihospr01/OpenSpeechPlatform
|
799fb5baa5b8cdfad0f5387dd48b394adc583ede
|
[
"BSD-2-Clause"
] | null | null | null |
Sources/libosp/python/osp/monitor.ipynb
|
nihospr01/OpenSpeechPlatform
|
799fb5baa5b8cdfad0f5387dd48b394adc583ede
|
[
"BSD-2-Clause"
] | null | null | null |
Sources/libosp/python/osp/monitor.ipynb
|
nihospr01/OpenSpeechPlatform
|
799fb5baa5b8cdfad0f5387dd48b394adc583ede
|
[
"BSD-2-Clause"
] | null | null | null | 44.885496 | 1,684 | 0.580017 |
[
[
[
"## Audio Monitor\n\nMonitors the audio by continuously recording and plotting the recorded audio.\n\nYou will need to start osp running in a terminal.",
"_____no_output_____"
]
],
[
[
"# make Jupyter use the whole width of the browser\nfrom IPython.display import Image, display, HTML\ndisplay(HTML(\"<style>.container { width:100% !important; }</style>\"))",
"_____no_output_____"
],
[
"# Pull in all the necessary python modules\nimport plotly.graph_objects as go\nfrom IPython.display import Audio\nfrom osp_control import OspControl\nimport os\nimport json\nimport numpy as np\nimport time\nimport ipywidgets as widgets\n",
"_____no_output_____"
],
[
"# Connect to OSP process\nosp = OspControl() # connects to a local process\n\n# set the number of bands and mute mic\nosp.send({\"method\": \"set\", \"data\": {\"num_bands\": 11}})\nosp.send_chan({\"alpha\": 0})",
"_____no_output_____"
],
[
"def plot_fft(fig, title, out1, update=True):\n ftrans = np.abs(np.fft.fft(out1)/len(out1))\n outlen = len(out1)\n values = np.arange(int(outlen/2))\n period = outlen/32000\n frequencies = values/period\n\n if update:\n with fig.batch_update():\n fig.data[0].y = ftrans\n fig.data[0].x = frequencies\n fig.update_layout()\n return\n \n \n fig.add_trace(go.Scatter(x=frequencies, line_color='green', opacity=1, y=ftrans))\n\n fig.update_layout(title=title,\n xaxis_title='Frequency',\n yaxis_title='Amplitude',\n template='plotly_white')",
"_____no_output_____"
],
[
"def plot_mono(fig, title, out1, lab1, rate=48000, update=True):\n\n # Create x values.\n x = np.array(range(len(out1)))/rate\n if update:\n with fig.batch_update():\n fig.data[0].y = out1\n fig.data[0].x = x\n fig.update_layout()\n return\n fig.add_trace(go.Scatter(x=x,\n y=out1,\n name=lab1,\n opacity=1))\n fig.update_layout(title=title,\n# yaxis_range=[-1,1],\n xaxis_title='Time(sec)',\n yaxis_title='Amplitude',\n template='plotly_white')",
"_____no_output_____"
],
[
"def monitor(afig, ffig, interval=2):\n update=False\n while True:\n # set record file name\n filename = os.path.join(os.getcwd(), 'tmpaudio')\n osp.send_chan({'audio_rfile': filename})\n\n #start recording output\n osp.send_chan({\"audio_record\": 1})\n\n # record for {interval} seconds\n time.sleep(interval)\n\n #stop recording\n osp.send_chan({\"audio_record\": 0})\n\n # read the saved recording data\n with open(filename, \"rb\") as f:\n data = f.read()\n data = np.frombuffer(data, dtype=np.float32)\n data = np.reshape(data, (2, -1), 'F')\n \n # plot\n plot_mono(afig, '', data[0], 'out', update=update)\n plot_fft(ffig, '', data[0], update=update)\n if update == False:\n update = True",
"_____no_output_____"
],
[
"audio_fig = go.FigureWidget()\nfft_fig = go.FigureWidget()\nwidgets.VBox([audio_fig, fft_fig])",
"_____no_output_____"
],
[
"monitor(audio_fig, fft_fig)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a0998a4b7930be326f1f38e25c686bafb0497d4
| 1,622 |
ipynb
|
Jupyter Notebook
|
openpyxl/numpy.ipynb
|
liqijunchn/PythonProjects
|
796c642032834902264e3485122967e497ef520e
|
[
"MIT"
] | null | null | null |
openpyxl/numpy.ipynb
|
liqijunchn/PythonProjects
|
796c642032834902264e3485122967e497ef520e
|
[
"MIT"
] | null | null | null |
openpyxl/numpy.ipynb
|
liqijunchn/PythonProjects
|
796c642032834902264e3485122967e497ef520e
|
[
"MIT"
] | null | null | null | 21.626667 | 51 | 0.589396 |
[] |
[] |
[] |
4a09a98a5858915dddc650deedfbc460dd8b5109
| 60,030 |
ipynb
|
Jupyter Notebook
|
Lesson01/Lesson01.ipynb
|
Jefexon/Alura-Data-Immersion-3
|
96b45ed04eb791bbf1fe063817e1f85cb4dffb42
|
[
"MIT"
] | null | null | null |
Lesson01/Lesson01.ipynb
|
Jefexon/Alura-Data-Immersion-3
|
96b45ed04eb791bbf1fe063817e1f85cb4dffb42
|
[
"MIT"
] | null | null | null |
Lesson01/Lesson01.ipynb
|
Jefexon/Alura-Data-Immersion-3
|
96b45ed04eb791bbf1fe063817e1f85cb4dffb42
|
[
"MIT"
] | null | null | null | 51.132879 | 9,120 | 0.610545 |
[
[
[
"import pandas as pd\n\nurl_dados = \"https://github.com/alura-cursos/imersaodados3/blob/main/dados/dados_experimentos.zip?raw=true\"\n\ndados = pd.read_csv(url_dados, compression = 'zip')\ndados",
"_____no_output_____"
],
[
"dados.head()",
"_____no_output_____"
],
[
"dados.shape",
"_____no_output_____"
],
[
"dados['tratamento']",
"_____no_output_____"
],
[
"dados['tratamento'].unique()",
"_____no_output_____"
],
[
"dados['tempo'].unique()",
"_____no_output_____"
],
[
"dados['dose'].unique()\n",
"_____no_output_____"
],
[
"dados['droga'].unique()",
"_____no_output_____"
],
[
"dados['g-0'].unique()",
"_____no_output_____"
],
[
"dados['tratamento'].value_counts()",
"_____no_output_____"
],
[
"dados['dose'].value_counts()",
"_____no_output_____"
],
[
"dados['tratamento'].value_counts(normalize = True)",
"_____no_output_____"
],
[
"dados['dose'].value_counts(normalize = True)",
"_____no_output_____"
],
[
"dados['tratamento'].value_counts().plot.pie()",
"_____no_output_____"
],
[
"dados['tempo'].value_counts().plot.pie()",
"_____no_output_____"
],
[
"dados['tempo'].value_counts().plot.bar()",
"_____no_output_____"
],
[
"dados_filtrados = dados[dados['g-0']> 0]\ndados_filtrados.head()",
"_____no_output_____"
]
],
[
[
"### Challenge 01: Investigate why the treatment class is so unbalanced. (my guess is that you only need 1 com_controle for a group of com_droga)\n\n### Challenge 02: Plot the last 5 rows of the table.\n\n### Challenge 03: Proportion of the treatment classes. (search pandas documentation)\n\n### Challenge 04: How many types of drugs were investigated?\n\n### Challenge 05: Look up the query method in the pandas documentation.\n\n### Challenge 06: Rename the columns, removing the hyphen.\n\n### Challenge 07: Make the plots look nice. (Matplotlib.pyplot)\n\n### Challenge 08: Summary of what you learned from the data.",
"_____no_output_____"
]
]
] |
[
"code",
"markdown"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
4a09b3a9d88ea7982543b7a857337c5904ed5ef9
| 11,251 |
ipynb
|
Jupyter Notebook
|
p2/main.ipynb
|
Icewell2/CS-220-FA20
|
7c451cb2a4a912bb055efe3054b33dd20a909869
|
[
"MIT"
] | null | null | null |
p2/main.ipynb
|
Icewell2/CS-220-FA20
|
7c451cb2a4a912bb055efe3054b33dd20a909869
|
[
"MIT"
] | null | null | null |
p2/main.ipynb
|
Icewell2/CS-220-FA20
|
7c451cb2a4a912bb055efe3054b33dd20a909869
|
[
"MIT"
] | null | null | null | 20.990672 | 216 | 0.423696 |
[
[
[
"# This line is a comment because it starts with a pound sign (#). That \n# means Python ignores it. A comment is just for a human reading the \n# code. This project involves 20 small problems to give you practice \n# with operators, types, and boolean logic. We'll give you directions \n# (as comments) on what to do for each problem. \n# \n# Before you get started, please tell us who you are, putting your\n# Net ID and your partner's Net ID below (or none if your working solo) \n\n# project: p2\n# submitter: zchen697\n# partner: none",
"_____no_output_____"
],
[
"#q1: what is 1+1?\n\n# we did this one for you :)\n\n1+1",
"_____no_output_____"
],
[
"#q2: what is 2000+20?\n\n2000+20",
"_____no_output_____"
],
[
"#q3: what is the type of 2020?\n\n# we did this one for you too...\n\ntype(2020)",
"_____no_output_____"
],
[
"#q4: what is the type of 2020/8?\n\ntype(2020/8)",
"_____no_output_____"
],
[
"#q5: what is the type of 2020//8?\n\ntype(2020//8)",
"_____no_output_____"
],
[
"#q6: what is the type of \"2020\"? Note the quotes!\n\ntype(\"2020\")",
"_____no_output_____"
],
[
"#q7: what is the type of \"True\"? Note the quotes!\n\ntype(\"True\")",
"_____no_output_____"
],
[
"#q8: what is the type of False?\n\ntype(False)",
"_____no_output_____"
],
[
"#q9: what is the type of 2020<2019?\n\ntype(2020<2019)",
"_____no_output_____"
],
[
"#q10: what is two thousand plus twenty?\n\n# fix the below to make Python do an addition that produces 2020\n\n2000 + 20 \n",
"_____no_output_____"
],
[
"#q11: please fix the following to display 6 smileys\n\n\":)\" * 6 \n",
"_____no_output_____"
],
[
"#q12: please fix the following to get 42\n\n6 * 7 \n",
"_____no_output_____"
],
[
"#q13: what is the volume of a cube with side length of 8?\n\n# oops, it is currently computing the area of a square (please fix)\n\n8 ** 3 \n",
"_____no_output_____"
],
[
"#q14: you start with 20 dollars and triple your money every decade.\n# how much will you have after 4 decades?\n\n20 * (3 ** 4)",
"_____no_output_____"
],
[
"#q15: fix the Python logic to match the English\n\n# In English: 220 is less than 2020 and greater than 120\n\n# In Python:\n\n220 < 2020 and 220 > 120 \n",
"_____no_output_____"
],
[
"#q16: change ONLY the value of x to get True for the output\n\nx = 270 \n\nx < 2020 and x > 220",
"_____no_output_____"
],
[
"#q17: change ONLY the value of x to get True for the output\n\nx = 2020.5\n\n(x > 2000 and x < 2021) and (x > 2019 and x < 3000)",
"_____no_output_____"
],
[
"#q18: what???\n\nx = 2020\n\n# fix the following logic so we get True, not 2020.\n# The correct logic should check whether x is either\n# -2020 or 2020. The problem is with types: to the left\n# of the or, we have a boolean, and to the right of\n# the or, we have an integer. This semester, we will\n# always try to have a boolean to both the left and\n# right of logical operators such as or.\n\nx == -2020 or x == 2020 ",
"_____no_output_____"
],
[
"#q19: we should win!\n\nnumber = 36 \n\n# There are three winners, with numbers 36, 220, and 2020.\n# The following should output True if number is changed\n# to be any of these (but the logic is currently wrong).\n#\n# Please update the following expression so we get True\n# if the number variable is changed to any of the winning\n# options.\n\n# Did we win?\nnumber == 36 or number == 220 or number == 2020",
"_____no_output_____"
],
[
"#q20: what is the volume of a cylinder with radius 4 and height 2?\n# you may assume PI is 3.14 if you like (a close answer is good enough)\n\n\n4 * 4 * 3.14 * 2",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a09c4d4d0ccc1712db966c178ce87d0dca69aa0
| 1,458 |
ipynb
|
Jupyter Notebook
|
Statistics/Statistics_khb(180619).ipynb
|
hbkimhbkim/statistics
|
af71313d18c97d54f93fef60056c0f05ccda8928
|
[
"MIT"
] | 1 |
2018-06-18T00:18:06.000Z
|
2018-06-18T00:18:06.000Z
|
Statistics/Statistics_khb(180619).ipynb
|
hbkimhbkim/Study
|
af71313d18c97d54f93fef60056c0f05ccda8928
|
[
"MIT"
] | null | null | null |
Statistics/Statistics_khb(180619).ipynb
|
hbkimhbkim/Study
|
af71313d18c97d54f93fef60056c0f05ccda8928
|
[
"MIT"
] | null | null | null | 16.021978 | 91 | 0.424554 |
[
[
[
"## KDE??\n\n * kernel density",
"_____no_output_____"
],
[
"## Desktop cleanup",
"_____no_output_____"
],
[
"## General statistics",
"_____no_output_____"
],
[
"### Distributions\n\n * Random variable: a random variable is a variable that takes on different values according to probability. The set of values a random variable can take, together with the probability of each value, is called a 'probability distribution', or simply a 'distribution'.\n \n * Binomial distribution :\n \n * Normal distribution :\n \n ",
"_____no_output_____"
],
[
"### Hypothesis tests\n\n *\n \n *",
"_____no_output_____"
]
]
] |
[
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
4a09c9e4b683614abbef2af08d31973d95222845
| 8,856 |
ipynb
|
Jupyter Notebook
|
Module2/Module2 - Lab4.ipynb
|
sudarshna-gangwar/DAT210x
|
bb93dd397cda7de94d952fb19174db33d064db9c
|
[
"MIT"
] | null | null | null |
Module2/Module2 - Lab4.ipynb
|
sudarshna-gangwar/DAT210x
|
bb93dd397cda7de94d952fb19174db33d064db9c
|
[
"MIT"
] | null | null | null |
Module2/Module2 - Lab4.ipynb
|
sudarshna-gangwar/DAT210x
|
bb93dd397cda7de94d952fb19174db33d064db9c
|
[
"MIT"
] | null | null | null | 37.525424 | 1,187 | 0.622403 |
[
[
[
"# DAT210x - Programming with Python for DS",
"_____no_output_____"
],
[
"## Module2 - Lab4",
"_____no_output_____"
],
[
"Import and alias Pandas:",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
]
],
[
[
"Load up the table from the link, and extract the dataset out of it. If you're having issues with this, look carefully at the sample code provided in the reading:",
"_____no_output_____"
]
],
[
[
"nhl = pd.read_html('http://www.espn.com/nhl/statistics/player/_/stat/points/sort/points/year/2015/seasontype/2')",
"_____no_output_____"
]
],
[
[
"Next up, rename the columns so that they are _similar_ to the column definitions provided to you on the website. Be careful and don't accidentally use any column names twice. If a column uses special characters, you can replace them with regular characters to make it easier to work with:",
"_____no_output_____"
]
],
[
[
"# .. your code here ..",
"_____no_output_____"
]
],
[
[
"Get rid of any row that has at least 4 NANs in it. That is, any rows that do not contain player points statistics:",
"_____no_output_____"
]
],
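[
[
"A minimal sketch of one way to do this (illustrative only; it assumes the stats table is the first element of the `read_html` result and names it `df`):\n\n```python\n# pd.read_html returns a list of DataFrames; assume the stats table is the first one\ndf = nhl[0]\n\n# keep only the rows that have fewer than 4 missing values\ndf = df[df.isnull().sum(axis=1) < 4]\n```",
"_____no_output_____"
]
],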
[
[
"# .. your code here ..",
"_____no_output_____"
]
],
[
[
"At this point, look through your dataset by printing it. There probably still are some erroneous rows in there. What indexing command(s) will you use to select all rows EXCEPT those rows?",
"_____no_output_____"
]
],
[
[
"# .. your code here ..",
"_____no_output_____"
]
],
[
[
"Get rid of the 'RK' column:",
"_____no_output_____"
]
],
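[
[
"One possible sketch (assumes the DataFrame is named `df` as above):\n\n```python\ndf = df.drop(columns=['RK'])\n```",
"_____no_output_____"
]
],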
[
[
"# .. your code here ..",
"_____no_output_____"
]
],
[
[
"Make sure there are no holes in your index by resetting it. There is an example of this in the reading material. By the way, drop the original index.",
"_____no_output_____"
]
],
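[
[
"A minimal sketch (same `df` assumption):\n\n```python\ndf = df.reset_index(drop=True)\n```",
"_____no_output_____"
]
],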
[
[
"# .. your code here ..",
"_____no_output_____"
]
],
[
[
"Check the data type of all columns, and ensure those that should be numeric are numeric.",
"_____no_output_____"
]
],
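[
[
"One way to inspect and convert (sketch; `'GP'` is a hypothetical column name, so apply the same pattern to each stats column that should be numeric):\n\n```python\nprint(df.dtypes)\n\n# coerce a string column to numbers; non-numeric entries become NaN\ndf['GP'] = pd.to_numeric(df['GP'], errors='coerce')\n```",
"_____no_output_____"
]
],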
[
[
"# .. your code here ..",
"_____no_output_____"
]
],
[
[
"Your dataframe is now ready! Use the appropriate commands to answer the questions on the course lab page.",
"_____no_output_____"
]
],
[
[
"# .. your code here ..",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a09d3657e4c0f5f9ceeceb9b55dc9ea2a8fb11e
| 3,083 |
ipynb
|
Jupyter Notebook
|
05 Prediction 01.ipynb
|
tgreiser/board-game-recommendation
|
675e411dc8ce0776c2869d4e6fb5c48bf346b00a
|
[
"MIT"
] | null | null | null |
05 Prediction 01.ipynb
|
tgreiser/board-game-recommendation
|
675e411dc8ce0776c2869d4e6fb5c48bf346b00a
|
[
"MIT"
] | null | null | null |
05 Prediction 01.ipynb
|
tgreiser/board-game-recommendation
|
675e411dc8ce0776c2869d4e6fb5c48bf346b00a
|
[
"MIT"
] | null | null | null | 31.783505 | 79 | 0.537464 |
[
[
[
"import pandas as pd\n\nusum = pd.read_csv('usum.csv', sep='\\t', index_col=0)\ngsum = pd.read_csv('gsum.csv', sep='\\t', index_col='userID')\nunbr = pd.read_csv('uneighbors.csv', sep='\\t', index_col=0)\ngnbr = pd.read_csv('gneighbors.csv', sep='\\t', index_col=0)\ndel gnbr['1']\ndel unbr['1']\n \n# user neighbors mean scores for the game\n# averaged with\n# game neighbors mean scores for the user\ndef predict(gameID, userID):\n global usum, gsum, unbr, gnbr\n # Series of games in rank - neighbors of original\n #display(gnbr.loc['gameID_'+gameID])\n # All the scores for one game\n #display(usum.loc['gameID_'+gameID])\n # Series of users in rank - neighbors of original\n #display(unbr.loc[userID])\n # All the ratings from one user\n #display(gsum.loc[userID])\n gameList = gnbr.loc[['gameID_'+gameID]].transpose()\n gameList = gameList[gameList.columns[0]].tolist()\n #display(gnbr.head(3))\n #display(gsum.head(5))\n #display(gsum.loc[[userID]])\n \n # median of user scores for neighboring games\n gavg = gsum.loc[[userID]][gameList].transpose().median()\n #display(gavg)\n \n userList = unbr.loc[[userID]].transpose()\n userList = list(map(str, userList[userList.columns[0]].tolist()))\n # userList are numbers, but need string references for column list\n #display(userList)\n #display(usum.loc[['gameID_'+gameID]][userList])\n uavg = usum.loc[['gameID_'+gameID]][userList].transpose().median()\n return (uavg.values[0]+gavg.values[0])/2\n\n#display(gsum.index)\n\n#predict('13', 272)\ntest = pd.read_csv('boardgame-users-test.csv', sep=',')\nfor index, row in test.iterrows():\n gid = str(int(row.values[1]))\n uid = int(row.values[0])\n #display(uid)\n# display(row['gameID'])\n if uid in gsum.index:\n rating = predict(gid, uid)\n test.iloc[index:index+1,2:3] = rating\n \ntest.to_csv('filled-test-tgreiser.csv', sep='\\t')",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code"
]
] |
4a09d4a276c94499723e78dfd243fb05ecc1d03f
| 45,132 |
ipynb
|
Jupyter Notebook
|
1_introduction/w1_python_fundamentals/1_week1/week1.ipynb
|
shijiansu/coursera-applied-data-science-with-python
|
a0f2bbd0b9201805f26d18b73a25183cf0b3a0e9
|
[
"MIT"
] | null | null | null |
1_introduction/w1_python_fundamentals/1_week1/week1.ipynb
|
shijiansu/coursera-applied-data-science-with-python
|
a0f2bbd0b9201805f26d18b73a25183cf0b3a0e9
|
[
"MIT"
] | null | null | null |
1_introduction/w1_python_fundamentals/1_week1/week1.ipynb
|
shijiansu/coursera-applied-data-science-with-python
|
a0f2bbd0b9201805f26d18b73a25183cf0b3a0e9
|
[
"MIT"
] | null | null | null | 18.489144 | 296 | 0.491824 |
[
[
[
"---\n\n_You are currently looking at **version 1.0** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._\n\n---",
"_____no_output_____"
],
[
"# The Python Programming Language: Functions",
"_____no_output_____"
]
],
[
[
"x = 1\ny = 2\nx + y",
"_____no_output_____"
],
[
"x",
"_____no_output_____"
]
],
[
[
"<br>\n`add_numbers` is a function that takes two numbers and adds them together.",
"_____no_output_____"
]
],
[
[
"def add_numbers(x, y):\n return x + y\n\nadd_numbers(1, 2)",
"_____no_output_____"
]
],
[
[
"<br>\n`add_numbers` updated to take an optional 3rd parameter. Using `print` allows printing of multiple expressions within a single cell.",
"_____no_output_____"
]
],
[
[
"def add_numbers(x,y,z=None):\n if (z==None):\n return x+y\n else:\n return x+y+z\n\nprint(add_numbers(1, 2))\nprint(add_numbers(1, 2, 3))",
"_____no_output_____"
]
],
[
[
"<br>\n`add_numbers` updated to take an optional flag parameter.",
"_____no_output_____"
]
],
[
[
"def add_numbers(x, y, z=None, flag=False):\n if (flag):\n print('Flag is true!')\n if (z==None):\n return x + y\n else:\n return x + y + z\n \nprint(add_numbers(1, 2, flag=True))",
"_____no_output_____"
]
],
[
[
"<br>\nAssign function `add_numbers` to variable `a`.",
"_____no_output_____"
]
],
[
[
"def add_numbers(x,y):\n return x+y\n\na = add_numbers\na(1,2)",
"_____no_output_____"
]
],
[
[
"<br>\n# The Python Programming Language: Types and Sequences",
"_____no_output_____"
],
[
"<br>\nUse `type` to return the object's type.",
"_____no_output_____"
]
],
[
[
"type('This is a string')",
"_____no_output_____"
],
[
"type(None)",
"_____no_output_____"
],
[
"type(1)",
"_____no_output_____"
],
[
"type(1.0)",
"_____no_output_____"
],
[
"type(add_numbers)",
"_____no_output_____"
]
],
[
[
"<br>\nTuples are an immutable data structure (cannot be altered).",
"_____no_output_____"
]
],
[
[
"x = (1, 'a', 2, 'b')\ntype(x)",
"_____no_output_____"
]
],
[
[
"<br>\nLists are a mutable data structure.",
"_____no_output_____"
]
],
[
[
"x = [1, 'a', 2, 'b']\ntype(x)",
"_____no_output_____"
]
],
[
[
"<br>\nUse `append` to append an object to a list.",
"_____no_output_____"
]
],
[
[
"x.append(3.3)\nprint(x)",
"_____no_output_____"
]
],
[
[
"<br>\nThis is an example of how to loop through each item in the list.",
"_____no_output_____"
]
],
[
[
"for item in x:\n print(item)",
"_____no_output_____"
]
],
[
[
"<br>\nOr using the indexing operator:",
"_____no_output_____"
]
],
[
[
"i=0\nwhile( i != len(x) ):\n print(x[i])\n i = i + 1",
"_____no_output_____"
]
],
[
[
"<br>\nUse `+` to concatenate lists.",
"_____no_output_____"
]
],
[
[
"[1,2] + [3,4]",
"_____no_output_____"
]
],
[
[
"<br>\nUse `*` to repeat lists.",
"_____no_output_____"
]
],
[
[
"[1]*3",
"_____no_output_____"
]
],
[
[
"<br>\nUse the `in` operator to check if something is inside a list.",
"_____no_output_____"
]
],
[
[
"1 in [1, 2, 3]",
"_____no_output_____"
]
],
[
[
"<br>\nNow let's look at strings. Use bracket notation to slice a string.",
"_____no_output_____"
]
],
[
[
"x = 'This is a string'\nprint(x[0]) #first character\nprint(x[0:1]) #first character, but we have explicitly set the end character\nprint(x[0:2]) #first two characters\n",
"_____no_output_____"
]
],
[
[
"<br>\nThis will return the last element of the string.",
"_____no_output_____"
]
],
[
[
"x[-1]",
"_____no_output_____"
]
],
[
[
"<br>\nThis will return the slice starting from the 4th element from the end and stopping before the 2nd element from the end.",
"_____no_output_____"
]
],
[
[
"x[-4:-2]",
"_____no_output_____"
]
],
[
[
"<br>\nThis is a slice from the beginning of the string and stopping before the 3rd element.",
"_____no_output_____"
]
],
[
[
"x[:3]",
"_____no_output_____"
]
],
[
[
"<br>\nAnd this is a slice starting from the 3rd element of the string and going all the way to the end.",
"_____no_output_____"
]
],
[
[
"x[3:]",
"_____no_output_____"
],
[
"firstname = 'Christopher'\nlastname = 'Brooks'\n\nprint(firstname + ' ' + lastname)\nprint(firstname*3)\nprint('Chris' in firstname)\n",
"_____no_output_____"
]
],
[
[
"<br>\n`split` returns a list of all the words in a string, or a list split on a specific character.",
"_____no_output_____"
]
],
[
[
"firstname = 'Christopher Arthur Hansen Brooks'.split(' ')[0] # [0] selects the first element of the list\nlastname = 'Christopher Arthur Hansen Brooks'.split(' ')[-1] # [-1] selects the last element of the list\nprint(firstname)\nprint(lastname)",
"_____no_output_____"
]
],
[
[
"<br>\nMake sure you convert objects to strings before concatenating.",
"_____no_output_____"
]
],
[
[
"'Chris' + 2",
"_____no_output_____"
],
[
"'Chris' + str(2)",
"_____no_output_____"
]
],
[
[
"<br>\nDictionaries associate keys with values.",
"_____no_output_____"
]
],
[
[
"x = {'Christopher Brooks': '[email protected]', 'Bill Gates': '[email protected]'}\nx['Christopher Brooks'] # Retrieve a value by using the indexing operator\n",
"_____no_output_____"
],
[
"x['Kevyn Collins-Thompson'] = None\nx['Kevyn Collins-Thompson']",
"_____no_output_____"
]
],
[
[
"<br>\nIterate over all of the keys:",
"_____no_output_____"
]
],
[
[
"for name in x:\n print(x[name])",
"_____no_output_____"
]
],
[
[
"<br>\nIterate over all of the values:",
"_____no_output_____"
]
],
[
[
"for email in x.values():\n print(email)",
"_____no_output_____"
]
],
[
[
"<br>\nIterate over all of the items in the list:",
"_____no_output_____"
]
],
[
[
"for name, email in x.items():\n print(name)\n print(email)",
"_____no_output_____"
]
],
[
[
"<br>\nYou can unpack a sequence into different variables:",
"_____no_output_____"
]
],
[
[
"x = ('Christopher', 'Brooks', '[email protected]')\nfname, lname, email = x",
"_____no_output_____"
],
[
"fname",
"_____no_output_____"
],
[
"lname",
"_____no_output_____"
]
],
[
[
"<br>\nMake sure the number of values you are unpacking matches the number of variables being assigned.",
"_____no_output_____"
]
],
[
[
"x = ('Christopher', 'Brooks', '[email protected]', 'Ann Arbor')\nfname, lname, email = x",
"_____no_output_____"
]
],
[
[
"<br>\n# The Python Programming Language: More on Strings",
"_____no_output_____"
]
],
[
[
"print('Chris' + 2)",
"_____no_output_____"
],
[
"print('Chris' + str(2))",
"_____no_output_____"
]
],
[
[
"<br>\nPython has a built in method for convenient string formatting.",
"_____no_output_____"
]
],
[
[
"sales_record = {\n'price': 3.24,\n'num_items': 4,\n'person': 'Chris'}\n\nsales_statement = '{} bought {} item(s) at a price of {} each for a total of {}'\n\nprint(sales_statement.format(sales_record['person'],\n sales_record['num_items'],\n sales_record['price'],\n sales_record['num_items']*sales_record['price']))\n",
"_____no_output_____"
]
],
[
[
"<br>\n# Reading and Writing CSV files",
"_____no_output_____"
],
[
"<br>\nLet's import our datafile mpg.csv, which contains fuel economy data for 234 cars.",
"_____no_output_____"
]
],
[
[
"import csv\n\n%precision 2\n\nwith open('mpg.csv') as csvfile:\n mpg = list(csv.DictReader(csvfile))\n \nmpg[:3] # The first three dictionaries in our list.",
"_____no_output_____"
]
],
[
[
"<br>\n`csv.Dictreader` has read in each row of our csv file as a dictionary. `len` shows that our list is comprised of 234 dictionaries.",
"_____no_output_____"
]
],
[
[
"len(mpg)",
"_____no_output_____"
]
],
[
[
"<br>\n`keys` gives us the column names of our csv.",
"_____no_output_____"
]
],
[
[
"mpg[0].keys()",
"_____no_output_____"
]
],
[
[
"<br>\nThis is how to find the average cty fuel economy across all cars. All values in the dictionaries are strings, so we need to convert to float.",
"_____no_output_____"
]
],
[
[
"sum(float(d['cty']) for d in mpg) / len(mpg)",
"_____no_output_____"
]
],
[
[
"<br>\nSimilarly this is how to find the average hwy fuel economy across all cars.",
"_____no_output_____"
]
],
[
[
"sum(float(d['hwy']) for d in mpg) / len(mpg)",
"_____no_output_____"
]
],
[
[
"<br>\nUse `set` to return the unique values for the number of cylinders the cars in our dataset have.",
"_____no_output_____"
]
],
[
[
"cylinders = set(d['cyl'] for d in mpg)\ncylinders",
"_____no_output_____"
]
],
[
[
"<br>\nHere's a more complex example where we are grouping the cars by number of cylinder, and finding the average cty mpg for each group.",
"_____no_output_____"
]
],
[
[
"CtyMpgByCyl = []\n\nfor c in cylinders: # iterate over all the cylinder levels\n summpg = 0\n cyltypecount = 0\n for d in mpg: # iterate over all dictionaries\n if d['cyl'] == c: # if the cylinder level type matches,\n summpg += float(d['cty']) # add the cty mpg\n cyltypecount += 1 # increment the count\n CtyMpgByCyl.append((c, summpg / cyltypecount)) # append the tuple ('cylinder', 'avg mpg')\n\nCtyMpgByCyl.sort(key=lambda x: x[0])\nCtyMpgByCyl",
"_____no_output_____"
]
],
[
[
"<br>\nUse `set` to return the unique values for the class types in our dataset.",
"_____no_output_____"
]
],
[
[
"vehicleclass = set(d['class'] for d in mpg) # what are the class types\nvehicleclass",
"_____no_output_____"
]
],
[
[
"<br>\nAnd here's an example of how to find the average hwy mpg for each class of vehicle in our dataset.",
"_____no_output_____"
]
],
[
[
"HwyMpgByClass = []\n\nfor t in vehicleclass: # iterate over all the vehicle classes\n summpg = 0\n vclasscount = 0\n for d in mpg: # iterate over all dictionaries\n if d['class'] == t: # if the cylinder amount type matches,\n summpg += float(d['hwy']) # add the hwy mpg\n vclasscount += 1 # increment the count\n HwyMpgByClass.append((t, summpg / vclasscount)) # append the tuple ('class', 'avg mpg')\n\nHwyMpgByClass.sort(key=lambda x: x[1])\nHwyMpgByClass",
"_____no_output_____"
]
],
[
[
"<br>\n# The Python Programming Language: Dates and Times",
"_____no_output_____"
]
],
[
[
"import datetime as dt\nimport time as tm",
"_____no_output_____"
]
],
[
[
"<br>\n`time` returns the current time in seconds since the Epoch. (January 1st, 1970)",
"_____no_output_____"
]
],
[
[
"tm.time()",
"_____no_output_____"
]
],
[
[
"<br>\nConvert the timestamp to datetime.",
"_____no_output_____"
]
],
[
[
"dtnow = dt.datetime.fromtimestamp(tm.time())\ndtnow",
"_____no_output_____"
]
],
[
[
"<br>\nHandy datetime attributes:",
"_____no_output_____"
]
],
[
[
"dtnow.year, dtnow.month, dtnow.day, dtnow.hour, dtnow.minute, dtnow.second # get year, month, day, etc.from a datetime",
"_____no_output_____"
]
],
[
[
"<br>\n`timedelta` is a duration expressing the difference between two dates.",
"_____no_output_____"
]
],
[
[
"delta = dt.timedelta(days = 100) # create a timedelta of 100 days\ndelta",
"_____no_output_____"
]
],
[
[
"<br>\n`date.today` returns the current local date.",
"_____no_output_____"
]
],
[
[
"today = dt.date.today()",
"_____no_output_____"
],
[
"today - delta # the date 100 days ago",
"_____no_output_____"
],
[
"today > today-delta # compare dates",
"_____no_output_____"
]
],
[
[
"<br>\n# The Python Programming Language: Objects and map()",
"_____no_output_____"
],
[
"<br>\nAn example of a class in python:",
"_____no_output_____"
]
],
[
[
"class Person:\n department = 'School of Information' #a class variable\n\n def set_name(self, new_name): #a method\n self.name = new_name\n def set_location(self, new_location):\n self.location = new_location",
"_____no_output_____"
],
[
"person = Person()\nperson.set_name('Christopher Brooks')\nperson.set_location('Ann Arbor, MI, USA')\nprint('{} live in {} and works in the department {}'.format(person.name, person.location, person.department))",
"_____no_output_____"
]
],
[
[
"<br>\nHere's an example of mapping the `min` function between two lists.",
"_____no_output_____"
]
],
[
[
"store1 = [10.00, 11.00, 12.34, 2.34]\nstore2 = [9.00, 11.10, 12.34, 2.01]\ncheapest = map(min, store1, store2)\ncheapest",
"_____no_output_____"
]
],
[
[
"<br>\nNow let's iterate through the map object to see the values.",
"_____no_output_____"
]
],
[
[
"for item in cheapest:\n print(item)",
"_____no_output_____"
]
],
[
[
"<br>\n# The Python Programming Language: Lambda and List Comprehensions",
"_____no_output_____"
],
[
"<br>\nHere's an example of lambda that takes in three parameters and adds the first two.",
"_____no_output_____"
]
],
[
[
"my_function = lambda a, b, c : a + b",
"_____no_output_____"
],
[
"my_function(1, 2, 3)",
"_____no_output_____"
]
],
[
[
"<br>\nLet's iterate from 0 to 999 and return the even numbers.",
"_____no_output_____"
]
],
[
[
"my_list = []\nfor number in range(0, 1000):\n if number % 2 == 0:\n my_list.append(number)\nmy_list",
"_____no_output_____"
]
],
[
[
"<br>\nNow the same thing but with list comprehension.",
"_____no_output_____"
]
],
[
[
"my_list = [number for number in range(0,1000) if number % 2 == 0]\nmy_list",
"_____no_output_____"
]
],
[
[
"<br>\n# The Python Programming Language: Numerical Python (NumPy)",
"_____no_output_____"
]
],
[
[
"import numpy as np",
"_____no_output_____"
]
],
[
[
"<br>\n## Creating Arrays",
"_____no_output_____"
],
[
"Create a list and convert it to a numpy array",
"_____no_output_____"
]
],
[
[
"mylist = [1, 2, 3]\nx = np.array(mylist)\nx",
"_____no_output_____"
]
],
[
[
"<br>\nOr just pass in a list directly",
"_____no_output_____"
]
],
[
[
"y = np.array([4, 5, 6])\ny",
"_____no_output_____"
]
],
[
[
"<br>\nPass in a list of lists to create a multidimensional array.",
"_____no_output_____"
]
],
[
[
"m = np.array([[7, 8, 9], [10, 11, 12]])\nm",
"_____no_output_____"
]
],
[
[
"<br>\nUse the shape method to find the dimensions of the array. (rows, columns)",
"_____no_output_____"
]
],
[
[
"m.shape",
"_____no_output_____"
]
],
[
[
"<br>\n`arange` returns evenly spaced values within a given interval.",
"_____no_output_____"
]
],
[
[
"n = np.arange(0, 30, 2) # start at 0 count up by 2, stop before 30\nn",
"_____no_output_____"
]
],
[
[
"<br>\n`reshape` returns an array with the same data with a new shape.",
"_____no_output_____"
]
],
[
[
"n = n.reshape(3, 5) # reshape array to be 3x5\nn",
"_____no_output_____"
]
],
[
[
"<br>\n`linspace` returns evenly spaced numbers over a specified interval.",
"_____no_output_____"
]
],
[
[
"o = np.linspace(0, 4, 9) # return 9 evenly spaced values from 0 to 4\no",
"_____no_output_____"
]
],
[
[
"<br>\n`resize` changes the shape and size of array in-place.",
"_____no_output_____"
]
],
[
[
"o.resize(3, 3)\no",
"_____no_output_____"
]
],
[
[
"<br>\n`ones` returns a new array of given shape and type, filled with ones.",
"_____no_output_____"
]
],
[
[
"np.ones((3, 2))",
"_____no_output_____"
]
],
[
[
"<br>\n`zeros` returns a new array of given shape and type, filled with zeros.",
"_____no_output_____"
]
],
[
[
"np.zeros((2, 3))",
"_____no_output_____"
]
],
[
[
"<br>\n`eye` returns a 2-D array with ones on the diagonal and zeros elsewhere.",
"_____no_output_____"
]
],
[
[
"np.eye(3)",
"_____no_output_____"
]
],
[
[
"<br>\n`diag` extracts a diagonal or constructs a diagonal array.",
"_____no_output_____"
]
],
[
[
"np.diag(y)",
"_____no_output_____"
]
],
[
[
"<br>\nCreate an array using repeating list (or see `np.tile`)",
"_____no_output_____"
]
],
[
[
"np.array([1, 2, 3] * 3)",
"_____no_output_____"
]
],
[
[
"<br>\nRepeat elements of an array using `repeat`.",
"_____no_output_____"
]
],
[
[
"np.repeat([1, 2, 3], 3)",
"_____no_output_____"
]
],
[
[
"<br>\n#### Combining Arrays",
"_____no_output_____"
]
],
[
[
"p = np.ones([2, 3], int)\np",
"_____no_output_____"
]
],
[
[
"<br>\nUse `vstack` to stack arrays in sequence vertically (row wise).",
"_____no_output_____"
]
],
[
[
"np.vstack([p, 2*p])",
"_____no_output_____"
]
],
[
[
"<br>\nUse `hstack` to stack arrays in sequence horizontally (column wise).",
"_____no_output_____"
]
],
[
[
"np.hstack([p, 2*p])",
"_____no_output_____"
]
],
[
[
"<br>\n## Operations",
"_____no_output_____"
],
[
"Use `+`, `-`, `*`, `/` and `**` to perform element wise addition, subtraction, multiplication, division and power.",
"_____no_output_____"
]
],
[
[
"print(x + y) # elementwise addition [1 2 3] + [4 5 6] = [5 7 9]\nprint(x - y) # elementwise subtraction [1 2 3] - [4 5 6] = [-3 -3 -3]",
"_____no_output_____"
],
[
"print(x * y) # elementwise multiplication [1 2 3] * [4 5 6] = [4 10 18]\nprint(x / y) # elementwise divison [1 2 3] / [4 5 6] = [0.25 0.4 0.5]",
"_____no_output_____"
],
[
"print(x**2) # elementwise power [1 2 3] ^2 = [1 4 9]",
"_____no_output_____"
]
],
[
[
"<br>\n**Dot Product:** \n\n$ \\begin{bmatrix}x_1 \\ x_2 \\ x_3\\end{bmatrix}\n\\cdot\n\\begin{bmatrix}y_1 \\\\ y_2 \\\\ y_3\\end{bmatrix}\n= x_1 y_1 + x_2 y_2 + x_3 y_3$",
"_____no_output_____"
]
],
[
[
"x.dot(y) # dot product 1*4 + 2*5 + 3*6",
"_____no_output_____"
],
[
"z = np.array([y, y**2])\nprint(len(z)) # number of rows of array",
"_____no_output_____"
]
],
[
[
"<br>\nLet's look at transposing arrays. Transposing permutes the dimensions of the array.",
"_____no_output_____"
]
],
[
[
"z = np.array([y, y**2])\nz",
"_____no_output_____"
]
],
[
[
"<br>\nThe shape of array `z` is `(2,3)` before transposing.",
"_____no_output_____"
]
],
[
[
"z.shape",
"_____no_output_____"
]
],
[
[
"<br>\nUse `.T` to get the transpose.",
"_____no_output_____"
]
],
[
[
"z.T",
"_____no_output_____"
]
],
[
[
"<br>\nThe number of rows has swapped with the number of columns.",
"_____no_output_____"
]
],
[
[
"z.T.shape",
"_____no_output_____"
]
],
[
[
"<br>\nUse `.dtype` to see the data type of the elements in the array.",
"_____no_output_____"
]
],
[
[
"z.dtype",
"_____no_output_____"
]
],
[
[
"<br>\nUse `.astype` to cast to a specific type.",
"_____no_output_____"
]
],
[
[
"z = z.astype('f')\nz.dtype",
"_____no_output_____"
]
],
[
[
"<br>\n## Math Functions",
"_____no_output_____"
],
[
"Numpy has many built in math functions that can be performed on arrays.",
"_____no_output_____"
]
],
[
[
"a = np.array([-4, -2, 1, 3, 5])",
"_____no_output_____"
],
[
"a.sum()",
"_____no_output_____"
],
[
"a.max()",
"_____no_output_____"
],
[
"a.min()",
"_____no_output_____"
],
[
"a.mean()",
"_____no_output_____"
],
[
"a.std()",
"_____no_output_____"
]
],
[
[
"<br>\n`argmax` and `argmin` return the index of the maximum and minimum values in the array.",
"_____no_output_____"
]
],
[
[
"a.argmax()",
"_____no_output_____"
],
[
"a.argmin()",
"_____no_output_____"
]
],
[
[
"<br>\n## Indexing / Slicing",
"_____no_output_____"
]
],
[
[
"s = np.arange(13)**2\ns",
"_____no_output_____"
]
],
[
[
"<br>\nUse bracket notation to get the value at a specific index. Remember that indexing starts at 0.",
"_____no_output_____"
]
],
[
[
"s[0], s[4], s[-1]",
"_____no_output_____"
]
],
[
[
"<br>\nUse `:` to indicate a range. `array[start:stop]`\n\n\nLeaving `start` or `stop` empty will default to the beginning/end of the array.",
"_____no_output_____"
]
],
[
[
"s[1:5]",
"_____no_output_____"
]
],
[
[
"<br>\nUse negatives to count from the back.",
"_____no_output_____"
]
],
[
[
"s[-4:]",
"_____no_output_____"
]
],
[
[
"<br>\nA second `:` can be used to indicate step-size. `array[start:stop:stepsize]`\n\nHere we are starting 5th element from the end, and counting backwards by 2 until the beginning of the array is reached.",
"_____no_output_____"
]
],
[
[
"s[-5::-2]",
"_____no_output_____"
]
],
[
[
"<br>\nLet's look at a multidimensional array.",
"_____no_output_____"
]
],
[
[
"r = np.arange(36)\nr.resize((6, 6))\nr",
"_____no_output_____"
]
],
[
[
"<br>\nUse bracket notation to slice: `array[row, column]`",
"_____no_output_____"
]
],
[
[
"r[2, 2]",
"_____no_output_____"
]
],
[
[
"<br>\nAnd use : to select a range of rows or columns",
"_____no_output_____"
]
],
[
[
"r[3, 3:6]",
"_____no_output_____"
]
],
[
[
"<br>\nHere we are selecting all the rows up to (and not including) row 2, and all the columns up to (and not including) the last column.",
"_____no_output_____"
]
],
[
[
"r[:2, :-1]",
"_____no_output_____"
]
],
[
[
"<br>\nThis is a slice of the last row, and only every other element.",
"_____no_output_____"
]
],
[
[
"r[-1, ::2]",
"_____no_output_____"
]
],
[
[
"<br>\nWe can also perform conditional indexing. Here we are selecting values from the array that are greater than 30. (Also see `np.where`)",
"_____no_output_____"
]
],
[
[
"r[r > 30]",
"_____no_output_____"
]
],
[
[
"<br>\nHere we are assigning all values in the array that are greater than 30 to the value of 30.",
"_____no_output_____"
]
],
[
[
"r[r > 30] = 30\nr",
"_____no_output_____"
]
],
[
[
"<br>\n## Copying Data",
"_____no_output_____"
],
[
"Be careful with copying and modifying arrays in NumPy!\n\n\n`r2` is a slice of `r`",
"_____no_output_____"
]
],
[
[
"r2 = r[:3,:3]\nr2",
"_____no_output_____"
]
],
[
[
"<br>\nSet this slice's values to zero ([:] selects the entire array)",
"_____no_output_____"
]
],
[
[
"r2[:] = 0\nr2",
"_____no_output_____"
]
],
[
[
"<br>\n`r` has also been changed!",
"_____no_output_____"
]
],
[
[
"r",
"_____no_output_____"
]
],
[
[
"<br>\nTo avoid this, use `r.copy` to create a copy that will not affect the original array",
"_____no_output_____"
]
],
[
[
"r_copy = r.copy()\nr_copy",
"_____no_output_____"
]
],
[
[
"<br>\nNow when r_copy is modified, r will not be changed.",
"_____no_output_____"
]
],
[
[
"r_copy[:] = 10\nprint(r_copy, '\\n')\nprint(r)",
"_____no_output_____"
]
],
[
[
"<br>\n### Iterating Over Arrays",
"_____no_output_____"
],
[
"Let's create a new 4 by 3 array of random numbers 0-9.",
"_____no_output_____"
]
],
[
[
"test = np.random.randint(0, 10, (4,3))\ntest",
"_____no_output_____"
]
],
[
[
"<br>\nIterate by row:",
"_____no_output_____"
]
],
[
[
"for row in test:\n print(row)",
"_____no_output_____"
]
],
[
[
"<br>\nIterate by index:",
"_____no_output_____"
]
],
[
[
"for i in range(len(test)):\n print(test[i])",
"_____no_output_____"
]
],
[
[
"<br>\nIterate by row and index:",
"_____no_output_____"
]
],
[
[
"for i, row in enumerate(test):\n print('row', i, 'is', row)",
"_____no_output_____"
]
],
[
[
"<br>\nUse `zip` to iterate over multiple iterables.",
"_____no_output_____"
]
],
[
[
"test2 = test**2\ntest2",
"_____no_output_____"
],
[
"for i, j in zip(test, test2):\n print(i,'+',j,'=',i+j)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4a09deaf9ae87ffe4ead7f9c24b856f3512fc295
| 8,184 |
ipynb
|
Jupyter Notebook
|
notebooks/clustering/cuml-dbscan.ipynb
|
TheScienceMuseum/heritage-connector-vectors
|
efa2bb37ebd2a2b93d68cb76c8b59330e63e3a7d
|
[
"MIT"
] | 1 |
2021-06-22T13:52:03.000Z
|
2021-06-22T13:52:03.000Z
|
notebooks/clustering/cuml-dbscan.ipynb
|
TheScienceMuseum/heritage-connector-vectors
|
efa2bb37ebd2a2b93d68cb76c8b59330e63e3a7d
|
[
"MIT"
] | 3 |
2021-06-22T14:07:52.000Z
|
2021-11-09T10:55:46.000Z
|
notebooks/clustering/cuml-dbscan.ipynb
|
TheScienceMuseum/heritage-connector-vectors
|
efa2bb37ebd2a2b93d68cb76c8b59330e63e3a7d
|
[
"MIT"
] | null | null | null | 50.832298 | 1,063 | 0.620601 |
[
[
[
"## dbscan clustering using `cuml`\n",
"_____no_output_____"
]
],
[
[
"%load_ext autoreload\n%autoreload 2\n\nimport utils\nfrom cuml.cluster import DBSCAN\n\nimport numpy as np",
"_____no_output_____"
],
[
"# load embeddings matrix using KGEmbeddingStore class\nemb_store = utils.load_embedding_store()\nX = emb_store.ent_embedding_matrix\ndim = X.shape[1]\n\ndim",
"_____no_output_____"
],
[
"labels = {}\n\nfor EPS in [0.25, 0.5, 0.75]:\n print(f\"EPS = {EPS}\")\n \n dbscan_float = DBSCAN(\n eps=EPS, \n min_samples=2,\n verbose=False,\n )\n labels[EPS] = dbscan_float.fit_predict(X)\n print(f\"no clusters = {len(np.unique(labels[EPS])) - 1}\")\n \n with open(f\"./dbscan_cluster_idxs_EPS_{EPS}.txt\", \"wb\") as f:\n np.savetxt(f, labels[EPS].astype(int), fmt='%i', delimiter=\",\")\n ",
"EPS = 0.25\n[W] [14:26:02.228180] Batch size limited by the chosen integer type (4 bytes). 3558 -> 3326. Using the larger integer type might result in better performance\nno clusters = 26267\nEPS = 0.75\n[W] [14:29:53.530086] Batch size limited by the chosen integer type (4 bytes). 3558 -> 3326. Using the larger integer type might result in better performance\n"
],
[
"len(np.unique(labels[EPS])) - 1",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
4a09e1da4aa3907a92281406261a28f2a57fd0ef
| 11,833 |
ipynb
|
Jupyter Notebook
|
NUM/3.3-7.ipynb
|
5AF1/LabWorksML
|
ddd702678aad1f62ce25b1d971ea1fc666c24f9d
|
[
"MIT"
] | null | null | null |
NUM/3.3-7.ipynb
|
5AF1/LabWorksML
|
ddd702678aad1f62ce25b1d971ea1fc666c24f9d
|
[
"MIT"
] | null | null | null |
NUM/3.3-7.ipynb
|
5AF1/LabWorksML
|
ddd702678aad1f62ce25b1d971ea1fc666c24f9d
|
[
"MIT"
] | null | null | null | 37.328076 | 1,737 | 0.393814 |
[
[
[
"from prettytable import PrettyTable",
"_____no_output_____"
],
[
"def math_expr(x):\n return 230*x**4+18*x**3+9*x**2-221*x-9\n #return x**3 - 0.165*x**2 + 3.993e-4\n #return (x-1)**3 + .512\n\n\ndig = 5\nu = -1\nv = 1\nit = 50\n#u,v,it,dig\n#x 3 − 0.165x 2 + 3.993×10−4 = 0",
"_____no_output_____"
],
[
"def bisection_method(func,xl=0,xu=5,iter=10,digits = 16):\n tab = PrettyTable()\n acc = 0.5*10**(-digits)\n if (func(xl)*func(xu) > 0) :\n print(\"f(xl)*f(xu) > 0 \")\n return\n xm = (xl+xu)/2\n i = 1\n e_a = None\n #print(\"Iter\\txl\\txu\\txm\\t|ε_a|%\\tf(xm)\")\n tab.field_names = ['Iter','xl','f(xl)','xu','f(xu)','xm','f(xm)','|ε_a|%']\n #print(f\"{i}\\t{xl:.5f}\\t{xu:.5f}\\t{xm:.5f}\\t{e_a}\\t{func(xm):.5g}\")\n tab.add_row([i,\"%.5f\" % xl,\"%.5g\" % func(xl),\"%.5f\" % xu,\"%.5g\" % func(xu),\"%.5f\" % xm,\"%.5g\"% func(xm),e_a])\n e_a = acc+1\n\n while(i < iter and e_a > acc):\n if func(xl)*func(xm) < 0:\n xu = xm\n elif func(xl)*func(xm) > 0:\n xl = xm\n else:\n break\n\n i+=1\n xmn = (xl+xu)/2\n e_a = abs((xmn-xm)/xmn)\n xm = xmn\n #print(f\"{i}\\t{xl:.5f}\\t{xu:.5f}\\t{xm:.5f}\\t{e_a*100:.5f}\\t{func(xm):.5g}\")\n tab.add_row([i,\"%.5g\" % xl,\"%.5g\" % func(xl),\"%.5g\" % xu,\"%.5g\" % func(xu),\"%.6f\" % xm,\"%.5g\" % func(xm),\"%.5f\" % (e_a*100)])\n \n print(tab)\n",
"_____no_output_____"
],
[
"bisection_method(math_expr,u,v,it,dig)",
"f(xl)*f(xu) > 0 \n"
],
[
"def derivative(func,h=1e-5):\n def f_prime(x):\n return (func(x+h)-func(x))/h\n \n a = f_prime\n return a\n",
"_____no_output_____"
],
[
"def newtonraphson_method(func,x=0,iter=10,digits = 16):\n tab = PrettyTable()\n acc = 0.5*10**(-digits)\n i = 0\n e_a = None\n tab.field_names = ['Iter','x_i-1','f(x)',\"f'(x)\",'x_i','|ε_a|%']\n #print(f\"{i}\\t{x:.5g}\\t{e_a}\\t{func(x):.5g}\")\n fprime = derivative(func)\n e_a = acc+1\n while(i < iter and e_a > acc):\n i+=1\n xn = x - func(x)/fprime(x)\n e_a = abs((xn-x)/xn)\n tab.add_row([i,\"%.5g\" % x,\"%.5g\" % func(x),\"%.5g\" % fprime(x),\"%.6g\" % xn,\"%.5f\" % (e_a*100)])\n x = xn\n #print(f\"{i}\\t{x:.5g}\\t{e_a*100:.5g}\\t{func(x):.5g}\")\n print(tab)\n",
"_____no_output_____"
],
[
"newtonraphson_method(math_expr,u,it,dig)",
"+------+-----------+-------------+---------+------------+-----------+\n| Iter | x_i-1 | f(x) | f'(x) | x_i | |ε_a|% |\n+------+-----------+-------------+---------+------------+-----------+\n| 1 | -1 | 433 | -1105 | -0.60814 | 64.43580 |\n| 2 | -0.60814 | 156.14 | -418.89 | -0.235397 | 158.34652 |\n| 3 | -0.2354 | 43.993 | -234.24 | -0.0475895 | 394.64058 |\n| 4 | -0.04759 | 1.5369 | -221.83 | -0.0406613 | 17.03879 |\n| 5 | -0.040661 | 0.00044996 | -221.7 | -0.0406593 | 0.00499 |\n| 6 | -0.040659 | -1.4697e-10 | -221.7 | -0.0406593 | 0.00000 |\n+------+-----------+-------------+---------+------------+-----------+\n"
],
[
"def secant_method(func,x0=0,x1=5,iter=10,digits = 16):\n tab = PrettyTable()\n acc = 0.5*10**(-digits)\n i = 0\n #e_a = abs((x1-x0)/x1)\n tab.field_names = ['Iter','x_i-1','f(x_i-1)','x_i','f(x_i)','x_i+1','f(x_i+1)','|ε_a|%']\n #print(f\"{i}\\t{x:.5f}\\t{e_a}\\t{func(x):.5g}\")\n #tab.add_row([i,\"%.5g\" % x0,\"%.5g\" % x1,None,\"%.5g\" % (e_a*100),None])\n e_a = acc+1\n while(i < iter and e_a > acc):\n fprime = (func(x1)-func(x0))/(x1-x0)\n i+=1\n xn = x1 - func(x1)/fprime\n e_a = abs((xn-x1)/xn)\n tab.add_row([i,\"%.5g\" % x0,\"%.5g\" % func(x0),\"%.5g\" % x1,\"%.5g\" % func(x1),\"%.6g\" % xn,\"%.5g\" % func(xn),\"%.5f\" % (e_a*100)])\n x0 = x1\n x1 = xn\n #print(f\"{i}\\t{x:.5f}\\t{e_a*100:.5f}\\t{func(x):.5g}\")\n print(tab)",
"_____no_output_____"
],
[
"secant_method(math_expr,u,v,it,dig)",
"+------+---------+----------+---------+------------+----------+------------+----------+\n| Iter | x_i-1 | f(x_i-1) | x_i | f(x_i) | x_i+1 | f(x_i+1) | |ε_a|% |\n+------+---------+----------+---------+------------+----------+------------+----------+\n| 1 | -1 | 433 | 1 | 27 | 1.133 | 157.35 | 11.73913 |\n| 2 | 1 | 27 | 1.133 | 157.35 | 0.972451 | 6.8352 | 16.51027 |\n| 3 | 1.133 | 157.35 | 0.97245 | 6.8352 | 0.96516 | 1.8504 | 0.75541 |\n| 4 | 0.97245 | 6.8352 | 0.96516 | 1.8504 | 0.962453 | 0.03655 | 0.28121 |\n| 5 | 0.96516 | 1.8504 | 0.96245 | 0.03655 | 0.962399 | 0.00020202 | 0.00567 |\n| 6 | 0.96245 | 0.03655 | 0.9624 | 0.00020202 | 0.962398 | 2.2261e-08 | 0.00003 |\n+------+---------+----------+---------+------------+----------+------------+----------+\n"
],
[
"def falseposition_method(func,xl=0,xu=5,iter=10,digits = 16):\n tab = PrettyTable()\n acc = 0.5*10**(-digits)\n if (func(xl)*func(xu) > 0) :\n print(\"f(xl)*f(xu) > 0 \")\n return\n xm = (xu*func(xl) - xl*func(xu))/(func(xl) - func(xu))\n i = 1\n e_a = None\n #print(\"Iter\\txl\\txu\\txm\\t|ε_a|%\\tf(xm)\")\n tab.field_names = ['Iter','xl','f(xl)','xu','f(xu)','xm','f(xm)','|ε_a|%']\n #print(f\"{i}\\t{xl:.5f}\\t{xu:.5f}\\t{xm:.5f}\\t{e_a}\\t{func(xm):.5g}\")\n tab.add_row([i,\"%.5f\" % xl,\"%.5g\" % func(xl),\"%.5f\" % xu,\"%.5g\" % func(xu),\"%.5f\" % xm,\"%.6g\" % func(xm),e_a])\n e_a = acc+1\n\n while(i < iter and e_a > acc):\n if func(xl)*func(xm) < 0:\n xu = xm\n elif func(xl)*func(xm) > 0:\n xl = xm\n else:\n break\n\n i+=1\n xmn = (xu*func(xl) - xl*func(xu))/(func(xl) - func(xu))\n e_a = abs((xmn-xm)/xmn)\n xm = xmn\n #print(f\"{i}\\t{xl:.5f}\\t{xu:.5f}\\t{xm:.5f}\\t{e_a*100:.5f}\\t{func(xm):.5g}\")\n tab.add_row([i,\"%.5f\" % xl,\"%.5g\" % func(xl),\"%.5f\" % xu,\"%.5g\" % func(xu),\"%.5f\" % xm,\"%.6g\" % func(xm),\"%.5f\" % (e_a*100)])\n \n print(tab)\n",
"_____no_output_____"
],
[
"falseposition_method(math_expr,u,v,it,dig)",
"f(xl)*f(xu) > 0 \n"
],
[
"def math_expr(x):\n #return 40*x**1.5-875*x+35000\n return 230*x**4+18*x**3+9*x**2-221*x-9\n #return x**3 - 0.165*x**2 + 3.993e-4\n #return (x-1)**3 + .512\n\n\ndig = 4\nu = -1\nv = 0\nit = 50\n#u,v,it,dig\n#x 3 − 0.165x 2 + 3.993×10−4 = 0\n#newtonraphson_method(math_expr,300,it,dig)\nfalseposition_method(math_expr,-1,0,it,dig)",
"+------+----------+-------+----------+-------------+----------+--------------+----------+\n| Iter | xl | f(xl) | xu | f(xu) | xm | f(xm) | |ε_a|% |\n+------+----------+-------+----------+-------------+----------+--------------+----------+\n| 1 | -1.00000 | 433 | 0.00000 | -9 | -0.02036 | -4.49638 | None |\n| 2 | -1.00000 | 433 | -0.02036 | -4.4964 | -0.03043 | -2.26689 | 33.08634 |\n| 3 | -1.00000 | 433 | -0.03043 | -2.2669 | -0.03548 | -1.14807 | 14.23222 |\n| 4 | -1.00000 | 433 | -0.03548 | -1.1481 | -0.03803 | -0.582771 | 6.70674 |\n| 5 | -1.00000 | 433 | -0.03803 | -0.58277 | -0.03932 | -0.296161 | 3.28803 |\n| 6 | -1.00000 | 433 | -0.03932 | -0.29616 | -0.03998 | -0.150595 | 1.64239 |\n| 7 | -1.00000 | 433 | -0.03998 | -0.1506 | -0.04031 | -0.0765991 | 0.82794 |\n| 8 | -1.00000 | 433 | -0.04031 | -0.076599 | -0.04048 | -0.0389675 | 0.41929 |\n| 9 | -1.00000 | 433 | -0.04048 | -0.038967 | -0.04057 | -0.019825 | 0.21283 |\n| 10 | -1.00000 | 433 | -0.04057 | -0.019825 | -0.04061 | -0.0100865 | 0.10815 |\n| 11 | -1.00000 | 433 | -0.04061 | -0.010087 | -0.04064 | -0.00513192 | 0.05500 |\n| 12 | -1.00000 | 433 | -0.04064 | -0.0051319 | -0.04065 | -0.00261109 | 0.02797 |\n| 13 | -1.00000 | 433 | -0.04065 | -0.0026111 | -0.04065 | -0.00132851 | 0.01423 |\n| 14 | -1.00000 | 433 | -0.04065 | -0.0013285 | -0.04066 | -0.000675943 | 0.00724 |\n| 15 | -1.00000 | 433 | -0.04066 | -0.00067594 | -0.04066 | -0.000343918 | 0.00368 |\n+------+----------+-------+----------+-------------+----------+--------------+----------+\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a09f99abac00ce921d9b4ac1560eb0227dc71a5
| 108,872 |
ipynb
|
Jupyter Notebook
|
timings.ipynb
|
danielalcalde/apalis
|
95d023c9d8950ad68b95fae84c62c70dd52a337a
|
[
"MIT"
] | null | null | null |
timings.ipynb
|
danielalcalde/apalis
|
95d023c9d8950ad68b95fae84c62c70dd52a337a
|
[
"MIT"
] | null | null | null |
timings.ipynb
|
danielalcalde/apalis
|
95d023c9d8950ad68b95fae84c62c70dd52a337a
|
[
"MIT"
] | null | null | null | 171.451969 | 27,800 | 0.898835 |
[
[
[
" <font size=\"6\"> <center> **[Open in Github](https://github.com/danielalcalde/apalis/blob/master/timings.ipynb)**</center></font> ",
"_____no_output_____"
],
[
"# Timings",
"_____no_output_____"
],
[
"## Apalis vs Ray",
"_____no_output_____"
],
[
"In this notebook, the overhead of both the libraries' Ray and Apalis is measured. For Apalis also its different syntaxes are compared in the next section. We conclude that for parallel workloads of more than **10ms** ray and Apalis perform similarly but under that Apalis outperforms ray.",
"_____no_output_____"
]
],
[
[
"import apalis\nimport ray\nimport timeit\nimport numpy as np\n\nimport matplotlib.pyplot as plt\nimport matplotlib.ticker as mticker\n\nfrom collections import defaultdict",
"_____no_output_____"
],
[
"num_cpus = 16\nray.init(num_cpus=num_cpus);",
"2020-08-26 17:22:17,482\tINFO resource_spec.py:212 -- Starting Ray with 35.25 GiB memory available for workers and up to 17.63 GiB for objects. You can adjust these settings with ray.init(memory=<bytes>, object_store_memory=<bytes>).\n2020-08-26 17:22:17,708\tWARNING services.py:923 -- Redis failed to start, retrying now.\n2020-08-26 17:22:17,949\tINFO services.py:1165 -- View the Ray dashboard at \u001b[1m\u001b[32mlocalhost:8265\u001b[39m\u001b[22m\n"
],
[
"class A:\n def function(self, n):\n cum = 0\n for i in range(n):\n cum += i\n return cum\n\nAr = ray.remote(A)",
"_____no_output_____"
],
[
"obj_number = 16\nobjs = [A() for _ in range(obj_number)]\n\nobjs_apalis = [apalis.Handler(a) for a in objs]\nobjs_apalis_G = apalis.GroupHandler(objs, threads=num_cpus)\nobjs_ray = [Ar.remote() for _ in range(obj_number)]",
"_____no_output_____"
]
],
[
[
"Ray:",
"_____no_output_____"
]
],
[
[
"%%time\ntokens = [obj.function.remote(1) for obj in objs_ray]\nprint(ray.get(tokens))",
"[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\nCPU times: user 4 ms, sys: 8 ms, total: 12 ms\nWall time: 16.6 ms\n"
]
],
[
[
"Apalis:",
"_____no_output_____"
]
],
[
[
"%%time\ntokens = [obj.function(1) for obj in objs_apalis]\nprint(apalis.get(tokens))",
"[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\nCPU times: user 0 ns, sys: 0 ns, total: 0 ns\nWall time: 4.23 ms\n"
]
],
[
[
"All the functions that will be tested:",
"_____no_output_____"
]
],
[
[
"funcs = dict()\n\nfuncs[\"single\"] = lambda x: [obj.function(x) for obj in objs]\nfuncs[\"ray\"] = lambda x: ray.get([obj.function.remote(x) for obj in objs_ray])\nfuncs[\"apalis\"] = lambda x: apalis.get([obj.function(x) for obj in objs_apalis])",
"_____no_output_____"
]
],
[
[
"Testing:",
"_____no_output_____"
]
],
[
[
"repeat = 50\n\nNs = np.asarray(np.logspace(2, 7, 20), dtype=\"int\")\n\nts = defaultdict(lambda: np.zeros((len(Ns), repeat)))\n\nfor r in range(repeat):\n for i, n in enumerate(Ns):\n for j, (label, func) in enumerate(funcs.items()):\n number = max(1, 2 * 10 ** 4 // n)\n\n t = timeit.timeit(lambda : func(n), number=number)\n ts[label][i][r] = t / number\n\nts_std = defaultdict(lambda: np.zeros(len(Ns)))\nfor label in ts:\n ts_std[label] = ts[label].std(axis=1) / np.sqrt(repeat)\n ts[label] = ts[label].mean(axis=1)",
"_____no_output_____"
]
],
[
[
"Apalis vs Ray timings:",
"_____no_output_____"
]
],
[
[
"plt.ylabel(\"Time / [s]\")\nplt.xlabel(\"Operations\")\n\nfor label in [\"single\", \"ray\", \"apalis\"]:\n plt.errorbar(Ns, ts[label], yerr=ts_std[label], fmt=\"--o\", label=label)\n\nplt.xscale(\"log\")\nplt.yscale(\"log\")\n\nplt.xticks(fontsize=14)\nplt.legend()\nplt.show()",
"_____no_output_____"
]
],
[
[
"The improvement over single-threaded vs the time it takes the function to execute in single-threaded:",
"_____no_output_____"
]
],
[
[
"ax = plt.gca()\n\nplt.ylabel(r\"$\\frac{t_{multi}}{t_{single}}$\", size=25)\nplt.xlabel(\"$t_{single}$/[ms]\", size=20)\nplt.plot([],[])\n\nfor label in [\"ray\", \"apalis\"]:\n error = ts_std[\"single\"] / ts[label] + ts[\"single\"] * ts_std[label] / ts[label] ** 2\n plt.errorbar(ts[\"single\"] * 1000 / num_cpus, ts[\"single\"] / ts[label], yerr=error, fmt=\"--o\", label=label)\n\nplt.xscale(\"log\")\nplt.yscale(\"log\")\n\nplt.axhline(1, color=\"gray\")\nplt.axhline(num_cpus, color=\"gray\")\n\nax.yaxis.set_major_formatter(mticker.ScalarFormatter())\nax.xaxis.set_major_formatter(mticker.ScalarFormatter())\n\nax.set_xticks([0.01, 0.1, 1, 10, 100, 1000])\nax.set_xticklabels([\"0.01\", \"0.1\", \"1\", \"10\", \"200\", \"1000\"])\n\nax.set_yticks([0.1, 1, 16])\nax.set_yticklabels([\"0.1\", \"1\", \"16\"])\n\nplt.legend()\nplt.show()",
"_____no_output_____"
]
],
[
[
"Note that the longer the task takes the closer to 16 times improvement we come.",
"_____no_output_____"
],
[
"## Performance of the different apalis syntaxes",
"_____no_output_____"
],
[
"The GroupHandler handles several objects at a time. It should be used instead of Handler if the amount of objects that need to be parallelized is larger than the number of cores. Here we compare for the case where we want to parallelize 32 objects on 16 cores.",
"_____no_output_____"
]
],
[
[
"obj_number = 32\nobjs = [A() for _ in range(obj_number)]\n\nobjs_apalis = [apalis.Handler(a) for a in objs]\nobjs_apalis_G = apalis.GroupHandler(objs, threads=num_cpus)",
"_____no_output_____"
]
],
[
[
"Handler:",
"_____no_output_____"
]
],
[
[
"%%time\ntokens = [obj.function(1) for obj in objs_apalis]\nprint(apalis.get(tokens))",
"[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\nCPU times: user 4 ms, sys: 0 ns, total: 4 ms\nWall time: 8.91 ms\n"
]
],
[
[
"Group Handler:",
"_____no_output_____"
]
],
[
[
"%%time\ntasks = [obj.function(1) for obj in objs_apalis_G]\nprint(objs_apalis_G.multiple_run(tasks)())",
"[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\nCPU times: user 4 ms, sys: 0 ns, total: 4 ms\nWall time: 5.48 ms\n"
]
],
[
[
"Faster but more cumbersome syntax:",
"_____no_output_____"
]
],
[
[
"%%time\ntasks = [{\"i\": i, \"mode\": \"run\", \"name\": \"function\", \"args\": (1,)} for i in range(obj_number)]\nprint(objs_apalis_G.multiple_run(tasks)())",
"[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\nCPU times: user 4 ms, sys: 0 ns, total: 4 ms\nWall time: 2.2 ms\n"
]
],
[
[
"Runs the tasks and directly returns the outputs. Does not need to deal with the overhead of creating Tokens.",
"_____no_output_____"
]
],
[
[
"%%time\ntasks = [{\"i\": i, \"mode\": \"run\", \"name\": \"function\", \"args\": (1,)} for i in range(obj_number)]\nprint(objs_apalis_G.run(tasks))",
"[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]\nCPU times: user 0 ns, sys: 0 ns, total: 0 ns\nWall time: 1.84 ms\n"
]
],
[
[
"Timing for different parallelization solutions:",
"_____no_output_____"
]
],
[
[
"funcsA = dict()\n\nfuncsA[\"single\"] = lambda x: [obj.function(x) for obj in objs]\nfuncsA[\"handler\"] = lambda x: apalis.get([obj.function(x) for obj in objs_apalis])\nfuncsA[\"group handler\"] = lambda x: objs_apalis_G.multiple_run([obj.function(x) for obj in objs_apalis_G])()\nfuncsA[\"run\"] = lambda x: objs_apalis_G.run([{\"i\": i, \"mode\": \"run\", \"name\": \"function\", \"args\": (x,)} for i in range(obj_number)])\nfuncsA[\"fast syntax\"] = lambda x: objs_apalis_G.multiple_run([{\"i\": i, \"mode\": \"run\", \"name\": \"function\", \"args\": (x,)} for i in range(obj_number)])()",
"_____no_output_____"
]
],
[
[
"Testing:",
"_____no_output_____"
]
],
[
[
"repeat = 50\n\nNs = np.asarray(np.logspace(2, 7, 20), dtype=\"int\")\n\ntsA = defaultdict(lambda: np.zeros((len(Ns), repeat)))\n\nfor r in range(repeat):\n for i, n in enumerate(Ns):\n for j, (label, func) in enumerate(funcsA.items()):\n number = max(1, 2 * 10 ** 4 // n)\n\n t = timeit.timeit(lambda : func(n), number=number)\n tsA[label][i][r] = t / number\n\nts_stdA = defaultdict(lambda: np.zeros(len(Ns)))\nfor label in tsA:\n ts_stdA[label] = tsA[label].std(axis=1) / np.sqrt(repeat)\n tsA[label] = tsA[label].mean(axis=1)",
"_____no_output_____"
]
],
[
[
"Plotting:",
"_____no_output_____"
]
],
[
[
"ax = plt.gca()\n\nplt.ylabel(r\"$\\frac{t_{multi}}{t_{single}}$\", size=25)\nplt.xlabel(\"$t_{single}$/[ms]\", size=20)\n\nfor label in [\"handler\", \"group handler\"]:\n error = ts_stdA[\"single\"] / tsA[label] + tsA[\"single\"] * ts_stdA[label] / tsA[label] ** 2\n plt.errorbar(tsA[\"single\"] * 1000 / obj_number, tsA[\"single\"] / tsA[label], yerr=error, fmt=\"--o\", label=label)\n\nplt.xscale(\"log\")\nplt.yscale(\"log\")\n\nplt.axhline(1, color=\"gray\")\nplt.axhline(num_cpus, color=\"gray\") \n\nax.yaxis.set_major_formatter(mticker.ScalarFormatter())\nax.xaxis.set_major_formatter(mticker.ScalarFormatter())\n\nax.set_xticks([0.01, 0.1, 1, 10, 100, 1000])\nax.set_xticklabels([\"0.01\", \"0.1\", \"1\", \"10\", \"200\", \"1000\"])\n\nax.set_yticks([0.1, 1, 16])\nax.set_yticklabels([\"0.1\", \"1\", \"16\"])\n\nplt.legend()\nplt.show()",
"_____no_output_____"
]
],
[
[
"Percentage improvement of run vs the other methods:",
"_____no_output_____"
]
],
[
[
"plt.ylabel(r\"$\\frac{t_{run}}{t_{single}}$\", size=25)\nplt.xlabel(\"$t_{single}$/[ms]\", size=20)\n\nfor label in [\"group handler\", \"fast syntax\"]:\n error = ts_stdA[\"run\"] / tsA[label] + tsA[\"run\"] * ts_stdA[label] / tsA[label] ** 2\n plt.errorbar(tsA[\"single\"] * 1000 / obj_number, tsA[\"run\"] / tsA[label], yerr=error, fmt=\"--o\", label=label)\n\nplt.xscale(\"log\")\n \nplt.axhline(1, color=\"gray\")\nplt.legend()\nplt.show()",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4a0a001e60427464589f38da193aabd55dbd418a
| 124,662 |
ipynb
|
Jupyter Notebook
|
1_1_Image_Representation/1. Images as Numerical Data.ipynb
|
enlighter/CVND_Exercises
|
359c96e99646a00e5965f20ea019df227676ce85
|
[
"MIT"
] | null | null | null |
1_1_Image_Representation/1. Images as Numerical Data.ipynb
|
enlighter/CVND_Exercises
|
359c96e99646a00e5965f20ea019df227676ce85
|
[
"MIT"
] | null | null | null |
1_1_Image_Representation/1. Images as Numerical Data.ipynb
|
enlighter/CVND_Exercises
|
359c96e99646a00e5965f20ea019df227676ce85
|
[
"MIT"
] | null | null | null | 596.4689 | 116,836 | 0.94826 |
[
[
[
"# Images as Grids of Pixels",
"_____no_output_____"
],
[
"### Import resources",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.image as mpimg # for reading in images\n\nimport matplotlib.pyplot as plt\nimport cv2 # computer vision library\n\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"### Read in and display the image",
"_____no_output_____"
]
],
[
[
"# Read in the image\nimage = mpimg.imread('images/waymo_car.jpg')\n\n# Print out the image dimensions\nprint('Image dimensions:', image.shape)\n\n# Change from color to grayscale\ngray_image = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)\n\nplt.imshow(gray_image, cmap='gray')",
"Image dimensions: (427, 640, 3)\n"
],
[
"# Print specific grayscale pixel values\n# What is the pixel value at x = 400 and y = 300 (on the body of the car)?\n\nx = 400\ny = 300\n\nprint(gray_image[y,x])\n",
"159\n"
],
[
"#Find the maximum and minimum grayscale values in this image\n\nmax_val = np.amax(gray_image)\nmin_val = np.amin(gray_image)\n\nprint('Max: ', max_val)\nprint('Min: ', min_val)",
"Max: 255\nMin: 2\n"
],
[
"# Create a 5x5 image using just grayscale, numerical values\ntiny_image = np.array([[0, 20, 250, 20, 0],\n [20, 250, 0, 250, 3],\n [250, 180, 85, 40, 250],\n [240, 100, 50, 255, 10],\n [30, 0, 75, 190, 220]])\n\n# To show the pixel grid, use matshow\nplt.matshow(tiny_image, cmap='gray')\n\n## TODO: See if you can draw a tiny smiley face or something else!",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
4a0a1789dec0a71cc320276d58931171a41a5d8a
| 15,026 |
ipynb
|
Jupyter Notebook
|
notebooks/Dane County Visualizations.ipynb
|
dgfitch/wi-bail
|
749dd2caef75a3bdedc7d7792e3c4ee01707dfd4
|
[
"MIT"
] | null | null | null |
notebooks/Dane County Visualizations.ipynb
|
dgfitch/wi-bail
|
749dd2caef75a3bdedc7d7792e3c4ee01707dfd4
|
[
"MIT"
] | null | null | null |
notebooks/Dane County Visualizations.ipynb
|
dgfitch/wi-bail
|
749dd2caef75a3bdedc7d7792e3c4ee01707dfd4
|
[
"MIT"
] | 1 |
2021-02-06T19:03:04.000Z
|
2021-02-06T19:03:04.000Z
| 35.2723 | 140 | 0.448157 |
[
[
[
"import altair as alt\nimport pandas as pd\nfrom pony.orm import *\nfrom datetime import datetime\nimport random\nfrom bail.db import DB, Case, Inmate",
"_____no_output_____"
],
[
"# Connect to SQLite database using Pony\ndb = DB()",
"_____no_output_____"
],
[
"statuses = set(select(i.status for i in Inmate))\npretrial = set(s for s in statuses if (\"Prearraignment\" in s or \"Pretrial\" in s))\n \npretrial_inmates = list(select(i for i in Inmate if i.status in pretrial))\nother_inmates = list(select(i for i in Inmate if not i.status in pretrial))\n \nblack_pretrial_inmates = list(select(i for i in Inmate if i.race == \"African American\" and i.status in pretrial))\nwhite_pretrial_inmates = list(select(i for i in Inmate if i.race == \"Caucasian\" and i.status in pretrial))\nother_pretrial_inmates = list(select(i for i in Inmate if \n i.race != \"African American\" and \n i.race != \"Caucasian\" and\n i.status in pretrial))",
"_____no_output_____"
],
[
"def severity(inmate):\n if any(\"Felony\" in x.severity for c in inmate.cases for x in c.charges):\n return 'felony'\n else:\n return 'misdemeanor'\n \ndef bail_value(inmate):\n bails = [c.cash_bond for c in inmate.cases if c.cash_bond]\n if len(bails) == 0:\n bails = [0]\n return max(bails)\n\ndef build_data(inmates):\n return pd.DataFrame([{\n 'race': x.race,\n 'bail': bail_value(x),\n 'days_since_booking': x.days_since_booking(),\n 'most_recent_arrest': x.most_recent_arrest(),\n 'days_since_most_recent_arrest': x.days_since_most_recent_arrest(),\n 'severity': severity(x),\n 'url': x.url,\n } for x in inmates])\n\ndata = build_data(pretrial_inmates)",
"_____no_output_____"
],
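Before charting, a quick way to sanity-check the assembled DataFrame is to summarise it by race. This sketch assumes `data` from the cell above:

```python
# Distribution of cash bail and of days since booking, per race
print(data.groupby('race')['bail'].describe())
print(data.groupby('race')['days_since_booking'].describe())
```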
[
"days = data.query('severity==\"felony\"')['days_since_booking']\nprint(f\"Mean days in jail for a felony: {days.mean()} with standard deviation {days.std()} and maximum {days.max()}\")\n\ndays = data.query('severity==\"misdemeanor\"')['days_since_booking']\nprint(f\"Mean days in jail for a misdemeanor: {days.mean()} with standard deviation {days.std()} and maximum {days.max()}\")",
"Mean days in jail for a felony: 180.38636363636363 with standard deviation 193.9453196470258 and maximum 1237\nMean days in jail for a misdemeanor: 74.70149253731343 with standard deviation 120.43469435257926 and maximum 603\n"
],
[
"# Instead try most recent arrest date...\ndays = data.query('severity==\"felony\"')['days_since_most_recent_arrest']\nprint(f\"Mean days since arrest for a felony: {days.mean()} with standard deviation {days.std()} and maximum {days.max()}\")\n\ndays = data.query('severity==\"misdemeanor\"')['days_since_most_recent_arrest']\nprint(f\"Mean days since arrest for a misdemeanor: {days.mean()} with standard deviation {days.std()} and maximum {days.max()}\")",
"Mean days since arrest for a felony: 148.4715909090909 with standard deviation 174.04340768825534 and maximum 1237\nMean days since arrest for a misdemeanor: 58.865671641791046 with standard deviation 81.57352057042888 and maximum 419\n"
],
[
"data.query('severity==\"misdemeanor\"').sort_values('days_since_most_recent_arrest')",
"_____no_output_____"
],
[
"# Prefab stable colors for charts\ndomain = ['African American', 'American Indian or Alaskan Native', 'Asian or Pacific Islander', 'Caucasian', 'Hispanic', 'Unknown']\nrange_ = ['red', 'green', 'blue', 'orange', 'purple', 'grey']\nrace = alt.Color('race:N', scale=alt.Scale(domain=domain, range=range_))\n\n# Facet across both felonies and misdemeanors\nalt.Chart(data).mark_circle(size=40, opacity=0.85, clip=True).encode(\n y=alt.Y('days_in_prison:Q',\n scale=alt.Scale(domain=(0, 365))\n ),\n x=alt.X('bail:Q',\n scale=alt.Scale(domain=(0, 1000))\n ),\n color=race,\n).facet(\n row='severity:N'\n)",
"_____no_output_____"
],
[
"# Misdemeanors only with trend lines\nmisdemeanors = data.query('severity==\"misdemeanor\"')\n\nbase = alt.Chart(misdemeanors).mark_circle(size=30, clip=True).encode(\n y=alt.Y('days_in_prison:Q',\n scale=alt.Scale(domain=(0, 365))\n ),\n x=alt.X('bail:Q',\n scale=alt.Scale(domain=(0, 800))\n ),\n color=race,\n)\n\n\npolynomial_fit = [\n base.transform_regression(\n \"x\", \"y\", method=\"poly\", order=1, as_=[\"x\", str(thing)]\n )\n .mark_line()\n .transform_fold([str(thing)], as_=[\"race\", \"y\"])\n .encode(race)\n for thing in ['African American', 'Caucasian']\n]\n\nalt.layer(base, *polynomial_fit)\nbase",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4a0a248ca04264da9412ca262b4901f6333cd71d
| 539,663 |
ipynb
|
Jupyter Notebook
|
modelling/random_forest_with_unknow.ipynb
|
haonan99/Battery-Optimisation
|
96eb631e9fff7759ce0aa300bd225af7304c8404
|
[
"MIT"
] | 2 |
2021-12-03T17:55:39.000Z
|
2022-03-11T21:28:02.000Z
|
modelling/random_forest_with_unknow.ipynb
|
greysonchung/Battery-Optimisation
|
758bcf071d34802f7615706554ffacc8c3fff94e
|
[
"MIT"
] | null | null | null |
modelling/random_forest_with_unknow.ipynb
|
greysonchung/Battery-Optimisation
|
758bcf071d34802f7615706554ffacc8c3fff94e
|
[
"MIT"
] | null | null | null | 387.132712 | 156,986 | 0.922198 |
[
[
[
"# Notes:\nThis notebook is used to predict demand of Victoria state (without using any future dataset)",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\nfrom tsa_utils import *\nfrom statsmodels.tsa.stattools import pacf\nfrom sklearn.ensemble import RandomForestRegressor\n\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\n# show float in two decimal form\nplt.style.use('ggplot')\npd.set_option('display.float_format',lambda x : '%.2f' % x)",
"_____no_output_____"
]
],
[
[
"## 1) Load dataset",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv(\"../../data/all.csv\").reset_index(drop=True)\ndf.head(3)",
"_____no_output_____"
],
[
"df = df[df.time <= '2021-08-11 23:30:00']\ndf.head(3)",
"_____no_output_____"
]
],
[
[
"## 3) Feature Engineering",
"_____no_output_____"
]
],
[
[
"drop_columns = ['demand_nsw',\n 'demand_sa',\n 'demand_tas',\n 'spot_price_nsw',\n 'spot_price_sa',\n 'spot_price_tas',\n 'spot_price_vic', \n 'inter_gen_nsw', 'inter_gen_sa', 'inter_gen_tas', 'inter_gen_vic',]\nvic = df.drop(columns=drop_columns)\nvic.columns = ['time', 'demand_vic', 'period']\nvic.head(3)",
"_____no_output_____"
],
[
"# Feature engineering on datetime\nvic['time'] = vic.time.astype('datetime64[ns]')\nvic['month'] = vic.time.dt.month\nvic['day'] = vic.time.dt.day\nvic['day_of_year'] = vic.time.dt.dayofyear\nvic['year'] = vic.time.dt.year\nvic['weekday'] = vic['time'].apply(lambda x: x.weekday())\nvic['week'] = vic.time.dt.week\nvic['hour'] = vic.time.dt.hour\n\nvic.loc[vic['month'].isin([12,1,2]), 'season'] = 1\nvic.loc[vic['month'].isin([3,4,5]), 'season'] = 2\nvic.loc[vic['month'].isin([6,7,8]), 'season'] = 3\nvic.loc[vic['month'].isin([9, 10, 11]), 'season'] = 4\nvic.tail(3)",
"_____no_output_____"
],
[
"# Add fourier terms\nfourier_terms = add_fourier_terms(vic.time, year_k=3, week_k=3, day_k=3)\nvic = pd.concat([vic, fourier_terms], 1).drop(columns=['datetime'])\nvic.head(3)",
"_____no_output_____"
],
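`add_fourier_terms` comes from the project's local `tsa_utils` module, which is not shown here. Below is a minimal sketch of what such a helper might look like; the function name, the `year_k`/`week_k`/`day_k` arguments and the returned `datetime` column are taken from how it is called above, while the column naming (`year_sin1`, `week_cos2`, `hour_sin3`, ...) is an assumption based on the feature names that appear later in this notebook.

```python
import numpy as np
import pandas as pd

def add_fourier_terms(datetimes, year_k=3, week_k=3, day_k=3):
    """Hypothetical re-implementation: sin/cos pairs for yearly, weekly and daily cycles."""
    out = pd.DataFrame({'datetime': pd.to_datetime(datetimes).reset_index(drop=True)})
    dt = out['datetime'].dt
    cycles = {
        'year': (dt.dayofyear / 365.25, year_k),
        'week': (dt.dayofweek / 7.0, week_k),
        'hour': ((dt.hour + dt.minute / 60.0) / 24.0, day_k),
    }
    for name, (frac, k_max) in cycles.items():
        for k in range(1, k_max + 1):
            out[f'{name}_sin{k}'] = np.sin(2 * np.pi * k * frac)
            out[f'{name}_cos{k}'] = np.cos(2 * np.pi * k * frac)
    return out
```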
[
"# Plot autocorrelation\nnlags=144\nplot_tsc(vic.demand_vic, lags=nlags)",
"_____no_output_____"
],
[
"# Add nlag features (choosing the first 10 highest autocorrelation nlag)\ndict_pacf = dict()\nlist_pacf = pacf(df['demand_vic'], nlags=nlags)\nfor nlag in range(nlags):\n if nlag >= 48:\n dict_pacf[nlag] = list_pacf[nlag]\ndict_pacf = {k: v for k, v in sorted(dict_pacf.items(), key=lambda item: abs(item[1]), reverse=True)}\n\n# 10 highest pacf nlag\nmax_pacf_nlags = list(dict_pacf.keys())[:5]\nfor nlag in max_pacf_nlags:\n vic['n_lag'+str(nlag)] = df.reset_index()['demand_vic'].shift(nlag)",
"_____no_output_____"
],
[
"vic_train = vic[vic[\"time\"] <= \"2020-12-31 23:30:00\"]\nvic_cv = vic[(vic['time'] >= \"2021-01-01 00:00:00\") & (vic['time'] <= \"2021-06-30 23:30:00\")].reset_index(drop=True)\nvic_test = vic[(vic['time'] >= \"2021-07-01 00:00:00\") & (vic['time'] <= \"2021-08-11 23:30:00\")].reset_index(drop=True)",
"_____no_output_____"
],
[
"X_train = vic_train.drop(columns=['demand_vic', 'time'])[nlags:]\ny_train = vic_train.demand_vic[nlags:]\nX_cv = vic_cv.drop(columns=['demand_vic', 'time'])\ny_cv = vic_cv.demand_vic\nX_test = vic_test.drop(columns=['demand_vic', 'time'])\ny_test = vic_test.demand_vic\nX_train.head(3)",
"_____no_output_____"
],
[
"X_train.columns\n",
"_____no_output_____"
]
],
[
[
"## 4) First look at Random Forest Regressor",
"_____no_output_____"
]
],
[
[
"rfr_clf = RandomForestRegressor(n_estimators=100)\nrfr_clf = rfr_clf.fit(X_train, y_train)",
"_____no_output_____"
],
[
"print(\"Random Forest Regressor accuracy: \")\nrfr_result = rfr_clf.predict(X_test)\nrfr_residuals = y_test - rfr_result\nprint('Mean Absolute Percent Error:', round(np.mean(abs(rfr_residuals/y_test)), 4))\nprint('Root Mean Squared Error:', np.sqrt(np.mean(rfr_residuals**2)))\n\nplt.figure(figsize=(20, 4))\nplt.plot(y_test[:200], label='true value')\nplt.plot(rfr_result[:200], label='predict')\nplt.legend()\nplt.show()",
"Random Forest Regressor accuracy: \nMean Absolute Percent Error: 0.0434\nRoot Mean Squared Error: 347.7397461845399\n"
],
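The error metrics are computed by hand above; recent scikit-learn versions (0.24+) also ship `mean_absolute_percentage_error` and `mean_squared_error`, which can replace the manual formulas. A sketch, assuming `y_test` and `rfr_result` from the cell above:

```python
import numpy as np
from sklearn.metrics import mean_absolute_percentage_error, mean_squared_error

mape = mean_absolute_percentage_error(y_test, rfr_result)
rmse = np.sqrt(mean_squared_error(y_test, rfr_result))
print(f'MAPE: {mape:.4f}  RMSE: {rmse:.2f}')
```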
[
"plt.figure(figsize=(20, 4))\nplt.plot(rfr_residuals)\nplt.show()",
"_____no_output_____"
],
[
"# Get numerical feature importances\nimportances = list(rfr_clf.feature_importances_)\n# List of tuples with variable and importance\nfeature_importances = [(feature, round(importance, 2)) for feature, importance in zip(X_train.columns, importances)]\n# Sort the feature importances by most important first\nfeature_importances = sorted(feature_importances, key = lambda x: x[1], reverse = True)\n# Print out the feature and importances \n[print('Variable: {:20} Importance: {}'.format(*pair)) for pair in feature_importances]",
"Variable: n_lag48 Importance: 0.62\nVariable: week_sin1 Importance: 0.04\nVariable: period Importance: 0.03\nVariable: weekday Importance: 0.03\nVariable: year_cos1 Importance: 0.03\nVariable: day_of_year Importance: 0.02\nVariable: n_lag97 Importance: 0.02\nVariable: day Importance: 0.01\nVariable: year Importance: 0.01\nVariable: week Importance: 0.01\nVariable: hour Importance: 0.01\nVariable: year_sin1 Importance: 0.01\nVariable: year_sin2 Importance: 0.01\nVariable: year_cos2 Importance: 0.01\nVariable: year_sin3 Importance: 0.01\nVariable: year_cos3 Importance: 0.01\nVariable: week_cos1 Importance: 0.01\nVariable: week_cos2 Importance: 0.01\nVariable: week_cos3 Importance: 0.01\nVariable: hour_sin1 Importance: 0.01\nVariable: hour_cos1 Importance: 0.01\nVariable: hour_sin2 Importance: 0.01\nVariable: n_lag49 Importance: 0.01\nVariable: n_lag50 Importance: 0.01\nVariable: n_lag98 Importance: 0.01\nVariable: month Importance: 0.0\nVariable: season Importance: 0.0\nVariable: week_sin2 Importance: 0.0\nVariable: week_sin3 Importance: 0.0\nVariable: hour_cos2 Importance: 0.0\nVariable: hour_sin3 Importance: 0.0\nVariable: hour_cos3 Importance: 0.0\n"
]
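The printed list above is easier to scan as a chart. A short sketch reusing the `importances` list and `X_train` columns from the cell above:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Sort the importances and draw a horizontal bar chart
imp = pd.Series(importances, index=X_train.columns).sort_values()
imp.plot(kind='barh', figsize=(8, 10))
plt.xlabel('importance')
plt.show()
```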
],
[
[
"## 6) Predict CV and Test period demand",
"_____no_output_____"
],
[
"### 6.1) Predict CV period demand",
"_____no_output_____"
]
],
[
[
"X_train = vic_train.drop(columns=['demand_vic', 'time'])[nlags:]\ny_train = vic_train.demand_vic[nlags:]\nX_cv = vic_cv.drop(columns=['demand_vic', 'time'])\ny_cv = vic_cv.demand_vic\n\nrfr_clf = RandomForestRegressor(n_estimators=100)\nrfr_clf = rfr_clf.fit(X_train, y_train)",
"_____no_output_____"
],
[
"print(\"Random Forest Regressor accuracy: \")\nrfr_result = rfr_clf.predict(X_cv)\nrfr_residuals = y_cv - rfr_result\nprint('Mean Absolute Percent Error:', round(np.mean(abs(rfr_residuals/y_test)), 4))\nprint('Root Mean Squared Error:', np.sqrt(np.mean(rfr_residuals**2)))\n\nplt.figure(figsize=(20, 4))\nplt.plot(y_cv, label='true value')\nplt.plot(rfr_result, label='predict')\nplt.legend()\nplt.show()",
"Random Forest Regressor accuracy: \nMean Absolute Percent Error: 0.0796\nRoot Mean Squared Error: 447.26252159890885\n"
],
[
"vic_demand_cv_rfr = pd.DataFrame({'time': vic_cv.time, 'demand_vic': vic_cv.demand_vic})\nvic_demand_cv_rfr['predicted_demand_vic'] = rfr_result\nvic_demand_cv_rfr.tail(3)",
"_____no_output_____"
],
[
"vic_demand_cv_rfr.to_csv('predictions/vic_demand_unknow_cv_rfr.csv', index=False, header=True)",
"_____no_output_____"
]
],
[
[
"### 6.2) Predict Test period demand",
"_____no_output_____"
]
],
[
[
"idx_test_start = 61296 # index of df(full) where test start\nX_train = vic.drop(columns=['demand_vic', 'time'])[nlags:idx_test_start]\ny_train = vic.demand_vic[nlags:idx_test_start]\nX_test = vic_test.drop(columns=['demand_vic', 'time'])\ny_test = vic_test.demand_vic\n\nrfr_clf = RandomForestRegressor(n_estimators=100, random_state=1)\nrfr_clf = rfr_clf.fit(X_train, y_train)",
"_____no_output_____"
],
[
"print(\"Random Forest Regressor accuracy: \")\nrfr_result = rfr_clf.predict(X_test)\nrfr_residuals = y_test - rfr_result\nprint('Mean Absolute Percent Error:', round(np.mean(abs(rfr_residuals/y_test)), 4))\nprint('Root Mean Squared Error:', np.sqrt(np.mean(rfr_residuals**2)))\n\nplt.figure(figsize=(20, 4))\nplt.plot(y_test, label='true value')\nplt.plot(rfr_result, label='predict')\nplt.legend()\nplt.show()",
"Random Forest Regressor accuracy: \nMean Absolute Percent Error: 0.0426\nRoot Mean Squared Error: 337.34638457101516\n"
],
[
"vic_demand_test_rfr = pd.DataFrame({'time': vic_test.time, 'demand_vic': vic_test.demand_vic})\nvic_demand_test_rfr['predicted_demand_vic'] = rfr_result\nvic_demand_test_rfr.tail(3)",
"_____no_output_____"
],
[
"vic_demand_test_rfr.to_csv('predictions/vic_demand_unknow_test_rfr.csv', index=False, header=True)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
4a0a340285891fd4602320187d3d14c55e83c990
| 21,456 |
ipynb
|
Jupyter Notebook
|
docs/tutorials/02_converters_for_quadratic_programs.ipynb
|
garrison/qiskit-optimization
|
e330d338549e3eb6c47fb8ee6306dc21ca56e0be
|
[
"Apache-2.0"
] | 1 |
2021-11-18T07:22:57.000Z
|
2021-11-18T07:22:57.000Z
|
docs/tutorials/02_converters_for_quadratic_programs.ipynb
|
garrison/qiskit-optimization
|
e330d338549e3eb6c47fb8ee6306dc21ca56e0be
|
[
"Apache-2.0"
] | null | null | null |
docs/tutorials/02_converters_for_quadratic_programs.ipynb
|
garrison/qiskit-optimization
|
e330d338549e3eb6c47fb8ee6306dc21ca56e0be
|
[
"Apache-2.0"
] | null | null | null | 37.314783 | 726 | 0.570004 |
[
[
[
"# Converters for Quadratic Programs",
"_____no_output_____"
],
[
"Optimization problems in Qiskit's optimization module are represented with the `QuadraticProgram` class, which is generic and powerful representation for optimization problems. In general, optimization algorithms are defined for a certain formulation of a quadratic program and we need to convert our problem to the right type.\n\nFor instance, Qiskit provides several optimization algorithms that can handle Quadratic Unconstrained Binary Optimization (QUBO) problems. These are mapped to Ising Hamiltonians, for which Qiskit uses the `qiskit.aqua.operators` module, and then their ground state is approximated. For this optimization commonly known algorithms such as VQE or QAOA can be used as underlying routine. See the following tutorial about the [Minimum Eigen Optimizer](./03_minimum_eigen_optimizer.ipynb) for more detail. Note that also other algorithms exist that work differently, such as the `GroverOptimizer`.\n\nTo map a problem to the correct input format, the optimization module of Qiskit offers a variety of converters. In this tutorial we're providing an overview on this functionality. Currently, Qiskit contains the following converters.\n\n- `InequalityToEquality`: converts inequality constraints into equality constraints with additional slack variables.\n- `IntegerToBinary`: converts integer variables into binary variables and corresponding coefficients. \n- `LinearEqualityToPenalty`: convert equality constraints into additional terms of the object function.\n- `QuadraticProgramToQubo`: a wrapper for `InequalityToEquality`, `IntegerToBinary`, and `LinearEqualityToPenalty` for convenience.",
"_____no_output_____"
],
[
"## InequalityToEquality\n`InequalityToEqualityConverter` converts inequality constraints into equality constraints with additional slack variables to remove inequality constraints from `QuadraticProgram`. The upper bounds and the lower bounds of slack variables will be calculated from the difference between the left sides and the right sides of constraints. Signs of slack variables depend on symbols in constraints such as $\\leq$ and $\\geq$.\n\nThe following is an example of a maximization problem with two inequality constraints. Variable $x$ and $y$ are binary variables and variable $z$ is an integer variable.\n\n\\begin{aligned}\n & \\text{maximize}\n & 2x + y + z\\\\\n & \\text{subject to:}\n & x+y+z \\leq 5.5\\\\\n & & x+y+z \\geq 2.5\\\\\n & & x, y \\in \\{0,1\\}\\\\\n & & z \\in \\{0,1,2,3,4,5,6,7\\} \\\\\n\\end{aligned}\n\nWith `QuadraticProgram`, an optimization model of the problem is written as follows.",
"_____no_output_____"
]
],
[
[
"from qiskit_optimization import QuadraticProgram",
"_____no_output_____"
],
[
"qp = QuadraticProgram()\nqp.binary_var('x')\nqp.binary_var('y')\nqp.integer_var(lowerbound=0, upperbound=7, name='z')\n\nqp.maximize(linear={'x': 2, 'y': 1, 'z': 1})\nqp.linear_constraint(linear={'x': 1, 'y': 1, 'z': 1}, sense='LE', rhs=5.5,name='xyz_leq')\nqp.linear_constraint(linear={'x': 1, 'y': 1, 'z': 1}, sense='GE', rhs=2.5,name='xyz_geq')\nprint(qp.export_as_lp_string())",
"\\ This file has been generated by DOcplex\n\\ ENCODING=ISO-8859-1\n\\Problem name: CPLEX\n\nMaximize\n obj: 2 x + y + z\nSubject To\n xyz_leq: x + y + z <= 5.500000000000\n xyz_geq: x + y + z >= 2.500000000000\n\nBounds\n 0 <= x <= 1\n 0 <= y <= 1\n z <= 7\n\nBinaries\n x y\n\nGenerals\n z\nEnd\n\n"
]
],
[
[
"Call `convert` method of `InequalityToEquality` to convert.",
"_____no_output_____"
]
],
[
[
"from qiskit_optimization.converters import InequalityToEquality",
"_____no_output_____"
],
[
"ineq2eq = InequalityToEquality()\nqp_eq = ineq2eq.convert(qp)\nprint(qp_eq.export_as_lp_string())",
"\\ This file has been generated by DOcplex\n\\ ENCODING=ISO-8859-1\n\\Problem name: CPLEX\n\nMaximize\n obj: 2 x + y + z\nSubject To\n xyz_leq: x + y + z + xyz_leq@int_slack = 5\n xyz_geq: x + y + z - xyz_geq@int_slack = 3\n\nBounds\n 0 <= x <= 1\n 0 <= y <= 1\n z <= 7\n xyz_leq@int_slack <= 5\n xyz_geq@int_slack <= 6\n\nBinaries\n x y\n\nGenerals\n z xyz_leq@int_slack xyz_geq@int_slack\nEnd\n\n"
]
],
[
[
"After converting, the formulation of the problem looks like as the follows. As we can see, the inequality constraints are replaced with equality constraints with additional integer slack variables, $xyz\\_leg\\text{@}int\\_slack$ and $xyz\\_geq\\text{@}int\\_slack$. \n\nLet us explain how the conversion works. For example, the lower bound of the left side of the first constraint is $0$ which is the case of $x=0$, $y=0$, and $z=0$. Thus, the upperbound of the additional integer variable must be $5$ to be able to satisfy even the case of $x=0$, $y=0$, and $z=0$. Note that we cut off the part after the decimal point in the converted formulation since the left side of the first constraint in the original formulation can be only integer values. For the second constraint, basically we apply the same approach. However, the symbol in the second constraint is $\\geq$, so we add minus before $xyz\\_geq\\text{@}int\\_slack$ to be able to satisfy even the case of $x=1, y=1$, and $z=7$.\n\n\\begin{aligned}\n & \\text{maximize}\n & 2x + y + z\\\\\n & \\text{subject to:}\n & x+y+z+ xyz\\_leg\\text{@}int\\_slack= 5\\\\\n & & x+y+z+xyz\\_geq\\text{@}int\\_slack= 3\\\\\n & & x, y \\in \\{0,1\\}\\\\\n & & z \\in \\{0,1,2,3,4,5,6,7\\} \\\\\n & & xyz\\_leg\\text{@}int\\_slack \\in \\{0,1,2,3,4,5\\} \\\\\n & & xyz\\_geq\\text{@}int\\_slack \\in \\{0,1,2,3,4,5,6\\} \\\\\n\\end{aligned}\n\n\n\n",
"_____no_output_____"
],
[
"## IntegerToBinary",
"_____no_output_____"
],
[
"`IntegerToBinary` converts integer variables into binary variables and coefficients to remove integer variables from `QuadraticProgram`. For converting, bounded-coefficient encoding proposed in [arxiv:1706.01945](https://arxiv.org/abs/1706.01945) (Eq. (5)) is used. For more detail of the encoding method, please see the paper.\n\nWe use the output of `InequalityToEquality` as starting point. Variable $x$ and $y$ are binary variables, while the variable $z$ and the slack variables $xyz\\_leq\\text{@}int\\_slack$ and $xyz\\_geq\\text{@}int\\_slack$ are integer variables. We print the problem again for reference.",
"_____no_output_____"
]
],
[
[
"print(qp_eq.export_as_lp_string())",
"\\ This file has been generated by DOcplex\n\\ ENCODING=ISO-8859-1\n\\Problem name: CPLEX\n\nMaximize\n obj: 2 x + y + z\nSubject To\n xyz_leq: x + y + z + xyz_leq@int_slack = 5\n xyz_geq: x + y + z - xyz_geq@int_slack = 3\n\nBounds\n 0 <= x <= 1\n 0 <= y <= 1\n z <= 7\n xyz_leq@int_slack <= 5\n xyz_geq@int_slack <= 6\n\nBinaries\n x y\n\nGenerals\n z xyz_leq@int_slack xyz_geq@int_slack\nEnd\n\n"
]
],
[
[
"Call `convert` method of `IntegerToBinary` to convert.",
"_____no_output_____"
]
],
[
[
"from qiskit_optimization.converters import IntegerToBinary",
"_____no_output_____"
],
[
"int2bin = IntegerToBinary()\nqp_eq_bin = int2bin.convert(qp_eq)\nprint(qp_eq_bin.export_as_lp_string())",
"\\ This file has been generated by DOcplex\n\\ ENCODING=ISO-8859-1\n\\Problem name: CPLEX\n\nMaximize\n obj: 2 x + y + z@0 + 2 z@1 + 4 z@2\nSubject To\n xyz_leq: x + y + z@0 + 2 z@1 + 4 z@2 + xyz_leq@int_slack@0\n + 2 xyz_leq@int_slack@1 + 2 xyz_leq@int_slack@2 = 5\n xyz_geq: x + y + z@0 + 2 z@1 + 4 z@2 - xyz_geq@int_slack@0\n - 2 xyz_geq@int_slack@1 - 3 xyz_geq@int_slack@2 = 3\n\nBounds\n 0 <= x <= 1\n 0 <= y <= 1\n 0 <= z@0 <= 1\n 0 <= z@1 <= 1\n 0 <= z@2 <= 1\n 0 <= xyz_leq@int_slack@0 <= 1\n 0 <= xyz_leq@int_slack@1 <= 1\n 0 <= xyz_leq@int_slack@2 <= 1\n 0 <= xyz_geq@int_slack@0 <= 1\n 0 <= xyz_geq@int_slack@1 <= 1\n 0 <= xyz_geq@int_slack@2 <= 1\n\nBinaries\n x y z@0 z@1 z@2 xyz_leq@int_slack@0 xyz_leq@int_slack@1 xyz_leq@int_slack@2\n xyz_geq@int_slack@0 xyz_geq@int_slack@1 xyz_geq@int_slack@2\nEnd\n\n"
]
],
[
[
"After converting, integer variables $z$ is replaced with three binary variables $z\\text{@}0$, $z\\text{@}1$ and $z\\text{@}2$ with coefficients 1, 2 and 4, respectively as the above. \nThe slack variables $xyz\\_leq\\text{@}int\\_slack$ and $xyz\\_geq\\text{@}int\\_slack$ that were introduced by `InequalityToEquality` are also both replaced with three binary variables with coefficients 1, 2, 2, and 1, 2, 3, respectively.\n\nNote: Essentially the coefficients mean that the sum of these binary variables with coefficients can be the sum of a subset of $\\{1, 2, 4\\}$, $\\{1, 2, 2\\}$, and $\\{1, 2, 3\\}$ to represent that acceptable values $\\{0, \\ldots, 7\\}$, $\\{0, \\ldots, 5\\}$, and $\\{0, \\ldots, 6\\}$, which respects the lower bound and the upper bound of original integer variables correctly.\n\n`IntegerToBinary` also provides `interpret` method that is the functionality to translate a given binary result back to the original integer representation.",
"_____no_output_____"
],
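A small sketch of how `interpret` can be used. The binary vector below is only a placeholder; in practice you would pass the solution an optimizer returns for the converted, all-binary problem (which here has 11 variables: x, y, the three z bits and the six slack bits).

```python
import numpy as np

# Hypothetical solution vector of the converted problem
binary_solution = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0])

# Map it back to the variables of the problem *before* IntegerToBinary,
# i.e. x, y, z and the two integer slack variables
print(int2bin.interpret(binary_solution))
```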
[
"## LinearEqualityToPenalty",
"_____no_output_____"
],
[
"`LinearEqualityToPenalty` converts linear equality constraints into additional quadratic penalty terms of the objective function to map `QuadraticProgram` to an unconstrained form.\nAn input to the converter has to be a `QuadraticProgram` with only linear equality constraints. Those equality constraints, e.g. $\\sum_i a_i x_i = b$ where $a_i$ and $b$ are numbers and $x_i$ is a variable, will be added to the objective function in the form of $M(b - \\sum_i a_i x_i)^2$ where $M$ is a large number as penalty factor. \nBy default $M= 1e5$. The sign of the term depends on whether the problem type is a maximization or minimization.\n\nWe use the output of `IntegerToBinary` as starting point, where all variables are binary variables and all inequality constraints have been mapped to equality constraints. \nWe print the problem again for reference.",
"_____no_output_____"
]
],
[
[
"print(qp_eq_bin.export_as_lp_string())",
"\\ This file has been generated by DOcplex\n\\ ENCODING=ISO-8859-1\n\\Problem name: CPLEX\n\nMaximize\n obj: 2 x + y + z@0 + 2 z@1 + 4 z@2\nSubject To\n xyz_leq: x + y + z@0 + 2 z@1 + 4 z@2 + xyz_leq@int_slack@0\n + 2 xyz_leq@int_slack@1 + 2 xyz_leq@int_slack@2 = 5\n xyz_geq: x + y + z@0 + 2 z@1 + 4 z@2 - xyz_geq@int_slack@0\n - 2 xyz_geq@int_slack@1 - 3 xyz_geq@int_slack@2 = 3\n\nBounds\n 0 <= x <= 1\n 0 <= y <= 1\n 0 <= z@0 <= 1\n 0 <= z@1 <= 1\n 0 <= z@2 <= 1\n 0 <= xyz_leq@int_slack@0 <= 1\n 0 <= xyz_leq@int_slack@1 <= 1\n 0 <= xyz_leq@int_slack@2 <= 1\n 0 <= xyz_geq@int_slack@0 <= 1\n 0 <= xyz_geq@int_slack@1 <= 1\n 0 <= xyz_geq@int_slack@2 <= 1\n\nBinaries\n x y z@0 z@1 z@2 xyz_leq@int_slack@0 xyz_leq@int_slack@1 xyz_leq@int_slack@2\n xyz_geq@int_slack@0 xyz_geq@int_slack@1 xyz_geq@int_slack@2\nEnd\n\n"
]
],
[
[
"Call `convert` method of `LinearEqualityToPenalty` to convert.",
"_____no_output_____"
]
],
[
[
"from qiskit_optimization.converters import LinearEqualityToPenalty",
"_____no_output_____"
],
[
"lineq2penalty = LinearEqualityToPenalty()\nqubo = lineq2penalty.convert(qp_eq_bin)\nprint(qubo.export_as_lp_string())",
"\\ This file has been generated by DOcplex\n\\ ENCODING=ISO-8859-1\n\\Problem name: CPLEX\n\nMaximize\n obj: 178 x + 177 y + 177 z@0 + 354 z@1 + 708 z@2 + 110 xyz_leq@int_slack@0\n + 220 xyz_leq@int_slack@1 + 220 xyz_leq@int_slack@2\n - 66 xyz_geq@int_slack@0 - 132 xyz_geq@int_slack@1\n - 198 xyz_geq@int_slack@2 + [ - 44 x^2 - 88 x*y - 88 x*z@0 - 176 x*z@1\n - 352 x*z@2 - 44 x*xyz_leq@int_slack@0 - 88 x*xyz_leq@int_slack@1\n - 88 x*xyz_leq@int_slack@2 + 44 x*xyz_geq@int_slack@0\n + 88 x*xyz_geq@int_slack@1 + 132 x*xyz_geq@int_slack@2 - 44 y^2 - 88 y*z@0\n - 176 y*z@1 - 352 y*z@2 - 44 y*xyz_leq@int_slack@0\n - 88 y*xyz_leq@int_slack@1 - 88 y*xyz_leq@int_slack@2\n + 44 y*xyz_geq@int_slack@0 + 88 y*xyz_geq@int_slack@1\n + 132 y*xyz_geq@int_slack@2 - 44 z@0^2 - 176 z@0*z@1 - 352 z@0*z@2\n - 44 z@0*xyz_leq@int_slack@0 - 88 z@0*xyz_leq@int_slack@1\n - 88 z@0*xyz_leq@int_slack@2 + 44 z@0*xyz_geq@int_slack@0\n + 88 z@0*xyz_geq@int_slack@1 + 132 z@0*xyz_geq@int_slack@2 - 176 z@1^2\n - 704 z@1*z@2 - 88 z@1*xyz_leq@int_slack@0 - 176 z@1*xyz_leq@int_slack@1\n - 176 z@1*xyz_leq@int_slack@2 + 88 z@1*xyz_geq@int_slack@0\n + 176 z@1*xyz_geq@int_slack@1 + 264 z@1*xyz_geq@int_slack@2 - 704 z@2^2\n - 176 z@2*xyz_leq@int_slack@0 - 352 z@2*xyz_leq@int_slack@1\n - 352 z@2*xyz_leq@int_slack@2 + 176 z@2*xyz_geq@int_slack@0\n + 352 z@2*xyz_geq@int_slack@1 + 528 z@2*xyz_geq@int_slack@2\n - 22 xyz_leq@int_slack@0^2 - 88 xyz_leq@int_slack@0*xyz_leq@int_slack@1\n - 88 xyz_leq@int_slack@0*xyz_leq@int_slack@2 - 88 xyz_leq@int_slack@1^2\n - 176 xyz_leq@int_slack@1*xyz_leq@int_slack@2 - 88 xyz_leq@int_slack@2^2\n - 22 xyz_geq@int_slack@0^2 - 88 xyz_geq@int_slack@0*xyz_geq@int_slack@1\n - 132 xyz_geq@int_slack@0*xyz_geq@int_slack@2 - 88 xyz_geq@int_slack@1^2\n - 264 xyz_geq@int_slack@1*xyz_geq@int_slack@2 - 198 xyz_geq@int_slack@2^2\n ]/2 -374\nSubject To\n\nBounds\n 0 <= x <= 1\n 0 <= y <= 1\n 0 <= z@0 <= 1\n 0 <= z@1 <= 1\n 0 <= z@2 <= 1\n 0 <= xyz_leq@int_slack@0 <= 1\n 0 <= xyz_leq@int_slack@1 <= 1\n 0 <= xyz_leq@int_slack@2 <= 1\n 0 <= xyz_geq@int_slack@0 <= 1\n 0 <= xyz_geq@int_slack@1 <= 1\n 0 <= xyz_geq@int_slack@2 <= 1\n\nBinaries\n x y z@0 z@1 z@2 xyz_leq@int_slack@0 xyz_leq@int_slack@1 xyz_leq@int_slack@2\n xyz_geq@int_slack@0 xyz_geq@int_slack@1 xyz_geq@int_slack@2\nEnd\n\n"
]
],
[
[
"After converting, the equality constraints are added to the objective function as additional terms with the default penalty factor $M=1e5$.\nThe resulting problem is now a QUBO and compatible with many quantum optimization algorithms such as VQE, QAOA and so on.",
"_____no_output_____"
],
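If you prefer to control the penalty factor yourself rather than let the converter derive one, it can be passed to the constructor. A brief sketch; the value 1e5 is only an example:

```python
from qiskit_optimization.converters import LinearEqualityToPenalty

# Explicit penalty factor instead of the automatically derived one
lineq2penalty_fixed = LinearEqualityToPenalty(penalty=1e5)
qubo_fixed_penalty = lineq2penalty_fixed.convert(qp_eq_bin)
```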
[
"This gives the same result as before.",
"_____no_output_____"
]
],
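A sketch of the combined converter mentioned in the introduction; applied directly to the original `qp`, it should produce an equivalent QUBO in a single step:

```python
from qiskit_optimization.converters import QuadraticProgramToQubo

qp2qubo = QuadraticProgramToQubo()
qubo_direct = qp2qubo.convert(qp)
print(qubo_direct.export_as_lp_string())
```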
[
[
"import qiskit.tools.jupyter\n%qiskit_version_table\n%qiskit_copyright",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
4a0a3b96838a8f6f35435b5dcc08737877bbe298
| 503,685 |
ipynb
|
Jupyter Notebook
|
QuarentenaDados_aula03.ipynb
|
MattBizzo/alura-quarentenadados
|
b320809dfbfb2d5f9ee183057d527459dd0bebf8
|
[
"MIT"
] | null | null | null |
QuarentenaDados_aula03.ipynb
|
MattBizzo/alura-quarentenadados
|
b320809dfbfb2d5f9ee183057d527459dd0bebf8
|
[
"MIT"
] | null | null | null |
QuarentenaDados_aula03.ipynb
|
MattBizzo/alura-quarentenadados
|
b320809dfbfb2d5f9ee183057d527459dd0bebf8
|
[
"MIT"
] | null | null | null | 185.861624 | 215,790 | 0.851375 |
[
[
[
"<a href=\"https://colab.research.google.com/github/MattBizzo/alura-quarentenadados/blob/master/QuarentenaDados_aula03.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Introdução\n\nOlá, seja bem-vinda e bem-vindo ao notebook da **aula 03**! A partir desta aula iremos analisar e discutir uma base de dados junto com você. Por isso, será **importante que as discussões nos vídeos sejam acompanhadas** para entender todos os processos das análises.\n\n",
"_____no_output_____"
],
[
"Nessa aula utilizaremos uma base totalmente nova, que nós também não conhecíamos até o momento da análise. Você vai acompanhar a exploração e, principalmente, as dificuldades ao analisar uma base de dados desconhecida.\n\nVamos começar importando a nossa base de dados! Nessa aula iremos trabalhar com a IMBD 5000, base que contém uma série de informações sobre filmes, sendo uma pequena amostra da famosa base de dados [IMBD](https://www.imdb.com/).",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimdb = pd.read_csv(\"https://gist.githubusercontent.com/guilhermesilveira/24e271e68afe8fd257911217b88b2e07/raw/e70287fb1dcaad4215c3f3c9deda644058a616bc/movie_metadata.csv\")\nimdb.head()",
"_____no_output_____"
]
],
[
[
"Como você acompanhou, iniciamos a aula tentando conhecer as diversas colunas de cada filme e uma das que chamou mais a atenção foi a color. Vamos conhecer quais valores temos nesta colunas?!",
"_____no_output_____"
]
],
[
[
"imdb[\"color\"].unique()",
"_____no_output_____"
]
],
[
[
"Verificamos que essa coluna **color** informa se o filme é colorido ou é preto e branco. Vamos descobrir agora quantos filmes de cada tipo nós temos:",
"_____no_output_____"
]
],
[
[
"imdb[\"color\"].value_counts()",
"_____no_output_____"
],
[
"imdb[\"color\"].value_counts(normalize=True)",
"_____no_output_____"
]
],
[
[
"Agora já descobrimos quantos filmes coloridos e preto e branco temos, e também sabemos que há mais de 5000 filmes na base. Fizemos algo novo, que foi chamar o `value_counts()`, passando o parâmetro **normalize como True**. Desse modo, já calculamos qual é a participação de cada um dos tipos de filmes (**95% são filmes coloridos**).\n\nExcelente! Agora vamos explorar outra coluna a fim de conhecer os diretores que tem mais filmes na nossa base de dados (**lembrando que nossa base é uma amostra muito pequena da realidade**)",
"_____no_output_____"
]
],
[
[
"imdb[\"director_name\"].value_counts()",
"_____no_output_____"
]
],
[
[
"**Steven Spielberg e Woody Allen** são os diretores com mais filmes no **IMDB 5000**.\n\nContinuando com nossa exploração de algumas informações, vamos olhar para o número de críticas por filmes.",
"_____no_output_____"
]
],
[
[
"imdb[\"num_critic_for_reviews\"]",
"_____no_output_____"
],
[
"imdb[\"num_critic_for_reviews\"].describe()",
"_____no_output_____"
]
],
[
[
"Veja que as colunas **color** e **director_name** são *strings*, não fazendo sentido olhar para médias, medianas e afins. Olhar para o número de avaliações já pode ser interessante, por isso usamos o `.describe()`.\n\nAgora podemos até plotar um histograma para avaliar o número de review.",
"_____no_output_____"
]
],
[
[
"import seaborn as sns\nsns.set_style(\"whitegrid\")\nimdb[\"num_critic_for_reviews\"].plot(kind='hist')",
"/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.\n import pandas.util.testing as tm\n"
]
],
[
[
"Verificamos que poucos filmes tem mais de 500 votos, por isso um paralelo que podemos fazer é que filmes com muitos votos são mais populares e filmes com poucos votos não são tão populares. Logo, pelo histograma fica evidente que poucos filmes fazem muito muito sucesso. Claro que não conseguimos afirmar isso com propriedade, pois, novamente, estamos lidando com um número restrito de dados, mas são pontos interessantes de se pensar.\n\nOutra informação interessante de se analisar, são os orçamentos e receitas de um filme, ou seja o aspecto financeiro. Vamos começar pelo gross:",
"_____no_output_____"
]
],
[
[
"imdb[\"gross\"].hist()",
"_____no_output_____"
]
],
[
[
" Como você deve ter reparado, essa é a primeira vez que as escalas estão totalmente diferentes, pois no eixo **X** temos valores tão altos que a escala teve que ser de centena de milhões. Veja como pouquíssimos filmes tem **alto faturamento**, o que nos acende um primeiro alerta de que tem algo estranho (ou temos filmes que rendem muito dinheiro neste dataset).\n\n Vamos tentar conhecer quais são esses filmes com faturamento astronômico.\n",
"_____no_output_____"
]
],
[
[
"imdb.sort_values(\"gross\", ascending=False).head()",
"_____no_output_____"
]
],
[
[
"Nessa lista temos **Avatar, Titanic, Jurassic World e The Avengers**, o que parece fazer sentido para nós, pois sabemos que esses foram filmes com bilheterias gigantescas. Analisando esses dados conseguimos verificar que os maiores faturamentos fazem sentido, mas encontramos um problema nos dados, dado que encontramos duas linhas diplicadas. Podemos usar o pandas para remover esses dados, mas por enquanto vamos manter todas as informações (Se estiver curioso em saber como se faz, consulte o [`.drop_duplicates()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop_duplicates.html)).\n\nMaravilha, agora temos o faturamento e parece estar OK. Queremos começar a responder algumas perguntas e uma delas é: será que filmes coloridos tem faturamento maior que filmes preto e branco?\n\nPara começar a responder essa pergunta precisamos transformar a coluna Color:",
"_____no_output_____"
]
],
[
[
"color_or_bw = imdb.query(\"color in ['Color', ' Black and White']\")\ncolor_or_bw[\"color_0_ou_1\"] = (color_or_bw[\"color\"]==\"Color\") * 1\ncolor_or_bw[\"color_0_ou_1\"].value_counts()",
"/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:2: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n \n"
],
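The `SettingWithCopyWarning` above appears because `color_or_bw` is a filtered slice of `imdb`. A common way to avoid it (an editorial sketch, not part of the original lesson) is to take an explicit copy before adding the new column:

```python
# Taking an explicit copy makes the new column assignment unambiguous
color_or_bw = imdb.query("color in ['Color', ' Black and White']").copy()
color_or_bw['color_0_ou_1'] = (color_or_bw['color'] == 'Color') * 1
color_or_bw['color_0_ou_1'].value_counts()
```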
[
"color_or_bw.head()",
"_____no_output_____"
]
],
[
[
"Veja que agora nós temos uma última coluna em nosso dataframe com valores 0 e 1. Agora podemos construir gráficos com essa informação de filmes coloridos ou não.\n\nP.S: Em aula tivemos problemas porque Black and White tinha um espaço no início, vou cortar esses detalhes aqui no notebook, mas reforço a importância de acompanhar este processo no vídeo.",
"_____no_output_____"
]
],
[
[
"sns.scatterplot(data=color_or_bw, x=\"color_0_ou_1\", y=\"gross\")",
"_____no_output_____"
]
],
[
[
"Então plotamos nossos dados com um displot! Existem várias formas de visualizar essa informação, mas por ora essa nos ajuda a comparar os resultados. Repare como filmes coloridos tem valores bem maiores (isso já era até esperado), mas também temos pontos bem altos em filmes preto e branco, chamando muito atenção.\n\nVamos explorar algumas estatísticas destes filmes:",
"_____no_output_____"
]
],
[
[
"color_or_bw.groupby(\"color\").mean()[\"gross\"]",
"_____no_output_____"
],
[
"color_or_bw.groupby(\"color\").mean()[\"imdb_score\"]",
"_____no_output_____"
],
[
"color_or_bw.groupby(\"color\").median()[\"imdb_score\"]",
"_____no_output_____"
]
],
[
[
"Das estatísticas temos duas bem interessantes, a média e mediana das notas de filmes preto e branco são maiores. Há várias possíveis explicações sobre o porquê disso, reflita aí sobre algumas delas e compartilhe conosco!\n\n\nA partir de agora, vamos fazer uma investigação melhor em relação às finanças dos filmes (faturamento e orçamento). Vamos iniciar plotando e interpretando um gráfico de **gross** por **budget**:",
"_____no_output_____"
]
],
[
[
"budget_gross= imdb[[\"budget\", \"gross\"]].dropna().query(\"budget >0 | gross > 0\")\n\nsns.scatterplot(x=\"budget\", y=\"gross\", data = budget_gross)",
"_____no_output_____"
]
],
[
[
"Para plotar os dados, primeiro removemos as linhas com informações de faturamento e orçamento vazias e também com valores igual a 0, para então gerar o gráfico.\n\nAgora vamos analisar esse gráfico juntos, veja que a escala de **budget** mudou, agora é **e10**. Repare que apenas poucos filmes tem orçamentos tão grandes assim, e seus faturamentos são muito baixos. Será que temos algum problema nos dados? Vamos investigar melhor!",
"_____no_output_____"
]
],
[
[
"imdb.sort_values(\"budget\", ascending=False).head()",
"_____no_output_____"
]
],
[
[
"Ordenando os dados pelo **budget** percebemos que as primeiras posições são de filmes asiáticos. O Guilherme trouxe um ponto interessante para a investigação, pois países como a Coreia usam moedas que tem três casas decimais a mais que o dólar. Então provavelmente o que está ocorrendo é que os dados de orçamento tem valores na moeda local, por isso detectamos valores tão discrepantes. \n\nComo não temos garantia dos números, vamos precisar trabalhar apenas com filmes americanos, assim garantimos que tanto gross e budget estão em dólares. Então vamos iniciar esse processo:",
"_____no_output_____"
],
[
"#Não esqueça de compartilhar a solução dos seus desafios com nossos instrutores, seja no Twitter, seja no LinkedIn. Boa sorte!",
"_____no_output_____"
]
],
[
[
"imdb[\"country\"].unique()",
"_____no_output_____"
]
],
[
[
"Veja que temos filmes de diversos locais de origem:",
"_____no_output_____"
]
],
[
[
"imdb = imdb.drop_duplicates()\nimdb_usa = imdb.query(\"country == 'USA'\")\nimdb_usa.sort_values(\"budget\", ascending=False).head()",
"_____no_output_____"
]
],
[
[
"Agora temos os dados para fazer uma análise melhor entre gross e budget. Vamos plotar o gráfico novamente:\n\n",
"_____no_output_____"
]
],
[
[
"budget_gross = imdb_usa[[\"budget\", \"gross\"]].dropna().query(\"budget >0 | gross > 0\")\n\nsns.scatterplot(x=\"budget\", y=\"gross\", data = budget_gross)",
"_____no_output_____"
]
],
[
[
"Veja que interessante, aparentemente temos uma relação entre orçamento e faturamento. Quanto maior o orçamento, maior o faturamento.\n\nJá que estamos trabalhando com orçamento e faturamento, podemos construir uma nova informação, o lucro, para analisar. De forma bem simplista esse processo de construir novas informações a partir das existentes no dataset é conhecido como [feature engineering](https://en.wikipedia.org/wiki/Feature_engineering).",
"_____no_output_____"
]
],
[
[
"imdb_usa['lucro'] = imdb_usa['gross'] - imdb_usa['budget']\n\nbudget_gross = imdb_usa.query(\"budget >0 | gross > 0\")[[\"budget\", \"lucro\"]].dropna()\n\nsns.scatterplot(x=\"budget\", y=\"lucro\", data = budget_gross)",
"/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:1: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n \"\"\"Entry point for launching an IPython kernel.\n"
]
],
[
[
"MUito bom! Nós construímos nossa coluna lucro na base de dados e plotamos o orçamento contra lucro.\n\nRepare que temos pontos interessantes nesta visualização, um deles são esses filmes com muito custo e prejuizo. Isso pode ser um prejuizo real, mas também podem ser filmes que ainda não tiveram tempo de recuperar o investimento (lançamentos recentes). Outros pontos interessantes de se anlisar seriam os filmes com baixos orçamentos e muito lucro, será que são estão corretos ou pode ser algum erro da base? Parece que nem sempre gastar uma tonelada de dinheiro vai gerar lucros absurdos, será que é isso é verdade? \n\nEsse gráfico é muito rico em informações, vale a pena você gastar um tempo criando hipóteses.\n\nJá que essa nova feature (lucro) parace ser interessante de se analisar, vamos continuar! Mas agora quero ver o lucro em relação ao ano de produção:",
"_____no_output_____"
]
],
[
[
"budget_gross = imdb_usa.query(\"budget >0 | gross > 0\")[[\"title_year\", \"lucro\"]].dropna()\n\nsns.scatterplot(x=\"title_year\", y=\"lucro\", data = budget_gross)",
"_____no_output_____"
]
],
[
[
"Olha que legal esse gráfico, veja como alguns pontos mais recentes reforça a teoria de que alguns filmes podem ainda não ter recuperado o dinheiro investido (Claro que temos muitas variáveis para se analisar, mas é um indício relevante).\n\nOutro ponto que chama muito atenção, são os filmes da década de 30 e 40 com lucros tão altos. Quais serão esses filmes? Bom, essa pergunta você vai responder no desafio do Paulo, que está louco para descobrir!\n\nFalando em Paulo, ele sugeriu uma análise com os nome dos diretores e o orçamento de seus filmes, vamos ver se conseguimos concluir alguma coisa:",
"_____no_output_____"
]
],
[
[
"filmes_por_diretor = imdb_usa[\"director_name\"].value_counts()\ngross_director = imdb_usa[[\"director_name\", \"gross\"]].set_index(\"director_name\").join(filmes_por_diretor, on=\"director_name\")\ngross_director.columns=[\"dindin\", \"filmes_irmaos\"]\ngross_director = gross_director.reset_index()\ngross_director.head()",
"_____no_output_____"
],
[
"sns.scatterplot(x=\"filmes_irmaos\", y=\"dindin\", data = gross_director)",
"_____no_output_____"
]
],
[
[
"Essa imagem aparentemente não é muito conclusiva, então não conseguimos inferir tantas informações.\n\nEsse processo de gerar dados, visualizações e acabar não sendo conclusivo é muito comum na vida de um cientista de dados, pode ir se acostumando =P.\n\nPara finalizar, que tal realizar uma análise das correlações dos dados? EXistem várias formas de calcular a correlação, esse é um assunto denso.Você pode ler mais sobre essas métricas neste [link](https://pt.wikipedia.org/wiki/Correla%C3%A7%C3%A3o).\n\nVamos então inciar a análise das correlações plotando o pairplot.",
"_____no_output_____"
]
],
[
[
"sns.pairplot(data = imdb_usa[[\"gross\", \"budget\", \"lucro\", \"title_year\"]])",
"_____no_output_____"
]
],
[
[
"O pairplot mostra muita informação e a melhor forma de você entender é assistindo as conclusões que tiramos sobre esses gráficos na vídeoaula.\n\nEmbora plotamos um monte de informação, não necessariamente reduzimos a correlação em um número para simplificar a análise. Vamos fazer isso com a ajuda do `.corr()` do [pandas](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.corr.html). ",
"_____no_output_____"
]
],
[
[
"imdb_usa[[\"gross\", \"budget\", \"lucro\", \"title_year\"]].corr()",
"_____no_output_____"
]
],
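A compact way to read the same numbers visually is a correlation heatmap. This sketch assumes `imdb_usa` and the 'lucro' column created earlier in the notebook are still in scope:

```python
import seaborn as sns
import matplotlib.pyplot as plt

corr = imdb_usa[['gross', 'budget', 'lucro', 'title_year']].corr()
sns.heatmap(corr, annot=True, fmt='.2f', cmap='coolwarm', vmin=-1, vmax=1)
plt.show()
```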
[
[
"Com o pandas é simples de se calcular a correlação, mas precisamos saber interpretar os resultados. Vamos fazer isso?\n\nA correlação é uma métrica que vai de 1 a -1. Quando a correlação é 1, dizemos que é totalmente correlacionada (relação linear perfeita e positiva), ou seja se uma variável aumenta em 10 a outra também irá aumentar em 10. Quando o valor da correlação é -1, também temos variáveis totalmente correlacionda, só que de maneira negativa (relação linear perfeita negativa), neste caso, se uma variável aumenta em 10 a outra reduz em 10. Agora quando a correlação é 0 temos a inexistência de correlação, ou seja, uma variável não tem influêcia sobre a outra. \n\nAgora sim, entendido sobre a correlação vamos analisar as nossas. Veja que lucro e gross tem uma correlação alta, o que indica que quanto maior o orçamento maior o lucro (mas repare que a correlação não é perfeita), já o title_yers e lucro tem correlação negativa, mas muito perto de zero (ou seja quase não tem correlação). Viu como conseguimos analisar muitas coisas com a correlação?! Pense e tente analisar os outros casos também.\n\n\nCom isso chegamos ao final de mais uma aula da #quarentenadados. E aí, o que está achando, cada vez mais legal e ao mesmo tempo mais complexo né?\n\nO que importa é estar iniciando e entendendo o que fazemos para analisar os dados! **Continue até o fim, garanto que vai valer a pena.**\nVamos praticar?\n\n**Crie seu próprio notebook, reproduza nossa aula e resolva os desafios que deixamos para vocês**.\n\n\nAté a próxima aula!\n\n**P.S: A partir de agora teremos muitos desafios envolvendo mais análises e conclusões, então não haverá um \"gabarito\". O importante é você compartilhar suas soluções com os colegas e debater os seus resultados e das outras pessoas**",
"_____no_output_____"
],
[
"## Desafio 1 do [Thiago Gonçalves](https://twitter.com/tgcsantos)\n\nPlotar e analisar o Boxplot da média (coluna imbd_score) dos filmes em preto e branco e coloridos.",
"_____no_output_____"
],
[
"##Desafio 2 do [Guilherme Silveira](https://twitter.com/guilhermecaelum)\n\nNo gráfico de **budget por lucro** temos um ponto com muito custo e prejuizo, descubra com é esse filme (budget próximo de 2.5).",
"_____no_output_____"
],
[
"##Desafio 3 do [Guilherme Silveira](https://twitter.com/guilhermecaelum)\n\n\nEm aula falamos que talvez, filmes mais recentes podem ter prejuizo pois ainda não tiveram tempo de recuperar o investimento. Analise essas informações e nos conte quais foram suas conclusões.",
"_____no_output_____"
],
[
"## Desafio 4 do [Paulo Silveira](https://twitter.com/paulo_caelum)\n\nQuais foram os filmes da decada pré 2° guerra que tiveram muito lucro.",
"_____no_output_____"
],
[
"## Desafio 5 do [Paulo Silveira](https://twitter.com/paulo_caelum)\n\nNo gráfico de **filmes_irmaos por dindin** temos alguns pontos estranhos entre 15 e 20. Confirme a tese genial do Paulo que o cidadão estranho é o Woody Allen. (Se ele tiver errado pode cornete nas redes sociais kkkkk)",
"_____no_output_____"
],
[
"## Desafio 6 do [Thiago Gonçalves](https://twitter.com/tgcsantos)\n\nAnalise mais detalhadamente o gráfico pairplot, gaste um tempo pensando e tentando enteder os gráficos.",
"_____no_output_____"
],
[
"## Desafio 7 do [Thiago Gonçalves](https://twitter.com/tgcsantos)\n\n\nCalcular a correlação apenas dos filmes pós anos 2000 (Jogar fora filmes antes de 2000) e interpretar essa correlação.",
"_____no_output_____"
],
[
"## Desafio 8 do [Allan Spadini](https://twitter.com/allanspadini)\n\nTentar encontrar uma reta, pode ser com uma régua no monitor (não faça isso), com o excel/google sheets, com o python, no gráfico que parece se aproximar com uma reta (por exemplo budget/lucro, gross/lucro)",
"_____no_output_____"
],
[
"## Desafio 9 da [Thais André](https://twitter.com/thais_tandre)\n\nAnalisar e interpretar a correlação de outras variáveis além das feitas em sala (notas é uma boa). Número de avaliações por ano pode ser também uma feature.\n",
"_____no_output_____"
],
[
"#Não esqueça de compartilhar a solução dos seus desafios com nossos instrutores, seja no Twitter, seja LinkedIn. Boa sorte!",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |