Dataset schema (column name, dtype, and observed value range):

| Column | Dtype | Values |
|---|---|---|
| hexsha | stringlengths | 40–40 |
| size | int64 | 6–14.9M |
| ext | stringclasses | 1 value |
| lang | stringclasses | 1 value |
| max_stars_repo_path | stringlengths | 6–260 |
| max_stars_repo_name | stringlengths | 6–119 |
| max_stars_repo_head_hexsha | stringlengths | 40–41 |
| max_stars_repo_licenses | list | |
| max_stars_count | int64 | 1–191k |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24–24 |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24–24 |
| max_issues_repo_path | stringlengths | 6–260 |
| max_issues_repo_name | stringlengths | 6–119 |
| max_issues_repo_head_hexsha | stringlengths | 40–41 |
| max_issues_repo_licenses | list | |
| max_issues_count | int64 | 1–67k |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24–24 |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24–24 |
| max_forks_repo_path | stringlengths | 6–260 |
| max_forks_repo_name | stringlengths | 6–119 |
| max_forks_repo_head_hexsha | stringlengths | 40–41 |
| max_forks_repo_licenses | list | |
| max_forks_count | int64 | 1–105k |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24–24 |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24–24 |
| avg_line_length | float64 | 2–1.04M |
| max_line_length | int64 | 2–11.2M |
| alphanum_fraction | float64 | 0–1 |
| cells | list | |
| cell_types | list | |
| cell_type_groups | list | |
4a8653415498ade4f362e7cd88250944783d6e59
1,522
ipynb
Jupyter Notebook
algorithm-dynamic_programming-back11726.ipynb
joongbo/algorithm_training
4fe57277e9b9d876c127a64677f320e8b78ac668
[ "MIT" ]
null
null
null
algorithm-dynamic_programming-back11726.ipynb
joongbo/algorithm_training
4fe57277e9b9d876c127a64677f320e8b78ac668
[ "MIT" ]
null
null
null
algorithm-dynamic_programming-back11726.ipynb
joongbo/algorithm_training
4fe57277e9b9d876c127a64677f320e8b78ac668
[ "MIT" ]
null
null
null
18.560976
53
0.467806
[ [ [ "2xn 크기의 면적을 1x2, 2x1 타일로 타일링 하는 방법의 가짓 수 구하기\n- 조건 1. 1 <= n <= 1000 입력\n- 조건 2. 출력은 10,007로 나눈 나머지를 출력", "_____no_output_____" ] ], [ [ "import sys", "_____no_output_____" ], [ "limit = 1000\nmemory = [0] * (limit + 1)\nsys.setrecursionlimit(limit + 1)\n\ndef tiling(n):\n if n == 1: return 1\n if n == 2: return 2\n if memory[n] != 0: return memory[n]\n memory[n] = tiling(n-1) + tiling(n-2)\n return memory[n]", "_____no_output_____" ], [ "n = int(input())\nprint(tiling(n)%10007)", "1000\n\n1115\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ] ]
4a865ea72975f6bfa732d28f02bd2b0af50e3240
346,602
ipynb
Jupyter Notebook
assignment1/assignment_1.ipynb
tnv875/Group18-NoTeeth
f5861d58e880d99f8d6ce0beaaf5e93059d00929
[ "MIT" ]
null
null
null
assignment1/assignment_1.ipynb
tnv875/Group18-NoTeeth
f5861d58e880d99f8d6ce0beaaf5e93059d00929
[ "MIT" ]
null
null
null
assignment1/assignment_1.ipynb
tnv875/Group18-NoTeeth
f5861d58e880d99f8d6ce0beaaf5e93059d00929
[ "MIT" ]
null
null
null
166.956647
148,288
0.889481
[ [ [ "# Mandatory Assignment 1\n\nThis is the first of two mandatory assignments which must be completed during the course. First some practical information:\n\n* When is the assignment due?: **23:59, Sunday, August 19, 2018.**\n* How do you grade the assignment?: You will **peergrade** each other as primary grading. \n* Can i work with my group?: **yes**\n\nThe assigment consist of one to tree problems from each of the exercise sets you have solved so far (excluding Set 1). We've tried to select problems which are self contained, but it might be nessecary to solve some of the previous exercises in each set to fully answer the problems in this assignment.\n\n## Problems from Exercise Set 2:\n\n> **Ex. 2.2**: Make two lists. The first should be numbered. The second should be unnumbered and contain at least one sublevel. ", "_____no_output_____" ], [ "### Converted (Using shortkey M)\n\n1. Ordered 1\n1. Ordered 2\n 1. Ordered 1.1\n 1. Ordered 1.2\n \n \n- Unnumbered 1\n- Unnumbered 2\n - Unnumbered sublevel 1", "_____no_output_____" ], [ "## Problems from Exercise set 3:\n\n> **Ex. 3.1.3:** Let `l1 = ['r ', 'Is', '>', ' < ', 'g ', '?']`. Create from `l1` the sentence `\"Is r > g?\"` using your knowledge about string formatting. Store this new string in a variable called `answer_31`. Make sure there is only one space in between worlds.\n>\n>> _Hint:_ You should be able to combine the above informations to solve this exercise.", "_____no_output_____" ] ], [ [ "# [Answer to Ex. 3.1.3 here]\nl1 = ['r ', 'Is', '>', ' < ', 'g ', '?']\n\n# answer_31 = \n\nanswer_31 = l1[1] + \" \" + l1[0].strip() + \" \" + l1[2] + \" \" + l1[4].strip() + l1[-1]\nprint(answer_31)", "Is r > g?\n" ], [ "assert answer_31 == \"Is r > g?\"", "_____no_output_____" ] ], [ [ "> **Ex. 3.1.4**: Create an empty dictionary `words` using the `dict()`function. Then add each of the words in `['animal', 'coffee', 'python', 'unit', 'knowledge', 'tread', 'arise']` as a key, with the value being a boolean indicator for whether the word begins with a vowel. The results should look like `{'bacon': False, 'asynchronous': True ...}`. Store the result in a new variable called `answer_32`.\n>\n>> _Hint:_ You might want co first construct a function that asseses whether a given word begins with a vowel or not.", "_____no_output_____" ] ], [ [ "# [Answer to Ex. 3.1.4 here]\nW = ['animal', 'coffee', 'python', 'unit', 'knowledge', 'tread', 'arise']\n\n# answer_32 = \n\n\n# YOUR CODE HERE\nwords = dict()\nkeys = ['animal', 'coffee', 'python', 'unit', 'knowledge', 'tread', 'arise']\nvalues = []\n\nfor word in keys:\n if word[0] in \"aeiouy\":\n values.append(True)\n else:\n values.append(False)\n\nwords = list(zip(keys, values))\nanswer_32 = dict(words)\nprint(answer_32)", "{'animal': True, 'coffee': False, 'python': False, 'unit': True, 'knowledge': False, 'tread': False, 'arise': True}\n" ], [ "assert answer_32 == {i: i[0] in 'aeiou' for i in W}\nassert sorted(answer_32) == sorted(W)", "_____no_output_____" ] ], [ [ "> **Ex. 3.3.2:** use the `requests` module (get it with `pip install requests`) and `construct_link()` which you defined in the previous question (ex 3.3.1) to request birth data from the \"FOD\" table. Get all available years (variable \"Tid\"), but only female births (BARNKON=P) . Unpack the json payload and store the result. 
Wrap the whole thing in a function which takes an url as input and returns the corresponding output.\n>\n> Store the birth data in a new variable called `answer_33`.\n>\n>> _Hint:_ The `requests.response` object has a `.json()` method. \n>\n>> _Note:_ you wrote `construct_link()` in 3.3.1, if you didn't heres the link you need to get: `https://api.statbank.dk/v1/data/FOLK1A/JSONSTAT?lang=en&Tid=*`", "_____no_output_____" ] ], [ [ "'https://api.statbank.dk/v1/data/FOLK1A/JSONSTAT?lang=en&Tid=*'\ndef construct_link(table_id, variables):\n base = 'https://api.statbank.dk/v1/data/{id}/JSONSTAT?lang=en'.format(id = table_id)\n \n for var in variables:\n base += '&{v}'.format(v = var)\n\n return base \n\nconstruct_link('FOLK1A', ['Tid=*'])\n\n\nURL = construct_link('FOD',['Tid=*','BARNKON=P'])\n\nimport requests \nimport pprint\nimport json\n\ndef unpackjson(x , file_name):\n response = requests.get(x) \n response_json = response.json()\n with open(file_name, 'w') as f:\n response_json_str = json.dumps(response_json)\n f.write(response_json_str)\n return(response_json)\n \nresponse_json = unpackjson(URL, 'my_filelec')\n\nanswer_33 = response_json", "_____no_output_____" ], [ "assert sorted(answer_33['dataset'].keys()) == ['dimension', 'label', 'source', 'updated', 'value']\nassert 'BARNKON' in answer_33['dataset']['dimension'].keys()", "_____no_output_____" ] ], [ [ "## Problems from exercise set 4\n", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd", "_____no_output_____" ] ], [ [ "> **Ex. 4.1.1:** Use Pandas' CSV reader to fetch daily data weather from 1864 for various stations - available [here](https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/by_year/). Store the dataframe in a variable called `answer_41`.\n>\n>> *Hint 1*: for compressed files you may need to specify the keyword `compression`.\n>\n>> *Hint 2*: keyword `header` can be specified as the CSV has no column names.\n>\n>> *Hint 3*: Specify the path, as the URL linking directly to the 1864 file. ", "_____no_output_____" ] ], [ [ "# [Answer to Ex. 4.1.1 here]\n\n# answer_41 = \n\nurl = 'https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/by_year/1864.csv.gz'\n\nanswer_41 = pd.read_csv(url,\n compression='gzip',\n header=None)#.iloc[:,:4] #for exercise we want all 8 columns..", "_____no_output_____" ], [ "assert answer_41.shape == (27349, 8)\nassert list(answer_41.columns) == list(range(8))", "_____no_output_____" ] ], [ [ "> **Ex. 4.1.2:** Structure your weather DataFrame by using only the relevant columns (station identifier, data, observation type, observation value), rename them. Make sure observations are correctly formated (how many decimals should we add? one?).\n>\n> Store the resulting dataframe in a new variable called `answer_42`.\n>\n>> *Hint:* rename can be done with `df.columns=COLS` where `COLS` is a list of column names.", "_____no_output_____" ] ], [ [ "# [Answer to Ex. 4.1.2 here]\n\n# answer_42 = \n\n# YOUR CODE HERE\nanswer_42 = answer_41.iloc[:,:4]\n\nanswer_42.columns = ['station', 'datetime', 'obs_type', 'obs_value']\nanswer_42['obs_value'] = answer_42['obs_value'] / 10", "_____no_output_____" ], [ "assert answer_42.shape == (27349, 4)\nassert 144.8 in [answer_42[i].max() for i in answer_42]\nassert -666.0 in [answer_42[i].min() for i in answer_42]\nassert 18640101 in [answer_42[i].min() for i in answer_42]", "_____no_output_____" ] ], [ [ "> **Ex. 4.1.3:** Select data for the station `ITE00100550` and only observations for maximal temperature. Make a copy of the DataFrame. 
Explain in a one or two sentences how copying works.\n>\n> Store the subsetted dataframe in a new variable called `answer_43`.\n>\n>> *Hint 1*: the `&` operator works elementwise on boolean series (like `and` in core python).\n>\n>> *Hint 2*: copying of the dataframe is done with the `copy` method for DataFrames.", "_____no_output_____" ] ], [ [ "# [Answer to Ex. 4.1.3 here]\n\n# answer_43 = \n\n\n# YOUR CODE HERE\nanswer_43 = answer_42[(answer_42.station == 'ITE00100550') & (answer_42.obs_type == 'TMAX')].copy()", "_____no_output_____" ], [ "assert 'ITE00100550' in [answer_43[i].min() for i in answer_43]\nassert 'ITE00100550' in [answer_43[i].max() for i in answer_43]\nassert 'TMAX' in [answer_43[i].min() for i in answer_43]\nassert 'TMAX' in [answer_43[i].max() for i in answer_43]", "_____no_output_____" ] ], [ [ "> **Ex. 4.1.4:** Make a new column in `answer_44` called `TMAX_F` where you have converted the temperature variables to Fahrenheit. Make sure not to overwrite `answer_43`.\n>\n> Store the resulting dataframe in a variable called `answer_44`.\n>\n>> *Hint*: Conversion is $F = 32 + 1.8*C$ where $F$ is Fahrenheit and $C$ is Celsius.", "_____no_output_____" ] ], [ [ "# [Answer to Ex. 4.1.4 here]\nanswer_44 = answer_43.copy()\n# answer_44 = \n\n# YOUR CODE HERE\nanswer_44['TMAX_F'] = 32 + 1.8 * answer_44['obs_value']", "_____no_output_____" ], [ "assert set(answer_44.columns) - set(answer_43.columns) == {'TMAX_F'}", "_____no_output_____" ] ], [ [ "## Problems from exercise set 5", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nimport numpy as np \nimport pandas as pd\nimport seaborn as sns \n\n%matplotlib inline \n\niris = sns.load_dataset('iris')\ntitanic = sns.load_dataset('titanic')", "_____no_output_____" ] ], [ [ "> **Ex. 5.1.1:**: Show the first five rows of the titanic dataset. What information is in the dataset? Use a barplot to show the probability of survival for men and women within each passenger class. Can you make a boxplot showing the same information (why/why not?). _Bonus:_ show a boxplot for the fare-prices within each passenger class. \n>\n> Spend five minutes discussing what you can learn about the survival-selection aboard titanic from the figure(s).\n>\n> > _Hint:_ https://seaborn.pydata.org/generated/seaborn.barplot.html, specifically the `hue` option.\n", "_____no_output_____" ] ], [ [ "# [Answer to Ex. 5.1.1 here]\n\n# YOUR CODE HERE\nprint(titanic.head(5))\n\nfig1 = sns.barplot(x = 'sex', y = 'survived', hue = 'class', data = titanic)\n\n#boxplot for fare prices\nsns.boxplot(x='class', y='fare', data=titanic)", " survived pclass sex age sibsp parch fare embarked class \\\n0 0 3 male 22.0 1 0 7.2500 S Third \n1 1 1 female 38.0 1 0 71.2833 C First \n2 1 3 female 26.0 0 0 7.9250 S Third \n3 1 1 female 35.0 1 0 53.1000 S First \n4 0 3 male 35.0 0 0 8.0500 S Third \n\n who adult_male deck embark_town alive alone \n0 man True NaN Southampton no False \n1 woman False C Cherbourg yes False \n2 woman False NaN Southampton yes True \n3 woman False C Southampton yes False \n4 man True NaN Southampton no True \n" ] ], [ [ "> **Ex. 5.1.2:** Using the iris flower dataset, draw a scatterplot of sepal length and petal length. Include a second order polynomial fitted to the data. Add a title to the plot and rename the axis labels.\n> _Discuss:_ Is this a meaningful way to display the data? 
What could we do differently?\n>\n> For a better understanding of the dataset this image might be useful:\n> <img src=\"iris_pic.png\" alt=\"Drawing\" style=\"width: 200px;\"/>\n>\n>> _Hint:_ use the `.regplot` method from seaborn. ", "_____no_output_____" ] ], [ [ "# [Answer to Ex. 5.1.2 here]\n\n# YOUR CODE HERE\niris.head(5)\n\nplt.scatter(x=iris['sepal_length'], y=iris['petal_length'])\n\n\nflower = sns.regplot(x='sepal_length', y='petal_length', order = 2, data = iris)\nflower.set_title('Flowers')\nflower.set_ylabel('xss')\nflower.set_xlabel('asdasd')", "_____no_output_____" ] ], [ [ "> **Ex. 5.1.3:** Combine the two of the figures you created above into a two-panel figure similar to the one shown here:\n> <img src=\"Example.png\" alt=\"Drawing\" style=\"width: 600px;\"/>\n>\n> Save the figure as a png file on your computer. \n>> _Hint:_ See [this question](https://stackoverflow.com/questions/41384040/subplot-for-seaborn-boxplot) on stackoverflow for inspiration.", "_____no_output_____" ] ], [ [ "# [Answer to Ex. 5.1.3 here]\n\n# YOUR CODE HERE\nf, axes = plt.subplots(1, 2)\n\nflower = sns.regplot(x='sepal_length', y='petal_length', order = 2, data = iris, ax=axes[0])\nflower.set_title('Flowers')\nflower.set_ylabel('xss')\nflower.set_xlabel('asdasd')\n\nfig1 = sns.barplot(x = 'sex', y = 'survived', hue = 'class', data = titanic, ax=axes[1])", "_____no_output_____" ] ], [ [ "> **Ex. 5.1.4:** Use [pairplot with hue](https://seaborn.pydata.org/generated/seaborn.pairplot.html) to create a figure that clearly shows how the different species vary across measurements. Change the color palette and remove the shading from the density plots. _Bonus:_ Try to explain how the `diag_kws` argument works (_hint:_ [read here](https://stackoverflow.com/questions/1769403/understanding-kwargs-in-python))", "_____no_output_____" ] ], [ [ "# [Answer to Ex. 5.1.4 here]\n\n# YOUR CODE HERE\nsns.pairplot(iris, hue = 'species', diag_kws=dict(shade=False), palette=\"dark\") #", "_____no_output_____" ] ], [ [ "## Problems from exercise set 6\n\n> _Note:_ In the exercises we asked you to download weather data from the NOAA website. For this assignment the data are loaded in the following code cell into two pandas dataframes.", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nweather_1864 = pd.read_csv('weather_data_1864.csv')", "_____no_output_____" ] ], [ [ "> **Ex. 6.1.4:** Extract the country code from the station name into a separate column.\n>\n> Create a new column in `weather_1864` called `answer_61` and store the country codes here.\n>\n>> _Hint:_ The station column contains a GHCND ID, given to each weather station by NOAA. The format of these ID's is a 2-3 letter country code, followed by a integer identifying the specific station. A simple approach is to assume a fixed length of the country ID. A more complex way would be to use the [`re`](https://docs.python.org/2/library/re.html) module.", "_____no_output_____" ] ], [ [ "weather_1864.head()", "_____no_output_____" ], [ "# [Answer to Ex. 
6.1.4]\n# weather_1864['answer_61'] =\n\n# YOUR CODE HERE\n#answer_42['station'].unique() \n\n#weather_1864['answer_61']=answer_42['station'].str.replace('\\d','') \n\nweather_1864['answer_61'] = weather_1864['station'].str[:2]\n\n#wsorted(.unique())\n#sorted(['SZ', 'CA', 'EZ', 'GM', 'AU', 'IT', 'BE', 'UK', 'EI', 'AG', 'AS']) # der er for få lande???", "_____no_output_____" ], [ "assert sorted(weather_1864['answer_61'].str[:2].unique()) == sorted(['SZ', 'CA', 'EZ', 'GM', 'AU', 'IT', 'BE', 'UK', 'EI', 'AG', 'AS'])", "_____no_output_____" ] ], [ [ "> **Ex. 6.1.5:** Make a function that downloads and formats the weather data according to previous exercises in Exercise Section 4.1, 6.1. You should use data for ALL stations but still only select maximal temperature. _Bonus:_ To validate that your function works plot the temperature curve for each country in the same window. Use `plt.legend()` to add a legend. \n>\n> Name your function `prepareWeatherData`.", "_____no_output_____" ] ], [ [ "# [Answer to Ex. 6.1.5]\n\ndef prepareWeatherData(year):\n # Your code here\n return \n\n# YOUR CODE HERE\ndef prepareWeatherData(year):\n start_url ='https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/by_year/'\n endpoint_url = year + '.csv.gz'\n url = start_url + endpoint_url\n \n df_weather = pd.read_csv(url,\n compression='gzip',\n header=None).iloc[:,:4]\n\n df_weather.columns = ['station', 'datetime', 'obs_type', 'obs_value']\n df_weather['obs_value'] = df_weather['obs_value'] / 10\n df_select = df_weather[(df_weather.obs_type == 'TMAX')].copy()\n df_select['TMAX_F'] = 32 + 1.8 * df_select['obs_value']\n df_sorted = df_select.reset_index(drop=True).sort_values(by=['obs_value'])\n \n #Converting strings to the correct datetime format.\n df_sorted['datetime']=pd.to_datetime(df_sorted['datetime'].astype(str)) \n #New column month.\n month = df_sorted['datetime'].dt.month # taking month\n df_sorted['month'] = month\n \n #Making date time as indexindex.\n #df_sorted = df_sorted.set_index('datetime') TAGET UD FOR DENNE ASSIGNMENT\n \n return(df_sorted)\n\ndata1 = prepareWeatherData('1864')\nprint(data1.shape)\ndata1.head()", "(5686, 6)\n" ], [ "assert prepareWeatherData('1864').shape == (5686, 6)", "_____no_output_____" ] ], [ [ "## Problems from exercise set 7\n\n> _Note:_ Once again if you haven't managed to download the data from NOAA, you can refer to the github repo to get csv-files containing the required data.", "_____no_output_____" ] ], [ [ "%matplotlib inline \n\nimport pandas as pd \nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\n\n# Increases the plot size a little\nmpl.rcParams['figure.figsize'] = 11, 6", "_____no_output_____" ] ], [ [ "> **Ex. 7.1.1:** Plot the monthly max,min, mean, first and third quartiles for maximum temperature for our station with the ID _'ITE00100550'_ in 1864. \n\n> *Hint*: the method `describe` computes all these measures.", "_____no_output_____" ] ], [ [ "# [Answer to Ex. 
7.1.1]\n\n# YOUR CODE HERE\ndef dlweather(year):\n start_url ='https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/by_year/'\n endpoint_url = year + '.csv.gz'\n url = start_url + endpoint_url\n \n df_weather = pd.read_csv(url,\n compression='gzip',\n header=None).iloc[:,:5]\n\n df_weather.columns = ['station', 'datetime', 'obs_type', 'obs_value','tst']\n df_weather['obs_value'] = df_weather['obs_value'] / 10\n df_select = df_weather[(df_weather.obs_type == 'TMAX')].copy()\n df_select['TMAX_F'] = 32 + 1.8 * df_select['obs_value']\n df_sorted = df_select.reset_index(drop=True).sort_values(by=['obs_value'])\n \n #Converting strings to the correct datetime format.\n df_sorted['datetime']=pd.to_datetime(df_sorted['datetime'].astype(str)) \n #New column month.\n month = df_sorted['datetime'].dt.month # taking month\n df_sorted['month'] = month\n \n #Making date time as indexindex.\n #df_sorted = df_sorted.set_index('datetime') TAGET UD FOR DENNE ASSIGNMENT\n \n return(df_sorted)\n\nweathersp = dlweather('1864')", "_____no_output_____" ], [ "weathersp = weathersp[(weathersp.station == 'ITE00100550')].copy()\nweathersp.head()", "_____no_output_____" ], [ "#grouping on month\nsplit_vars = ['month'] \napply_vars = ['TMAX_F']\n#apply_fcts = ['max','min','mean','']\n\nweathersp_final = weathersp.groupby(split_vars)[apply_vars].describe()\nprint(weathersp_final.head())\n\n#dropping count and std, husk at det er ligesom en ordbog. \nweathersp_final = weathersp_final['TMAX_F'][['mean','min','25%','50%','75%','max']]\n\nweathersp_final.plot(figsize=(10,7))", " TMAX_F \n count mean std min 25% 50% 75% max\nmonth \n1 31.0 31.860645 5.516290 20.66 28.040 32.00 35.42 41.54\n2 29.0 39.442069 5.456973 28.76 34.700 39.92 44.24 47.12\n3 31.0 53.960000 5.004734 46.22 50.090 53.78 57.47 64.40\n4 30.0 61.238000 8.076958 43.34 57.965 61.25 65.12 77.18\n5 31.0 70.647742 5.962719 57.20 67.100 69.98 74.84 80.60\n" ] ], [ [ "> **Ex. 7.1.2:** Get the processed data from years 1864-1867 as a list of DataFrames. Convert the list into a single DataFrame by concatenating vertically. \n>\n> Name the concatenated data `answer_72`", "_____no_output_____" ] ], [ [ "# [Answer to Ex. 7.1.2]\n\n# YOUR CODE HERE\n\ndc={}\nfor year in ['1864','1865','1866','1867']:\n dc[year] = dlweather(year)\n\nanswer_72 = pd.concat([dc['1864'],dc['1865'],dc['1866'],dc['1867']], join ='outer',axis=0, sort=False)", "_____no_output_____" ], [ "assert answer_72.shape == (30003, 7)", "_____no_output_____" ] ], [ [ "> **Ex. 7.1.3:** Parse the station location data which you can find at https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/ghcnd-stations.txt. Merge station locations onto the weather data spanning 1864-1867. \n>\n> Store the merged data in a new variable called `answer_73`.\n>\n> _Hint:_ The location data have the folllowing format, \n\n```\n------------------------------\nVariable Columns Type\n------------------------------\nID 1-11 Character\nLATITUDE 13-20 Real\nLONGITUDE 22-30 Real\nELEVATION 32-37 Real\nSTATE 39-40 Character\nNAME 42-71 Character\nGSN FLAG 73-75 Character\nHCN/CRN FLAG 77-79 Character\nWMO ID 81-85 Character\n------------------------------\n```\n\n> *Hint*: The station information has fixed width format - does there exist a pandas reader for that?", "_____no_output_____" ] ], [ [ "# [Answer to Ex. 
7.1.3]\n\n# YOUR CODE HERE\ndf_country = pd.read_fwf('https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/ghcnd-stations.txt') #fwf = fixed width format\n#print(df_country.head())", "_____no_output_____" ], [ "assert answer_73.shape == (5686, 15) or answer_73.shape == (30003, 15)", "_____no_output_____" ] ], [ [ "## Problems from exercise set 8\n\n> **Ex. 8.1.2.:** Use the `request` module to collect the first page of job postings.\n>\n> Store the response.json() object in a new variable called `answer_81`.\n> ", "_____no_output_____" ] ], [ [ "# [Answer to Ex. 8.1.2]\n\n# YOUR CODE HERE\nimport pandas as pd\nimport requests\n\nurl ='https://job.jobnet.dk/CV/FindWork/Search?Offset=20' # tag /Search?Offset=20 af hvis du vil have ALLE jobs. \nr = requests.get(url)\nr.status_code\nanswer_81=r.json()\nanswer_81.keys()", "_____no_output_____" ], [ "assert sorted(answer_81.keys()) == sorted(['Expression', 'Facets', 'JobPositionPostings', 'TotalResultCount'])", "_____no_output_____" ] ], [ [ "> **Ex. 8.1.3.:** Store the 'TotalResultCount' value for later use. Also create a dataframe from the 'JobPositionPostings' field in the json. Name this dataframe `answer_82`.", "_____no_output_____" ] ], [ [ "# [Answer to Ex. 8.1.3]\n# answer_82 = \n\n# YOUR CODE HERE\ntrCount= r.json()['TotalResultCount']\n\nanswer_82=pd.DataFrame(r.json()['JobPositionPostings'])\nanswer_82.shape", "_____no_output_____" ], [ "assert answer_82.shape == (20,44)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
4a865f6bdb564c906bee6c3db2a64e4c1f8b8391
296,984
ipynb
Jupyter Notebook
PEC2/nsanchezpoPEC2.ipynb
nadia1477/C-digo-visualizaci-n-de-datos
7cb720a7b0c94b14c626bfe9500e4f1e00ee5179
[ "CC0-1.0" ]
null
null
null
PEC2/nsanchezpoPEC2.ipynb
nadia1477/C-digo-visualizaci-n-de-datos
7cb720a7b0c94b14c626bfe9500e4f1e00ee5179
[ "CC0-1.0" ]
null
null
null
PEC2/nsanchezpoPEC2.ipynb
nadia1477/C-digo-visualizaci-n-de-datos
7cb720a7b0c94b14c626bfe9500e4f1e00ee5179
[ "CC0-1.0" ]
null
null
null
38.80622
95
0.336001
[ [ [ " PEC 2 Visualización de datos ", "_____no_output_____" ], [ "Nadia Sánchez", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport pandas as pd\nfrom pandas import DataFrame\npd.set_option('display.max_columns', None)", "_____no_output_____" ] ], [ [ "Cargar dataset\n", "_____no_output_____" ] ], [ [ "df = pd.read_csv('pax_data_1789_agreements_08-11-19.csv')\nprint(\"número de filas:\", df.shape[0], \"número de columnas:\", df.shape[1])", "número de filas: 1789 número de columnas: 266\n" ], [ "df.describe()", "_____no_output_____" ], [ "df.head(3)", "_____no_output_____" ] ], [ [ "Valores perdidos\n", "_____no_output_____" ] ], [ [ "df.isnull().sum() \ndf.isna().sum()", "_____no_output_____" ], [ "df.dropna(axis=1, how='all', inplace=True)\ndf.isnull().sum()/len(df)*100\n#eliminar variables con mas del 5% de valores NaN\ndelete_columns = df.columns[df.isnull().sum()/len(df)*100 > 5]\ndelete_columns", "_____no_output_____" ], [ "df2= DataFrame(df.drop(delete_columns, axis=1))\nprint(\"número de filas:\", df2.shape[0], \"número de columnas:\", df2.shape[1])", "número de filas: 1789 número de columnas: 258\n" ], [ "df3= DataFrame(df2.dropna())\nprint(\"número de filas:\", df3.shape[0], \"número de columnas:\", df3.shape[1])", "número de filas: 1698 número de columnas: 258\n" ], [ "# Crear matriz de correlacion\ncorr_matrix = df3.corr().abs()\n\n# Seleccione el triángulo superior de la matriz de correlación\nsuperior = corr_matrix.where(np.triu(np.ones(corr_matrix.shape), k=1).astype(np.bool))\n\n# indice de variables con correlación superir al 80%\nv_eliminar = [column for column in superior.columns if any(superior[column] > 0.8)]", "_____no_output_____" ], [ "v_eliminar", "_____no_output_____" ], [ "# Drop features \ndf4=DataFrame(df3.drop(df[v_eliminar], axis=1))\nprint(\"número de filas:\", df4.shape[0], \"número de columnas:\", df4.shape[1])\n", "número de filas: 1698 número de columnas: 242\n" ], [ "# valor medio y desviación estándard de todos los atributos numéricos:\ndf4.describe().loc[[\"mean\", \"std\", \"min\",\"max\"],:] ", "_____no_output_____" ], [ "df4.head(5)\n", "_____no_output_____" ], [ "df4.isnull().sum() \ndf4.isna().sum()", "_____no_output_____" ], [ "acuerdos_c = df4.to_csv (r'acuerdos_limpio.csv', index = None, header=True, sep=';' )", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a8662af5ba378b0ddb5950b3c2136e8e32db042
156,701
ipynb
Jupyter Notebook
capsuleNet.ipynb
kennethrithvik/capsnet_image_class
7fb4705516c655fb53ff8691d54d66bbc02541e4
[ "MIT" ]
1
2019-03-17T05:56:47.000Z
2019-03-17T05:56:47.000Z
capsuleNet.ipynb
kennethrithvik/capsnet_image_class
7fb4705516c655fb53ff8691d54d66bbc02541e4
[ "MIT" ]
1
2020-10-31T05:32:09.000Z
2020-10-31T05:32:09.000Z
capsuleNet.ipynb
kennethrithvik/capsnet_image_class
7fb4705516c655fb53ff8691d54d66bbc02541e4
[ "MIT" ]
null
null
null
336.991398
136,946
0.902151
[ [ [ "import os\nimport argparse\nfrom keras.preprocessing.image import ImageDataGenerator\nfrom keras import callbacks\nimport numpy as np\nfrom keras import layers, models, optimizers\nfrom keras import backend as K\nfrom keras.utils import to_categorical\nimport matplotlib.pyplot as plt\nfrom utils import combine_images\nfrom PIL import Image\nfrom capsulelayers import CapsuleLayer, PrimaryCap, Length, Mask\n\nfrom keras.utils import multi_gpu_model\n\nK.set_image_data_format('channels_last')", "_____no_output_____" ], [ "class dotdict(dict):\n \"\"\"dot.notation access to dictionary attributes\"\"\"\n __getattr__ = dict.get\n __setattr__ = dict.__setitem__\n __delattr__ = dict.__delitem__\nargs={\n'epochs':200,\n'batch_size':32,\n'lr':0.001, #Initial learning rate\n'lr_decay':0.9, #The value multiplied by lr at each epoch. Set a larger value for larger epochs\n'lam_recon':0.392, #The coefficient for the loss of decoder\n'routings':3, #Number of iterations used in routing algorithm. should > 0\n'shift_fraction':0.2, #Fraction of pixels to shift at most in each direction.\n'debug':False, #Save weights by TensorBoard\n'save_dir':'./result',\n'digit':1,\n'gpus':2,\n'train_dir':'./data/train/',\n'test_dir':'./data/test/'\n}\nargs=dotdict(args)\nif not os.path.exists(args.save_dir):\n os.makedirs(args.save_dir)", "_____no_output_____" ], [ "# Load Data\n\ntrain_datagen = ImageDataGenerator(rescale = 1./255,\n horizontal_flip=True,\n rotation_range = args.shift_fraction,\n zoom_range = args.shift_fraction,\n width_shift_range = args.shift_fraction,\n height_shift_range = args.shift_fraction)\n#generator = train_datagen.flow(x, y, batch_size=batch_size)\ntrain_set = train_datagen.flow_from_directory(args.train_dir,\n target_size = (64, 64),\n batch_size = args.batch_size,\n class_mode = 'categorical')\ntest_datagen = ImageDataGenerator(rescale = 1./255)\ntest_set = test_datagen.flow_from_directory(args.test_dir,\n target_size = (64, 64),\n batch_size = 3753, #args.batch_size,\n class_mode = 'categorical')\n", "Found 8751 images belonging to 5 classes.\nFound 3753 images belonging to 5 classes.\n" ], [ "def margin_loss(y_true, y_pred):\n \"\"\"\n Margin loss for Eq.(4). When y_true[i, :] contains not just one `1`, this loss should work too. 
Not test it.\n :param y_true: [None, n_classes]\n :param y_pred: [None, num_capsule]\n :return: a scalar loss value.\n \"\"\"\n L = y_true * K.square(K.maximum(0., 0.9 - y_pred)) + \\\n 0.5 * (1 - y_true) * K.square(K.maximum(0., y_pred - 0.1))\n\n return K.mean(K.sum(L, 1))\ndef train(model, args):\n \"\"\"\n Training a CapsuleNet\n :param model: the CapsuleNet model\n :param data: a tuple containing training and testing data, like `((x_train, y_train), (x_test, y_test))`\n :param args: arguments\n :return: The trained model\n \"\"\"\n # callbacks\n log = callbacks.CSVLogger(args.save_dir + '/log.csv')\n tb = callbacks.TensorBoard(log_dir=args.save_dir + '/tensorboard-logs',\n batch_size=args.batch_size, histogram_freq=int(args.debug))\n checkpoint = callbacks.ModelCheckpoint(args.save_dir + '/Model{epoch:02d}_{val_acc:.2f}.h5', monitor='val_capsnet_acc',\n save_best_only=True, save_weights_only=False, verbose=1)\n lr_decay = callbacks.LearningRateScheduler(schedule=lambda epoch: args.lr * (args.lr_decay ** epoch))\n\n # compile the model\n model.compile(optimizer=optimizers.Adam(lr=args.lr),\n loss=[margin_loss, 'mse'],\n loss_weights=[1., args.lam_recon],\n metrics={'capsnet': 'accuracy'})\n\n # Begin: Training with data augmentation ---------------------------------------------------------------------#\n \n def train_generator(batch_size, shift_fraction=0.2):\n while 1:\n x_batch, y_batch = train_set.next()\n yield ([x_batch, y_batch], [y_batch, x_batch])\n \n \n # Training with data augmentation. If shift_fraction=0., also no augmentation.\n x_test, y_test = test_set.next()\n model.fit_generator(generator=train_generator(args.batch_size,args.shift_fraction),\n steps_per_epoch=int(len(train_set.classes) / args.batch_size),\n epochs=args.epochs,\n validation_data = [[x_test, y_test], [y_test, x_test]],\n callbacks=[log, tb, checkpoint, lr_decay])\n # End: Training with data augmentation -----------------------------------------------------------------------#\n\n model.save(args.save_dir + '/trained_model.h5')\n print('Trained model saved to \\'%s/trained_model.h5\\'' % args.save_dir)\n\n from utils import plot_log\n plot_log(args.save_dir + '/log.csv', show=True)\n\n return model", "_____no_output_____" ], [ "\n#define model\ndef CapsNet(input_shape, n_class, routings):\n \"\"\"\n A Capsule Network.\n :param input_shape: data shape, 3d, [width, height, channels]\n :param n_class: number of classes\n :param routings: number of routing iterations\n :return: Two Keras Models, the first one used for training, and the second one for evaluation.\n `eval_model` can also be used for training.\n \"\"\"\n x = layers.Input(shape=input_shape)\n\n # Layer 1: Just a conventional Conv2D layer\n conv1 = layers.Conv2D(filters=256, kernel_size=9, strides=1, padding='valid', activation='relu', name='conv1')(x)\n\n # Layer 2: Conv2D layer with `squash` activation, then reshape to [None, num_capsule, dim_capsule]\n primarycaps = PrimaryCap(conv1, dim_capsule=8, n_channels=32, kernel_size=9, strides=2, padding='valid')\n\n # Layer 3: Capsule layer. Routing algorithm works here.\n digitcaps = CapsuleLayer(num_capsule=n_class, dim_capsule=16, routings=routings,\n name='digitcaps')(primarycaps)\n\n # Layer 4: This is an auxiliary layer to replace each capsule with its length. Just to match the true label's shape.\n # If using tensorflow, this will not be necessary. 
:)\n out_caps = Length(name='capsnet')(digitcaps)\n\n # Decoder network.\n y = layers.Input(shape=(n_class,))\n masked_by_y = Mask()([digitcaps, y]) # The true label is used to mask the output of capsule layer. For training\n masked = Mask()(digitcaps) # Mask using the capsule with maximal length. For prediction\n\n # Shared Decoder model in training and prediction\n decoder = models.Sequential(name='decoder')\n decoder.add(layers.Dense(512, activation='relu', input_dim=16*n_class))\n decoder.add(layers.Dense(1024, activation='relu'))\n decoder.add(layers.Dense(np.prod(input_shape), activation='sigmoid'))\n decoder.add(layers.Reshape(target_shape=input_shape, name='out_recon'))\n\n # Models for training and evaluation (prediction)\n train_model = models.Model([x, y], [out_caps, decoder(masked_by_y)])\n eval_model = models.Model(x, [out_caps, decoder(masked)])\n\n # manipulate model\n noise = layers.Input(shape=(n_class, 16))\n noised_digitcaps = layers.Add()([digitcaps, noise])\n masked_noised_y = Mask()([noised_digitcaps, y])\n manipulate_model = models.Model([x, y, noise], decoder(masked_noised_y))\n return train_model, eval_model, manipulate_model\n\nmodel, eval_model, manipulate_model = CapsNet(input_shape=train_set.image_shape,\n n_class=train_set.num_classes,\n routings=args.routings)\nmodel.summary()", "__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) (None, 64, 64, 3) 0 \n__________________________________________________________________________________________________\nconv1 (Conv2D) (None, 56, 56, 256) 62464 input_1[0][0] \n__________________________________________________________________________________________________\nprimarycap_conv2d (Conv2D) (None, 24, 24, 256) 5308672 conv1[0][0] \n__________________________________________________________________________________________________\nprimarycap_reshape (Reshape) (None, 18432, 8) 0 primarycap_conv2d[0][0] \n__________________________________________________________________________________________________\nprimarycap_squash (Lambda) (None, 18432, 8) 0 primarycap_reshape[0][0] \n__________________________________________________________________________________________________\ndigitcaps (CapsuleLayer) (None, 5, 16) 11796480 primarycap_squash[0][0] \n__________________________________________________________________________________________________\ninput_2 (InputLayer) (None, 5) 0 \n__________________________________________________________________________________________________\nmask_1 (Mask) (None, 80) 0 digitcaps[0][0] \n input_2[0][0] \n__________________________________________________________________________________________________\ncapsnet (Length) (None, 5) 0 digitcaps[0][0] \n__________________________________________________________________________________________________\ndecoder (Sequential) (None, 64, 64, 3) 13161984 mask_1[0][0] \n==================================================================================================\nTotal params: 30,329,600\nTrainable params: 30,329,600\nNon-trainable params: 0\n__________________________________________________________________________________________________\n" ], [ "# train or model\ntrain(model=model, args=args)", "_____no_output_____" ], [ "final_test_set = test_datagen.flow_from_directory(args.test_dir,\n target_size = (64, 64),\n batch_size = 
3753, #args.batch_size,\n class_mode = 'categorical')\n\n#Reconstruct the image\ndef manipulate_latent(model):\n print('-'*30 + 'Begin: manipulate' + '-'*30)\n x_test, y_test = final_test_set.next()\n index = np.argmax(y_test, 1) == args.digit\n number = np.random.randint(low=0, high=sum(index) - 1)\n x, y = x_test[index][number], y_test[index][number]\n x, y = np.expand_dims(x, 0), np.expand_dims(y, 0)\n noise = np.zeros([1, 5, 16])\n x_recons = []\n for dim in range(16):\n for r in [-0.15, -0.1, -0.05, 0, 0.05, 0.1, 0.15]:\n tmp = np.copy(noise)\n tmp[:,:,dim] = r\n x_recon = model.predict([x, y, tmp])\n x_recons.append(x_recon)\n\n x_recons = np.concatenate(x_recons)\n\n img = combine_images(x_recons, height=16)\n image = img*255\n Image.fromarray(image.astype(np.uint8)).save(args.save_dir + '/manipulate-%d.png' % args.digit)\n print('manipulated result saved to %s/manipulate-%d.png' % (args.save_dir, args.digit))\n print('-' * 30 + 'End: manipulate' + '-' * 30)", "Found 3753 images belonging to 5 classes.\n" ], [ "#function to test\nfinal_test_set = test_datagen.flow_from_directory(args.test_dir,\n target_size = (64, 64),\n shuffle=False,\n batch_size = 3753, #args.batch_size,\n class_mode = 'categorical')\n\ndef test(model):\n x_test, y_test = final_test_set.next()\n y_pred, x_recon = model.predict(x_test, batch_size=100)\n print('-'*30 + 'Begin: test' + '-'*30)\n print('Test acc:', np.sum(np.argmax(y_pred, 1) == np.argmax(y_test, 1))/y_test.shape[0])\n\n img = combine_images(np.concatenate([x_test[:50],x_recon[:50]]))\n image = img * 255\n Image.fromarray(image.astype(np.uint8)).save(args.save_dir + \"/real_and_recon.png\")\n print()\n print('Reconstructed images are saved to %s/real_and_recon.png' % args.save_dir)\n print('-' * 30 + 'End: test' + '-' * 30)\n plt.imshow(plt.imread(args.save_dir + \"/real_and_recon.png\"))\n plt.show()\n \n print('-' * 30 + 'Test Metrics' + '-' * 30)\n np.savetxt(\"./result/capsnet_657.csv\", y_pred, delimiter=\",\")\n y_pred = np.argmax(y_pred,axis = 1) \n y_actual = np.argmax(y_test, axis = 1)\n\n classnames=[]\n for classname in final_test_set.class_indices:\n classnames.append(classname)\n confusion_mtx = confusion_matrix(y_actual, y_pred) \n print(confusion_mtx)\n target_names = classnames\n print(classification_report(y_actual, y_pred, target_names=target_names))\n print(\"accuracy= \",(confusion_mtx.diagonal().sum()/confusion_mtx.sum())*100)", "Found 3753 images belonging to 5 classes.\n" ], [ "##Evaluation\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.metrics import confusion_matrix, classification_report\nfrom keras.models import load_model\n\nmodel.load_weights('./result/trained_model.h5')\n\n#manipulate_latent(manipulate_model)\ntest(model=eval_model)\n\n", "------------------------------Begin: test------------------------------\nTest acc: 0.654143351985\n\nReconstructed images are saved to ./result/real_and_recon.png\n------------------------------End: test------------------------------\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a867c6bceb916d2575271b08187c2609ac57e8a
63,320
ipynb
Jupyter Notebook
docs/tutorials/custom_federated_algorithms_2.ipynb
OpenMined/federated
4f001b0fa20109993c0f88392ab987ac6f29a304
[ "Apache-2.0" ]
5
2020-06-04T20:10:25.000Z
2020-07-22T02:15:38.000Z
docs/tutorials/custom_federated_algorithms_2.ipynb
OpenMined/federated
4f001b0fa20109993c0f88392ab987ac6f29a304
[ "Apache-2.0" ]
null
null
null
docs/tutorials/custom_federated_algorithms_2.ipynb
OpenMined/federated
4f001b0fa20109993c0f88392ab987ac6f29a304
[ "Apache-2.0" ]
null
null
null
38.190591
5,888
0.564356
[ [ [ "##### Copyright 2019 The TensorFlow Authors.", "_____no_output_____" ] ], [ [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "_____no_output_____" ] ], [ [ "# Custom Federated Algorithms, Part 2: Implementing Federated Averaging", "_____no_output_____" ], [ "<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/federated/tutorials/custom_federated_algorithms_2\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/federated/blob/v0.15.0/docs/tutorials/custom_federated_algorithms_2.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/federated/blob/v0.15.0/docs/tutorials/custom_federated_algorithms_2.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>", "_____no_output_____" ], [ "This tutorial is the second part of a two-part series that demonstrates how to\nimplement custom types of federated algorithms in TFF using the\n[Federated Core (FC)](../federated_core.md), which serves as a foundation for\nthe [Federated Learning (FL)](../federated_learning.md) layer (`tff.learning`).\n\nWe encourage you to first read the\n[first part of this series](custom_federated_algorithms_1.ipynb), which\nintroduce some of the key concepts and programming abstractions used here.\n\nThis second part of the series uses the mechanisms introduced in the first part\nto implement a simple version of federated training and evaluation algorithms.\n\nWe encourage you to review the\n[image classification](federated_learning_for_image_classification.ipynb) and\n[text generation](federated_learning_for_text_generation.ipynb) tutorials for a\nhigher-level and more gentle introduction to TFF's Federated Learning APIs, as\nthey will help you put the concepts we describe here in context.", "_____no_output_____" ], [ "## Before we start\n\nBefore we start, try to run the following \"Hello World\" example to make sure\nyour environment is correctly setup. 
If it doesn't work, please refer to the\n[Installation](../install.md) guide for instructions.", "_____no_output_____" ] ], [ [ "#@test {\"skip\": true}\n!pip install --quiet --upgrade tensorflow_federated\n!pip install --quiet --upgrade nest_asyncio\n\nimport nest_asyncio\nnest_asyncio.apply()", "_____no_output_____" ], [ "import collections\n\nimport numpy as np\nimport tensorflow as tf\nimport tensorflow_federated as tff\n\n# TODO(b/148678573,b/148685415): must use the ReferenceExecutor because it\n# supports unbounded references and tff.sequence_* intrinsics.\ntff.framework.set_default_context(tff.test.ReferenceExecutor())", "_____no_output_____" ], [ "@tff.federated_computation\ndef hello_world():\n return 'Hello, World!'\n\nhello_world()", "_____no_output_____" ] ], [ [ "## Implementing Federated Averaging\n\nAs in\n[Federated Learning for Image Classification](federated_learning_for_image_classification.ipynb),\nwe are going to use the MNIST example, but since this is intended as a low-level\ntutorial, we are going to bypass the Keras API and `tff.simulation`, write raw\nmodel code, and construct a federated data set from scratch.\n", "_____no_output_____" ], [ "\n### Preparing federated data sets\n\nFor the sake of a demonstration, we're going to simulate a scenario in which we\nhave data from 10 users, and each of the users contributes knowledge how to\nrecognize a different digit. This is about as\nnon-[i.i.d.](https://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables)\nas it gets.\n\nFirst, let's load the standard MNIST data:", "_____no_output_____" ] ], [ [ "mnist_train, mnist_test = tf.keras.datasets.mnist.load_data()", "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n11493376/11490434 [==============================] - 0s 0us/step\n11501568/11490434 [==============================] - 0s 0us/step\n" ], [ "[(x.dtype, x.shape) for x in mnist_train]", "_____no_output_____" ] ], [ [ "The data comes as Numpy arrays, one with images and another with digit labels, both\nwith the first dimension going over the individual examples. Let's write a\nhelper function that formats it in a way compatible with how we feed federated\nsequences into TFF computations, i.e., as a list of lists - the outer list\nranging over the users (digits), the inner ones ranging over batches of data in\neach client's sequence. As is customary, we will structure each batch as a pair\nof tensors named `x` and `y`, each with the leading batch dimension. 
While at\nit, we'll also flatten each image into a 784-element vector and rescale the\npixels in it into the `0..1` range, so that we don't have to clutter the model\nlogic with data conversions.", "_____no_output_____" ] ], [ [ "NUM_EXAMPLES_PER_USER = 1000\nBATCH_SIZE = 100\n\n\ndef get_data_for_digit(source, digit):\n output_sequence = []\n all_samples = [i for i, d in enumerate(source[1]) if d == digit]\n for i in range(0, min(len(all_samples), NUM_EXAMPLES_PER_USER), BATCH_SIZE):\n batch_samples = all_samples[i:i + BATCH_SIZE]\n output_sequence.append({\n 'x':\n np.array([source[0][i].flatten() / 255.0 for i in batch_samples],\n dtype=np.float32),\n 'y':\n np.array([source[1][i] for i in batch_samples], dtype=np.int32)\n })\n return output_sequence\n\n\nfederated_train_data = [get_data_for_digit(mnist_train, d) for d in range(10)]\n\nfederated_test_data = [get_data_for_digit(mnist_test, d) for d in range(10)]", "_____no_output_____" ] ], [ [ "As a quick sanity check, let's look at the `Y` tensor in the last batch of data\ncontributed by the fifth client (the one corresponding to the digit `5`).", "_____no_output_____" ] ], [ [ "federated_train_data[5][-1]['y']", "_____no_output_____" ] ], [ [ "Just to be sure, let's also look at the image corresponding to the last element of that batch.", "_____no_output_____" ] ], [ [ "from matplotlib import pyplot as plt\n\nplt.imshow(federated_train_data[5][-1]['x'][-1].reshape(28, 28), cmap='gray')\nplt.grid(False)\nplt.show()", "_____no_output_____" ] ], [ [ "### On combining TensorFlow and TFF\n\nIn this tutorial, for compactness we immediately decorate functions that\nintroduce TensorFlow logic with `tff.tf_computation`. However, for more complex\nlogic, this is not the pattern we recommend. Debugging TensorFlow can already be\na challenge, and debugging TensorFlow after it has been fully serialized and\nthen re-imported necessarily loses some metadata and limits interactivity,\nmaking debugging even more of a challenge.\n\nTherefore, **we strongly recommend writing complex TF logic as stand-alone\nPython functions** (that is, without `tff.tf_computation` decoration). This way\nthe TensorFlow logic can be developed and tested using TF best practices and\ntools (like eager mode), before serializing the computation for TFF (e.g., by invoking `tff.tf_computation` with a Python function as the argument).", "_____no_output_____" ], [ "### Defining a loss function\n\nNow that we have the data, let's define a loss function that we can use for\ntraining. First, let's define the type of input as a TFF named tuple. Since the\nsize of data batches may vary, we set the batch dimension to `None` to indicate\nthat the size of this dimension is unknown.", "_____no_output_____" ] ], [ [ "BATCH_SPEC = collections.OrderedDict(\n x=tf.TensorSpec(shape=[None, 784], dtype=tf.float32),\n y=tf.TensorSpec(shape=[None], dtype=tf.int32))\nBATCH_TYPE = tff.to_type(BATCH_SPEC)\n\nstr(BATCH_TYPE)", "_____no_output_____" ] ], [ [ "You may be wondering why we can't just define an ordinary Python type. Recall\nthe discussion in [part 1](custom_federated_algorithms_1.ipynb), where we\nexplained that while we can express the logic of TFF computations using Python,\nunder the hood TFF computations *are not* Python. The symbol `BATCH_TYPE`\ndefined above represents an abstract TFF type specification. 
It is important to\ndistinguish this *abstract* TFF type from concrete Python *representation*\ntypes, e.g., containers such as `dict` or `collections.namedtuple` that may be\nused to represent the TFF type in the body of a Python function. Unlike Python,\nTFF has a single abstract type constructor `tff.StructType` for tuple-like\ncontainers, with elements that can be individually named or left unnamed. This\ntype is also used to model formal parameters of computations, as TFF\ncomputations can formally only declare one parameter and one result - you will\nsee examples of this shortly.\n\nLet's now define the TFF type of model parameters, again as a TFF named tuple of\n*weights* and *bias*.", "_____no_output_____" ] ], [ [ "MODEL_SPEC = collections.OrderedDict(\n weights=tf.TensorSpec(shape=[784, 10], dtype=tf.float32),\n bias=tf.TensorSpec(shape=[10], dtype=tf.float32))\nMODEL_TYPE = tff.to_type(MODEL_SPEC)\n\nprint(MODEL_TYPE)", "<weights=float32[784,10],bias=float32[10]>\n" ] ], [ [ "With those definitions in place, now we can define the loss for a given model, over a single batch. Note the usage of `@tf.function` decorator inside the `@tff.tf_computation` decorator. This allows us to write TF using Python like semantics even though were inside a `tf.Graph` context created by the `tff.tf_computation` decorator.", "_____no_output_____" ] ], [ [ "# NOTE: `forward_pass` is defined separately from `batch_loss` so that it can \n# be later called from within another tf.function. Necessary because a\n# @tf.function decorated method cannot invoke a @tff.tf_computation.\n\[email protected]\ndef forward_pass(model, batch):\n predicted_y = tf.nn.softmax(\n tf.matmul(batch['x'], model['weights']) + model['bias'])\n return -tf.reduce_mean(\n tf.reduce_sum(\n tf.one_hot(batch['y'], 10) * tf.math.log(predicted_y), axis=[1]))\n\[email protected]_computation(MODEL_TYPE, BATCH_TYPE)\ndef batch_loss(model, batch):\n return forward_pass(model, batch)", "_____no_output_____" ] ], [ [ "As expected, computation `batch_loss` returns `float32` loss given the model and\na single data batch. Note how the `MODEL_TYPE` and `BATCH_TYPE` have been lumped\ntogether into a 2-tuple of formal parameters; you can recognize the type of\n`batch_loss` as `(<MODEL_TYPE,BATCH_TYPE> -> float32)`.", "_____no_output_____" ] ], [ [ "str(batch_loss.type_signature)", "_____no_output_____" ] ], [ [ "As a sanity check, let's construct an initial model filled with zeros and\ncompute the loss over the batch of data we visualized above.", "_____no_output_____" ] ], [ [ "initial_model = collections.OrderedDict(\n weights=np.zeros([784, 10], dtype=np.float32),\n bias=np.zeros([10], dtype=np.float32))\n\nsample_batch = federated_train_data[5][-1]\n\nbatch_loss(initial_model, sample_batch)", "_____no_output_____" ] ], [ [ "Note that we feed the TFF computation with the initial model defined as a\n`dict`, even though the body of the Python function that defines it consumes\nmodel parameters as `model.weight` and `model.bias`. The arguments of the call\nto `batch_loss` aren't simply passed to the body of that function.\n\n\nWhat happens when we invoke `batch_loss`?\nThe Python body of `batch_loss` has already been traced and serialized in the above cell where it was defined. TFF acts as the caller to `batch_loss`\nat the computation definition time, and as the target of invocation at the time\n`batch_loss` is invoked. In both roles, TFF serves as the bridge between TFF's\nabstract type system and Python representation types. 
At the invocation time,\nTFF will accept most standard Python container types (`dict`, `list`, `tuple`,\n`collections.namedtuple`, etc.) as concrete representations of abstract TFF\ntuples. Also, although as noted above, TFF computations formally only accept a\nsingle parameter, you can use the familiar Python call syntax with positional\nand/or keyword arguments in case where the type of the parameter is a tuple - it\nworks as expected.", "_____no_output_____" ], [ "### Gradient descent on a single batch\n\nNow, let's define a computation that uses this loss function to perform a single\nstep of gradient descent. Note how in defining this function, we use\n`batch_loss` as a subcomponent. You can invoke a computation constructed with\n`tff.tf_computation` inside the body of another computation, though typically\nthis is not necessary - as noted above, because serialization looses some\ndebugging information, it is often preferable for more complex computations to\nwrite and test all the TensorFlow without the `tff.tf_computation` decorator.", "_____no_output_____" ] ], [ [ "@tff.tf_computation(MODEL_TYPE, BATCH_TYPE, tf.float32)\ndef batch_train(initial_model, batch, learning_rate):\n # Define a group of model variables and set them to `initial_model`. Must\n # be defined outside the @tf.function.\n model_vars = collections.OrderedDict([\n (name, tf.Variable(name=name, initial_value=value))\n for name, value in initial_model.items()\n ])\n optimizer = tf.keras.optimizers.SGD(learning_rate)\n\n @tf.function\n def _train_on_batch(model_vars, batch):\n # Perform one step of gradient descent using loss from `batch_loss`.\n with tf.GradientTape() as tape:\n loss = forward_pass(model_vars, batch)\n grads = tape.gradient(loss, model_vars)\n optimizer.apply_gradients(\n zip(tf.nest.flatten(grads), tf.nest.flatten(model_vars)))\n return model_vars\n\n return _train_on_batch(model_vars, batch)", "_____no_output_____" ], [ "str(batch_train.type_signature)", "_____no_output_____" ] ], [ [ "When you invoke a Python function decorated with `tff.tf_computation` within the\nbody of another such function, the logic of the inner TFF computation is\nembedded (essentially, inlined) in the logic of the outer one. As noted above,\nif you are writing both computations, it is likely preferable to make the inner\nfunction (`batch_loss` in this case) a regular Python or `tf.function` rather\nthan a `tff.tf_computation`. However, here we illustrate that calling one\n`tff.tf_computation` inside another basically works as expected. This may be\nnecessary if, for example, you do not have the Python code defining\n`batch_loss`, but only its serialized TFF representation.\n\nNow, let's apply this function a few times to the initial model to see whether\nthe loss decreases.", "_____no_output_____" ] ], [ [ "model = initial_model\nlosses = []\nfor _ in range(5):\n model = batch_train(model, sample_batch, 0.1)\n losses.append(batch_loss(model, sample_batch))", "_____no_output_____" ], [ "losses", "_____no_output_____" ] ], [ [ "### Gradient descent on a sequence of local data\n\nNow, since `batch_train` appears to work, let's write a similar training\nfunction `local_train` that consumes the entire sequence of all batches from one\nuser instead of just a single batch. 
The new computation will need to now\nconsume `tff.SequenceType(BATCH_TYPE)` instead of `BATCH_TYPE`.", "_____no_output_____" ] ], [ [ "LOCAL_DATA_TYPE = tff.SequenceType(BATCH_TYPE)\n\[email protected]_computation(MODEL_TYPE, tf.float32, LOCAL_DATA_TYPE)\ndef local_train(initial_model, learning_rate, all_batches):\n\n # Mapping function to apply to each batch.\n @tff.federated_computation(MODEL_TYPE, BATCH_TYPE)\n def batch_fn(model, batch):\n return batch_train(model, batch, learning_rate)\n\n return tff.sequence_reduce(all_batches, initial_model, batch_fn)", "_____no_output_____" ], [ "str(local_train.type_signature)", "_____no_output_____" ] ], [ [ "There are quite a few details buried in this short section of code, let's go\nover them one by one.\n\nFirst, while we could have implemented this logic entirely in TensorFlow,\nrelying on `tf.data.Dataset.reduce` to process the sequence similarly to how\nwe've done it earlier, we've opted this time to express the logic in the glue\nlanguage, as a `tff.federated_computation`. We've used the federated operator\n`tff.sequence_reduce` to perform the reduction.\n\nThe operator `tff.sequence_reduce` is used similarly to\n`tf.data.Dataset.reduce`. You can think of it as essentially the same as\n`tf.data.Dataset.reduce`, but for use inside federated computations, which as\nyou may remember, cannot contain TensorFlow code. It is a template operator with\na formal parameter 3-tuple that consists of a *sequence* of `T`-typed elements,\nthe initial state of the reduction (we'll refer to it abstractly as *zero*) of\nsome type `U`, and the *reduction operator* of type `(<U,T> -> U)` that alters the\nstate of the reduction by processing a single element. The result is the final\nstate of the reduction, after processing all elements in a sequential order. In\nour example, the state of the reduction is the model trained on a prefix of the\ndata, and the elements are data batches.\n\nSecond, note that we have again used one computation (`batch_train`) as a\ncomponent within another (`local_train`), but not directly. We can't use it as a\nreduction operator because it takes an additional parameter - the learning rate.\nTo resolve this, we define an embedded federated computation `batch_fn` that\nbinds to the `local_train`'s parameter `learning_rate` in its body. It is\nallowed for a child computation defined this way to capture a formal parameter\nof its parent as long as the child computation is not invoked outside the body\nof its parent. You can think of this pattern as an equivalent of\n`functools.partial` in Python.\n\nThe practical implication of capturing `learning_rate` this way is, of course,\nthat the same learning rate value is used across all batches.\n\nNow, let's try the newly defined local training function on the entire sequence\nof data from the same user who contributed the sample batch (digit `5`).", "_____no_output_____" ] ], [ [ "locally_trained_model = local_train(initial_model, 0.1, federated_train_data[5])", "_____no_output_____" ] ], [ [ "Did it work? 
To answer this question, we need to implement evaluation.", "_____no_output_____" ], [ "### Local evaluation\n\nHere's one way to implement local evaluation by adding up the losses across all data\nbatches (we could have just as well computed the average; we'll leave it as an\nexercise for the reader).", "_____no_output_____" ] ], [ [ "@tff.federated_computation(MODEL_TYPE, LOCAL_DATA_TYPE)\ndef local_eval(model, all_batches):\n # TODO(b/120157713): Replace with `tff.sequence_average()` once implemented.\n return tff.sequence_sum(\n tff.sequence_map(\n tff.federated_computation(lambda b: batch_loss(model, b), BATCH_TYPE),\n all_batches))", "_____no_output_____" ], [ "str(local_eval.type_signature)", "_____no_output_____" ] ], [ [ "Again, there are a few new elements illustrated by this code, let's go over them\none by one.\n\nFirst, we have used two new federated operators for processing sequences:\n`tff.sequence_map` that takes a *mapping function* `T->U` and a *sequence* of\n`T`, and emits a sequence of `U` obtained by applying the mapping function\npointwise, and `tff.sequence_sum` that just adds all the elements. Here, we map\neach data batch to a loss value, and then add the resulting loss values to\ncompute the total loss.\n\nNote that we could have again used `tff.sequence_reduce`, but this wouldn't be\nthe best choice - the reduction process is, by definition, sequential, whereas\nthe mapping and sum can be computed in parallel. When given a choice, it's best\nto stick with operators that don't constrain implementation choices, so that\nwhen our TFF computation is compiled in the future to be deployed to a specific\nenvironment, one can take full advantage of all potential opportunities for a\nfaster, more scalable, more resource-efficient execution.\n\nSecond, note that just as in `local_train`, the component function we need\n(`batch_loss`) takes more parameters than what the federated operator\n(`tff.sequence_map`) expects, so we again define a partial, this time inline by\ndirectly wrapping a `lambda` as a `tff.federated_computation`. Using wrappers\ninline with a function as an argument is the recommended way to use\n`tff.tf_computation` to embed TensorFlow logic in TFF.\n\nNow, let's see whether our training worked.", "_____no_output_____" ] ], [ [ "print('initial_model loss =', local_eval(initial_model,\n federated_train_data[5]))\nprint('locally_trained_model loss =',\n local_eval(locally_trained_model, federated_train_data[5]))", "initial_model loss = 23.025854\nlocally_trained_model loss = 0.4348469\n" ] ], [ [ "Indeed, the loss decreased. But what happens if we evaluated it on another\nuser's data?", "_____no_output_____" ] ], [ [ "print('initial_model loss =', local_eval(initial_model,\n federated_train_data[0]))\nprint('locally_trained_model loss =',\n local_eval(locally_trained_model, federated_train_data[0]))", "initial_model loss = 23.025854\nlocally_trained_model loss = 74.50075\n" ] ], [ [ "As expected, things got worse. The model was trained to recognize `5`, and has\nnever seen a `0`. This brings the question - how did the local training impact\nthe quality of the model from the global perspective?", "_____no_output_____" ], [ "### Federated evaluation\n\nThis is the point in our journey where we finally circle back to federated types\nand federated computations - the topic that we started with. 
Here's a pair of\nTFF types definitions for the model that originates at the server, and the data\nthat remains on the clients.", "_____no_output_____" ] ], [ [ "SERVER_MODEL_TYPE = tff.FederatedType(MODEL_TYPE, tff.SERVER)\nCLIENT_DATA_TYPE = tff.FederatedType(LOCAL_DATA_TYPE, tff.CLIENTS)", "_____no_output_____" ] ], [ [ "With all the definitions introduced so far, expressing federated evaluation in\nTFF is a one-liner - we distribute the model to clients, let each client invoke\nlocal evaluation on its local portion of data, and then average out the loss.\nHere's one way to write this.", "_____no_output_____" ] ], [ [ "@tff.federated_computation(SERVER_MODEL_TYPE, CLIENT_DATA_TYPE)\ndef federated_eval(model, data):\n return tff.federated_mean(\n tff.federated_map(local_eval, [tff.federated_broadcast(model), data]))", "_____no_output_____" ] ], [ [ "We've already seen examples of `tff.federated_mean` and `tff.federated_map`\nin simpler scenarios, and at the intuitive level, they work as expected, but\nthere's more in this section of code than meets the eye, so let's go over it\ncarefully.\n\nFirst, let's break down the *let each client invoke local evaluation on its\nlocal portion of data* part. As you may recall from the preceding sections,\n`local_eval` has a type signature of the form `(<MODEL_TYPE, LOCAL_DATA_TYPE> ->\nfloat32)`.\n\nThe federated operator `tff.federated_map` is a template that accepts as a\nparameter a 2-tuple that consists of the *mapping function* of some type `T->U`\nand a federated value of type `{T}@CLIENTS` (i.e., with member constituents of\nthe same type as the parameter of the mapping function), and returns a result of\ntype `{U}@CLIENTS`.\n\nSince we're feeding `local_eval` as a mapping function to apply on a per-client\nbasis, the second argument should be of a federated type `{<MODEL_TYPE,\nLOCAL_DATA_TYPE>}@CLIENTS`, i.e., in the nomenclature of the preceding sections,\nit should be a federated tuple. Each client should hold a full set of arguments\nfor `local_eval` as a member consituent. Instead, we're feeding it a 2-element\nPython `list`. What's happening here?\n\nIndeed, this is an example of an *implicit type cast* in TFF, similar to\nimplicit type casts you may have encountered elsewhere, e.g., when you feed an\n`int` to a function that accepts a `float`. Implicit casting is used scarcily at\nthis point, but we plan to make it more pervasive in TFF as a way to minimize\nboilerplate.\n\nThe implicit cast that's applied in this case is the equivalence between\nfederated tuples of the form `{<X,Y>}@Z`, and tuples of federated values\n`<{X}@Z,{Y}@Z>`. While formally, these two are different type signatures,\nlooking at it from the programmers's perspective, each device in `Z` holds two\nunits of data `X` and `Y`. What happens here is not unlike `zip` in Python, and\nindeed, we offer an operator `tff.federated_zip` that allows you to perform such\nconversions explicity. 
When the `tff.federated_map` encounters a tuple as a\nsecond argument, it simply invokes `tff.federated_zip` for you.\n\nGiven the above, you should now be able to recognize the expression\n`tff.federated_broadcast(model)` as representing a value of TFF type\n`{MODEL_TYPE}@CLIENTS`, and `data` as a value of TFF type\n`{LOCAL_DATA_TYPE}@CLIENTS` (or simply `CLIENT_DATA_TYPE`), the two getting\nfiltered together through an implicit `tff.federated_zip` to form the second\nargument to `tff.federated_map`.\n\nThe operator `tff.federated_broadcast`, as you'd expect, simply transfers data\nfrom the server to the clients.\n\nNow, let's see how our local training affected the average loss in the system.", "_____no_output_____" ] ], [ [ "print('initial_model loss =', federated_eval(initial_model,\n federated_train_data))\nprint('locally_trained_model loss =',\n federated_eval(locally_trained_model, federated_train_data))", "initial_model loss = 23.025852\nlocally_trained_model loss = 54.432625\n" ] ], [ [ "Indeed, as expected, the loss has increased. In order to improve the model for\nall users, we'll need to train in on everyone's data.", "_____no_output_____" ], [ "### Federated training\n\nThe simplest way to implement federated training is to locally train, and then\naverage the models. This uses the same building blocks and patters we've already\ndiscussed, as you can see below.", "_____no_output_____" ] ], [ [ "SERVER_FLOAT_TYPE = tff.FederatedType(tf.float32, tff.SERVER)\n\n\[email protected]_computation(SERVER_MODEL_TYPE, SERVER_FLOAT_TYPE,\n CLIENT_DATA_TYPE)\ndef federated_train(model, learning_rate, data):\n return tff.federated_mean(\n tff.federated_map(local_train, [\n tff.federated_broadcast(model),\n tff.federated_broadcast(learning_rate), data\n ]))", "_____no_output_____" ] ], [ [ "Note that in the full-featured implementation of Federated Averaging provided by\n`tff.learning`, rather than averaging the models, we prefer to average model\ndeltas, for a number of reasons, e.g., the ability to clip the update norms,\nfor compression, etc.\n\nLet's see whether the training works by running a few rounds of training and\ncomparing the average loss before and after.", "_____no_output_____" ] ], [ [ "model = initial_model\nlearning_rate = 0.1\nfor round_num in range(5):\n model = federated_train(model, learning_rate, federated_train_data)\n learning_rate = learning_rate * 0.9\n loss = federated_eval(model, federated_train_data)\n print('round {}, loss={}'.format(round_num, loss))", "round 0, loss=21.60552406311035\nround 1, loss=20.365678787231445\nround 2, loss=19.27480125427246\nround 3, loss=18.31110954284668\nround 4, loss=17.45725440979004\n" ] ], [ [ "For completeness, let's now also run on the test data to confirm that our model\ngeneralizes well.", "_____no_output_____" ] ], [ [ "print('initial_model test loss =',\n federated_eval(initial_model, federated_test_data))\nprint('trained_model test loss =', federated_eval(model, federated_test_data))", "initial_model test loss = 22.795593\ntrained_model test loss = 17.278767\n" ] ], [ [ "This concludes our tutorial.\n\nOf course, our simplified example doesn't reflect a number of things you'd need\nto do in a more realistic scenario - for example, we haven't computed metrics\nother than loss. 
We encourage you to study\n[the implementation](https://github.com/tensorflow/federated/blob/master/tensorflow_federated/python/learning/federated_averaging.py)\nof federated averaging in `tff.learning` as a more complete example, and as a\nway to demonstrate some of the coding practices we'd like to encourage.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
4a867d47ee96b40cf5f3ec18fe136e3362ca9a40
1,210
ipynb
Jupyter Notebook
cs-master/cs401/notebook/python_basic_note.ipynb
Vince-Lau/my-study
5c94c978fdd01c07e33299e41a2aa695f4cbf3d1
[ "Apache-2.0" ]
null
null
null
cs-master/cs401/notebook/python_basic_note.ipynb
Vince-Lau/my-study
5c94c978fdd01c07e33299e41a2aa695f4cbf3d1
[ "Apache-2.0" ]
null
null
null
cs-master/cs401/notebook/python_basic_note.ipynb
Vince-Lau/my-study
5c94c978fdd01c07e33299e41a2aa695f4cbf3d1
[ "Apache-2.0" ]
null
null
null
18.333333
42
0.517355
[ [ [ "- dict不能使用可变数据结构作为键, 例如list, dict\n- 但是可以使用不可变数据结构作为键,例如tuple,string", "_____no_output_____" ] ], [ [ "d = {}\nd[(1,2,3)] = 'hello'\nprint(d)", "{(1, 2, 3): 'hello'}\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code" ] ]
4a867e50559cf72e0023862818e70b1b7aeb56bf
224,662
ipynb
Jupyter Notebook
content/ch-gates/more-circuit-identities.ipynb
HilmarTuneke/qiskit-textbook
b07a22209ac161d303fe4409692952f644ebd9f4
[ "Apache-2.0" ]
null
null
null
content/ch-gates/more-circuit-identities.ipynb
HilmarTuneke/qiskit-textbook
b07a22209ac161d303fe4409692952f644ebd9f4
[ "Apache-2.0" ]
null
null
null
content/ch-gates/more-circuit-identities.ipynb
HilmarTuneke/qiskit-textbook
b07a22209ac161d303fe4409692952f644ebd9f4
[ "Apache-2.0" ]
null
null
null
42.46116
1,493
0.510028
[ [ [ "# Basic Circuit Identities", "_____no_output_____" ] ], [ [ "from qiskit import QuantumCircuit, Aer, execute\nfrom qiskit.circuit import Gate\nfrom math import pi\nqc = QuantumCircuit(2)\nc = 0\nt = 1", "_____no_output_____" ] ], [ [ "When we program quantum computers, our aim is always to build useful quantum circuits from the basic building blocks. But sometimes, we might not have all the basic building blocks we want. In this section, we'll look at how we can transform basic gates into each other, and how to use them to build some gates that are slightly more complex \\(but still pretty basic\\).\n\nMany of the techniques discussed in this chapter were first proposed in a paper by Barenco and coauthors in 1995 [1].\n\n## Contents\n\n1. [Making a Controlled-Z from a CNOT](#c-from-cnot)\n2. [Swapping Qubits](#swapping) \n3. [Controlled Rotations](#controlled-rotations)\n4. [The Toffoli](#ccx)\n5. [Arbitrary rotations from H and T](#arbitrary-rotations)\n6. [References](#references)", "_____no_output_____" ], [ "## 1. Making a Controlled-Z from a CNOT <a id=\"c-from-cnot\"></a>\n\nThe controlled-Z or `cz` gate is another well-used two-qubit gate. Just as the CNOT applies an $X$ to its target qubit whenever its control is in state $|1\\rangle$, the controlled-$Z$ applies a $Z$ in the same case. In Qiskit it can be invoked directly with", "_____no_output_____" ] ], [ [ "# a controlled-Z\nqc.cz(c,t)\nqc.draw()", "_____no_output_____" ] ], [ [ "where c and t are the control and target qubits. In IBM Q devices, however, the only kind of two-qubit gate that can be directly applied is the CNOT. We therefore need a way to transform one to the other.\n\nThe process for this is quite simple. We know that the Hadamard transforms the states $|0\\rangle$ and $|1\\rangle$ to the states $|+\\rangle$ and $|-\\rangle$ respectively. We also know that the effect of the $Z$ gate on the states $|+\\rangle$ and $|-\\rangle$ is the same as that for $X$ on the states $|0\\rangle$ and $|1\\rangle$ respectively. From this reasoning, or from simply multiplying matrices, we find that\n\n$$\nH X H = Z,\\\\\\\\\nH Z H = X.\n$$\n\nThe same trick can be used to transform a CNOT into a controlled-$Z$. All we need to do is precede and follow the CNOT with a Hadamard on the target qubit. This will transform any $X$ applied to that qubit into a $Z$.", "_____no_output_____" ] ], [ [ "qc = QuantumCircuit(2)\n# also a controlled-Z\nqc.h(t)\nqc.cx(c,t)\nqc.h(t)\nqc.draw()", "_____no_output_____" ] ], [ [ "More generally, we can transform a single CNOT into a controlled version of any rotation around the Bloch sphere by an angle $\\pi$, by simply preceding and following it with the correct rotations. For example, a controlled-$Y$:", "_____no_output_____" ] ], [ [ "qc = QuantumCircuit(2)\n# a controlled-Y\nqc.sdg(t)\nqc.cx(c,t)\nqc.s(t)\nqc.draw()", "_____no_output_____" ] ], [ [ "and a controlled-$H$:", "_____no_output_____" ] ], [ [ "qc = QuantumCircuit(2)\n# a controlled-H\nqc.ry(pi/4,t)\nqc.cx(c,t)\nqc.ry(-pi/4,t)\nqc.draw()", "_____no_output_____" ] ], [ [ "## 2. Swapping Qubits <a id=\"swapping\"></a>", "_____no_output_____" ] ], [ [ "a = 0\nb = 1", "_____no_output_____" ] ], [ [ "Sometimes we need to move information around in a quantum computer. For some qubit implementations, this could be done by physically moving them. Another option is simply to move the state between two qubits. 
This is done by the SWAP gate.", "_____no_output_____" ] ], [ [ "qc = QuantumCircuit(2)\n# swaps states of qubits a and b\nqc.swap(a,b)\nqc.draw()", "_____no_output_____" ] ], [ [ "The command above directly invokes this gate, but let's see how we might make it using our standard gate set. For this, we'll need to consider a few examples.\n\nFirst, we'll look at the case that qubit a is in state $|1\\rangle$ and qubit b is in state $|0\\rangle$. For this we'll apply the following gates:", "_____no_output_____" ] ], [ [ "qc = QuantumCircuit(2)\n# swap a 1 from a to b\nqc.cx(a,b) # copies 1 from a to b\nqc.cx(b,a) # uses the 1 on b to rotate the state of a to 0\nqc.draw()", "_____no_output_____" ] ], [ [ "This has the effect of putting qubit b in state $|1\\rangle$ and qubit a in state $|0\\rangle$. In this case at least, we have done a SWAP.\n\nNow let's take this state and SWAP back to the original one. As you may have guessed, we can do this with the reverse of the above process:", "_____no_output_____" ] ], [ [ "# swap a q from b to a\nqc.cx(b,a) # copies 1 from b to a\nqc.cx(a,b) # uses the 1 on a to rotate the state of b to 0\nqc.draw()", "_____no_output_____" ] ], [ [ "Note that in these two processes, the first gate of one would have no effect on the initial state of the other. For example, when we swap the $|1\\rangle$ b to a, the first gate is `cx(b,a)`. If this were instead applied to a state where no $|1\\rangle$ was initially on b, it would have no effect.\n\nNote also that for these two processes, the final gate of one would have no effect on the final state of the other. For example, the final `cx(b,a)` that is required when we swap the $|1\\rangle$ from a to b has no effect on the state where the $|1\\rangle$ is not on b.\n\nWith these observations, we can combine the two processes by adding an ineffective gate from one onto the other. For example,", "_____no_output_____" ] ], [ [ "qc = QuantumCircuit(2)\nqc.cx(b,a)\nqc.cx(a,b)\nqc.cx(b,a)\nqc.draw()", "_____no_output_____" ] ], [ [ "We can think of this as a process that swaps a $|1\\rangle$ from a to b, but with a useless `qc.cx(b,a)` at the beginning. We can also think of it as a process that swaps a $|1\\rangle$ from b to a, but with a useless `qc.cx(b,a)` at the end. Either way, the result is a process that can do the swap both ways around.\n\nIt also has the correct effect on the $|00\\rangle$ state. This is symmetric, and so swapping the states should have no effect. Since the CNOT gates have no effect when their control qubits are $|0\\rangle$, the process correctly does nothing.\n\nThe $|11\\rangle$ state is also symmetric, and so needs a trivial effect from the swap. In this case, the first CNOT gate in the process above will cause the second to have no effect, and the third undoes the first. Therefore, the whole effect is indeed trivial.\n\nWe have thus found a way to decompose SWAP gates into our standard gate set of single-qubit rotations and CNOT gates.", "_____no_output_____" ] ], [ [ "qc = QuantumCircuit(2)\n# swaps states of qubits a and b\nqc.cx(b,a)\nqc.cx(a,b)\nqc.cx(b,a)\nqc.draw()", "_____no_output_____" ] ], [ [ "It works for the states $|00\\rangle$, $|01\\rangle$, $|10\\rangle$ and $|11\\rangle$, and if it works for all the states in the computational basis, it must work for all states generally. 
This circuit therefore swaps all possible two-qubit states.\n\nThe same effect would also result if we changed the order of the CNOT gates:", "_____no_output_____" ] ], [ [ "qc = QuantumCircuit(2)\n# swaps states of qubits a and b\nqc.cx(a,b)\nqc.cx(b,a)\nqc.cx(a,b)\nqc.draw()", "_____no_output_____" ] ], [ [ "This is an equally valid way to get the SWAP gate.\n\nThe derivation used here was very much based on the z basis states, but it could also be done by thinking about what is required to swap qubits in states $|+\\rangle$ and $|-\\rangle$. The resulting ways of implementing the SWAP gate will be completely equivalent to the ones here.\n\n#### Quick Exercise:\n- Find a different circuit that swaps qubits in the states $|+\\rangle$ and $|-\\rangle$, and show that this is equivalent to the circuit shown above.", "_____no_output_____" ] ], [ [ "from qiskit.visualization import array_to_latex\n\nbackend = Aer.get_backend('aer_simulator')\n\nqc01 = QuantumCircuit(2)\nqc01.cx(0,1)\nqc01.cx(1,0)\nqc01.cx(0,1)\nqc01.save_unitary()\ngate01 = execute(qc01,backend).result().get_unitary()\narray_to_latex(gate01, prefix=\"\\\\text{gate01} = \")", "_____no_output_____" ], [ "qcpm = QuantumCircuit(2)\nqcpm.h(0)\nqcpm.h(1)\nqcpm.cx(0,1)\nqcpm.cx(1,0)\nqcpm.cx(0,1)\nqcpm.h(0)\nqcpm.h(1)\nqcpm.save_unitary()\ngatepm = execute(qcpm,backend).result().get_unitary()\narray_to_latex(gatepm, prefix=\"\\\\text{gatepm} = \")", "_____no_output_____" ] ], [ [ "## 3. Controlled Rotations <a id=\"controlled-rotations\"></a>\n\nWe have already seen how to build controlled $\\pi$ rotations from a single CNOT gate. Now we'll look at how to build any controlled rotation.\n\nFirst, let's consider arbitrary rotations around the y axis. Specifically, consider the following sequence of gates.", "_____no_output_____" ] ], [ [ "qc = QuantumCircuit(2)\ntheta = pi # theta can be anything (pi chosen arbitrarily)\nqc.ry(theta/2,t)\nqc.cx(c,t)\nqc.ry(-theta/2,t)\nqc.cx(c,t)\nqc.draw()", "_____no_output_____" ] ], [ [ "If the control qubit is in state $|0\\rangle$, all we have here is a $R_y(\\theta/2)$ immediately followed by its inverse, $R_y(-\\theta/2)$. The end effect is trivial. If the control qubit is in state $|1\\rangle$, however, the `ry(-theta/2)` is effectively preceded and followed by an X gate. This has the effect of flipping the direction of the y rotation and making a second $R_y(\\theta/2)$. The net effect in this case is therefore to make a controlled version of the rotation $R_y(\\theta)$. \n\nThis method works because the x and y axis are orthogonal, which causes the x gates to flip the direction of the rotation. It therefore similarly works to make a controlled $R_z(\\theta)$. A controlled $R_x(\\theta)$ could similarly be made using CNOT gates.\n\nWe can also make a controlled version of any single-qubit rotation, $V$. For this we simply need to find three rotations A, B and C, and a phase $\\alpha$ such that\n\n$$\nABC = I, ~~~e^{i\\alpha}AZBZC = V\n$$\n\nWe then use controlled-Z gates to cause the first of these relations to happen whenever the control is in state $|0\\rangle$, and the second to happen when the control is state $|1\\rangle$. 
An $R_z(2\\alpha)$ rotation is also used on the control to get the right phase, which will be important whenever there are superposition states.", "_____no_output_____" ] ], [ [ "A = Gate('A', 1, [])\nB = Gate('B', 1, [])\nC = Gate('C', 1, [])\nalpha = 1 # arbitrarily define alpha to allow drawing of circuit", "_____no_output_____" ], [ "qc = QuantumCircuit(2)\nqc.append(C, [t])\nqc.cz(c,t)\nqc.append(B, [t])\nqc.cz(c,t)\nqc.append(A, [t])\nqc.p(alpha,c)\nqc.draw()", "_____no_output_____" ] ], [ [ "![A controlled version of a gate V](images/iden1.png)\n\nHere `A`, `B` and `C` are gates that implement $A$ , $B$ and $C$, respectively.", "_____no_output_____" ], [ "## 4. The Toffoli <a id=\"ccx\"></a>\n\nThe Toffoli gate is a three-qubit gate with two controls and one target. It performs an X on the target only if both controls are in the state $|1\\rangle$. The final state of the target is then equal to either the AND or the NAND of the two controls, depending on whether the initial state of the target was $|0\\rangle$ or $|1\\rangle$. A Toffoli can also be thought of as a controlled-controlled-NOT, and is also called the CCX gate.", "_____no_output_____" ] ], [ [ "qc = QuantumCircuit(3)\na = 0\nb = 1\nt = 2\n# Toffoli with control qubits a and b and target t\nqc.ccx(a,b,t)\nqc.draw()", "_____no_output_____" ] ], [ [ "To see how to build it from single- and two-qubit gates, it is helpful to first show how to build something even more general: an arbitrary controlled-controlled-U for any single-qubit rotation U. For this we need to define controlled versions of $V = \\sqrt{U}$ and $V^\\dagger$. In the code below, we use `cp(theta,c,t)` and `cp(-theta,c,t)`in place of the undefined subroutines `cv` and `cvdg` respectively. The controls are qubits $a$ and $b$, and the target is qubit $t$.", "_____no_output_____" ] ], [ [ "qc = QuantumCircuit(3)\nqc.cp(theta,b,t)\nqc.cx(a,b)\nqc.cp(-theta,b,t)\nqc.cx(a,b)\nqc.cp(theta,a,t)\nqc.draw()", "_____no_output_____" ] ], [ [ "![A doubly controlled version of a gate V](images/iden2.png)\n\nBy tracing through each value of the two control qubits, you can convince yourself that a U gate is applied to the target qubit if and only if both controls are 1. Using ideas we have already described, you could now implement each controlled-V gate to arrive at some circuit for the doubly-controlled-U gate. It turns out that the minimum number of CNOT gates required to implement the Toffoli gate is six [2].\n\n\n![A Toffoli](images/iden3.png)\n*This is a Toffoli with 3 qubits(q0,q1,q2) respectively. In this circuit example, q0 is connected with q2 but q0 is not connected with q1.\n\n\nThe Toffoli is not the unique way to implement an AND gate in quantum computing. We could also define other gates that have the same effect, but which also introduce relative phases. In these cases, we can implement the gate with fewer CNOTs.\n\nFor example, suppose we use both the controlled-Hadamard and controlled-$Z$ gates, which can both be implemented with a single CNOT. With these we can make the following circuit:", "_____no_output_____" ] ], [ [ "qc = QuantumCircuit(3)\nqc.ch(a,t)\nqc.cz(b,t)\nqc.ch(a,t)\nqc.draw()", "_____no_output_____" ] ], [ [ "For the state $|00\\rangle$ on the two controls, this does nothing to the target. For $|11\\rangle$, the target experiences a $Z$ gate that is both preceded and followed by an H. The net effect is an $X$ on the target. 
For the states $|01\\rangle$ and $|10\\rangle$, the target experiences either just the two Hadamards \\(which cancel each other out\\) or just the $Z$ \\(which only induces a relative phase\\). This therefore also reproduces the effect of an AND, because the value of the target is only changed for the $|11\\rangle$ state on the controls -- but it does it with the equivalent of just three CNOT gates.", "_____no_output_____" ], [ "## 5. Arbitrary rotations from H and T <a id=\"arbitrary-rotations\"></a>\n\nThe qubits in current devices are subject to noise, which basically consists of gates that are done by mistake. Simple things like temperature, stray magnetic fields or activity on neighboring qubits can make things happen that we didn't intend.\n\nFor large applications of quantum computers, it will be necessary to encode our qubits in a way that protects them from this noise. This is done by making gates much harder to do by mistake, or to implement in a manner that is slightly wrong.\n\nThis is unfortunate for the single-qubit rotations $R_x(\\theta)$, $R_y(\\theta)$ and $R_z(\\theta)$. It is impossible to implement an angle $\\theta$ with perfect accuracy, such that you are sure that you are not accidentally implementing something like $\\theta + 0.0000001$. There will always be a limit to the accuracy we can achieve, and it will always be larger than is tolerable when we account for the build-up of imperfections over large circuits. We will therefore not be able to implement these rotations directly in fault-tolerant quantum computers, but will instead need to build them in a much more deliberate manner.\n\nFault-tolerant schemes typically perform these rotations using multiple applications of just two gates: $H$ and $T$.\n\nThe T gate is expressed in Qiskit as `.t()`:", "_____no_output_____" ] ], [ [ "qc = QuantumCircuit(1)\nqc.t(0) # T gate on qubit 0\nqc.draw()", "_____no_output_____" ] ], [ [ "It is a rotation around the z axis by $\\theta = \\pi/4$, and so is expressed mathematically as $R_z(\\pi/4) = e^{i\\pi/8~Z}$.\n\nIn the following we assume that the $H$ and $T$ gates are effectively perfect. This can be engineered by suitable methods for error correction and fault-tolerance.\n\nUsing the Hadamard and the methods discussed in the last chapter, we can use the T gate to create a similar rotation around the x axis.", "_____no_output_____" ] ], [ [ "qc = QuantumCircuit(1)\nqc.h(0)\nqc.t(0)\nqc.h(0)\nqc.draw()", "_____no_output_____" ] ], [ [ "Now let's put the two together. Let's make the gate $R_z(\\pi/4)~R_x(\\pi/4)$.", "_____no_output_____" ] ], [ [ "qc = QuantumCircuit(1)\nqc.h(0)\nqc.t(0)\nqc.h(0)\nqc.t(0)\nqc.draw()", "_____no_output_____" ] ], [ [ "Since this is a single-qubit gate, we can think of it as a rotation around the Bloch sphere. That means that it is a rotation around some axis by some angle. We don't need to think about the axis too much here, but it clearly won't be simply x, y or z. More important is the angle.\n\nThe crucial property of the angle for this rotation is that it is an irrational multiple of $\\pi$. You can prove this yourself with a bunch of math, but you can also see the irrationality in action by applying the gate. Keeping in mind that every time we apply a rotation that is larger than $2\\pi$, we are doing an implicit modulos by $2\\pi$ on the rotation angle. Thus, repeating the combined rotation mentioned above $n$ times results in a rotation around the same axis by a different angle. 
As a hint to a rigorous proof, recall that an irrational number cannot be be written as what?\n\nWe can use this to our advantage. Each angle will be somewhere between $0$ and $2\\pi$. Let's split this interval up into $n$ slices of width $2\\pi/n$. For each repetition, the resulting angle will fall in one of these slices. If we look at the angles for the first $n+1$ repetitions, it must be true that at least one slice contains two of these angles due to the pigeonhole principle. Let's use $n_1$ to denote the number of repetitions required for the first, and $n_2$ for the second.\n\nWith this, we can prove something about the angle for $n_2-n_1$ repetitions. This is effectively the same as doing $n_2$ repetitions, followed by the inverse of $n_1$ repetitions. Since the angles for these are not equal \\(because of the irrationality\\) but also differ by no greater than $2\\pi/n$ \\(because they correspond to the same slice\\), the angle for $n_2-n_1$ repetitions satisfies\n\n$$\n\\theta_{n_2-n_1} \\neq 0, ~~~~-\\frac{2\\pi}{n} \\leq \\theta_{n_2-n_1} \\leq \\frac{2\\pi}{n} .\n$$\n\nWe therefore have the ability to do rotations around small angles. We can use this to rotate around angles that are as small as we like, just by increasing the number of times we repeat this gate.\n\nBy using many small-angle rotations, we can also rotate by any angle we like. This won't always be exact, but it is guaranteed to be accurate up to $2\\pi/n$, which can be made as small as we like. We now have power over the inaccuracies in our rotations.\n\nSo far, we only have the power to do these arbitrary rotations around one axis. For a second axis, we simply do the $R_z(\\pi/4)$ and $R_x(\\pi/4)$ rotations in the opposite order.", "_____no_output_____" ] ], [ [ "qc = QuantumCircuit(1)\nqc.t(0)\nqc.h(0)\nqc.t(0)\nqc.h(0)\nqc.draw()", "_____no_output_____" ] ], [ [ "The axis that corresponds to this rotation is not the same as that for the gate considered previously. We therefore now have arbitrary rotation around two axes, which can be used to generate any arbitrary rotation around the Bloch sphere. We are back to being able to do everything, though it costs quite a lot of $T$ gates.\n\nIt is because of this kind of application that $T$ gates are so prominent in quantum computation. In fact, the complexity of algorithms for fault-tolerant quantum computers is often quoted in terms of how many $T$ gates they'll need. This motivates the quest to achieve things with as few $T$ gates as possible. Note that the discussion above was simply intended to prove that $T$ gates can be used in this way, and does not represent the most efficient method we know.", "_____no_output_____" ], [ "## 6. References <a id=\"references\"></a>\n\n[1] [Barenco, *et al.* 1995](https://journals.aps.org/pra/abstract/10.1103/PhysRevA.52.3457?cm_mc_uid=43781767191014577577895&cm_mc_sid_50200000=1460741020)\n\n[2] [Shende and Markov, 2009](http://dl.acm.org/citation.cfm?id=2011799)", "_____no_output_____" ] ], [ [ "import qiskit.tools.jupyter\n%qiskit_version_table", "/Users/hilmar/.pyenv/versions/3.9.7/lib/python3.9/site-packages/qiskit/aqua/__init__.py:86: DeprecationWarning: The package qiskit.aqua is deprecated. It was moved/refactored to qiskit-terra For more information see <https://github.com/Qiskit/qiskit-aqua/blob/main/README.md#migration-guide>\n warn_package('aqua', 'qiskit-terra')\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ] ]
4a867f666fa7e75f520dff467241e7bc50893e8d
128,497
ipynb
Jupyter Notebook
TrayectoriaParticulas2.ipynb
SantiagoBeltran/Trabajo_1_Aceleradores
8e818311ce78a6b76c0fe690cc60ee2b58243ada
[ "MIT" ]
null
null
null
TrayectoriaParticulas2.ipynb
SantiagoBeltran/Trabajo_1_Aceleradores
8e818311ce78a6b76c0fe690cc60ee2b58243ada
[ "MIT" ]
null
null
null
TrayectoriaParticulas2.ipynb
SantiagoBeltran/Trabajo_1_Aceleradores
8e818311ce78a6b76c0fe690cc60ee2b58243ada
[ "MIT" ]
null
null
null
234.483577
37,596
0.918527
[ [ [ "%pylab inline\nimport numpy as np\nimport math", "Populating the interactive namespace from numpy and matplotlib\n" ] ], [ [ "Condiciones iniciales del problema.", "_____no_output_____" ] ], [ [ "q=-1.602176565E-19\nv_0=3.0E5\ntheta_0=0\nB=-10**(-4)\nm=9.1093829E-31\nN=1000", "_____no_output_____" ] ], [ [ "Cálculo del tiempo que tarda la partícula en volver a cruzar el eje $x$ teóricamente t definición del intervalo de tiempo a observar.", "_____no_output_____" ] ], [ [ "def tiempo(q,B,theta_0, m,N):\n t_max=(m/(q*B))*(2*theta_0+math.pi)\n dt=t_max/N\n t=np.arange(0,t_max+dt,dt)\n return t, t_max\ntime,t_max=tiempo(q,B,theta_0, m,N)\nprint(\"El tiempo total del recorrido teóricamente hasta llegar al detector es {}.\".format(t_max))", "El tiempo total del recorrido teóricamente hasta llegar al detector es 1.7861932962036851e-07.\n" ] ], [ [ "Ecuaciones de posición teóricas respecto al tiempo $t$ para $\\theta_0$ arbitrario y na velocidad $v_0$ de entrada.", "_____no_output_____" ] ], [ [ "def posicion(q,B,v_0,theta_0,m,t):\n omega=q*B/m\n x=-v_0*np.cos(theta_0-omega*t)/omega+v_0*np.cos(theta_0)/omega\n y=-v_0*np.sin(theta_0-omega*t)/omega+v_0*np.sin(theta_0)/omega\n return x,y", "_____no_output_____" ] ], [ [ "Gráfica del recorrido circular de una partícula de carga $q$ que incide en el eje $x$ con rapidez $v_0$ y ángulo de incidencia $\\theta_0=0,$debido a un campo $\\mathbf{B}$ perpendicular.", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(50/6,5))\nxTeo,yTeo=posicion(q,B,v_0,theta_0,m,time)\nplt.plot(xTeo,yTeo, label=\"Trayectoria circular\")\nplt.plot(xTeo,np.zeros(len(xTeo)), c=\"black\")\nplt.legend()\nplt.xlabel(\"Posición en x (m)\")\nplt.ylabel(\"Posición en y (m)\")", "_____no_output_____" ] ], [ [ "Cálculo de la posición final de la partícula al llegar al detector, es decir el punto en el que la trayectoria cruza el eje $x$ nuevamente.", "_____no_output_____" ] ], [ [ "x_max, y_max=posicion(q,B,v_0,theta_0,m,t_max)\nprint(\"Teóricamente a partícula alcanza el detector cuando este se encuentra en x={}m y y={}m.\".format(x_max,y_max))", "Teóricamente a partícula alcanza el detector cuando este se encuentra en x=0.0341137790890107m y y=9.663647118728019e-18m.\n" ] ], [ [ "Cálculo del momento inicial $p_0=mv_0$, final $p_f=\\frac{1}{2}qBx$ y la diferencia de momento que comprueba la conservación del momento lineal.", "_____no_output_____" ] ], [ [ "p_0=m*v_0\np_f=0.5*q*B*x_max\ndp=np.abs(p_f-p_0)\nprint(\"El momento inicial de la partícula es {} kg m/s, el momento final es {} kg m/s y la diferencia de momento es {} kg m/s.\".format(p_0,p_f,dp))", "El momento inicial de la partícula es 2.73281487e-25 kg m/s, el momento final es 2.73281487e-25 kg m/s y la diferencia de momento es 0.0 kg m/s.\n" ] ], [ [ "Definición de la función de trayectoria de la partícula que incide con rapidez $v_0$ y ángulo $\\theta_0$ a una región de campo magnético erpendicular $\\mathbf{B}$; siguiendo el paso a paso de Feynmann.", "_____no_output_____" ] ], [ [ "def pasoApaso(q,B,v_0,theta_0,m):\n N=10000\n t=0.0\n omega=q*B/m\n dt=1/(omega*N)\n x=[0]\n y=[0]\n v_x=-v_0*np.sin(theta_0)\n v_y=v_0*np.cos(theta_0)\n while y[-1]>=0:\n a_x=omega*v_y\n a_y=-omega*v_x\n x_new=x[-1]+v_x*dt\n y_new=y[-1]+v_y*dt\n x.append(x_new)\n y.append(y_new)\n v_x=v_x+a_x*dt\n v_y=v_y+a_y*dt\n t=t+dt\n x=np.array(x)\n y=np.array(y)\n return x,y,t", "_____no_output_____" ] ], [ [ "Gráfica de la trayectoria circular por medio del Método de Feynmann de una partícula de carga $q$ que incide en el eje $x$ con rapidez 
$v_0$ y ángulo de incidencia $\\theta_0=0,$debido a un campo $\\mathbf{B}$ perpendicular.", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(50/6,5))\nxF,yF,t_maxF=pasoApaso(q,B,v_0,theta_0,m)\nplt.plot(xF,yF, label=\"Trayectoria circular\")\nplt.plot(xF,np.zeros(len(xF)), c=\"black\")\nplt.legend()\nplt.xlabel(\"Posición en x (m)\")\nplt.ylabel(\"Posición en y (m)\")", "_____no_output_____" ] ], [ [ "Cálculo numérico de la posición final de la partícula al llegar al detector.", "_____no_output_____" ] ], [ [ "xF_max=xF[-1]\nyF_max=yF[-1]\nprint(\"Mediante el Método de Feynmann la partícula alcanza el detector cuando este se encuentra en x={}m y y={}m.\".format(xF_max,yF_max))", "Mediante el método de Feynmann la partícula alcanza el detector cuando este se encuentra en x=0.03411645859519082m y y=-1.2514794460159725e-07m.\n" ] ], [ [ "Cálculo del momento inicial $p_0=mv_0$, final $p_f=\\frac{1}{2}qBx$ y la diferencia de momento que comprueba la conservación del momento lineal.", "_____no_output_____" ] ], [ [ "pF_0=m*v_0\npF_f=0.5*q*B*xF_max\ndpF=np.abs(pF_f-pF_0)\nprint(\"El momento inicial de la partícula es {} kg m/s, el momento final es {} kg m/s y la diferencia de momento es {} kg m/s.\".format(pF_0,pF_f,dpF))", "El momento inicial de la partícula es 2.73281487e-25 kg m/s, el momento final es 2.733029522100378e-25 kg m/s y la diferencia de momento es 2.1465210037821538e-29 kg m/s.\n" ] ], [ [ "Definición de cambio en la velocidad y la función de trayectoria de la partícula que incide con rapidez $v_0$ y ángulo $\\theta_0$ a una región de campo magnético erpendicular $\\mathbf{B}$; siguiendo el paso a paso del Método de Runge Kutta de cuarto orden.", "_____no_output_____" ] ], [ [ "def delta(omega,v_x,v_y,dt):\n delta11=dt*omega*v_y\n delta12=dt*omega*(v_y+delta11/2)\n delta13=dt*omega*(v_y+delta12/2)\n delta14=dt*omega*(v_y+delta13)\n delta1=(delta11+2*delta12+2*delta13+delta14)/6\n delta21=-dt*omega*v_x\n delta22=-dt*omega*(v_x+delta21/2)\n delta23=-dt*omega*(v_x+delta22/2)\n delta24=-dt*omega*(v_x+delta23)\n delta2=(delta21+2*delta22+2*delta23+delta24)/6\n return delta1, delta2\n\ndef rungePaso(q,B,v_0,theta_0,m,N):\n t=0.0\n omega=q*B/m\n dt=1/(omega*N)\n x=[0]\n y=[0]\n v_x=-v_0*np.sin(theta_0)\n v_y=v_0*np.cos(theta_0)\n while y[-1]>=0:\n x_new=x[-1]+v_x*dt\n y_new=y[-1]+v_y*dt\n x.append(x_new)\n y.append(y_new)\n v_x=v_x+delta(omega,v_x,v_y,dt)[0]\n v_y=v_y+delta(omega,v_x,v_y,dt)[1]\n t=t+dt\n x=np.array(x)\n y=np.array(y)\n return x,y,t", "_____no_output_____" ] ], [ [ "Gráfica de la trayectoria circular por medio del Método de Runge Kutta 4 de una partícula de carga $q$ que incide en el eje $x$ con rapidez $v_0$ y ángulo de incidencia $\\theta_0=0,$debido a un campo $\\mathbf{B}$ perpendicular.", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(50/6,5))\nxR,yR,t_maxR=rungePaso(q,B,v_0,theta_0,m,N)\nplt.plot(xR,yR, label=\"Trayectoria circular\")\nplt.plot(xR,np.zeros(len(xR)), c=\"black\")\nplt.legend()\nplt.xlabel(\"Posición en x (m)\")\nplt.ylabel(\"Posición en y (m)\")", "_____no_output_____" ] ], [ [ "Cálculo numérico de la posición final de la partícula al llegar al detector.", "_____no_output_____" ] ], [ [ "xR_max=xR[-1]\nyR_max=yR[-1]\nprint(\"Mediante el Método de Runge Kutta 4 la partícula alcanza el detector cuando este se encuentra en x={}m y y={}m.\".format(xR_max,yR_max))", "Mediante el Método de Runge Kutta 4 la partícula alcanza el detector cuando este se encuentra en x=0.03413084088171104m y y=-6.952529174398728e-06m.\n" ] ], [ [ "Cálculo del momento 
inicial $p_0=mv_0$, final $p_f=\\frac{1}{2}qBx$ y la diferencia de momento que comprueba la conservación del momento lineal.", "_____no_output_____" ] ], [ [ "pR_0=m*v_0\npR_f=0.5*q*B*xR_max\ndpR=np.abs(pR_f-pR_0)\nprint(\"El momento inicial de la partícula es {} kg m/s, el momento final es {} kg m/s y la diferencia de momento es {} kg m/s.\".format(pR_0,pR_f,dpR))", "El momento inicial de la partícula es 2.73281487e-25 kg m/s, el momento final es 2.7341816702210685e-25 kg m/s y la diferencia de momento es 1.3668002210685885e-28 kg m/s.\n" ] ], [ [ "Comparación gráfica de los métodos numéricos para $\\theta_0=0$.", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(12,6))\nplt.title(\"Recorridos de partículas con carga {}C, masa {}kg, velocidad {}m/s, ángulo de entrada {}rad, debidas a un campo perpendicular B={}T.\".format(q,m,v_0,theta_0,B))\nplt.subplot(1,2,1)\nplt.plot(xF,yF, label=\"PasoFeynmann\", c=\"green\")\nplt.legend()\nplt.xlabel(\"Posición x (m)\")\nplt.xlabel(\"Posición y (m)\")\nplt.subplot(1,2,2)\nplt.plot(xR,yR, label=\"PasoRunge\")\nplt.legend()\nplt.xlabel(\"Posición en x (m)\")\nplt.xlabel(\"Posición en y (m)\")\nplt.savefig(\"recorridos.jpg\")", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a867fd19ab8e22730372eb12e2acda3dfe87ea7
22,322
ipynb
Jupyter Notebook
numerical/sie_black_hole_delays.ipynb
everettiantomi/plasmalens
7b08ea6980df5c9afc4d06260a169087d941b079
[ "MIT" ]
null
null
null
numerical/sie_black_hole_delays.ipynb
everettiantomi/plasmalens
7b08ea6980df5c9afc4d06260a169087d941b079
[ "MIT" ]
null
null
null
numerical/sie_black_hole_delays.ipynb
everettiantomi/plasmalens
7b08ea6980df5c9afc4d06260a169087d941b079
[ "MIT" ]
null
null
null
27.524044
155
0.505958
[ [ [ "import numpy as np\nimport matplotlib.pyplot as plt\nfrom astropy import units as u\nfrom astropy import constants as const\nimport matplotlib as mpl\nfrom jupyterthemes import jtplot #These two lines can be skipped if you are not using jupyter themes\njtplot.reset()\n\nfrom astropy.cosmology import FlatLambdaCDM\ncosmo = FlatLambdaCDM(H0=67.4, Om0=0.314)\nimport scipy as sp\nimport multiprocessing as mp\n\n\nimport time\nstart_total = time.time()", "_____no_output_____" ], [ "import os\nmy_path = '/home/tomi/Documentos/Fisica/Tesis/escrito-tesis/images/'", "_____no_output_____" ], [ "from lenstronomy.LensModel.lens_model import LensModel\nfrom lenstronomy.LensModel.lens_model_extensions import LensModelExtensions\nfrom lenstronomy.LensModel.Solver.lens_equation_solver import LensEquationSolver", "_____no_output_____" ], [ "zl = 0.2; zs = 1.2\nDl = cosmo.angular_diameter_distance(zl) \nDs = cosmo.angular_diameter_distance(zs) \nDls = cosmo.angular_diameter_distance_z1z2(zl, zs)\nG = const.G\nrho_crit = (cosmo.critical_density(zl)).to(u.kg/u.m**3)\nc_light = (const.c).to(u.cm/u.second)\n\n#r0 = 10*u.kpc\nr0 = 10.0*u.kpc\n#r0 = 0.1*u.kpc\npi = np.pi\n\ndef scale_radius(v,Dl,Ds,Dls): #this is e0 in eq 3.42 meneghetti, eq 1 barnacka 2014\n return (4.*pi*v**2/c_light**2*Dl*Dls/Ds).decompose()\ndef theta_E_SIS():\n 'in arcsec'\n pre_theta_E = (scale_radius(v,Dl,Ds,Dls)/Dl).decompose()\n return pre_theta_E*u.rad.to('arcsec', equivalencies=u.dimensionless_angles()) \n\nv = 180 *u.km/u.s\nss_r = scale_radius(v,Dl,Ds,Dls) \nprint('scale radius (m): ',ss_r)\nprint('scale radius (kpc): ',ss_r.to(u.kpc))\nprint('theta_E: ',theta_E_SIS() ,'arcsec')\ntheta_E_num = theta_E_SIS()\nelipt = 0.3\nre = (const.e.esu**2/const.m_e/(c_light**2)).decompose()\nprint('Classic electron radius: ',re)", "scale radius (m): 7.705329461274929e+19 m\nscale radius (kpc): 2.49712721364453 kpc\ntheta_E: 0.7301786792241515 arcsec\nClassic electron radius: 2.817940324670788e-15 m\n" ] ], [ [ "The lensing potential by a point mass is given by \n\n$$\n\\psi = \\frac{4GM}{c^2} \\frac{D_{ls}}{D_l D_s} ln |\\vec{\\theta}|\n$$\n\nIn terms of the Einstein radius\n\n$$\n\\theta_{e_1} = \\sqrt{\\frac{4GM}{c^2} \\frac{D_{ls}}{D_l D_s}} \\\\\n\\psi = \\theta_{e_1}^2 ln |\\vec{\\theta}|\n$$\n\nLets start with \n\n$$\nM_1 = 10^3 M_\\odot \\\\\nM_2 = 10^4 M_\\odot \\\\\nM_3 = 10^5 M_\\odot \\\\\nM_4 = 10^6 M_\\odot \\\\\nM_5 = 10^8 M_\\odot \n$$", "_____no_output_____" ], [ "The mass of the main lens is \n\n$$\nM(\\theta_e) = \\theta_e^2 \\frac{c^2}{4G} \\frac{D_l D_s}{D_{ls}}\n$$", "_____no_output_____" ] ], [ [ "M_e = (theta_E_num*u.arcsec)**2 * c_light**2 /4 / G * Dl * Ds / Dls\nM_e = (M_e/(u.rad)**2).decompose()\nms = 1.98847e30*u.kg\nM_e/ms", "_____no_output_____" ] ], [ [ "$$\nM(0.73 \\mathrm{arcsec}) = 5.91\\cdot 10^{10} M_\\odot\n$$", "_____no_output_____" ] ], [ [ "ms = 1.98847e30*u.kg #solar mass\nm1 = ms*1e3\nm2 = ms*1e4\nm3 = ms*1e5\nm4 = ms*1e6\nm5 = ms*1e8", "_____no_output_____" ], [ "theta_E_1 = np.sqrt(4*G*m1/c_light**2*Dls/Dl/Ds)\ntheta_E_1 = theta_E_1.decompose()*u.rad.to('arcsec', equivalencies=u.dimensionless_angles()) \ntheta_E_1.value", "_____no_output_____" ], [ "theta_E_2 = np.sqrt(4*G*m2/c_light**2*Dls/Dl/Ds)\ntheta_E_2 = theta_E_2.decompose()*u.rad.to('arcsec', equivalencies=u.dimensionless_angles()) \ntheta_E_2.value", "_____no_output_____" ], [ "theta_E_3 = np.sqrt(4*G*m3/c_light**2*Dls/Dl/Ds)\ntheta_E_3 = theta_E_3.decompose()*u.rad.to('arcsec', equivalencies=u.dimensionless_angles()) \ntheta_E_3.value", 
"_____no_output_____" ], [ "theta_E_4 = np.sqrt(4*G*m4/c_light**2*Dls/Dl/Ds)\ntheta_E_4 = theta_E_4.decompose()*u.rad.to('arcsec', equivalencies=u.dimensionless_angles()) \ntheta_E_4.value", "_____no_output_____" ], [ "theta_E_5 = np.sqrt(4*G*m5/c_light**2*Dls/Dl/Ds)\ntheta_E_5 = theta_E_5.decompose()*u.rad.to('arcsec', equivalencies=u.dimensionless_angles()) \ntheta_E_5.value", "_____no_output_____" ] ], [ [ "### Two black holes of $ M_4 = 10^6 M_\\odot$", "_____no_output_____" ] ], [ [ "x1 = -0.26631755 + 2e-3\ny1 = -0.26631755 \n\nx0 = y0 = - 0.26631755\nb1x = -3.5e-3 + x0 ; b1y = -2e-3 + y0\nb2x = 0 + x0 ; b2y = -1.5e-3 + y0\nb3x = 3.5e-3 + x0 ; b3y = 2.5e-3 + y0", "_____no_output_____" ], [ "x2 = -0.26631755 - 2e-3\ny2 = -0.26631755 - 4e-3", "_____no_output_____" ], [ "lens_model_list = ['SIEBH2']\nlensModel = LensModel(lens_model_list)\nlensEquationSolver = LensEquationSolver(lensModel)", "_____no_output_____" ], [ "kwargs = {'theta_E':theta_E_num.value,'eta':0*elipt, 'theta_E_1':theta_E_4.value, 'x1':x1, 'y1':y1, 'theta_E_2':theta_E_4.value, 'x2':x2, 'y2':y2}\nkwargs_lens_list = [kwargs]", "_____no_output_____" ], [ "from lenstronomy.LensModel.Profiles.sie_black_hole_2 import SIEBH2\nperfil = SIEBH2()\n\nt = [0,0,0,0]\nphi = [0,0,0,0]\n\nphi[1] = SIEBH2.function(perfil, b1x, b1y, theta_E_num.value, 0*elipt, theta_E_4.value, x1, y1, theta_E_4.value, x2, y2)\nt[1] = ((1+zl)/c_light*Ds*Dl/Dls*( 1/2*( (.25 - b1x )**2 + (.25 - b1y)**2) - phi[1])*(u.arcsec**2).to('rad**2')).to('s').value\n\nphi[2] = SIEBH2.function(perfil, b2x, b2y, theta_E_num.value, 0*elipt, theta_E_4.value, x1, y1, theta_E_4.value, x2, y2)\nt[2] = ((1+zl)/c_light*Ds*Dl/Dls*( 1/2*( (.25 - b2x )**2 + (.25 - b2y)**2) - phi[2])*(u.arcsec**2).to('rad**2')).to('s').value\n\nphi[3] = SIEBH2.function(perfil, b3x, b3y, theta_E_num.value, 0*elipt, theta_E_4.value, x1, y1, theta_E_4.value, x2, y2)\nt[3] = ((1+zl)/c_light*Ds*Dl/Dls*( 1/2*( (.25 - b3x )**2 + (.25 - b3y)**2) - phi[3])*(u.arcsec**2).to('rad**2')).to('s').value\n\n\n\nprint(t[3] - t[1])\nprint(t[3] - t[2])\nprint(t[2] - t[1])", "-8.105748056903394\n-3.967594036250375\n-4.138154020653019\n-21792.710728007325\n-21796.848882027978\n-21800.81647606423\n" ], [ "phi[1] = SIEBH2.function(perfil, b1x, b1y, theta_E_num.value, 0*elipt, theta_E_4.value, x1, y1, theta_E_4.value, x2, y2)\nt[1] = ((1+zl)/c_light*Ds*Dl/Dls*( 1/2*( (.25 - b1x )**2 + (.25 - b1y)**2) - phi[1]*0)*(u.arcsec**2).to('rad**2')).to('s').value\n\nphi[2] = SIEBH2.function(perfil, b2x, b2y, theta_E_num.value, 0*elipt, theta_E_4.value, x1, y1, theta_E_4.value, x2, y2)\nt[2] = ((1+zl)/c_light*Ds*Dl/Dls*( 1/2*( (.25 - b2x )**2 + (.25 - b2y)**2) - phi[2]*0)*(u.arcsec**2).to('rad**2')).to('s').value\n\nphi[3] = SIEBH2.function(perfil, b3x, b3y, theta_E_num.value, 0*elipt, theta_E_4.value, x1, y1, theta_E_4.value, x2, y2)\nt[3] = ((1+zl)/c_light*Ds*Dl/Dls*( 1/2*( (.25 - b3x )**2 + (.25 - b3y)**2) - phi[3]*0)*(u.arcsec**2).to('rad**2')).to('s').value\n\n\nprint(t[3] - t[1])\nprint(t[3] - t[2])\nprint(t[2] - t[1])", "-15557.484019854921\n-10126.815301985247\n-5430.668717869674\n" ], [ "phi[1] = SIEBH2.function(perfil, b1x, b1y, theta_E_num.value, 0*elipt, theta_E_4.value, x1, y1, theta_E_4.value, x2, y2)\nt[1] = ((1+zl)/c_light*Ds*Dl/Dls*( 0*1/2*( (.25 - b1x )**2 + (.25 - b1y)**2) - phi[1])*(u.arcsec**2).to('rad**2')).to('s').value\n\nphi[2] = SIEBH2.function(perfil, b2x, b2y, theta_E_num.value, 0*elipt, theta_E_4.value, x1, y1, theta_E_4.value, x2, y2)\nt[2] = ((1+zl)/c_light*Ds*Dl/Dls*( 0*1/2*( (.25 - b2x )**2 + (.25 
- b2y)**2) - phi[2])*(u.arcsec**2).to('rad**2')).to('s').value\n\nphi[3] = SIEBH2.function(perfil, b3x, b3y, theta_E_num.value, 0*elipt, theta_E_4.value, x1, y1, theta_E_4.value, x2, y2)\nt[3] = ((1+zl)/c_light*Ds*Dl/Dls*( 0*1/2*( (.25 - b3x )**2 + (.25 - b3y)**2) - phi[3])*(u.arcsec**2).to('rad**2')).to('s').value\n\n\nprint(t[3] - t[1])\nprint(t[3] - t[2])\nprint(t[2] - t[1])", "15549.378271797788\n10122.84770794888\n5426.530563848908\n" ] ], [ [ "# massless black holes", "_____no_output_____" ] ], [ [ "from lenstronomy.LensModel.Profiles.sie_black_hole_2 import SIEBH2\nperfil = SIEBH2()\n\nt = [0,0,0,0]\nphi = [0,0,0,0]\n\nphi[1] = SIEBH2.function(perfil, b1x, b1y, theta_E_num.value, 0*elipt, 0*theta_E_4.value, x1, y1, 0*theta_E_4.value, x2, y2)\nt[1] = ((1+zl)/c_light*Ds*Dl/Dls*( 1/2*( (.25 - b1x )**2 + (.25 - b1y)**2) - phi[1])*(u.arcsec**2).to('rad**2')).to('s').value\n\nphi[2] = SIEBH2.function(perfil, b2x, b2y, theta_E_num.value, 0*elipt, 0*theta_E_4.value, x1, y1, 0*theta_E_4.value, x2, y2)\nt[2] = ((1+zl)/c_light*Ds*Dl/Dls*( 1/2*( (.25 - b2x )**2 + (.25 - b2y)**2) - phi[2])*(u.arcsec**2).to('rad**2')).to('s').value\n\nphi[3] = SIEBH2.function(perfil, b3x, b3y, theta_E_num.value, 0*elipt, 0*theta_E_4.value, x1, y1, 0*theta_E_4.value, x2, y2)\nt[3] = ((1+zl)/c_light*Ds*Dl/Dls*( 1/2*( (.25 - b3x )**2 + (.25 - b3y)**2) - phi[3])*(u.arcsec**2).to('rad**2')).to('s').value\n\n\n\nprint(t[3] - t[1])\nprint(t[3] - t[2])\nprint(t[2] - t[1])", "4.394162934288033\n22.793922406694037\n-18.399759472406004\n" ], [ "phi[1] = SIEBH2.function(perfil, b1x, b1y, theta_E_num.value, 0*elipt, 0*theta_E_4.value, x1, y1, 0*theta_E_4.value, x2, y2)\nt[1] = ((1+zl)/c_light*Ds*Dl/Dls*( 1/2*( (.25 - b1x )**2 + (.25 - b1y)**2) - phi[1]*0)*(u.arcsec**2).to('rad**2')).to('s').value\n\nphi[2] = SIEBH2.function(perfil, b2x, b2y, theta_E_num.value, 0*elipt, 0*theta_E_4.value, x1, y1, 0*theta_E_4.value, x2, y2)\nt[2] = ((1+zl)/c_light*Ds*Dl/Dls*( 1/2*( (.25 - b2x )**2 + (.25 - b2y)**2) - phi[2]*0)*(u.arcsec**2).to('rad**2')).to('s').value\n\nphi[3] = SIEBH2.function(perfil, b3x, b3y, theta_E_num.value, 0*elipt, 0*theta_E_4.value, x1, y1, 0*theta_E_4.value, x2, y2)\nt[3] = ((1+zl)/c_light*Ds*Dl/Dls*( 1/2*( (.25 - b3x )**2 + (.25 - b3y)**2) - phi[3]*0)*(u.arcsec**2).to('rad**2')).to('s').value\n\n\nprint(t[3] - t[1])\nprint(t[3] - t[2])\nprint(t[2] - t[1])", "-15557.484019854921\n-10126.815301985247\n-5430.668717869674\n" ], [ "phi[1] = SIEBH2.function(perfil, b1x, b1y, theta_E_num.value, 0*elipt, 0*theta_E_4.value, x1, y1, 0*theta_E_4.value, x2, y2)\nt[1] = ((1+zl)/c_light*Ds*Dl/Dls*( 0*1/2*( (.25 - b1x )**2 + (.25 - b1y)**2) - phi[1])*(u.arcsec**2).to('rad**2')).to('s').value\n\nphi[2] = SIEBH2.function(perfil, b2x, b2y, theta_E_num.value, 0*elipt, 0*theta_E_4.value, x1, y1, 0*theta_E_4.value, x2, y2)\nt[2] = ((1+zl)/c_light*Ds*Dl/Dls*( 0*1/2*( (.25 - b2x )**2 + (.25 - b2y)**2) - phi[2])*(u.arcsec**2).to('rad**2')).to('s').value\n\nphi[3] = SIEBH2.function(perfil, b3x, b3y, theta_E_num.value, 0*elipt, 0*theta_E_4.value, x1, y1, 0*theta_E_4.value, x2, y2)\nt[3] = ((1+zl)/c_light*Ds*Dl/Dls*( 0*1/2*( (.25 - b3x )**2 + (.25 - b3y)**2) - phi[3])*(u.arcsec**2).to('rad**2')).to('s').value\n\n\nprint(t[3] - t[1])\nprint(t[3] - t[2])\nprint(t[2] - t[1])", "15561.878182789194\n10149.609224391985\n5412.26895839721\n" ] ], [ [ "### $ M_1 = 10^3 M_\\odot$", "_____no_output_____" ] ], [ [ "x1 = -0.06630 - .05e-3\ny1 = -0.0662868 \nx2 = -0.06630 + .05e-3\ny2 = -0.0662868 ", "_____no_output_____" ], [ "lens_model_list = 
['SIEBH2']\nlensModel = LensModel(lens_model_list)\nlensEquationSolver = LensEquationSolver(lensModel)", "_____no_output_____" ], [ "kwargs = {'theta_E':theta_E_num.value,'eta':0*elipt, 'theta_E_1':theta_E_1.value, 'x1':x1, 'y1':y1, 'theta_E_2':theta_E_1.value, 'x2':x2, 'y2':y2}\nkwargs_lens_list = [kwargs]", "_____no_output_____" ], [ "x0 = -.0662968\ny0 = -.0663026\n#blobs position\nb1x = x0 - -.1e-3 ; b1y = y0 - -.1e-3\nb2x = x0 - .1e-3 ; b2y = y0 - .1e-3", "_____no_output_____" ], [ "from lenstronomy.LensModel.Profiles.sie_black_hole_2 import SIEBH2\nperfil = SIEBH2()\n\nt_ = [0,0]\nphi = [0,0]\n\nphi[0] = SIEBH2.function(perfil, b1x, b1y, theta_E_num.value, 0*elipt, theta_E_4.value, x1, y1, theta_E_4.value, x2, y2)\nt_[0] = ((1+zl)/c_light*Ds*Dl/Dls*( 1/2*( (.25 - b1x )**2 + (.25 - b1y)**2) - phi[0])*(u.arcsec**2).to('rad**2')).to('s').value\n\nphi[1] = SIEBH2.function(perfil, b2x, b2y, theta_E_num.value, 0*elipt, theta_E_4.value, x1, y1, theta_E_4.value, x2, y2)\nt_[1] = ((1+zl)/c_light*Ds*Dl/Dls*( 1/2*( (.25 - b2x )**2 + (.25 - b2y)**2) - phi[1])*(u.arcsec**2).to('rad**2')).to('s').value\n\n\nprint(t_[0] - t_[1])", "216.6083137965179\n" ], [ "t_ = [0,0]\nphi = [0,0]\n\nphi[0] = SIEBH2.function(perfil, b1x, b1y, theta_E_num.value, 0*elipt, theta_E_4.value, x1, y1, theta_E_4.value, x2, y2)\nt_[0] = ((1+zl)/c_light*Ds*Dl/Dls*( 1/2*( (.25 - b1x )**2 + (.25 - b1y)**2) - phi[0]*0)*(u.arcsec**2).to('rad**2')).to('s').value\n\nphi[1] = SIEBH2.function(perfil, b2x, b2y, theta_E_num.value, 0*elipt, theta_E_4.value, x1, y1, theta_E_4.value, x2, y2)\nt_[1] = ((1+zl)/c_light*Ds*Dl/Dls*( 1/2*( (.25 - b2x )**2 + (.25 - b2y)**2) - phi[1]*0)*(u.arcsec**2).to('rad**2')).to('s').value\n\n\nprint(t_[0] - t_[1])", "-331.5627250271791\n" ], [ "t_ = [0,0]\nphi = [0,0]\n\nphi[0] = SIEBH2.function(perfil, b1x, b1y, theta_E_num.value, 0*elipt, theta_E_4.value, x1, y1, theta_E_4.value, x2, y2)\nt_[0] = ((1+zl)/c_light*Ds*Dl/Dls*( 0*1/2*( (.25 - b1x )**2 + (.25 - b1y)**2) - phi[0])*(u.arcsec**2).to('rad**2')).to('s').value\n\nphi[1] = SIEBH2.function(perfil, b2x, b2y, theta_E_num.value, 0*elipt, theta_E_4.value, x1, y1, theta_E_4.value, x2, y2)\nt_[1] = ((1+zl)/c_light*Ds*Dl/Dls*( 0*1/2*( (.25 - b2x )**2 + (.25 - b2y)**2) - phi[1])*(u.arcsec**2).to('rad**2')).to('s').value\n\n\nprint(t_[0] - t_[1])", "548.1710388237261\n" ], [ "end_total = time.time()\nprint('total time: ',(end_total-start_total)/60.,' minutes')", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a86858faba37716c8bac941e9796cd9752feae6
48,643
ipynb
Jupyter Notebook
archive/Rdata/GO_term_analysis/R_Maaslin2/GO_TERM_name_mismatch_troubleshooting_19_NOV_2020.ipynb
COV-IRT/-microbial
02e3ae2a040baab035db1e2c3a51bb09619d5dea
[ "MIT" ]
null
null
null
archive/Rdata/GO_term_analysis/R_Maaslin2/GO_TERM_name_mismatch_troubleshooting_19_NOV_2020.ipynb
COV-IRT/-microbial
02e3ae2a040baab035db1e2c3a51bb09619d5dea
[ "MIT" ]
null
null
null
archive/Rdata/GO_term_analysis/R_Maaslin2/GO_TERM_name_mismatch_troubleshooting_19_NOV_2020.ipynb
COV-IRT/-microbial
02e3ae2a040baab035db1e2c3a51bb09619d5dea
[ "MIT" ]
null
null
null
165.452381
41,688
0.902905
[ [ [ "# Issue: the GO Term tags and names are mismatched on the output of the maaslin2 term files\n## Objective: Identify and repair the mismatached go tags and names", "_____no_output_____" ] ], [ [ "library(phyloseq)\nsetwd(\"/media/jochum00/Aagaard_Raid3/microbial/GO_term_analysis/R_Maaslin2/\") # Change the current working directory ", "_____no_output_____" ] ], [ [ "import the base phyloseq object", "_____no_output_____" ] ], [ [ "bac_pseq<-readRDS(\"bac_pseq.rds\")", "_____no_output_____" ], [ "write.table(as.data.frame(tax_table(bac_pseq)),file = \"bac_pseq_tax_table.tsv\",sep = \"\\t\")", "_____no_output_____" ] ], [ [ "These names are ok, so what is the difference", "_____no_output_____" ] ], [ [ "bac_pseq_no_neg<-subset_samples(bac_pseq, sample_type!=\"neg_control\")", "_____no_output_____" ], [ "write.table(as.data.frame(tax_table(bac_pseq_no_neg)),file = \"bac_pseq_no_neg_tax_table.tsv\",sep = \"\\t\")", "_____no_output_____" ] ], [ [ "names are still ok", "_____no_output_____" ] ], [ [ "bac_pseq_no_neg<-subset_samples(bac_pseq_no_neg, sample_type!=\"Unknown\")", "_____no_output_____" ], [ "write.table(as.data.frame(tax_table(bac_pseq_no_neg)),file = \"bac_pseq_no_neg_tax_table.tsv\",sep = \"\\t\")", "_____no_output_____" ] ], [ [ "still ok", "_____no_output_____" ] ], [ [ "names<-paste(taxa_names(bac_pseq_no_neg),get_taxa_unique(bac_pseq_no_neg,taxonomic.rank = \"name\" ),sep = \"-\")\ntaxa_names(bac_pseq_no_neg)<-names", "_____no_output_____" ], [ "write.table(as.data.frame(tax_table(bac_pseq_no_neg)),file = \"bac_pseq_no_neg_tax_table.tsv\",sep = \"\\t\")", "_____no_output_____" ] ], [ [ "This messed it up somehow", "_____no_output_____" ], [ "GO:0030640\tpolyketide catabolic process\tTRUE\tGO:0030640-polyketide catabolic process\tbiological_process\t4\tpolyketide catabolic process\nGO:0021731\ttrigeminal motor nucleus development\tTRUE\tGO:0021731-trigeminal motor nucleus development\tbiological_process\t4\ttrigeminal motor nucleus development\nGO:0032970\tregulation of actin filament\tFALSE\tGO:0032970-regulation of actin filament-based process\tbiological_process\t4\tregulation of actin filament-based process\nGO:0002009\tmorphogenesis of an epithelium\tTRUE\tGO:0002009-morphogenesis of an epithelium\tbiological_process\t4\tmorphogenesis of an epithelium\nGO:0060359\tadhesion of symbiont to host cell\tFALSE\tGO:0060359-adhesion of symbiont to host cell\tbiological_process\t4\tresponse to ammonium ion\nGO:0044650\tviral tail assembly\tFALSE\tGO:0044650-viral tail assembly\tbiological_process\t4\tadhesion of symbiont to host cell\nGO:0098003\tinflammatory response\tFALSE\tGO:0098003-inflammatory response\tbiological_process\t4\tviral tail assembly\nGO:0006954\thydrogen peroxide metabolic process\tFALSE\tGO:0006954-hydrogen peroxide metabolic process\tbiological_process\t4\tinflammatory response\n![image.png](attachment:image.png)", "_____no_output_____" ], [ "## I THINK IT MESSED UP BECAUSE OF THE \"-\" DELIMITER \nLets try to fix that", "_____no_output_____" ] ], [ [ "bac_pseq_no_neg<-subset_samples(bac_pseq, sample_type!=\"neg_control\")\nbac_pseq_no_neg<-subset_samples(bac_pseq_no_neg, sample_type!=\"Unknown\")", "_____no_output_____" ], [ "length(taxa_names(bac_pseq_no_neg))\nlength(get_taxa_unique(bac_pseq_no_neg,taxonomic.rank = \"name\")) ", "_____no_output_____" ] ], [ [ "## Well this is an issue lol", "_____no_output_____" ] ], [ [ 
"tax<-data.frame(tax_table(bac_pseq_no_neg))\nnames<-paste(rownames(tax),tax$name,sep=\"-\")\nlength(names)\ntaxa_names(bac_pseq_no_neg)<-names", "_____no_output_____" ] ], [ [ "Could it of really been that easy?", "_____no_output_____" ] ], [ [ "write.table(as.data.frame(tax_table(bac_pseq_no_neg)),file = \"bac_pseq_no_neg_tax_table.tsv\",sep = \"\\t\")", "_____no_output_____" ] ], [ [ "WOW it really was that easy! Problem Solved!", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
4a8695478e7fabdfecd0a0de7318025509cafae9
75,107
ipynb
Jupyter Notebook
tictactrip_internship_exercise_El_Mehdi_CHOUHAM.ipynb
G4nnesh/internship_exercise_for_tictactrip
1e3b6f1375a8eef23a46a1af1e073c4784956133
[ "MIT" ]
null
null
null
tictactrip_internship_exercise_El_Mehdi_CHOUHAM.ipynb
G4nnesh/internship_exercise_for_tictactrip
1e3b6f1375a8eef23a46a1af1e073c4784956133
[ "MIT" ]
null
null
null
tictactrip_internship_exercise_El_Mehdi_CHOUHAM.ipynb
G4nnesh/internship_exercise_for_tictactrip
1e3b6f1375a8eef23a46a1af1e073c4784956133
[ "MIT" ]
null
null
null
32.584382
9,977
0.402479
[ [ [ "#El Mehdi CHOUHAM version 1\n# [email protected] for Tictactrip ", "_____no_output_____" ], [ "import pandas as pd\nimport numpy as np\nimport datetime as dt\nimport plotly.graph_objects as go\n\nfrom sklearn.preprocessing import OneHotEncoder\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.linear_model import ARDRegression", "_____no_output_____" ], [ "ticket_data=pd.read_csv('./data/ticket_data.csv', sep=',', index_col='id')\nstations=pd.read_csv('./data/stations.csv', sep=',', index_col='id')\nproviders=pd.read_csv('./data/providers.csv', sep=',', index_col='id')\ncities=pd.read_csv('./data/cities.csv', sep=',', index_col='id')", "_____no_output_____" ] ], [ [ "## Summary :", "_____no_output_____" ], [ "1 - Dataset review\n <br>2 - Data extraction exercise\n <br>&nbsp;&nbsp;2 - 1 - Prices\n <br>&nbsp;&nbsp;2 - 2 - Durations\n <br>&nbsp;&nbsp;2 - 3 - Price and duration difference per transport mean and per travel range\n <br>3 - Bonus\n <br>&nbsp;&nbsp;3 - 1 - Kilometer price per company\n <br>&nbsp;&nbsp;3 - 1 - New trainline !", "_____no_output_____" ], [ "## 1 - Datasets review", "_____no_output_____" ] ], [ [ "providers.head()", "_____no_output_____" ], [ "ticket_data.head(2)", "_____no_output_____" ], [ "stations.head(2)", "_____no_output_____" ], [ "providers.head(2)", "_____no_output_____" ], [ "cities.head(2)", "_____no_output_____" ] ], [ [ "## 2 - Data extraction exercise", "_____no_output_____" ], [ "### 2 - 1 - Prices", "_____no_output_____" ] ], [ [ "df = ticket_data.merge(providers['fullname'], left_on='company', right_on='id', how='left')\ndf = df.merge(stations['unique_name'], left_on='o_city', right_on='id', how='left').merge(stations['unique_name'], left_on='d_city', right_on='id', how='left').rename(index=str,\n columns={\"unique_name_x\": \"o_station_name\", \"unique_name_y\" : \"d_station_name\", \"price_in_cents\" : \"price_in_euros\"})\n\ndf = df.drop(columns=['arrival_ts','company', 'o_station', 'd_station', 'o_city', 'd_city', 'departure_ts', 'search_ts', 'middle_stations', 'other_companies'])\n\ndf.price_in_euros = round(df.price_in_euros/100, 5)\n\ndf.describe().loc[['mean','min','max']]", "_____no_output_____" ] ], [ [ "### 2 - 2 - Durations", "_____no_output_____" ] ], [ [ "ticket_data['duration']=pd.to_datetime(ticket_data['arrival_ts'])-pd.to_datetime(ticket_data['departure_ts'])", "_____no_output_____" ], [ "ticket_data.loc[:,['price_in_cents','duration']].describe().loc[['mean','min','max']]", "_____no_output_____" ] ], [ [ "### 2 - 3 - Price and duration difference per transport mean and per travel range", "_____no_output_____" ] ], [ [ " def absc_curvi(o_lat, d_lat, delta_long) :\n return m.acos( m.sin(o_lat)*m.sin(d_lat) + m.cos(o_lat)*m.cos(d_lat)*m.cos(delta_long))", "_____no_output_____" ], [ "r_terre = 6378137\nto_rad = np.pi/180\n\n#Calcul et ajout de la distance à partir de l'abscisse curviligne \n\ncoords = ticket_data.merge(stations, how='left', right_on='id', left_on='o_station').rename(index=str, columns={\"longitude\": \"o_longitude\", \"latitude\": \"o_latitude\", \"unique_name\": \"o_station\"})\ncoords = coords.merge(stations, how='left', right_on='id', left_on='d_station').rename(columns={\"longitude\": \"d_longitude\", \"latitude\": \"d_latitude\", \"unique_name\": \"d_station\"})\n\ncoords['delta_long'] = coords.d_longitude - coords.o_longitude\ncoords['absc_curvi'] = np.arccos( np.sin(coords.o_latitude*to_rad)*np.sin(coords.d_latitude*to_rad) + 
np.cos(coords.o_latitude*to_rad)*np.cos(coords.d_latitude*to_rad)*np.cos(coords.delta_long*to_rad))\ncoords['distance'] = r_terre * coords['absc_curvi'] /1000\n\ncoords = coords.drop(columns=[ 'middle_stations', 'o_station', 'd_station', 'departure_ts', 'arrival_ts', 'search_ts', 'delta_long', 'o_latitude', 'o_longitude','d_latitude', 'd_longitude', 'other_companies'])\n\n# coords[coords.o_name == \"Massy-Palaiseau\"][coords.d_name == \"Gare Lille-Europe\"] matches google maps distance !!", "_____no_output_____" ], [ "# Prise en compte du type de transport\ntrajets = coords.merge(providers['transport_type'], left_on='company', right_on='id', how='left').drop(columns=[ 'company', 'o_city', 'd_city'])\ntrajets", "_____no_output_____" ], [ "trajets_0_200 = trajets[trajets['distance'] <= 200]\ntrajets_201_800 = trajets[(trajets['distance'] > 200) & (trajets['distance'] < 800)]\ntrajets_800_2000 = trajets[(800 < trajets['distance']) & (trajets['distance'] < 2000)]\ntrajets_over_2000 = trajets[trajets['distance'] > 2000]", "_____no_output_____" ], [ "print(\"Le plus grand trajet est : \", trajets.distance.max(), \"Km !\\n\")", "Le plus grand trajet est : 1867.5584602995452 Km !\n\n" ] ], [ [ "### Paths per travel ranges :", "_____no_output_____" ] ], [ [ "transport_types = []; list_0_200 = []; list_201_800 = []; list_800_2000 = []; list_over_2000 = [];\n\nfor each in pd.unique(trajets['transport_type']) :\n transport_types.append(each)\n\nfor each in transport_types:\n seconds = trajets_0_200.duration[trajets_0_200['transport_type'] == each].mean()\n list_0_200.append([trajets_0_200.price_in_cents[trajets_0_200['transport_type'] == each].mean(), seconds.total_seconds()]) \n\nfor each in transport_types:\n seconds = trajets_201_800.duration[trajets_201_800['transport_type'] == each].mean()\n list_201_800.append([trajets_201_800.price_in_cents[trajets_201_800['transport_type'] == each].mean(), seconds.total_seconds()]) \n\nfor each in transport_types:\n seconds = trajets_800_2000.duration[trajets_800_2000['transport_type'] == each].mean()\n list_800_2000.append([trajets_800_2000.price_in_cents[trajets_800_2000['transport_type'] == each].mean(), seconds.total_seconds()]) \n \nfor each in transport_types:\n seconds = trajets_over_2000.duration[trajets_over_2000['transport_type'] == each].mean()\n list_over_2000.append([trajets_over_2000.price_in_cents[trajets_over_2000['transport_type'] == each].mean(), seconds.total_seconds()]) \n\naverages = np.array([list_0_200, list_201_800, list_800_2000, list_over_2000]) ", "_____no_output_____" ], [ "travel_ranges = ['from 0 Km to 200 Km', '201 Km to 800 Km', '801 Km to 2000 Km'] #, 'over 2000 Km'], dismissed\nfig_1 = go.Figure()\nfig_1.data = []; fig_1.layout={} #reset\n\n\nfor idx, each_transport in enumerate(transport_types) :\n fig_1.add_trace(go.Scatter(x = travel_ranges, y = averages[:, idx, 0]/100, name=each_transport))\n\nfig_1.update_layout(title='Average Price per travel range per transport mean',\n xaxis_title='Travel ranges',\n yaxis_title='Price (in euros)')\nfig_1.show()", "_____no_output_____" ], [ "fig_2 = go.Figure()\nfig_2.data = []; fig_2.layout = {} #reset\n\nfor idx, each_transport in enumerate(transport_types) :\n fig_2.add_trace(go.Scatter(x = travel_ranges, y = averages[:, idx, 1]/60, name=each_transport))\n\nfig_2.update_layout(title='Average Duration per travel range per transport mean',\n xaxis_title='Travel ranges',\n yaxis_title='Duration (in minutes)')\nfig_2.show()", "_____no_output_____" ] ], [ [ "## 3 - Bonus", 
"_____no_output_____" ], [ "### 3 - 1 - Kilometer price per company", "_____no_output_____" ] ], [ [ "company_info = 'fullname' #or fullname\n\ntrajets_companies = coords.merge(providers[['name','fullname']], how='left', right_on='id', left_on='company')\ntrajets_companies['kilometer_price_in_euros'] = trajets_companies['price_in_cents']/(100*trajets['distance'])\ntrajets_companies = trajets_companies.drop(columns=['price_in_cents', 'company', 'duration', 'absc_curvi'])", "_____no_output_____" ], [ "fig_3 = go.Figure(\n data=[go.Bar(x=trajets_companies.groupby([company_info]).kilometer_price_in_euros.mean().index, y=trajets_companies.groupby([company_info]).kilometer_price_in_euros.mean())],\n)\n\nfig_3.update_layout(title=\"Kilometer price per company\",\n xaxis_title='Providers',\n yaxis_title='Kilometer price (euros/Km)')\nfig_3.show()", "_____no_output_____" ] ], [ [ "### 3 - 2 - New trainline !", "_____no_output_____" ] ], [ [ "#dataset\n\ncities.loc[(cities.unique_name == 'paris'),'population'] = 2187526 \n\nnl = ticket_data.merge(cities[['unique_name', 'latitude', 'longitude', 'population']], how='left', right_on='id', left_on='o_city').rename(index=str, columns={\"longitude\": \"o_longitude\", \"latitude\": \"o_latitude\", \"unique_name\": \"o_city_name\", 'population': 'o_city_pop'})\nnl = nl.merge(cities[['unique_name', 'latitude', 'longitude', 'population']], how='left', right_on='id', left_on='d_city').rename(columns={\"longitude\": \"d_longitude\", \"latitude\": \"d_latitude\", \"unique_name\": \"d_city_name\", 'population': 'd_city_pop'})\nnl = nl.merge(providers['fullname'], how='left', right_on='id', left_on='company')\n\ndelta_long = nl.d_longitude - nl.o_longitude\nnl['distance'] = r_terre * (np.arccos( np.sin(nl.o_latitude*to_rad)*np.sin(nl.d_latitude*to_rad) + np.cos(nl.o_latitude*to_rad)*np.cos(nl.d_latitude*to_rad)*np.cos(delta_long*to_rad)))/1000\n\nnl = nl.rename(columns={'price_in_cents' : 'price_in_euros'})\nnl['price_in_euros'] = nl['price_in_euros']/100\n\nnl['month'] = pd.DatetimeIndex(nl['departure_ts']).month\nnl['hour'] = pd.DatetimeIndex(nl['departure_ts']).hour\nnl['delta_purchase_hours'] = (pd.to_datetime(nl['departure_ts'])-pd.to_datetime(nl['search_ts'])).dt.total_seconds()/3600\nnl['duration'] = nl['duration'].dt.total_seconds()/60\n\nnl = nl.drop(columns=[ 'middle_stations', 'o_station', 'd_station', 'departure_ts', 'arrival_ts', 'search_ts', 'o_latitude', 'o_longitude','d_latitude', 'd_longitude', 'other_companies', 'o_city', 'd_city', 'company'])", "_____no_output_____" ], [ "#we only keep, datas we think are relevant for our prediction; here :\n # origine city and destination city names and populations, the month and the hour for the travel, company and the search ts\nnl = nl.dropna()\nnl=nl.reindex(columns=list(nl.columns)[1:]+[list(nl.columns)[0]])\nnl.head(2)", "_____no_output_____" ] ], [ [ "### Encoding and training", "_____no_output_____" ] ], [ [ "encod = OneHotEncoder(handle_unknown='ignore')\nnl_str = nl[['o_city_name','d_city_name', 'fullname']]\n\nencoded_nl_str = encod.fit_transform(nl_str)\n\nencoded_nl = pd.concat((pd.DataFrame(encoded_nl_str.toarray(), index=nl.index), nl[[ 'duration', 'o_city_pop', 'd_city_pop', 'distance', 'month', 'hour','delta_purchase_hours','price_in_euros']]), axis=1) ", "_____no_output_____" ], [ "X_train, X_test, y_train, y_test = train_test_split(encoded_nl.iloc[:,:-1],encoded_nl.iloc[:,-1], test_size=0.2, random_state=42, shuffle=True)", "_____no_output_____" ], [ "%%time\n\nclf = 
ARDRegression()\nclf.fit(X_train, y_train)\n\nclf.score(X_test, y_test)", "Wall time: 7.33 s\n" ] ], [ [ "### Prediction", "_____no_output_____" ], [ "An interesting trip we can predict the price of is a high speed trainline between Lyon and Bordeaux, two big cities to which people usually travel to by plane for the lack of a 'TGV'\n\nWe provide :\n- the duration\n- both cities populations and names\n- the month of departure\n- the hour of departure\n- the time difference between the ticket purchase and the departure in minutes.", "_____no_output_____" ] ], [ [ "def f(V):\n return np.concatenate((encod.transform(np.array(V[:3]).reshape((1,-1))).toarray()[0], V[3:])).reshape((1,-1))\ndef predict(trip):\n return round(float(clf.predict(f(trip))), 2)", "_____no_output_____" ], [ "%%time\n\ntrip = ['lyon', #o_city_name\n 'bordeaux', #d_city_name\n 'TGV', #provider fullname\n 513275, #duration\n 249712, #o_city_pop\n 1122005, #d_city_name\n 800, #distance\n 2, #month\n 12, #hour\n 80] #delta_purchase_hours\n \n\nprint('Le prix d\\'un voyage de ', trip[1], ' vers ', trip[2], 'est de : \\n', predict(trip), 'euros ! \\n')", "Le prix d'un voyage de bordeaux vers TGV est de : \n 110.64 euros ! \n\nWall time: 3.01 ms\n" ] ], [ [ "### Other interesting ideas :", "_____no_output_____" ], [ "- Group by paths, for a interesting paths, plot price curves (+ / time lapses)\n- Group by paths, mean price for transportation_mean/paths, ( mean price for class_paths)\n- Joint company & provider for same class, (donnée inconnues ?)\n- Draw coordinates on map\n- Correlation between number of travels per region\n- Pour une nouvelle ligne, on peut estimer le prix en fonction du prix du kilometre, denivelée des coordonnées ?; populations des villes d'orignies/ destination. + période creuse \n- Utilisation de labelles d'édition d'étiquette/A* pour trouver le trajet le plus court d'une station 1 à une station 2 à partir de la liste des stations (théorie des graphes)", "_____no_output_____" ], [ "### Thank you !", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ] ]
4a869bbd451a179c521bacab1ecc65b078794824
19,516
ipynb
Jupyter Notebook
nbs/02_data.load.ipynb
manycoding/fastai2
18986a55b20c0e5d2c262e14043346fcfae22a76
[ "Apache-2.0" ]
null
null
null
nbs/02_data.load.ipynb
manycoding/fastai2
18986a55b20c0e5d2c262e14043346fcfae22a76
[ "Apache-2.0" ]
null
null
null
nbs/02_data.load.ipynb
manycoding/fastai2
18986a55b20c0e5d2c262e14043346fcfae22a76
[ "Apache-2.0" ]
null
null
null
31.2256
215
0.552982
[ [ [ "#default_exp data.load", "_____no_output_____" ], [ "#export\nfrom fastai2.torch_basics import *\n\nfrom torch.utils.data.dataloader import _MultiProcessingDataLoaderIter,_SingleProcessDataLoaderIter,_DatasetKind\n_loaders = (_MultiProcessingDataLoaderIter,_SingleProcessDataLoaderIter)", "_____no_output_____" ], [ "from nbdev.showdoc import *", "_____no_output_____" ], [ "bs = 4\nletters = list(string.ascii_lowercase)", "_____no_output_____" ] ], [ [ "## DataLoader", "_____no_output_____" ] ], [ [ "#export\ndef _wif(worker_id):\n set_num_threads(1)\n info = get_worker_info()\n ds = info.dataset.d\n ds.nw,ds.offs = info.num_workers,info.id\n set_seed(info.seed)\n ds.wif()\n\nclass _FakeLoader(GetAttr):\n _auto_collation,collate_fn,drop_last,dataset_kind,_dataset_kind,_index_sampler = (\n False,noops,False,_DatasetKind.Iterable,_DatasetKind.Iterable,Inf.count)\n def __init__(self, d, pin_memory, num_workers, timeout):\n self.dataset,self.default,self.worker_init_fn = self,d,_wif\n store_attr(self, 'd,pin_memory,num_workers,timeout')\n\n def __iter__(self): return iter(self.d.create_batches(self.d.sample()))\n\n @property\n def multiprocessing_context(self): return (None,multiprocessing)[self.num_workers>0]\n\n @contextmanager\n def no_multiproc(self):\n old_nw = self.num_workers\n try:\n self.num_workers = 0\n yield self.d\n finally: self.num_workers = old_nw\n\n_collate_types = (ndarray, Tensor, typing.Mapping, str)", "_____no_output_____" ], [ "#export\ndef fa_collate(t):\n b = t[0]\n return (default_collate(t) if isinstance(b, _collate_types)\n else type(t[0])([fa_collate(s) for s in zip(*t)]) if isinstance(b, Sequence)\n else default_collate(t))", "_____no_output_____" ], [ "#e.g. x is int, y is tuple\nt = [(1,(2,3)),(1,(2,3))]\ntest_eq(fa_collate(t), default_collate(t))\ntest_eq(L(fa_collate(t)).map(type), [Tensor,tuple])\n\nt = [(1,(2,(3,4))),(1,(2,(3,4)))]\ntest_eq(fa_collate(t), default_collate(t))\ntest_eq(L(fa_collate(t)).map(type), [Tensor,tuple])\ntest_eq(L(fa_collate(t)[1]).map(type), [Tensor,tuple])", "_____no_output_____" ], [ "#export\ndef fa_convert(t):\n return (default_convert(t) if isinstance(t, _collate_types)\n else type(t)([fa_convert(s) for s in t]) if isinstance(t, Sequence)\n else default_convert(t))", "_____no_output_____" ], [ "t0 = array([1,2])\nt = [t0,(t0,t0)]\n\ntest_eq(fa_convert(t), default_convert(t))\ntest_eq(L(fa_convert(t)).map(type), [Tensor,tuple])", "_____no_output_____" ], [ "#export\nclass SkipItemException(Exception): pass", "_____no_output_____" ], [ "#export\n@funcs_kwargs\nclass DataLoader(GetAttr):\n _noop_methods = 'wif before_iter after_item before_batch after_batch after_iter'.split()\n for o in _noop_methods:\n exec(f\"def {o}(self, x=None, *args, **kwargs): return x\")\n _methods = _noop_methods + 'create_batches create_item create_batch retain \\\n get_idxs sample shuffle_fn do_batch create_batch'.split()\n _default = 'dataset'\n def __init__(self, dataset=None, bs=None, num_workers=0, pin_memory=False, timeout=0, batch_size=None,\n shuffle=False, drop_last=False, indexed=None, n=None, device=None, **kwargs):\n if batch_size is not None: bs = batch_size # PyTorch compatibility\n assert not (bs is None and drop_last)\n if indexed is None: indexed = dataset is not None and hasattr(dataset,'__getitem__')\n if n is None:\n try: n = len(dataset)\n except TypeError: pass\n store_attr(self, 'dataset,bs,shuffle,drop_last,indexed,n,pin_memory,timeout,device')\n self.rng,self.nw,self.offs = random.Random(),1,0\n self.fake_l = 
_FakeLoader(self, pin_memory, num_workers, timeout)\n\n def __len__(self):\n if self.n is None: raise TypeError\n if self.bs is None: return self.n\n return self.n//self.bs + (0 if self.drop_last or self.n%self.bs==0 else 1)\n \n def get_idxs(self):\n idxs = Inf.count if self.indexed else Inf.nones\n if self.n is not None: idxs = list(itertools.islice(idxs, self.n))\n if self.shuffle: idxs = self.shuffle_fn(idxs)\n return idxs\n \n def sample(self):\n idxs = self.get_idxs()\n return (b for i,b in enumerate(idxs) if i//(self.bs or 1)%self.nw==self.offs)\n \n def __iter__(self):\n self.randomize()\n self.before_iter()\n for b in _loaders[self.fake_l.num_workers==0](self.fake_l):\n if self.device is not None: b = to_device(b, self.device)\n yield self.after_batch(b)\n self.after_iter()\n if hasattr(self, 'it'): delattr(self, 'it')\n\n def create_batches(self, samps):\n self.it = iter(self.dataset) if self.dataset is not None else None\n res = filter(lambda o:o is not None, map(self.do_item, samps))\n yield from map(self.do_batch, self.chunkify(res))\n\n def new(self, dataset=None, cls=None, **kwargs):\n if dataset is None: dataset = self.dataset\n if cls is None: cls = type(self)\n cur_kwargs = dict(dataset=dataset, num_workers=self.fake_l.num_workers, pin_memory=self.pin_memory, timeout=self.timeout,\n bs=self.bs, shuffle=self.shuffle, drop_last=self.drop_last, indexed=self.indexed, device=self.device)\n for n in self._methods: cur_kwargs[n] = getattr(self, n)\n return cls(**merge(cur_kwargs, kwargs))\n \n @property\n def prebatched(self): return self.bs is None\n def do_item(self, s):\n try: return self.after_item(self.create_item(s))\n except SkipItemException: return None\n def chunkify(self, b): return b if self.prebatched else chunked(b, self.bs, self.drop_last)\n def shuffle_fn(self, idxs): return self.rng.sample(idxs, len(idxs))\n def randomize(self): self.rng = random.Random(self.rng.randint(0,2**32-1))\n def retain(self, res, b): return retain_types(res, b[0] if is_listy(b) else b)\n def create_item(self, s): return next(self.it) if s is None else self.dataset[s]\n def create_batch(self, b): return (fa_collate,fa_convert)[self.prebatched](b)\n def do_batch(self, b): return self.retain(self.create_batch(self.before_batch(b)), b)\n def one_batch(self):\n if self.n is not None and len(self)==0: raise ValueError(f'This DataLoader does not contain any batches')\n with self.fake_l.no_multiproc(): res = first(self)\n if hasattr(self, 'it'): delattr(self, 'it')\n return res", "_____no_output_____" ] ], [ [ "Override `item` and use the default infinite sampler to get a stream of unknown length (`stop()` when you want to stop the stream).", "_____no_output_____" ] ], [ [ "class RandDL(DataLoader):\n def create_item(self, s):\n r = random.random()\n return r if r<0.95 else stop()\n\nL(RandDL())", "_____no_output_____" ], [ "L(RandDL(bs=4, drop_last=True)).map(len)", "_____no_output_____" ], [ "dl = RandDL(bs=4, num_workers=4, drop_last=True)\nL(dl).map(len)", "_____no_output_____" ], [ "test_eq(dl.fake_l.num_workers, 4)\nwith dl.fake_l.no_multiproc(): \n test_eq(dl.fake_l.num_workers, 0)\n L(dl).map(len)\ntest_eq(dl.fake_l.num_workers, 4)", "_____no_output_____" ], [ "def _rand_item(s):\n r = random.random()\n return r if r<0.95 else stop()\n\nL(DataLoader(create_item=_rand_item))", "_____no_output_____" ] ], [ [ "If you don't set `bs`, then `dataset` is assumed to provide an iterator or a `__getitem__` that returns a batch.", "_____no_output_____" ] ], [ [ "ds1 = 
DataLoader(letters)\ntest_eq(L(ds1), letters)\ntest_eq(len(ds1), 26)\n\ntest_shuffled(L(DataLoader(letters, shuffle=True)), letters)\n\nds1 = DataLoader(letters, indexed=False)\ntest_eq(L(ds1), letters)\ntest_eq(len(ds1), 26)\n\nt2 = L(tensor([0,1,2]),tensor([3,4,5]))\nds2 = DataLoader(t2)\ntest_eq_type(L(ds2), t2)\n\nt3 = L(array([0,1,2]),array([3,4,5]))\nds3 = DataLoader(t3)\ntest_eq_type(L(ds3), t3.map(tensor))\n\nds4 = DataLoader(t3, create_batch=noop, after_iter=lambda: setattr(t3, 'f', 1))\ntest_eq_type(L(ds4), t3)\ntest_eq(t3.f, 1)", "_____no_output_____" ] ], [ [ "If you do set `bs`, then `dataset` is assumed to provide an iterator or a `__getitem__` that returns a single item of a batch.", "_____no_output_____" ] ], [ [ "def twoepochs(d): return ' '.join(''.join(o) for _ in range(2) for o in d)", "_____no_output_____" ], [ "ds1 = DataLoader(letters, bs=4, drop_last=True, num_workers=0)\ntest_eq(twoepochs(ds1), 'abcd efgh ijkl mnop qrst uvwx abcd efgh ijkl mnop qrst uvwx')\n\nds1 = DataLoader(letters,4,num_workers=2)\ntest_eq(twoepochs(ds1), 'abcd efgh ijkl mnop qrst uvwx yz abcd efgh ijkl mnop qrst uvwx yz')\n\nds1 = DataLoader(range(12), bs=4, num_workers=3)\ntest_eq_type(L(ds1), L(tensor([0,1,2,3]),tensor([4,5,6,7]),tensor([8,9,10,11])))\n\nds1 = DataLoader([str(i) for i in range(11)], bs=4, after_iter=lambda: setattr(t3, 'f', 2))\ntest_eq_type(L(ds1), L(['0','1','2','3'],['4','5','6','7'],['8','9','10']))\ntest_eq(t3.f, 2)\n\nit = iter(DataLoader(map(noop,range(20)), bs=4, num_workers=1))\ntest_eq_type([next(it) for _ in range(3)], [tensor([0,1,2,3]),tensor([4,5,6,7]),tensor([8,9,10,11])])", "_____no_output_____" ], [ "class SleepyDL(list):\n def __getitem__(self,i):\n time.sleep(random.random()/50)\n return super().__getitem__(i)\n\nt = SleepyDL(letters)\n\n%time test_eq(DataLoader(t, num_workers=0), letters)\n%time test_eq(DataLoader(t, num_workers=2), letters)\n%time test_eq(DataLoader(t, num_workers=4), letters)\n\ndl = DataLoader(t, shuffle=True, num_workers=1)\ntest_shuffled(L(dl), letters)\ntest_shuffled(L(dl), L(dl))", "CPU times: user 4.98 ms, sys: 596 µs, total: 5.58 ms\nWall time: 310 ms\nCPU times: user 8.3 ms, sys: 24.8 ms, total: 33.1 ms\nWall time: 160 ms\nCPU times: user 17.4 ms, sys: 30.2 ms, total: 47.6 ms\nWall time: 127 ms\n" ], [ "class SleepyQueue():\n \"Simulate a queue with varying latency\"\n def __init__(self, q): self.q=q\n def __iter__(self):\n while True:\n time.sleep(random.random()/100)\n try: yield self.q.get_nowait()\n except queues.Empty: return\n\nq = Queue()\nfor o in range(30): q.put(o)\nit = SleepyQueue(q)\n\n%time test_shuffled(L(DataLoader(it, num_workers=4)), range(30))", "CPU times: user 25.9 ms, sys: 17.9 ms, total: 43.9 ms\nWall time: 113 ms\n" ], [ "class A(TensorBase): pass\n\nfor nw in (0,2):\n t = A(tensor([1,2]))\n dl = DataLoader([t,t,t,t,t,t,t,t], bs=4, num_workers=nw)\n b = first(dl)\n test_eq(type(b), A)\n\n t = (A(tensor([1,2])),)\n dl = DataLoader([t,t,t,t,t,t,t,t], bs=4, num_workers=nw)\n b = first(dl)\n test_eq(type(b[0]), A)", "_____no_output_____" ], [ "class A(TensorBase): pass\nt = A(tensor(1,2))\n\ntdl = DataLoader([t,t,t,t,t,t,t,t], bs=4, num_workers=2, after_batch=to_device)\nb = first(tdl)\ntest_eq(type(b), A)\n\n# Unknown attributes are delegated to `dataset`\ntest_eq(tdl.pop(), tensor(1,2))", "_____no_output_____" ] ], [ [ "## Export -", "_____no_output_____" ] ], [ [ "#hide\nfrom nbdev.export import notebook2script\nnotebook2script()", "Converted 00_torch_core.ipynb.\nConverted 01_layers.ipynb.\nConverted 
02_data.load.ipynb.\nConverted 03_data.core.ipynb.\nConverted 04_data.external.ipynb.\nConverted 05_data.transforms.ipynb.\nConverted 06_data.block.ipynb.\nConverted 07_vision.core.ipynb.\nConverted 08_vision.data.ipynb.\nConverted 09_vision.augment.ipynb.\nConverted 09b_vision.utils.ipynb.\nConverted 10_tutorial.pets.ipynb.\nConverted 11_vision.models.xresnet.ipynb.\nConverted 12_optimizer.ipynb.\nConverted 13_learner.ipynb.\nConverted 13a_metrics.ipynb.\nConverted 14_callback.schedule.ipynb.\nConverted 14a_callback.data.ipynb.\nConverted 15_callback.hook.ipynb.\nConverted 15a_vision.models.unet.ipynb.\nConverted 16_callback.progress.ipynb.\nConverted 17_callback.tracker.ipynb.\nConverted 18_callback.fp16.ipynb.\nConverted 19_callback.mixup.ipynb.\nConverted 20_interpret.ipynb.\nConverted 20a_distributed.ipynb.\nConverted 21_vision.learner.ipynb.\nConverted 22_tutorial.imagenette.ipynb.\nConverted 23_tutorial.transfer_learning.ipynb.\nConverted 24_vision.gan.ipynb.\nConverted 30_text.core.ipynb.\nConverted 31_text.data.ipynb.\nConverted 32_text.models.awdlstm.ipynb.\nConverted 33_text.models.core.ipynb.\nConverted 34_callback.rnn.ipynb.\nConverted 35_tutorial.wikitext.ipynb.\nConverted 36_text.models.qrnn.ipynb.\nConverted 37_text.learner.ipynb.\nConverted 38_tutorial.ulmfit.ipynb.\nConverted 40_tabular.core.ipynb.\nConverted 41_tabular.data.ipynb.\nConverted 42_tabular.learner.ipynb.\nConverted 43_tabular.model.ipynb.\nConverted 45_collab.ipynb.\nConverted 50_datablock_examples.ipynb.\nConverted 60_medical.imaging.ipynb.\nConverted 65_medical.text.ipynb.\nConverted 70_callback.wandb.ipynb.\nConverted 71_callback.tensorboard.ipynb.\nConverted 97_test_utils.ipynb.\nConverted index.ipynb.\nConverted migrating.ipynb.\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
4a869c5f29d13b6b0a6cbda6e54eb06fc7d61e9c
405,653
ipynb
Jupyter Notebook
chapter08_ml/07_pca.ipynb
ofgod2/libro_de_IPython_-ingles-
85128694cdb9206b4325f95fe0060e01cf72e7f5
[ "MIT" ]
645
2018-02-01T09:16:45.000Z
2022-03-03T17:47:59.000Z
chapter08_ml/07_pca.ipynb
wangbin0619/cookbook-2nd-code
acd2ea2e55838f9bb3fc92a23aa991b3320adcaf
[ "MIT" ]
3
2019-03-11T09:47:21.000Z
2022-01-11T06:32:00.000Z
chapter08_ml/07_pca.ipynb
wangbin0619/cookbook-2nd-code
acd2ea2e55838f9bb3fc92a23aa991b3320adcaf
[ "MIT" ]
418
2018-02-13T03:17:05.000Z
2022-03-18T21:04:45.000Z
2,960.970803
140,000
0.950322
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
4a869d69c9cbcd920b5d69374e65e92edaa2c441
667,121
ipynb
Jupyter Notebook
02_end_to_end_machine_learning_project_Insurance_Anlaysis.ipynb
talhaulusoy/handson-ml2
00d59215dc563017b335960cb078264dcdc9428e
[ "Apache-2.0" ]
null
null
null
02_end_to_end_machine_learning_project_Insurance_Anlaysis.ipynb
talhaulusoy/handson-ml2
00d59215dc563017b335960cb078264dcdc9428e
[ "Apache-2.0" ]
null
null
null
02_end_to_end_machine_learning_project_Insurance_Anlaysis.ipynb
talhaulusoy/handson-ml2
00d59215dc563017b335960cb078264dcdc9428e
[ "Apache-2.0" ]
null
null
null
148.977445
236,532
0.846623
[ [ [ "**Chapter 2 – End-to-end Machine Learning project**\n\n*Welcome to Machine Learning Housing Corp.! Your task is to predict median house values in Californian districts, given a number of features from these districts.*\n\n*This notebook contains all the sample code and solutions to the exercices in chapter 2.*", "_____no_output_____" ], [ "<table align=\"left\">\n <td>\n <a href=\"https://colab.research.google.com/github/ageron/handson-ml2/blob/master/02_end_to_end_machine_learning_project.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://kaggle.com/kernels/welcome?src=https://github.com/ageron/handson-ml2/blob/master/02_end_to_end_machine_learning_project.ipynb\"><img src=\"https://kaggle.com/static/images/open-in-kaggle.svg\" /></a>\n </td>\n</table>", "_____no_output_____" ], [ "# Setup", "_____no_output_____" ], [ "First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20.", "_____no_output_____" ] ], [ [ "# Python ≥3.5 is required\nimport sys\nassert sys.version_info >= (3, 5)\n\n# Scikit-Learn ≥0.20 is required\nimport sklearn\nassert sklearn.__version__ >= \"0.20\"\n\n# Common imports\nimport numpy as np\nimport os\n\n# To plot pretty figures\n%matplotlib inline\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nmpl.rc('axes', labelsize=14)\nmpl.rc('xtick', labelsize=12)\nmpl.rc('ytick', labelsize=12)\n\n# Where to save the figures\nPROJECT_ROOT_DIR = \".\"\nCHAPTER_ID = \"end_to_end_project\"\nIMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, \"images\", CHAPTER_ID)\nos.makedirs(IMAGES_PATH, exist_ok=True)\n\ndef save_fig(fig_id, tight_layout=True, fig_extension=\"png\", resolution=300):\n path = os.path.join(IMAGES_PATH, fig_id + \".\" + fig_extension)\n print(\"Saving figure\", fig_id)\n if tight_layout:\n plt.tight_layout()\n plt.savefig(path, format=fig_extension, dpi=resolution)", "_____no_output_____" ] ], [ [ "# Get the data", "_____no_output_____" ] ], [ [ "import pandas as pd\n\nhousing = pd.read_csv(\"insurance.csv\")", "_____no_output_____" ], [ "housing.head(7)", "_____no_output_____" ], [ "sns.boxplot(x = \"region\", y = \"charges\", data = housing)", "_____no_output_____" ], [ "housing.drop(\"region\", axis=1, inplace=True)", "_____no_output_____" ], [ "housing.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 1338 entries, 0 to 1337\nData columns (total 6 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 age 1338 non-null int64 \n 1 sex 1338 non-null object \n 2 bmi 1338 non-null float64\n 3 children 1338 non-null int64 \n 4 smoker 1338 non-null object \n 5 charges 1338 non-null float64\ndtypes: float64(2), int64(2), object(2)\nmemory usage: 62.8+ KB\n" ], [ "housing.describe()", "_____no_output_____" ], [ "%matplotlib inline\nimport matplotlib.pyplot as plt\nhousing.hist(bins=30, figsize=(20,15))\nsave_fig(\"attribute_histogram_plots\")\nplt.show()", "Saving figure attribute_histogram_plots\n" ], [ "# to make this notebook's output identical at every run\nnp.random.seed(42)", "_____no_output_____" ], [ "import numpy as np\n\n# For illustration only. 
Sklearn has train_test_split()\ndef split_train_test(data, test_ratio):\n shuffled_indices = np.random.permutation(len(data))\n test_set_size = int(len(data) * test_ratio)\n test_indices = shuffled_indices[:test_set_size]\n train_indices = shuffled_indices[test_set_size:]\n return data.iloc[train_indices], data.iloc[test_indices]", "_____no_output_____" ], [ "train_set, test_set = split_train_test(housing, 0.2)\nlen(train_set)", "_____no_output_____" ], [ "len(test_set)", "_____no_output_____" ], [ "from zlib import crc32\n\ndef test_set_check(identifier, test_ratio):\n return crc32(np.int64(identifier)) & 0xffffffff < test_ratio * 2**32\n\ndef split_train_test_by_id(data, test_ratio, id_column):\n ids = data[id_column]\n in_test_set = ids.apply(lambda id_: test_set_check(id_, test_ratio))\n return data.loc[~in_test_set], data.loc[in_test_set]", "_____no_output_____" ] ], [ [ "The implementation of `test_set_check()` above works fine in both Python 2 and Python 3. In earlier releases, the following implementation was proposed, which supported any hash function, but was much slower and did not support Python 2:", "_____no_output_____" ] ], [ [ "test_set.head()", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split\n\ntrain_set, test_set = train_test_split(housing, test_size=0.2, random_state=42)", "_____no_output_____" ], [ "from sklearn.model_selection import StratifiedShuffleSplit\n\nsplit = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)\nfor train_index, test_index in split.split(housing, housing[\"smoker\"]):\n strat_train_set = housing.loc[train_index]\n strat_test_set = housing.loc[test_index]", "_____no_output_____" ], [ "strat_test_set[\"smoker\"].value_counts() / len(strat_test_set)", "_____no_output_____" ], [ "housing[\"smoker\"].value_counts() / len(housing)", "_____no_output_____" ] ], [ [ "# Discover and visualize the data to gain insights", "_____no_output_____" ] ], [ [ "housing = strat_train_set.copy()", "_____no_output_____" ], [ "housing.plot(kind=\"scatter\", x=\"age\", y=\"charges\")\nsave_fig(\"bad_visualization_plot\")", "Saving figure bad_visualization_plot\n" ], [ "housing.plot(kind=\"scatter\", x=\"age\", y=\"charges\", alpha=0.4,\n s=housing[\"bmi\"], figsize=(10,7), sharex=False)\nplt.legend()\nsave_fig(\"housing_prices_scatterplot\")", "No handles with labels found to put in legend.\n" ], [ "corr_matrix = housing.corr()", "_____no_output_____" ], [ "corr_matrix[\"charges\"].sort_values(ascending=False)", "_____no_output_____" ], [ "# from pandas.tools.plotting import scatter_matrix # For older versions of Pandas\nfrom pandas.plotting import scatter_matrix\n\nattributes = [\"charges\", \"age\", \"bmi\",\n \"children\"]\nscatter_matrix(housing[attributes], figsize=(12, 8))\nsave_fig(\"scatter_matrix_plot\")", "Saving figure scatter_matrix_plot\n" ], [ "housing.plot(kind=\"scatter\", x=\"bmi\", y=\"charges\",\n alpha=0.1)\nplt.axis([15, 60, 0, 55000])\nsave_fig(\"income_vs_house_value_scatterplot\")", "Saving figure income_vs_house_value_scatterplot\n" ], [ "corr_matrix = housing.corr()\ncorr_matrix[\"charges\"].sort_values(ascending=False)", "_____no_output_____" ], [ "housing.describe()", "_____no_output_____" ] ], [ [ "# Prepare the data for Machine Learning algorithms", "_____no_output_____" ] ], [ [ "housing = strat_train_set.drop(\"charges\", axis=1) # drop labels for training set\nhousing_labels = strat_train_set[\"charges\"].copy()", "_____no_output_____" ], [ "strat_train_set.head()", "_____no_output_____" ], 
[ "sample_incomplete_rows = housing[housing.isnull().any(axis=1)].head()\nsample_incomplete_rows", "_____no_output_____" ], [ "median = housing[\"children\"].median()\nsample_incomplete_rows[\"children\"].fillna(median, inplace=True) # option 3", "_____no_output_____" ], [ "sample_incomplete_rows", "_____no_output_____" ], [ "from sklearn.impute import SimpleImputer\nimputer = SimpleImputer(strategy=\"median\")", "_____no_output_____" ] ], [ [ "Remove the text attribute because median can only be calculated on numerical attributes:", "_____no_output_____" ] ], [ [ "housing_num = housing.select_dtypes(include=[np.number])", "_____no_output_____" ], [ "imputer.fit(housing_num)", "_____no_output_____" ], [ "imputer.statistics_", "_____no_output_____" ] ], [ [ "Check that this is the same as manually computing the median of each attribute:", "_____no_output_____" ] ], [ [ "housing_num.median().values", "_____no_output_____" ] ], [ [ "Transform the training set:", "_____no_output_____" ] ], [ [ "X = imputer.transform(housing_num)", "_____no_output_____" ], [ "type(X)", "_____no_output_____" ], [ "housing_tr = pd.DataFrame(X, columns=housing_num.columns,\n index=housing.index)", "_____no_output_____" ], [ "housing_tr.loc[sample_incomplete_rows.index.values]", "_____no_output_____" ], [ "imputer.strategy", "_____no_output_____" ], [ "housing_tr = pd.DataFrame(X, columns=housing_num.columns,\n index=housing_num.index)", "_____no_output_____" ], [ "housing_tr.head()", "_____no_output_____" ] ], [ [ "Now let's preprocess the categorical input feature, `ocean_proximity`:", "_____no_output_____" ] ], [ [ "housing_cat = housing[[\"sex\", \"smoker\"]]\nhousing_cat.head(10)", "_____no_output_____" ], [ "housing_ord = housing[[\"children\"]]", "_____no_output_____" ], [ "from sklearn.preprocessing import OrdinalEncoder\n\nordinal_encoder = OrdinalEncoder()\nhousing_ord_encoded = ordinal_encoder.fit_transform(housing_ord)\nhousing_ord_encoded", "_____no_output_____" ], [ "from sklearn.preprocessing import OneHotEncoder\n\ncat_encoder = OneHotEncoder()\nhousing_cat_1hot = cat_encoder.fit_transform(housing_cat)\nhousing_cat_1hot", "_____no_output_____" ] ], [ [ "By default, the `OneHotEncoder` class returns a sparse array, but we can convert it to a dense array if needed by calling the `toarray()` method:", "_____no_output_____" ] ], [ [ "housing_cat_1hot.toarray()", "_____no_output_____" ] ], [ [ "Alternatively, you can set `sparse=False` when creating the `OneHotEncoder`:", "_____no_output_____" ] ], [ [ "cat_encoder = OneHotEncoder(sparse=False)\nhousing_cat_1hot = cat_encoder.fit_transform(housing_cat)\nhousing_cat_1hot", "_____no_output_____" ], [ "cat_encoder.categories_", "_____no_output_____" ] ], [ [ "Let's create a custom transformer to add extra attributes:", "_____no_output_____" ], [ "Now let's build a pipeline for preprocessing the numerical attributes:", "_____no_output_____" ] ], [ [ "from sklearn.pipeline import Pipeline\nfrom sklearn.preprocessing import StandardScaler\n\nnum_pipeline = Pipeline([\n ('imputer', SimpleImputer(strategy=\"median\")),\n ('std_scaler', StandardScaler()),\n ])\n\nhousing_num_tr = num_pipeline.fit_transform(housing_num)", "_____no_output_____" ], [ "housing_num_tr", "_____no_output_____" ], [ "from sklearn.compose import ColumnTransformer\n\nnum_attribs = [\"age\", \"bmi\"]\ncat_attribs = [\"sex\", \"smoker\"]\nord_attribs = [\"children\"]\n\nfull_pipeline = ColumnTransformer([\n (\"num\", num_pipeline, num_attribs),\n (\"cat\", OneHotEncoder(), cat_attribs),\n 
(\"ord\", OrdinalEncoder(), ord_attribs)\n ])\n\nhousing_prepared = full_pipeline.fit_transform(housing)", "_____no_output_____" ], [ "housing_prepared", "_____no_output_____" ], [ "housing_prepared.shape", "_____no_output_____" ], [ "housing_prepared[0:10]", "_____no_output_____" ] ], [ [ "# Select and train a model ", "_____no_output_____" ] ], [ [ "from sklearn.linear_model import LinearRegression\n\nlin_reg = LinearRegression()\nlin_reg.fit(housing_prepared, housing_labels)", "_____no_output_____" ], [ "# let's try the full preprocessing pipeline on a few training instances\nsome_data = housing.iloc[:4]\nsome_labels = housing_labels.iloc[:4]\nsome_data_prepared = full_pipeline.transform(some_data)\n\nprint(\"Predictions:\", lin_reg.predict(some_data_prepared))", "Predictions: [ 4378.182135 10693.5006778 4359.22447554 14025.00428706]\n" ] ], [ [ "Compare against the actual values:", "_____no_output_____" ] ], [ [ "print(\"Labels:\", list(some_labels))", "Labels: [3906.127, 8538.28845, 1629.8335, 9391.346]\n" ], [ "some_data_prepared", "_____no_output_____" ], [ "from sklearn.metrics import mean_squared_error\n\nhousing_predictions = lin_reg.predict(housing_prepared)\nlin_mse = mean_squared_error(housing_labels, housing_predictions)\nlin_rmse = np.sqrt(lin_mse)\nlin_rmse", "_____no_output_____" ] ], [ [ "**Note**: since Scikit-Learn 0.22, you can get the RMSE directly by calling the `mean_squared_error()` function with `squared=False`.", "_____no_output_____" ] ], [ [ "from sklearn.metrics import mean_absolute_error\n\nlin_mae = mean_absolute_error(housing_labels, housing_predictions)\nlin_mae", "_____no_output_____" ], [ "from sklearn.tree import DecisionTreeRegressor\n\ntree_reg = DecisionTreeRegressor(random_state=42)\ntree_reg.fit(housing_prepared, housing_labels)", "_____no_output_____" ], [ "housing_predictions = tree_reg.predict(housing_prepared)\ntree_mse = mean_squared_error(housing_labels, housing_predictions)\ntree_rmse = np.sqrt(tree_mse)\ntree_rmse", "_____no_output_____" ] ], [ [ "# Fine-tune your model", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import cross_val_score\n\nscores = cross_val_score(tree_reg, housing_prepared, housing_labels,\n scoring=\"neg_mean_squared_error\", cv=10)\ntree_rmse_scores = np.sqrt(-scores)", "_____no_output_____" ], [ "def display_scores(scores):\n print(\"Scores:\", scores)\n print(\"Mean:\", scores.mean())\n print(\"Standard deviation:\", scores.std())\n\ndisplay_scores(tree_rmse_scores)", "Scores: [6864.26729824 6495.61668862 7485.63531999 6349.41160265 6299.61036025\n 8091.24763752 5838.5614121 5904.81363181 6055.07172709 7137.53253026]\nMean: 6652.176820854676\nStandard deviation: 697.4000031991728\n" ], [ "lin_scores = cross_val_score(lin_reg, housing_prepared, housing_labels,\n scoring=\"neg_mean_squared_error\", cv=10)\nlin_rmse_scores = np.sqrt(-lin_scores)\ndisplay_scores(lin_rmse_scores)", "Scores: [5861.41035553 6526.44340547 6509.36341547 5617.29842325 5724.9240673\n 6968.083391 6103.43136992 5434.83377857 6372.29219335 6943.37200233]\nMean: 6206.145240218421\nStandard deviation: 514.6666953236121\n" ] ], [ [ "**Note**: we specify `n_estimators=100` to be future-proof since the default value is going to change to 100 in Scikit-Learn 0.22 (for simplicity, this is not shown in the book).", "_____no_output_____" ] ], [ [ "from sklearn.ensemble import RandomForestRegressor\n\nforest_reg = RandomForestRegressor(n_estimators=100, random_state=42)\nforest_reg.fit(housing_prepared, housing_labels)", 
"_____no_output_____" ], [ "housing_predictions = forest_reg.predict(housing_prepared)\nforest_mse = mean_squared_error(housing_labels, housing_predictions)\nforest_rmse = np.sqrt(forest_mse)\nforest_rmse", "_____no_output_____" ], [ "from sklearn.model_selection import cross_val_score\n\nforest_scores = cross_val_score(forest_reg, housing_prepared, housing_labels,\n scoring=\"neg_mean_squared_error\", cv=10)\nforest_rmse_scores = np.sqrt(-forest_scores)\ndisplay_scores(forest_rmse_scores)", "Scores: [4525.40423022 5291.88630757 5660.50181474 4656.29783718 4924.66093864\n 6078.19549324 4588.53019133 4142.94477569 4554.63056435 5890.2953495 ]\nMean: 5031.334750246543\nStandard deviation: 625.9413886157679\n" ], [ "scores = cross_val_score(lin_reg, housing_prepared, housing_labels, scoring=\"neg_mean_squared_error\", cv=10)\npd.Series(np.sqrt(-scores)).describe()", "_____no_output_____" ], [ "from sklearn.svm import SVR\n\nsvm_reg = SVR(kernel=\"linear\")\nsvm_reg.fit(housing_prepared, housing_labels)\nhousing_predictions = svm_reg.predict(housing_prepared)\nsvm_mse = mean_squared_error(housing_labels, housing_predictions)\nsvm_rmse = np.sqrt(svm_mse)\nsvm_rmse", "_____no_output_____" ], [ "from sklearn.model_selection import GridSearchCV\n\nparam_grid = [\n # try 12 (3×4) combinations of hyperparameters\n {'n_estimators': [10, 30, 50], 'max_features': [4, 6, 8, 10]},\n # then try 6 (2×3) combinations with bootstrap set as False\n {'bootstrap': [False], 'n_estimators': [3, 10], 'max_features': [2, 3, 4]},\n ]\n\nforest_reg = RandomForestRegressor(random_state=42)\n# train across 5 folds, that's a total of (12+6)*5=90 rounds of training \ngrid_search = GridSearchCV(forest_reg, param_grid, cv=5,\n scoring='neg_mean_squared_error',\n return_train_score=True)\ngrid_search.fit(housing_prepared, housing_labels)", "/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/model_selection/_validation.py:615: FitFailedWarning: Estimator fit failed. The score on this train-test partition for these parameters will be set to nan. 
/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/model_selection/_validation.py:615: FitFailedWarning: Estimator fit failed. The score on this train-test partition for these parameters will be set to nan.
Details:
Traceback (most recent call last):
  File ".../sklearn/model_selection/_validation.py", line 598, in _fit_and_score
    estimator.fit(X_train, y_train, **fit_params)
  File ".../sklearn/ensemble/_forest.py", line 169, in _parallel_build_trees
    tree.fit(X, y, sample_weight=curr_sample_weight, check_input=False)
  File ".../sklearn/tree/_classes.py", line 289, in fit
    raise ValueError("max_features must be in (0, n_features]")
ValueError: max_features must be in (0, n_features]
(the identical FitFailedWarning and traceback were emitted repeatedly during the search)
Details: \nTraceback (most recent call last):\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/model_selection/_validation.py\", line 598, in _fit_and_score\n estimator.fit(X_train, y_train, **fit_params)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/ensemble/_forest.py\", line 387, in fit\n trees = Parallel(n_jobs=self.n_jobs, verbose=self.verbose,\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/parallel.py\", line 1041, in __call__\n if self.dispatch_one_batch(iterator):\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/parallel.py\", line 859, in dispatch_one_batch\n self._dispatch(tasks)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/parallel.py\", line 777, in _dispatch\n job = self._backend.apply_async(batch, callback=cb)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/_parallel_backends.py\", line 208, in apply_async\n result = ImmediateResult(func)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/_parallel_backends.py\", line 572, in __init__\n self.results = batch()\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/parallel.py\", line 262, in __call__\n return [func(*args, **kwargs)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/parallel.py\", line 262, in <listcomp>\n return [func(*args, **kwargs)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/utils/fixes.py\", line 222, in __call__\n return self.function(*args, **kwargs)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/ensemble/_forest.py\", line 169, in _parallel_build_trees\n tree.fit(X, y, sample_weight=curr_sample_weight, check_input=False)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/tree/_classes.py\", line 1252, in fit\n super().fit(\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/tree/_classes.py\", line 289, in fit\n raise ValueError(\"max_features must be in (0, n_features]\")\nValueError: max_features must be in (0, n_features]\n\n warnings.warn(\"Estimator fit failed. The score on this train-test\"\n/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/model_selection/_validation.py:615: FitFailedWarning: Estimator fit failed. The score on this train-test partition for these parameters will be set to nan. 
Details: \nTraceback (most recent call last):\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/model_selection/_validation.py\", line 598, in _fit_and_score\n estimator.fit(X_train, y_train, **fit_params)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/ensemble/_forest.py\", line 387, in fit\n trees = Parallel(n_jobs=self.n_jobs, verbose=self.verbose,\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/parallel.py\", line 1041, in __call__\n if self.dispatch_one_batch(iterator):\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/parallel.py\", line 859, in dispatch_one_batch\n self._dispatch(tasks)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/parallel.py\", line 777, in _dispatch\n job = self._backend.apply_async(batch, callback=cb)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/_parallel_backends.py\", line 208, in apply_async\n result = ImmediateResult(func)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/_parallel_backends.py\", line 572, in __init__\n self.results = batch()\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/parallel.py\", line 262, in __call__\n return [func(*args, **kwargs)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/parallel.py\", line 262, in <listcomp>\n return [func(*args, **kwargs)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/utils/fixes.py\", line 222, in __call__\n return self.function(*args, **kwargs)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/ensemble/_forest.py\", line 169, in _parallel_build_trees\n tree.fit(X, y, sample_weight=curr_sample_weight, check_input=False)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/tree/_classes.py\", line 1252, in fit\n super().fit(\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/tree/_classes.py\", line 289, in fit\n raise ValueError(\"max_features must be in (0, n_features]\")\nValueError: max_features must be in (0, n_features]\n\n warnings.warn(\"Estimator fit failed. The score on this train-test\"\n/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/model_selection/_validation.py:615: FitFailedWarning: Estimator fit failed. The score on this train-test partition for these parameters will be set to nan. 
Details: \nTraceback (most recent call last):\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/model_selection/_validation.py\", line 598, in _fit_and_score\n estimator.fit(X_train, y_train, **fit_params)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/ensemble/_forest.py\", line 387, in fit\n trees = Parallel(n_jobs=self.n_jobs, verbose=self.verbose,\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/parallel.py\", line 1041, in __call__\n if self.dispatch_one_batch(iterator):\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/parallel.py\", line 859, in dispatch_one_batch\n self._dispatch(tasks)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/parallel.py\", line 777, in _dispatch\n job = self._backend.apply_async(batch, callback=cb)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/_parallel_backends.py\", line 208, in apply_async\n result = ImmediateResult(func)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/_parallel_backends.py\", line 572, in __init__\n self.results = batch()\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/parallel.py\", line 262, in __call__\n return [func(*args, **kwargs)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/parallel.py\", line 262, in <listcomp>\n return [func(*args, **kwargs)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/utils/fixes.py\", line 222, in __call__\n return self.function(*args, **kwargs)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/ensemble/_forest.py\", line 169, in _parallel_build_trees\n tree.fit(X, y, sample_weight=curr_sample_weight, check_input=False)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/tree/_classes.py\", line 1252, in fit\n super().fit(\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/tree/_classes.py\", line 289, in fit\n raise ValueError(\"max_features must be in (0, n_features]\")\nValueError: max_features must be in (0, n_features]\n\n warnings.warn(\"Estimator fit failed. The score on this train-test\"\n/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/model_selection/_validation.py:615: FitFailedWarning: Estimator fit failed. The score on this train-test partition for these parameters will be set to nan. 
Details: \nTraceback (most recent call last):\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/model_selection/_validation.py\", line 598, in _fit_and_score\n estimator.fit(X_train, y_train, **fit_params)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/ensemble/_forest.py\", line 387, in fit\n trees = Parallel(n_jobs=self.n_jobs, verbose=self.verbose,\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/parallel.py\", line 1041, in __call__\n if self.dispatch_one_batch(iterator):\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/parallel.py\", line 859, in dispatch_one_batch\n self._dispatch(tasks)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/parallel.py\", line 777, in _dispatch\n job = self._backend.apply_async(batch, callback=cb)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/_parallel_backends.py\", line 208, in apply_async\n result = ImmediateResult(func)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/_parallel_backends.py\", line 572, in __init__\n self.results = batch()\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/parallel.py\", line 262, in __call__\n return [func(*args, **kwargs)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/parallel.py\", line 262, in <listcomp>\n return [func(*args, **kwargs)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/utils/fixes.py\", line 222, in __call__\n return self.function(*args, **kwargs)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/ensemble/_forest.py\", line 169, in _parallel_build_trees\n tree.fit(X, y, sample_weight=curr_sample_weight, check_input=False)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/tree/_classes.py\", line 1252, in fit\n super().fit(\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/tree/_classes.py\", line 289, in fit\n raise ValueError(\"max_features must be in (0, n_features]\")\nValueError: max_features must be in (0, n_features]\n\n warnings.warn(\"Estimator fit failed. The score on this train-test\"\n/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/model_selection/_validation.py:615: FitFailedWarning: Estimator fit failed. The score on this train-test partition for these parameters will be set to nan. 
Details: \nTraceback (most recent call last):\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/model_selection/_validation.py\", line 598, in _fit_and_score\n estimator.fit(X_train, y_train, **fit_params)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/ensemble/_forest.py\", line 387, in fit\n trees = Parallel(n_jobs=self.n_jobs, verbose=self.verbose,\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/parallel.py\", line 1041, in __call__\n if self.dispatch_one_batch(iterator):\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/parallel.py\", line 859, in dispatch_one_batch\n self._dispatch(tasks)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/parallel.py\", line 777, in _dispatch\n job = self._backend.apply_async(batch, callback=cb)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/_parallel_backends.py\", line 208, in apply_async\n result = ImmediateResult(func)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/_parallel_backends.py\", line 572, in __init__\n self.results = batch()\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/parallel.py\", line 262, in __call__\n return [func(*args, **kwargs)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/joblib/parallel.py\", line 262, in <listcomp>\n return [func(*args, **kwargs)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/utils/fixes.py\", line 222, in __call__\n return self.function(*args, **kwargs)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/ensemble/_forest.py\", line 169, in _parallel_build_trees\n tree.fit(X, y, sample_weight=curr_sample_weight, check_input=False)\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/tree/_classes.py\", line 1252, in fit\n super().fit(\n File \"/Users/talhaulusoy/opt/anaconda3/envs/dataScience/lib/python3.8/site-packages/sklearn/tree/_classes.py\", line 289, in fit\n raise ValueError(\"max_features must be in (0, n_features]\")\nValueError: max_features must be in (0, n_features]\n\n warnings.warn(\"Estimator fit failed. 
The score on this train-test\"\n" ] ], [ [ "The best hyperparameter combination found:", "_____no_output_____" ] ], [ [ "grid_search.best_params_", "_____no_output_____" ], [ "grid_search.best_estimator_", "_____no_output_____" ] ], [ [ "Let's look at the score of each hyperparameter combination tested during the grid search:", "_____no_output_____" ] ], [ [ "cvres = grid_search.cv_results_\nfor mean_score, params in zip(cvres[\"mean_test_score\"], cvres[\"params\"]):\n print(np.sqrt(-mean_score), params)", "5141.957318126989 {'max_features': 4, 'n_estimators': 10}\n5036.342471271934 {'max_features': 4, 'n_estimators': 30}\n4998.963122659777 {'max_features': 4, 'n_estimators': 50}\n5236.618675878344 {'max_features': 6, 'n_estimators': 10}\n5134.26331323307 {'max_features': 6, 'n_estimators': 30}\n5065.944121853433 {'max_features': 6, 'n_estimators': 50}\nnan {'max_features': 8, 'n_estimators': 10}\nnan {'max_features': 8, 'n_estimators': 30}\nnan {'max_features': 8, 'n_estimators': 50}\nnan {'max_features': 10, 'n_estimators': 10}\nnan {'max_features': 10, 'n_estimators': 30}\nnan {'max_features': 10, 'n_estimators': 50}\n6049.49196593775 {'bootstrap': False, 'max_features': 2, 'n_estimators': 3}\n5549.312455329422 {'bootstrap': False, 'max_features': 2, 'n_estimators': 10}\n5637.432809555073 {'bootstrap': False, 'max_features': 3, 'n_estimators': 3}\n5449.865464301788 {'bootstrap': False, 'max_features': 3, 'n_estimators': 10}\n5817.551874097949 {'bootstrap': False, 'max_features': 4, 'n_estimators': 3}\n5539.372817215994 {'bootstrap': False, 'max_features': 4, 'n_estimators': 10}\n" ], [ "pd.DataFrame(grid_search.cv_results_)", "_____no_output_____" ], [ "from sklearn.model_selection import RandomizedSearchCV\nfrom scipy.stats import randint\n\nparam_distribs = {\n 'n_estimators': randint(low=1, high=200),\n 'max_features': randint(low=1, high=8),\n }\n\nforest_reg = RandomForestRegressor(random_state=42)\nrnd_search = RandomizedSearchCV(forest_reg, param_distributions=param_distribs,\n n_iter=10, cv=5, scoring='neg_mean_squared_error', random_state=42)\nrnd_search.fit(housing_prepared, housing_labels)", "_____no_output_____" ], [ "cvres = rnd_search.cv_results_\nfor mean_score, params in zip(cvres[\"mean_test_score\"], cvres[\"params\"]):\n print(np.sqrt(-mean_score), params)", "5083.9142539352215 {'max_features': 7, 'n_estimators': 180}\n5226.0759806180395 {'max_features': 5, 'n_estimators': 15}\n5007.3112587342475 {'max_features': 3, 'n_estimators': 72}\n5177.459584104069 {'max_features': 5, 'n_estimators': 21}\n5079.780458124596 {'max_features': 7, 'n_estimators': 122}\n5009.140386972385 {'max_features': 3, 'n_estimators': 75}\n5000.240002250819 {'max_features': 3, 'n_estimators': 88}\n5005.934209666698 {'max_features': 5, 'n_estimators': 100}\n4988.329975541375 {'max_features': 3, 'n_estimators': 150}\n6267.39462592837 {'max_features': 5, 'n_estimators': 2}\n" ], [ "feature_importances = grid_search.best_estimator_.feature_importances_\nfeature_importances", "_____no_output_____" ], [ "cat_encoder_list = list(full_pipeline.named_transformers_[\"cat\"].categories_)\ncat_one_hot_attribs = [item for sublist in cat_encoder_list for item in sublist]\nattributes = num_attribs + cat_one_hot_attribs + ord_attribs\nsorted(zip(feature_importances, attributes), reverse=True)\n#list(zip(feature_importances, attributes))", "_____no_output_____" ], [ "final_model = grid_search.best_estimator_\n\nX_test = strat_test_set.drop(\"charges\", axis=1)\ny_test = 
strat_test_set[\"charges\"].copy()\n\nX_test_prepared = full_pipeline.transform(X_test)\nfinal_predictions = final_model.predict(X_test_prepared)\n\nfinal_mse = mean_squared_error(y_test, final_predictions)\nfinal_rmse = np.sqrt(final_mse)", "_____no_output_____" ], [ "final_rmse", "_____no_output_____" ] ], [ [ "We can compute a 95% confidence interval for the test RMSE:", "_____no_output_____" ] ], [ [ "from scipy import stats\n\nconfidence = 0.95\nsquared_errors = (final_predictions - y_test) ** 2\nnp.sqrt(stats.t.interval(confidence, len(squared_errors) - 1,\n loc=squared_errors.mean(),\n scale=stats.sem(squared_errors)))", "_____no_output_____" ] ], [ [ "We could compute the interval manually like this:", "_____no_output_____" ] ], [ [ "m = len(squared_errors)\nmean = squared_errors.mean()\ntscore = stats.t.ppf((1 + confidence) / 2, df=m - 1)\ntmargin = tscore * squared_errors.std(ddof=1) / np.sqrt(m)\nnp.sqrt(mean - tmargin), np.sqrt(mean + tmargin)", "_____no_output_____" ] ], [ [ "Alternatively, we could use a z-scores rather than t-scores:", "_____no_output_____" ] ], [ [ "zscore = stats.norm.ppf((1 + confidence) / 2)\nzmargin = zscore * squared_errors.std(ddof=1) / np.sqrt(m)\nnp.sqrt(mean - zmargin), np.sqrt(mean + zmargin)", "_____no_output_____" ] ], [ [ "# Extra material", "_____no_output_____" ], [ "## A full pipeline with both preparation and prediction", "_____no_output_____" ] ], [ [ "full_pipeline_with_predictor = Pipeline([\n (\"preparation\", full_pipeline),\n (\"linear\", LinearRegression())\n ])\n\nfull_pipeline_with_predictor.fit(housing, housing_labels)\nfull_pipeline_with_predictor.predict(some_data)", "_____no_output_____" ] ], [ [ "## Model persistence using joblib", "_____no_output_____" ] ], [ [ "my_model = full_pipeline_with_predictor", "_____no_output_____" ], [ "import joblib\njoblib.dump(my_model, \"my_model.pkl\") # DIFF\n#...\nmy_model_loaded = joblib.load(\"my_model.pkl\") # DIFF", "_____no_output_____" ] ], [ [ "Congratulations! You already know quite a lot about Machine Learning. :)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
4a86aabf5987f2a3c5886a621fd6f69464648d38
99,028
ipynb
Jupyter Notebook
travel.ipynb
kennethcc2005/travel_with_friends
ca03581a3a92631a2ef2d1d9a316e45fd60a866d
[ "MIT" ]
null
null
null
travel.ipynb
kennethcc2005/travel_with_friends
ca03581a3a92631a2ef2d1d9a316e45fd60a866d
[ "MIT" ]
null
null
null
travel.ipynb
kennethcc2005/travel_with_friends
ca03581a3a92631a2ef2d1d9a316e45fd60a866d
[ "MIT" ]
null
null
null
45.197627
2,759
0.536404
[ [ [ "import pandas as pd\nimport numpy as np\nimport requests\nimport psycopg2\nimport json\nimport simplejson\nimport urllib\nimport config\nimport ast\n\nfrom operator import itemgetter\nfrom sklearn.cluster import KMeans\nfrom sqlalchemy import create_engine", "_____no_output_____" ], [ "!pip install --upgrade pip\n!pip install sqlalchemy\n!pip install psycopg2\n!pip install simplejson\n!pip install config", "_____no_output_____" ], [ "conn_str = \"dbname='travel_with_friends' user='zoesh' host='localhost'\"\n# conn_str = \"dbname='travel_with_friends' user='Zoesh' host='localhost'\"", "_____no_output_____" ], [ "import math\nimport random\n\n\ndef distL2((x1,y1), (x2,y2)):\n \"\"\"Compute the L2-norm (Euclidean) distance between two points.\n\n The distance is rounded to the closest integer, for compatibility\n with the TSPLIB convention.\n\n The two points are located on coordinates (x1,y1) and (x2,y2),\n sent as parameters\"\"\"\n xdiff = x2 - x1\n ydiff = y2 - y1\n return math.sqrt(xdiff*xdiff + ydiff*ydiff) + .5\n\n\ndef distL1((x1,y1), (x2,y2)):\n \"\"\"Compute the L1-norm (Manhattan) distance between two points.\n\n The distance is rounded to the closest integer, for compatibility\n with the TSPLIB convention.\n\n The two points are located on coordinates (x1,y1) and (x2,y2),\n sent as parameters\"\"\"\n return abs(x2-x1) + abs(y2-y1)+.5\n\n\ndef mk_matrix(coord, dist):\n \"\"\"Compute a distance matrix for a set of points.\n\n Uses function 'dist' to calculate distance between\n any two points. Parameters:\n -coord -- list of tuples with coordinates of all points, [(x1,y1),...,(xn,yn)]\n -dist -- distance function\n \"\"\"\n n = len(coord)\n D = {} # dictionary to hold n times n matrix\n for i in range(n-1):\n for j in range(i+1,n):\n [x1,y1] = coord[i]\n [x2,y2] = coord[j]\n D[i,j] = dist((x1,y1), (x2,y2))\n D[j,i] = D[i,j]\n return n,D\n\ndef read_tsplib(filename):\n \"basic function for reading a TSP problem on the TSPLIB format\"\n \"NOTE: only works for 2D euclidean or manhattan distances\"\n f = open(filename, 'r');\n\n line = f.readline()\n while line.find(\"EDGE_WEIGHT_TYPE\") == -1:\n line = f.readline()\n\n if line.find(\"EUC_2D\") != -1:\n dist = distL2\n elif line.find(\"MAN_2D\") != -1:\n dist = distL1\n else:\n print \"cannot deal with non-euclidean or non-manhattan distances\"\n raise Exception\n\n while line.find(\"NODE_COORD_SECTION\") == -1:\n line = f.readline()\n\n xy_positions = []\n while 1:\n line = f.readline()\n if line.find(\"EOF\") != -1: break\n (i,x,y) = line.split()\n x = float(x)\n y = float(y)\n xy_positions.append((x,y))\n\n n,D = mk_matrix(xy_positions, dist)\n return n, xy_positions, D\n\n\ndef mk_closest(D, n):\n \"\"\"Compute a sorted list of the distances for each of the nodes.\n\n For each node, the entry is in the form [(d1,i1), (d2,i2), ...]\n where each tuple is a pair (distance,node).\n \"\"\"\n C = []\n for i in range(n):\n dlist = [(D[i,j], j) for j in range(n) if j != i]\n dlist.sort()\n C.append(dlist)\n return C\n\n\ndef length(tour, D):\n \"\"\"Calculate the length of a tour according to distance matrix 'D'.\"\"\"\n z = D[tour[-1], tour[0]] # edge from last to first city of the tour\n for i in range(1,len(tour)):\n z += D[tour[i], tour[i-1]] # add length of edge from city i-1 to i\n return z\n\n\ndef randtour(n):\n \"\"\"Construct a random tour of size 'n'.\"\"\"\n sol = range(n) # set solution equal to [0,1,...,n-1]\n random.shuffle(sol) # place it in a random order\n return sol\n\n\ndef nearest(last, unvisited, D):\n 
\"\"\"Return the index of the node which is closest to 'last'.\"\"\"\n near = unvisited[0]\n min_dist = D[last, near]\n for i in unvisited[1:]:\n if D[last,i] < min_dist:\n near = i\n min_dist = D[last, near]\n return near\n\n\ndef nearest_neighbor(n, i, D):\n \"\"\"Return tour starting from city 'i', using the Nearest Neighbor.\n\n Uses the Nearest Neighbor heuristic to construct a solution:\n - start visiting city i\n - while there are unvisited cities, follow to the closest one\n - return to city i\n \"\"\"\n unvisited = range(n)\n unvisited.remove(i)\n last = i\n tour = [i]\n while unvisited != []:\n next = nearest(last, unvisited, D)\n tour.append(next)\n unvisited.remove(next)\n last = next\n return tour\n\n\n\ndef exchange_cost(tour, i, j, D):\n \"\"\"Calculate the cost of exchanging two arcs in a tour.\n\n Determine the variation in the tour length if\n arcs (i,i+1) and (j,j+1) are removed,\n and replaced by (i,j) and (i+1,j+1)\n (note the exception for the last arc).\n\n Parameters:\n -t -- a tour\n -i -- position of the first arc\n -j>i -- position of the second arc\n \"\"\"\n n = len(tour)\n a,b = tour[i],tour[(i+1)%n]\n c,d = tour[j],tour[(j+1)%n]\n return (D[a,c] + D[b,d]) - (D[a,b] + D[c,d])\n\n\ndef exchange(tour, tinv, i, j):\n \"\"\"Exchange arcs (i,i+1) and (j,j+1) with (i,j) and (i+1,j+1).\n\n For the given tour 't', remove the arcs (i,i+1) and (j,j+1) and\n insert (i,j) and (i+1,j+1).\n\n This is done by inverting the sublist of cities between i and j.\n \"\"\"\n n = len(tour)\n if i>j:\n i,j = j,i\n assert i>=0 and i<j-1 and j<n\n path = tour[i+1:j+1]\n path.reverse()\n tour[i+1:j+1] = path\n for k in range(i+1,j+1):\n tinv[tour[k]] = k\n\n\ndef improve(tour, z, D, C):\n \"\"\"Try to improve tour 't' by exchanging arcs; return improved tour length.\n\n If possible, make a series of local improvements on the solution 'tour',\n using a breadth first strategy, until reaching a local optimum.\n \"\"\"\n n = len(tour)\n tinv = [0 for i in tour]\n for k in range(n):\n tinv[tour[k]] = k # position of each city in 't'\n for i in range(n):\n a,b = tour[i],tour[(i+1)%n]\n dist_ab = D[a,b]\n improved = False\n for dist_ac,c in C[a]:\n if dist_ac >= dist_ab:\n break\n j = tinv[c]\n d = tour[(j+1)%n]\n dist_cd = D[c,d]\n dist_bd = D[b,d]\n delta = (dist_ac + dist_bd) - (dist_ab + dist_cd)\n if delta < 0: # exchange decreases length\n exchange(tour, tinv, i, j);\n z += delta\n improved = True\n break\n if improved:\n continue\n for dist_bd,d in C[b]:\n if dist_bd >= dist_ab:\n break\n j = tinv[d]-1\n if j==-1:\n j=n-1\n c = tour[j]\n dist_cd = D[c,d]\n dist_ac = D[a,c]\n delta = (dist_ac + dist_bd) - (dist_ab + dist_cd)\n if delta < 0: # exchange decreases length\n exchange(tour, tinv, i, j);\n z += delta\n break\n return z\n\n\ndef localsearch(tour, z, D, C=None):\n \"\"\"Obtain a local optimum starting from solution t; return solution length.\n\n Parameters:\n tour -- initial tour\n z -- length of the initial tour\n D -- distance matrix\n \"\"\"\n n = len(tour)\n if C == None:\n C = mk_closest(D, n) # create a sorted list of distances to each node\n while 1:\n newz = improve(tour, z, D, C)\n if newz < z:\n z = newz\n else:\n break\n return z\n\n\ndef multistart_localsearch(k, n, D, report=None):\n \"\"\"Do k iterations of local search, starting from random solutions.\n\n Parameters:\n -k -- number of iterations\n -D -- distance matrix\n -report -- if not None, call it to print verbose output\n\n Returns best solution and its cost.\n \"\"\"\n C = mk_closest(D, n) # create a sorted 
list of distances to each node\n bestt=None\n bestz=None\n for i in range(0,k):\n tour = randtour(n)\n z = length(tour, D)\n z = localsearch(tour, z, D, C)\n if z < bestz or bestz == None:\n bestz = z\n bestt = list(tour)\n if report:\n report(z, tour)\n\n return bestt, bestz\n\n", "_____no_output_____" ], [ "# db_name = \"travel_with_friends\"\n# TABLES ={}\n# TABLES['full_trip_table'] = (\n# \"CREATE TABLE `full_trip_table` (\"\n# \" `user_id` int(11) NOT NULL AUTO_INCREMENT,\"\n# \" `full_trip_id` date NOT NULL,\"\n# \" `trip_location_ids` varchar(14) NOT NULL,\"\n# \" `default` varchar(16) NOT NULL,\"\n# \" `county` enum('M','F') NOT NULL,\"\n# \" `state` date NOT NULL,\"\n# \" `details` ,\"\n# \" `n_days`,\"\n# \" PRIMARY KEY (`full_trip_id`)\"\n# \") ENGINE=InnoDB\")\n\n", "_____no_output_____" ], [ "# def create_tables():\n# \"\"\" create tables in the PostgreSQL database\"\"\"\n# commands = (\n# \"\"\"\n# CREATE TABLE full_trip_table (\n# index INTEGER PRIMARY KEY,\n# user_id VARCHAR(225) NOT NULL,\n# full_trip_id VARCHAR(225) NOT NULL,\n# trip_location_ids VARCHAR(225),\n# default BOOLEAN NOT NULL,\n# county VARCHAR(225) NOT NULL,\n# state VARCHAR(225) NOT NULL,\n# details VARCHAR(MAX),\n# n_days VARCHAR(225) NOT NULL\n# )\n# \"\"\",\n# \"\"\" CREATE TABLE day_trip_table (\n# trip_locations_id \n# full_day\n# default \n# county \n# state\n# details\n# )\n# \"\"\",\n# \"\"\"\n# CREATE TABLE poi_detail_table (\n# part_id INTEGER PRIMARY KEY,\n# file_extension VARCHAR(5) NOT NULL,\n# drawing_data BYTEA NOT NULL,\n# FOREIGN KEY (part_id)\n# REFERENCES parts (part_id)\n# ON UPDATE CASCADE ON DELETE CASCADE\n# )\n# \"\"\",\n# \"\"\"\n# CREATE TABLE google_travel_time_table (\n# index INTEGER PRIMARY KEY,\n# id_ VARCHAR NOT NULL,\n# orig_name VARCHAR,\n# orig_idx VARCHAR,\n# dest_name VARCHAR,\n# dest_idx VARCHAR,\n# orig_coord0 INTEGER,\n# orig_coord1 INTEGER,\n# dest_coord0 INTEGER,\n# dest_coord1 INTEGER,\n# orig_coords VARCHAR,\n# dest_coords VARCHAR,\n# google_driving_url VARCHAR,\n# google_walking_url VARCHAR,\n# driving_result VARCHAR,\n# walking_result VARCHAR,\n# google_driving_time INTEGER,\n# google_walking_time INTEGER\n \n\n# )\n# \"\"\")\n# conn = None\n# try:\n# # read the connection parameters\n# params = config()\n# # connect to the PostgreSQL server\n# conn = psycopg2.connect(**params)\n# cur = conn.cursor()\n# # create table one by one\n# for command in commands:\n# cur.execute(command)\n# # close communication with the PostgreSQL database server\n# cur.close()\n# # commit the changes\n# conn.commit()\n# except (Exception, psycopg2.DatabaseError) as error:\n# print(error)\n# finally:\n# if conn is not None:\n# conn.close()", "_____no_output_____" ], [ "# full_trip_table = pd.DataFrame(columns =['user_id', 'full_trip_id', 'trip_location_ids', 'default', 'county', 'state', 'details', 'n_days'])\n\n# day_trip_locations_table = pd.DataFrame(columns =['trip_locations_id','full_day', 'default', 'county', 'state','details'])\n\n# google_travel_time_table = pd.DataFrame(columns =['id_','orig_name','orig_idx','dest_name','dest_idx','orig_coord0','orig_coord1',\\\n# 'dest_coord0','dest_coord1','orig_coords','dest_coords','google_driving_url',\\\n# 'google_walking_url','driving_result','walking_result','google_driving_time',\\\n# 'google_walking_time'])", "_____no_output_____" ], [ "# read poi details csv file \npoi_detail = pd.read_csv(\"./step9_poi.csv\", index_col=0)\npoi_detail['address'] = None\npoi_detail['rating']=poi_detail['rating'].fillna(0)\n#read US city state and 
county csv file\ndf_counties = pd.read_csv('./us_cities_states_counties.csv',sep='|')\n#find counties without duplicate\ndf_counties_u = df_counties.drop('City alias',axis = 1).drop_duplicates()\n\n", "_____no_output_____" ], [ "def init_db_tables():\n full_trip_table = pd.DataFrame(columns =['user_id', 'full_trip_id', 'trip_location_ids', 'default', 'county', 'state', 'details', 'n_days'])\n\n day_trip_locations_table = pd.DataFrame(columns =['trip_locations_id','full_day', 'default', 'county', 'state','details','event_type','event_ids'])\n\n google_travel_time_table = pd.DataFrame(columns =['id_','orig_name','orig_idx','dest_name','dest_idx','orig_coord0','orig_coord1',\\\n 'dest_coord0','dest_coord1','orig_coords','dest_coords','google_driving_url',\\\n 'google_walking_url','driving_result','walking_result','google_driving_time',\\\n 'google_walking_time'])\n day_trip_locations_table.loc[0] = ['CALIFORNIA-SAN-DIEGO-1-3-0', True, True, 'SAN DIEGO', 'California',\n [\"{'address': '15500 San Pasqual Valley Rd, Escondido, CA 92027, USA', 'id': 2259, 'day': 0, 'name': u'San Diego Zoo Safari Park'}\", \"{'address': 'Safari Walk, Escondido, CA 92027, USA', 'id': 2260, 'day': 0, 'name': u'Meerkat'}\", \"{'address': '1999 Citracado Parkway, Escondido, CA 92029, USA', 'id': 3486, 'day': 0, 'name': u'Stone'}\", \"{'address': '1999 Citracado Parkway, Escondido, CA 92029, USA', 'id': 3487, 'day': 0, 'name': u'Stone Brewery'}\", \"{'address': 'Mount Woodson Trail, Poway, CA 92064, USA', 'id': 4951, 'day': 0, 'name': u'Lake Poway'}\", \"{'address': '17130 Mt Woodson Rd, Ramona, CA 92065, USA', 'id': 4953, 'day': 0, 'name': u'Potato Chip Rock'}\", \"{'address': '17130 Mt Woodson Rd, Ramona, CA 92065, USA', 'id': 4952, 'day': 0, 'name': u'Mt. Woodson'}\"],\n 'big','[2259, 2260,3486,3487,4951,4953,4952]']\n google_travel_time_table.loc[0] = ['439300002871', u'Moonlight Beach', 4393.0,\n u'Carlsbad Flower Fields', 2871.0, -117.29692141333341,\n 33.047769600024424, -117.3177652511278, 33.124079753475236,\n '33.0477696,-117.296921413', '33.1240797535,-117.317765251',\n 'https://maps.googleapis.com/maps/api/distancematrix/json?origins=33.0477696,-117.296921413&destinations=33.1240797535,-117.317765251&mode=driving&language=en-EN&sensor=false&key=AIzaSyDJh9EWCA_v0_B3SvjzjUA3OSVYufPJeGE',\n 'https://maps.googleapis.com/maps/api/distancematrix/json?origins=33.0477696,-117.296921413&destinations=33.1240797535,-117.317765251&mode=walking&language=en-EN&sensor=false&key=AIzaSyDJh9EWCA_v0_B3SvjzjUA3OSVYufPJeGE',\n \"{'status': 'OK', 'rows': [{'elements': [{'duration': {'text': '14 mins', 'value': 822}, 'distance': {'text': '10.6 km', 'value': 10637}, 'status': 'OK'}]}], 'origin_addresses': ['233 C St, Encinitas, CA 92024, USA'], 'destination_addresses': ['5754-5780 Paseo Del Norte, Carlsbad, CA 92008, USA']}\",\n \"{'status': 'OK', 'rows': [{'elements': [{'duration': {'text': '2 hours 4 mins', 'value': 7457}, 'distance': {'text': '10.0 km', 'value': 10028}, 'status': 'OK'}]}], 'origin_addresses': ['498 B St, Encinitas, CA 92024, USA'], 'destination_addresses': ['5754-5780 Paseo Del Norte, Carlsbad, CA 92008, USA']}\",\n 13.0, 124.0]\n full_trip_table.loc[0] = ['gordon_lee01', 'CALIFORNIA-SAN-DIEGO-1-3',\n \"['CALIFORNIA-SAN-DIEGO-1-3-0', 'CALIFORNIA-SAN-DIEGO-1-3-1', 'CALIFORNIA-SAN-DIEGO-1-3-2']\",\n True, 'SAN DIEGO', 'California',\n '[\"{\\'address\\': \\'15500 San Pasqual Valley Rd, Escondido, CA 92027, USA\\', \\'id\\': 2259, \\'day\\': 0, \\'name\\': u\\'San Diego Zoo Safari Park\\'}\", \"{\\'address\\': 
\\'Safari Walk, Escondido, CA 92027, USA\\', \\'id\\': 2260, \\'day\\': 0, \\'name\\': u\\'Meerkat\\'}\", \"{\\'address\\': \\'1999 Citracado Parkway, Escondido, CA 92029, USA\\', \\'id\\': 3486, \\'day\\': 0, \\'name\\': u\\'Stone\\'}\", \"{\\'address\\': \\'1999 Citracado Parkway, Escondido, CA 92029, USA\\', \\'id\\': 3487, \\'day\\': 0, \\'name\\': u\\'Stone Brewery\\'}\", \"{\\'address\\': \\'Mount Woodson Trail, Poway, CA 92064, USA\\', \\'id\\': 4951, \\'day\\': 0, \\'name\\': u\\'Lake Poway\\'}\", \"{\\'address\\': \\'17130 Mt Woodson Rd, Ramona, CA 92065, USA\\', \\'id\\': 4953, \\'day\\': 0, \\'name\\': u\\'Potato Chip Rock\\'}\", \"{\\'address\\': \\'17130 Mt Woodson Rd, Ramona, CA 92065, USA\\', \\'id\\': 4952, \\'day\\': 0, \\'name\\': u\\'Mt. Woodson\\'}\", \"{\\'address\\': \\'1 Legoland Dr, Carlsbad, CA 92008, USA\\', \\'id\\': 2870, \\'day\\': 1, \\'name\\': u\\'Legoland\\'}\", \"{\\'address\\': \\'5754-5780 Paseo Del Norte, Carlsbad, CA 92008, USA\\', \\'id\\': 2871, \\'day\\': 1, \\'name\\': u\\'Carlsbad Flower Fields\\'}\", \"{\\'address\\': \\'211-359 The Strand N, Oceanside, CA 92054, USA\\', \\'id\\': 2089, \\'day\\': 1, \\'name\\': u\\'Oceanside Pier\\'}\", \"{\\'address\\': \\'211-359 The Strand N, Oceanside, CA 92054, USA\\', \\'id\\': 2090, \\'day\\': 1, \\'name\\': u\\'Pier\\'}\", \"{\\'address\\': \\'1016-1024 Neptune Ave, Encinitas, CA 92024, USA\\', \\'id\\': 2872, \\'day\\': 1, \\'name\\': u\\'Encinitas\\'}\", \"{\\'address\\': \\'625 Pan American Rd E, San Diego, CA 92101, USA\\', \\'id\\': 147, \\'day\\': 2, \\'name\\': u\\'Balboa Park\\'}\", \"{\\'address\\': \\'1849-1863 Zoo Pl, San Diego, CA 92101, USA\\', \\'id\\': 152, \\'day\\': 2, \\'name\\': u\\'San Diego Zoo\\'}\", \"{\\'address\\': \\'701-817 Coast Blvd, La Jolla, CA 92037, USA\\', \\'id\\': 148, \\'day\\': 2, \\'name\\': u\\'La Jolla\\'}\", \"{\\'address\\': \\'10051-10057 Pebble Beach Dr, Santee, CA 92071, USA\\', \\'id\\': 4630, \\'day\\': 2, \\'name\\': u\\'Santee Lakes\\'}\", \"{\\'address\\': \\'Lake Murray Bike Path, La Mesa, CA 91942, USA\\', \\'id\\': 4545, \\'day\\': 2, \\'name\\': u\\'Lake Murray\\'}\", \"{\\'address\\': \\'4905 Mt Helix Dr, La Mesa, CA 91941, USA\\', \\'id\\': 4544, \\'day\\': 2, \\'name\\': u\\'Mt. 
Helix\\'}\", \"{\\'address\\': \\'1720 Melrose Ave, Chula Vista, CA 91911, USA\\', \\'id\\': 1325, \\'day\\': 2, \\'name\\': u\\'Thick-billed Kingbird\\'}\", \"{\\'address\\': \\'711 Basswood Ave, Imperial Beach, CA 91932, USA\\', \\'id\\': 1326, \\'day\\': 2, \\'name\\': u\\'Lesser Sand-Plover\\'}\"]',\n 3.0]\n engine = create_engine('postgresql://zoesh@localhost:5432/travel_with_friends')\n # full_trip_table = pd.read_csv('./full_trip_table.csv', index_col= 0)\n # full_trip_table.to_sql('full_trip_table', engine,if_exists='append')\n\n full_trip_table.to_sql('full_trip_table',engine, if_exists = \"replace\")\n day_trip_locations_table.to_sql('day_trip_table',engine, if_exists = \"replace\")\n google_travel_time_table.to_sql('google_travel_time_table',engine, if_exists = \"replace\")\n poi_detail.to_sql('poi_detail_table',engine, if_exists = \"replace\")\n \n \n df_counties = pd.read_csv('/Users/zoesh/Desktop/travel_with_friends/travel_with_friends/us_cities_states_counties.csv',sep='|')\n df_counties_u = df_counties.drop('City alias',axis = 1).drop_duplicates()\n df_counties_u.columns = [\"city\",\"state_abb\",\"state\",\"county\"]\n df_counties_u.to_sql('county_table',engine, if_exists = \"replace\")\ninit_db_tables()", "_____no_output_____" ], [ "def cold_start_places(df, county, state, city, number_days, first_day_full = True, last_day_full = True):\n \n if len(county.values) != 0:\n county = county.values[0]\n temp_df = df[(df['county'] == county) & (df['state'] == state)]\n else:\n temp_df = df[(df['city'] == city) & (df['state'] == state)]\n\n return county, temp_df\ndef find_county(state, city):\n \n conn = psycopg2.connect(conn_str)\n cur = conn.cursor()\n cur.execute(\"select county from county_table where city = '%s' and state = '%s';\" %(city.title(), state.title()))\n\n county = cur.fetchone()\n conn.close()\n if county:\n return county[0]\n else:\n return None\ndef db_start_location(county, state, city):\n conn = psycopg2.connect(conn_str)\n cur = conn.cursor()\n if county:\n cur.execute(\"select index, coord0, coord1, adjusted_normal_time_spent, poi_rank, rating from poi_detail_table where county = '%s' and state = '%s'; \"%(county.upper(), state))\n\n else:\n print \"else\"\n cur.execute(\"select index, coord0, coord1, adjusted_normal_time_spent, poi_rank, rating from poi_detail_table where city = '%s' and state = '%s'; \"%(city, state))\n a = cur.fetchall()\n conn.close()\n return np.array(a)", "_____no_output_____" ], [ "a1= db_start_location('San Francisco',\"California\",\"San Francisco\")\npoi_coords = a1[:,1:3]\nn_days = 1\ncurrent_events =[]\nbig_ix, med_ix, small_ix =[],[],[]\nkmeans = KMeans(n_clusters=n_days).fit(poi_coords)\ni = 0\nfor ix, label in enumerate(kmeans.labels_):\n# print ix, label\n if label == i:\n time = a1[ix,3]\n event_ix = a1[ix,0]\n current_events.append(event_ix)\n if time > 180 :\n big_ix.append(ix)\n\n elif time >= 120 :\n med_ix.append(ix)\n else:\n small_ix.append(ix)\nprint big_ix, med_ix, small_ix\n# big_ = a1[[big_ix],4:]\n# print med_ix\n# print a1\nbig_ = a1[big_ix][:,[0,4,5]]\nmed_ = a1[med_ix][:,[0,4,5]]\nsmall_ = a1[small_ix][:,[0,4,5]]\n", "_____no_output_____" ], [ "list_a1 = sorted_events(a1,small_ix)\n", "_____no_output_____" ], [ "from operator import itemgetter\nmed_s= sorted(med_, key=itemgetter(1,2))\na = [[1,3,3],[2,2,5],[3,4,4],[4,2,2]]\n# [sorted(a, key=itemgetter(1,2)) for i in range(100)]\nb = sorted(a, key=itemgetter(1,2))\na.sort(key=lambda k: (k[1], -k[2]), reverse=False)\n# print med_s\nprint a", 
"_____no_output_____" ], [ "def default_cold_start_places(df,df_counties_u, day_trip_locations,full_trip_table,df_poi_travel_info,number_days = [1,2,3,4,5]):\n \n df_c = df_counties_u.groupby(['State full','County']).count().reset_index()\n for state, county,_,_ in df_c.values[105:150]:\n temp_df = df[(df['county'] == county) & (df['state'] == state)]\n if temp_df.shape[0]!=0:\n if sum(temp_df.adjusted_normal_time_spent) < 360:\n number_days = [1]\n elif sum(temp_df.adjusted_normal_time_spent) < 720:\n number_days = [1,2]\n big_events = temp_df[temp_df.adjusted_normal_time_spent > 180]\n med_events = temp_df[(temp_df.adjusted_normal_time_spent>= 120)&(temp_df.adjusted_normal_time_spent<=180)]\n small_events = temp_df[temp_df.adjusted_normal_time_spent < 120]\n for i in number_days:\n n_days = i\n full_trip_table, day_trip_locations, new_trip_df1, df_poi_travel_info = \\\n default_search_cluster_events(df, df_counties_u, county, state, big_events,med_events, \\\n small_events, temp_df, n_days,day_trip_locations, full_trip_table,\\\n df_poi_travel_info)\n print county, state\n print full_trip_table.shape, len(day_trip_locations), new_trip_df1.shape, df_poi_travel_info.shape\n return None", "_____no_output_____" ], [ "# full_trip_table = pd.DataFrame(columns =['user_id', 'full_trip_id', 'trip_location_ids', 'default', 'county', 'state', 'details', 'n_days'])\n\n# day_trip_locations_table = pd.DataFrame(columns =['trip_locations_id','full_day', 'default', 'county', 'state','details'])\n\n# google_travel_time_table = pd.DataFrame(columns =['id_','orig_name','orig_idx','dest_name','dest_idx','orig_coord0','orig_coord1',\\\n# 'dest_coord0','dest_coord1','orig_coords','dest_coords','google_driving_url',\\\n# 'google_walking_url','driving_result','walking_result','google_driving_time',\\\n# 'google_walking_time'])\n# google_travel_time_table.loc[0] = ['000000000001', \"home\", '0000', 'space', '0001', 999 ,999, 999.1,999.1, \"999,999\",\"999.1,999.1\",\"http://google.com\",\"http://google.com\", \"\", \"\", 0, 0 ]", "_____no_output_____" ], [ "# google_travel_time_table.index=google_travel_time_table.index.astype(int)", "_____no_output_____" ], [ "# engine = create_engine('postgresql://Gon@localhost:5432/travel_with_friends')\n# # full_trip_table = pd.read_csv('./full_trip_table.csv', index_col= 0)\n# # full_trip_table.to_sql('full_trip_table', engine,if_exists='append')\n\n# full_trip_table.to_sql('full_trip_table',engine, if_exists = \"append\")\n# day_trip_locations_table.to_sql('day_trip_table',engine, if_exists = \"append\")\n# google_travel_time_table.to_sql('google_travel_time_table',engine, if_exists = \"append\")\n# # df.to_sql('poi_detail_table',engine, if_exists = \"append\")", "_____no_output_____" ], [ "def trip_df_cloest_distance(trip_df, event_type):\n points = trip_df[['coord0','coord1']].values.tolist()\n n, D = mk_matrix(points, distL2) # create the distance matrix\n if len(points) >= 3:\n if event_type == 'big':\n tour = nearest_neighbor(n, trip_df.shape[0]-1, D) # create a greedy tour, visiting city 'i' first\n z = length(tour, D)\n z = localsearch(tour, z, D)\n elif event_type == 'med':\n tour = nearest_neighbor(n, trip_df.shape[0]-2, D) # create a greedy tour, visiting city 'i' first\n z = length(tour, D)\n z = localsearch(tour, z, D)\n else:\n tour = nearest_neighbor(n, 0, D) # create a greedy tour, visiting city 'i' first\n z = length(tour, D)\n z = localsearch(tour, z, D)\n return tour\n else:\n return range(len(points))\n", "_____no_output_____" ], [ "def 
get_event_ids_list(trip_locations_id):\n conn = psycopg2.connect(conn_str) \n cur = conn.cursor() \n cur.execute(\"select event_ids,event_type from day_trip_table where trip_locations_id = '%s' \" %(trip_locations_id))\n event_ids,event_type = cur.fetchone()\n event_ids = ast.literal_eval(event_ids)\n conn.close()\n return event_ids,event_type\n\ndef db_event_cloest_distance(trip_locations_id=None,event_ids=None, event_type = 'add',new_event_id = None):\n if new_event_id or not event_ids:\n event_ids, event_type = get_event_ids_list(trip_locations_id)\n if new_event_id:\n event_ids.append(new_event_id)\n \n conn = psycopg2.connect(conn_str) \n cur = conn.cursor()\n\n print event_ids\n points = np.zeros((len(event_ids), 3))\n for i,v in enumerate(event_ids):\n cur.execute(\"select index, coord0, coord1 from poi_detail_table where index = %i;\"%(float(v)))\n points[i] = cur.fetchone()\n conn.close()\n n,D = mk_matrix(points[:,1:], distL2)\n if len(points) >= 3:\n if event_type == 'add':\n tour = nearest_neighbor(n, 0, D)\n # create a greedy tour, visiting city 'i' first\n z = length(tour, D)\n z = localsearch(tour, z, D)\n return np.array(event_ids)[tour], event_type\n #need to figure out other cases\n else:\n tour = nearest_neighbor(n, 0, D)\n # create a greedy tour, visiting city 'i' first\n z = length(tour, D)\n z = localsearch(tour, z, D)\n return np.array(event_ids)[tour], event_type\n else:\n return np.array(event_ids), event_type\n", "_____no_output_____" ], [ "event_ids = np.array([1,2,3])", "_____no_output_____" ], [ "np.array(event_ids)", "_____no_output_____" ], [ "def check_full_trip_id(full_trip_id, debug):\n conn = psycopg2.connect(conn_str) \n cur = conn.cursor() \n cur.execute(\"select details from full_trip_table where full_trip_id = '%s'\" %(full_trip_id)) \n a = cur.fetchone()\n conn.close()\n if bool(a):\n if not debug: \n return a[0]\n else:\n return True\n else:\n return False\n\ndef check_full_trip_id(day_trip_id, debug):\n conn = psycopg2.connect(conn_str) \n cur = conn.cursor() \n cur.execute(\"select details from day_trip_table where trip_locations_id = '%s'\" %(day_trip_id)) \n a = cur.fetchone()\n conn.close()\n if bool(a):\n if not debug: \n return a[0]\n else:\n return True\n else:\n return False\n\ndef check_travel_time_id(new_id):\n conn = psycopg2.connect(conn_str)\n cur = conn.cursor()\n cur.execute(\"select google_driving_time from google_travel_time_table where id_ = '%s'\" %(new_id))\n a = cur.fetchone()\n conn.close()\n if bool(a):\n return True\n else:\n return False\n", "_____no_output_____" ], [ "def sorted_events(info,ix):\n '''\n find the event_id, ranking and rating columns\n sorted base on ranking then rating\n \n return sorted list \n '''\n event_ = info[ix][:,[0,4,5]]\n return np.array(sorted(event_, key=lambda x: (x[1], -x[2])))", "_____no_output_____" ], [ "# test_ai_ids, type_= create_trip_df(big_,medium_,small_)\nbig_.shape, med_.shape, small_.shape\n", "_____no_output_____" ], [ "med_[0,1] ,small_[0,1] , med_[0,0]\n# small_[0:8,0].concatenate(medium_[0:2,0])\nc = small_[0:10,]\nd = med_[0:2,]\nprint np.vstack((c,d))\n# print list(np.array(sorted(np.vstack((c,d)), key=lambda x: (x[1],-x[2])))[:,0])\nprint list(np.array(sorted(np.vstack((small_[0:10,:],med_)), key=lambda x: (x[1],-x[2])))[:,0])\n# np.vstack((small_[0:10,],med_))", "_____no_output_____" ], [ "a_ids, a_type=create_event_id_list(big_,med_,small_)\n# conn = psycopg2.connect(conn_str)\n# cur = conn.cursor()\n# for i,v in enumerate(a_ids):\n# # print i, v, type(v)\n# 
cur.execute(\"select index, coord0, coord1 from poi_detail_table where index = %i;\"%(float(v)))\n# aaa = cur.fetchone()\n# # print aaa\n# conn.close()\n\n# new_a_ids, new_a_type=db_event_cloest_distance(event_ids = a_ids, event_type = a_type)\n# print new_a_ids, new_a_type", "_____no_output_____" ], [ "def create_event_id_list(big_,medium_,small_):\n event_type = ''\n if big_.shape[0] >= 1:\n if (medium_.shape[0] < 2) or (big_[0,1] <= medium_[0,1]):\n if small_.shape[0] >= 6:\n event_ids = np.concatenate((big_[0,0],small_[0:6,0]),axis=0) \n else:\n event_ids = np.concatenate((big_[0,0],small_[:,0]),axis=0)\n event_type = 'big'\n else:\n if small_.shape[0] >= 8:\n event_ids = np.concatenate((medium_[0:2,0],small_[0:8,0]),axis=0)\n else:\n event_ids = np.concatenate((medium_[0:2,0],small_[:,0]),axis=0)\n event_type = 'med'\n elif medium_.shape[0] >= 2:\n if small_.shape[0] >= 8:\n event_ids = np.concatenate((medium_[0:2,0],small_[0:8,0]),axis=0)\n else:\n event_ids = np.concatenate((medium_[0:2,0],small_[:,0]),axis=0)\n event_type = 'med'\n elif medium_.shape[0]> 0:\n if small_.shape[0] >= 10:\n event_ids = np.array(sorted(np.vstack((small_[0:10,:],medium_)), key=lambda x: (x[1],-x[2])))[:,0]\n else:\n event_ids = np.array(sorted(np.vstack((small_,medium_)), key=lambda x: (x[1],-x[2])))[:,0]\n event_type = 'small'\n else:\n if small_.shape[0]> 0:\n event_ids = small_[:,0]\n event_type = 'small'\n return event_ids, event_type", "_____no_output_____" ], [ "def create_trip_df(big_,medium_,small_):\n event_type = ''\n if big_.shape[0] >= 1:\n if (medium_.shape[0] < 2) or (big_.iloc[0].poi_rank <= medium_.iloc[0].poi_rank):\n if small_.shape[0] >= 6:\n trip_df = small_.iloc[0:6].append(big_.iloc[0])\n else:\n trip_df = small_.append(big_.iloc[0])\n event_type = 'big'\n else:\n if small_.shape[0] >= 8:\n trip_df = small_.iloc[0:8].append(medium_.iloc[0:2])\n else:\n trip_df = small_.append(medium_.iloc[0:2])\n event_type = 'med'\n elif medium_.shape[0] >= 2:\n if small_.shape[0] >= 8:\n trip_df = small_.iloc[0:8].append(medium_.iloc[0:2])\n else:\n trip_df = small_.append(medium_.iloc[0:2])\n event_type = 'med'\n else:\n if small_.shape[0] >= 10:\n trip_df = small_.iloc[0:10].append(medium_).sort_values(['poi_rank', 'rating'], ascending=[True, False])\n else:\n trip_df = small_.append(medium_).sort_values(['poi_rank', 'rating'], ascending=[True, False])\n event_type = 'small'\n return trip_df, event_type\n\n", "_____no_output_____" ], [ "my_key = 'AIzaSyDJh9EWCA_v0_B3SvjzjUA3OSVYufPJeGE'\n# my_key = 'AIzaSyAwx3xg6oJ0yiPV3MIunBa1kx6N7v5Tcw8'\ndef google_driving_walking_time(tour,trip_df,event_type):\n poi_travel_time_df = pd.DataFrame(columns =['id_','orig_name','orig_idx','dest_name','dest_idx','orig_coord0','orig_coord1',\\\n 'dest_coord0','dest_coord1','orig_coords','dest_coords','google_driving_url',\\\n 'google_walking_url','driving_result','walking_result','google_driving_time',\\\n 'google_walking_time'])\n ids_, orig_names,orid_idxs,dest_names,dest_idxs,orig_coord0s,orig_coord1s,dest_coord0s,dest_coord1s = [],[],[],[],[],[],[],[],[]\n orig_coordss,dest_coordss,driving_urls,walking_urls,driving_results,walking_results,driving_times,walking_times = [],[],[],[],[],[],[],[]\n trip_id_list=[]\n for i in range(len(tour)-1):\n id_ = str(trip_df.loc[trip_df.index[tour[i]]].name) + '0000'+str(trip_df.loc[trip_df.index[tour[i+1]]].name)\n \n result_check_travel_time_id = check_travel_time_id(id_)\n if not result_check_travel_time_id:\n \n orig_name = trip_df.loc[trip_df.index[tour[i]]]['name']\n 
orig_idx = trip_df.loc[trip_df.index[tour[i]]].name\n dest_name = trip_df.loc[trip_df.index[tour[i+1]]]['name']\n dest_idx = trip_df.loc[trip_df.index[tour[i+1]]].name\n orig_coord0 = trip_df.loc[trip_df.index[tour[i]]]['coord0']\n orig_coord1 = trip_df.loc[trip_df.index[tour[i]]]['coord1']\n dest_coord0 = trip_df.loc[trip_df.index[tour[i+1]]]['coord0']\n dest_coord1 = trip_df.loc[trip_df.index[tour[i+1]]]['coord1']\n orig_coords = str(orig_coord1)+','+str(orig_coord0)\n dest_coords = str(dest_coord1)+','+str(dest_coord0)\n \n google_driving_url = \"https://maps.googleapis.com/maps/api/distancematrix/json?origins={0}&destinations={1}&mode=driving&language=en-EN&sensor=false&key={2}\".\\\n format(orig_coords.replace(' ',''),dest_coords.replace(' ',''),my_key)\n google_walking_url = \"https://maps.googleapis.com/maps/api/distancematrix/json?origins={0}&destinations={1}&mode=walking&language=en-EN&sensor=false&key={2}\".\\\n format(orig_coords.replace(' ',''),dest_coords.replace(' ',''),my_key)\n driving_result= simplejson.load(urllib.urlopen(google_driving_url))\n walking_result= simplejson.load(urllib.urlopen(google_walking_url))\n \n if driving_result['rows'][0]['elements'][0]['status'] == 'ZERO_RESULTS':\n google_driving_url = \"https://maps.googleapis.com/maps/api/distancematrix/json?origins={0}&destinations={1}&mode=driving&language=en-EN&sensor=false&key={2}\".\\\n format(orig_name.replace(' ','+').replace('-','+'),dest_name.replace(' ','+').replace('-','+'),my_key)\n driving_result= simplejson.load(urllib.urlopen(google_driving_url))\n \n if walking_result['rows'][0]['elements'][0]['status'] == 'ZERO_RESULTS':\n google_walking_url = \"https://maps.googleapis.com/maps/api/distancematrix/json?origins={0}&destinations={1}&mode=walking&language=en-EN&sensor=false&key={2}\".\\\n format(orig_name.replace(' ','+').replace('-','+'),dest_name.replace(' ','+').replace('-','+'),my_key)\n walking_result= simplejson.load(urllib.urlopen(google_walking_url))\n \n if (driving_result['rows'][0]['elements'][0]['status'] == 'NOT_FOUND') and (walking_result['rows'][0]['elements'][0]['status'] == 'NOT_FOUND'):\n new_df = trip_df.drop(trip_df.iloc[tour[i+1]].name)\n new_tour = trip_df_cloest_distance(new_df,event_type)\n return google_driving_walking_time(new_tour,new_df, event_type)\n try:\n google_driving_time = driving_result['rows'][0]['elements'][0]['duration']['value']/60\n except: \n print driving_result\n\n try:\n google_walking_time = walking_result['rows'][0]['elements'][0]['duration']['value']/60\n except:\n google_walking_time = 9999\n \n \n poi_travel_time_df.loc[len(df_poi_travel_time)]=[id_,orig_name,orig_idx,dest_name,dest_idx,orig_coord0,orig_coord1,dest_coord0,\\\n dest_coord1,orig_coords,dest_coords,google_driving_url,google_walking_url,\\\n str(driving_result),str(walking_result),google_driving_time,google_walking_time]\n driving_result = str(driving_result).replace(\"'\", '\"')\n walking_result = str(walking_result).replace(\"'\", '\"')\n\n conn = psycopg2.connect(conn_str) \n cur = conn.cursor() \n cur.execute(\"select max(index) from google_travel_time_table\")\n index = cur.fetchone()[0]+1\n# print \"startindex:\", index , type(index)\n# index += 1\n# print \"end index: \" ,index\n cur.execute(\"INSERT INTO google_travel_time_table VALUES (%i, '%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s','%s', '%s', '%s', '%s', '%s', '%s', %s, %s);\"%(index, id_, orig_name, orig_idx, dest_name, dest_idx, orig_coord0, orig_coord1, dest_coord0,\\\n dest_coord1, orig_coords, dest_coords, 
google_driving_url, google_walking_url,\\\n str(driving_result), str(walking_result), google_driving_time, google_walking_time))\n conn.commit()\n conn.close()\n else:\n trip_id_list.append(id_)\n \n return tour, trip_df, poi_travel_time_df\n ", "_____no_output_____" ], [ "def db_google_driving_walking_time(event_ids, event_type):\n conn = psycopg2.connect(conn_str) \n cur = conn.cursor() \n google_ids = []\n driving_time_list = []\n walking_time_list = []\n name_list = []\n for i,v in enumerate(event_ids[:-1]):\n id_ = str(v) + '0000'+str(event_ids[i+1])\n result_check_travel_time_id = check_travel_time_id(id_)\n if not result_check_travel_time_id:\n cur.execute(\"select name, coord0, coord1 from poi_detail_table where index = '%s'\"%(v))\n orig_name, orig_coord0, orig_coord1 = cur.fetchone()\n orig_idx = v\n cur.execute(\"select name, coord0, coord1 from poi_detail_table where index = '%s'\"%(event_ids[i+1]))\n dest_name, dest_coord0, dest_coord1 = cur.fetchone()\n dest_idx = event_ids[i+1]\n orig_coords = str(orig_coord1)+','+str(orig_coord0)\n dest_coords = str(dest_coord1)+','+str(dest_coord0)\n google_driving_url = \"https://maps.googleapis.com/maps/api/distancematrix/json?origins={0}&destinations={1}&mode=driving&language=en-EN&sensor=false&key={2}\".\\\n format(orig_coords.replace(' ',''),dest_coords.replace(' ',''),my_key)\n google_walking_url = \"https://maps.googleapis.com/maps/api/distancematrix/json?origins={0}&destinations={1}&mode=walking&language=en-EN&sensor=false&key={2}\".\\\n format(orig_coords.replace(' ',''),dest_coords.replace(' ',''),my_key)\n driving_result= simplejson.load(urllib.urlopen(google_driving_url))\n walking_result= simplejson.load(urllib.urlopen(google_walking_url))\n if driving_result['rows'][0]['elements'][0]['status'] == 'ZERO_RESULTS':\n google_driving_url = \"https://maps.googleapis.com/maps/api/distancematrix/json?origins={0}&destinations={1}&mode=driving&language=en-EN&sensor=false&key={2}\".\\\n format(orig_name.replace(' ','+').replace('-','+'),dest_name.replace(' ','+').replace('-','+'),my_key)\n driving_result= simplejson.load(urllib.urlopen(google_driving_url))\n \n if walking_result['rows'][0]['elements'][0]['status'] == 'ZERO_RESULTS':\n google_walking_url = \"https://maps.googleapis.com/maps/api/distancematrix/json?origins={0}&destinations={1}&mode=walking&language=en-EN&sensor=false&key={2}\".\\\n format(orig_name.replace(' ','+').replace('-','+'),dest_name.replace(' ','+').replace('-','+'),my_key)\n walking_result= simplejson.load(urllib.urlopen(google_walking_url))\n if (driving_result['rows'][0]['elements'][0]['status'] == 'NOT_FOUND') and (walking_result['rows'][0]['elements'][0]['status'] == 'NOT_FOUND'):\n new_event_ids = list(event_ids)\n new_event_ids.pop(i+1)\n new_event_ids = db_event_cloest_distance(event_ids=new_event_ids, event_type = event_type)\n return db_google_driving_walking_time(new_event_ids, event_type)\n try:\n google_driving_time = driving_result['rows'][0]['elements'][0]['duration']['value']/60\n except: \n print v, id_, driving_result #need to debug for this\n try:\n google_walking_time = walking_result['rows'][0]['elements'][0]['duration']['value']/60\n except:\n google_walking_time = 9999\n# return event_ids, google_driving_time, google_walking_time\n \n cur.execute(\"select max(index) from google_travel_time_table\")\n index = cur.fetchone()[0]+1\n# print \"startindex:\", index , type(index)\n# index += 1\n# print \"end index: \" ,index\n driving_result = str(driving_result).replace(\"'\",'\"')\n 
walking_result = str(walking_result).replace(\"'\",'\"')\n orig_name = orig_name.replace(\"'\",\"''\")\n dest_name = dest_name.replace(\"'\",\"''\")\n cur.execute(\"INSERT INTO google_travel_time_table VALUES (%i, '%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s', '%s','%s', '%s', '%s', '%s', '%s', '%s', %s, %s);\"%(index, id_, orig_name, orig_idx, dest_name, dest_idx, orig_coord0, orig_coord1, dest_coord0,\\\n dest_coord1, orig_coords, dest_coords, google_driving_url, google_walking_url,\\\n str(driving_result), str(walking_result), google_driving_time, google_walking_time))\n# cur.execute(\"select google_driving_time, google_walking_time from google_travel_time_table \\\n# where id_ = '%s'\" %(id_))\n conn.commit()\n name_list.append(orig_name+\" to \"+ dest_name)\n google_ids.append(id_)\n driving_time_list.append(google_driving_time)\n walking_time_list.append(google_walking_time)\n else:\n \n cur.execute(\"select orig_name, dest_name, google_driving_time, google_walking_time from google_travel_time_table \\\n where id_ = '%s'\" %(id_))\n orig_name, dest_name, google_driving_time, google_walking_time = cur.fetchone()\n name_list.append(orig_name+\" to \"+ dest_name)\n google_ids.append(id_)\n driving_time_list.append(google_driving_time)\n walking_time_list.append(google_walking_time)\n \n \n conn.close()\n return event_ids, google_ids, name_list, driving_time_list, walking_time_list\n\n \n \n", "_____no_output_____" ], [ "trip_locations_id = 'CALIFORNIA-SAN-DIEGO-1-3-0'\ndefault = 1\nn_days =3\nfull_day = 1\npoi_coords = df_events[['coord0','coord1']]\ntrip_location_ids, full_trip_details =[],[]\nkmeans = KMeans(n_clusters=n_days).fit(poi_coords)\n# print kmeans.labels_\ni=0\ncurrent_events = []\nbig_ix = []\nsmall_ix = []\nmed_ix = []\nfor ix, label in enumerate(kmeans.labels_):\n if label == i:\n time = df_events.iloc[ix].adjusted_normal_time_spent\n event_ix = df_events.iloc[ix].name\n current_events.append(event_ix)\n if time > 180 :\n big_ix.append(event_ix)\n elif time >= 120 :\n med_ix.append(event_ix)\n else:\n small_ix.append(event_ix)\n\n# all_big = big.sort_values(['poi_rank', 'rating'], ascending=[True, False])\nbig_ = df_events.loc[big_ix].sort_values(['poi_rank', 'rating'], ascending=[True, False])\nsmall_ = df_events.loc[small_ix].sort_values(['poi_rank', 'rating'], ascending=[True, False])\nmedium_ = df_events.loc[med_ix].sort_values(['poi_rank', 'rating'], ascending=[True, False])\n\n# trip_df, event_type = create_trip_df(big_,medium_,small_)\n# tour = trip_df_cloest_distance(trip_df, event_type)\n# new_tour, new_trip_df, df_poi_travel_time = google_driving_walking_time(tour,trip_df,event_type)\n# new_trip_df = new_trip_df.iloc[new_tour]\n# new_trip_df1,new_df_poi_travel_time,total_time = remove_extra_events(new_trip_df, df_poi_travel_time)\n# new_trip_df1['address'] = df_addresses(new_trip_df1, new_df_poi_travel_time)\n\n# event_ids, event_type=db_event_cloest_distance(trip_locations_id)\n# event_ids, google_ids, name_list, driving_time_list, walking_time_list =db_google_driving_walking_time(event_ids, event_type)\n# event_ids, driving_time_list, walking_time_list, total_time_spent = db_remove_extra_events(event_ids, driving_time_list, walking_time_list)\n# db_address(event_ids)\n# values = day_trip(event_ids, county, state, default, full_day,n_days,i)\n# day_trip_locations.loc[len(day_trip_locations)] = values\n# trip_location_ids.append(values[0])\n# full_trip_details.extend(values[-1])\n", "_____no_output_____" ], [ "def get_fulltrip_data_default(state, city, n_days, 
day_trip_locations = True, full_trip_table = True, default = True, debug = True):\n county = find_county(state, city)\n trip_location_ids, full_trip_details =[],[]\n full_trip_id = '-'.join([str(state.upper()), str(county.upper().replace(' ','-')),str(int(default)), str(n_days)])\n \n if not check_full_trip_id(full_trip_id, debug):\n county_list_info = db_start_location(county, state, city)\n poi_coords = county_list_info[:,1:3]\n kmeans = KMeans(n_clusters=n_days).fit(poi_coords)\n if not county_list_info:\n return \"error: county_list_info is empty\"\n for i in range(n_days):\n \n if not check_day_trip_id(day_trip, debug):\n trip_locations_id = '-'.join([str(state), str(county.replace(' ','-')),str(int(default)), str(n_days),str(i)])\n current_events, big_ix, small_ix, med_ix = [],[],[],[]\n\n for ix, label in enumerate(kmeans.labels_):\n if label == i:\n time = county_list_info[ix,3]\n event_ix = county_list_info[ix,0]\n current_events.append(event_ix)\n if time > 180 :\n big_ix.append(ix)\n elif time >= 120 :\n med_ix.append(ix)\n else:\n small_ix.append(ix)\n \n \n \n big_ = sorted_events(county_list_info, big_ix)\n med_ = sorted_events(county_list_info, med_ix)\n small_ = sorted_events(county_list_info, small_ix)\n \n event_ids, event_type = create_event_id_list(big_, med_, small_)\n event_ids, event_type = db_event_cloest_distance(event_ids = event_ids, event_type = event_type)\n event_ids, google_ids, name_list, driving_time_list, walking_time_list =db_google_driving_walking_time(event_ids, event_type)\n event_ids, driving_time_list, walking_time_list, total_time_spent = db_remove_extra_events(event_ids, driving_time_list, walking_time_list)\n db_address(event_ids)\n values = db_day_trip(event_ids, county, state, default, full_day,n_days,i)\n# insert to day_trip ....\n conn = psycopg2.connect(conn_str)\n cur = conn.cursor()\n cur.execute(\"insert into day_trip_table (trip_locations_id,full_day, default, county, state, details, event_type, event_ids) VALUES ( '%s', %s, %s, '%s', '%s', '%s', '%s', '%s')\" %( trip_location_id, full_day, default, county, state, details, event_type, event_ids))\n conn.commit()\n conn.close()\n trip_location_ids.append(values[0])\n full_trip_details.extend(values[-1])\n\n else:\n print \"error: already have this day, please check the next day\"\n trip_location_ids.append(trip_locations_id)\n# call db find day trip detail \n conn = psycopg2.connect(conn_str)\n cur = conn.cursor()\n cur.execute(\"select details from day_trip_table where trip_locations_id = '%s';\"%(trip_locations_id) )\n day_trip_detail = fetchall()\n conn.close()\n full_trip_details.extend(day_trip_detail)\n\n full_trip_id = '-'.join([str(state.upper()), str(county.upper().replace(' ','-')),str(int(default)), str(n_days)])\n details = full_trip_details\n user_id = \"Admin\"\n conn = psycopg2.connect(conn_str)\n cur = conn.cursor()\n cur.execute(\"insert into full_trip_table(user_id, full_trip_id,trip_location_ids, default, county, state, details, n_days) VALUES ('%s', '%s', '%s', %s, '%s', '%s', '%s', %s)\" %(user_id, full_trip_id, str(trip_location_ids), default, county, state, details, n_days))\n conn.commit()\n conn.close()\n\n return \"finish update %s, %s into database\" %(state, county)\n else:\n return \"%s, %s already in database\" %(state, county) \n ", "_____no_output_____" ], [ "def db_day_trip(event_ids, county, state, default, full_day,n_days,i):\n conn=psycopg2.connect(conn_str)\n cur = conn.cursor()\n\n# cur.execute(\"select state, county, count(*) AS count from 
poi_detail_table where index in %s GROUP BY state, county order by count desc;\" %(tuple(test_event_ids_list),))\n# a = cur.fetchall()\n# state = a[0][0].upper()\n# county = a[0][1].upper()\n\n trip_locations_id = '-'.join([str(state), str(county.replace(' ','-')),str(int(default)), str(n_days),str(i)])\n\n #details dict includes: id, name,address, day\n\n cur.execute(\"select index, name, address from poi_detail_table where index in %s;\" %(tuple(event_ids),))\n a = cur.fetchall()\n details = [str({'id': a[x][0],'name': a[x][1],'address': a[x][2], 'day': i}) for x in range(len(a))]\n conn.close()\n return [trip_locations_id, full_day, default, county, state, details]", "_____no_output_____" ], [ "conn=psycopg2.connect(conn_str)\ncur = conn.cursor()\ncur.execute(\"select state, county, count(*) AS count from poi_detail_table where index in %s GROUP BY state, county order by count desc;\" %(tuple(test_event_ids_list),))\na = cur.fetchall()\nprint a[0][0].upper()\n# details = [str({'id': a[x][0],'name': a[x][1],'address': a[x][2], 'day': i}) for x in range(a)]\nconn.close()", "_____no_output_____" ], [ "def extend_full_trip_details(full_trip_details):\n details = {}\n addresses = []\n ids = []\n days = []\n names = []\n for item in full_trip_details:\n addresses.append(eval(item)['address'])\n ids.append(eval(item)['id'])\n days.append(eval(item)['day'])\n names.append(eval(item)['name'])\n details['addresses'] = addresses\n details['ids'] = ids\n details['days'] = days\n details['names'] = names\n return str(full_trip_details)", "_____no_output_____" ], [ "event_ids", "_____no_output_____" ], [ "print event_ids, google_ids, name_list, driving_time_list, walking_time_list", "_____no_output_____" ], [ "init_db_tables()", "_____no_output_____" ], [ "def remove_extra_events(trip_df, df_poi_travel_time):\n if sum(trip_df.adjusted_normal_time_spent)+sum(df_poi_travel_time.google_driving_time) > 480:\n new_trip_df = trip_df[:-1]\n new_df_poi_travel_time = df_poi_travel_time[:-1]\n return remove_extra_events(new_trip_df,new_df_poi_travel_time)\n else:\n return trip_df, df_poi_travel_time, sum(trip_df.adjusted_normal_time_spent)+sum(df_poi_travel_time.google_driving_time)\ndef db_remove_extra_events(event_ids, driving_time_list,walking_time_list):\n conn = psycopg2.connect(conn_str)\n cur = conn.cursor() \n cur.execute(\"select sum (adjusted_normal_time_spent) from poi_detail_table where index in %s;\" %(tuple(event_ids),))\n time_spent = cur.fetchone()[0]\n conn.close()\n time_spent += sum(np.minimum(np.array(driving_time_list),np.array(walking_time_list)))\n if time_spent > 480:\n update_event_ids = event_ids[:-1]\n update_driving_time_list = driving_time_list[:-1]\n update_walking_time_list = walking_time_list[:-1]\n return db_remove_extra_events(update_event_ids, update_driving_time_list, update_walking_time_list)\n else:\n return event_ids, driving_time_list, walking_time_list, time_spent\n \n ", "_____no_output_____" ], [ "def df_addresses(new_trip_df1, new_df_poi_travel_time):\n my_lst = []\n print new_trip_df1.index.values\n for i in new_trip_df1.index.values:\n temp_df = new_df_poi_travel_time[i == new_df_poi_travel_time.orig_idx.values]\n if temp_df.shape[0]>0:\n address = eval(temp_df.driving_result.values[0])['origin_addresses'][0]\n my_lst.append(address)\n else:\n try:\n temp_df = new_df_poi_travel_time[i == new_df_poi_travel_time.dest_idx.values]\n address = eval(temp_df.driving_result.values[0])['destination_addresses'][0]\n my_lst.append(address)\n except:\n print new_trip_df1, 
new_df_poi_travel_time\n return my_lst\n\ndef check_address(index):\n conn = psycopg2.connect(conn_str)\n cur = conn.cursor()\n cur.execute(\"select address from poi_detail_table where index = %s;\"%(index))\n a = cur.fetchone()[0]\n conn.close()\n if a:\n return True\n else:\n return False\ndef db_address(event_ids):\n conn = psycopg2.connect(conn_str)\n cur = conn.cursor()\n for i in event_ids[:-1]:\n if not check_address(i):\n cur.execute(\"select driving_result from google_travel_time_table where orig_idx = %s;\" %(i))\n a= cur.fetchone()[0]\n add = ast.literal_eval(a)['origin_addresses'][0]\n cur.execute(\"update poi_detail_table set address = '%s' where index = %s;\" %(add, i))\n conn.commit()\n last = event_ids[-1]\n if not check_address(last):\n cur.execute(\"select driving_result from google_travel_time_table where dest_idx = %s;\" %(last))\n a= cur.fetchone()[0]\n add = ast.literal_eval(a)['destination_addresses'][0]\n cur.execute(\"update poi_detail_table set address = '%s' where index = %s;\" %(add, last))\n conn.commit()\n conn.close()\n\n ", "_____no_output_____" ], [ "from numpy import *\n\ntest_event_ids_list = append(273, event_ids)\n# event_ids\nprint test_event_ids_list", "_____no_output_____" ], [ "conn=psycopg2.connect(conn_str)\ncur = conn.cursor()\ncur.execute(\"select state, county, count(*) AS count from poi_detail_table where index in %s GROUP BY state, county order by count desc;\" %(tuple(test_event_ids_list),))\na = cur.fetchall()\nprint a[0][0].upper()\n# details = [str({'id': a[x][0],'name': a[x][1],'address': a[x][2], 'day': i}) for x in range(a)]\nconn.close()", "_____no_output_____" ], [ "day_trip_locations = 'San Diego, California'\n\nf, d, n, d= search_cluster_events(df, county, state, city, 3, day_trip_locations, full_trip_table, default = True)", "_____no_output_____" ], [ "import time\nt1=time.time()\n# [index, ranking,score]\na = [[1,2,3],[2,2,6],[3,3,3],[4,3,10]]\nfrom operator import itemgetter\nprint sorted(a, key=lambda x: (x[1], -x[2]))\n\ntime.time()-t1", "_____no_output_____" ], [ "'''\nMost important event that will call all the functions and return the day details for the trip\n'''\n\n\ndef search_cluster_events(df, county, state, city, n_days, day_trip_locations = True, full_trip_table = True, default = True, debug = True):\n \n county, df_events =cold_start_places(df, county, state, city, n_days) \n \n poi_coords = df_events[['coord0','coord1']]\n kmeans = KMeans(n_clusters=n_days).fit(poi_coords)\n\n new_trip_id = '-'.join([str(state.upper()), str(county.upper().replace(' ','-')),str(int(default)), str(n_days)])\n if not check_full_trip_id(new_trip_id, debug):\n \n trip_location_ids = []\n full_trip_details = []\n for i in range(n_days):\n current_events = []\n big_ix = []\n small_ix = []\n med_ix = []\n for ix, label in enumerate(kmeans.labels_):\n if label == i:\n event_ix = poi_coords.index[ix]\n current_events.append(event_ix)\n if event_ix in big.index:\n big_ix.append(event_ix)\n elif event_ix in med.index:\n med_ix.append(event_ix)\n else:\n small_ix.append(event_ix)\n all_big = big.sort_values(['poi_rank', 'rating'], ascending=[True, False])\n big_ = big.loc[big_ix].sort_values(['poi_rank', 'rating'], ascending=[True, False])\n small_ = small.loc[small_ix].sort_values(['poi_rank', 'rating'], ascending=[True, False])\n medium_ = med.loc[med_ix].sort_values(['poi_rank', 'rating'], ascending=[True, False])\n # print 'big:', big_, 'small:', small_, 'msize:', medium_\n trip_df, event_type = create_trip_df(big_,medium_,small_)\n # print 
event_type\n tour = trip_df_cloest_distance(trip_df, event_type)\n # print tour\n new_tour, new_trip_df, df_poi_travel_time = google_driving_walking_time(tour,trip_df,event_type)\n # print new_tour, new_trip_df\n # return new_trip_df, df_poi_travel_time\n new_trip_df = new_trip_df.iloc[new_tour]\n new_trip_df1,new_df_poi_travel_time,total_time = remove_extra_events(new_trip_df, df_poi_travel_time)\n # print new_trip_df1\n new_trip_df1['address'] = df_addresses(new_trip_df1, new_df_poi_travel_time)\n # print 'total time:', total_ti\n values = day_trip(new_trip_df1, county, state, default, full_day,n_days,i)\n day_trip_locations.loc[len(day_trip_locations)] = values\n trip_location_ids.append(values[0])\n full_trip_details.extend(values[-1])\n df_poi_travel_info = df_poi_travel_info.append(new_df_poi_travel_time)\n \n full_trip_id = '-'.join([str(state.upper()), str(county.upper().replace(' ','-')),str(int(default)), str(n_days)])\n details = extend_full_trip_details(full_trip_details)\n full_trip_table.loc[len(full_trip_table)] = [\"adam\", full_trip_id, str(trip_location_ids), default, county, state, details, n_days]\n \n \n return full_trip_table, day_trip_locations, new_trip_df1, df_poi_travel_info", "_____no_output_____" ], [ "def default_search_cluster_events(df, df_counties_u, county, state, big,med, small, \\\n temp, n_days,day_trip_locations, full_trip_table,df_poi_travel_info):\n# df_poi_travel_info = pd.DataFrame(columns =['id_','orig_name','orig_idx','dest_name','dest_idx','orig_coord0','orig_coord1',\\\n# 'dest_coord0','dest_coord1','orig_coords','dest_coords','google_driving_url',\\\n# 'google_walking_url','driving_result','walking_result','google_driving_time',\\\n# 'google_walking_time'])\n poi_coords = temp[['coord0','coord1']]\n kmeans = KMeans(n_clusters=n_days, random_state=0).fit(poi_coords)\n# print kmeans.labels_\n full_trip_id = '-'.join([str(state.upper()), str(county.upper().replace(' ','-')),str(int(default)), str(n_days)])\n trip_location_ids = []\n full_trip_details = []\n for i in range(n_days):\n current_events = []\n big_ix = []\n small_ix = []\n med_ix = []\n for ix, label in enumerate(kmeans.labels_):\n if label == i:\n event_ix = poi_coords.index[ix]\n current_events.append(event_ix)\n if event_ix in big.index:\n big_ix.append(event_ix)\n elif event_ix in med.index:\n med_ix.append(event_ix)\n else:\n small_ix.append(event_ix)\n all_big = big.sort_values(['poi_rank', 'rating'], ascending=[True, False])\n big_ = big.loc[big_ix].sort_values(['poi_rank', 'rating'], ascending=[True, False])\n small_ = small.loc[small_ix].sort_values(['poi_rank', 'rating'], ascending=[True, False])\n medium_ = med.loc[med_ix].sort_values(['poi_rank', 'rating'], ascending=[True, False])\n trip_df, event_type = create_trip_df(big_,medium_,small_)\n tour = trip_df_cloest_distance(trip_df, event_type)\n new_tour, new_trip_df, df_poi_travel_time = google_driving_walking_time(tour,trip_df,event_type)\n new_trip_df = new_trip_df.iloc[new_tour]\n new_trip_df1,new_df_poi_travel_time,total_time = remove_extra_events(new_trip_df, df_poi_travel_time)\n new_trip_df1['address'] = df_addresses(new_trip_df1, new_df_poi_travel_time)\n values = day_trip(new_trip_df1, county, state, default, full_day,n_days,i)\n day_trip_locations.loc[len(day_trip_locations)] = values\n trip_location_ids.append(values[0])\n full_trip_details.extend(values[-1])\n# print 'trave time df \\n',new_df_poi_travel_time\n df_poi_travel_info = df_poi_travel_info.append(new_df_poi_travel_time)\n full_trip_id = 
'-'.join([str(state.upper()), str(county.upper().replace(' ','-')),str(int(default)), str(n_days)])\n details = extend_full_trip_details(full_trip_details)\n full_trip_table.loc[len(full_trip_table)] = [user_id, full_trip_id, \\\n str(trip_location_ids), default, county, state, details, n_days]\n return full_trip_table, day_trip_locations, new_trip_df1, df_poi_travel_info", "_____no_output_____" ], [ "###Next Steps: Add control from the users. funt1: allow to add events,(specific name or auto add)\n### auto route to the most appropirate order\n###funt2: allow to reorder the events. funt3: allow to delete the events. \n###funt4: allow to switch a new event-next to the switch and x mark icon,check mark to confirm the new place and auto order\n\n###New table for the trip info...features including trip id, event place, days, specific date, trip details. (trip tour, trip)\n\ndef ajax_available_events(county, state):\n county=county.upper()\n state = state.title()\n conn = psycopg2.connect(conn_str) \n cur = conn.cursor() \n cur.execute(\"select index, name from poi_detail_table where county='%s' and state='%s'\" %(county,state)) \n poi_lst = [item for item in cur.fetchall()]\n conn.close()\n return poi_lst\n\ndef add_event(trip_locations_id, event_day, event_id=None, event_name=None, full_day = True, unseen_event = False):\n conn = psycopg2.connect(conn_str) \n cur = conn.cursor() \n cur.execute(\"select * from day_trip_table where trip_locations_id='%s'\" %(trip_locations_id)) \n (index, trip_locations_id, full_day, default, county, state, detail, event_type, event_ids) = cur.fetchone()\n if unseen_event:\n index += 1\n trip_locations_id = '-'.join([str(eval(i)['id']) for i in eval(detail)])+'-'+event_name.replace(' ','-')+'-'+event_day\n cur.execute(\"select details from day_trip_locations where trip_locations_id='%s'\" %(trip_locations_id))\n a = cur.fetchone()\n if bool(a):\n conn.close()\n return trip_locations_id, a[0]\n else:\n cur.execute(\"select max(index) from day_trip_locations\")\n index = cur.fetchone()[0]+1\n detail = list(eval(detail))\n #need to make sure the type is correct for detail!\n new_event = \"{'address': 'None', 'id': 'None', 'day': %s, 'name': u'%s'}\"%(event_day, event_name)\n detail.append(new_event)\n #get the right format of detail: change from list to string and remove brackets and convert quote type\n new_detail = str(detail).replace('\"','').replace('[','').replace(']','').replace(\"'\",'\"')\n cur.execute(\"INSERT INTO day_trip_locations VALUES (%i, '%s',%s,%s,'%s','%s','%s');\" %(index, trip_locations_id, full_day, False, county, state, new_detail))\n conn.commit()\n conn.close()\n return trip_locations_id, detail\n else:\n event_ids = add_event_cloest_distance(trip_locations_id, event_id)\n event_ids, google_ids, name_list, driving_time_list, walking_time_list = db_google_driving_walking_time(event_ids,event_type = 'add')\n trip_locations_id = '-'.join(event_ids)+'-'+event_day\n cur.execute(\"select details from day_trip_locations where trip_locations_id='%s'\" %(trip_locations_id)) \n a = cur.fetchone()\n if not a:\n details = []\n db_address(event_ids)\n for item in event_ids:\n cur.execute(\"select index, name, address from poi_detail_table where index = '%s';\" %(item))\n a = cur.fetchone()\n detail = {'id': a[0],'name': a[1],'address': a[2], 'day': event_day}\n details.append(detail)\n #need to make sure event detail can append to table!\n cur.execute(\"insert into day_trip_table (trip_locations_id,full_day, default, county, state, details, event_type, 
event_ids) VALUES ( '%s', %s, %s, '%s', '%s', '%s', '%s', '%s')\" %( trip_location_id, full_day, False, county, state, details, event_type, event_ids))\n conn.commit()\n conn.close()\n return trip_locations_id, details\n else:\n conn.close()\n #need to make sure type is correct.\n return trip_locations_id, a[0]\n\ndef remove_event(trip_locations_id, remove_event_id, remove_event_name=None, event_day=None, full_day = True):\n conn = psycopg2.connect(conn_str) \n cur = conn.cursor() \n cur.execute(\"select * from day_trip_table where trip_locations_id='%s'\" %(trip_locations_id)) \n (index, trip_locations_id, full_day, default, county, state, detail, event_type, event_ids) = cur.fetchone()\n new_event_ids = ast.literal_eval(event_ids)\n new_event_ids.remove(remove_event_id)\n new_trip_locations_id = '-'.join(str(event_id) for event_id in new_event_ids)\n cur.execute(\"select * from day_trip_table where trip_locations_id='%s'\" %(new_trip_locations_id)) \n check_id = cur.fetchone()\n if check_id:\n return new_trip_locations_id, check_id[-3]\n detail = ast.literal_eval(detail[1:-1])\n for index, trip_detail in enumerate(detail):\n if ast.literal_eval(trip_detail)['id'] == remove_event_id:\n remove_index = index\n break\n new_detail = list(detail)\n new_detail.pop(remove_index)\n new_detail = str(new_detail).replace(\"'\",\"''\")\n default = False\n cur.execute(\"select max(index) from day_trip_table where trip_locations_id='%s'\" %(trip_locations_id)) \n new_index = cur.fetchone()[0]\n new_index+=1\n cur.execute(\"INSERT INTO day_trip_table VALUES (%i, '%s', %s, %s, '%s', '%s', '%s', '%s','%s');\" \\\n %(new_index, new_trip_locations_id, full_day, default, county, state, new_detail, event_type, new_event_ids)) \n conn.commit()\n conn.close()\n return new_trip_locations_id, new_detail\n\ndef event_type_time_spent(adjusted_normal_time_spent):\n if adjusted_normal_time_spent > 180:\n return 'big'\n elif adjusted_normal_time_spent >= 120:\n return 'med'\n else:\n return 'small'\ndef switch_event_list(full_trip_id, trip_locations_id, switch_event_id, switch_event_name=None, event_day=None, full_day = True):\n# new_trip_locations_id, new_detail = remove_event(trip_locations_id, switch_event_id)\n conn = psycopg2.connect(conn_str) \n cur = conn.cursor() \n cur.execute(\"select name, city, county, state, coord0, coord1,poi_rank, adjusted_normal_time_spent from poi_detail_table where index=%s\" %(switch_event_id))\n name, city, county, state,coord0, coord1,poi_rank, adjusted_normal_time_spent = cur.fetchone()\n event_type = event_type_time_spent(adjusted_normal_time_spent)\n avialable_lst = ajax_available_events(county, state)\n cur.execute(\"select trip_location_ids,details from full_trip_table where full_trip_id=%s\" %(full_trip_id))\n full_trip_detail = cur.fetchone()\n full_trip_detail = ast.literal_eval(full_trip_detail)\n full_trip_ids = [ast.literal_eval(item)['id'] for item in full_trip_detail]\n switch_lst = []\n for item in avialable_lst:\n index = item[0]\n if index not in full_trip_ids:\n event_ids = [switch_event_id, index]\n event_ids, google_ids, name_list, driving_time_list, walking_time_list = db_google_driving_walking_time(event_ids, event_type='switch')\n if min(driving_time_list[0], walking_time_list[0]) <= 60:\n cur.execute(\"select poi_rank, rating, adjusted_normal_time_spent from poi_detail_table where index=%s\" %(index))\n target_poi_rank, target_rating, target_adjusted_normal_time_spent = cur.fetchone()\n target_event_type = 
event_type_time_spent(target_adjusted_normal_time_spent)\n switch_lst.append([target_poi_rank, target_rating, target_event_type==event_type])\n #need to sort target_event_type, target_poi_rank and target_rating\n return {switch_event_id: switch_lst}\n\ndef switch_event(trip_locations_id, switch_event_id, final_event_id, event_day):\n new_trip_locations_id, new_detail = remove_event(trip_locations_id, switch_event_id)\n new_trip_locations_id, new_detail = add_event(new_trip_locations_id, event_day, final_event_id, full_day = True, unseen_event = False)\n return new_trip_locations_id, new_detail", "_____no_output_____" ], [ "ajax_available_events(county='San Francisco', state = \"California\")\ncounty='San Francisco'.upper()\nstate = \"California\"\nconn = psycopg2.connect(conn_str) \ncur = conn.cursor() \ncur.execute(\"select index, name from poi_detail_table where county='%s' and state='%s'\" %(county,state)) \nfull_trip_id = 'CALIFORNIA-SAN-DIEGO-1-3'\ncur.execute(\"select details from full_trip_table where full_trip_id='%s'\" %(full_trip_id))\nfull_trip_detail = cur.fetchone()[0]\nfull_trip_detail = ast.literal_eval(full_trip_detail)\n[ast.literal_eval(item)['id'] for item in full_trip_detail]", "_____no_output_____" ], [ "conn = psycopg2.connect(conn_str) \ncur = conn.cursor() \ncur.execute(\"select * from poi_detail_table where name='%s'\" %(trip_locations_id)) \n \n'Blue Springs State Park' in df.name", "_____no_output_____" ], [ "\"select * from poi_detail_table where index=%s\" %(remove_event_id)", "_____no_output_____" ], [ "trip_locations_id = 'CALIFORNIA-SAN-DIEGO-1-3-0'\nremove_event_id = 3486\nconn = psycopg2.connect(conn_str) \ncur = conn.cursor() \ncur.execute(\"select * from day_trip_table where trip_locations_id='%s'\" %(trip_locations_id)) \n(index, trip_locations_id, full_day, default, county, state, detail, event_type, event_ids) = cur.fetchone()\n# event_ids = ast.literal_eval(event_ids)\n# print detail, '\\n'\nnew_event_ids = ast.literal_eval(event_ids)\nnew_event_ids.remove(remove_event_id)\nnew_trip_locations_id = '-'.join(str(id_) for id_ in new_event_ids)\n# event_ids.remove(remove_event_id)\ndetail = ast.literal_eval(detail[1:-1])\nprint type(detail[0])\nfor index, trip_detail in enumerate(detail):\n if ast.literal_eval(trip_detail)['id'] == remove_event_id:\n remove_index = index\n break\nnew_detail = list(detail)\nnew_detail.pop(remove_index)\nnew_detail = str(new_detail).replace(\"'\",\"''\")", "_____no_output_____" ], [ "'-'.join(str(id_) for id_ in new_event_ids)", "_____no_output_____" ], [ "#Tasks:\n#0. Run the initial to debug with all the cities and counties for the poi_detail_table in hand. \n#1. Continue working on add/suggest/remove features\n#2. Start the new feature that allows user to generate the google map route for the day\n#3. new feature that allows user to explore outside the city from a direction away from the started location\n#4. get all the state and national park data into database and rework the ranking system and the poi_detail_table!\n", "_____no_output_____" ] ] ]
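A minimal, self-contained sketch of the nearest-neighbour ordering that db_event_cloest_distance relies on; the notebook's own helpers (mk_matrix, distL2, nearest_neighbor, localsearch) come from an external TSP module and are not shown in this dump, so the function name and sample coordinates below are illustrative assumptions, not the actual implementation.

import numpy as np

def nearest_neighbor_tour(points):
    # greedy tour: start at index 0 and always hop to the closest unvisited point
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda j: np.linalg.norm(points[j] - last))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

sample_coords = np.array([[32.71, -117.16],   # made-up lat/lon pairs, not real POI rows
                          [32.77, -117.07],
                          [32.73, -117.19],
                          [32.84, -117.27]])
print(nearest_neighbor_tour(sample_coords))   # -> [0, 2, 1, 3] for these sample points

A 2-opt style local search (which is what localsearch appears to provide) would then be run on this greedy tour to shorten it, as in db_event_cloest_distance above.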
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a86ac2b06106c923e7085380a2875372cb62a89
223880
ipynb
Jupyter Notebook
notebooks/ga01.ipynb
kazuyamagiwa/graphnet-automata
0f62d1269ee7f65a4334d57ff06e823a212a1bdd
[ "MIT" ]
4
2020-05-15T13:38:00.000Z
2020-08-01T13:19:57.000Z
notebooks/ga01.ipynb
kazuyamagiwa/graphnet-automata
0f62d1269ee7f65a4334d57ff06e823a212a1bdd
[ "MIT" ]
null
null
null
notebooks/ga01.ipynb
kazuyamagiwa/graphnet-automata
0f62d1269ee7f65a4334d57ff06e823a212a1bdd
[ "MIT" ]
null
null
null
615.054945
78852
0.948758
[ [ [ "import networkx as nx\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom functools import lru_cache\nfrom numba import jit\nimport community", "_____no_output_____" ], [ "import warnings; warnings.simplefilter('ignore')", "_____no_output_____" ], [ "@jit(nopython = True)\ndef generator(A):\n B = np.zeros((len(A)+2, len(A)+2), np.int_)\n B[1:-1,1:-1] = A\n for i in range(len(B)):\n for j in range(len(B)):\n count = 0\n count += B[i][j]\n if i-1 > 0:\n count += B[i-1][j]\n if i+1 < len(B):\n count += B[i+1][j]\n if j-1 > 0:\n count += B[i][j-1]\n if j+1 < len(B):\n count += B[i][j+1]\n if count == 0:\n B[i][j] = 1\n if count > 4:\n B[i][j] = 1\n if count <= 4 and count > 0:\n B[i][j] = 0\n Bnext = np.zeros_like(B, np.int_)\n Bnext = np.triu(B,1) + B.T - np.diag(np.diag(B))\n for i in range(len(Bnext)):\n for j in range(len(Bnext)):\n if Bnext[i][j] > 1:\n Bnext[i][j] = 1\n return(Bnext)", "_____no_output_____" ], [ "try:\n from functools import lru_cache\nexcept ImportError:\n from backports.functools_lru_cache import lru_cache", "_____no_output_____" ], [ "def generator2(A_, number):\n time = 0\n while time < number:\n A_ = generator(A_)\n time += 1\n return A_", "_____no_output_____" ], [ "g1 = nx.erdos_renyi_graph(3, 0.8)\nA1 = nx.to_numpy_matrix(g1)\nprint(A1)\nnx.draw(g1, node_size=150, alpha=0.5, with_labels=True, font_weight = 'bold')\n#plt.savefig('g1_0.png')\nplt.show()", "[[0. 0. 1.]\n [0. 0. 1.]\n [1. 1. 0.]]\n" ], [ "gen_A1 = generator2(A1, 100)\ngen_g1 = nx.from_numpy_matrix(gen_A1)\nnx.draw(gen_g1, node_size=10, alpha=0.5)\n#plt.savefig('g1_100.png')\nplt.show()", "_____no_output_____" ], [ "partition = community.best_partition(gen_g1)\npos = nx.spring_layout(gen_g1)\nplt.figure(figsize=(8, 8))\nplt.axis('off')\nnx.draw_networkx_nodes(gen_g1, pos, node_size=10, cmap=plt.cm.RdYlBu, node_color=list(partition.values()))\nnx.draw_networkx_edges(gen_g1, pos, alpha=0.3)\n#plt.savefig('g1_100_community.png')\nplt.show(gen_g1)", "_____no_output_____" ], [ "g2 = nx.erdos_renyi_graph(4, 0.8)\nA2 = nx.to_numpy_matrix(g2)\nprint(A2)\nnx.draw(g2, node_size=150, alpha=0.5, with_labels=True, font_weight = 'bold')\n#plt.savefig('g2_0.png')\nplt.show()", "[[0. 1. 1. 1.]\n [1. 0. 0. 1.]\n [1. 0. 0. 1.]\n [1. 1. 1. 0.]]\n" ], [ "gen_A2 = generator2(A2, 100)\ngen_g2 = nx.from_numpy_matrix(gen_A2)\nnx.draw(gen_g2, node_size=10, alpha=0.5)\n#plt.savefig('g2_100.png')\nplt.show()", "_____no_output_____" ], [ "partition = community.best_partition(gen_g2)\npos = nx.spring_layout(gen_g2)\nplt.figure(figsize=(8, 8))\nplt.axis('off')\nnx.draw_networkx_nodes(gen_g2, pos, node_size=10, cmap=plt.cm.RdYlBu, node_color=list(partition.values()))\nnx.draw_networkx_edges(gen_g2, pos, alpha=0.3)\n#plt.savefig('g2_100_community.png')\nplt.show(gen_g2)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a86c5dd2025c479a7808467fd59cc2d058e0104
327956
ipynb
Jupyter Notebook
hw02/ml-intro-hw.ipynb
rojaster/sfml
a82b7c7330faa52569f17bddebc4e51204bb4f01
[ "Naumen", "Condor-1.1", "MS-PL" ]
null
null
null
hw02/ml-intro-hw.ipynb
rojaster/sfml
a82b7c7330faa52569f17bddebc4e51204bb4f01
[ "Naumen", "Condor-1.1", "MS-PL" ]
null
null
null
hw02/ml-intro-hw.ipynb
rojaster/sfml
a82b7c7330faa52569f17bddebc4e51204bb4f01
[ "Naumen", "Condor-1.1", "MS-PL" ]
null
null
null
149.751598
71066
0.68998
[ [ [ "# SkillFactory\n## Введение в ML, введение в sklearn", "_____no_output_____" ], [ "В этом задании мы с вами рассмотрим данные с конкурса [Задача предсказания отклика клиентов ОТП Банка](http://www.machinelearning.ru/wiki/index.php?title=%D0%97%D0%B0%D0%B4%D0%B0%D1%87%D0%B0_%D0%BF%D1%80%D0%B5%D0%B4%D1%81%D0%BA%D0%B0%D0%B7%D0%B0%D0%BD%D0%B8%D1%8F_%D0%BE%D1%82%D0%BA%D0%BB%D0%B8%D0%BA%D0%B0_%D0%BA%D0%BB%D0%B8%D0%B5%D0%BD%D1%82%D0%BE%D0%B2_%D0%9E%D0%A2%D0%9F_%D0%91%D0%B0%D0%BD%D0%BA%D0%B0_%28%D0%BA%D0%BE%D0%BD%D0%BA%D1%83%D1%80%D1%81%29)", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\nplt.style.use('ggplot')\nplt.rcParams['figure.figsize'] = (12,5)", "_____no_output_____" ] ], [ [ "### Грузим данные", "_____no_output_____" ], [ "Считаем описание данных", "_____no_output_____" ] ], [ [ "\ndf_descr = pd.read_csv('data/otp_description.csv', sep='\\t', encoding='utf8')", "_____no_output_____" ], [ "df_descr", "_____no_output_____" ] ], [ [ "Считаем обучающую выборки и тестовую (которую мы как бы не видим)", "_____no_output_____" ] ], [ [ "df_train = pd.read_csv('data/otp_train.csv', sep='\\t', encoding='utf8')", "_____no_output_____" ], [ "df_train.shape", "_____no_output_____" ], [ "df_test = pd.read_csv('data/otp_test.csv', sep='\\t', encoding='utf8')", "_____no_output_____" ], [ "df_test.shape", "_____no_output_____" ], [ "df_train.head()", "_____no_output_____" ] ], [ [ "## Объединим две выборки\n\nТак как пока мы пока не умеем работать sklearn Pipeline, то для того, чтобы после предобработки столбцы в двух выборках находились на своих местах.\n\nДля того, чтобы в дальнейшем отделить их введем новый столбец \"sample\"", "_____no_output_____" ] ], [ [ "df_train.loc[:, 'sample'] = 'train'\ndf_test.loc[:, 'sample'] = 'test'", "_____no_output_____" ], [ "df = df_test.append(df_train).reset_index(drop=True)", "_____no_output_____" ], [ "df.shape", "_____no_output_____" ] ], [ [ "### Чуть-чуть посмотрим на данные", "_____no_output_____" ], [ "Посмотрим типы данных и их заполняемость", "_____no_output_____" ] ], [ [ "df.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 30133 entries, 0 to 30132\nData columns (total 53 columns):\nAGREEMENT_RK 30133 non-null int64\nTARGET 30133 non-null int64\nAGE 30133 non-null int64\nSOCSTATUS_WORK_FL 30133 non-null int64\nSOCSTATUS_PENS_FL 30133 non-null int64\nGENDER 30133 non-null int64\nCHILD_TOTAL 30133 non-null int64\nDEPENDANTS 30133 non-null int64\nEDUCATION 30133 non-null object\nMARITAL_STATUS 30133 non-null object\nGEN_INDUSTRY 27420 non-null object\nGEN_TITLE 27420 non-null object\nORG_TP_STATE 27420 non-null object\nORG_TP_FCAPITAL 27425 non-null object\nJOB_DIR 27420 non-null object\nFAMILY_INCOME 30133 non-null object\nPERSONAL_INCOME 30133 non-null object\nREG_ADDRESS_PROVINCE 30133 non-null object\nFACT_ADDRESS_PROVINCE 30133 non-null object\nPOSTAL_ADDRESS_PROVINCE 30133 non-null object\nTP_PROVINCE 29543 non-null object\nREGION_NM 30131 non-null object\nREG_FACT_FL 30133 non-null int64\nFACT_POST_FL 30133 non-null int64\nREG_POST_FL 30133 non-null int64\nREG_FACT_POST_FL 30133 non-null int64\nREG_FACT_POST_TP_FL 30133 non-null int64\nFL_PRESENCE_FL 30133 non-null int64\nOWN_AUTO 30133 non-null int64\nAUTO_RUS_FL 30133 non-null int64\nHS_PRESENCE_FL 30133 non-null int64\nCOT_PRESENCE_FL 30133 non-null int64\nGAR_PRESENCE_FL 30133 non-null int64\nLAND_PRESENCE_FL 30133 non-null int64\nCREDIT 30133 non-null object\nTERM 30133 non-null 
int64\nFST_PAYMENT 30133 non-null object\nDL_DOCUMENT_FL 30133 non-null int64\nGPF_DOCUMENT_FL 30133 non-null int64\nFACT_LIVING_TERM 30133 non-null int64\nWORK_TIME 27416 non-null float64\nFACT_PHONE_FL 30133 non-null int64\nREG_PHONE_FL 30133 non-null int64\nGEN_PHONE_FL 30133 non-null int64\nLOAN_NUM_TOTAL 30133 non-null int64\nLOAN_NUM_CLOSED 30133 non-null int64\nLOAN_NUM_PAYM 30133 non-null int64\nLOAN_DLQ_NUM 30133 non-null int64\nLOAN_MAX_DLQ 30133 non-null int64\nLOAN_AVG_DLQ_AMT 30133 non-null object\nLOAN_MAX_DLQ_AMT 30133 non-null object\nPREVIOUS_CARD_NUM_UTILIZED 600 non-null float64\nsample 30133 non-null object\ndtypes: float64(2), int64(32), object(19)\nmemory usage: 12.2+ MB\n" ] ], [ [ "Видим, что часть данных - object, скорее всего стоки.\n\n\nДавайте выведем эти значения для каждого столбца", "_____no_output_____" ] ], [ [ "for i in df_train.columns: # перебираем все столбцы\n if str(df_train[i].dtype) == 'object': # если тип столбца - object\n print('='*10)\n print(i) # выводим название столбца\n print(set(df_train[i])) # выводим все его значения (но делаем set - чтоб значения не повторялись)\n print('\\n') # выводим пустую строку", "==========\nEDUCATION\n{'Неоконченное высшее', 'Неполное среднее', 'Ученая степень', 'Среднее', 'Среднее специальное', 'Два и более высших образования', 'Высшее'}\n\n\n==========\nMARITAL_STATUS\n{'Гражданский брак', 'Состою в браке', 'Вдовец/Вдова', 'Не состоял в браке', 'Разведен(а)'}\n\n\n==========\nGEN_INDUSTRY\n{nan, 'Сборочные производства', 'Энергетика', 'Юридические услуги/нотариальные услуги', 'Банк/Финансы', 'Транспорт', 'Недвижимость', 'Сельское хозяйство', 'Страхование', 'ЧОП/Детективная д-ть', 'Торговля', 'Коммунальное хоз-во/Дорожные службы', 'Образование', 'Здравоохранение', 'Управляющая компания', 'Туризм', 'Наука', 'Информационные услуги', 'Салоны красоты и здоровья', 'СМИ/Реклама/PR-агенства', 'Логистика', 'Металлургия/Промышленность/Машиностроение', 'Химия/Парфюмерия/Фармацевтика', 'Информационные технологии', 'Нефтегазовая промышленность', 'Маркетинг', 'Ресторанный бизнес/Общественное питание', 'Государственная служба', 'Строительство', 'Подбор персонала', 'Развлечения/Искусство', 'Другие сферы'}\n\n\n==========\nGEN_TITLE\n{nan, 'Рабочий', 'Индивидуальный предприниматель', 'Руководитель низшего звена', 'Другое', 'Руководитель высшего звена', 'Работник сферы услуг', 'Партнер', 'Специалист', 'Служащий', 'Руководитель среднего звена', 'Военнослужащий по контракту', 'Высококвалифиц. специалист'}\n\n\n==========\nORG_TP_STATE\n{nan, 'Частная компания', 'Государственная комп./учреж.', 'Индивидуальный предприниматель', 'Частная ком. с инос. капиталом', 'Некоммерческая организация'}\n\n\n==========\nORG_TP_FCAPITAL\n{'Без участия', nan, 'С участием'}\n\n\n==========\nJOB_DIR\n{nan, 'Реклама и маркетинг', 'Служба безопасности', 'Кадровая служба и секретариат', 'Пр-техн. обесп. и телеком.', 'Юридическая служба', 'Снабжение и сбыт', 'Участие в основ. деятельности', 'Бухгалтерия, финансы, планир.', 'Вспомогательный техперсонал', 'Адм-хоз. и трансп. 
службы'}\n\n\n==========\nFAMILY_INCOME\n{'от 20000 до 50000 руб.', 'свыше 50000 руб.', 'до 5000 руб.', 'от 10000 до 20000 руб.', 'от 5000 до 10000 руб.'}\n\n\n==========\nPERSONAL_INCOME\n{'9330', '10600', '9100', '6700', '19700', '5500', '77000', '7200', '18700', '2100', '18000', '11600', '4340', '41000', '11800', '14800', '29000', '48000', '9700', '1950', '8350', '11400', '9240', '24500', '15000', '19500', '34000', '22500', '41900', '38000', '17600', '5680', '6359', '10700', '8100', '17200', '5100', '4000', '92000', '14300', '6100', '6299,19', '3600', '15200', '7500', '8700', '27300', '5000', '8600', '10000', '13800', '10200', '5790', '10300', '80000', '4100', '12300', '7650', '3900', '9340', '21800', '7220', '42000', '12600', '12800', '13000', '4500', '9000', '4200', '21500', '15560', '7250', '32000', '12700', '2000', '13160', '4515', '8900', '3500', '65000', '17404', '16800', '11200', '12900', '47000', '7700', '16500', '15600', '5460', '19000', '7830', '5088', '26000', '9300', '18200', '15400', '27000', '6670', '16900', '44000', '10100', '14500', '6400', '4950', '10800', '2800', '13900', '24', '5360', '8250', '8500', '7400', '3300', '8726', '2300', '29800', '20646,16', '13450', '22000', '4700', '8800', '25000', '10900', '13150', '32640', '55000', '6800', '17000', '13600', '52000', '3400', '7800', '35000', '4400', '37000', '3200', '6300', '170000', '100000', '28300', '5446', '12650', '19800', '3700', '7600', '9200', '9800', '13400', '7640', '12100', '5400', '5900', '4900', '15380', '14000', '13200', '51000', '9600', '16700', '23100', '40000', '7101', '4800', '17500', '3100', '33000', '250000', '8066', '5600', '53000', '7900', '7000', '7251', '24000', '7550', '26500', '11640', '5350', '5200', '22955', '67700', '43000', '8000', '6180', '4590', '5700', '12500', '10500', '13700', '31000', '12400', '12000', '28000', '70000', '23800', '20000', '6600', '6500', '23000', '54000', '16050', '56000', '6608', '16100', '5050', '4600', '6900', '18600', '75000', '39000', '67000', '13300', '5339', '19300', '24800', '11900', '68000', '150000', '15500', '23500', '4360', '30000', '16200', '14700', '46000', '7850', '5800', '9900', '50000', '8400', '17800', '21000', '5300', '45000', '14358', '18500', '5582', '7100', '49000', '4330', '11300', '7050', '8300', '11700', '36000', '7300', '42500', '15300', '16000', '29500', '9500', '11500', '4300', '10400', '9628', '17900', '11000', '160000', '5230', '25800', '20500', '5425', '9400', '19600', '220000', '6000', '3800', '6200', '3000', '60000', '17700', '110000', '13500', '8200'}\n\n\n==========\nREG_ADDRESS_PROVINCE\n{'Тульская область', 'Волгоградская область', 'Бурятия', 'Красноярский край', 'Брянская область', 'Северная Осетия', 'Камчатская область', 'Читинская область', 'Нижегородская область', 'Омская область', 'Курская область', 'Кабардино-Балкария', 'Ставропольский край', 'Рязанская область', 'Москва', 'Костромская область', 'Эвенкийский АО', 'Калмыкия', 'Хакасия', 'Псковская область', 'Краснодарский край', 'Хабаровский край', 'Ханты-Мансийский АО', 'Усть-Ордынский Бурятский АО', 'Астраханская область', 'Ленинградская область', 'Архангельская область', 'Мурманская область', 'Карелия', 'Ульяновская область', 'Самарская область', 'Удмуртия', 'Коми', 'Пензенская область', 'Тюменская область', 'Чувашия', 'Башкирия', 'Якутия', 'Марийская республика', 'Алтайский край', 'Татарстан', 'Новгородская область', 'Саратовская область', 'Владимирская область', 'Ростовская область', 'Магаданская область', 'Орловская область', 'Кировская область', 'Кемеровская область', 
'Оренбургская область', 'Санкт-Петербург', 'Московская область', 'Сахалинская область', 'Еврейская АО', 'Свердловская область', 'Амурская область', 'Калужская область', 'Ярославская область', 'Адыгея', 'Белгородская область', 'Ивановская область', 'Вологодская область', 'Челябинская область', 'Дагестан', 'Липецкая область', 'Пермская область', 'Тамбовская область', 'Ямало-Ненецкий АО', 'Приморский край', 'Новосибирская область', 'Томская область', 'Мордовская республика', 'Курганская область', 'Карачаево-Черкесия', 'Калининградская область', 'Агинский Бурятский АО', 'Смоленская область', 'Воронежская область', 'Тверская область', 'Иркутская область', 'Горный Алтай'}\n\n\n==========\nFACT_ADDRESS_PROVINCE\n{'Тульская область', 'Волгоградская область', 'Бурятия', 'Красноярский край', 'Брянская область', 'Северная Осетия', 'Камчатская область', 'Читинская область', 'Нижегородская область', 'Омская область', 'Курская область', 'Кабардино-Балкария', 'Ставропольский край', 'Рязанская область', 'Москва', 'Костромская область', 'Эвенкийский АО', 'Калмыкия', 'Хакасия', 'Псковская область', 'Краснодарский край', 'Хабаровский край', 'Ханты-Мансийский АО', 'Усть-Ордынский Бурятский АО', 'Астраханская область', 'Ленинградская область', 'Архангельская область', 'Мурманская область', 'Карелия', 'Ульяновская область', 'Самарская область', 'Удмуртия', 'Коми', 'Пензенская область', 'Тюменская область', 'Чувашия', 'Башкирия', 'Якутия', 'Марийская республика', 'Алтайский край', 'Татарстан', 'Новгородская область', 'Саратовская область', 'Владимирская область', 'Ростовская область', 'Магаданская область', 'Орловская область', 'Кировская область', 'Кемеровская область', 'Оренбургская область', 'Санкт-Петербург', 'Московская область', 'Сахалинская область', 'Еврейская АО', 'Свердловская область', 'Амурская область', 'Калужская область', 'Ярославская область', 'Адыгея', 'Белгородская область', 'Ивановская область', 'Вологодская область', 'Челябинская область', 'Дагестан', 'Липецкая область', 'Пермская область', 'Тамбовская область', 'Ямало-Ненецкий АО', 'Приморский край', 'Новосибирская область', 'Томская область', 'Мордовская республика', 'Курганская область', 'Карачаево-Черкесия', 'Калининградская область', 'Агинский Бурятский АО', 'Смоленская область', 'Воронежская область', 'Тверская область', 'Иркутская область', 'Горный Алтай'}\n\n\n==========\nPOSTAL_ADDRESS_PROVINCE\n{'Тульская область', 'Волгоградская область', 'Бурятия', 'Красноярский край', 'Брянская область', 'Северная Осетия', 'Камчатская область', 'Читинская область', 'Нижегородская область', 'Омская область', 'Курская область', 'Кабардино-Балкария', 'Ставропольский край', 'Рязанская область', 'Москва', 'Костромская область', 'Эвенкийский АО', 'Калмыкия', 'Хакасия', 'Псковская область', 'Краснодарский край', 'Хабаровский край', 'Ханты-Мансийский АО', 'Усть-Ордынский Бурятский АО', 'Астраханская область', 'Ленинградская область', 'Архангельская область', 'Мурманская область', 'Карелия', 'Ульяновская область', 'Самарская область', 'Удмуртия', 'Коми', 'Пензенская область', 'Тюменская область', 'Чувашия', 'Башкирия', 'Якутия', 'Марийская республика', 'Алтайский край', 'Татарстан', 'Новгородская область', 'Саратовская область', 'Владимирская область', 'Ростовская область', 'Магаданская область', 'Орловская область', 'Кировская область', 'Кемеровская область', 'Оренбургская область', 'Санкт-Петербург', 'Московская область', 'Сахалинская область', 'Еврейская АО', 'Свердловская область', 'Амурская область', 'Калужская область', 'Ярославская область', 
'Адыгея', 'Белгородская область', 'Ивановская область', 'Вологодская область', 'Челябинская область', 'Липецкая область', 'Пермская область', 'Тамбовская область', 'Ямало-Ненецкий АО', 'Приморский край', 'Новосибирская область', 'Томская область', 'Мордовская республика', 'Курганская область', 'Карачаево-Черкесия', 'Калининградская область', 'Агинский Бурятский АО', 'Смоленская область', 'Воронежская область', 'Тверская область', 'Иркутская область', 'Горный Алтай'}\n\n\n==========\nTP_PROVINCE\n{nan, 'Тульская область', 'Волгоградская область', 'Бурятия', 'Красноярский край', 'Брянская область', 'Камчатская область', 'Читинская область', 'Нижегородская область', 'Омская область', 'Курская область', 'Ставропольский край', 'Кабардино-Балкария', 'Рязанская область', 'Москва', 'Костромская область', 'Псковская область', 'Краснодарский край', 'Хабаровский край', 'Ханты-Мансийский АО', 'Астраханская область', 'Архангельская область', 'Мурманская область', 'Карелия', 'Ульяновская область', 'Самарская область', 'Удмуртия', 'Коми', 'Пензенская область', 'Тюменская область', 'Чувашия', 'Башкирия', 'Якутия', 'Марийская республика', 'Алтайский край', 'Татарстан', 'Новгородская область', 'Саратовская область', 'Владимирская область', 'Ростовская область', 'Магаданская область', 'Орловская область', 'Кировская область', 'Санкт-Петербург', 'Оренбургская область', 'Кемеровская область', 'Сахалинская область', 'Еврейская АО', 'Свердловская область', 'Амурская область', 'Калужская область', 'Ярославская область', 'Адыгея', 'Белгородская область', 'Ивановская область', 'Вологодская область', 'Челябинская область', 'Липецкая область', 'Пермская область', 'Тамбовская область', 'Приморский край', 'Новосибирская область', 'Томская область', 'Мордовская республика', 'Курганская область', 'Калининградская область', 'Смоленская область', 'Воронежская область', 'Тверская область', 'Иркутская область', 'Горный Алтай'}\n\n\n==========\nREGION_NM\n{nan, 'ЮЖНЫЙ', 'ЗАПАДНО-СИБИРСКИЙ', 'ДАЛЬНЕВОСТОЧНЫЙ', 'ПРИВОЛЖСКИЙ', 'ЦЕНТРАЛЬНЫЙ 1', 'СЕВЕРО-ЗАПАДНЫЙ', 'ЦЕНТРАЛЬНЫЙ ОФИС', 'ПОВОЛЖСКИЙ', 'ЦЕНТРАЛЬНЫЙ 2', 'ВОСТОЧНО-СИБИРСКИЙ', 'УРАЛЬСКИЙ'}\n\n\n==========\nCREDIT\n{'14755', '8856', '14169,1', '26164', '24700', '16647', '2100', '74700', '18599,4', '10431', '41000', '2180', '34500', '23020', '3717,9', '41936', '3930', '13890', '10105', '21083', '13080', '9080', '15079,91', '19375', '6145', '13990,92', '31070', '5410', '11623', '15170', '6605', '12509,9', '16064', '24480', '4243', '15616', '6903', '21021,9', '4168,8', '19782', '5940', '8148', '37017', '18468', '11080', '5297', '26595', '98042', '3010', '12210', '6396', '12687', '35550', '3138', '29942', '28810', '5207', '24125', '22621', '6279', '11160', '26790', '9754', '25900', '17273', '16105', '10250', '13830', '3854', '9574', '4107', '21146,4', '9845', '19991', '36701', '51000', '25785', '40000', '33000', '4500,95', '13669', '5988', '26120', '6269', '11390', '16364', '10618', '3220', '28258', '13489', '17448', '5857', '7551', '10693', '6097,5', '29475', '15907', '9950', '5579', '27750', '17146', '28172', '7357', '15619', '29198', '9997', '6364', '28350', '13420', '32765', '2699', '31224', '8381,5', '16720', '7433', '3060', '23220', '19857', '3411', '16750', '5889', '19850', '20295', '17804', '2599', '13456', '12057', '3311', '41929', '11034', '8904', '30600', '29205', '11040', '13847', '3550', '10323', '12443', '21950', '16420', '16732', '9427', '15030', '32348', '15990', '9809', '16780', '92000', '4479', '3119', '9301', '9264,69', '8832', '21877', '22692', '24283', 
'24465', '9576', '13332', '4070', '25298', '3093', '9448', '24260', '4100', '32090', '15041', '29445', '3103,2', '13190', '8033', '20450', '28224', '8221', '5438', '14230', '8777,1', '12195', '18200', '8489', '43010', '16065', '9282', '17956', '20698,1', '17415', '39327,2', '4195', '15780', '18650', '18049', '9840,6', '11440', '20132', '19045', '11880', '4938', '4440', '8487', '32828', '16128', '10740,64', '4752', '11460', '16760', '3849', '15948', '35991', '14550', '18440', '27962', '18668', '18740,5', '6454,8', '18707,1', '4177', '32850', '4365', '22098', '21408', '35091', '4139', '15027', '19340', '23861', '17091', '30510', '15247', '16499', '10995', '17558', '5409', '16776', '11217', '17546', '6281', '16227', '21190', '30940', '10672', '18784', '24800', '16740', '10970', '4598,6', '14669', '11266', '19377', '3743', '23396', '12347', '12262', '27877', '9098', '15216,06', '30630', '29858', '8132', '7348,52', '16875', '23498', '2629,8', '24272', '27208', '2259,9', '70705,96', '14175', '15750,91', '21927', '54738', '41490', '10445', '13366', '6086', '22515', '15096', '2855', '27600', '7469', '5494', '14520', '15524', '24500', '8393', '3570', '38000', '30190', '4135', '5748,3', '7820', '14300', '9199', '14718', '6381', '7745', '4031', '18209', '13800', '18553', '10980', '8360', '5545', '20863', '23990,83', '6039', '8199', '21881', '12434', '15800', '14200', '12948', '8341', '42408', '6838', '30597', '15704', '19224', '5015', '8780', '16447', '10352', '30627', '4949', '11720', '4248', '5384', '12849', '35326', '25068,94', '9351', '6189,3', '11260', '6830', '20530', '6129', '8536', '6490', '7788', '51300', '2290,7', '9276', '5984', '12850', '25386', '46881', '90000', '10542', '29345', '10420', '18782,2', '22905', '6108', '28000', '22075', '22561', '39240', '16613', '12209,5', '18246', '5449', '5620', '22428', '17339', '8297', '17885', '9710', '11949', '15880', '4488', '20910', '13356', '9365', '15496', '10340', '24667', '15215', '6719', '14725', '11730', '26928', '4018', '20225', '12790', '10591', '10248,2', '25966', '7452', '23160', '6469', '10616', '89344', '29790', '12405', '9538,9', '84500', '13536', '34295', '4493', '4026,55', '10330', '11996', '17350', '4673', '24889', '3389', '20789', '16180', '10339', '19500', '5535', '12120', '7917', '35683', '8840', '22959', '19116', '17280', '14780', '23044', '25340', '21150', '9395', '3675', '26340', '12451', '4200', '7649', '8530', '7990', '23688', '3143', '15078', '9655', '4291', '18990,5', '2800', '22070', '17520', '29707', '29750', '35060', '10230', '16766', '6192', '7984', '8694', '16662', '28590', '17765', '18040', '18719', '3548', '24752', '25727', '11148', '4399', '18840', '6930', '27400', '22213', '25360', '6429', '7954', '21580', '5296', '4882', '22944', '4290', '5390', '22320', '16112', '27846', '23268', '14840', '6779', '3690,92', '49100', '29280', '8171', '65400', '19977', '13989', '3063', '3353', '13897', '18578', '58947', '14391', '19320', '13990', '26547,3', '25400', '20324,92', '11672', '6598', '9235', '15085', '10990,94', '11727', '24408', '10969', '3599', '4360', '16547,3', '5070', '25700', '19589', '6070', '19497', '23550', '4047', '2690', '4370', '20498,1', '23320', '11456', '4966', '15313', '24470', '5870', '6283', '4175', '10575', '25440', '15859', '21312', '19143', '13438,3', '5242,3', '16330', '6209', '19572,1', '4252', '18730', '21180', '18067', '5504', '21658', '30390', '19503', '8980', '24047', '18240,91', '35563', '22200', '20874', '19410', '30696', '10480', '22302', '5448', '5292', '4735', '11409', '6282', '11219', 
'116360', '16657', '30735', '22349', '3464', '25662', '23025', '15220', '5295', '9338', '4633', '27500', '16063', '13435', '3350', '11296', '30196', '30455', '50098', '10046', '20977', '40200', '11576', '20397', '23347', '18643', '30590,92', '21473', '25278,14', '24198,2', '8650', '7995', '22221', '5095', '9600', '15427', '7490', '4800', '13460', '12737', '12581,89', '14005', '5359', '25182,4', '14770', '35035,2', '3420', '27398', '4998', '10260', '10331', '4589', '3188', '19300', '9995', '16190', '3998', '15975', '2219', '20699', '14196', '10185', '23680', '10188', '6596', '3070', '19044', '22330', '11487', '60080', '17910', '6338', '11394', '19854', '11115', '2680', '61540', '12345', '21615', '24295', '10548', '8278', '19660', '4510', '35250', '7645', '26961', '9660', '10540', '11718,94', '4109', '3354', '7815', '27540', '9212', '53400', '8010', '24174', '4853', '35767,4', '23050', '3850,85', '39741', '22310', '3128,2', '6071', '5119', '19324', '18074', '13009', '15610', '8771', '32822', '47704', '12546', '20320', '15624', '32280', '16849', '13148', '23711', '9406', '17863', '18710', '14320', '41400', '18575,2', '25920', '6199', '21214', '15883', '31509', '14632', '27449', '21207', '14243', '3882', '2389', '28728', '8377', '36350', '17770', '9345', '17545', '8370', '4730', '9445', '8429', '13351', '6625', '19745', '23663', '5688', '7020,9', '2007', '39028', '4475', '7739', '5540', '23300', '3099', '11571', '27062', '2529,61', '26999,2', '6887', '4533', '3690,82', '9919', '88312', '9074', '4878', '9889', '23940', '7715', '37495,2', '9054', '10655', '16272', '13520', '23410', '7996', '34190', '9669', '4021,92', '31850', '13922', '13289', '5312', '26960', '7829', '12099,1', '15795', '26555', '18206', '17399,1', '11660,93', '30430', '31040', '8406', '50940', '4572', '31817,2', '11876', '23478', '48000', '18450', '30520', '74690', '27700', '33100', '8268', '9910', '5695', '14659,2', '14854', '24721', '10300', '12591', '18096', '21620', '10938', '24550', '18085', '11247', '11524,79', '15100,82', '18100', '3711', '4185', '8811', '8792', '9534', '11488', '14998', '32310', '14960', '6698,3', '10322', '46671', '40499', '9520', '31885', '20565', '49838', '5184', '16999', '9655,2', '3059', '93000', '13680', '20490', '13916', '19289', '20609', '16185', '3391', '18374', '6839', '3240', '20720', '18269', '13155', '19567', '10151', '4840', '19120', '8120', '27260', '4490', '44415', '10413', '4936', '28340', '12896', '46800', '3663', '19748', '4746', '6925', '3515', '38032', '13321', '40490,91', '6452', '11586', '2650', '56310', '28059', '15556', '22345', '4350', '8440', '24017', '6420', '4284', '41994', '9443', '56000', '28098', '25313', '3941', '3979', '9990', '64670', '37780', '18410', '9438', '3699', '6246', '16859', '22091', '4318,2', '13257', '33725', '15927', '13887,68', '21880', '14040', '3385', '12651', '11778', '17780', '53091', '4899', '11529,91', '10790', '16312', '10646,2', '6299,2', '3930,81', '7908,1', '6475', '9749', '17842', '34000', '12995', '22500', '24520', '4867', '5607', '23820', '9470', '9422', '11977', '19769', '9490', '7500', '8491,82', '10690', '3748', '40700', '20305', '8829', '3961', '62000', '3760', '14927,83', '46960', '21265', '34120', '75581', '15897,92', '3390', '17573', '18392,91', '4212,8', '14450', '22977', '30050', '15602,5', '24035,5', '9444', '7505', '5798', '3879,17', '10971', '6935', '5415', '19358', '14229', '5097', '14644', '30920', '18113', '21430', '37560', '30545', '17297,6', '13260', '11243', '15910', '18398', '8280', '16669', '6721', '4312', '39900', 
'7991', '44290', '13197', '7950', '26740', '13498', '23865', '5350', '5193', '3940', '19460', '30031', '53162', '3750,91', '19060', '3301', '12972', '28116', '5480', '17230', '22090', '16197', '13300,91', '21704', '9371', '20338', '26997', '2160', '11389,2', '3701', '7160', '17890', '4170', '10844', '8380', '5089', '9085', '17390', '22927', '25950', '27890', '5636', '12749', '27637', '21847', '21960', '28431', '3800', '4827', '24223', '6690', '31250', '11539', '7200', '30980', '9246', '13530', '8529', '8892', '3892,2', '6698', '9260', '19187', '3547,42', '11719', '13318', '13087', '4838', '9349,92', '9651', '2719,2', '24711', '7381', '8700', '14360', '23399', '14119', '14470', '11376', '21762', '12320', '9536', '16416', '17276', '4979', '22440', '20190', '5931', '10149', '8290', '6069', '33320', '5333', '5533', '9960', '4420', '7499', '3547', '11669', '21865', '14350', '10879', '16140', '6863', '36093', '12428,7', '57581', '8890', '4293', '3898,1', '24301', '26720', '23397', '14989', '5329', '14796', '3792,92', '25830', '23822', '13200', '21850,94', '7459', '12803,4', '33900', '7560', '11750', '6058,1', '96860', '7327', '17193', '14453', '12328', '35670', '6906', '7796', '9040', '5612', '2947', '11063', '3215', '6050', '20143', '13278', '6653', '15293', '14440', '11890', '11871', '20390', '28402', '25301', '33760', '8644', '9900', '23490', '10432', '25695', '22180', '13980', '16489,9', '19342', '2871', '25199', '4773', '7667', '2060', '24160', '62550', '17624', '20570', '11853', '32770', '30951', '9045', '5162', '6712', '4872', '6789', '4785', '13590', '75535,08', '2072,7', '3173', '53673', '3555', '20877', '8097', '9750', '17995', '5968', '12425', '4891', '11748', '7285', '9308', '17541', '9610', '19970', '14338', '10755', '16561', '9349', '26379', '21760', '18598', '20242', '8311,29', '7611', '14510', '10870', '6744', '9350', '37710', '12980', '32397', '13688', '4761', '2870', '7690', '15985', '6380', '21999', '12910', '8500', '13689,86', '28230', '11911', '9648', '2490,92', '2940', '24200', '15016', '20375', '7279', '8955', '19760', '78390', '11926', '4290,91', '4242', '26461', '28640', '17789', '3100', '18696', '3690', '4048', '16480', '13232,88', '12870', '8204', '12053', '21290', '26560', '17099,1', '7857', '11130', '22150', '16745', '39590', '2075', '9670', '18340', '4973', '16730', '6916', '6309', '30420', '12550', '3410', '4869', '23944', '11496', '9866', '10010', '32670,79', '12608', '8753', '4470', '23186', '28042', '10478', '3493', '30580', '6733', '2065,5', '16989,92', '40970', '19008', '8790', '10400', '4580', '20500', '11426', '4788', '19600', '10061', '12024', '6007', '14083', '13302', '20820,44', '10735', '13178', '15372', '28983', '15360', '6585', '39254', '11335', '25983', '16979', '14847', '24881', '22945', '14540,04', '17203', '9748', '8606', '70410', '22951', '33183,4', '8950', '15214', '21500', '16350', '5268', '14207,1', '5938', '12902', '3780', '59398,2', '18990', '26600', '16908', '3179', '16392', '13010', '17470', '59590', '9300', '8156', '2990,92', '15100', '12236', '11804', '3768', '26084', '12541', '16257', '3659,5', '6056', '5398', '14781', '7391', '58500', '20562,2', '10791', '26818', '41500', '23030', '6828', '22458', '15354', '20980', '10348', '12249', '24495', '14993', '61487', '4841', '4611', '9689', '9530', '3929', '20750', '13326,5', '3069,9', '2158', '15299', '13700', '14999', '11877', '3875', '5735,7', '7735', '6297', '13325', '6161', '3399', '19129', '8398,1', '7090', '12820', '4789,92', '17117', '19959', '16200', '23060', '6348', '9594', '20792', 
'32293', '7441', '21347', '4658', '39638', '50000', '43880', '21000', '3645', '24250', '27149', '7610', '22789', '23971', '24112', '31014', '41990', '17795', '7847', '33500', '31450', '25574', '10760', '5960', '12289', '9799', '3238', '8603', '10233', '10853', '2500', '15806', '15435', '4725', '2831,97', '21381,12', '9100', '18690', '2245', '17261', '5966,7', '11098,8', '9270', '19189', '14800', '20698', '6703', '11400', '20550', '11666', '13905', '4680', '21286', '3616', '18060', '2955', '3032', '10939', '14760', '9169', '12250', '12947', '12914', '27850', '17480', '13290', '45952', '41021', '48920', '7776', '9774', '46385', '3231', '4880', '42928', '5837', '19900', '28990,91', '6715,2', '8991', '6624', '23797', '23989', '19950', '22970', '10080', '4041', '24750', '79200', '19940', '4653', '14497', '18399', '3140', '55000', '15240', '26708', '13999', '15763', '7590,31', '18328,6', '8389,91', '18530', '38200', '30230', '15760', '5745', '27250', '3159', '20610', '9198', '6093', '25641', '37531', '4809', '22329,67', '15598', '6291', '2928', '50400', '78550', '15398', '3808', '24233', '10730', '6980', '23784', '7336', '12537', '4767', '12720', '12755', '11405', '18022', '14096', '2889', '16605', '28221', '11055', '21930', '17247', '26219', '3744', '5681', '6342', '18471', '13399', '7128', '74466', '12213', '19925', '8260', '7471', '3348', '16322', '18702', '26710', '9998', '15890,9', '44900', '16450', '7892', '35522', '26875', '8064', '10169', '13039', '22469', '8150', '70245', '22803', '23749,76', '5239', '7766', '11754', '36527', '81743,87', '7228,2', '14331', '13465', '25260', '22293', '21482', '2499', '16245', '7098', '23108', '27360', '6168', '12990', '25140', '19150', '38244', '29875', '24275', '16840', '32335', '27620', '4114', '19106', '5337', '10048', '27945', '26490', '5232', '16479', '12474', '8597', '3448,8', '2088', '5267', '12290,9', '12542', '21654', '20477', '7798', '9248', '20100', '20770', '5817', '13846', '26329', '8858,8', '14994', '26500', '15639', '6991', '4150,95', '22955', '42930', '3740', '26889', '2600', '13476', '10616,4', '6622', '4250', '27714', '21746', '39600', '4377', '4920', '32500', '26820', '18881', '29012', '11646', '4013', '3429', '12132', '5037', '15300', '20310', '7259', '5427', '9430', '7281,94', '2400', '3553', '14280', '27370', '16796', '26200', '32508,2', '8492', '31300', '12890,92', '35955', '29663', '30930', '4088', '5942', '3943', '2457', '19230', '25447', '29120', '7511', '43631', '11323', '42000', '18068', '4458', '6928', '16040', '9000', '71400', '7730', '6494', '13205', '20330', '10874', '5738', '2000', '35900', '2671,2', '25179', '24962', '6379', '16500', '25540', '20495', '35320', '7130', '2819', '14080', '5261', '23406', '2447', '23780', '7710', '14279', '21646', '8160', '11025', '3950', '16701', '30596', '22898', '5496', '26390', '22490', '29490', '27900', '13507', '6150', '27769', '12650', '19800', '13598', '17025', '20520', '12173', '13392', '12348', '16991', '8725', '71243', '13324', '87786', '4199', '17097', '29250', '2511,82', '26410', '18050', '29001', '18528', '11751', '23094', '40574', '7609,5', '6630', '14661', '22160', '11787', '19889,92', '17840', '3987', '6766,4', '88440', '26730', '9069', '89196', '5175', '14860', '15875', '29006', '3516', '27493', '20540', '5023', '7755', '23437', '21295', '8424', '3770', '33459', '17130', '11796', '13577', '11455', '44760', '7523', '19560', '42746', '10360', '5345', '2730', '27040', '20510', '24239', '10611', '4642', '19020', '4745', '4769', '18006', '24490', '5760', '5431,2', '9153', 
'2995', '8381', '37880', '11150', '6390', '21989', '17045', '31491', '21209', '30990', '4389', '30200', '5676', '24933', '28220', '3960', '7486,1', '17398,1', '4010', '20140', '14798', '27490', '29789,91', '17510', '19280', '4408', '78800', '11697,1', '8957', '12315', '10925', '99313', '16883', '17575', '17422', '7695', '22947', '20691', '4756,5', '10320', '17763', '29661', '9227', '9097', '27950', '26461,91', '21586', '3680', '15790', '3490,92', '4580,92', '12290,94', '7988', '83106', '20706', '8750', '5064', '4935', '18270', '9526', '30820', '5280', '2633', '5010', '5268,47', '13780,82', '19521', '4590', '12762', '29948', '94330', '4492', '3904,3', '17538', '10295', '17250', '26099', '16507', '15680', '3380', '29690', '9217', '5016', '31588', '15147', '24980', '3883', '69500', '7497', '18913', '18020', '29500', '22588', '26770', '74319', '7541', '7032', '10687', '76300', '27088', '6329', '28450', '7019', '28922', '3014', '11600', '6360', '30131', '10071', '9464', '19502', '10525,01', '5335', '28932', '11396', '7084', '7310', '16634,5', '28160', '23185', '11048', '34466,4', '7250', '13527', '11345', '30756', '28280', '5928,1', '9497', '11416', '22761', '59629', '37360', '45500', '5351', '6672', '16280', '10550', '7741', '5360', '26896', '9144', '8445', '7448', '5827', '3729', '38408', '17240', '20587', '8902', '17789,2', '34396', '20699,1', '17509', '16579', '6530', '13623', '15190', '5489', '23793', '9120', '24795', '3337', '4983', '24000', '6890', '20294', '13301', '37450', '3442', '7790', '6180', '18734', '4560', '4328,1', '14328', '10114', '15347', '8149', '6373', '5770', '20856', '24389', '5091', '7797,6', '11790', '16440', '26183,5', '11855', '37857,2', '13550', '16199', '10022', '35960', '16650', '35165', '23735', '31255', '20297', '3307', '11223', '6780,95', '11321', '24740', '7750', '13110', '10389', '23552', '29799', '3798', '14950', '14165', '21250', '4629', '14980', '18857', '30422', '7035', '6597', '7296', '20296', '6021', '17561,8', '26590', '7595', '4079', '5678', '8762', '58541', '17901', '18899', '8355', '27580', '8604', '27283', '22881', '14013', '34766', '13679', '3269', '6270', '29295', '9899,1', '15033', '16425', '25990', '7183', '7977', '11050', '17720', '21279', '7349', '13699', '9380', '22900', '6899', '18538', '23104', '8547', '9009,91', '20355', '5890', '5088', '14795', '43376', '20943', '9042', '13490', '9468', '24950', '12741', '3405', '16865', '50678', '25896', '19021', '17650', '6665', '6260', '4607', '11170', '20317', '16965', '41211', '12944', '30313', '10189', '21780', '33920', '10429', '12498', '8373', '2691', '47309', '5211', '11789', '11196', '16899', '5320', '24871', '25740', '11916', '5799,2', '14948', '38340', '32935', '9009', '4386', '7604', '13447', '11665', '45672', '18600', '98996', '39000', '16520', '16960', '18250', '24945', '12050', '6538', '13841', '18054', '2979', '14690', '10240', '3565', '34806', '6504', '18360', '9551', '7123', '27124', '19930', '10864', '29697', '21505', '86900', '14406', '4671', '10290', '20079', '21398', '3660', '7949', '29933,4', '26970,6', '11289', '13430', '16250', '41170', '12597', '16635', '4640', '14168', '30100', '3928', '7430', '6699', '29580', '28026', '4683', '11180', '18413', '13225', '2872', '6272', '83194', '23395', '12159', '63700', '3898', '27995', '18940', '23390', '5806', '28410', '23450', '11199', '4435', '15602', '10990,2', '23390,91', '4842', '16199,2', '11359', '15844', '8549,9', '32370', '19220', '6502', '4765', '13785', '28338', '7450', '13229', '9983', '9550,82', '4210', '23758', '22848', 
'21490', '33097', '74482', '6374,78', '19446', '10210', '15759', '45750', '7794', '20313', '35400', '14060', '23999', '4160', '17075', '3889', '9718', '9200', '7605', '19750', '5210', '8899', '9278,1', '4743,68', '50568', '6450', '11995', '39422', '16362', '3357', '2135,14', '29699', '11943', '18850', '28310', '10506,76', '2421', '25150', '9850', '10551', '13267', '5525', '8430', '4961', '30220', '13941', '4455', '2125', '2270', '8102,08', '7353', '7230', '11536', '16380', '11580', '7182', '46600', '5272', '11450,9', '18280', '32907,5', '21080', '16627', '14724', '9759,4', '18260', '34200', '10779', '13880', '25100', '3107,7', '52469,87', '15097', '20464', '26890', '9971,89', '28666', '3848', '8450', '10225', '19400', '16436', '7559', '6245', '7498,9', '5500', '19343', '11817', '15350', '21464', '9620', '10428', '25599', '12506', '11899', '28861', '7350', '4750', '21640', '4265', '3497', '21802', '5749', '15850', '6330', '32415', '11601', '18140', '13000', '5723', '23485', '3424', '36400', '13678', '3990', '7700', '20431', '9342', '6959', '16191', '4042', '25315', '10846', '12150', '8129', '5176,8', '28775', '19799', '12945', '22617', '13326', '3392', '17388', '12843', '41730', '6756', '6514', '16988', '27680', '6219', '22950', '14337', '16850', '31790', '2691,9', '38850', '29430', '24605', '27820', '4647', '2599,2', '18130', '21200', '11954', '6166', '20505', '9367', '4982,4', '5355', '5995', '7863', '2165,16', '17453', '31190', '4963', '57209', '13815', '35825', '17346', '28790', '7515', '19976', '4980', '6470', '6481,8', '17106', '18296', '21910', '16371', '6559', '31730,91', '4190', '17451', '6247,8', '15832', '4182', '48572', '16278', '36000', '14487', '17760', '8435', '20768', '3498', '9907,2', '69188', '31333', '15291', '18964', '32846', '5602', '14004', '17860', '18175', '25980', '20820,2', '12239', '16002', '10373', '17869', '14030', '7470', '7965', '21750', '4570', '5794', '9700', '22987', '7366', '12510', '2990', '21590', '55500', '10630', '20698,2', '15565,5', '23070,6', '15128', '6146', '9647', '19083', '8396', '6982', '2700', '6461', '2421,9', '6710', '25300', '10804', '40180', '22850', '65000', '10238,82', '21400', '6105', '17549', '5144', '25480', '9664,5', '13280', '8978', '9259', '16648', '18549', '22618', '8754', '15040', '9801', '27196,1', '4110', '6060', '31990', '4329', '12906', '23440', '7919', '9630', '12570', '14599', '14637', '14065', '23121', '13825', '5542', '27248', '14845', '12860', '4870', '11528', '16882', '8935', '9060', '31500', '2590', '9592', '4072', '26800', '27598', '8185', '12170', '16099', '12206', '15090', '23451', '22792', '15730', '14978', '9375', '19602', '8666', '75000', '14295', '22480', '4638', '40756', '10609,2', '8117', '62100', '10884', '22860', '23126', '17970', '9287', '13989,4', '36900', '9972', '14430', '18240', '10147', '25459,3', '9137', '8470', '33680', '17550', '8210', '5230', '7334', '12797', '24132', '9949', '41620', '17361,76', '25538', '10020', '7908', '10024', '21771', '4858', '4186', '86041', '4469', '40480', '40110', '16244,9', '13294', '20350', '12598,1', '22627', '4410', '6507', '5908', '21570', '22662', '10505,5', '2720', '7749', '5399', '7207', '16838', '3537', '12342', '3827', '10193', '47000', '34280', '26115', '6036', '13050', '28070', '58640', '23297', '7169', '3860', '14940', '16691', '3681', '16398', '31221', '20640', '5045', '27428', '18777', '12200', '14398', '72000', '3377', '26241,6', '28273', '9095', '8490', '35000', '12690', '17811', '3185', '8620', '6675', '41210', '14284', '27498', '13263', '13499', '4968', 
'12904', '7629', '22105', '12317', '3285', '8480', '72885,57', '2957', '96490', '16743', '25158', '39160', '12472', '25570', '9840', '2975', '30656', '9398', '23800', '16824', '19125', '3750', '18001,28', '2610', '70360', '27060', '18334', '90590', '17330', '8910', '3682', '18376', '23600', '34555', '17800', '17690', '21243', '7720', '27180', '19547', '2299', '24897', '5149', '28210', '14461', '16598', '23715', '39500', '18329', '7300', '12339', '13910', '2150', '8438', '38800', '29382', '7399', '22324', '4379', '32076', '4599', '6550', '16879', '9417', '23029', '2045', '4366', '2635', '98136', '6517', '9396', '8998', '8350', '3530', '19470', '13208', '14696', '12953', '12702', '21597,8', '7902', '2165,4', '5456', '3460', '18615', '2724', '27905', '9496', '7625', '21717', '13302,83', '14206', '7327,8', '6950', '19490', '9251', '4923', '10040', '11966', '12197', '19254', '12160', '41397', '14316', '23459', '6944', '13640', '9420', '30350', '2238', '21350', '2050', '3290', '6202', '33300', '13462', '4430', '3959', '3330', '5301', '18749', '23009', '10424,6', '13248', '25390,94', '7763', '38570', '13400', '5912', '9650', '21550', '16860', '31404', '7795', '18560', '23110', '7900', '99000', '6035', '21670', '40900', '14150', '5755', '8000', '13495', '24675,85', '5751', '26080', '16050', '20091', '30603', '22139', '20095', '14636', '17460', '2974,15', '4525', '18566', '9019', '71600', '2910', '23499', '10084', '12511', '18547', '25330', '3779', '32100', '3535', '31587', '14891', '17962', '26950', '31484,1', '8102', '17398,2', '6323', '13719', '22575', '44660', '18481', '7022,7', '30825', '12868', '11362', '6544', '4940', '16070,5', '12606', '30011', '3199,2', '13615', '7439', '14609', '5849', '33088,5', '9347', '5865', '12840', '14766', '29677', '19751', '18390', '19980', '19040', '11126', '15840', '20409', '19145', '9837', '19417', '18458', '23737', '3294', '19895', '40824', '17170', '21388', '29630', '5718', '6885', '3980', '29599', '21060', '22498', '20658,2', '7197', '5299', '17270', '19615', '8874', '7660', '6785', '7598', '18984', '3640', '4911', '4401', '15010', '30800', '3396', '11495', '4701,6', '25317', '12331', '11315', '9996', '19099', '60888', '9800', '13184', '5900', '7039', '14493', '8723', '15099', '12833', '17609', '4040', '49750', '10841', '19590', '58670', '10450', '5189', '25940', '11999', '14976', '25109', '9447', '18012', '30325', '13350', '41600', '15098', '65250', '6256,8', '7137', '16992', '37150', '62240', '25999', '6290', '28050', '10286', '12598', '84000', '11410', '3039', '9467', '35520', '11700', '17920', '14698', '28551', '20636,91', '12493', '24085', '29443', '3028', '22864', '3762', '11897', '14494', '9790', '26837', '9397', '13254', '28715', '5219', '26060', '61920', '7066', '35744', '6088', '9597', '58567', '8184', '16596', '19989', '10708', '10738,5', '9325', '10446', '13138', '12220', '7224', '17631', '5550', '43200', '34479', '13750', '20883', '22640', '19089', '32000', '20939', '28674', '13728', '28684', '7970', '32387', '41837', '30796', '19000', '5238', '28420', '21166', '89640', '8099', '21600', '2428,2', '3510', '20660', '10605', '13016', '11302', '6424', '33470', '3752,9', '23197', '5625', '11383', '25145,79', '34990,8', '25588', '20298', '10134', '9061', '22057', '9432', '21595', '3509', '16833', '8990,9', '7455', '4097', '8095', '13120', '33862', '12470', '26580', '6096,13', '8680', '3314', '10782', '12646', '15270', '8730', '4879,8', '16052', '10216', '22750', '7240,5', '3109', '18669,2', '6118', '21062', '27686,2', '14588', '23706', '19290', 
'4843', '17379,99', '26920', '9557,1', '18220', '7874', '11303', '2048', '21123', '6650', '4390,91', '15580', '31833', '9128', '32391', '8499', '27166', '17091,9', '25225', '26479', '13539,2', '10523', '64660', '7319', '6299,72', '8698', '8448', '26397', '7858', '21170', '34999', '5639', '8020', '5680', '4703', '7981', '8220', '46695', '20060', '15279', '4290,95', '10200', '3299', '38406', '6024', '17220', '8230', '2119', '26321', '14110', '2333', '24346', '4885', '6522', '4579', '2495', '9970', '5670', '19790', '43996', '7848', '30946,29', '13730', '8037', '8055', '21770', '15400', '27816', '30826', '10830', '14942', '10800', '37415', '17530', '11560', '18304', '4661', '17946,2', '12512', '21510', '6004', '22427', '3128', '4281', '9180', '36800', '4297', '3520,8', '23140', '16490', '11330', '26370', '15930', '25640', '12991,82', '5568', '11808', '3434', '10992', '29694', '13793', '22214,69', '14130', '5259', '20343', '12000', '37350', '8180', '6757', '5408', '8517', '19699', '23575', '6831', '9955', '2745,9', '4620', '9315', '5247', '3190', '5199', '17560', '8419', '8011', '33881', '7632', '27472', '4115', '21858', '5614', '16920', '8330', '28600', '16485', '6205', '20650', '15820', '13320', '8699', '18319', '13265', '10381,5', '30324', '6780', '48580', '7105', '3245', '9477', '6200', '25263', '31020', '3920', '2248', '5043', '12048', '23251', '6575', '32706', '9341,5', '25406', '35669', '14875', '25631', '18143', '24362', '6503', '9460', '17040', '43373', '35142', '14947', '26395', '21349', '46500', '10220', '7642', '7509', '14107', '23516,84', '29075', '8717', '27970', '8797', '10438', '18030', '2104', '14619', '21015', '11590,92', '14460', '10912', '12745', '26000', '4860', '40500', '10457', '21238', '11236', '14356', '15120', '8250', '19529', '15540', '18239', '4952', '27939', '3916', '11753', '12716', '17000', '16530', '22810', '9891', '18686', '7080', '24190', '10799', '13157', '19350', '7729', '21481', '14916', '17316', '3703', '30681', '22124', '43950', '7272', '8420', '33340', '7473', '8241', '22799', '38725', '6640', '31200', '3407', '29294', '29460', '15065', '7135', '10778', '24990', '17854', '8025', '36290', '7189', '32576,5', '3284', '12708', '32400', '17802', '4499,2', '49490', '19420', '7947', '10128,8', '21980', '6978', '5231', '32292', '21943', '17379', '24466', '27736', '28641', '14891,92', '15255', '6867', '34195', '12655', '2439', '22589', '22225', '5898', '6556', '12518,5', '8240', '29266', '4990', '7632,92', '5630', '6298,6', '10650', '26780', '50700', '14092', '13173', '26138', '27800', '6247', '10021', '27446', '16544', '12530', '37437,66', '8100', '8369', '6136', '9814', '16395', '7580', '26524', '19498', '28970', '26055', '18170', '12397', '25937', '34905', '7597', '9013', '12600', '16880', '12340', '54105', '15727', '5694', '4816', '16460', '33598', '14288', '11190', '17949', '11764', '10398', '48600', '16910', '9215', '23661', '67795', '6800', '6116', '10349', '29680', '11204', '8889', '61250', '24840', '6940', '3387', '8846', '6875', '5589', '16868', '4608', '28016', '20090', '14292', '8405', '8825', '18711,05', '35580,91', '4520', '10599', '27340', '8770', '7824', '25881', '5364', '24145', '13386', '16160', '9525', '47691', '5032', '3877', '13441', '11295', '5195', '21075', '5050', '10025', '15904', '12699', '21487', '4652', '5250', '3448', '7132', '17112', '17020', '30837', '10319', '6103', '8549', '5135', '5847', '6985', '18582', '15670', '34821', '19464', '7655', '28247', '12557', '23152', '28509', '31882', '25460', '11093', '9994', '11389', '17700', 
'20179', '8639', '14410', '8886', '17298', '14006', '17110', '24599', '19993', '11749', '2555', '5921', '16110', '18602', '69000', '21948', '29094', '27348', '4581,87', '31792', '24103', '14380', '14097', '10354', '37480', '10442', '14446,3', '44700', '13644', '15696', '16317', '12742', '3789', '10854', '23130', '5389', '37508', '8830', '9931', '5190', '4807', '13891,5', '15600', '14361', '7878', '10496', '22890', '15629', '4994', '8285', '11297', '9813', '16198', '44440', '21999,2', '19670', '20561', '8948', '7400', '14081', '12631', '8581', '7867', '3906', '6240', '15370', '10112,8', '17248', '19404', '11135', '25897,4', '15246', '19110', '6289', '36980', '32955', '10706', '22920', '14625', '7194,72', '12488,25', '19689', '15380', '3187', '14856', '10697', '18630', '19999', '15725', '23998', '24100,83', '16310', '5120', '7443', '5377', '17089,9', '25904', '34740', '19966', '34699', '16399', '18445', '6873', '26280', '16048', '14400', '3786', '21827', '13344', '16715', '7492', '3654', '5800', '9588', '13690', '8617', '3440', '8652', '11620', '3199', '11286', '14629', '3850', '5369', '6999,3', '83407', '10940', '17930', '7618', '15237,1', '10999', '6299', '15640', '4239', '3745', '19791', '4888', '39450', '5936,81', '19998', '5761,9', '7799', '76000', '8532', '35530', '18599', '28750', '11398', '6999', '8386', '17531', '24570', '2619', '6584', '7235', '45438', '9577,8', '41470', '13624', '22460', '5105,7', '17838', '13021', '6498', '8878', '29592', '24929', '12628,91', '22147', '15638', '21630', '5880', '14994,6', '15320', '51140', '4921', '29470', '10124', '8564', '28472', '20424', '7173', '29949,7', '27589', '47600', '20519', '7982', '26543', '2759', '12191', '26276', '26658,9', '52000', '10680', '51570', '16428', '3894,2', '22194', '7236', '16620', '17310', '13360', '11446', '3339', '7404', '4311', '4628', '9540', '8610', '23530', '16098', '17329,92', '6127', '19680', '6691', '6492', '13985', '21298', '9554', '6539', '11978', '5005', '3784', '36194', '12680', '15292', '17567', '22236', '31000', '22747,4', '16150', '14340', '15325', '12835', '9456', '9826', '9230', '12060', '22317', '11368', '3329', '7530', '4962', '9390', '32990', '4777', '24864', '5724,2', '26462', '20620', '19051', '10419', '14050', '14035', '10803,84', '6419,78', '18187', '8412', '6599', '13134', '20290', '11377', '17198', '11154', '12038', '38070', '14530', '17529,91', '6375', '29400', '30197', '11466', '21520', '14598', '37016', '22798', '6746', '10492', '33450', '43573', '10560', '14456', '31694', '18700', '21700', '23700', '15890', '7198', '16175', '29493', '15748', '21086', '17613', '12450', '12725', '4148', '18997', '4000', '3764', '20160', '7063', '13748', '40080', '21382', '23377', '6263', '31498', '12784', '17360', '4527', '27130', '22430', '30370', '45297', '8320', '12385', '3222,9', '9268', '46921', '28930', '25713', '26339', '4915', '25385', '12463', '12360', '20592', '19035', '19720', '12734', '10640', '19415', '39865', '13198', '15363', '3874', '39980', '13489,91', '7373', '12417', '27740', '17966', '16786', '20300', '21696', '5005,8', '35640', '5759', '16823', '10274', '32175', '18590', '13950', '3590', '17076,2', '5482,3', '22369', '9999', '13018,1', '12389', '8460', '9090', '15336', '18495', '27448', '5391', '22398', '29788', '21920', '46000', '28041', '31875,3', '31294', '30372', '84558', '3520', '9654', '9034', '15393', '28162', '24061', '35610', '11850', '17290', '20209', '35596', '4850', '12822', '18603', '9189', '11833', '94720', '31991', '7459,6', '6610', '6920', '80973', '28842', '13695', 
'8720', '13882,92', '13650', '11970', '6652', '24199', '11392', '17704', '13250', '31415', '70500', '4380', '6561,9', '51900', '65629,5', '4617', '10910,6', '39830', '23202', '2998', '18171', '8237', '10238', '24680', '40050', '15856', '14044', '3385,92', '8198', '19746', '14570,01', '12410', '63306', '6750', '7687', '2578', '2390', '2520', '16070', '14140', '3271,5', '19245', '27347', '4398', '7157', '9440', '42950', '3196', '2540', '4320', '7588', '4021', '21220', '8346', '6018,39', '18799', '4744', '6141,75', '25200', '2689', '4781', '4826', '37305', '12514', '4480', '5218', '11615,4', '4900', '20179,3', '78120', '27200', '3899', '13998', '6467', '3542', '6732', '5588', '5516', '14618', '22047', '29640', '13195', '13020', '12265', '5740', '9532', '3383', '16324', '12678', '76550', '2489', '12061', '15490', '6295', '8253', '13670', '4428,1', '11526', '15500', '23146', '17568,3', '3949,92', '36267', '20430', '4491', '36308', '3720', '14521', '7050', '6477', '12211', '27369', '5424', '35097,66', '28060', '15430', '15615', '30110', '3989', '12959', '5451', '14590', '22233', '5140', '55846', '18910', '3710', '24263', '2000,51', '7416', '22395', '13287', '23830', '17073', '10116', '13240', '10910', '30500', '9842', '14688', '14100', '16300', '26380', '5310', '2680,4', '12670', '2250', '13488', '5404', '14270', '14660', '3490,91', '13848', '11830', '18890', '9117', '33510', '15315', '21814', '18535', '3095', '18318', '3276', '21866', '9370', '10359', '2970', '17998,2', '14031', '21940', '13122', '22990', '13433', '44879', '26049', '35450', '17697', '63000', '16540', '5346', '28778,3', '10140', '26232', '51200', '3180', '5058', '20480', '25271', '3591', '13570', '3951', '8598', '13494', '4534', '13689', '31560', '47247', '13100', '5594', '10740', '17496', '75424', '22168', '11090', '23790', '31499', '38250', '10298', '14385', '19620', '10820', '3798,9', '41700', '19457', '6600', '44560', '17950', '23000', '18499', '11724', '15647', '8905', '11340', '8116', '28308', '12960', '15720,91', '9538', '8651', '21919', '38625', '22212', '67950', '34335', '18162', '33678,2', '15309', '9167', '26848', '10508', '16285', '10590', '10860', '4896', '6115', '53364', '34422', '23475', '7680', '7760', '3715', '3919', '3730', '21665', '28565', '15547', '49000', '4930,95', '6790', '25280', '7619', '29386', '22656', '21870', '2873', '19390', '16260', '2330', '18825', '5198', '19728', '6520', '17888', '6700', '13078', '5573', '24900', '16218', '19860', '10840', '10775', '5989', '17050', '5478', '10089', '9989', '34148', '22290', '6490,5', '6186,7', '24400', '21809', '30418', '3305', '5276', '25992', '26346', '16783', '9738', '17688', '7968', '15127,38', '19877', '12636', '25855', '6518', '8918', '13686', '2860', '5708', '17366', '12848', '3610', '14330', '2704', '12260', '13960', '9898', '19080', '15575', '13921,52', '18497', '49653', '8490,92', '23310', '16170', '2300', '14678', '9759', '7190', '23720', '30738', '24131', '3335', '29478', '19210', '27986', '8755', '5611', '3023,58', '18882', '3072', '13491', '22830', '25020', '12189', '8399', '5220', '14938', '6945', '7336,3', '13930', '12100', '4074', '30615', '14000', '6327', '29461,34', '5370', '10757', '35940', '3089', '4790', '19301', '53000', '16390', '4099,9', '10500', '83871', '22962', '26930', '24120', '13591', '16629', '25420', '28170', '12575', '12251', '11786,85', '15367', '12465', '10338', '26140', '19665', '13300', '10310', '25122', '10538', '9219', '13199', '60300', '23684', '23315', '5995,46', '5445', '12935', '45000', '21578', '5055', '36570', 
'4330', '23945', '19593', '16765', '87570', '12730', '9280', '18693', '9295', '11000', '19133', '5789', '2360', '6967', '11955', '22269', '15789', '2490', '15940', '10852', '29040', '6435', '31538', '7869,61', '11428', '8999', '15221', '16020', '6881', '17049', '12639', '27898', '3820', '3879', '50652,94', '31278', '14580', '51700,5', '24119', '25720', '26872', '5285', '79910', '7181', '12309', '10467,1', '7084,28', '9629', '6939', '6230', '4575', '17875', '12300', '19540', '7380', '21800', '13463', '8195', '12780', '22240', '3393', '31460', '4768', '9690', '5155', '11595', '2573', '6400', '21974', '29877,3', '15390', '8658', '23647', '25137', '3328', '21330', '2241', '24030', '2255,4', '3575', '5300,9', '18900', '14596', '33821', '12310,91', '9760', '37440', '2189', '7386,75', '32540', '6191', '17610', '19580', '14574', '5600', '21246', '33660', '9296', '9858', '24778', '8745', '17595', '17750', '17991', '86071', '12508,2', '18491', '4996,8', '8098', '4964', '20767', '32975,3', '15470', '11765', '46649', '17590', '28755', '4362', '32819', '2934', '6449', '21650', '2879', '23488', '4837', '16948', '19610', '2200', '18885', '8508', '8322,77', '7876', '16407', '6639', '4670', '22080', '11325', '23400', '11445', '17790', '8515,7', '38108', '6101', '18358', '7637,63', '4879', '18460', '9963', '29206', '3561', '73996', '32200', '14355', '26910', '7472', '12572', '2190', '2440', '17630', '9573', '12421', '3160', '11099', '9224', '16571', '5548', '19130', '10714', '4758', '10435', '8744', '22097', '13090', '2310', '11250', '5730', '9116', '19797,1', '3322', '26365,6', '13665', '38832', '14193', '20133', '9340', '9622,77', '22413', '18176', '71740', '20249', '12710', '60500', '19611', '16736', '8090', '40313', '15115', '31646', '19570', '20600', '5669', '11480', '5253', '4319', '10759', '4592', '76730', '27228', '8920', '26989', '30925', '22770', '2878', '82800', '23124', '61000', '6300', '27070', '17850', '5328', '4463', '19650', '2341', '5400', '4207', '8270', '12810,94', '9498', '84570', '7648', '21721,1', '6995', '66680', '6301', '5051', '23772,27', '21050', '12400', '8361', '6099', '4280', '9560', '17010', '8648,5', '5077', '4351', '18204', '39020', '5060', '6165', '57593', '23500', '15770', '30000', '11635', '11986', '13063', '10780', '2580', '3316', '26865', '4098', '37100', '22635', '11558', '3834', '15201', '7150', '4941', '21673', '19993,4', '19491', '13060', '5531,8', '21985', '11989', '7152', '5530', '60000', '17599', '13918', '21237', '30960', '10466', '30879,55', '2366', '18350', '6545', '24850', '23355', '4986,5', '13001', '25812,5', '18934', '40410', '9606', '15599', '35369,4', '20985,62', '17714', '24777', '22394', '7703,1', '20400', '11510', '3376,8', '28400', '8806', '4710', '15582', '22040', '29970', '23990', '2399', '20277', '4485', '27170', '19011', '13230', '7810', '7621', '13284', '21805', '14611', '7011', '5793,3', '27617,1', '20679', '98030', '4020', '15865,65', '19924', '6793', '11100', '16653', '20751', '5888', '5347', '22192', '8128', '18463', '9393', '12398', '3031', '5330', '18960', '16263', '73300', '30300', '25600', '89018', '9021', '5225', '18027', '16700', '9599', '27759', '10671', '95441', '8188,04', '5683', '15003', '17649', '24630', '22167,6', '5839', '8225', '27744', '11551', '17006', '17851', '5910', '22082', '14090', '29220', '27990', '19520', '6889', '8550', '9241', '78000', '5936', '19719', '37466', '6616', '6255,15', '8793', '7563', '7362', '16039', '6000', '24280', '20205', '21280', '4930', '18185', '44500', '20925', '4103', '4596', '9971,3', '17162', 
'15199', '18855', '3061,45', '3398', '22590', '21895', '25870', '14990,95', '16781', '10850', '6594', '32490', '28945', '23539', '5988,46', '12645', '11189', '6969', '20356', '9823', '27313', '15230', '6822', '20536,1', '29300', '5606', '16993', '13388', '17300', '10968', '12318', '7630', '3195', '7280', '5090', '6499', '44000', '4910', '16470', '7676', '3470', '11317', '23690,91', '8675', '3886', '4835', '33111', '19765', '9567', '7478', '4060', '60829', '14481', '10418', '11030', '45280', '23517', '3933', '20826', '21726', '6513', '7651', '8416', '5221', '19840', '4041,9', '3538', '25625', '10174', '11470', '10648', '49798,1', '26641', '7599', '3996,9', '7407', '12090', '31060', '4120', '16893', '22102', '9679', '13653', '19430', '25780', '11113', '24835', '11520', '18225', '7539', '2850', '10463', '24844', '17501', '22188', '10490,91', '7180,9', '4080', '7147', '8088', '19296', '10699', '3152', '12789', '3480', '18498', '50177', '35992', '35598', '7110', '22323', '12889', '7895', '8078', '7315', '4704', '24459', '5748', '26750', '24858', '14052', '78400', '20802', '6662', '7410', '29898', '32114', '3628', '11958', '38830', '18048', '8049', '42300', '10555', '6938', '17397', '7425', '9400', '5807', '14290', '37069', '81423', '8940', '10737', '7070', '6893', '28800', '12365,2', '21484', '16975', '14321', '16897', '18389', '19839', '5309', '20113', '16011', '21860', '11690', '22307', '3040', '27190', '3540', '6919', '11650', '3067', '13790', '13733', '42940', '28557', '23745', '14513', '29037,12', '16222', '26550', '29622', '4576,5', '9298', '18760,75', '4542', '61600', '29065', '12350', '18224', '15739', '59009', '17826', '49743', '22650', '29780', '9980', '10635', '3897', '32444', '19730', '23240', '25000', '20989', '7369', '14930', '8295', '33476', '24179', '3984', '28781', '14820', '14717', '26085', '7875', '49860', '9899', '26355', '24867', '23667', '21470', '10058,5', '32900', '18789', '8360,92', '2305,8', '14094', '19960', '19177', '5736,6', '18999,2', '7189,6', '17490', '31716', '29880', '7769', '6741', '10447', '15887', '3840', '18797', '70000', '4754', '6843', '20088', '34720', '6834', '22263', '17425', '10617', '6562', '21460', '8941', '28530', '22482', '23950', '6090', '11240', '3418,5', '29390', '29905', '20440', '9862', '21197', '4274', '9533', '45665', '76738', '4300', '3484', '5178', '14744', '4602', '20129', '6748', '6941', '16995', '50879,83', '5689', '12520', '11630', '61832', '6696', '25840', '6181', '17540', '21498', '17775', '24423', '4495', '2009', '23103', '4550', '6776', '35787', '26547,1', '17600', '34140', '23760', '21984,55', '17769', '15755', '10000', '45110', '6770', '7872', '23259', '5078', '20715', '10904', '21808', '29990', '6340', '21499', '12830', '11269', '23304,6', '13040', '16045,2', '22110', '17640', '22050', '20871', '23650', '6883', '20646', '7980', '28208', '11363,99', '39627', '10576', '22450', '4939', '19382', '15745', '52930', '15621', '31806', '6643', '9030', '15481', '7478,2', '12599', '23128', '9720', '9729', '17060', '28421', '21589,9', '20086', '19482', '5879', '5965', '7870', '15822', '7000', '33978,63', '22497', '38880', '39276', '13010,94', '14490', '3340', '10987', '12166', '18898', '3869', '10990', '10499', '14197', '19886', '11349', '22384', '11565', '22509', '13443', '2844', '33330', '14177', '7040', '7020', '10150', '4969', '11910', '37456', '26642', '9461,5', '5544', '24015', '5616,15', '19205', '9586', '12799', '4241', '4039', '14648', '18880', '9290,91', '17090', '3737,6', '7639', '27012', '23595', '19315', '31050', '12454', 
'16010', '20580', '9550', '9835', '24027', '11670', '9986', '6997', '21840', '10440', '41907', '8398', '5472', '2799', '27922', '6347', '31895,48', '6125', '5562', '6983', '8990', '5710', '22896', '17367', '4339', '6560', '3614', '15226', '4743', '6130', '8619', '20499', '9405', '24587', '10248', '2263', '16551', '17580', '4889', '27501', '7660,42', '53499', '24970', '3630', '5840', '3200', '4535', '14245', '6163', '25897', '6430', '3935', '3474', '80440', '6848', '18090', '15784', '2099', '3015,2', '2961', '10769', '3006', '7550', '21240', '24314', '3725', '17466', '16814', '11600,6', '14098,1', '22760', '5829', '3501', '11167', '13948,2', '8034,2', '19945,35', '10470', '22085', '28238', '5929,95', '5017,9', '14189', '5932', '3296', '17279,2', '27110', '6142', '9031', '3816', '25177', '4099', '24420', '10855', '14260', '11264,25', '17702', '9768', '5490', '26941', '8741', '13567', '9500', '16512', '24552', '12412', '20536', '37656', '13025', '28500', '50970', '16872,3', '29100', '20169', '9675', '6048', '29742', '19419', '16367', '6089', '10570', '18000', '7578', '29336,4', '6311', '2800,85', '70047', '9973', '44225', '6399', '3870', '3769', '17712', '24940', '16455', '12325', '13030', '17960', '15899', '2151', '4276', '9825', '6799', '3639', '16799,4', '30391', '31910', '4726', '14910', '12228', '23677', '24340', '37400', '10734', '6109', '3034', '4505', '31466', '5366', '16820', '15359', '14047', '11422', '6745', '5517', '6726,1', '6367', '17043', '17098,2', '7920', '3865', '2240', '3700', '5757', '14678,63', '26499', '32810', '5560', '16785', '9652', '26337,6', '75500', '13820', '15482', '97220', '16977', '7317', '15625', '7047', '27925', '9988', '13421', '22781', '3046', '27550', '24290', '6970', '4555', '7291,85', '25899', '7144', '2660', '21120', '11350', '4202', '13949', '6578', '7210', '46156', '13558', '9621', '34916', '27541', '7582', '6365', '4918', '27456', '6519', '8556', '5405', '13098', '13776', '6950,7', '30060', '14528', '18790', '21598', '2152', '89200', '22616', '6854', '6204', '9510', '13006', '7620', '28428', '4221', '18010', '10950', '42228', '13315', '22244', '9590', '4522', '23280', '11699', '13479', '19513', '4869,66', '5508', '8857', '22530', '7885', '5632', '15340', '13389', '23985', '7997', '5100', '10350', '19899', '8988', '19905', '2372', '5937', '8689', '19316', '19608', '6870', '4035,2', '15127,4', '11288', '6190,92', '24717', '23147', '31596,92', '77210', '9425', '21340', '22419', '25584', '6835', '4898', '4230', '15914', '83000', '9278', '29650', '24281', '18724', '27645', '19591,36', '7937', '12829', '28417', '8735', '10585', '18290', '7454', '30590', '20069,33', '87600', '11842', '16848', '24070', '7800', '37000', '22100', '21728', '90747,9', '12782', '9389', '10490', '7614', '19626', '5487', '5906', '23411', '8286', '3876', '28440', '13843', '8640', '21861', '7891', '21724', '10417', '14204', '5835', '8625', '11225', '6500', '54000', '17099', '26174', '10374', '7395', '5279', '42320', '7201', '4140', '52561', '10423', '19169', '6250', '4140,2', '21710', '6993', '20833', '5019', '4672', '4008', '4450', '17865', '24300', '5867', '19798', '25491', '22600', '11195', '11502', '9846', '5745,12', '31990,6', '13940', '3000', '12590', '20862', '13500', '8200', '4548', '17199', '16949', '6782', '16692', '9240', '39780', '24236', '3413', '9416', '38553', '8038', '5941', '5353,5', '6346', '13563', '7452,4', '9880', '4690', '4040,1', '30360', '13387', '15327', '18849', '4780', '7290', '34695', '50929,82', '79380', '63320', '6984', '13709', '13193', '22299', 
'25105,1', '33830', '26054', '10279', '12390', '6136,58', '18105', '11450', '27560', '14730', '4890', '11259', '12593,7', '73306', '7498', '18517,1', '14600', '6189,92', '3400', '9870', '6278', '9515', '5877', '8050', '38700', '30198', '17476', '10837', '33030', '15658', '28745', '14593', '27381', '9130', '6020', '13166', '3731', '10917', '4131', '26538', '35201', '9681,5', '4845', '18740', '7089', '8906', '12480', '10972', '19250', '30299', '11920', '24949,09', '11590', '18080', '7713', '27480', '3541', '5590', '14185', '2907', '19295', '4292', '10525', '7313', '4812', '10529', '2090', '23170', '29200', '12080', '11710', '4970', '18500', '7638', '26100', '10198', '8669', '37818', '19622', '17997', '20007', '9798', '11637', '16980', '20046', '14438', '13747', '10342', '5690', '5895', '11304', '23137', '5721', '25765', '10580', '21405', '23393', '2223', '12919', '31610', '6739', '15000', '56700', '11799', '11333', '6138', '33475', '5304', '17671,2', '13899', '10981,82', '3045', '5660', '34493', '19630', '2781', '3540,2', '15442', '24270', '34499,2', '8638', '29334', '70300', '7151', '4738', '9263', '8883', '8298', '23166', '3370', '6386', '5232,2', '10752', '12455', '14773', '16900', '6796', '26150', '31672', '3123', '27080', '26199', '24210', '8880', '5859', '3110', '32940', '9979', '21218', '5950', '6619', '6030', '14880', '26988', '3141', '18962', '6370', '16680', '19740', '56277', '31702', '9457', '2890', '13235', '4779', '22930', '3885', '15290,7', '41171', '8589', '6975', '5432', '23358', '4228', '7713,42', '9791', '13620', '4290,9', '11151', '24519', '12711', '32650', '14160', '15740', '10646', '27020', '14046', '20900', '17251', '16169', '11194,06', '6210', '17190', '15319', '13770', '4260', '14383', '63271', '3275', '6288', '24580', '13566', '29471', '17272', '2880', '10163', '20433', '5970', '22210', '14645', '19475', '29612', '6797', '21915', '7549,89', '92951', '14370', '9695', '10920', '19260', '12713', '18230', '17324', '19883', '24405,4', '5980', '17815', '7394', '15192', '14849', '4775', '10665', '5180', '3718', '31317', '13829', '15389', '97000', '16491', '10675', '11990,95', '9360', '8389', '13592', '6193', '4951', '10539', '44697', '15570', '26760', '6747', '20220', '22252', '48700', '20882', '21160', '24059,83', '25520', '5824', '19170', '4770', '11070', '18540', '23290', '22195', '9890', '21069,8', '17039,91', '20381', '7199', '20050', '11798', '3035', '11060', '22660', '8879', '3222', '3300', '34199', '10410', '17633', '15555', '6228', '4700', '15121', '12138', '58900', '23057', '51685', '16773', '7558', '13600', '5765', '4082', '15070', '2638', '5290', '70936,32', '23975', '7431', '11927', '15468', '13780', '2751', '9430,38', '8067', '11477', '12232', '10191', '13140', '30150', '2190,2', '4540', '16990', '11980', '5930', '6489', '18972', '12899', '10960', '3137', '7493', '22598,1', '7170', '28900', '24768', '3549', '27096', '4922', '12198', '10520', '23370', '16081', '4499', '22756', '6561', '19770', '18896', '3668', '30344', '15035', '16141', '12898', '24367', '11500', '23825', '7513', '23954', '7440', '16591', '12810', '25343', '6510', '6081', '7899', '7510', '62468', '23352', '11490', '12435', '7180', '20990', '87673', '25227', '26695,8', '9546', '31263', '32449', '5520', '19795', '16834', '47529,91', '14201', '6201', '26774', '2430', '25760', '11768', '51750', '28850', '26430', '21889,92', '11680', '13872', '72890', '3900', '19235', '16030', '21890', '15980', '15084', '8235', '8459', '9570', '6190', '39720', '5450', '14522', '26051', '12900', '21938', '10422', 
'47789', '56340', '5031', '26547', '7693', '13900', '6393', '13934', '4257', '11399', '5955', '8047', '13115', '11750,31', '10430', '8170', '26871', '14170', '12559', '15960', '9977', '25829,1', '19085', '26170', '53169', '7120', '24399', '22498,2', '23389', '14646', '23100', '14348', '28999', '2570', '13312', '13439', '20229', '46729', '24771', '35943', '3646', '3430', '12616', '10818', '5871', '3237', '6964', '18099', '25887', '4314', '16438', '5208', '10245', '8630', '28490', '4315', '10113', '17592', '3670', '11810', '21850', '30471', '13490,2', '26315', '4299', '10280', '12393', '28215', '10265', '11020', '27342,5', '7092', '6119', '9733', '6918,34', '16090', '6680', '21380', '2030', '24370', '4546', '13170', '3090', '23391', '9924', '45863', '25989', '47223', '13920', '3048', '16265', '7589', '26260', '61915', '6660', '5780', '11935', '7289', '20370', '8901', '14571', '7060', '8108', '16284', '6130,8', '7174,6', '18400', '80000', '7540', '25508', '14144', '6819', '7752', '28241', '8860', '5128', '42135', '10052', '21845', '25936', '19890', '42689', '39025', '8441', '6749', '27000', '9913', '4125', '19439', '19360', '22599', '3990,92', '10075', '7516', '7342', '21730', '9173,88', '13450', '18004', '13221', '20648', '8453', '7187', '15732', '25050', '27450', '14148', '12380', '7247', '35360', '5778', '2745', '25783', '4005', '16590', '23425', '19797', '5197', '17752', '4030', '23713', '5131', '85270', '33570', '26457', '52509', '22455', '15838', '18800', '98644', '2559', '17150', '19050', '8912', '25627', '56673', '6697', '24650', '32617', '36805', '4645', '14449', '13727', '13740', '4240', '7163', '22870', '4366,8', '24220', '10990,92', '29600', '17082', '5505', '4799', '84800', '11514', '4504', '8609,92', '10600', '19840,5', '4851,81', '14215', '9170', '16448', '16214', '24344', '13999,2', '25080', '39690', '6880', '9614', '10898', '22300', '7390', '19938', '6217', '9820', '7417,5', '23659', '5426', '21112', '17572,68', '28955', '23200', '81612', '16748,1', '5757,1', '23795', '15020', '18425', '5825', '6440', '6323,33', '20960', '17320', '4217', '12598,2', '21479', '27518', '24130', '67500', '7682', '11655', '5746', '5152', '19820', '20059', '13792', '11202,3', '2268', '14349', '10773', '6040', '42669', '29384', '13440', '7040,92', '19809', '4995', '42436', '3790', '28502', '6549,83', '5920', '9927', '5174', '25081,98', '29285', '6570', '8979', '10836', '8056', '21355', '4089', '5990', '7544', '21936', '13860', '4405', '4810', '4134,4', '13850', '11795', '21100', '8568', '20000', '4619', '3528', '34391', '20852', '11519,1', '6647', '38058', '4487,4', '19286', '37235', '14205', '17940', '15440', '18414', '5580', '8788', '27780', '8400', '21899,94', '18654', '4892', '31800', '5640', '17528', '6491', '3360', '27971', '88555', '22674', '29944', '21536', '23608', '11085', '5222', '7677', '12290', '10631', '8047,1', '10801', '5796', '21440', '35332', '5852', '22960', '24357', '5848', '3489', '14651', '32593', '26379,91', '94358', '13698', '17896', '29782', '2572', '15950', '11434', '5832', '6783', '4648', '4095', '14527', '17660', '34630', '10712', '7115', '6958,92', '15186', '9341', '17140', '7830', '15017', '8544', '38120', '12012', '9237', '3801', '11642,3', '8289', '94300', '19929', '5772', '16145', '7422', '10964', '52295', '14658', '15912', '16045', '17400', '9480', '56899', '7787', '29700', '54700', '15999', '5742', '26020', '2797', '9140', '12580', '11077', '25680', '15520', '3230', '35793', '4664', '13365', '31015', '26090', '6730', '5179', '20931', '3239', '27860', '41100', 
'8080', '97026', '27599', '91536', '13402', '17880', '3955', '15299,9', '3051', '4169', '5712', '24003', '3924', '20790', '7547,2', '12490,9', '5340', '9596', '7068', '10057', '17173', '14498', '55174', '6377', '7302', '23900', '33255', '24810', '20063', '6273', '3168', '45160', '26440', '8715', '20644', '15518', '22570', '23226', '21900', '4057', '21195', '29388', '15314', '6546', '17980', '4059', '17594', '2322', '31474', '23095', '2679', '25749', '13112', '12059,1', '18648', '23890', '4741', '5661', '8929', '7999,1', '14850', '15067', '17370', '13165', '7859', '14489,2', '8900', '18251', '12695', '11685', '16985', '16660', '6415', '4760', '34150', '21097', '4117,5', '42740', '16870', '8800', '19341,2', '58490', '3356', '5357', '5169', '9680', '2470', '4928,96', '27781', '4887', '10750', '7340', '23148', '11692', '34900', '9930', '4689', '10158', '12610', '18649', '36910', '29691', '4132,5', '26300', '10021,5', '11687', '13981,8', '6567', '48800', '21778', '23980', '5109', '16669,84', '96000', '19627', '7017', '4212,9', '6080', '13965', '9230,9', '24647', '13975', '11136', '16890', '30599', '33192', '25991', '5750', '2595', '14700', '18286', '3359', '12672', '21592', '4476', '11619,84', '3130', '4984', '16798,2', '24059', '4240,91', '24600', '4302', '28580', '23850', '15830', '28594', '28933', '20630', '15498', '2930', '6455', '26091', '9290', '9450', '13383', '6998', '21994', '14480', '13179', '7140', '28820', '27790', '2899', '29234', '6893,7', '4830', '15889', '6904', '29000', '6350', '19440', '5160', '22525', '4369', '29900', '20326', '4563', '30974', '8540', '15102', '5419', '16450,71', '32580', '15552', '14787', '3824', '24330', '5282', '13180', '17347', '27980', '10439', '19880', '14866', '29997,24', '6027', '21300', '8718,1', '20200', '23783', '13259', '27270', '4934', '12816', '23216', '6496,1', '8767', '12130', '26660,2', '3909', '7593', '23480', '3492', '59760', '18610', '2224,8', '11290', '4270', '14219', '8595', '14921', '13469', '11499', '31680', '7640', '3033', '27706', '20393', '14472', '7229', '20598', '8127', '13496', '4858,1', '6990,92', '7978', '10512', '21648', '5085', '12500', '5818,5', '39800', '21901', '13992', '13448', '17999', '12411', '3308', '2498', '19674', '28700', '9010', '14765', '13671', '22286', '12549,07', '18970', '66578', '18980', '13366,2', '10366', '30428', '23696', '8984', '33358,84', '30686', '35700', '21597', '10390', '3970', '16005', '20379,5', '27673', '8233,95', '32914,31', '2243,7', '33996', '40534', '7905', '10130', '6100', '13130', '5435', '33344', '15210', '10704', '82910', '6948', '23854', '18370', '31520', '18338', '22711,21', '2450', '36136', '5129', '38520', '37640,77', '18217', '10483', '24640', '6264', '20196', '37200', '26819', '5020', '27510', '6462', '24741', '4950', '26400', '36086', '10880', '2412', '40521', '21619', '8091', '16092', '21124', '7249', '36490', '22495', '2881', '16830', '3604', '38810', '7307', '37752', '32938', '11188,14', '11116', '20690', '17778', '19597', '15164', '13580', '2798', '7600', '8188', '3499', '15897', '16789,98', '12940', '13734', '5240', '22162', '21496', '18121', '4220', '8977', '45820', '6850', '7880', '7590', '2900', '10647', '77762', '14020', '2936', '4375', '12330', '11892', '3868', '19218', '87880', '10930,52', '36177', '6537', '35500', '51370', '9804', '19736', '5018', '5715', '10256,1', '3636,9', '16100', '6979', '19542', '4904', '4600', '5913', '32525', '10767', '9730', '2909', '14848', '25168', '4804', '16220', '4610', '8844', '99308', '6023,7', '7337', '6237,03', '12148', '3861', 
'24690', '17424', '12903', '44100', '11518', '17990', '16320', '12240', '18197', '13455', ...}\n\n\n==========\nFST_PAYMENT\n{'6700', '910', '835', '2608', '1094', '3126', '4318', '1619', '5244', '2100', ...}\n\n\n==========\nLOAN_AVG_DLQ_AMT\n{'966,07', '820', '0', '670,085', '940', '2020', '910', '620,75', '678,2', '2598,9', ...}\n\n\n==========\nLOAN_MAX_DLQ_AMT\n{'966,07', '820', '0', '1012,9', '2598,9', '940', '2020', '910', '620,75', '678,2', ...}\n\n\n==========\nsample\n{'train'}\n\n\n" ] ], [ [ "Note that some variables that are stored as strings (for example PERSONAL_INCOME) are actually numbers, but for some reason they were parsed as strings.\n\nThe reason is that a comma was used to separate the fractional part of the number.", "_____no_output_____" ], [ "They can be converted, for example, like this:", "_____no_output_____" ] ],
[ [ "df['PERSONAL_INCOME'].map(lambda x: x.replace(',', '.')).astype('float')", "_____no_output_____" ] ],
[ [ "The same effect shows up in the columns `PERSONAL_INCOME`, `CREDIT`, `FST_PAYMENT`, `LOAN_AVG_DLQ_AMT`, `LOAN_MAX_DLQ_AMT`", "_____no_output_____" ], [ "### Now your own small study", "_____no_output_____" ], [ "#### Task 1. Are there missing values in the data? What should be done with them?\n\n(there is no single correct answer - justify your choice)", "_____no_output_____" ] ],
[ [ "# Missing values can sometimes be recovered from indirect evidence. For WORK_TIME, for example, one can assume that\n# if GEN_TITLE and/or ORG_TP_STATE, JOB_DIR etc. are present, then WORK_TIME can be approximated by FACT_LIVING_TERM.\n# If indirect recovery is difficult, features can be grouped: build groups that encode a tuple of features\n# and put the group code in place of the columns. Another option is to impute the mean, assuming the spread is unlikely to exceed one sigma.\n# Some fields - the regions, for instance - can simply be dropped: judging by the task statement they are not very significant features.", "_____no_output_____" ] ],
[ [ "#### Task 2. Are there categorical features? What should be done with them?", "_____no_output_____" ] ],
[ [ "# Introduce an encoding - either by the group mean of the target,\n# by the number of objects in each category, or use LabelEncoder / OneHotEncoder", "_____no_output_____" ] ],
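[ [ "# A quick supporting check (an addition, not part of the assignment template): how many\n# missing values each column has and how many distinct values the object (categorical)\n# columns take. df is the raw dataframe that is preprocessed below.\nprint(df.isnull().sum().sort_values(ascending=False).head(10))\nprint(df.select_dtypes(include='object').nunique().sort_values(ascending=False).head(10))", "_____no_output_____" ] ],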
[ [ "#### Task 3. Preprocessing function", "_____no_output_____" ], [ "Write a function that\n\n* Drops the `AGREEMENT_RK` identifier\n* Fixes the '.' and ',' problem in the columns PERSONAL_INCOME, CREDIT, FST_PAYMENT, LOAN_AVG_DLQ_AMT, LOAN_MAX_DLQ_AMT\n* Does something about the missing values\n* Encodes the categorical features\n\nAs a result, your dataframe must contain only numbers and no missing values!", "_____no_output_____" ] ],
[ [ "def collect_emptyvalue_features(_df):\n    return _df.columns[_df.isnull().any()].values\n\ndef preproc_data(df_input):\n    df_output = df_input.copy()\n    # Drop the AGREEMENT_RK identifier\n    df_output.drop(['AGREEMENT_RK'], axis=1, inplace=True)\n    \n    # Fix the '.' and ',' problem in the columns PERSONAL_INCOME, CREDIT, FST_PAYMENT,\n    # LOAN_AVG_DLQ_AMT, LOAN_MAX_DLQ_AMT\n    convertible_features = ['PERSONAL_INCOME', 'CREDIT', 'FST_PAYMENT', 'LOAN_AVG_DLQ_AMT', 'LOAN_MAX_DLQ_AMT']\n    for cf in convertible_features:\n        df_output[cf] = df_output[cf].map(lambda v: v.replace(',', '.')).astype('float')\n    \n    # Deal with the missing values\n    # 1st variant\n    # drop the features that have missing values, because it is hard to pick a sensible filler for them;\n    # this probably reduces the model quality right away\n    df_output.drop(['TP_PROVINCE',\n                    'REGION_NM',\n                    'REG_ADDRESS_PROVINCE',\n                    'POSTAL_ADDRESS_PROVINCE',\n                    'FACT_ADDRESS_PROVINCE',\n                    'JOB_DIR'], axis=1, inplace=True)\n    #\n    # 2nd variant\n    # try marking these fields with a 'not specified' status\n    # for c in ['TP_PROVINCE', 'REGION_NM', 'ORG_TP_STATE', 'JOB_DIR']:\n    #     df_output[c].fillna('Не указано', inplace=True)\n    # (the Russian filler strings below are kept as-is - they act as category labels for these columns)\n    df_output['ORG_TP_STATE'].fillna('Не указано', inplace=True)\n    df_output['GEN_TITLE'].fillna('Другое', inplace=True)\n    df_output['GEN_INDUSTRY'].fillna('Другие сферы', inplace=True)\n    df_output['PREVIOUS_CARD_NUM_UTILIZED'].fillna(0, inplace=True)\n    df_output['ORG_TP_FCAPITAL'].fillna('Без участия', inplace=True)\n    # non-working pensioners with no work record get zero work time\n    df_output.loc[(df_output['SOCSTATUS_PENS_FL'] == 1)\n                  & (df_output['SOCSTATUS_WORK_FL'] == 0)\n                  & df_output['WORK_TIME'].isnull(), 'WORK_TIME'] = 0\n    df_output['WORK_TIME'].fillna(df_output[~df_output['WORK_TIME'].isnull()]['WORK_TIME'].median(), inplace=True)\n\n    # If anything was missed, report it so we can look at the data and choose a filler\n    evf = collect_emptyvalue_features(df_output)\n    if evf.size > 0:\n        print(evf)\n        # raise TypeError('There are some features with empty values')\n    \n    # Encode the categorical features\n    #\n    # 1st variant\n#     df_output = pd.get_dummies(df_output, columns=['ORG_TP_FCAPITAL', 'EDUCATION', 'MARITAL_STATUS',\n#                                                    'GEN_INDUSTRY', 'GEN_TITLE', 'ORG_TP_STATE',\n#                                                    'JOB_DIR', 'FAMILY_INCOME', 'REG_ADDRESS_PROVINCE',\n#                                                    'FACT_ADDRESS_PROVINCE', 'POSTAL_ADDRESS_PROVINCE',\n#                                                    'TP_PROVINCE', 'REGION_NM'])\n    \n    df_output = pd.get_dummies(df_output, columns=['EDUCATION',\n                                                   'MARITAL_STATUS',\n                                                   'GEN_INDUSTRY',\n                                                   'GEN_TITLE',\n                                                   'FAMILY_INCOME',\n                                                   'ORG_TP_FCAPITAL',\n                                                   'ORG_TP_STATE'])\n    #\n    # 2nd variant\n#     df_output['ORG_TP_FCAPITAL'] = np.where(df_output['ORG_TP_FCAPITAL'].str.contains('Без участия'), 1, 0)\n    \n#     from sklearn.preprocessing import LabelEncoder\n#     le_edu = LabelEncoder()\n#     le_ms = LabelEncoder()\n#     le_gi = LabelEncoder()\n#     le_gt = LabelEncoder()\n#     le_ots = LabelEncoder()\n#     # le_jd = LabelEncoder()\n#     le_fi = LabelEncoder()\n#     # le_rap = LabelEncoder()\n#     # le_fap = LabelEncoder()\n#     # le_pap = LabelEncoder()\n#     # le_tp = LabelEncoder()\n#     # le_rn = LabelEncoder()\n#     df_output['EDUCATION'] = le_edu.fit_transform(df_output['EDUCATION'])\n#     df_output['MARITAL_STATUS'] = le_ms.fit_transform(df_output['MARITAL_STATUS'])\n#     df_output['GEN_INDUSTRY'] = le_gi.fit_transform(df_output['GEN_INDUSTRY'])\n#     df_output['GEN_TITLE'] = le_gt.fit_transform(df_output['GEN_TITLE'])\n#     df_output['ORG_TP_STATE'] = le_ots.fit_transform(df_output['ORG_TP_STATE'])\n#     # df_output['JOB_DIR'] = le_jd.fit_transform(df_output['JOB_DIR'])\n#     df_output['FAMILY_INCOME'] = le_fi.fit_transform(df_output['FAMILY_INCOME'])\n#     # df_output['REG_ADDRESS_PROVINCE'] = le_rap.fit_transform(df_output['REG_ADDRESS_PROVINCE'])\n#     # df_output['FACT_ADDRESS_PROVINCE'] = le_fap.fit_transform(df_output['FACT_ADDRESS_PROVINCE'])\n#     # df_output['POSTAL_ADDRESS_PROVINCE'] = le_pap.fit_transform(df_output['POSTAL_ADDRESS_PROVINCE'])\n#     # df_output['TP_PROVINCE'] = le_tp.fit_transform(df_output['TP_PROVINCE'])\n#     # df_output['REGION_NM'] = le_rn.fit_transform(df_output['REGION_NM'])\n    \n    return df_output\n\ndf_output = preproc_data(df)\ndf_output.select_dtypes(include='object') # here must be only `sample`", "_____no_output_____" ], [ "df_preproc = df.pipe(preproc_data)\n\ndf_train_preproc = df_preproc.query('sample == \"train\"').drop(['sample'], axis=1)\ndf_test_preproc = df_preproc.query('sample == \"test\"').drop(['sample'], axis=1)", "_____no_output_____" ] ],
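[ [ "# A small sanity check (an addition on top of the template): after preprocessing there\n# should be no missing values left, and all remaining columns should be numeric.\nprint('NaNs left in train:', df_train_preproc.isnull().sum().sum())\nprint('NaNs left in test:', df_test_preproc.isnull().sum().sum())\ndf_train_preproc.dtypes.value_counts()", "_____no_output_____" ] ],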
[ [ "#### Task 4. Separate the target variable from the other features\n\nYou should end up with:\n* 2 matrices: X and X_test\n* 2 vectors: y and y_test", "_____no_output_____" ] ],
[ [ "y = df_train_preproc['TARGET']\nX = df_train_preproc.drop(['TARGET'], axis=1)\n\ny_test = df_test_preproc['TARGET']\nX_test = df_test_preproc.drop(['TARGET'], axis=1)", "_____no_output_____" ], [ "X.shape, y.shape", "_____no_output_____" ] ],
[ [ "#### Task 5. Training and evaluating different models", "_____no_output_____" ] ],
[ [ "from sklearn.model_selection import train_test_split\n# train_test_split?", "_____no_output_____" ], [ "# test_size=0.3, random_state=42\n\n## Your Code Here\nX_train, X_train_test, y_train, y_train_test = train_test_split(X, y, test_size=0.3, random_state=42)\nX_train.shape, y_train.shape, X_train_test.shape, y_train_test.shape", "_____no_output_____" ], [ "# Try the following \"black boxes\": the interface is the same -\n# fit, \n# predict, \n# predict_proba\n\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.linear_model import LogisticRegression\n\n## Your Code Here\ndtc = DecisionTreeClassifier()\ndtc.fit(X_train, y_train)\npredict_dtc = dtc.predict(X_train_test)\npredict_proba_dtc = dtc.predict_proba(X_train_test)\n\nrfc = RandomForestClassifier()\nrfc.fit(X_train, y_train)\npredict_rfc = rfc.predict(X_train_test)\npredict_proba_rfc = rfc.predict_proba(X_train_test)\n\nlr = LogisticRegression()\nlr.fit(X_train, y_train)\npredict_lr = lr.predict(X_train_test)\npredict_proba_lr = lr.predict_proba(X_train_test)", "_____no_output_____" ], [ "# Compute the standard metrics\n# accuracy, precision, recall\n\nfrom sklearn.metrics import accuracy_score, precision_score, recall_score\n\nmodels_accuracy = {\n    'dtc' : accuracy_score(y_train_test, predict_dtc),\n    'rfc' : accuracy_score(y_train_test, predict_rfc),\n    'lr' : accuracy_score(y_train_test, predict_lr)\n}\nbest_accuracy_score = max(models_accuracy.values())\nbest_by_accuracy = [k for k, v in models_accuracy.items() if v == best_accuracy_score]\n\nmodels_precision_score = {\n    'dtc' : precision_score(y_train_test, predict_dtc),\n    'rfc' : precision_score(y_train_test, predict_rfc),\n    'lr' : precision_score(y_train_test, predict_lr)\n}\nbest_precision_score = max(models_precision_score.values())\nbest_by_precision = [k for k, v in models_precision_score.items() if v == best_precision_score]\n\nmodels_recall_score = {\n    'dtc' : recall_score(y_train_test, predict_dtc),\n    'rfc' : recall_score(y_train_test, predict_rfc),\n    'lr' : recall_score(y_train_test, predict_lr)\n}\nbest_recall_score = max(models_recall_score.values())\nbest_by_recall = [k for k, v in models_recall_score.items() if v == best_recall_score]\n\nmodels_accuracy, models_precision_score, models_recall_score", "_____no_output_____" ],
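[ "# An extra look at the errors (an addition, not required by the template): if the target is\n# imbalanced, as is typical for credit-default data, accuracy alone says little, so a confusion\n# matrix per model is more informative.\nfrom sklearn.metrics import confusion_matrix\nfor name, pred in [('dtc', predict_dtc), ('rfc', predict_rfc), ('lr', predict_lr)]:\n    print(name)\n    print(confusion_matrix(y_train_test, pred))", "_____no_output_____" ],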
max(models_precision_score.values())\nmodels_precision_score, best_precision_score, list(filter(lambda k: models_precision_score.get(k) == best_precision_score, models_precision_score.keys()))\n\n\nmodels_recall_score = {\n 'dtc' : recall_score(y_train_test, predict_dtc),\n 'rfc' : recall_score(y_train_test, predict_rfc),\n 'lr' : recall_score(y_train_test, predict_lr)\n}\n \nbest_recall_score = max(models_recall_score.values())\nmodels_recall_score, best_recall_score, list(filter(lambda k: models_recall_score.get(k) == best_recall_score, models_recall_score.keys()))\n\n\nmodels_accuracy, models_precision_score, models_recall_score", "_____no_output_____" ], [ "# Визуалищировать эти метрики всех моделей на одном графике (чтоб визуально посмотреть)\n\n## Your Code Here\nfrom sklearn.metrics import precision_recall_curve\nfrom matplotlib import pyplot as plt\nfrom sklearn.metrics import roc_auc_score, roc_curve\n\nprecision_prc_dtc, recall_prc_dtc, treshold_prc_dtc = precision_recall_curve(y_train_test, predict_proba_dtc[:,1])\nprecision_prc_rfc, recall_prc_rfc, treshold_prc_rfc = precision_recall_curve(y_train_test, predict_proba_rfc[:,1])\nprecision_prc_lr, recall_prc_lr, treshold_prc_lr = precision_recall_curve(y_train_test, predict_proba_lr[:,1])\n\n%matplotlib inline\nplt.figure(figsize=(10, 10))\nplt.plot(precision_prc_dtc, recall_prc_dtc, label='dtc')\nplt.plot(precision_prc_rfc, recall_prc_rfc, label='rfc')\nplt.plot(precision_prc_lr, recall_prc_lr, label='lr')\nplt.legend(loc='upper right')\nplt.ylabel('recall')\nplt.xlabel('precision')\nplt.grid(True)\nplt.title('Precision Recall Curve')\nplt.xlim((-0.01, 1.01))\nplt.ylim((-0.01, 1.01))", "_____no_output_____" ], [ "# Потроить roc-кривые всех можелей на одном графике\n# Вывести roc_auc каждой моделе\n## Your Code Here\n\nfpr_dtc, tpr_dtc, thresholds_dtc = roc_curve(y_train_test, predict_proba_dtc[:,1])\nfpr_rfc, tpr_rfc, thresholds_rfc = roc_curve(y_train_test, predict_proba_rfc[:,1])\nfpr_lr, tpr_lr, thresholds_lr = roc_curve(y_train_test, predict_proba_lr[:,1])\nplt.figure(figsize=(10, 10))\nplt.plot(fpr_dtc, tpr_dtc, label='dtc')\nplt.plot(fpr_rfc, tpr_rfc, label='rfc')\nplt.plot(fpr_lr, tpr_lr, label='lr')\nplt.legend()\nplt.plot([1.0, 0], [1.0, 0])\nplt.ylabel('tpr')\nplt.xlabel('fpr')\nplt.grid(True)\nplt.title('ROC curve')\nplt.xlim((-0.01, 1.01))\nplt.ylim((-0.01, 1.01))\n\nroc_auc_score(y_train_test, predict_proba_dtc[:,1]), roc_auc_score(y_train_test, predict_proba_rfc[:,1]), roc_auc_score(y_train_test, predict_proba_lr[:,1])", "_____no_output_____" ], [ "from sklearn.cross_validation import cross_val_score\nfrom sklearn.model_selection import StratifiedKFold\n# Сделать k-fold (10 фолдов) кросс-валидацию каждой модели\n# И посчитать средний roc_auc\ncv = StratifiedKFold(n_splits=10, shuffle=True, random_state=123)\n## Your Code Here\ns1 = cross_val_score( dtc, X_train_test, y_train_test, scoring='roc_auc', cv=cv.get_n_splits()), cross_val_score( rfc, X_train_test, y_train_test, scoring='roc_auc', cv=cv.get_n_splits()), cross_val_score( lr, X_train_test, y_train_test, scoring='roc_auc', cv=cv.get_n_splits())\n\nfor train_ind, test_ind in cv.split(X, y):\n x_train_xval_ml = np.array(X)[train_ind,:]\n x_test_xval_ml = np.array(X)[test_ind,:]\n y_train_xval_ml = np.array(y)[train_ind]\n\ns2 = cross_val_score( dtc, x_train_xval_ml, y_train_xval_ml, scoring='roc_auc', cv=cv.get_n_splits()), cross_val_score( rfc, x_train_xval_ml, y_train_xval_ml, scoring='roc_auc', cv=cv.get_n_splits()), cross_val_score( lr, x_train_xval_ml, 
y_train_xval_ml, scoring='roc_auc', cv=cv.get_n_splits())\ns3 = s1 = cross_val_score( dtc, X, y, scoring='roc_auc', cv=cv.get_n_splits()), cross_val_score( rfc, X, y, scoring='roc_auc', cv=cv.get_n_splits()), cross_val_score( lr, X, y, scoring='roc_auc', cv=cv.get_n_splits())\ns1,s2,s3", "_____no_output_____" ], [ "# Взять лучшую модель и сделать predict (с вероятностями (!!!)) для test выборки\n## Your Code Here\npredict_lr_XTEST_proba = lr.predict_proba(X_test)", "_____no_output_____" ], [ "# Померить roc_auc на тесте\nfpr_lr, tpr_lr, thresholds_lr = roc_curve(y_test, predict_lr_XTEST_proba[:, 1])\nplt.figure(figsize=(10, 10))\nplt.plot(fpr_dtc, tpr_dtc, label='lr')\nplt.legend()\nplt.plot([1.0, 0], [1.0, 0])\nplt.ylabel('tpr')\nplt.xlabel('fpr')\nplt.grid(True)\nplt.title('ROC curve')\nplt.xlim((-0.01, 1.01))\nplt.ylim((-0.01, 1.01))\nroc_auc_score(y_test, predict_lr_XTEST_proba[:, 1])", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a86dc6289f8504b975d80463315e791b017ed47
52,582
ipynb
Jupyter Notebook
Classification/Classification.ipynb
razanmdr/Machine-Learning-Python
614966637d7e7ed92350fb49dbd57e1766cc7559
[ "Apache-2.0" ]
null
null
null
Classification/Classification.ipynb
razanmdr/Machine-Learning-Python
614966637d7e7ed92350fb49dbd57e1766cc7559
[ "Apache-2.0" ]
null
null
null
Classification/Classification.ipynb
razanmdr/Machine-Learning-Python
614966637d7e7ed92350fb49dbd57e1766cc7559
[ "Apache-2.0" ]
null
null
null
32.101343
20,304
0.605131
[ [ [ "# Classification", "_____no_output_____" ], [ "**Data - Social network Ads**", "_____no_output_____" ], [ "## Importing the Libraries", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd", "_____no_output_____" ] ], [ [ "## Importing the dataset", "_____no_output_____" ] ], [ [ "dataset = pd.read_csv('Social_Network_Ads.csv')\nX = dataset.iloc[:, :-1].values\ny = dataset.iloc[:, -1].values", "_____no_output_____" ] ], [ [ "## Splitting the dataset into the Training set and Test set", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)", "_____no_output_____" ] ], [ [ "## Feature Scaling", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import StandardScaler\nsc = StandardScaler()\nX_train = sc.fit_transform(X_train)\nX_test = sc.transform(X_test)", "_____no_output_____" ], [ "import warnings\nwarnings.filterwarnings('ignore')", "_____no_output_____" ] ], [ [ "### 1) Logistic Regression", "_____no_output_____" ] ], [ [ "#Training the Logistic Regression model on the Training set\n\nfrom sklearn.linear_model import LogisticRegression\nclassifier = LogisticRegression(random_state = 0)\nclassifier.fit(X_train, y_train)", "_____no_output_____" ], [ "#Predicting a new result\n\nprint(classifier.predict(sc.transform([[30,87000]])))", "[0]\n" ], [ "#Predicting the Test set results\n\ny_pred = classifier.predict(X_test)\nprint(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))", "[[0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [1 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 1]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 1]\n [0 0]\n [0 0]\n [0 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 1]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 1]\n [0 0]\n [0 0]\n [1 0]\n [0 0]\n [1 1]\n [1 1]\n [1 1]\n [1 0]\n [0 0]\n [0 0]\n [1 1]\n [1 1]\n [0 0]\n [1 1]\n [0 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 1]\n [0 0]\n [0 1]\n [1 1]\n [1 1]]\n" ], [ "#Making the Confusion Matrix\n\nfrom sklearn.metrics import confusion_matrix, accuracy_score\ncm = confusion_matrix(y_test, y_pred)\nprint(cm)\naccuracy_score(y_test, y_pred)", "[[65 3]\n [ 8 24]]\n" ], [ "#Visualising the Test set results\n\nfrom matplotlib.colors import ListedColormap\nX_set, y_set = sc.inverse_transform(X_test), y_test\nX1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25),\n np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25))\nplt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape),\n alpha = 0.75, cmap = ListedColormap(('red', 'green')))\nplt.xlim(X1.min(), X1.max())\nplt.ylim(X2.min(), X2.max())\nfor i, j in enumerate(np.unique(y_set)):\n plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j)\nplt.title('Logistic Regression (Test set)')\nplt.xlabel('Age')\nplt.ylabel('Estimated Salary')\nplt.legend()\nplt.show()", "*c* argument looks like a single numeric RGB or RGBA sequence, which 
should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.\n*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.\n" ] ], [ [ "### 2) K-Nearest Neighbors (K-NN)", "_____no_output_____" ] ], [ [ "#Training the K-NN model on the Training set\n\nfrom sklearn.neighbors import KNeighborsClassifier\nclassifier = KNeighborsClassifier(n_neighbors = 5, metric = 'minkowski', p = 2)\nclassifier.fit(X_train, y_train)", "_____no_output_____" ], [ "#Predicting a new result\n\nprint(classifier.predict(sc.transform([[30,87000]])))", "[0]\n" ], [ "#Predicting the Test set results\n\ny_pred = classifier.predict(X_test)\nprint(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))", "[[0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [1 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 1]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [1 1]\n [0 0]\n [0 0]\n [1 0]\n [1 1]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [1 1]\n [1 1]\n [1 0]\n [0 0]\n [0 0]\n [1 1]\n [0 1]\n [0 0]\n [1 1]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 1]\n [0 0]\n [1 1]\n [1 1]\n [1 1]]\n" ], [ "#Making the Confusion Matrix\n\nfrom sklearn.metrics import confusion_matrix, accuracy_score\ncm = confusion_matrix(y_test, y_pred)\nprint(cm)\naccuracy_score(y_test, y_pred)", "[[64 4]\n [ 3 29]]\n" ] ], [ [ "### 3) Support Vector Machine (SVM)", "_____no_output_____" ] ], [ [ "#Training the SVM model on the Training set\n\nfrom sklearn.svm import SVC\nclassifier = SVC(kernel = 'linear', random_state = 0)\nclassifier.fit(X_train, y_train)", "_____no_output_____" ], [ "#Predicting a new result\n\nprint(classifier.predict(sc.transform([[30,87000]])))", "[0]\n" ], [ "#Predicting the Test set results\n\ny_pred = classifier.predict(X_test)\nprint(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))", "[[0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 1]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 1]\n [0 0]\n [0 0]\n [0 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 1]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 1]\n [0 0]\n [0 0]\n [1 0]\n [0 0]\n [1 1]\n [1 1]\n [1 1]\n [1 0]\n [0 0]\n [0 0]\n [1 1]\n [1 1]\n [0 0]\n [1 1]\n [0 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 1]\n [0 0]\n [0 1]\n [1 1]\n [1 1]]\n" ], [ "#Making the Confusion Matrix\n\nfrom 
sklearn.metrics import confusion_matrix, accuracy_score\ncm = confusion_matrix(y_test, y_pred)\nprint(cm)\naccuracy_score(y_test, y_pred)", "[[66 2]\n [ 8 24]]\n" ] ], [ [ "### 4) Kernel SVM", "_____no_output_____" ] ], [ [ "#Training the Kernel SVM model on the Training set\n\nfrom sklearn.svm import SVC\nclassifier = SVC(kernel = 'rbf', random_state = 0)\nclassifier.fit(X_train, y_train)", "_____no_output_____" ], [ "#Predicting a new result\n\nprint(classifier.predict(sc.transform([[30,87000]])))", "[0]\n" ], [ "#Predicting the Test set results\n\ny_pred = classifier.predict(X_test)\nprint(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))", "[[0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [1 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 1]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [1 1]\n [0 0]\n [0 0]\n [1 0]\n [1 1]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [1 1]\n [1 1]\n [1 0]\n [0 0]\n [0 0]\n [1 1]\n [0 1]\n [0 0]\n [1 1]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 1]\n [0 0]\n [1 1]\n [1 1]\n [1 1]]\n" ], [ "#Making the Confusion Matrix\n\nfrom sklearn.metrics import confusion_matrix, accuracy_score\ncm = confusion_matrix(y_test, y_pred)\nprint(cm)\naccuracy_score(y_test, y_pred)", "[[64 4]\n [ 3 29]]\n" ] ], [ [ "### 5) Naive Bayes", "_____no_output_____" ] ], [ [ "#Training the Naive Bayes model on the Training set\n\nfrom sklearn.naive_bayes import GaussianNB\nclassifier = GaussianNB()\nclassifier.fit(X_train, y_train)", "_____no_output_____" ], [ "#Predicting a new result\n\nprint(classifier.predict(sc.transform([[30,87000]])))", "[0]\n" ], [ "#Predicting the Test set results\n\ny_pred = classifier.predict(X_test)\nprint(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))", "[[0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [1 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 1]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [1 1]\n [0 0]\n [0 0]\n [1 0]\n [1 1]\n [0 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 1]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [1 1]\n [1 1]\n [1 0]\n [0 0]\n [0 0]\n [1 1]\n [0 1]\n [0 0]\n [1 1]\n [0 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 1]\n [0 0]\n [1 1]\n [1 1]\n [1 1]]\n" ], [ "#Making the Confusion Matrix\n\nfrom sklearn.metrics import confusion_matrix, accuracy_score\ncm = confusion_matrix(y_test, y_pred)\nprint(cm)\naccuracy_score(y_test, y_pred)", "[[65 3]\n [ 7 25]]\n" ] ], [ [ "### 6) Decision Tree Classification", "_____no_output_____" ] ], [ [ "#Training the Decision Tree Classification model on the Training set\n\nfrom sklearn.tree import DecisionTreeClassifier\nclassifier = DecisionTreeClassifier(criterion = 'entropy', random_state = 0)\nclassifier.fit(X_train, y_train)", "_____no_output_____" 
], [ "#Predicting a new result\n\nprint(classifier.predict(sc.transform([[30,87000]])))", "[0]\n" ], [ "#Predicting the Test set results\n\ny_pred = classifier.predict(X_test)\nprint(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))", "[[0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 0]\n [0 0]\n [1 0]\n [1 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [0 0]\n [0 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 1]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [1 1]\n [0 0]\n [0 0]\n [1 0]\n [1 1]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [1 0]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [1 1]\n [1 1]\n [1 0]\n [0 0]\n [0 0]\n [1 1]\n [0 1]\n [0 0]\n [1 1]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [1 1]\n [1 1]]\n" ], [ "#Making the Confusion Matrix\n\nfrom sklearn.metrics import confusion_matrix, accuracy_score\ncm = confusion_matrix(y_test, y_pred)\nprint(cm)\naccuracy_score(y_test, y_pred)", "[[62 6]\n [ 3 29]]\n" ] ], [ [ "### 7) Random Forest Classification", "_____no_output_____" ] ], [ [ "#Training the Random Forest Classification model on the Training set\n\nfrom sklearn.ensemble import RandomForestClassifier\nclassifier = RandomForestClassifier(n_estimators = 10, criterion = 'entropy', random_state = 0)\nclassifier.fit(X_train, y_train)", "_____no_output_____" ], [ "#Predicting a new result\n\nprint(classifier.predict(sc.transform([[30,87000]])))", "[0]\n" ], [ "#Predicting the Test set results\n\ny_pred = classifier.predict(X_test)\nprint(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))", "[[0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [1 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 0]\n [1 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [0 0]\n [0 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 1]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [1 1]\n [0 0]\n [0 0]\n [1 0]\n [1 1]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 0]\n [1 1]\n [1 1]\n [1 1]\n [1 0]\n [0 0]\n [0 0]\n [1 1]\n [0 1]\n [0 0]\n [1 1]\n [1 1]\n [0 0]\n [0 0]\n [1 1]\n [0 0]\n [0 0]\n [0 0]\n [0 1]\n [0 0]\n [1 1]\n [1 1]\n [1 1]]\n" ], [ "#Making the Confusion Matrix\n\nfrom sklearn.metrics import confusion_matrix, accuracy_score\ncm = confusion_matrix(y_test, y_pred)\nprint(cm)\naccuracy_score(y_test, y_pred)", "[[63 5]\n [ 4 28]]\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
4a86fab9e25d03d7d7d18940d00a07af3da1a0ca
708
ipynb
Jupyter Notebook
06-Layering.ipynb
amitkaps/data-vis-python
ff748d2d6cd462e0f99c54ed58a3d3982d505eb2
[ "MIT" ]
null
null
null
06-Layering.ipynb
amitkaps/data-vis-python
ff748d2d6cd462e0f99c54ed58a3d3982d505eb2
[ "MIT" ]
null
null
null
06-Layering.ipynb
amitkaps/data-vis-python
ff748d2d6cd462e0f99c54ed58a3d3982d505eb2
[ "MIT" ]
2
2020-05-14T19:47:49.000Z
2020-07-08T03:20:16.000Z
17.268293
48
0.511299
[ [ [ "# Facets & Layers\n\n- Facets\n- Horizontal and Vertical Concatenation\n- Layers\n- Repeat\n- ", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
4a870d0efde04fec741ce40d993dc586a8570bcf
472,366
ipynb
Jupyter Notebook
tutorial.ipynb
ETCBC/heads
e35fa13c55ba72f633ff5a9e3218056abe7fb355
[ "Unlicense" ]
null
null
null
tutorial.ipynb
ETCBC/heads
e35fa13c55ba72f633ff5a9e3218056abe7fb355
[ "Unlicense" ]
null
null
null
tutorial.ipynb
ETCBC/heads
e35fa13c55ba72f633ff5a9e3218056abe7fb355
[ "Unlicense" ]
null
null
null
55.592091
8,767
0.539882
[ [ [ "# [Best viewed in NBviewer](https://nbviewer.jupyter.org/github/ETCBC/heads/blob/master/tutorial.ipynb)", "_____no_output_____" ], [ "# Heads Tutorial\n\n\n## Introduction \n\nThis notebook provides a basic introduction to the `heads` edge and node features for BHSA, produced in `etcbc/heads`. Syntactic phrase heads are important because they provide the semantic core for a phrase. To give a simple example:\n\n> a good man\n\nIn this phrase, the word \"man\" serves as the phrase head. The word \"good\" is an adjective, and \"a\" is an indefinite article. While important, \"good\" and \"a\" are more optional, and their influence is mostly local to the phrase and to their relation to \"man.\" The phrase head, by comparison, has a stronger influence on the surrounding syntactic-semantic environment. For instance: \n\n> A good man walks.\n\nThe use of the subject phrase head \"man\" coincides with the use of the verb phrase \"walks.\" This is possible because \"man\" has the semantic property of \"things that walk,\" or perhaps animacy. Having heads for Hebrew phrases in BHSA enables more sophisticated analyses such as [participant tracking](https://en.wikipedia.org/wiki/Named-entity_recognition) and [word embeddings](https://en.wikipedia.org/wiki/Word_embedding) (see use case [here](https://nbviewer.jupyter.org/github/codykingham/noun_semantics_SBL18/blob/master/analysis/noun_semantics.ipynb)). \n\n## Phrase Types and Their Heads\n\nDifferent phrases in BHSA have different type values (BHSA: `typ`, [see bottom here](https://etcbc.github.io/bhsa/features/hebrew/c/typ)). A type value is derived from the part of speech of the phrase's head. Below are some example type values and expected parts of speech in BHSA:\n\n| type | BHSA typ | head part of speech |\n|--------------------|------|---------------------|\n| verb phrase | VP | verb |\n| noun phrase | NP | noun (BHSA: subs) |\n| preposition phrase | PP | preposition |\n| conjunction phrase | CP | conjunction |\n| adverb phrase | AdvP | adverb |\n| adjective phrase | AdjP | adjective |\n\nAdditional phrase types and parts of speech can be browsed [here](https://etcbc.github.io/bhsa/features/hebrew/c/typ) and [here](https://etcbc.github.io/bhsa/features/hebrew/c/pdp.html). To give an example: in the phrase בראשית, the head is simply ב, since this phrase has the type: prepositional phrase, or `PP`. The `heads` data is calculated for phrases using these correspondences. So its important to know when searching for certain kinds of data. In case one wants to isolate ראשית (and ignore any potential modifying elements) another feature is made available, called `nheads` (nominal heads), which provides for such cases.\n\n## Dataset\n\nThe `heads` dataset can be downloaded and used with Text-Fabric's `BHSA` app. An additional argument is passed to the `use` method: `mod=etcbc/heads/tf`. The location is not a local directory, but rather provides TF with coordinates to find the latest data in Github. 
The coordinates are organized as `organization/repository/tf`.", "_____no_output_____" ] ], [ [ "from tf.app import use\n\nA = use('bhsa', mod='etcbc/heads/tf', hoist=globals())", "Using etcbc/bhsa/tf - c r1.5 in /Users/cody/text-fabric-data\nUsing etcbc/phono/tf - c r1.2 in /Users/cody/text-fabric-data\nUsing etcbc/parallels/tf - c r1.2 in /Users/cody/text-fabric-data\nUsing etcbc/heads/tf - c rv1.02 in /Users/cody/text-fabric-data\n" ] ], [ [ "Four primary datasets are available in `heads`:\n\n* `head` - an edge feature from a syntactic phrase head to its containing phrase node\n* `obj_prep` - an edge feature from an object of a preposition to its governing preposition\n* `nhead` - an edge feature from a nominal syntactic phrase head to its containing phrase node; handy for prepositional phrases with nominal elements contained within\n* `sem_set` - a semantic set feature which contains the following feature values:\n * `quant` - enhanced quantifier sets\n * `prep` - enhanced preposition sets", "_____no_output_____" ], [ "## Getting a Phrase Head\n\nThe feature `head` contains an edge feature from a word node to the phrase for which it contributes headship. Let's have a look at heads in Genesis 1 with a simple query and display:", "_____no_output_____" ] ], [ [ "# configure TF display\nA.displaySetup(condenseType='phrase', condensed=True)\n\n# search for heads in Genesis 1:1-2\ngenesis_heads = '''\n\nbook book@en=Genesis\n chapter chapter=1\n verse verse<2\n phrase\n <head- word\n\n'''\n\ngenesis_heads = A.search(genesis_heads)\n\nA.show(genesis_heads)", " 0.84s 5 results\n" ] ], [ [ "Note how the selected heads agree with their phrase types: prepositional phrases have preposition heads, noun phrases have noun heads (`subs`), etc. Note also how the `head` feature encodes multiple heads in cases where there are coordinated heads.\n\nBelow we find cases where there are at least 3 coordinate heads in a phrase.", "_____no_output_____" ] ], [ [ "three_heads = '''\n\np:phrase\n w1:word\n < w2:word\n < w3:word\n\np <head- w1\np <head- w2\np <head- w3\n'''\n\nthree_heads = A.search(three_heads)\n\nA.show(three_heads, end=10, withNodes=True)", " 3.77s 11285 results\n" ] ], [ [ "### Hand Coding Heads\n\nHeads features can also be accessed by hand-coding with the `E` and `F` classes. The `E` (edge) class must be called on a word node with `.f` (edge from word node).", "_____no_output_____" ] ], [ [ "T.text(1)", "_____no_output_____" ], [ "E.head.f(1)", "_____no_output_____" ], [ "T.text(E.head.f(1)[0])", "_____no_output_____" ] ], [ [ "But the heads can also be located by calling `E.head` on the phrase, but with `.t` at the end (edge to):", "_____no_output_____" ] ], [ [ "E.head.t(651542)", "_____no_output_____" ], [ "T.text(E.head.t(651542)[0])", "_____no_output_____" ] ], [ [ "NB that the edge class always returns a tuple. This is because multiple edges are possible. This is valuable in cases where you want to find a phrase with multiple heads, as we did in the template above:", "_____no_output_____" ] ], [ [ "example = three_heads[0][0]\n\nE.head.t(example)", "_____no_output_____" ], [ "A.prettyTuple((example,) + E.head.t(example), seqNumber=0, withNodes=True)", "_____no_output_____" ] ], [ [ "## Getting an Object of a Preposition\n\nOften one wants to find objects of prepositions without accidentally selecting secondary, modifying elements, and without omitting coordinated objects. This is now possible with the new `obj_prep` edge feature. A few examples are provided below. 
We highlight prepositions in blue, and their objects in pink. ", "_____no_output_____" ] ], [ [ "E.obj_prep.f(23)", "_____no_output_____" ], [ "find_objs = '''\n\nphrase\n word\n <obj_prep- word\n\n'''\n\nfind_objs = A.search(find_objs)\n\nhighlights = {}\nfor result in find_objs:\n highlights[result[1]] = 'lightblue'\n highlights[result[2]] = 'pink'\n \nA.show(find_objs, highlights=highlights, end=10)", " 1.49s 64233 results\n" ] ], [ [ "Note that prepositional terms such as פני and תוך are properly treated as prepositions. This is due to a new custom set of prepositional words which were needed when processing the `heads` and `obj_prep` features. This feature is made available in `sem_set`, which has two values: `prep` and `quant`.\n\nBelow are cases of `prep` that are marked in BHSA as `subs` (noun):", "_____no_output_____" ] ], [ [ "sem_preps = A.search('''\n\nword pdp=subs sem_set=prep\n\n''')\n\nA.show(sem_preps, end=10, extraFeatures={'sem_set'})", " 0.53s 2069 results\n" ] ], [ [ "Returning to `obj_prep`, let's find cases where a preposition has more than one object. We do this with a hand-coded method this time. NB that only the cases of multiple objects of prepositions are highlighted.", "_____no_output_____" ] ], [ [ "results = []\nhighlights = {}\n\nfor prep in F.sem_set.s('prep'):\n \n objects = E.obj_prep.t(prep)\n\n if len(objects) > 1:\n phrase = L.u(prep, 'phrase')[0]\n results.append((phrase, prep)+objects)\n \n # update highlights\n highlights[prep] = 'lightblue'\n highlights.update({obj:'pink' for obj in objects})\n \nA.show(results, highlights=highlights, end=10, withNodes=True)", "_____no_output_____" ] ], [ [ "## Getting Nominal Heads\n\nThere are cases where it is beneficial to simply select any nominal head elements from underneath prepositional phrases (\"nominal\" here meaning any non-prepositional head). This is especially relevant when prepositional phrases are chained together, and the nominal element is difficult to recover. For these cases there is the feature `nhead`. Below we find such cases with a simple search:", "_____no_output_____" ] ], [ [ "find_nheads = A.search('''\n\np:phrase typ=PP\n/with/\n =: word sem_set=prep\n <: word sem_set=prep\n/-/\n < w1:word\n < w2:word\n\np <nhead- w1\np <nhead- w2\n''')\n\nA.show(find_nheads, end=10, withNodes=True)", " 2.52s 177 results\n" ] ], [ [ "## Using Enhanced Prepositions and Quantifiers\n\nThe `sem_set` feature brings enhanced preposition and quantifier semantic data, which was used in the calculations of `heads` features. We have already seen `prep` in action. Let's have a look at quantifiers.\n\nIn the BHSA base dataset, quantifiers can only be known through the `ls` (lexical set) feature where there is a value of `card` for cardinal numbers. But other kinds of quantifiers are not idenfiable. Let's have a look at what kinds of quantifiers `sem_set` = `quant` makes available...", "_____no_output_____" ] ], [ [ "sem_quants = A.search('''\n\nphrase\n word ls#card sem_set=quant\n\n''')\n\nA.show(sem_quants, end=5, extraFeatures={'sem_set'})", " 0.90s 6041 results\n" ] ], [ [ "These are all cases of כל. Let's find other kinds. We add a shuffle to randomize them as well. ", "_____no_output_____" ] ], [ [ "import random", "_____no_output_____" ], [ "nonKL = [result for result in sem_quants if F.lex.v(result[1]) != 'KL/']\n\nrandom.shuffle(nonKL)\n\nfor i, result in enumerate(nonKL[:10]):\n A.prettyTuple(result, extraFeatures={'sem_set'}, seqNumber=i)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
4a8719a4931e19ab333ecd281d8d37ee070da87f
1,221
ipynb
Jupyter Notebook
hello-world.ipynb
LL422/hello-world
248b4c52f35ea06ba2dbecbbeacfbc4bda203b31
[ "MIT" ]
null
null
null
hello-world.ipynb
LL422/hello-world
248b4c52f35ea06ba2dbecbbeacfbc4bda203b31
[ "MIT" ]
null
null
null
hello-world.ipynb
LL422/hello-world
248b4c52f35ea06ba2dbecbbeacfbc4bda203b31
[ "MIT" ]
null
null
null
19.693548
87
0.481572
[ [ [ "name = input(\"What is your name?\")\nage = input(\"How old are you?\")\nwhere = input(\"Where are you from?\")\nprint(\"Hello, \" + name + \", \" + age + \" years old, from \" + where + \" !\")", "What is your name? Le\nHow old are you? 19\nWhere are you from? China\n" ] ] ]
[ "code" ]
[ [ "code" ] ]
4a872587dab05023744dfb3f2777b8c52b637262
1,487
ipynb
Jupyter Notebook
tftest.ipynb
TOWARDSzs/cftest
6cb8a21616614f140d2421a610ebeb2a03088179
[ "Apache-2.0" ]
null
null
null
tftest.ipynb
TOWARDSzs/cftest
6cb8a21616614f140d2421a610ebeb2a03088179
[ "Apache-2.0" ]
null
null
null
tftest.ipynb
TOWARDSzs/cftest
6cb8a21616614f140d2421a610ebeb2a03088179
[ "Apache-2.0" ]
null
null
null
30.979167
434
0.570276
[ [ [ "import os\nimport tensorflow as tf", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
4a872c6c012451c1529a6cdb6c8437740707dda5
10,110
ipynb
Jupyter Notebook
notebooks/old/olympics.ipynb
jlindbloom/fmax
22e7b6bd74e3665192770b6f5ac640aae1f42ac7
[ "MIT" ]
1
2021-11-16T17:06:35.000Z
2021-11-16T17:06:35.000Z
notebooks/old/olympics.ipynb
jlindbloom/fmax
22e7b6bd74e3665192770b6f5ac640aae1f42ac7
[ "MIT" ]
null
null
null
notebooks/old/olympics.ipynb
jlindbloom/fmax
22e7b6bd74e3665192770b6f5ac640aae1f42ac7
[ "MIT" ]
null
null
null
33.58804
94
0.426113
[ [ [ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nimport os\nimport sys", "_____no_output_____" ], [ "os.chdir(sys.path[0]+\"/../data\")", "_____no_output_____" ], [ "import urllib.request\nfrom bs4 import BeautifulSoup\nimport pandas as pd\nimport re\nfrom tqdm import tqdm\n\ncategories = [\n \"100 metres, Men\",\n \"200 metres, Men\",\n \"400 metres, Men\",\n \"800 metres, Men\",\n \"1,500 metres, Men\",\n \"5,000 metres, Men\",\n \"10,000 metres, Men\",\n \"Marathon, Men\",\n \"110 metres Hurdles, Men\",\n \"400 metres Hurdles, Men\",\n \"3,000 metres Steeplechase, Men\",\n \"4 x 100 metres Relay, Men\",\n \"4 x 400 metres Relay, Men\",\n \"20 kilometres Walk, Men\",\n \"50 kilometres Walk, Men\",\n \"100 metres, Women\",\n \"200 metres, Women\",\n \"400 metres, Women\",\n \"800 metres, Women\",\n \"1,500 metres, Women\",\n \"5,000 metres, Women\",\n \"10,000 metres, Women\",\n \"Marathon, Women\",\n \"110 metres Hurdles, Women\",\n \"400 metres Hurdles, Women\",\n \"3,000 metres Steeplechase, Women\",\n \"4 x 100 metres Relay, Women\",\n \"4 x 400 metres Relay, Women\",\n \"20 kilometres Walk, Women\",\n]\n\ndata = []\nfor edition in tqdm(range(1,62)): # Data from 1896 to 2020\n edition_url = f\"http://www.olympedia.org/editions/{edition}/sports/ATH\"\n response = urllib.request.urlopen(edition_url)\n edition_soup = BeautifulSoup(response, 'html.parser')\n title = edition_soup.find_all(\"h1\")[0]\n if \"Winter\" in title.text: continue # Skip winter olympics\n try:\n edition_year = int(re.findall(r\"\\d+\", title.text)[0])\n except IndexError:\n continue # Sometimes the page seems to not exist?\n\n for category in categories:\n try:\n elem = edition_soup.find_all(\"a\", string=category)[0]\n except IndexError:\n continue\n href = elem.get('href')\n event_url = \"http://www.olympedia.org\" + href\n response = urllib.request.urlopen(event_url)\n soup = BeautifulSoup(response, 'html.parser')\n table = soup.find_all(\"table\", {\"class\":\"table table-striped\"})[0]\n df = pd.read_html(str(table))[0]\n try:\n # gold_medal_time = float(re.findall(r\"[+-]?([0-9]+([.][0-9]*)?|[.][0-9]+)\",\n # df['Final'][0].split()[0])[0][0]\n # )\n final_time_raw = df['Final'][0].split()[0]\n h, m, s = re.findall(r\"^(?:(\\d{0,2})-)?(?:(\\d{0,2}):)?(\\d{0,2}\\.?\\d*)\", \n final_time_raw)[0]\n h,m,s = int(h) if len(h)>0 else 0, int(m) if len(m)>0 else 0, float(s)\n gold_medal_time = h*60*60+m*60+s\n except KeyError:\n continue\n data.append({\n \"category\" : category,\n \"year\": edition_year,\n \"time\" : gold_medal_time,\n \"reference\": event_url,\n })\n\ndf = pd.DataFrame(data)\ndf.to_csv('olympics_athletic_gold_medal_times.csv')\ndf", "100%|██████████| 61/61 [12:44<00:00, 12.53s/it]\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
4a873a36d215811e362bbb4f6819745da24ff758
10,947
ipynb
Jupyter Notebook
lijin-THU:notes-python/08-object-oriented-programming/08.05-special-method.ipynb
Maecenas/python-getting-started
2739444e0f4aa692123dcd0c1b9a44218281f9b6
[ "MIT" ]
null
null
null
lijin-THU:notes-python/08-object-oriented-programming/08.05-special-method.ipynb
Maecenas/python-getting-started
2739444e0f4aa692123dcd0c1b9a44218281f9b6
[ "MIT" ]
null
null
null
lijin-THU:notes-python/08-object-oriented-programming/08.05-special-method.ipynb
Maecenas/python-getting-started
2739444e0f4aa692123dcd0c1b9a44218281f9b6
[ "MIT" ]
null
null
null
19.724324
85
0.451448
[ [ [ "# 特殊方法", "_____no_output_____" ], [ "**Python** 使用 `__` 开头的名字来定义特殊的方法和属性,它们有:\n\n- `__init__()`\n- `__repr__()`\n- `__str__()`\n- `__call__()`\n- `__iter__()`\n- `__add__()`\n- `__sub__()`\n- `__mul__()`\n- `__rmul__()`\n- `__class__`\n- `__name__`", "_____no_output_____" ], [ "## 构造方法 `__init__()`", "_____no_output_____" ], [ "之前说到,在产生对象之后,我们可以向对象中添加属性。事实上,还可以通过构造方法,在构造对象的时候直接添加属性:", "_____no_output_____" ] ], [ [ "class Leaf(object):\n \"\"\"\n A leaf falling in the woods.\n \"\"\"\n def __init__(self, color='green'):\n self.color = color", "_____no_output_____" ] ], [ [ "默认属性值:", "_____no_output_____" ] ], [ [ "leaf1 = Leaf()\n\nprint leaf1.color", "green\n" ] ], [ [ "传入有参数的值:", "_____no_output_____" ] ], [ [ "leaf2 = Leaf('orange')\n\nprint leaf2.color", "orange\n" ] ], [ [ "回到森林的例子:", "_____no_output_____" ] ], [ [ "import numpy as np\n\nclass Forest(object):\n \"\"\" Forest can grow trees which eventually die.\"\"\"\n def __init__(self):\n self.trees = np.zeros((150,150), dtype=bool)\n self.fires = np.zeros((150,150), dtype=bool)", "_____no_output_____" ] ], [ [ "我们在构造方法中定义了两个属性 `trees` 和 `fires`:", "_____no_output_____" ] ], [ [ "forest = Forest()\n\nforest.trees", "_____no_output_____" ], [ "forest.fires", "_____no_output_____" ] ], [ [ "修改属性的值:", "_____no_output_____" ] ], [ [ "forest.trees[0,0]=True\nforest.trees", "_____no_output_____" ] ], [ [ "改变它的属性值不会影响其他对象的属性值:", "_____no_output_____" ] ], [ [ "forest2 = Forest()\n\nforest2.trees", "_____no_output_____" ] ], [ [ "事实上,`__new__()` 才是真正产生新对象的方法,`__init__()` 只是对对象进行了初始化,所以:\n\n```python\nleaf = Leaf()\n```\n\n相当于\n\n```python\nmy_new_leaf = Leaf.__new__(Leaf)\nLeaf.__init__(my_new_leaf)\nleaf = my_new_leaf\n```", "_____no_output_____" ], [ "## 表示方法 `__repr__()` 和 `__str__()`", "_____no_output_____" ] ], [ [ "class Leaf(object):\n \"\"\"\n A leaf falling in the woods.\n \"\"\"\n def __init__(self, color='green'):\n self.color = color\n def __str__(self):\n \"This is the string that is printed.\"\n return \"A {} leaf\".format(self.color)\n def __repr__(self):\n \"This string recreates the object.\"\n return \"{}(color='{}')\".format(self.__class__.__name__, self.color)", "_____no_output_____" ] ], [ [ "`__str__()` 是使用 `print` 函数显示的结果:", "_____no_output_____" ] ], [ [ "leaf = Leaf()\n\nprint leaf", "A green leaf\n" ] ], [ [ "`__repr__()` 返回的是不使用 `print` 方法的结果:", "_____no_output_____" ] ], [ [ "leaf", "_____no_output_____" ] ], [ [ "回到森林的例子:", "_____no_output_____" ] ], [ [ "import numpy as np\n\nclass Forest(object):\n \"\"\" Forest can grow trees which eventually die.\"\"\"\n def __init__(self, size=(150,150)):\n self.size = size\n self.trees = np.zeros(self.size, dtype=bool)\n self.fires = np.zeros((self.size), dtype=bool)\n \n def __repr__(self):\n my_repr = \"{}(size={})\".format(self.__class__.__name__, self.size)\n return my_repr\n \n def __str__(self):\n return self.__class__.__name__", "_____no_output_____" ], [ "forest = Forest()", "_____no_output_____" ] ], [ [ "`__str__()` 方法:", "_____no_output_____" ] ], [ [ "print forest", "Forest\n" ] ], [ [ "`__repr__()` 方法:", "_____no_output_____" ] ], [ [ "forest", "_____no_output_____" ] ], [ [ "`__name__` 和 `__class__` 为特殊的属性:", "_____no_output_____" ] ], [ [ "forest.__class__", "_____no_output_____" ], [ "forest.__class__.__name__", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
4a873d12963f00b23cd82d5f81249bdd0cc93a9f
106,316
ipynb
Jupyter Notebook
Support notebooks/XGBoost.ipynb
MoshaLangerak/Data_Challenge_2_Group_18
34325993376edb967723611d0a86ccef748932d1
[ "MIT" ]
null
null
null
Support notebooks/XGBoost.ipynb
MoshaLangerak/Data_Challenge_2_Group_18
34325993376edb967723611d0a86ccef748932d1
[ "MIT" ]
null
null
null
Support notebooks/XGBoost.ipynb
MoshaLangerak/Data_Challenge_2_Group_18
34325993376edb967723611d0a86ccef748932d1
[ "MIT" ]
null
null
null
46.980115
15,948
0.47974
[ [ [ "#pip install xgboost", "_____no_output_____" ], [ "import xgboost as xgb\nfrom sklearn.metrics import mean_squared_error\nimport pandas as pd\nimport numpy as np\nfrom sklearn.model_selection import train_test_split", "C:\\Users\\20193999\\AppData\\Roaming\\Python\\Python37\\site-packages\\pandas\\compat\\_optional.py:138: UserWarning: Pandas requires version '2.7.0' or newer of 'numexpr' (version '2.6.9' currently installed).\n warnings.warn(msg, UserWarning)\n" ], [ "df = pd.read_csv('2019_data.csv')", "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\IPython\\core\\interactiveshell.py:3296: DtypeWarning: Columns (20) have mixed types.Specify dtype option on import or set low_memory=False.\n exec(code_obj, self.user_global_ns, self.user_ns)\n" ], [ "df.head(-5)", "_____no_output_____" ], [ "df.dropna(subset=['LSOA code', 'Month', 'Location', 'Crime type', 'Longitude', 'Latitude', 'Employment Domain Score',\n 'Income Domain Score', 'IDACI Score', 'IDAOPI Score', 'Police Strength',\n 'Police Funding', 'Population'], inplace=True)", "_____no_output_____" ], [ "df.head(-5)", "_____no_output_____" ], [ "df.sort_values(by=\"Month\")", "_____no_output_____" ], [ "df.columns", "_____no_output_____" ], [ "#convert from british ',' to america '.'\n\ndf['Employment Domain Score'] = df['Employment Domain Score'].str.replace('[A-Za-z]', '').str.replace(',', '.').astype(float)\ndf['Income Domain Score'] = df['Income Domain Score'].str.replace('[A-Za-z]', '').str.replace(',', '.').astype(float)\ndf['IDACI Score'] = df['IDACI Score'].str.replace('[A-Za-z]', '').str.replace(',', '.').astype(float)\ndf['IDAOPI Score'] = df['IDAOPI Score'].str.replace('[A-Za-z]', '').str.replace(',', '.').astype(float)\ndf['Police Strength'] = df['Police Strength'].str.replace('[A-Za-z]', '').str.replace(',', '.').astype(float)\n\n\n", "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:3: FutureWarning: The default value of regex will change from True to False in a future version.\n This is separate from the ipykernel package so we can avoid doing imports until\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:4: FutureWarning: The default value of regex will change from True to False in a future version.\n after removing the cwd from sys.path.\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:5: FutureWarning: The default value of regex will change from True to False in a future version.\n \"\"\"\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:6: FutureWarning: The default value of regex will change from True to False in a future version.\n \nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:7: FutureWarning: The default value of regex will change from True to False in a future version.\n import sys\n" ], [ "df.dtypes", "_____no_output_____" ], [ "df[\"Population\"] = df[\"Population\"].astype(str)", "_____no_output_____" ], [ "df['Population'] = df['Population'].str.replace('[A-Za-z]', '').str.replace('.', '').astype(float)", "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:1: FutureWarning: The default value of regex will change from True to False in a future version.\n \"\"\"Entry point for launching an IPython kernel.\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:1: FutureWarning: The default value of regex will change from True to False in a future version. 
In addition, single character regular expressions will *not* be treated as literal strings when regex=True.\n \"\"\"Entry point for launching an IPython kernel.\n" ], [ "data = df.groupby('Reported by').agg({'Crime type':'count', 'Employment Domain Score':'mean', 'Income Domain Score':'mean', 'IDACI Score':'mean', 'IDAOPI Score':'mean', 'Police Strength':'mean', 'Income Domain Score':'mean', 'Police Funding':'mean', 'Population':'mean' })", "_____no_output_____" ], [ "data.reset_index()", "_____no_output_____" ], [ "X, y = data.iloc[:,0:],data.iloc[:,0]", "_____no_output_____" ], [ "X", "_____no_output_____" ], [ "y", "_____no_output_____" ], [ "X, y = data.iloc[:,:-1],data.iloc[:,-1]", "_____no_output_____" ], [ "data_dmatrix = xgb.DMatrix(data=X,label=y)", "_____no_output_____" ], [ "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=111)", "_____no_output_____" ], [ "xg_reg = xgb.XGBRegressor(objective ='reg:linear', colsample_bytree = 0.3, learning_rate = 0.1,\n max_depth = 5, alpha = 10, n_estimators = 10)\nxg_reg.fit(X_train,y_train)\npreds = xg_reg.predict(X_test)", "[00:07:46] WARNING: C:/Users/Administrator/workspace/xgboost-win64_release_1.5.1/src/objective/regression_obj.cu:188: reg:linear is now deprecated in favor of reg:squarederror.\n" ], [ "rmse = np.sqrt(mean_squared_error(y_test, preds))\nprint(\"RMSE: %f\" % (rmse))", "RMSE: 91516.171710\n" ] ], [ [ "## K-FOld cross validation\n", "_____no_output_____" ] ], [ [ "params = {\"objective\":\"reg:linear\",'colsample_bytree': 0.3,'learning_rate': 0.1,\n 'max_depth': 10, 'alpha': 10}\n\ncv_results = xgb.cv(dtrain=data_dmatrix, params=params, nfold=3,\n num_boost_round=50,early_stopping_rounds=10,metrics=\"rmse\", as_pandas=True, seed=123)", "[00:07:46] WARNING: C:/Users/Administrator/workspace/xgboost-win64_release_1.5.1/src/objective/regression_obj.cu:188: reg:linear is now deprecated in favor of reg:squarederror.\n[00:07:46] WARNING: C:/Users/Administrator/workspace/xgboost-win64_release_1.5.1/src/objective/regression_obj.cu:188: reg:linear is now deprecated in favor of reg:squarederror.\n[00:07:46] WARNING: C:/Users/Administrator/workspace/xgboost-win64_release_1.5.1/src/objective/regression_obj.cu:188: reg:linear is now deprecated in favor of reg:squarederror.\n" ], [ "cv_results.head()", "_____no_output_____" ], [ "print((cv_results[\"test-rmse-mean\"]).tail(1))", "22 68591.559896\nName: test-rmse-mean, dtype: float64\n" ], [ "xg_reg = xgb.train(params=params, dtrain=data_dmatrix, num_boost_round=10)", "[00:07:47] WARNING: C:/Users/Administrator/workspace/xgboost-win64_release_1.5.1/src/objective/regression_obj.cu:188: reg:linear is now deprecated in favor of reg:squarederror.\n" ], [ "import matplotlib.pyplot as plt\n\nxgb.plot_tree(xg_reg,num_trees=0)\nplt.rcParams['figure.figsize'] = [50, 10]\nplt.show()", "_____no_output_____" ], [ "xgb.plot_importance(xg_reg)\nplt.rcParams['figure.figsize'] = [5, 5]\nplt.savefig(\"feature_importance.png\")\nplt.show()\n", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
4a8753ceb1f01db5b6217be7e70f51f4e880b412
154,009
ipynb
Jupyter Notebook
Exploratory_Data_Analysis/Global_population_and_covid_analysis.ipynb
Elysian01/ML-Projects
c90427b2e63791efbb3743714dec22ad19507d28
[ "MIT" ]
2
2020-12-10T06:25:17.000Z
2021-03-24T02:31:01.000Z
Exploratory_Data_Analysis/Global_population_and_covid_analysis.ipynb
HabibMrad/ML-Projects
c90427b2e63791efbb3743714dec22ad19507d28
[ "MIT" ]
null
null
null
Exploratory_Data_Analysis/Global_population_and_covid_analysis.ipynb
HabibMrad/ML-Projects
c90427b2e63791efbb3743714dec22ad19507d28
[ "MIT" ]
3
2021-04-19T18:15:38.000Z
2021-11-29T15:12:46.000Z
35.708092
331
0.337175
[ [ [ "# Jovian Commit Essentials\n# Please retain and execute this cell without modifying the contents for `jovian.commit` to work\n!pip install jovian --upgrade -q\nimport jovian\njovian.set_project('pandas-practice-assignment')\njovian.set_colab_id('1EMzM1GAuekn6b3mjbgjC83UH-2XgQHAe')", "_____no_output_____" ] ], [ [ "# Assignment 3 - Pandas Data Analysis Practice\n\n*This assignment is a part of the course [\"Data Analysis with Python: Zero to Pandas\"](https://jovian.ml/learn/data-analysis-with-python-zero-to-pandas)*\n\nIn this assignment, you'll get to practice some of the concepts and skills covered this tutorial: https://jovian.ml/aakashns/python-pandas-data-analysis\n\nAs you go through this notebook, you will find a **???** in certain places. To complete this assignment, you must replace all the **???** with appropriate values, expressions or statements to ensure that the notebook runs properly end-to-end. \n\nSome things to keep in mind:\n\n* Make sure to run all the code cells, otherwise you may get errors like `NameError` for undefined variables.\n* Do not change variable names, delete cells or disturb other existing code. It may cause problems during evaluation.\n* In some cases, you may need to add some code cells or new statements before or after the line of code containing the **???**. \n* Since you'll be using a temporary online service for code execution, save your work by running `jovian.commit` at regular intervals.\n* Questions marked **(Optional)** will not be considered for evaluation, and can be skipped. They are for your learning.\n\nYou can make submissions on this page: https://jovian.ml/learn/data-analysis-with-python-zero-to-pandas/assignment/assignment-3-pandas-practice\n\nIf you are stuck, you can ask for help on the community forum: https://jovian.ml/forum/t/assignment-3-pandas-practice/11225/3 . You can get help with errors or ask for hints, describe your approach in simple words, link to documentation, but **please don't ask for or share the full working answer code** on the forum.\n\n\n## How to run the code and save your work\n\nThe recommended way to run this notebook is to click the \"Run\" button at the top of this page, and select \"Run on Binder\". This will run the notebook on [mybinder.org](https://mybinder.org), a free online service for running Jupyter notebooks. \n\nBefore staring the assignment, let's save a snapshot of the assignment to your Jovian.ml profile, so that you can access it later, and continue your work.", "_____no_output_____" ] ], [ [ "import jovian", "_____no_output_____" ], [ "jovian.commit(project='pandas-practice-assignment', environment=None)", "[jovian] Detected Colab notebook...\u001b[0m\n[jovian] Uploading colab notebook to Jovian...\u001b[0m\n[jovian] Committed successfully! 
https://jovian.ai/elysian01/pandas-practice-assignment\u001b[0m\n" ], [ "# Run the next line to install Pandas\n!pip install pandas --upgrade", "Requirement already up-to-date: pandas in /usr/local/lib/python3.7/dist-packages (1.2.3)\nRequirement already satisfied, skipping upgrade: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/dist-packages (from pandas) (2.8.1)\nRequirement already satisfied, skipping upgrade: pytz>=2017.3 in /usr/local/lib/python3.7/dist-packages (from pandas) (2018.9)\nRequirement already satisfied, skipping upgrade: numpy>=1.16.5 in /usr/local/lib/python3.7/dist-packages (from pandas) (1.19.5)\nRequirement already satisfied, skipping upgrade: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.7.3->pandas) (1.15.0)\n" ], [ "import pandas as pd", "_____no_output_____" ] ], [ [ "In this assignment, we're going to analyze an operate on data from a CSV file. Let's begin by downloading the CSV file.", "_____no_output_____" ] ], [ [ "from urllib.request import urlretrieve\n\nurlretrieve('https://hub.jovian.ml/wp-content/uploads/2020/09/countries.csv', \n 'countries.csv')", "_____no_output_____" ] ], [ [ "Let's load the data from the CSV file into a Pandas data frame.", "_____no_output_____" ] ], [ [ "countries_df = pd.read_csv('countries.csv')", "_____no_output_____" ], [ "countries_df", "_____no_output_____" ] ], [ [ "**Q: How many countries does the dataframe contain?**\n\nHint: Use the `.shape` method.", "_____no_output_____" ] ], [ [ "num_countries = countries_df.shape[0]", "_____no_output_____" ], [ "print('There are {} countries in the dataset'.format(num_countries))", "There are 210 countries in the dataset\n" ], [ "jovian.commit(project='pandas-practice-assignment', environment=None)", "[jovian] Detected Colab notebook...\u001b[0m\n[jovian] Uploading colab notebook to Jovian...\u001b[0m\n[jovian] Committed successfully! https://jovian.ai/elysian01/pandas-practice-assignment\u001b[0m\n" ] ], [ [ "**Q: Retrieve a list of continents from the dataframe?**\n\n*Hint: Use the `.unique` method of a series.*", "_____no_output_____" ] ], [ [ "continents = countries_df[\"continent\"].unique()", "_____no_output_____" ], [ "continents", "_____no_output_____" ], [ "jovian.commit(project='pandas-practice-assignment', environment=None)", "[jovian] Detected Colab notebook...\u001b[0m\n[jovian] Uploading colab notebook to Jovian...\u001b[0m\n[jovian] Committed successfully! https://jovian.ai/elysian01/pandas-practice-assignment\u001b[0m\n" ] ], [ [ "**Q: What is the total population of all the countries listed in this dataset?**", "_____no_output_____" ] ], [ [ "total_population = countries_df[\"population\"].sum()", "_____no_output_____" ], [ "print('The total population is {}.'.format(int(total_population)))", "The total population is 7757980095.\n" ], [ "jovian.commit(project='pandas-practice-assignment', environment=None)", "[jovian] Detected Colab notebook...\u001b[0m\n[jovian] Uploading colab notebook to Jovian...\u001b[0m\n[jovian] Committed successfully! 
https://jovian.ai/elysian01/pandas-practice-assignment\u001b[0m\n" ] ], [ [ "**Q: (Optional) What is the overall life expectancy across in the world?**\n\n*Hint: You'll need to take a weighted average of life expectancy using populations as weights.*", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "jovian.commit(project='pandas-practice-assignment', environment=None)", "_____no_output_____" ] ], [ [ "**Q: Create a dataframe containing 10 countries with the highest population.**\n\n*Hint: Chain the `sort_values` and `head` methods.*", "_____no_output_____" ] ], [ [ "most_populous_df = countries_df.sort_values(\"population\", ascending=False)", "_____no_output_____" ], [ "most_populous_df", "_____no_output_____" ], [ "jovian.commit(project='pandas-practice-assignment', environment=None)", "[jovian] Detected Colab notebook...\u001b[0m\n[jovian] Uploading colab notebook to Jovian...\u001b[0m\n[jovian] Committed successfully! https://jovian.ai/elysian01/pandas-practice-assignment\u001b[0m\n" ] ], [ [ "**Q: Add a new column in `countries_df` to record the overall GDP per country (product of population & per capita GDP).**\n\n", "_____no_output_____" ] ], [ [ "countries_df['gdp'] = countries_df[\"population\"]*countries_df[\"gdp_per_capita\"]", "_____no_output_____" ], [ "countries_df", "_____no_output_____" ], [ "jovian.commit(project='pandas-practice-assignment', environment=None)", "[jovian] Detected Colab notebook...\u001b[0m\n[jovian] Uploading colab notebook to Jovian...\u001b[0m\n[jovian] Committed successfully! https://jovian.ai/elysian01/pandas-practice-assignment\u001b[0m\n" ] ], [ [ "**Q: (Optional) Create a dataframe containing 10 countries with the lowest GDP per capita, among the counties with population greater than 100 million.**", "_____no_output_____" ] ], [ [ "lowest_gdp_df = countries_df[countries_df[\"population\"] > 100000000].sort_values(\"gdp_per_capita\").head(10).reset_index()", "_____no_output_____" ], [ "lowest_gdp_df", "_____no_output_____" ], [ "jovian.commit(project='pandas-practice-assignment', environment=None)", "[jovian] Detected Colab notebook...\u001b[0m\n[jovian] Uploading colab notebook to Jovian...\u001b[0m\n[jovian] Committed successfully! https://jovian.ai/elysian01/pandas-practice-assignment\u001b[0m\n" ] ], [ [ "**Q: Create a data frame that counts the number countries in each continent?**\n\n*Hint: Use `groupby`, select the `location` column and aggregate using `count`.*", "_____no_output_____" ] ], [ [ "country_counts_df = countries_df.groupby(\"continent\")[\"location\"].count()", "_____no_output_____" ], [ "country_counts_df", "_____no_output_____" ], [ "jovian.commit(project='pandas-practice-assignment', environment=None)", "[jovian] Detected Colab notebook...\u001b[0m\n[jovian] Uploading colab notebook to Jovian...\u001b[0m\n[jovian] Committed successfully! https://jovian.ai/elysian01/pandas-practice-assignment\u001b[0m\n" ] ], [ [ "**Q: Create a data frame showing the total population of each continent.**\n\n*Hint: Use `groupby`, select the population column and aggregate using `sum`.*", "_____no_output_____" ] ], [ [ "continent_populations_df = countries_df.groupby(\"continent\")[\"population\"].sum()", "_____no_output_____" ], [ "continent_populations_df", "_____no_output_____" ], [ "jovian.commit(project='pandas-practice-assignment', environment=None)", "[jovian] Detected Colab notebook...\u001b[0m\n[jovian] Uploading colab notebook to Jovian...\u001b[0m\n[jovian] Committed successfully! 
https://jovian.ai/elysian01/pandas-practice-assignment\u001b[0m\n" ] ], [ [ "Let's download another CSV file containing overall Covid-19 stats for various countries, and read the data into another Pandas data frame.", "_____no_output_____" ] ], [ [ "urlretrieve('https://hub.jovian.ml/wp-content/uploads/2020/09/covid-countries-data.csv', \n 'covid-countries-data.csv')", "_____no_output_____" ], [ "covid_data_df = pd.read_csv('covid-countries-data.csv')", "_____no_output_____" ], [ "covid_data_df", "_____no_output_____" ] ], [ [ "**Q: Count the number of countries for which the `total_tests` data is missing.**\n\n*Hint: Use the `.isna` method.*", "_____no_output_____" ] ], [ [ "total_tests_missing = covid_data_df[covid_data_df[\"total_tests\"].isna()][\"location\"].count()", "_____no_output_____" ], [ "print(\"The data for total tests is missing for {} countries.\".format(int(total_tests_missing)))", "The data for total tests is missing for 122 countries.\n" ], [ "jovian.commit(project='pandas-practice-assignment', environment=None)", "[jovian] Detected Colab notebook...\u001b[0m\n[jovian] Uploading colab notebook to Jovian...\u001b[0m\n[jovian] Committed successfully! https://jovian.ai/elysian01/pandas-practice-assignment\u001b[0m\n" ] ], [ [ "Let's merge the two data frames, and compute some more metrics.\n\n**Q: Merge `countries_df` with `covid_data_df` on the `location` column.**\n\n*Hint: Use the `.merge` method on `countries_df`.*", "_____no_output_____" ] ], [ [ "combined_df = countries_df.merge(covid_data_df, on=\"location\")", "_____no_output_____" ], [ "combined_df", "_____no_output_____" ], [ "jovian.commit(project='pandas-practice-assignment', environment=None)", "[jovian] Detected Colab notebook...\u001b[0m\n[jovian] Uploading colab notebook to Jovian...\u001b[0m\n[jovian] Committed successfully! https://jovian.ai/elysian01/pandas-practice-assignment\u001b[0m\n" ] ], [ [ "**Q: Add columns `tests_per_million`, `cases_per_million` and `deaths_per_million` into `combined_df`.**", "_____no_output_____" ] ], [ [ "combined_df['tests_per_million'] = combined_df['total_tests'] * 1e6 / combined_df['population']", "_____no_output_____" ], [ "combined_df['cases_per_million'] = combined_df['total_cases'] * 1e6 / combined_df['population']", "_____no_output_____" ], [ "combined_df['deaths_per_million'] = combined_df['total_deaths'] * 1e6 / combined_df['population']", "_____no_output_____" ], [ "combined_df", "_____no_output_____" ], [ "jovian.commit(project='pandas-practice-assignment', environment=None)", "[jovian] Detected Colab notebook...\u001b[0m\n[jovian] Uploading colab notebook to Jovian...\u001b[0m\n[jovian] Committed successfully! https://jovian.ai/elysian01/pandas-practice-assignment\u001b[0m\n" ] ], [ [ "**Q: Create a dataframe with 10 countries that have the highest number of tests per million people.**", "_____no_output_____" ] ], [ [ "highest_tests_df = combined_df.sort_values(\"tests_per_million\",ascending=False).head(10).reset_index()", "_____no_output_____" ], [ "highest_tests_df", "_____no_output_____" ], [ "jovian.commit(project='pandas-practice-assignment', environment=None)", "[jovian] Detected Colab notebook...\u001b[0m\n[jovian] Uploading colab notebook to Jovian...\u001b[0m\n[jovian] Committed successfully! 
https://jovian.ai/elysian01/pandas-practice-assignment\u001b[0m\n" ] ], [ [ "**Q: Create a dataframe with 10 countries that have the highest number of positive cases per million people.**", "_____no_output_____" ] ], [ [ "highest_cases_df = combined_df.sort_values(\"cases_per_million\",ascending=False).head(10).reset_index()", "_____no_output_____" ], [ "highest_cases_df", "_____no_output_____" ], [ "jovian.commit(project='pandas-practice-assignment', environment=None)", "[jovian] Detected Colab notebook...\u001b[0m\n[jovian] Uploading colab notebook to Jovian...\u001b[0m\n[jovian] Committed successfully! https://jovian.ai/elysian01/pandas-practice-assignment\u001b[0m\n" ] ], [ [ "**Q: Create a dataframe with 10 countries that have the highest number of deaths per million people.**", "_____no_output_____" ] ], [ [ "highest_deaths_df = combined_df.sort_values(\"deaths_per_million\",ascending=False).head(10).reset_index()", "_____no_output_____" ], [ "highest_deaths_df", "_____no_output_____" ], [ "jovian.commit(project='pandas-practice-assignment', environment=None)", "[jovian] Detected Colab notebook...\u001b[0m\n[jovian] Uploading colab notebook to Jovian...\u001b[0m\n[jovian] Committed successfully! https://jovian.ai/elysian01/pandas-practice-assignment\u001b[0m\n" ] ], [ [ "**(Optional) Q: Count the number of countries that feature in both the lists of \"highest number of tests per million\" and \"highest number of cases per million\".**", "_____no_output_____" ] ], [ [ "highest_test_and_cases = highest_cases_df[\"location\"].isin(highest_tests_df[\"location\"]).sum()\r\nprint(f\"Total number of Contries having both highest number of cases and covid tests are: {highest_test_and_cases}\")", "Total number of Contries having both highest number of cases and covid tests are: 2\n" ], [ "jovian.commit(project='pandas-practice-assignment', environment=None)", "[jovian] Detected Colab notebook...\u001b[0m\n[jovian] Uploading colab notebook to Jovian...\u001b[0m\n[jovian] Committed successfully! https://jovian.ai/elysian01/pandas-practice-assignment\u001b[0m\n" ] ], [ [ "**(Optional) Q: Count the number of countries that feature in both the lists \"20 countries with lowest GDP per capita\" and \"20 countries with the lowest number of hospital beds per thousand population\". Only consider countries with a population higher than 10 million while creating the list.**", "_____no_output_____" ] ], [ [ "lowest_gdp_df = countries_df[countries_df[\"population\"] > 10000000].sort_values(\"gdp_per_capita\").head(20).reset_index()\r\nlowest_gdp_df", "_____no_output_____" ], [ "lowest_hospital_df = countries_df[countries_df[\"population\"] > 10000000].sort_values(\"hospital_beds_per_thousand\").head(20).reset_index()\r\nlowest_hospital_df", "_____no_output_____" ], [ "# set(lowest_gdp_df['location']).intersection(set(lowest_hospital_df['location']))\r\ntotal_countries_having_low_gdp_and_bed = lowest_gdp_df['location'].isin(lowest_hospital_df['location']).sum()\r\nprint(f\"Countries having lowest gdp per capita and lowest hosiptal bed are: {total_countries_having_low_gdp_and_bed}\")", "Countries having lowest gdp per capita and lowest hosiptal bed are: 14\n" ], [ "import jovian", "_____no_output_____" ], [ "jovian.commit(project='pandas-practice-assignment', environment=None)", "[jovian] Detected Colab notebook...\u001b[0m\n[jovian] Uploading colab notebook to Jovian...\u001b[0m\n[jovian] Committed successfully! 
https://jovian.ai/elysian01/pandas-practice-assignment\u001b[0m\n" ] ], [ [ "## Submission \n\nCongratulations on making it this far! You've reached the end of this assignment, and you just completed your first real-world data analysis problem. It's time to record one final version of your notebook for submission.\n\nMake a submission here by filling the submission form: https://jovian.ml/learn/data-analysis-with-python-zero-to-pandas/assignment/assignment-3-pandas-practice\n\nAlso make sure to help others on the forum: https://jovian.ml/forum/t/assignment-3-pandas-practice/11225/2", "_____no_output_____" ] ], [ [ "jovian.submit(assignment=\"zero-to-pandas-a3\")", "[jovian] Detected Colab notebook...\u001b[0m\n[jovian] Uploading colab notebook to Jovian...\u001b[0m\n[jovian] Capturing environment..\u001b[0m\n[jovian] Committed successfully! https://jovian.ai/elysian01/pandas-practice-assignment\u001b[0m\n[jovian] Submitting assignment..\u001b[0m\n[jovian] Verify your submission at https://jovian.ai/learn/data-analysis-with-python-zero-to-pandas/assignment/assignment-3-pandas-practice\u001b[0m\n" ], [ "", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
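The record above exercises `sort_values`, `groupby`, and `merge` on Jovian's countries/Covid CSVs. Here is a minimal, self-contained sketch of the same pandas calls on invented numbers; the column names only mirror the notebook, none of the figures are from the assignment's data.

```python
import pandas as pd

# Tiny invented stand-ins for the countries / covid tables used in the record above.
countries = pd.DataFrame({
    "location": ["A-land", "B-land", "C-land"],
    "continent": ["Asia", "Asia", "Europe"],
    "population": [1_000_000, 2_000_000, 500_000],
    "gdp_per_capita": [5_000, 12_000, 30_000],
})
covid = pd.DataFrame({
    "location": ["A-land", "B-land", "C-land"],
    "total_tests": [20_000, None, 15_000],   # one missing value, as in the real table
})

# Derived column, groupby aggregate, and a merge on the shared key.
countries["gdp"] = countries["population"] * countries["gdp_per_capita"]
per_continent = countries.groupby("continent")["population"].sum()
combined = countries.merge(covid, on="location")
combined["tests_per_million"] = combined["total_tests"] * 1e6 / combined["population"]

print(per_continent)
print(combined.sort_values("tests_per_million", ascending=False).head(2))
print("missing total_tests:", combined["total_tests"].isna().sum())
```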
4a875a98bfb49f33dc86a5a33eeea39f9221eb7a
53,213
ipynb
Jupyter Notebook
notebooks/155_WG3_double-check-unconstrained-forcing.ipynb
IPCC-WG1/Chapter-7
235679fbd25e489827de605e1417ac3f27e6abab
[ "MIT" ]
11
2021-08-18T10:15:24.000Z
2021-08-23T19:15:34.000Z
notebooks/155_WG3_double-check-unconstrained-forcing.ipynb
IPCC-WG1/Chapter-7
235679fbd25e489827de605e1417ac3f27e6abab
[ "MIT" ]
null
null
null
notebooks/155_WG3_double-check-unconstrained-forcing.ipynb
IPCC-WG1/Chapter-7
235679fbd25e489827de605e1417ac3f27e6abab
[ "MIT" ]
4
2021-08-25T00:55:11.000Z
2022-01-08T12:21:29.000Z
397.11194
26,652
0.941387
[ [ [ "# Quick check that aerosol forcing matches assessment\n\nTheme Song: Lose Control<br>\nArtist: Ash<br>\nAlbum: 1977<br>\nReleased: 1996", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as pl\nimport numpy as np", "_____no_output_____" ], [ "F_dir = np.load('../data_output_large/fair-samples/F_ERFari_unconstrained.npy')\nF_ind = np.load('../data_output_large/fair-samples/F_ERFaci_unconstrained.npy')", "_____no_output_____" ], [ "F_dir.shape", "_____no_output_____" ], [ "pl.fill_between(np.arange(1750,2101), np.percentile(F_dir, 5, axis=1), np.percentile(F_dir, 95, axis=1))\npl.plot(np.arange(1750,2101), np.percentile(F_dir, 50, axis=1), color='k');\npl.plot(np.arange(2005,2015), -0.3*np.ones(10), color='r')\npl.grid()", "_____no_output_____" ], [ "pl.fill_between(np.arange(1750,2101), np.percentile(F_ind, 5, axis=1), np.percentile(F_ind, 95, axis=1))\npl.plot(np.arange(1750,2101), np.percentile(F_ind, 50, axis=1), color='k');\npl.plot(np.arange(2005,2015), -1.0*np.ones(10), color='r')\npl.grid()", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
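The record above draws 5th to 95th percentile bands of forcing ensembles loaded from `.npy` files. A minimal sketch of the same `np.percentile` plus `fill_between` pattern on a synthetic (year, sample) array; this is made-up data, not the fair-samples output the notebook actually loads.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic (n_years, n_samples) ensemble standing in for F_ERFaci_unconstrained.npy.
rng = np.random.default_rng(0)
years = np.arange(1750, 2101)
trend = np.linspace(0.0, -1.0, years.size)[:, None]
ensemble = trend + 0.2 * rng.standard_normal((years.size, 1000))

# Percentiles across the sample axis, exactly as the notebook does per year.
lo, mid, hi = np.percentile(ensemble, [5, 50, 95], axis=1)

plt.fill_between(years, lo, hi, alpha=0.4)
plt.plot(years, mid, color="k")
plt.plot(np.arange(2005, 2015), -1.0 * np.ones(10), color="r")  # assessed level drawn as in the notebook
plt.grid()
plt.show()
```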
4a875c51b059355acfe863b2732c2145fdafd513
145,453
ipynb
Jupyter Notebook
Plot.ipynb
iphysresearch/S_Dbw_validity_index
928b23fdcb75ca7a20140c2daa918b5702490a56
[ "MIT" ]
18
2018-04-10T14:02:25.000Z
2021-12-29T12:42:59.000Z
Plot.ipynb
iphysresearch/S_Dbw_validity_index
928b23fdcb75ca7a20140c2daa918b5702490a56
[ "MIT" ]
1
2020-03-08T07:04:06.000Z
2021-12-12T06:37:32.000Z
Plot.ipynb
iphysresearch/S_Dbw_validity_index
928b23fdcb75ca7a20140c2daa918b5702490a56
[ "MIT" ]
8
2018-08-09T19:27:21.000Z
2021-06-17T13:27:28.000Z
769.592593
73,576
0.950919
[ [ [ "import numpy as np\nimport S_Dbw as sdbw\nfrom sklearn.cluster import KMeans\nfrom sklearn.datasets.samples_generator import make_blobs\nfrom sklearn.metrics.pairwise import pairwise_distances_argmin\n\nnp.random.seed(0)\n\nS_Dbw_result = []\nbatch_size = 45\ncenters = [[1, 1], [-1, -1], [1, -1]]\ncluster_std=[0.7,0.3,1.2]\nn_clusters = len(centers)\nX1, _ = make_blobs(n_samples=3000, centers=centers, cluster_std=cluster_std[0])\nX2, _ = make_blobs(n_samples=3000, centers=centers, cluster_std=cluster_std[1])\nX3, _ = make_blobs(n_samples=3000, centers=centers, cluster_std=cluster_std[2])\n\nimport matplotlib.pyplot as plt\nfig = plt.figure(figsize=(9, 3))\nfig.subplots_adjust(left=0.02, right=0.98, bottom=0.08, top=0.9)\ncolors = ['#4EACC5', '#FF9C34', '#4E9A06']\n\nfor item, X in enumerate(list([X1, X2, X3])):\n k_means = KMeans(init='k-means++', n_clusters=3, n_init=10)\n k_means.fit(X)\n\n k_means_cluster_centers = k_means.cluster_centers_\n k_means_labels = pairwise_distances_argmin(X, k_means_cluster_centers)\n\n KS = sdbw.S_Dbw(X, k_means_labels, k_means_cluster_centers)\n S_Dbw_result.append(KS.S_Dbw_result())\n \n ax = fig.add_subplot(1,3,item+1)\n for k, col in zip(range(n_clusters), colors):\n my_members = k_means_labels == k\n cluster_center = k_means_cluster_centers[k]\n ax.plot(X[my_members, 0], X[my_members, 1], 'w',\n markerfacecolor=col, marker='.')\n ax.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col,\n markeredgecolor='k', markersize=6)\n ax.set_title('S_Dbw: %.3f' %(S_Dbw_result[item]))\n ax.set_ylim((-4,4))\n ax.set_xlim((-4,4))\n plt.text(-3.5, 1.8, 'cluster_std: %f' %(cluster_std[item]))\nplt.savefig('./pic1.png', dpi=150)", "_____no_output_____" ], [ "np.random.seed(0)\n\nS_Dbw_result = []\nbatch_size = 45\ncenters = [[[1, 1], [-1, -1], [1, -1]],\n [[0.8, 0.8], [-0.8, -0.8], [0.8, -0.8]],\n [[1.2, 1.2], [-1.2, -1.2], [1.2, -1.2]]]\nn_clusters = len(centers)\nX1, _ = make_blobs(n_samples=3000, centers=centers[0], cluster_std=0.7)\nX2, _ = make_blobs(n_samples=3000, centers=centers[1], cluster_std=0.7)\nX3, _ = make_blobs(n_samples=3000, centers=centers[2], cluster_std=0.7)\n\nimport matplotlib.pyplot as plt\nfig = plt.figure(figsize=(8, 3))\nfig.subplots_adjust(left=0.02, right=0.98, bottom=0.2, top=0.9)\ncolors = ['#4EACC5', '#FF9C34', '#4E9A06']\n\nfor item, X in enumerate(list([X1, X2, X3])):\n k_means = KMeans(init='k-means++', n_clusters=3, n_init=10)\n k_means.fit(X)\n\n k_means_cluster_centers = k_means.cluster_centers_\n k_means_labels = pairwise_distances_argmin(X, k_means_cluster_centers)\n\n KS = sdbw.S_Dbw(X, k_means_labels, k_means_cluster_centers)\n S_Dbw_result.append(KS.S_Dbw_result())\n \n ax = fig.add_subplot(1,3,item+1)\n for k, col in zip(range(n_clusters), colors):\n my_members = k_means_labels == k\n cluster_center = k_means_cluster_centers[k]\n ax.plot(X[my_members, 0], X[my_members, 1], 'w',\n markerfacecolor=col, marker='.')\n ax.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col,\n markeredgecolor='k', markersize=6)\n ax.set_title('S_Dbw: %.3f ' %(S_Dbw_result[item]))\n# ax.set_xticks(())\n# ax.set_yticks(())\n ax.set_ylim((-4,4))\n ax.set_xlim((-4,4))\n ax.set_xlabel('centers: \\n%s' %(centers[item]))\nplt.savefig('./pic2.png', dpi=150)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code" ] ]
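The record above fits a single KMeans model and scores it with the repository's local `S_Dbw` class. Below is a small sketch of using that same interface to choose the number of clusters, assuming the local `S_Dbw` module with the constructor shown in the record; a lower S_Dbw value indicates a better partition. The `sklearn.datasets.samples_generator` import used in the record has been removed in newer scikit-learn releases, so the sketch imports `make_blobs` from `sklearn.datasets` directly.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

import S_Dbw as sdbw  # local module from this repository, same interface as in the record above

X, _ = make_blobs(n_samples=1500, centers=3, cluster_std=0.7, random_state=0)

scores = {}
for k in range(2, 7):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    scores[k] = sdbw.S_Dbw(X, km.labels_, km.cluster_centers_).S_Dbw_result()

best_k = min(scores, key=scores.get)  # smallest S_Dbw wins
print(scores, "-> chosen k:", best_k)
```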
4a8766c3a418c5347c6eb8adf1d89a682b860b9c
189,231
ipynb
Jupyter Notebook
Curso Pandas/Criando Agrupamentos.ipynb
edkiriyama/Projetos-Python
d2dce324965737ce02ecef3efdd53edb60b31eb9
[ "MIT" ]
null
null
null
Curso Pandas/Criando Agrupamentos.ipynb
edkiriyama/Projetos-Python
d2dce324965737ce02ecef3efdd53edb60b31eb9
[ "MIT" ]
null
null
null
Curso Pandas/Criando Agrupamentos.ipynb
edkiriyama/Projetos-Python
d2dce324965737ce02ecef3efdd53edb60b31eb9
[ "MIT" ]
null
null
null
121.379731
112,600
0.785183
[ [ [ "# Relatório de Análise VII", "_____no_output_____" ], [ "## Criando Argumentos", "_____no_output_____" ] ], [ [ "import pandas as pd", "_____no_output_____" ], [ "dados = pd.read_csv('dados/aluguel_residencial.csv', sep = ';')", "_____no_output_____" ], [ "dados.head(10)", "_____no_output_____" ] ], [ [ "##### https://pandas.pydata.org/pandas-docs/stable/reference/frame.html", "_____no_output_____" ] ], [ [ "dados['Valor'].mean()# Média Geral", "_____no_output_____" ], [ "dados.Bairro.unique()", "_____no_output_____" ], [ "bairros = ['Copacabana', 'Jardim Botânico', 'Centro', 'Higienópolis',\n 'Cachambi', 'Barra da Tijuca', 'Ramos', 'Grajaú',\n 'Lins de Vasconcelos', 'Taquara', 'Freguesia (Jacarepaguá)',\n 'Tijuca', 'Olaria', 'Ipanema', 'Campo Grande', 'Botafogo',\n 'Recreio dos Bandeirantes', 'Leblon', 'Jardim Oceânico', 'Humaitá',\n 'Península', 'Méier', 'Vargem Pequena', 'Maracanã', 'Jacarepaguá',\n 'São Conrado', 'Vila Valqueire', 'Gávea', 'Cosme Velho',\n 'Bonsucesso', 'Todos os Santos', 'Laranjeiras', 'Itanhangá',\n 'Flamengo', 'Piedade', 'Lagoa', 'Catete', 'Jardim Carioca',\n 'Benfica', 'Glória', 'Praça Seca', 'Vila Isabel', 'Engenho Novo',\n 'Engenho de Dentro', 'Pilares', 'Água Santa', 'São Cristóvão',\n 'Ilha do Governador', 'Jardim Sulacap', 'Oswaldo Cruz',\n 'Vila da Penha', 'Anil', 'Vargem Grande', 'Tanque', 'Vaz Lobo',\n 'Madureira', 'São Francisco Xavier', 'Pechincha', 'Leme', 'Irajá',\n 'Quintino Bocaiúva', 'Urca', 'Penha', 'Gardênia Azul',\n 'Rio Comprido', 'Andaraí', 'Santa Teresa', 'Inhaúma',\n 'Marechal Hermes', 'Curicica', 'Santíssimo', 'Moneró', 'Camorim',\n 'Cascadura', 'Praia da Bandeira', 'Saúde', 'Joá', 'Realengo',\n 'Fátima', 'Inhoaíba', 'Rocha', 'Jardim Guanabara', 'Jabour',\n 'Braz de Pina', 'Praça da Bandeira', 'Vila Kosmos', 'Vista Alegre',\n 'Encantado', 'Campinho', 'Guaratiba', 'Riachuelo', 'Bangu', 'Lapa',\n 'Catumbi', 'Penha Circular', 'Abolição', 'Tomás Coelho', 'Colégio',\n 'Pavuna', 'Santa Cruz', 'Alto da Boa Vista', 'Cidade Nova',\n 'Bento Ribeiro', 'Estácio', 'Jardim América', 'Cordovil', 'Caju',\n 'Pedra de Guaratiba', 'Padre Miguel', 'Paciência', 'Del Castilho',\n 'Arpoador', 'Sampaio', 'Anchieta', 'Icaraí', 'Senador Vasconcelos',\n 'Rocha Miranda', 'Gamboa', 'Maria da Graça', 'Barra de Guaratiba',\n 'Vicente de Carvalho', 'Paquetá', 'Largo do Machado',\n 'Parada de Lucas', 'Freguesia (Ilha do Governador)', 'Portuguesa',\n 'Guadalupe', 'Parque Anchieta', 'Turiaçu', 'Pitangueiras',\n 'Vila Militar', 'Vidigal', 'Senador Camará', 'Usina',\n 'Vigário Geral', 'Cosmos', 'Jacaré', 'Cocotá', 'Honório Gurgel',\n 'Engenho da Rainha', 'Cachamorra', 'Zumbi', 'Tauá', 'Santo Cristo',\n 'Ribeira', 'Magalhães Bastos', 'Cacuia', 'Bancários', 'Cavalcanti',\n 'Rio da Prata', 'Cidade Jardim', 'Coelho Neto']", "_____no_output_____" ], [ "selecao = dados['Bairro'].isin(bairros)\ndados = dados[selecao]", "_____no_output_____" ], [ "dados.Bairro.drop_duplicates()", "_____no_output_____" ], [ "dados.Bairro.drop_duplicates()", "_____no_output_____" ], [ "grupo_bairro = dados.groupby('Bairro') # Cria um grupo pegando como referencia o Bairro", "_____no_output_____" ], [ "type(grupo_bairro)", "_____no_output_____" ], [ "grupo_bairro.groups # Mostra os Bairros e o grupo de referencias/linhas onde ele aparece.", "_____no_output_____" ], [ "for bairro, data in grupo_bairro: #Faz um laço para mostrar como os atributos do bairro\n print(bairro)", "Abolição\nAlto da Boa Vista\nAnchieta\nAndaraí\nAnil\nArpoador\nBancários\nBangu\nBarra da Tijuca\nBarra de 
Guaratiba\nBenfica\nBento Ribeiro\nBonsucesso\nBotafogo\nBraz de Pina\nCachambi\nCachamorra\nCacuia\nCaju\nCamorim\nCampinho\nCampo Grande\nCascadura\nCatete\nCatumbi\nCavalcanti\nCentro\nCidade Jardim\nCidade Nova\nCocotá\nCoelho Neto\nColégio\nCopacabana\nCordovil\nCosme Velho\nCosmos\nCuricica\nDel Castilho\nEncantado\nEngenho Novo\nEngenho da Rainha\nEngenho de Dentro\nEstácio\nFlamengo\nFreguesia (Ilha do Governador)\nFreguesia (Jacarepaguá)\nFátima\nGamboa\nGardênia Azul\nGlória\nGrajaú\nGuadalupe\nGuaratiba\nGávea\nHigienópolis\nHonório Gurgel\nHumaitá\nIcaraí\nIlha do Governador\nInhaúma\nInhoaíba\nIpanema\nIrajá\nItanhangá\nJabour\nJacarepaguá\nJacaré\nJardim América\nJardim Botânico\nJardim Carioca\nJardim Guanabara\nJardim Oceânico\nJardim Sulacap\nJoá\nLagoa\nLapa\nLaranjeiras\nLargo do Machado\nLeblon\nLeme\nLins de Vasconcelos\nMadureira\nMagalhães Bastos\nMaracanã\nMarechal Hermes\nMaria da Graça\nMoneró\nMéier\nOlaria\nOswaldo Cruz\nPaciência\nPadre Miguel\nPaquetá\nParada de Lucas\nParque Anchieta\nPavuna\nPechincha\nPedra de Guaratiba\nPenha\nPenha Circular\nPenínsula\nPiedade\nPilares\nPitangueiras\nPortuguesa\nPraia da Bandeira\nPraça Seca\nPraça da Bandeira\nQuintino Bocaiúva\nRamos\nRealengo\nRecreio dos Bandeirantes\nRiachuelo\nRibeira\nRio Comprido\nRio da Prata\nRocha\nRocha Miranda\nSampaio\nSanta Cruz\nSanta Teresa\nSanto Cristo\nSantíssimo\nSaúde\nSenador Camará\nSenador Vasconcelos\nSão Conrado\nSão Cristóvão\nSão Francisco Xavier\nTanque\nTaquara\nTauá\nTijuca\nTodos os Santos\nTomás Coelho\nTuriaçu\nUrca\nUsina\nVargem Grande\nVargem Pequena\nVaz Lobo\nVicente de Carvalho\nVidigal\nVigário Geral\nVila Isabel\nVila Kosmos\nVila Militar\nVila Valqueire\nVila da Penha\nVista Alegre\nZumbi\nÁgua Santa\n" ], [ "for bairro, dados in grupo_bairro: #Faz um laço para mostrar que está sendo guardado um dataframe para cada bairro.\n print(type(dados))", "<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 
'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 
'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n<class 'pandas.core.frame.DataFrame'>\n" ], [ "for bairro, dados in grupo_bairro: \n print(f'{bairro} --> {dados.Valor.mean()}') # Calculou a media do valor para cada bairro. ", "Abolição --> 1195.3333333333333\nAlto da Boa Vista --> 3966.6666666666665\nAnchieta --> 875.0\nAndaraí --> 1464.7113402061855\nAnil --> 2048.8732394366198\nArpoador --> 12923.916666666666\nBancários --> 1825.0\nBangu --> 1016.0\nBarra da Tijuca --> 7069.552938130986\nBarra de Guaratiba --> 5550.0\nBenfica --> 996.0\nBento Ribeiro --> 1030.8695652173913\nBonsucesso --> 1225.9322033898304\nBotafogo --> 8791.828178694159\nBraz de Pina --> 1115.0\nCachambi --> 1157.1742424242425\nCachamorra --> 3000.0\nCacuia --> 916.6666666666666\nCaju --> 850.0\nCamorim --> 1735.2272727272727\nCampinho --> 1037.3333333333333\nCampo Grande --> 1267.71714922049\nCascadura --> 948.7096774193549\nCatete --> 2267.0625\nCatumbi --> 1112.5\nCavalcanti --> 595.0\nCentro --> 1254.7521865889212\nCidade Jardim --> 12000.0\nCidade Nova --> 1471.4285714285713\nCocotá --> 1883.3333333333333\nCoelho Neto --> 700.0\nColégio --> 885.0\nCopacabana --> 4126.677004538578\nCordovil --> 905.5555555555555\nCosme Velho --> 5343.548387096775\nCosmos --> 658.3333333333334\nCuricica --> 1514.5657894736842\nDel Castilho --> 1261.304347826087\nEncantado --> 1050.0\nEngenho Novo --> 1035.8252427184466\nEngenho da Rainha --> 883.5714285714286\nEngenho de Dentro --> 1166.3513513513512\nEstácio --> 1233.9130434782608\nFlamengo --> 4113.526610644258\nFreguesia (Ilha do Governador) --> 1720.0\nFreguesia (Jacarepaguá) --> 4966.666666666667\nFátima --> 956.6666666666666\nGamboa --> 1116.6666666666667\nGardênia Azul --> 761.1111111111111\nGlória --> 2467.2897196261683\nGrajaú --> 2038.6206896551723\nGuadalupe --> 700.0\nGuaratiba --> 1232.258064516129\nGávea --> 6563.688372093024\nHigienópolis --> 1006.25\nHonório Gurgel --> 628.5714285714286\nHumaitá --> 3802.3101265822784\nIcaraí --> 2393.75\nIlha do Governador --> 1728.0\nInhaúma --> 821.4285714285714\nInhoaíba --> 725.0\nIpanema --> 9352.001133786847\nIrajá --> 1096.012658227848\nItanhangá --> 9341.269841269841\nJabour --> 723.3333333333334\nJacarepaguá --> 2046.9877551020409\nJacaré --> 1200.0\nJardim América --> 714.2857142857143\nJardim Botânico --> 8722.357414448668\nJardim Carioca --> 2211.1111111111113\nJardim Guanabara --> 2124.6875\nJardim Oceânico --> 7626.086956521739\nJardim Sulacap --> 1176.6666666666667\nJoá --> 16773.478260869564\nLagoa --> 8147.075819672131\nLapa --> 1488.7777777777778\nLaranjeiras --> 3555.2776470588237\nLargo do Machado --> 1880.0\nLeblon --> 8746.344992050874\nLeme --> 3944.759825327511\nLins de Vasconcelos --> 1086.1702127659576\nMadureira --> 1017.6595744680851\nMagalhães Bastos --> 640.0\nMaracanã --> 1945.4961240310076\nMarechal Hermes --> 844.4444444444445\nMaria da Graça --> 
1073.3333333333333\nMoneró --> 1900.0\nMéier --> 1355.8745098039215\nOlaria --> 1016.5625\nOswaldo Cruz --> 823.8888888888889\nPaciência --> 580.0\nPadre Miguel --> 786.25\nPaquetá --> 10550.0\nParada de Lucas --> 1062.5\nParque Anchieta --> 1287.5\nPavuna --> 648.9473684210526\nPechincha --> 1357.4257425742574\nPedra de Guaratiba --> 1082.1052631578948\nPenha --> 1023.3333333333334\nPenha Circular --> 1160.0\nPenínsula --> 7061.538461538462\nPiedade --> 956.5384615384615\nPilares --> 1077.5\nPitangueiras --> 2725.0\nPortuguesa --> 1272.7272727272727\nPraia da Bandeira --> 1300.0\nPraça Seca --> 998.4146341463414\nPraça da Bandeira --> 1358.723404255319\nQuintino Bocaiúva --> 924.6666666666666\nRamos --> 958.0\nRealengo --> 908.3333333333334\nRecreio dos Bandeirantes --> 3736.6130988477867\nRiachuelo --> 1143.9024390243903\nRibeira --> 1420.0\nRio Comprido --> 1610.240506329114\nRio da Prata --> 1300.0\nRocha --> 1086.842105263158\nRocha Miranda --> 877.0\nSampaio --> 937.5\nSanta Cruz --> 734.2857142857143\nSanta Teresa --> 2713.3333333333335\nSanto Cristo --> 1225.0\nSantíssimo --> 756.25\nSaúde --> 950.0\nSenador Camará --> 600.0\nSenador Vasconcelos --> 867.3529411764706\nSão Conrado --> 8780.88803088803\nSão Cristóvão --> 1391.71875\nSão Francisco Xavier --> 1237.2413793103449\nTanque --> 1335.7142857142858\nTaquara --> 1642.140350877193\nTauá --> 1340.0\nTijuca --> 2043.52\nTodos os Santos --> 1243.076923076923\nTomás Coelho --> 737.0\nTuriaçu --> 700.0\nUrca --> 7925.958904109589\nUsina --> 1520.0\nVargem Grande --> 3358.1666666666665\nVargem Pequena --> 2703.480769230769\nVaz Lobo --> 845.0\nVicente de Carvalho --> 1075.7142857142858\nVidigal --> 2600.0\nVigário Geral --> 970.0\nVila Isabel --> 1560.8333333333333\nVila Kosmos --> 1311.8181818181818\nVila Militar --> 500.0\nVila Valqueire --> 1769.5833333333333\nVila da Penha --> 1260.576923076923\nVista Alegre --> 1114.375\nZumbi --> 2150.0\nÁgua Santa --> 861.1111111111111\n" ], [ "grupo_bairro[['Valor', 'Condominio']].mean().round(2) # Maneira mais pratica em fazer o DataFrame.", "_____no_output_____" ] ], [ [ "## Estatística Descritiva", "_____no_output_____" ] ], [ [ "grupo_bairro['Valor'].describe().round(2).head(10) #Descrição geral de estatística", "_____no_output_____" ], [ "grupo_bairro['Valor'].aggregate(['min','max']).rename(columns = {'min':'Minimo', 'max':'Máximo'})\n#Agrega as estatísticas a sua escolha", "_____no_output_____" ], [ "%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.rc('figure', figsize = (20,10))\n", "_____no_output_____" ], [ "fig = grupo_bairro['Valor'].mean().plot.bar(color = 'blue')\nfig.set_ylabel('Valor do Aluguel')\nfig.set_title('Valor Médio do Aluguel por Bairro', {'fontsize':22})", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
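A minimal sketch of the `groupby` / `aggregate` pattern from the record above, on invented rent figures rather than the `aluguel_residencial.csv` file it reads.

```python
import pandas as pd

# Invented rental data; column names follow the notebook above.
dados = pd.DataFrame({
    "Bairro": ["Centro", "Centro", "Tijuca", "Tijuca", "Leblon"],
    "Valor": [1200.0, 1400.0, 2000.0, 2100.0, 8500.0],
    "Condominio": [300.0, 350.0, 600.0, 650.0, 2000.0],
})

grupo_bairro = dados.groupby("Bairro")

# Mean rent and fees per district, then a chosen pair of summary statistics.
print(grupo_bairro[["Valor", "Condominio"]].mean().round(2))
resumo = grupo_bairro["Valor"].aggregate(["min", "max"]).rename(columns={"min": "Minimo", "max": "Maximo"})
print(resumo)
```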
4a876a0b75c904dcd6f96efc821498b4cffb9413
89,672
ipynb
Jupyter Notebook
examples/gnuplot-magic.ipynb
has2k1/gnuplot_kernel
b03c4c057ec57fdf20f084f8ff030243f4a109ea
[ "BSD-3-Clause" ]
74
2016-02-01T01:22:01.000Z
2021-12-28T20:32:15.000Z
examples/gnuplot-magic.ipynb
webglider/gnuplot_kernel
b7e94cd8c3f6f735f7f9ab1778c798256f94bc63
[ "BSD-3-Clause" ]
30
2016-02-27T20:26:14.000Z
2021-10-01T22:19:35.000Z
examples/gnuplot-magic.ipynb
webglider/gnuplot_kernel
b7e94cd8c3f6f735f7f9ab1778c798256f94bc63
[ "BSD-3-Clause" ]
29
2016-01-30T01:57:58.000Z
2021-11-19T09:04:44.000Z
506.621469
32,904
0.948267
[ [ [ "Using Gnuplot in a different language kernel\n===========================================\n\nThis is a Python 3 notebook, but we can switch over to gnuplot\nby way of the gnuplot magic. Make sure you have looked at the\n[gnuplot-kernel notebook](gnuplot-kernel.ipynb).", "_____no_output_____" ], [ "We can do the usual Python stuff, but the key is the `%load_ext` magic\nthat we use to load `gnuplot_kernel` as an extension. This allows us\nto use the gnuplot magics.", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\n\n# inline plots for matplotlib\n%matplotlib inline\n\n# This loads the magics for gnuplot\n%load_ext gnuplot_kernel", "_____no_output_____" ] ], [ [ "Plot using matplotlib", "_____no_output_____" ] ], [ [ "np.random.seed(123)\n\nN = 50\nx = np.random.rand(N)\ny = np.random.rand(N)\ncolors = np.random.rand(N)\narea = np.pi * (15 * np.random.rand(N))**2 # 0 to 15 point radii\n\nplt.scatter(x, y, s=area, c=colors, alpha=0.5)\nplt.show()", "_____no_output_____" ] ], [ [ "Use the `%%gnuplot` cell magic for cells that contain gnuplot code.", "_____no_output_____" ] ], [ [ "%%gnuplot\nplot sin(x)", "_____no_output_____" ] ], [ [ "Use the `%gnuplot` line magic to change the inline plot options.", "_____no_output_____" ] ], [ [ "%gnuplot inline pngcairo size 600,400", "_____no_output_____" ], [ "%%gnuplot\nplot cos(x)", "_____no_output_____" ] ], [ [ "That is it.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
4a877d9bb769d07dbf8755a9833ef745d7a0b865
9,145
ipynb
Jupyter Notebook
jupyter/Generate_dave_xml_w_bleach.ipynb
shiwei23/STORM6
669067503ebd164b575ce529fcc4a9a3f576b3d7
[ "MIT" ]
1
2022-03-29T22:40:38.000Z
2022-03-29T22:40:38.000Z
jupyter/Generate_dave_xml_w_bleach.ipynb
shiwei23/STORM6
669067503ebd164b575ce529fcc4a9a3f576b3d7
[ "MIT" ]
null
null
null
jupyter/Generate_dave_xml_w_bleach.ipynb
shiwei23/STORM6
669067503ebd164b575ce529fcc4a9a3f576b3d7
[ "MIT" ]
1
2021-08-01T20:20:09.000Z
2021-08-01T20:20:09.000Z
36.875
114
0.57649
[ [ [ "from xml.etree import ElementTree\nfrom xml.dom import minidom\nfrom xml.etree.ElementTree import Element, SubElement, Comment, indent\n\ndef prettify(elem):\n \"\"\"Return a pretty-printed XML string for the Element.\n \"\"\"\n rough_string = ElementTree.tostring(elem, encoding=\"ISO-8859-1\")\n reparsed = minidom.parseString(rough_string)\n return reparsed.toprettyxml(indent=\"\\t\")", "_____no_output_____" ], [ "import numpy as np\nimport os\nvalve_ids = np.arange(2,4+1)\nhyb_ids = np.arange(34,36+1)\nreg_names = [f'GM{_i}' for _i in np.arange(1,3+1)]", "_____no_output_____" ], [ "source_folder = r'D:\\Shiwei\\20210706-P_Forebrain_CTP09_only'\ntarget_drive = r'\\\\KOLMOGOROV\\Chromatin_NAS_5'\nimaging_protocol = r'Zscan_750_647_488_s60_n200'\nbleach_protocol = r'Bleach_740_647_s5'\n", "_____no_output_____" ], [ "cmd_seq = Element('command_sequence')\n\nfor _vid, _hid, _rname in zip(valve_ids, hyb_ids, reg_names):\n # comments\n comment = Comment(f\"Hyb {_hid} for {_rname}\")\n cmd_seq.append(comment)\n # TCEP\n tcep = SubElement(cmd_seq, 'valve_protocol')\n tcep.text = \"Flow TCEP\"\n # flow adaptor\n adt = SubElement(cmd_seq, 'valve_protocol')\n adt.text = f\"Hybridize {_vid}\"\n # delay time\n adt_incubation = SubElement(cmd_seq, 'delay')\n adt_incubation.text = \"60000\"\n \n # change bleach directory\n change_dir = SubElement(cmd_seq, 'change_directory')\n change_dir.text = os.path.join(source_folder, f\"Bleach\") \n # wakeup\n wakeup = SubElement(cmd_seq, 'wakeup')\n wakeup.text = \"5000\"\n # bleach loop\n loop = SubElement(cmd_seq, 'loop', name='Position Loop Zscan', increment=\"name\")\n loop_item = SubElement(loop, 'item', name=bleach_protocol)\n loop_item.text = \" \"\n \n # wash\n wash = SubElement(cmd_seq, 'valve_protocol')\n wash.text = \"Short Wash\"\n # readouts\n readouts = SubElement(cmd_seq, 'valve_protocol')\n readouts.text = \"Flow Readouts\"\n # delay time\n adt_incubation = SubElement(cmd_seq, 'delay')\n adt_incubation.text = \"60000\"\n \n # change directory\n change_dir = SubElement(cmd_seq, 'change_directory')\n change_dir.text = os.path.join(source_folder, f\"H{_hid}{_rname.upper()}\")\n # wakeup\n wakeup = SubElement(cmd_seq, 'wakeup')\n wakeup.text = \"5000\"\n \n # hybridization loop\n loop = SubElement(cmd_seq, 'loop', name='Position Loop Zscan', increment=\"name\")\n loop_item = SubElement(loop, 'item', name=imaging_protocol)\n loop_item.text = \" \"\n # delay time\n delay = SubElement(cmd_seq, 'delay')\n delay.text = \"2000\"\n # copy folder\n copy_dir = SubElement(cmd_seq, 'copy_directory')\n source_dir = SubElement(copy_dir, 'source_path')\n source_dir.text = change_dir.text#cmd_seq.findall('change_directory')[-1].text\n target_dir = SubElement(copy_dir, 'target_path')\n target_dir.text = os.path.join(target_drive, \n os.path.basename(os.path.dirname(source_dir.text)), \n os.path.basename(source_dir.text))\n del_source = SubElement(copy_dir, 'delete_source')\n del_source.text = \"True\"\n # empty line\n indent(target_dir)\n", "_____no_output_____" ], [ "print( prettify(cmd_seq))", "<?xml version=\"1.0\" ?>\n<command_sequence>\n\t<!--Hyb 34 for GM1-->\n\t<valve_protocol>Flow TCEP</valve_protocol>\n\t<valve_protocol>Hybridize 2</valve_protocol>\n\t<delay>60000</delay>\n\t<change_directory>D:\\Shiwei\\20210706-P_Forebrain_CTP09_only\\Bleach</change_directory>\n\t<wakeup>5000</wakeup>\n\t<loop name=\"Position Loop Zscan\" increment=\"name\">\n\t\t<item name=\"Bleach_740_647_s5\"> </item>\n\t</loop>\n\t<valve_protocol>Short 
Wash</valve_protocol>\n\t<valve_protocol>Flow Readouts</valve_protocol>\n\t<delay>60000</delay>\n\t<change_directory>D:\\Shiwei\\20210706-P_Forebrain_CTP09_only\\H34GM1</change_directory>\n\t<wakeup>5000</wakeup>\n\t<loop name=\"Position Loop Zscan\" increment=\"name\">\n\t\t<item name=\"Zscan_750_647_488_s60_n200\"> </item>\n\t</loop>\n\t<delay>2000</delay>\n\t<copy_directory>\n\t\t<source_path>D:\\Shiwei\\20210706-P_Forebrain_CTP09_only\\H34GM1</source_path>\n\t\t<target_path>\\\\KOLMOGOROV\\Chromatin_NAS_5\\20210706-P_Forebrain_CTP09_only\\H34GM1</target_path>\n\t\t<delete_source>True</delete_source>\n\t</copy_directory>\n\t<!--Hyb 35 for GM2-->\n\t<valve_protocol>Flow TCEP</valve_protocol>\n\t<valve_protocol>Hybridize 3</valve_protocol>\n\t<delay>60000</delay>\n\t<change_directory>D:\\Shiwei\\20210706-P_Forebrain_CTP09_only\\Bleach</change_directory>\n\t<wakeup>5000</wakeup>\n\t<loop name=\"Position Loop Zscan\" increment=\"name\">\n\t\t<item name=\"Bleach_740_647_s5\"> </item>\n\t</loop>\n\t<valve_protocol>Short Wash</valve_protocol>\n\t<valve_protocol>Flow Readouts</valve_protocol>\n\t<delay>60000</delay>\n\t<change_directory>D:\\Shiwei\\20210706-P_Forebrain_CTP09_only\\H35GM2</change_directory>\n\t<wakeup>5000</wakeup>\n\t<loop name=\"Position Loop Zscan\" increment=\"name\">\n\t\t<item name=\"Zscan_750_647_488_s60_n200\"> </item>\n\t</loop>\n\t<delay>2000</delay>\n\t<copy_directory>\n\t\t<source_path>D:\\Shiwei\\20210706-P_Forebrain_CTP09_only\\H35GM2</source_path>\n\t\t<target_path>\\\\KOLMOGOROV\\Chromatin_NAS_5\\20210706-P_Forebrain_CTP09_only\\H35GM2</target_path>\n\t\t<delete_source>True</delete_source>\n\t</copy_directory>\n\t<!--Hyb 36 for GM3-->\n\t<valve_protocol>Flow TCEP</valve_protocol>\n\t<valve_protocol>Hybridize 4</valve_protocol>\n\t<delay>60000</delay>\n\t<change_directory>D:\\Shiwei\\20210706-P_Forebrain_CTP09_only\\Bleach</change_directory>\n\t<wakeup>5000</wakeup>\n\t<loop name=\"Position Loop Zscan\" increment=\"name\">\n\t\t<item name=\"Bleach_740_647_s5\"> </item>\n\t</loop>\n\t<valve_protocol>Short Wash</valve_protocol>\n\t<valve_protocol>Flow Readouts</valve_protocol>\n\t<delay>60000</delay>\n\t<change_directory>D:\\Shiwei\\20210706-P_Forebrain_CTP09_only\\H36GM3</change_directory>\n\t<wakeup>5000</wakeup>\n\t<loop name=\"Position Loop Zscan\" increment=\"name\">\n\t\t<item name=\"Zscan_750_647_488_s60_n200\"> </item>\n\t</loop>\n\t<delay>2000</delay>\n\t<copy_directory>\n\t\t<source_path>D:\\Shiwei\\20210706-P_Forebrain_CTP09_only\\H36GM3</source_path>\n\t\t<target_path>\\\\KOLMOGOROV\\Chromatin_NAS_5\\20210706-P_Forebrain_CTP09_only\\H36GM3</target_path>\n\t\t<delete_source>True</delete_source>\n\t</copy_directory>\n</command_sequence>\n\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
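A reduced sketch of the ElementTree pattern in the record above: one directory change plus one imaging loop, pretty-printed the same way as the notebook's `prettify` helper. The protocol and element names are copied from the record; the directory path is a placeholder, not a real acquisition folder.

```python
from xml.dom import minidom
from xml.etree.ElementTree import Element, SubElement, tostring

cmd_seq = Element("command_sequence")

change_dir = SubElement(cmd_seq, "change_directory")
change_dir.text = r"D:\placeholder\H01GM1"  # stand-in path

loop = SubElement(cmd_seq, "loop", name="Position Loop Zscan", increment="name")
item = SubElement(loop, "item", name="Zscan_750_647_488_s60_n200")
item.text = " "

# Same round-trip as prettify() in the record: serialize, reparse, indent.
pretty = minidom.parseString(tostring(cmd_seq, encoding="ISO-8859-1")).toprettyxml(indent="\t")
print(pretty)
```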
4a87999d09315f41b56a52a5b3c75ea1a525e157
135,598
ipynb
Jupyter Notebook
Final_Project_AP_STC510.ipynb
alexpetrosino/STC-510
32874cc5849a6c119c4545750858bc3bf1c4f79b
[ "CC0-1.0" ]
null
null
null
Final_Project_AP_STC510.ipynb
alexpetrosino/STC-510
32874cc5849a6c119c4545750858bc3bf1c4f79b
[ "CC0-1.0" ]
null
null
null
Final_Project_AP_STC510.ipynb
alexpetrosino/STC-510
32874cc5849a6c119c4545750858bc3bf1c4f79b
[ "CC0-1.0" ]
null
null
null
64.724582
11,818
0.610754
[ [ [ "!pip install praw\r\nimport praw\r\nfrom time import sleep\r\nimport datetime\r\nfrom datetime import timedelta", "Collecting praw\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/48/a8/a2e2d0750ee17c7e3d81e4695a0338ad0b3f231853b8c3fa339ff2d25c7c/praw-7.2.0-py3-none-any.whl (159kB)\n\r\u001b[K |██ | 10kB 15.1MB/s eta 0:00:01\r\u001b[K |████ | 20kB 21.0MB/s eta 0:00:01\r\u001b[K |██████▏ | 30kB 10.0MB/s eta 0:00:01\r\u001b[K |████████▏ | 40kB 10.1MB/s eta 0:00:01\r\u001b[K |██████████▎ | 51kB 7.9MB/s eta 0:00:01\r\u001b[K |████████████▎ | 61kB 7.8MB/s eta 0:00:01\r\u001b[K |██████████████▍ | 71kB 8.7MB/s eta 0:00:01\r\u001b[K |████████████████▍ | 81kB 9.5MB/s eta 0:00:01\r\u001b[K |██████████████████▌ | 92kB 8.0MB/s eta 0:00:01\r\u001b[K |████████████████████▌ | 102kB 8.3MB/s eta 0:00:01\r\u001b[K |██████████████████████▋ | 112kB 8.3MB/s eta 0:00:01\r\u001b[K |████████████████████████▋ | 122kB 8.3MB/s eta 0:00:01\r\u001b[K |██████████████████████████▊ | 133kB 8.3MB/s eta 0:00:01\r\u001b[K |████████████████████████████▊ | 143kB 8.3MB/s eta 0:00:01\r\u001b[K |██████████████████████████████▉ | 153kB 8.3MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 163kB 8.3MB/s \n\u001b[?25hCollecting prawcore<3,>=2\n Downloading https://files.pythonhosted.org/packages/7d/df/4a9106bea0d26689c4b309da20c926a01440ddaf60c09a5ae22684ebd35f/prawcore-2.0.0-py3-none-any.whl\nCollecting websocket-client>=0.54.0\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/4c/5f/f61b420143ed1c8dc69f9eaec5ff1ac36109d52c80de49d66e0c36c3dfdf/websocket_client-0.57.0-py2.py3-none-any.whl (200kB)\n\u001b[K |████████████████████████████████| 204kB 12.1MB/s \n\u001b[?25hCollecting update-checker>=0.18\n Downloading https://files.pythonhosted.org/packages/0c/ba/8dd7fa5f0b1c6a8ac62f8f57f7e794160c1f86f31c6d0fb00f582372a3e4/update_checker-0.18.0-py3-none-any.whl\nRequirement already satisfied: requests<3.0,>=2.6.0 in /usr/local/lib/python3.7/dist-packages (from prawcore<3,>=2->praw) (2.23.0)\nRequirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from websocket-client>=0.54.0->praw) (1.15.0)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3.0,>=2.6.0->prawcore<3,>=2->praw) (2.10)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3.0,>=2.6.0->prawcore<3,>=2->praw) (3.0.4)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3.0,>=2.6.0->prawcore<3,>=2->praw) (1.24.3)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3.0,>=2.6.0->prawcore<3,>=2->praw) (2020.12.5)\nInstalling collected packages: prawcore, websocket-client, update-checker, praw\nSuccessfully installed praw-7.2.0 prawcore-2.0.0 update-checker-0.18.0 websocket-client-0.57.0\n" ] ], [ [ "I am going to start out using the Reddit app, and pulling from the subreddit, Wallstreet bets. There has been a lot of news about this subreddit and the stock market in coordination with Gamestop, AMC and BB shares. I want to Analyze the lingo used in this forum and the different shares being discussed. 
My guesses would be GME being the most talked about stock, Lingo would have diamond hands being the most common, and DFV associated with GME over 60% of the time.\r\n\r\n", "_____no_output_____" ] ], [ [ "uname = 'Apetty914'\r\nupassword = 'NOpe noep nopppeee!'\r\napp_id = 'BpXCT5yB3PXJ3A'\r\napp_secret = 'gkCFVhnara4PjBRGnTsNz4JR3N-GPw'", "_____no_output_____" ], [ "#This is just identifying the app made in reddit\r\nreddit = praw.Reddit(user_agent=\"Tendies (by /u/Apetty914)\",\r\n client_id=app_id, client_secret=app_secret,\r\n username=uname, password=upassword)", "_____no_output_____" ] ], [ [ "I have provided my identification and the app being used. Now I want to start pulling the searches I want. I am not going to collect comments, because I will be here all day and night. ", "_____no_output_____" ] ], [ [ "subreddit = reddit.subreddit('wallstreetbets')\r\nfor submission in subreddit.stream.submissions():\r\n print(submission.title)", "It appears that you are using PRAW in an asynchronous environment.\nIt is strongly recommended to use Async PRAW: https://asyncpraw.readthedocs.io.\nSee https://praw.readthedocs.io/en/latest/getting_started/multiple_instances.html#discord-bots-and-asynchronous-environments for more info.\n\nIt appears that you are using PRAW in an asynchronous environment.\nIt is strongly recommended to use Async PRAW: https://asyncpraw.readthedocs.io.\nSee https://praw.readthedocs.io/en/latest/getting_started/multiple_instances.html#discord-bots-and-asynchronous-environments for more info.\n\n" ], [ "#I am making the first list of the stocks.\r\ntopiclist1 = []\r\nkeywords = 'GME','AMC','BB',\r\nsubreddit = reddit.subreddit('wallstreetbets')\r\nresp = subreddit.search(keywords, limit=100)\r\n\r\nfor submission in resp:\r\n topiclist1.append(submission.title)\r\n sleep(2)\r\n print(submission.id,submission.title,submission.author.name)", "It appears that you are using PRAW in an asynchronous environment.\nIt is strongly recommended to use Async PRAW: https://asyncpraw.readthedocs.io.\nSee https://praw.readthedocs.io/en/latest/getting_started/multiple_instances.html#discord-bots-and-asynchronous-environments for more info.\n\n" ], [ "#now i am going to search for words that are the 'lingo' in the subreddit\r\ntopiclist2 = []\r\nkeywords = 'tendies','diamond hands', 'hold', 'paper hands'\r\nsubreddit = reddit.subreddit('wallstreetbets')\r\nresp = subreddit.search(keywords, limit=100)\r\n\r\nfor submission in resp:\r\n topiclist2.append(submission.title)\r\n sleep(2)\r\n print(submission.id,submission.title,submission.author.name)", "It appears that you are using PRAW in an asynchronous environment.\nIt is strongly recommended to use Async PRAW: https://asyncpraw.readthedocs.io.\nSee https://praw.readthedocs.io/en/latest/getting_started/multiple_instances.html#discord-bots-and-asynchronous-environments for more info.\n\n" ], [ "#Lastly im am going to look for the mention of DFV, usually how his username is abbreviated.\r\ntopiclist3 = []\r\nkeywords = 'DFV'\r\nsubreddit = reddit.subreddit('wallstreetbets')\r\nresp = subreddit.search(keywords, limit=100)\r\n\r\nfor submission in resp:\r\n topiclist3.append(submission.title)\r\n sleep(2)\r\n print(submission.id,submission.title,submission.author.name)", "It appears that you are using PRAW in an asynchronous environment.\nIt is strongly recommended to use Async PRAW: https://asyncpraw.readthedocs.io.\nSee 
https://praw.readthedocs.io/en/latest/getting_started/multiple_instances.html#discord-bots-and-asynchronous-environments for more info.\n\n" ] ], [ [ "Just checking to make sure a topic list has the posts saved.\r\n", "_____no_output_____" ] ], [ [ "len(topiclist2)", "_____no_output_____" ], [ "print(topiclist1)", "['Don’t be scared by the drop in $BB $GME etc. Hedge Funds trade after hours to scare y’all off... KEEP BUYING AND HOLDING 🚀🚀', \"BB - Now you see why Wall Street wants to keep this under $15. Let's make them pay out all these contracts tomorrow. PT: $30+\", 'YOU CAN STILL BUY GME & BB on WeBull, eTrade, fidelity, and others, post your platform of choice and keep those 💎🙌🏻 strong!', 'Lawmakers side with retail investors against Wall Street institutions !!!!!!! GME AMC BB NOK 🚀🚀🚀', '$GME, $BB, $NOK, $AMC Option Expiry Today - VERY CLOSE TO BEING HUGE', 'GME with AMC and BB overlay - Virtually Identical.', '$BB mic dropped about SolarWinds a few hours ago, bless 💎Papa Chen.💎 🚀🚀🚀🚀🚀👨\\u200d🚀', '$NOK $BB $BBBY $GME $AMC $EXPR', 'All the same game, NOK BB GME AMC Chart Correlation', 'A Friendly Reminder to BANG (BB, AMC, NOK, GME) Gang', 'Aussie here supporting the cause! GME and BB with another $10k ready for market open. 💎🙌 Strong together!', 'Zacks just upgraded BB. Price Target $29', 'BB workin its way to the moon 🚀🚀🚀', '$BANG (BB, AMC, NOK, GME) Options Volume Analysis', 'AMC, BB AND MOST IMPORTANTLY GME WILL BE BACK! HOLD THE LINE 💎🙌', 'The Next Wall Street Barrier: The weekend cool down ❄️ #KEEPSQUEEZING AMC GME NOK BB 🚀🌕💰💎🙌🏻', '$BB DD thread: Why this retard believes the fair market value for $BB is $45. Obligatory 🚀🚀🚀🚀🚀🚀🚀', '$BB gonna fly after this', 'Citadel has blocked all trading partners (Such as Robinhood) to trade GME, BB and others. As over 40% of all trades in the US market are via Citadel, they are artificially limiting the supply balance to reduce these stocks in their favour.', 'These were all the stocks that Robinhood blocked: $GME $KOSS $AMC $NOK $BB $BBBY $VIR $GNUS $AAL $NAKD', 'Thank God someone influential is genuinely on our side, btw do you know guys that his cat name is $BB 🚀🚀🚀🚀', '$BB ‘not aware’ as to why they are flying to Pluto 🚀', \"Daddy's DD of the week: $BB\", 'Unemployed with $80,000 in student loans and I am HOLDING! These hedge fund losers can pay for my degrees! DIAMOND HANDS!!! $GME $BB $AMC', 'I can no longer lurk, and want to boost the LACK OF MORALE HERE AS OF LATE. Down 58K so the natural response was to GET SOME MORE $GME BABY. HOLD YOU APES! diamond hands bb!', 'BB 💎✋🏽 WERE IN THIS TOGETHER BOYS, HOLD TO THE MOON! 💎💎💎🚀🚀🚀🚀🦍🦍🦍', 'Lucid Motors are looking for a QNX engineer. QNX = $BB', '$BB most secure system. Get your calls ready', \"BB - a long and factual look at Chen's hints and predictions during earnings calls\", 'AMC, GME, BB, PLTR. Started with 3k in December. Oops?', 'Calling all $BB (Big Boys) to the casino', \"DON'T COUNT $BB OUT!\", 'Have officially joined all the $BB retards', 'Comprehensive Guide about BB and how it shall take off in coming years', '$400k deep into $BB... let’s go 😤', 'DO NOT LISTEN TO THE BOTS. HOLD $GME AND BUY $BB. DO NOT LET THEM DIVIDE AND CONQUER. WE ARE GOING TO FUCKING MARS 🚀🚀🚀🚀🚀🚀🚀🚀🚀🚀', '$AMC $GME $BB ALL HAVE COMMON WITH THIS STOCK... $SNDL', 'People on Robinhood who own GME are most likely to also own BB and PLTR in their portfolio. 😂 🚀🚀🚀', \"BB waits... I'm just glad I'm here with all of you degenerates...\", 'BB/AMZN Speculation: Blackberry IVY for the automotive industry is just a cover. 
We are thinking too small.', 'WE ARE GOING TO THE MOON BOYS GME BB AND AMC FULL FUCKING SEND 💸🚀💸🚀🚀💸🚀😈🤩🤪🚀', 'Can we just have a GME BB AMC megathread', '$GME $BB and $AMC when asked if you should buy', '$BB isn’t recovering today. I can’t fucking average down on RH. I’m holding hoping for a large miracle next week and going to make a drink now. 🙌', 'Unrealized Loss Porn GME BB AMC', 'BANG (BB, AMC, NOK, GME) Donation Megathread', 'Short attacks happening at the exact same time. Blue = GME, Orange = AMC, Teal = BB', 'HOT $BB Gamma Squeeze DD', 'This is WAR $GME $BB $NOK', '$NOK is almost certainly a plant. $GME is the most mechanically likely to squeeze, followed by $AMC and $BB', 'YOLO into BB. Let’s GOOOO', 'Please only focus on GME and BB, thanks. This subreddit is being overrun by bots and managers who are shitting themselves.', 'BB TO THE MOON!!!! 🚀🚀🚀🚀 💎💎👏🏻👏🏻', 'ALL ABOARD THE RETARDED BB ROCKETSHIP!!!! 🚀🚀🚀🚀🚀🚀🚀🚀🚀🚀', '$BB will moon in the short term, but not because of fundamentals.', '$BB’s Time Has Come', 'The Full DD on $BB: An Elon Musk+Jeff Bezos sandwich for our Supreme Leader John Chen', '$BB More then just hype 30$+', 'Summary of all $BB (BlackBerry) fundamentals/news', 'BB Stock long term. A future player in electric vehicles?', 'Doing my part , BB will take off soon 🚀🚀🚀🚀', '10K PL. CURRENTLY 78.97% Short Interest @21.9% FEE. AMC 1 week behind GME!!!!!!! HOLD GME AMC BB NOK 💎💎💎 🚀🚀🚀', '$BB Is going to take off like a literal rocket', 'RH is limiting these stocks by the way https://www.google.com/amp/s/investorplace.com/2021/01/robinhood-bans-reddit-stocks-wallstreetbets-gme-cciv-sndl-jagx-amc-nok-bb/amp/', \"📢LISTEN UP!! I'm down 8k. I'm a student and this is an ENTIRE YEARS WORTH OF WAGES FOR ME. But you know what I'm going to do? HOLDDDDDD 💎🙌 Because I BELIEVE in you! Also, FUCK ROBINHOOD!! GME BB!!!🚀🚀🚀\", 'A letter to each of you who bought GME, AMC, NOK, BB.', 'For my BB people out there. I’m holding with you.', 'Could not have been more proud of all you retards and apes $SLV $GME $NOK $BB $AMC', 'BB strong incase anyone was doubting I’m 120k deep 🚀🚀🚀🚀', 'BB - If you are ever near or close by a church, lit a few candles for my dying portfolio.', '🚀🚀$BB Gang 🚀🚀', 'GME AMC BB NOK etc. to the moon ‼️‼️‼️🚀🚀🚀🚀🚀 WE WILL HOLD 💎🖐🏾', 'BlackBerry ($BB), Why you need it.', '$BB Bear DD. The case against QNX.', \"All in on BB bitches let's take the world to the moon 🚀🚀🚀🤑🤑🤑\", \"$30,000 into $BB and $NOK!! LET'S GO RETARDS!\", 'BB gamma squeeze if it closes over $13 credits: u/State_of_Affairs', 'I AINT SELLIN 🚀🚀🚀🚀 BB TO THE FUCKING MOOONN 🌑🌑💎💎🤝🤝🚀🚀🚀🚀🚀', 'BB TOO THE MOON!!!🚀🚀🚀🚀 I did my part did you do yours!?', '[BB] 💎 🙌 in Action.', \"Don't buy $BB calls, buy $BB shares - Here's why\", '🚀🚀 $BB 🙌💎', \"Missed the Last Dip but Sure as Fuck Won't Miss This One!! $GME, $BB 🤝💎🚀🚀🚀🚀\", '$BB INSANE CHART ANALYSIS 🚀🚀🚀🚀', '5 years slide of $BB history in one picture', 'Sold all my shares this morning.. oh wait I didn’t wake up this morning a paper handed PUSSY #BB #GME #AMC 🙌🏼💎 #FORTHECAUSE', 'Buy high and hold😂😂 BB to the moon', \"Notes about BB, Cylance, Amazon, Kuiper, antennas, and why I'm 1000% okay with buying BB at $20.\", 'Need to squeeze GME if you want AMC, NOK, BB, BBBY, etc to reap the rewards. It’s the gate that’ll allow the rest in.', 'BB yolo. If it hits $8 I’ll throw another 10K at it. A big buy and forget about stock.', 'NOK AND BB ARE BEING MANIPULATED IN THE SAME WAY', 'Charles “Chad” Payne to a boomer on the right:”I’ve bought BB in my own account almost a year ago. 
WHO CARES ABOUT YOUR THESIS?! WHO CARES ABOUT YOUR SHORT FUNDAMENTALS?! I told you a year ago about BB 5G technology. These people are not stupid.”', '🅱️🅱️ short interest $BB', \"I know it's GME time but Ill be holding on to these stocks because I like it. BB for long 🚀🚀🚀\", 'LETS GO BB & GME BUY AND HOLD LETS FUCKING GO BOYS WE GOING TO THE MOON💸🚀🚀💸🚀💸🚀💸', 'BB GANG - DONT STOP TILL WE REACH THE MOON OF FUCKING ENDOR 🚀🚀🚀🚀🚀🚀🚀🚀', 'Bb 💎🙌🏾💎🙌🏾 60% down', \"GME AMC BB etc aren't the first and won't be the last. If you're new, stick around. This sub has made many great calls and plenty of us have been down big and managed to turn it around and come out ahead. Just find your next move out if you're a believer just hold and wait\", 'BB - Vent on the 1/29 calls you had that were killed by suits', 'BB partnership']\n" ] ], [ [ "I want to run each of my topic lists through a list that will tell me which key word was in the post. Then I can count them and graph them.\r\n", "_____no_output_____" ] ], [ [ "stock = []\r\nfor topic in topiclist1:\r\n if 'GME' in topic:\r\n gamestop=('GME')\r\n stock.append(gamestop)\r\n if 'BB' in topic:\r\n blackberry=('BB')\r\n stock.append(blackberry)\r\n if 'AMC' in topic:\r\n movies=('AMC')\r\n stock.append(movies)\r\nelse: \r\n print('error')", "error\n" ], [ "#print out the new stock list. Be able to count after.\r\nstock", "_____no_output_____" ], [ "stock.count('BB')", "_____no_output_____" ] ], [ [ "Now I have to make the lists for the other topic lists.\r\n", "_____no_output_____" ] ], [ [ "lingo = []\r\nfor topic in topiclist2:\r\n if 'tendies' in topic:\r\n tendies=('tendies')\r\n lingo.append(tendies)\r\n if 'diamond hands' in topic:\r\n dh=('diamond hands')\r\n lingo.append(dh)\r\n if 'paper hands' in topic:\r\n ph=('paper hands')\r\n lingo.append(ph)\r\n if 'hold' in topic:\r\n hld=('hold')\r\n lingo.append(hld)\r\nelse: \r\n print('error')", "error\n" ], [ "len(lingo)", "_____no_output_____" ], [ "# I want to search these titles for words that could be used as following his lead or advice. Things like still in, 'like the stock', and GME as this is the stock he is mostly associated with.\r\nDFVlist = []\r\nfor topic in topiclist3:\r\n if 'GME' in topic:\r\n gamestop=('GME')\r\n DFVlist.append(gamestop)\r\n if 'like the stock' in topic:\r\n lts=('like the stock')\r\n DFVlist.append(lts)\r\n if 'still in' in topic:\r\n si=('still in')\r\n DFVlist.append(si)\r\nelse: \r\n print('error')", "error\n" ], [ "DFVlist", "_____no_output_____" ] ], [ [ "So now that I have lists with the 'phrases' isolated, I want to make them into dataframes, so I am able to graph them. I have to import pandas and numpy for that.", "_____no_output_____" ] ], [ [ "import pandas as pd\r\nimport numpy as np\r\nimport matplotlib as plot", "_____no_output_____" ], [ "#make stock list into dataframe\r\nfrom pandas import DataFrame\r\nstocks = pd.DataFrame (stock,columns=['ticker'])", "_____no_output_____" ], [ "stocks", "_____no_output_____" ] ], [ [ "Now I know it works I will make the other two lists into dataframes as well.\r\n", "_____no_output_____" ] ], [ [ "dfvmentions = pd.DataFrame (DFVlist, columns=['mention'])\r\nterms = pd.DataFrame (lingo, columns=['term'])", "_____no_output_____" ], [ "dfvmentions", "_____no_output_____" ], [ "terms", "_____no_output_____" ] ], [ [ "Know I have these dataframes, I can do some quick looks at the data. The stocks have more than 100 entries, meaning the titles often mentioned more than one of the stocks. 
DFV mentions only contained one of the terms I looked for 25% of the time, which could mean people just like to talk about him for other reasons. Also, the lingo terms only had 50 hits even though I pulled 100 titles; I think that has to do with the conjugation of the terms I searched for.", "_____no_output_____" ], [ "I want to get counts of each of my variables in each dataframe and then plot them. Since I care about ratios, I will use pie charts.\r\n", "_____no_output_____" ] ], [ [ "stocks.ticker.value_counts()", "_____no_output_____" ], [ "dfvmentions.mention.value_counts()", "_____no_output_____" ], [ "terms.term.value_counts()", "_____no_output_____" ] ], [ [ "I can graph these now for a nice display of the data.\r\n", "_____no_output_____" ] ], [ [ "stocks.ticker.value_counts().plot.pie()", "_____no_output_____" ], [ "terms.term.value_counts().plot.pie()", "_____no_output_____" ], [ "dfvmentions.mention.value_counts().plot.pie()", "_____no_output_____" ] ], [ [ "To go over my hunches, or very basic hypotheses: I am on the right track with GME mentions alongside DFV if you only take the last 100 posts (I'm almost positive that if you go back a month you may find different answers). However, 75% of his mentions do not contain a term I looked for. Interesting, and more posts should be collected.\r\n\r\nThe lingo is very telling about the vibe of the subreddit. Diamond hands refers to holding and not selling. Paper hands was by far the most common term used and could be being used to make fun of or call out those who have sold their shares. Emojis replacing the actual words are something to search for in the future.\r\n\r\nIn regards to the mention of stocks, I was shocked that the last 100 posts were not dominated by GME. I only search for the ticker because it makes it easier on a trading page and avoids worrying about spelling/capitalization. BB being the overarching, dominating mention for the last 100 posts is intriguing, even to me as someone who has been watching the community for months. ", "_____no_output_____" ] ] ]
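A note on the keyword tallies built above: the chained "if ... in" checks only count a hit when the exact upper-case ticker or phrase appears, which is likely why the lingo list came up short of 100 hits. A compact way to get the same counts in one pass, and to catch casing misses, is collections.Counter. This is only a sketch: the titles and keywords lists below are hypothetical stand-ins for the topic lists pulled from the subreddit, and the upper-casing step is an assumption, not something the original loops do.

from collections import Counter

# Hypothetical stand-in for the scraped titles; replace with topiclist1.
titles = ["GME and BB to the moon", "Paper hands sold AMC", "HOLD BB"]
keywords = ["GME", "BB", "AMC"]

counts = Counter()
for title in titles:
    upper = title.upper()          # assumption: normalise case so "Bb" still counts
    for kw in keywords:
        if kw in upper:
            counts[kw] += 1        # one hit per title per keyword, as in the loop above

print(counts)                      # e.g. Counter({'BB': 2, 'GME': 1, 'AMC': 1})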
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ] ]
4a87a1b024a6eefcc0edf08f7f42538a61784722
22,970
ipynb
Jupyter Notebook
inv1.ipynb
StupidPotc/laba5
fca285dc77f9cdc02b67f0a75b6d4ba3bd2dc7ad
[ "MIT" ]
null
null
null
inv1.ipynb
StupidPotc/laba5
fca285dc77f9cdc02b67f0a75b6d4ba3bd2dc7ad
[ "MIT" ]
null
null
null
inv1.ipynb
StupidPotc/laba5
fca285dc77f9cdc02b67f0a75b6d4ba3bd2dc7ad
[ "MIT" ]
null
null
null
179.453125
20,184
0.914367
[ [ [ "<h2> Задача по физике <h2>", "_____no_output_____" ], [ "<h4>Расстояние (S) между городами A и B = 600 км. Одновременно из обоих городов навстречу друг другу выезжают автомашины. Машина из города М движется со скоростью = 60 км/ч, из города К — со скоростью ν2 = 40 км/ч. Построить график зависимости пути от времени для каждой из машин и по ним определить место встречи и время их движения до встречи.<h4>", "_____no_output_____" ], [ "<h5> Решение <h5>\n Движение происходит вдоль оси Х между точкой 0 и точкой 360 , согласно формуле\n ", "_____no_output_____" ], [ "x1(t) = v1*t\n", "_____no_output_____" ], [ "x2(t) = s - v2*t", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nplt.plot([0, 60,120, 180, 240, 300, 360])\nplt.plot([360, 320, 280, 240, 200,160, 120, 80, 40, 0])\nplt.ylabel('x, км', fontsize=12, color='blue')\nplt.xlabel('t, ч', fontsize=12, color='blue')\nplt.text(4.8, 270, 'X1(t)')\nplt.text(4.8, 35, 'X2(t)')\nplt.grid()\nplt.show()", "_____no_output_____" ] ], [ [ "<h5>Графики этих функицй изображены на рисунке. В момент втречи координаты машин совпадают<h5>", "_____no_output_____" ], [ "<h6>x1(t)=x2(t0)=x0<h6>", "_____no_output_____" ], [ "<h5>Проекция точки пересечения на ось T показывает время t0, когда это произошло.<h5>", "_____no_output_____" ], [ "<h5>Ответ: x0 = 210 км, t0 = 3,5 ч <h5>", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ] ]
4a87ac43a22b416c7028ee3261965675cbf4588a
245,652
ipynb
Jupyter Notebook
Function Approximation by Neural Network/Multi-variate LASSO regression with CV.ipynb
thirupathi-chintu/Machine-Learning-with-Python
0bb8753a5140c8e69a24f2ab95c7ef133ac025a6
[ "BSD-2-Clause" ]
1,803
2018-11-26T20:53:23.000Z
2022-03-31T15:25:29.000Z
Function Approximation by Neural Network/Multi-variate LASSO regression with CV.ipynb
thirupathi-chintu/Machine-Learning-with-Python
0bb8753a5140c8e69a24f2ab95c7ef133ac025a6
[ "BSD-2-Clause" ]
8
2019-02-05T04:09:57.000Z
2022-02-19T23:46:27.000Z
Function Approximation by Neural Network/Multi-variate LASSO regression with CV.ipynb
thirupathi-chintu/Machine-Learning-with-Python
0bb8753a5140c8e69a24f2ab95c7ef133ac025a6
[ "BSD-2-Clause" ]
1,237
2018-11-28T19:48:55.000Z
2022-03-31T15:25:07.000Z
110.157848
36,214
0.796354
[ [ [ "# Multi-variate Rregression Metamodel with DOE based on random sampling\n* Input variable space should be constructed using random sampling, not classical factorial DOE\n* Linear fit is often inadequate but higher-order polynomial fits often leads to overfitting i.e. learns spurious, flawed relationships between input and output\n* R-square fit can often be misleding measure in case of high-dimensional regression\n* Metamodel can be constructed by selectively discovering features (or their combination) which matter and shrinking other high-order terms towards zero\n\n** [LASSO](https://en.wikipedia.org/wiki/Lasso_(statistics)) is an effective regularization technique for this purpose**\n\n#### LASSO: Least Absolute Shrinkage and Selection Operator\n$$ {\\displaystyle \\min _{\\beta _{0},\\beta }\\left\\{{\\frac {1}{N}}\\sum _{i=1}^{N}(y_{i}-\\beta _{0}-x_{i}^{T}\\beta )^{2}\\right\\}{\\text{ subject to }}\\sum _{j=1}^{p}|\\beta _{j}|\\leq t.} $$", "_____no_output_____" ], [ "### Import libraries", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline", "_____no_output_____" ] ], [ [ "### Global variables", "_____no_output_____" ] ], [ [ "N_points = 20 # Number of sample points\n# start with small < 40 points and see how the regularized model makes a difference. \n# Then increase the number is see the difference\nnoise_mult = 50 # Multiplier for the noise term\nnoise_mean = 10 # Mean for the Gaussian noise adder\nnoise_sd = 10 # Std. Dev. for the Gaussian noise adder", "_____no_output_____" ] ], [ [ "### Generate feature vectors based on random sampling", "_____no_output_____" ] ], [ [ "X=np.array(10*np.random.randn(N_points,5))", "_____no_output_____" ], [ "df=pd.DataFrame(X,columns=['Feature'+str(l) for l in range(1,6)])", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ] ], [ [ "### Plot the random distributions of input features", "_____no_output_____" ] ], [ [ "for i in df.columns:\n df.hist(i,bins=5,xlabelsize=15,ylabelsize=15,figsize=(8,6))", "_____no_output_____" ] ], [ [ "### Generate the output variable by analytic function + Gaussian noise (our goal will be to *'learn'* this function)", "_____no_output_____" ], [ "#### Let's construst the ground truth or originating function as follows: \n \n$ y=f(x_1,x_2,x_3,x_4,x_5)= 5x_1^2+13x_2+0.1x_1x_3^2+2x_4x_5+0.1x_5^3+0.8x_1x_4x_5+\\psi(x)\\ :\\ \\psi(x) = {\\displaystyle f(x\\;|\\;\\mu ,\\sigma ^{2})={\\frac {1}{\\sqrt {2\\pi \\sigma ^{2}}}}\\;e^{-{\\frac {(x-\\mu )^{2}}{2\\sigma ^{2}}}}}$", "_____no_output_____" ] ], [ [ "df['y']=5*df['Feature1']**2+13*df['Feature2']+0.1*df['Feature3']**2*df['Feature1'] \\\n+2*df['Feature4']*df['Feature5']+0.1*df['Feature5']**3+0.8*df['Feature1']*df['Feature4']*df['Feature5'] \\\n+noise_mult*np.random.normal(loc=noise_mean,scale=noise_sd)", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ] ], [ [ "### Plot single-variable scatterplots\n** It is clear that no clear pattern can be gussed with these single-variable plots **", "_____no_output_____" ] ], [ [ "for i in df.columns:\n df.plot.scatter(i,'y', edgecolors=(0,0,0),s=50,c='g',grid=True)", "_____no_output_____" ] ], [ [ "### Standard linear regression", "_____no_output_____" ] ], [ [ "from sklearn.linear_model import LinearRegression", "_____no_output_____" ], [ "linear_model = LinearRegression(normalize=True)", "_____no_output_____" ], [ "X_linear=df.drop('y',axis=1)\ny_linear=df['y']", "_____no_output_____" ], [ 
"linear_model.fit(X_linear,y_linear)", "_____no_output_____" ], [ "y_pred_linear = linear_model.predict(X_linear)", "_____no_output_____" ] ], [ [ "### R-square of simple linear fit is very bad, coefficients have no meaning i.e. we did not 'learn' the function", "_____no_output_____" ] ], [ [ "RMSE_linear = np.sqrt(np.sum(np.square(y_pred_linear-y_linear)))", "_____no_output_____" ], [ "print(\"Root-mean-square error of linear model:\",RMSE_linear)", "Root-mean-square error of linear model: 4591.42864942\n" ], [ "coeff_linear = pd.DataFrame(linear_model.coef_,index=df.drop('y',axis=1).columns, columns=['Linear model coefficients'])\ncoeff_linear", "_____no_output_____" ], [ "print (\"R2 value of linear model:\",linear_model.score(X_linear,y_linear))", "R2 value of linear model: 0.329123897412\n" ], [ "plt.figure(figsize=(12,8))\nplt.xlabel(\"Predicted value with linear fit\",fontsize=20)\nplt.ylabel(\"Actual y-values\",fontsize=20)\nplt.grid(1)\nplt.scatter(y_pred_linear,y_linear,edgecolors=(0,0,0),lw=2,s=80)\nplt.plot(y_pred_linear,y_pred_linear, 'k--', lw=2)", "_____no_output_____" ] ], [ [ "### Create polynomial features", "_____no_output_____" ] ], [ [ "from sklearn.preprocessing import PolynomialFeatures", "_____no_output_____" ], [ "poly1 = PolynomialFeatures(3,include_bias=False)", "_____no_output_____" ], [ "X_poly = poly1.fit_transform(X)\nX_poly_feature_name = poly1.get_feature_names(['Feature'+str(l) for l in range(1,6)])\nprint(\"The feature vector list:\\n\",X_poly_feature_name)\nprint(\"\\nLength of the feature vector:\",len(X_poly_feature_name))", "The feature vector list:\n ['Feature1', 'Feature2', 'Feature3', 'Feature4', 'Feature5', 'Feature1^2', 'Feature1 Feature2', 'Feature1 Feature3', 'Feature1 Feature4', 'Feature1 Feature5', 'Feature2^2', 'Feature2 Feature3', 'Feature2 Feature4', 'Feature2 Feature5', 'Feature3^2', 'Feature3 Feature4', 'Feature3 Feature5', 'Feature4^2', 'Feature4 Feature5', 'Feature5^2', 'Feature1^3', 'Feature1^2 Feature2', 'Feature1^2 Feature3', 'Feature1^2 Feature4', 'Feature1^2 Feature5', 'Feature1 Feature2^2', 'Feature1 Feature2 Feature3', 'Feature1 Feature2 Feature4', 'Feature1 Feature2 Feature5', 'Feature1 Feature3^2', 'Feature1 Feature3 Feature4', 'Feature1 Feature3 Feature5', 'Feature1 Feature4^2', 'Feature1 Feature4 Feature5', 'Feature1 Feature5^2', 'Feature2^3', 'Feature2^2 Feature3', 'Feature2^2 Feature4', 'Feature2^2 Feature5', 'Feature2 Feature3^2', 'Feature2 Feature3 Feature4', 'Feature2 Feature3 Feature5', 'Feature2 Feature4^2', 'Feature2 Feature4 Feature5', 'Feature2 Feature5^2', 'Feature3^3', 'Feature3^2 Feature4', 'Feature3^2 Feature5', 'Feature3 Feature4^2', 'Feature3 Feature4 Feature5', 'Feature3 Feature5^2', 'Feature4^3', 'Feature4^2 Feature5', 'Feature4 Feature5^2', 'Feature5^3']\n\nLength of the feature vector: 55\n" ], [ "df_poly = pd.DataFrame(X_poly, columns=X_poly_feature_name)\ndf_poly.head()", "_____no_output_____" ], [ "df_poly['y']=df['y']\ndf_poly.head()", "_____no_output_____" ], [ "X_train=df_poly.drop('y',axis=1)\ny_train=df_poly['y']", "_____no_output_____" ] ], [ [ "### Polynomial model without regularization and cross-validation", "_____no_output_____" ] ], [ [ "poly2 = LinearRegression(normalize=True)", "_____no_output_____" ], [ "model_poly=poly2.fit(X_train,y_train)\ny_poly = poly2.predict(X_train)\nRMSE_poly=np.sqrt(np.sum(np.square(y_poly-y_train)))\nprint(\"Root-mean-square error of simple polynomial model:\",RMSE_poly)", "Root-mean-square error of simple polynomial model: 1.64850814918e-11\n" ] ], [ [ "### 
The non-regularized polunomial model (notice the coeficients are not learned properly)\n** Recall that the originating function is: ** \n$ y= 5x_1^2+13x_2+0.1x_1x_3^2+2x_4x_5+0.1x_5^3+0.8x_1x_4x_5+noise $", "_____no_output_____" ] ], [ [ "coeff_poly = pd.DataFrame(model_poly.coef_,index=df_poly.drop('y',axis=1).columns, \n columns=['Coefficients polynomial model'])\ncoeff_poly", "_____no_output_____" ] ], [ [ "#### R-square value of the simple polynomial model is perfect but the model is flawed as shown above i.e. it learned wrong coefficients and overfitted the to the data", "_____no_output_____" ] ], [ [ "print (\"R2 value of simple polynomial model:\",model_poly.score(X_train,y_train))", "R2 value of simple polynomial model: 1.0\n" ] ], [ [ "### Polynomial model with cross-validation and LASSO regularization\n** This is an advanced machine learning method which prevents over-fitting by penalizing high-valued coefficients i.e. keep them bounded **", "_____no_output_____" ] ], [ [ "from sklearn.linear_model import LassoCV", "_____no_output_____" ], [ "model1 = LassoCV(cv=10,verbose=0,normalize=True,eps=0.001,n_alphas=100, tol=0.0001,max_iter=5000)", "_____no_output_____" ], [ "model1.fit(X_train,y_train)", "_____no_output_____" ], [ "y_pred1 = np.array(model1.predict(X_train))", "_____no_output_____" ], [ "RMSE_1=np.sqrt(np.sum(np.square(y_pred1-y_train)))\nprint(\"Root-mean-square error of Metamodel:\",RMSE_1)", "Root-mean-square error of Metamodel: 14.6011217949\n" ], [ "coeff1 = pd.DataFrame(model1.coef_,index=df_poly.drop('y',axis=1).columns, columns=['Coefficients Metamodel'])\ncoeff1", "_____no_output_____" ], [ "model1.score(X_train,y_train)", "_____no_output_____" ], [ "model1.alpha_", "_____no_output_____" ] ], [ [ "### Printing only the non-zero coefficients of the regularized model (notice the coeficients are learned well enough)\n** Recall that the originating function is: ** \n$ y= 5x_1^2+13x_2+0.1x_1x_3^2+2x_4x_5+0.1x_5^3+0.8x_1x_4x_5+noise $", "_____no_output_____" ] ], [ [ "coeff1[coeff1['Coefficients Metamodel']!=0]", "_____no_output_____" ], [ "plt.figure(figsize=(12,8))\nplt.xlabel(\"Predicted value with Regularized Metamodel\",fontsize=20)\nplt.ylabel(\"Actual y-values\",fontsize=20)\nplt.grid(1)\nplt.scatter(y_pred1,y_train,edgecolors=(0,0,0),lw=2,s=80)\nplt.plot(y_pred1,y_pred1, 'k--', lw=2)", "_____no_output_____" ] ] ]
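For readers who want the gist of the notebook above in one self-contained cell, here is a condensed sketch of the same idea: expand random inputs into polynomial terms, then let LassoCV shrink the irrelevant ones towards zero. The generating function, sample size and degree below are simplified stand-ins, not the ones used above, and the normalize= argument used above is omitted because it has been removed from recent scikit-learn releases.

# Condensed, self-contained rerun of the idea above on a smaller synthetic problem.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = 10 * rng.standard_normal((200, 3))
y = 5 * X[:, 0]**2 + 13 * X[:, 1] + 2 * X[:, 1] * X[:, 2] + rng.normal(0, 5, size=200)

X_poly = PolynomialFeatures(degree=3, include_bias=False).fit_transform(X)  # 19 candidate terms
lasso = LassoCV(cv=5, max_iter=50_000).fit(X_poly, y)

print("chosen alpha:", lasso.alpha_)
print("non-zero coefficients:", np.count_nonzero(lasso.coef_), "out of", X_poly.shape[1])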
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
4a87b328fc8213df50e7d2bd91c55f14d198dae5
3,692
ipynb
Jupyter Notebook
lectures/Lecture_9.ipynb
KiranArun/CSMM.102x-Machine_Learning
a2d5cda6e380ebdd0546d3f8ed07cb0881b685fc
[ "MIT" ]
6
2019-09-27T13:50:16.000Z
2021-01-14T10:13:25.000Z
lectures/Lecture_9.ipynb
semihkumluk/CSMM.102x-Machine_Learning
a2d5cda6e380ebdd0546d3f8ed07cb0881b685fc
[ "MIT" ]
null
null
null
lectures/Lecture_9.ipynb
semihkumluk/CSMM.102x-Machine_Learning
a2d5cda6e380ebdd0546d3f8ed07cb0881b685fc
[ "MIT" ]
4
2018-12-05T18:03:42.000Z
2020-02-22T15:42:09.000Z
29.536
161
0.499729
[ [ [ "# Logistic Regression", "_____no_output_____" ], [ "$L = \\ln\\frac{p(y = +1|x)}{p(y = -1|x)}$\n\n- L >> 0: more confident in y = +1\n- L << 0: more confident in y = -1\n- L = 0: dunno\n\nLinear function $x^Tw + w_0$ captures:\n- $\\left|\\frac{x^Tw}{||w||_2} + \\frac{w_0}{||w||_2}\\right|$ gives us distance from hyperplane to x\n- sign of function captures which side x is on\n- As x moves away/towards H, we become more/less confident", "_____no_output_____" ], [ "the previous bayes classifier had a prior on y, and distribution on data x so the weights were restricted\n\n$p(y = -1|x) = 1 - p(y = +1|x)$\n\n$p(y = +1|x) = \\frac{\\exp\\{x^Tw + w_0\\}}{1 + \\exp\\{x^Tw + w_0\\}} = \\sigma(x^Tw + w_0)$\n- called sigmoid function\n\nwith offset absorbed into x and w\n\n$P(y_i = +1|x_i, w) = \\sigma(x^T_iw)$\n\n$\\sigma(x^T_iw) = \\frac{e^{x^T_iw}}{1 + e^{x^T_iw}}$\n\n#### this is discrimitive because x is not directly modeled\n\n- discriminitive: p(y|x)\n- discriminitive: p(x|y)p(y)\n- bayes a generative because x is modeled\n\n#### Joint Likelihood\n\n$\\prod^n_{i=1}p(y_i|x_i, w)$\n\n$= \\prod^n_{i=1}\\sigma(x^T_iw)^{\\mathbb{1}(y_i = +1)}(1-\\sigma(x^T_iw))^{\\mathbb{1}(y_i = -1)}$\n- the joint probability (using the sigmoid function) of each value\n\n$= \\left(\\frac{e^{x^T_iw}}{1 + e^{x^T_iw}}\\right)^{\\mathbb{1}(y_i = +1)}\\left(1-\\frac{e^{x^T_iw}}{1 + e^{x^T_iw}}\\right)^{\\mathbb{1}(y_i = -1)}$\n- this is our confidence\n\n$= \\frac{e^{y_ix^T_iw}}{1 + e^{y_ix^T_iw}}$\n- this is $\\sigma(y_i\\cdot x^T_iw)$\n- we want to maximize this over w\n\n$w_{ML} = \\arg\\max_w\\sum^n_{i=1}\\ln\\sigma(y_i\\cdot x^T_iw)$\n\n$w_{ML} = \\arg\\max_wL$\n\nwe cant set $\\triangledown_wL = 0$\n\n$w\\prime = w + \\eta\\triangledown_wL$\n\n$\\triangledown_wL = \\sum^n_{i=1}(1 - \\sigma(y_i\\cdot x^T_iw))y_ix_i$\n- its like the peceptron\n- but we're multiplying the addition by our unconfidence\n\n$w^{(t+1)} = w^{(t)} + \\eta\\sum^n_{i=1}(1 - \\sigma(y_i\\cdot x^T_iw))y_ix_i$\n\n#### for linearly seperable data:\n\nbecause $||w||_2 \\to \\infty$ so that $L \\to 1$\n\n#### for nearly linear seperable data:\n\noverfitting where $||w||_2 \\to \\infty$ so that $L \\to 1$ for majority of values but some wont be accurate\n\n### Regularization\n\n$w_{map} = \\arg\\max_w\\sum^n_{i=1}\\ln\\sigma(y_i\\cdot x^T_iw) - \\frac{\\lambda}{2}w^T w$\n", "_____no_output_____" ], [ "## Laplace Approximation", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ] ]
4a87cbebccf5e1718e26884f95aa7de476056d19
86,375
ipynb
Jupyter Notebook
Password Strength Using NLP/Password_strength_NLP.ipynb
Cavin6080/Machine_Learning
e05ea92775ecc14daa748758bd3d46a3b3657286
[ "Apache-2.0" ]
null
null
null
Password Strength Using NLP/Password_strength_NLP.ipynb
Cavin6080/Machine_Learning
e05ea92775ecc14daa748758bd3d46a3b3657286
[ "Apache-2.0" ]
null
null
null
Password Strength Using NLP/Password_strength_NLP.ipynb
Cavin6080/Machine_Learning
e05ea92775ecc14daa748758bd3d46a3b3657286
[ "Apache-2.0" ]
null
null
null
28.107712
7,324
0.422657
[ [ [ "import numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport random\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom sklearn.metrics import classification_report\nfrom sklearn.metrics import confusion_matrix, accuracy_score\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.model_selection import train_test_split\nimport warnings\nwarnings.filterwarnings('ignore')", "_____no_output_____" ], [ "df = pd.read_csv('data.csv', error_bad_lines=False)\ndf.head()", "b'Skipping line 2810: expected 2 fields, saw 5\\nSkipping line 4641: expected 2 fields, saw 5\\nSkipping line 7171: expected 2 fields, saw 5\\nSkipping line 11220: expected 2 fields, saw 5\\nSkipping line 13809: expected 2 fields, saw 5\\nSkipping line 14132: expected 2 fields, saw 5\\nSkipping line 14293: expected 2 fields, saw 5\\nSkipping line 14865: expected 2 fields, saw 5\\nSkipping line 17419: expected 2 fields, saw 5\\nSkipping line 22801: expected 2 fields, saw 5\\nSkipping line 25001: expected 2 fields, saw 5\\nSkipping line 26603: expected 2 fields, saw 5\\nSkipping line 26742: expected 2 fields, saw 5\\nSkipping line 29702: expected 2 fields, saw 5\\nSkipping line 32767: expected 2 fields, saw 5\\nSkipping line 32878: expected 2 fields, saw 5\\nSkipping line 35643: expected 2 fields, saw 5\\nSkipping line 36550: expected 2 fields, saw 5\\nSkipping line 38732: expected 2 fields, saw 5\\nSkipping line 40567: expected 2 fields, saw 5\\nSkipping line 40576: expected 2 fields, saw 5\\nSkipping line 41864: expected 2 fields, saw 5\\nSkipping line 46861: expected 2 fields, saw 5\\nSkipping line 47939: expected 2 fields, saw 5\\nSkipping line 48628: expected 2 fields, saw 5\\nSkipping line 48908: expected 2 fields, saw 5\\nSkipping line 57582: expected 2 fields, saw 5\\nSkipping line 58782: expected 2 fields, saw 5\\nSkipping line 58984: expected 2 fields, saw 5\\nSkipping line 61518: expected 2 fields, saw 5\\nSkipping line 63451: expected 2 fields, saw 5\\nSkipping line 68141: expected 2 fields, saw 5\\nSkipping line 72083: expected 2 fields, saw 5\\nSkipping line 74027: expected 2 fields, saw 5\\nSkipping line 77811: expected 2 fields, saw 5\\nSkipping line 83958: expected 2 fields, saw 5\\nSkipping line 85295: expected 2 fields, saw 5\\nSkipping line 88665: expected 2 fields, saw 5\\nSkipping line 89198: expected 2 fields, saw 5\\nSkipping line 92499: expected 2 fields, saw 5\\nSkipping line 92751: expected 2 fields, saw 5\\nSkipping line 93689: expected 2 fields, saw 5\\nSkipping line 94776: expected 2 fields, saw 5\\nSkipping line 97334: expected 2 fields, saw 5\\nSkipping line 102316: expected 2 fields, saw 5\\nSkipping line 103421: expected 2 fields, saw 5\\nSkipping line 106872: expected 2 fields, saw 5\\nSkipping line 109363: expected 2 fields, saw 5\\nSkipping line 110117: expected 2 fields, saw 5\\nSkipping line 110465: expected 2 fields, saw 5\\nSkipping line 113843: expected 2 fields, saw 5\\nSkipping line 115634: expected 2 fields, saw 5\\nSkipping line 121518: expected 2 fields, saw 5\\nSkipping line 123692: expected 2 fields, saw 5\\nSkipping line 124708: expected 2 fields, saw 5\\nSkipping line 129608: expected 2 fields, saw 5\\nSkipping line 133176: expected 2 fields, saw 5\\nSkipping line 135532: expected 2 fields, saw 5\\nSkipping line 138042: expected 2 fields, saw 5\\nSkipping line 139485: expected 2 fields, saw 5\\nSkipping line 140401: expected 2 fields, saw 5\\nSkipping line 144093: expected 2 fields, saw 5\\nSkipping line 149850: expected 2 fields, saw 
5\\nSkipping line 151831: expected 2 fields, saw 5\\nSkipping line 158014: expected 2 fields, saw 5\\nSkipping line 162047: expected 2 fields, saw 5\\nSkipping line 164515: expected 2 fields, saw 5\\nSkipping line 170313: expected 2 fields, saw 5\\nSkipping line 171325: expected 2 fields, saw 5\\nSkipping line 171424: expected 2 fields, saw 5\\nSkipping line 175920: expected 2 fields, saw 5\\nSkipping line 176210: expected 2 fields, saw 5\\nSkipping line 183603: expected 2 fields, saw 5\\nSkipping line 190264: expected 2 fields, saw 5\\nSkipping line 191683: expected 2 fields, saw 5\\nSkipping line 191988: expected 2 fields, saw 5\\nSkipping line 195450: expected 2 fields, saw 5\\nSkipping line 195754: expected 2 fields, saw 5\\nSkipping line 197124: expected 2 fields, saw 5\\nSkipping line 199263: expected 2 fields, saw 5\\nSkipping line 202603: expected 2 fields, saw 5\\nSkipping line 209960: expected 2 fields, saw 5\\nSkipping line 213218: expected 2 fields, saw 5\\nSkipping line 217060: expected 2 fields, saw 5\\nSkipping line 220121: expected 2 fields, saw 5\\nSkipping line 223518: expected 2 fields, saw 5\\nSkipping line 226293: expected 2 fields, saw 5\\nSkipping line 227035: expected 2 fields, saw 7\\nSkipping line 227341: expected 2 fields, saw 5\\nSkipping line 227808: expected 2 fields, saw 5\\nSkipping line 228516: expected 2 fields, saw 5\\nSkipping line 228733: expected 2 fields, saw 5\\nSkipping line 232043: expected 2 fields, saw 5\\nSkipping line 232426: expected 2 fields, saw 5\\nSkipping line 234490: expected 2 fields, saw 5\\nSkipping line 239626: expected 2 fields, saw 5\\nSkipping line 240461: expected 2 fields, saw 5\\nSkipping line 244518: expected 2 fields, saw 5\\nSkipping line 245395: expected 2 fields, saw 5\\nSkipping line 246168: expected 2 fields, saw 5\\nSkipping line 246655: expected 2 fields, saw 5\\nSkipping line 246752: expected 2 fields, saw 5\\nSkipping line 247189: expected 2 fields, saw 5\\nSkipping line 250276: expected 2 fields, saw 5\\nSkipping line 255327: expected 2 fields, saw 5\\nSkipping line 257094: expected 2 fields, saw 5\\n'\nb'Skipping line 264626: expected 2 fields, saw 5\\nSkipping line 265028: expected 2 fields, saw 5\\nSkipping line 269150: expected 2 fields, saw 5\\nSkipping line 271360: expected 2 fields, saw 5\\nSkipping line 273975: expected 2 fields, saw 5\\nSkipping line 274742: expected 2 fields, saw 5\\nSkipping line 276227: expected 2 fields, saw 5\\nSkipping line 279807: expected 2 fields, saw 5\\nSkipping line 283425: expected 2 fields, saw 5\\nSkipping line 287468: expected 2 fields, saw 5\\nSkipping line 292995: expected 2 fields, saw 5\\nSkipping line 293496: expected 2 fields, saw 5\\nSkipping line 293735: expected 2 fields, saw 5\\nSkipping line 295060: expected 2 fields, saw 5\\nSkipping line 296643: expected 2 fields, saw 5\\nSkipping line 296848: expected 2 fields, saw 5\\nSkipping line 308926: expected 2 fields, saw 5\\nSkipping line 310360: expected 2 fields, saw 5\\nSkipping line 317004: expected 2 fields, saw 5\\nSkipping line 318207: expected 2 fields, saw 5\\nSkipping line 331783: expected 2 fields, saw 5\\nSkipping line 333864: expected 2 fields, saw 5\\nSkipping line 335958: expected 2 fields, saw 5\\nSkipping line 336290: expected 2 fields, saw 5\\nSkipping line 343526: expected 2 fields, saw 5\\nSkipping line 343857: expected 2 fields, saw 5\\nSkipping line 344059: expected 2 fields, saw 5\\nSkipping line 348691: expected 2 fields, saw 5\\nSkipping line 353446: expected 2 fields, saw 5\\nSkipping line 
357073: expected 2 fields, saw 5\\nSkipping line 359753: expected 2 fields, saw 5\\nSkipping line 359974: expected 2 fields, saw 5\\nSkipping line 366534: expected 2 fields, saw 5\\nSkipping line 369514: expected 2 fields, saw 5\\nSkipping line 377759: expected 2 fields, saw 5\\nSkipping line 379327: expected 2 fields, saw 5\\nSkipping line 380769: expected 2 fields, saw 5\\nSkipping line 381073: expected 2 fields, saw 5\\nSkipping line 381489: expected 2 fields, saw 5\\nSkipping line 386304: expected 2 fields, saw 5\\nSkipping line 387635: expected 2 fields, saw 5\\nSkipping line 389613: expected 2 fields, saw 5\\nSkipping line 392604: expected 2 fields, saw 5\\nSkipping line 393184: expected 2 fields, saw 5\\nSkipping line 395530: expected 2 fields, saw 5\\nSkipping line 396939: expected 2 fields, saw 5\\nSkipping line 397385: expected 2 fields, saw 5\\nSkipping line 397509: expected 2 fields, saw 5\\nSkipping line 402902: expected 2 fields, saw 5\\nSkipping line 405187: expected 2 fields, saw 5\\nSkipping line 408412: expected 2 fields, saw 5\\nSkipping line 419423: expected 2 fields, saw 5\\nSkipping line 420962: expected 2 fields, saw 5\\nSkipping line 425965: expected 2 fields, saw 5\\nSkipping line 427496: expected 2 fields, saw 5\\nSkipping line 438881: expected 2 fields, saw 5\\nSkipping line 439776: expected 2 fields, saw 5\\nSkipping line 440345: expected 2 fields, saw 5\\nSkipping line 445507: expected 2 fields, saw 5\\nSkipping line 445548: expected 2 fields, saw 5\\nSkipping line 447184: expected 2 fields, saw 5\\nSkipping line 448603: expected 2 fields, saw 5\\nSkipping line 451732: expected 2 fields, saw 5\\nSkipping line 458249: expected 2 fields, saw 5\\nSkipping line 460274: expected 2 fields, saw 5\\nSkipping line 467630: expected 2 fields, saw 5\\nSkipping line 473961: expected 2 fields, saw 5\\nSkipping line 476281: expected 2 fields, saw 5\\nSkipping line 478010: expected 2 fields, saw 5\\nSkipping line 478322: expected 2 fields, saw 5\\nSkipping line 479999: expected 2 fields, saw 5\\nSkipping line 480898: expected 2 fields, saw 5\\nSkipping line 481688: expected 2 fields, saw 5\\nSkipping line 485193: expected 2 fields, saw 5\\nSkipping line 485519: expected 2 fields, saw 5\\nSkipping line 486000: expected 2 fields, saw 5\\nSkipping line 489063: expected 2 fields, saw 5\\nSkipping line 494525: expected 2 fields, saw 5\\nSkipping line 495009: expected 2 fields, saw 5\\nSkipping line 501954: expected 2 fields, saw 5\\nSkipping line 508035: expected 2 fields, saw 5\\nSkipping line 508828: expected 2 fields, saw 5\\nSkipping line 509833: expected 2 fields, saw 5\\nSkipping line 510410: expected 2 fields, saw 5\\nSkipping line 518229: expected 2 fields, saw 5\\nSkipping line 520302: expected 2 fields, saw 5\\nSkipping line 520340: expected 2 fields, saw 5\\n'\n" ], [ "df['strength'].unique()", "_____no_output_____" ], [ "df.isna().sum()", "_____no_output_____" ], [ "#Checking at which row and column password is null\ndf[df['password'].isnull()]", "_____no_output_____" ], [ "df.dropna(inplace=True)", "_____no_output_____" ], [ "df.isna().sum()", "_____no_output_____" ], [ "sns.countplot(df['strength'])", "_____no_output_____" ], [ "password_tuple = np.array(df)\npassword_tuple", "_____no_output_____" ], [ "random.shuffle(password_tuple)", "_____no_output_____" ], [ "X = [labels[0] for labels in password_tuple]\ny = [labels[1] for labels in password_tuple]", "_____no_output_____" ], [ "X", "_____no_output_____" ], [ "y", "_____no_output_____" ], [ "#For converting string 
to characters\ndef string_to_char(input):\n char = []\n for i in input:\n char.append(i)\n return char", "_____no_output_____" ], [ "vectorizer = TfidfVectorizer(tokenizer=string_to_char)\nX = vectorizer.fit_transform(X)", "_____no_output_____" ], [ "X.shape", "_____no_output_____" ], [ "vectorizer.get_feature_names()", "_____no_output_____" ], [ "first_doc_vector = X[0]\nfirst_doc_vector.T.todense()", "_____no_output_____" ], [ "data = pd.DataFrame(first_doc_vector.T.todense(), index=vectorizer.get_feature_names(), columns=['TF-IDF'])\ndata", "_____no_output_____" ], [ "data.sort_values(by=['TF-IDF'],ascending=False)", "_____no_output_____" ], [ "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)", "_____no_output_____" ], [ "X_train.shape", "_____no_output_____" ], [ "regressor = LogisticRegression(random_state=0, multi_class='multinomial')\nregressor.fit(X_train, y_train)", "_____no_output_____" ], [ "y_pred = regressor.predict(X_test)\ny_pred", "_____no_output_____" ], [ "cm = confusion_matrix(y_test, y_pred)\nprint('The confusion matrix is: \\n',cm)\nprint('The accuracy is',accuracy_score(y_test, y_pred))", "The confusion matrix is: \n [[ 5225 12828 13]\n [ 3636 92956 2602]\n [ 35 5062 11571]]\nThe accuracy is 0.819485096469745\n" ], [ "print(classification_report(y_test, y_pred))", " precision recall f1-score support\n\n 0 0.59 0.29 0.39 18066\n 1 0.84 0.94 0.89 99194\n 2 0.82 0.69 0.75 16668\n\n accuracy 0.82 133928\n macro avg 0.75 0.64 0.67 133928\nweighted avg 0.80 0.82 0.80 133928\n\n" ], [ "import pickle\nfilename = 'finalized_model.sav'\npickle.dump(regressor, open(filename, 'wb'))\n\n# load the model from disk\n#loaded_model = pickle.load(open(filename, 'rb'))\n#result = loaded_model.score(X_test, Y_test)\n#print(result)", "_____no_output_____" ] ] ]
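One gap worth flagging in the notebook above: only the classifier is pickled, and prediction on a new password is left commented out. To score unseen passwords later, the fitted character-level TfidfVectorizer has to be saved and reused as well, since a re-fitted vectorizer would produce a different feature space. The sketch below assumes the vectorizer and regressor objects defined above; the file name and sample passwords are made up.

import pickle

# persist both pieces, not just the classifier
# (note: the custom string_to_char tokenizer must be importable when unpickling)
with open('password_model.pkl', 'wb') as f:
    pickle.dump({'vectorizer': vectorizer, 'model': regressor}, f)

# later, in a fresh session
with open('password_model.pkl', 'rb') as f:
    bundle = pickle.load(f)

samples = ['kzde5577', 'P@ssw0rd!2021xyz']          # hypothetical passwords to score
features = bundle['vectorizer'].transform(samples)  # reuse the fitted char-level TF-IDF
print(bundle['model'].predict(features))            # predicted strength class (0, 1 or 2)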
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a87cc4e2a0b6eea3075950a339958ac2afa7385
3,706
ipynb
Jupyter Notebook
scripts/onnx/notebooks/classification/resnet50_v1d_0.37.ipynb
stephenkl/gluon-cv
e5b13e4e95ceb2f604c9b142a67a9b48a0aff407
[ "Apache-2.0" ]
5,447
2018-04-25T18:02:51.000Z
2022-03-31T00:59:49.000Z
scripts/onnx/notebooks/classification/resnet50_v1d_0.37.ipynb
stephenkl/gluon-cv
e5b13e4e95ceb2f604c9b142a67a9b48a0aff407
[ "Apache-2.0" ]
1,566
2018-04-25T21:14:04.000Z
2022-03-31T06:42:42.000Z
scripts/onnx/notebooks/classification/resnet50_v1d_0.37.ipynb
stephenkl/gluon-cv
e5b13e4e95ceb2f604c9b142a67a9b48a0aff407
[ "Apache-2.0" ]
1,345
2018-04-25T18:44:13.000Z
2022-03-30T19:32:53.000Z
25.916084
261
0.545872
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
4a87dc937da61401ecb4d0c5ce5bda60762bfd03
46,741
ipynb
Jupyter Notebook
Evaluation/BLEU/.ipynb_checkpoints/Confusion Matrix - Common Lines - World War 2-checkpoint.ipynb
mihirsam/Information-Extraction-using-CNN
7e939b8f37c1e06a1639f4a15d51df817e835bd2
[ "MIT" ]
7
2020-07-20T06:38:29.000Z
2022-01-25T07:51:08.000Z
Evaluation/BLEU/.ipynb_checkpoints/Confusion Matrix - Common Lines - World War 2-checkpoint.ipynb
mihirsam/Information-Extraction-using-CNN
7e939b8f37c1e06a1639f4a15d51df817e835bd2
[ "MIT" ]
null
null
null
Evaluation/BLEU/.ipynb_checkpoints/Confusion Matrix - Common Lines - World War 2-checkpoint.ipynb
mihirsam/Information-Extraction-using-CNN
7e939b8f37c1e06a1639f4a15d51df817e835bd2
[ "MIT" ]
4
2019-10-09T09:44:53.000Z
2020-12-07T14:31:21.000Z
313.697987
43,180
0.933313
[ [ [ "from LineSplit import LineSplit", "_____no_output_____" ], [ "# check files\n\nfiles = ['summary_worldwar2.txt', 'Ref_WorldWar2_sir.txt', 'Ref_WorldWar2_mihir.txt', 'Ref_WorldWar2_chandni.txt', 'Ref_WorldWar2_sweta.txt']\n\nfor file in files:\n print(f\"Lines Count {file}: \", len(LineSplit(file)))", "Lines Count summary_worldwar2.txt: 13\nLines Count Ref_WorldWar2_sir.txt: 13\nLines Count Ref_WorldWar2_mihir.txt: 13\nLines Count Ref_WorldWar2_chandni.txt: 13\nLines Count Ref_WorldWar2_sweta.txt: 13\n" ], [ "common_matrix = []\n\nfor fOne in files:\n listOne = LineSplit(fOne)\n row = []\n \n for fTwo in files:\n listTwo = LineSplit(fTwo)\n cLines = []\n for line in listOne:\n if line in listTwo:\n cLines.append(line)\n listTwo.remove(line)\n row.append(len(cLines))\n common_matrix.append(row)", "_____no_output_____" ], [ "common_matrix", "_____no_output_____" ], [ "import seaborn as sn\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n# ['candidate', 'ref_sir','ref_mihir', ref_chandni', 'ref_sweta']\ndf_cm = pd.DataFrame(common_matrix, index = ['candidate', 'ref_1', 'ref_2', 'ref_3', 'ref_4'],\n columns = ['candidate', 'ref_1', 'ref_2', 'ref_3', 'ref_4'])\n\n\nsn.set(rc={'figure.figsize':(15.7,10.27)})\nsn.set(font_scale=1.4)#for label size\nsn.heatmap(df_cm, annot=True,annot_kws={\"size\": 16}, cmap=\"summer\")# font size\nplt.title(label=\"Common Sentences Confusion Matrix of World War 2 Classifier\", pad=10.0)\nplt.savefig('confusion_commonLines_worldwar2.png')\nplt.show()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
4a87e5d6191a827b8677d169b3c3052ea5ba8b87
12,021
ipynb
Jupyter Notebook
Machine Learning with PySpark/Chapter 5 Logistic regression.ipynb
sonwanesuresh95/Books-to-notebooks
7e56d31395cfda258baefa93d5181839d1a829dc
[ "MIT" ]
1
2021-03-09T06:22:46.000Z
2021-03-09T06:22:46.000Z
Machine Learning with PySpark/Chapter 5 Logistic regression.ipynb
sonwanesuresh95/Books-to-notebooks
7e56d31395cfda258baefa93d5181839d1a829dc
[ "MIT" ]
null
null
null
Machine Learning with PySpark/Chapter 5 Logistic regression.ipynb
sonwanesuresh95/Books-to-notebooks
7e56d31395cfda258baefa93d5181839d1a829dc
[ "MIT" ]
null
null
null
21.504472
150
0.428833
[ [ [ "# Logistic Regression in pyspark", "_____no_output_____" ] ], [ [ "import findspark", "_____no_output_____" ], [ "findspark.init()", "_____no_output_____" ], [ "import pyspark", "_____no_output_____" ], [ "from pyspark.sql import SparkSession", "_____no_output_____" ], [ "spark = SparkSession.builder.appName('logreg').getOrCreate()", "_____no_output_____" ] ], [ [ "# Load dataset", "_____no_output_____" ] ], [ [ "df = spark.read.csv(r'logreg.csv', header=True, inferSchema=True)", "_____no_output_____" ], [ "df.show(5)", "+---------+---+--------------+--------+----------------+------+\n| Country|Age|Repeat_Visitor|Platform|Web_pages_viewed|Status|\n+---------+---+--------------+--------+----------------+------+\n| India| 41| 1| Yahoo| 21| 1|\n| Brazil| 28| 1| Yahoo| 5| 0|\n| Brazil| 40| 0| Google| 3| 0|\n|Indonesia| 31| 1| Bing| 15| 1|\n| Malaysia| 32| 0| Google| 15| 1|\n+---------+---+--------------+--------+----------------+------+\nonly showing top 5 rows\n\n" ], [ "df.count(),len(df.columns)", "_____no_output_____" ], [ "df.printSchema()", "root\n |-- Country: string (nullable = true)\n |-- Age: integer (nullable = true)\n |-- Repeat_Visitor: integer (nullable = true)\n |-- Platform: string (nullable = true)\n |-- Web_pages_viewed: integer (nullable = true)\n |-- Status: integer (nullable = true)\n\n" ], [ "df.describe().show()", "+-------+--------+-----------------+-----------------+--------+-----------------+------------------+\n|summary| Country| Age| Repeat_Visitor|Platform| Web_pages_viewed| Status|\n+-------+--------+-----------------+-----------------+--------+-----------------+------------------+\n| count| 20000| 20000| 20000| 20000| 20000| 20000|\n| mean| null| 28.53955| 0.5029| null| 9.5533| 0.5|\n| stddev| null|7.888912950773227|0.500004090187782| null|6.073903499824976|0.5000125004687693|\n| min| Brazil| 17| 0| Bing| 1| 0|\n| max|Malaysia| 111| 1| Yahoo| 29| 1|\n+-------+--------+-----------------+-----------------+--------+-----------------+------------------+\n\n" ] ], [ [ "# Converting categorical variables to features", "_____no_output_____" ], [ "## Use LabelEncoding -> OneHotEncoding", "_____no_output_____" ] ], [ [ "from pyspark.ml.feature import StringIndexer, VectorAssembler, OneHotEncoder", "_____no_output_____" ], [ "lbl_enc = StringIndexer(inputCol='Country',outputCol='country_labels').fit(df)", "_____no_output_____" ], [ "df = lbl_enc.transform(df)", "_____no_output_____" ], [ "df.show(5)", "+---------+---+--------------+--------+----------------+------+--------------+\n| Country|Age|Repeat_Visitor|Platform|Web_pages_viewed|Status|country_labels|\n+---------+---+--------------+--------+----------------+------+--------------+\n| India| 41| 1| Yahoo| 21| 1| 1.0|\n| Brazil| 28| 1| Yahoo| 5| 0| 2.0|\n| Brazil| 40| 0| Google| 3| 0| 2.0|\n|Indonesia| 31| 1| Bing| 15| 1| 0.0|\n| Malaysia| 32| 0| Google| 15| 1| 3.0|\n+---------+---+--------------+--------+----------------+------+--------------+\nonly showing top 5 rows\n\n" ], [ "lbl_enc = StringIndexer(inputCol='Platform', outputCol='Platform_labels').fit(df)", "_____no_output_____" ], [ "df = lbl_enc.transform(df)", "_____no_output_____" ], [ "ohe = OneHotEncoder(inputCol='country_labels',outputCol='country_vector').fit(df)", "_____no_output_____" ], [ "df = ohe.transform(df)", "_____no_output_____" ], [ "ohe = OneHotEncoder(inputCol='Platform_labels',outputCol='platform_vector').fit(df)", "_____no_output_____" ], [ "df = ohe.transform(df)", "_____no_output_____" ], [ "vec_asm = 
VectorAssembler(inputCols=['country_vector', 'Repeat_Visitor', 'Web_pages_viewed', 'platform_vector'], outputCol='features')", "_____no_output_____" ], [ "df = vec_asm.transform(df)", "_____no_output_____" ], [ "df = df.select('features','Status')", "_____no_output_____" ] ], [ [ "# Splitting Dataset", "_____no_output_____" ] ], [ [ "train_df, test_df = df.randomSplit([0.8,0.2])", "_____no_output_____" ], [ "train_df.groupBy('Status').count().show()", "+------+-----+\n|Status|count|\n+------+-----+\n|     1| 7984|\n|     0| 7978|\n+------+-----+\n\n" ] ], [ [ "# Training Model", "_____no_output_____" ] ], [ [ "from pyspark.ml.classification import LogisticRegression", "_____no_output_____" ], [ "logreg = LogisticRegression(labelCol='Status').fit(train_df)", "_____no_output_____" ] ], [ [ "# Prediction and Evaluation", "_____no_output_____" ] ], [ [ "preds = logreg.evaluate(test_df).predictions", "_____no_output_____" ], [ "preds = preds.select('Status','prediction')", "_____no_output_____" ], [ "TP = preds.filter((preds['Status'] == 1) & (preds['prediction'] == 1)).count()", "_____no_output_____" ], [ "TN = preds.filter((preds['Status'] == 0) & (preds['prediction'] == 0)).count()", "_____no_output_____" ], [ "FP = preds.filter((preds['Status'] == 0) & (preds['prediction'] == 1)).count()", "_____no_output_____" ], [ "FN = preds.filter((preds['Status'] == 1) & (preds['prediction'] == 0)).count()", "_____no_output_____" ], [ "acc = (TP+TN)/(TP+TN+FP+FN)", "_____no_output_____" ], [ "acc", "_____no_output_____" ], [ "precision = TP/(TP+FP)", "_____no_output_____" ], [ "recall = TP/(TP+FN)", "_____no_output_____" ], [ "precision", "_____no_output_____" ], [ "recall", "_____no_output_____" ] ] ]
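Two remarks on the cells above. First, the assembled feature vector originally pulled in the label column 'Status' and the model was fitted on the full df rather than train_df; both would leak information into the test metrics, which is why the corrected versions drop the label from inputCols and fit on the training split. Second, the four filter/count passes for the confusion matrix can be replaced by Spark's built-in evaluator. The sketch below assumes the preds DataFrame built above (label column 'Status', prediction column 'prediction'); note that weightedPrecision/weightedRecall average over both classes, so they will not exactly match the positive-class precision and recall computed above.

from pyspark.ml.evaluation import MulticlassClassificationEvaluator

# one evaluator, several metrics, all computed from the same predictions DataFrame
evaluator = MulticlassClassificationEvaluator(labelCol='Status', predictionCol='prediction')

accuracy  = evaluator.evaluate(preds, {evaluator.metricName: 'accuracy'})
precision = evaluator.evaluate(preds, {evaluator.metricName: 'weightedPrecision'})
recall    = evaluator.evaluate(preds, {evaluator.metricName: 'weightedRecall'})

print(accuracy, precision, recall)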
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a881f34010ab9bbdcf6be3eabae9e9376653024
179,890
ipynb
Jupyter Notebook
PyCitySchools_Challenge.ipynb
maldonado91/School-District-Analysis
b781b5ea52774b0fdf8cd626f631f2e0d365a929
[ "Apache-2.0" ]
null
null
null
PyCitySchools_Challenge.ipynb
maldonado91/School-District-Analysis
b781b5ea52774b0fdf8cd626f631f2e0d365a929
[ "Apache-2.0" ]
null
null
null
PyCitySchools_Challenge.ipynb
maldonado91/School-District-Analysis
b781b5ea52774b0fdf8cd626f631f2e0d365a929
[ "Apache-2.0" ]
null
null
null
37.167355
210
0.394069
[ [ [ "# Dependencies and Setup\nimport pandas as pd\n# import dataframe_image as dfi\n\n# File to Load (Remember to change the path if needed.)\nschool_data_to_load = \"Resources/schools_complete.csv\"\nstudent_data_to_load = \"Resources/students_complete.csv\"\n\n# Read the School Data and Student Data and store into a Pandas DataFrame\nschool_data_df = pd.read_csv(school_data_to_load)\nstudent_data_df = pd.read_csv(student_data_to_load)\n\n# Cleaning Student Names and Replacing Substrings in a Python String\n# Add each prefix and suffix to remove to a list.\nprefixes_suffixes = [\"Dr. \", \"Mr. \",\"Ms. \", \"Mrs. \", \"Miss \", \" MD\", \" DDS\", \" DVM\", \" PhD\"]\n\n# Iterate through the words in the \"prefixes_suffixes\" list and replace them with an empty space, \"\".\nfor word in prefixes_suffixes:\n student_data_df[\"student_name\"] = student_data_df[\"student_name\"].str.replace(word,\"\")\n\n# Check names.\nstudent_data_df.head(10)", "_____no_output_____" ] ], [ [ "## Deliverable 1: Replace the reading and math scores.\n\n### Replace the 9th grade reading and math scores at Thomas High School with NaN.", "_____no_output_____" ] ], [ [ "# Install numpy using conda install numpy or pip install numpy. \n# Step 1. Import numpy as np.\nimport numpy as np", "_____no_output_____" ], [ "student_data_df", "_____no_output_____" ], [ "# Step 2. Use the loc method on the student_data_df to select all the reading scores from the 9th grade at Thomas High School and replace them with NaN.\nstudent_data_df.loc[((student_data_df['school_name'] == 'Thomas High School') & (student_data_df['grade'] == '9th')), 'reading_score'] = np.nan", "_____no_output_____" ], [ "# Step 3. Refactor the code in Step 2 to replace the math scores with NaN.\nstudent_data_df.loc[((student_data_df['school_name'] == 'Thomas High School') & (student_data_df['grade'] == '9th')), 'math_score'] = np.nan", "_____no_output_____" ], [ "# Step 4. Check the student data for NaN's. \nstudent_data_df[student_data_df['reading_score'].isnull() & student_data_df['reading_score'].isnull()]", "_____no_output_____" ] ], [ [ "## Deliverable 2 : Repeat the school district analysis", "_____no_output_____" ], [ "### District Summary", "_____no_output_____" ] ], [ [ "# Combine the data into a single dataset\nschool_data_complete_df = pd.merge(student_data_df, school_data_df, how=\"left\", on=[\"school_name\", \"school_name\"])\nschool_data_complete_df.head()", "_____no_output_____" ], [ "# Calculate the Totals (Schools and Students)\nschool_count = len(school_data_complete_df[\"school_name\"].unique())\nstudent_count = school_data_complete_df[\"Student ID\"].count()\n\n# Calculate the Total Budget\ntotal_budget = school_data_df[\"budget\"].sum()", "_____no_output_____" ], [ "# Calculate the Average Scores using the \"clean_student_data\".\naverage_reading_score = school_data_complete_df[\"reading_score\"].mean()\naverage_math_score = school_data_complete_df[\"math_score\"].mean()", "_____no_output_____" ], [ "# Step 1. Get the number of students that are in ninth grade at Thomas High School.\n# These students have no grades. \nstudent_count_ninth_thomas = student_data_df[student_data_df['reading_score'].isnull() & student_data_df['reading_score'].isnull()].count()[0]\n\n\n# Get the total student count \nstudent_count = school_data_complete_df[\"Student ID\"].count()\n\n\n# Step 2. 
Subtract the number of students that are in ninth grade at \n# Thomas High School from the total student count to get the new total student count.\nnew_student_count = student_count - student_count_ninth_thomas\nnew_student_count", "_____no_output_____" ], [ "# Calculate the passing rates using the \"clean_student_data\".\npassing_math_count = school_data_complete_df[(school_data_complete_df[\"math_score\"] >= 70)].count()[\"student_name\"]\npassing_reading_count = school_data_complete_df[(school_data_complete_df[\"reading_score\"] >= 70)].count()[\"student_name\"]", "_____no_output_____" ], [ "# Step 3. Calculate the passing percentages with the new total student count.\npassing_math_percentage = (passing_math_count / new_student_count) * 100 \npassing_reading_percentage = (passing_reading_count / new_student_count) * 100\npassing_math_percentage, passing_reading_percentage", "_____no_output_____" ], [ "# Calculate the students who passed both reading and math.\npassing_math_reading = school_data_complete_df[(school_data_complete_df[\"math_score\"] >= 70)\n & (school_data_complete_df[\"reading_score\"] >= 70)]\n\n# Calculate the number of students that passed both reading and math.\noverall_passing_math_reading_count = passing_math_reading[\"student_name\"].count()\n\n\n# Step 4.Calculate the overall passing percentage with new total student count.\noverall_passing_math_reading_percentage = (overall_passing_math_reading_count / new_student_count) * 100\noverall_passing_math_reading_percentage", "_____no_output_____" ], [ "# Create a DataFrame\ndistrict_summary_df = pd.DataFrame(\n [{\"Total Schools\": school_count, \n \"Total Students\": student_count, \n \"Total Budget\": total_budget,\n \"Average Math Score\": average_math_score, \n \"Average Reading Score\": average_reading_score,\n \"% Passing Math\": passing_math_percentage,\n \"% Passing Reading\": passing_reading_percentage,\n \"% Overall Passing\": \noverall_passing_math_reading_percentage}])\n\n\n\n# Format the \"Total Students\" to have the comma for a thousands separator.\ndistrict_summary_df[\"Total Students\"] = district_summary_df[\"Total Students\"].map(\"{:,}\".format)\n# Format the \"Total Budget\" to have the comma for a thousands separator, a decimal separator and a \"$\".\ndistrict_summary_df[\"Total Budget\"] = district_summary_df[\"Total Budget\"].map(\"${:,.2f}\".format)\n# Format the columns.\ndistrict_summary_df[\"Average Math Score\"] = district_summary_df[\"Average Math Score\"].map(\"{:.1f}\".format)\ndistrict_summary_df[\"Average Reading Score\"] = district_summary_df[\"Average Reading Score\"].map(\"{:.1f}\".format)\ndistrict_summary_df[\"% Passing Math\"] = district_summary_df[\"% Passing Math\"].map(\"{:.1f}\".format)\ndistrict_summary_df[\"% Passing Reading\"] = district_summary_df[\"% Passing Reading\"].map(\"{:.1f}\".format)\ndistrict_summary_df[\"% Overall Passing\"] = district_summary_df[\"% Overall Passing\"].map(\"{:.1f}\".format)\n\n# Display the data frame\ndistrict_summary_df", "_____no_output_____" ] ], [ [ "## School Summary", "_____no_output_____" ] ], [ [ "# Determine the School Type\nper_school_types = school_data_df.set_index([\"school_name\"])[\"type\"]\n\n# Calculate the total student count.\nper_school_counts = school_data_complete_df[\"school_name\"].value_counts()\n\n# Calculate the total school budget and per capita spending\nper_school_budget = school_data_complete_df.groupby([\"school_name\"]).mean()[\"budget\"]\n# Calculate the per capita spending.\nper_school_capita = 
per_school_budget / per_school_counts\n\n# Calculate the average test scores.\nper_school_math = school_data_complete_df.groupby([\"school_name\"]).mean()[\"math_score\"]\nper_school_reading = school_data_complete_df.groupby([\"school_name\"]).mean()[\"reading_score\"]\n\n# Calculate the passing scores by creating a filtered DataFrame.\nper_school_passing_math = school_data_complete_df[(school_data_complete_df[\"math_score\"] >= 70)]\nper_school_passing_reading = school_data_complete_df[(school_data_complete_df[\"reading_score\"] >= 70)]\n\n# Calculate the number of students passing math and passing reading by school.\nper_school_passing_math = per_school_passing_math.groupby([\"school_name\"]).count()[\"student_name\"]\nper_school_passing_reading = per_school_passing_reading.groupby([\"school_name\"]).count()[\"student_name\"]\n\n# Calculate the percentage of passing math and reading scores per school.\nper_school_passing_math = per_school_passing_math / per_school_counts * 100\nper_school_passing_reading = per_school_passing_reading / per_school_counts * 100\n\n# Calculate the students who passed both reading and math.\nper_passing_math_reading = school_data_complete_df[(school_data_complete_df[\"reading_score\"] >= 70)\n & (school_data_complete_df[\"math_score\"] >= 70)]\n\n# Calculate the number of students passing math and passing reading by school.\nper_passing_math_reading = per_passing_math_reading.groupby([\"school_name\"]).count()[\"student_name\"]\n\n# Calculate the percentage of passing math and reading scores per school.\nper_overall_passing_percentage = per_passing_math_reading / per_school_counts * 100", "_____no_output_____" ], [ "# Create the DataFrame\nper_school_summary_df = pd.DataFrame({\n \"School Type\": per_school_types,\n \"Total Students\": per_school_counts,\n \"Total School Budget\": per_school_budget,\n \"Per Student Budget\": per_school_capita,\n \"Average Math Score\": per_school_math,\n \"Average Reading Score\": per_school_reading,\n \"% Passing Math\": per_school_passing_math,\n \"% Passing Reading\": per_school_passing_reading,\n \"% Overall Passing\": per_overall_passing_percentage})\n\n\nper_school_summary_df.head()", "_____no_output_____" ], [ "# Format the Total School Budget and the Per Student Budget\nper_school_summary_df[\"Total School Budget\"] = per_school_summary_df[\"Total School Budget\"].map(\"${:,.2f}\".format)\nper_school_summary_df[\"Per Student Budget\"] = per_school_summary_df[\"Per Student Budget\"].map(\"${:,.2f}\".format)\n\n# Display the data frame\nper_school_summary_df", "_____no_output_____" ], [ "# Step 5. Get the number of 10th-12th graders from Thomas High School (THS).\nstudent_count_non_ninth_thomas = school_data_complete_df.loc[(school_data_complete_df['school_name'] == 'Thomas High School') \n & (school_data_complete_df['grade'] != '9th')].count()[0]", "_____no_output_____" ], [ "# Step 6. Get all the students passing math from THS\nstudent_non_ninth_thomas = school_data_complete_df.loc[(school_data_complete_df['school_name'] == 'Thomas High School') \n & (school_data_complete_df['grade'] != '9th')]\nstudent_non_ninth_thomas_passing_math = student_non_ninth_thomas.loc[student_non_ninth_thomas['math_score'] >= 70]\nstudent_non_ninth_thomas_passing_math", "_____no_output_____" ], [ "# Step 7. 
Get all the students passing reading from THS\nstudent_non_ninth_thomas_passing_reading = student_non_ninth_thomas.loc[student_non_ninth_thomas['reading_score'] >= 70]\nstudent_non_ninth_thomas_passing_reading", "_____no_output_____" ], [ "# Step 8. Get all the students passing math and reading from THS\nstudent_non_ninth_thomas_passing_math_reading = student_non_ninth_thomas.loc[(student_non_ninth_thomas['math_score'] >= 70) &(student_non_ninth_thomas['reading_score'] >= 70)]", "_____no_output_____" ], [ "# Step 9. Calculate the percentage of 10th-12th grade students passing math from Thomas High School. \n(student_non_ninth_thomas_passing_math.count()[0] / student_non_ninth_thomas.count()[0]) * 100", "_____no_output_____" ], [ "# Step 10. Calculate the percentage of 10th-12th grade students passing reading from Thomas High School.\n(student_non_ninth_thomas_passing_reading.count()[0] / student_non_ninth_thomas.count()[0]) * 100", "_____no_output_____" ], [ "# Step 11. Calculate the overall passing percentage of 10th-12th grade from Thomas High School. \n(student_non_ninth_thomas_passing_math_reading.count()[0] / student_non_ninth_thomas.count()[0]) * 100\n", "_____no_output_____" ], [ "# Step 12. Replace the passing math percent for Thomas High School in the per_school_summary_df.\nper_school_summary_df.loc[per_school_summary_df.index == 'Thomas High School', '% Passing Math'] = (student_non_ninth_thomas_passing_math.count()[0] / student_non_ninth_thomas.count()[0]) * 100", "_____no_output_____" ], [ "# Step 13. Replace the passing reading percentage for Thomas High School in the per_school_summary_df.\nper_school_summary_df.loc[per_school_summary_df.index == 'Thomas High School', '% Passing Reading'] = (student_non_ninth_thomas_passing_reading.count()[0] / student_non_ninth_thomas.count()[0]) * 100", "_____no_output_____" ], [ "# Step 14. 
Replace the overall passing percentage for Thomas High School in the per_school_summary_df.\nper_school_summary_df.loc[per_school_summary_df.index == 'Thomas High School', '% Overall Passing'] = (student_non_ninth_thomas_passing_math_reading.count()[0] / student_non_ninth_thomas.count()[0]) * 100", "_____no_output_____" ], [ "per_school_summary_df", "_____no_output_____" ] ], [ [ "## High and Low Performing Schools ", "_____no_output_____" ] ], [ [ "# Sort and show top five schools.\nper_school_summary_df.sort_values('% Overall Passing', ascending=False).head(5)", "_____no_output_____" ], [ "# Sort and show bottom five schools.\nper_school_summary_df.sort_values('% Overall Passing', ascending=True).head(5)", "_____no_output_____" ] ], [ [ "## Math and Reading Scores by Grade", "_____no_output_____" ] ], [ [ "# Create a Series of scores by grade levels using conditionals.\nninth_graders = school_data_complete_df[(school_data_complete_df[\"grade\"] == \"9th\")]\ntenth_graders = school_data_complete_df[(school_data_complete_df[\"grade\"] == \"10th\")]\neleventh_graders = school_data_complete_df[(school_data_complete_df[\"grade\"] == \"11th\")]\ntwelfth_graders = school_data_complete_df[(school_data_complete_df[\"grade\"] == \"12th\")]\n\n# Group each school Series by the school name for the average math score.\nmath_score_ninth_graders = ninth_graders.groupby('school_name').mean()['math_score']\nmath_score_tenth_graders = tenth_graders.groupby('school_name').mean()['math_score']\nmath_score_eleventh_graders = eleventh_graders.groupby('school_name').mean()['math_score']\nmath_score_twelfth_graders = twelfth_graders.groupby('school_name').mean()['math_score']\n\n# Group each school Series by the school name for the average reading score.\nreading_score_ninth_graders = ninth_graders.groupby('school_name').mean()['reading_score']\nreading_score_tenth_graders = tenth_graders.groupby('school_name').mean()['reading_score']\nreading_score_eleventh_graders = eleventh_graders.groupby('school_name').mean()['reading_score']\nreading_score_twelfth_graders = twelfth_graders.groupby('school_name').mean()['reading_score']", "_____no_output_____" ], [ "# Combine each Series for average math scores by school into single data frame.\nmean_math_school = pd.DataFrame({\n '9th': math_score_ninth_graders,\n '10th': math_score_tenth_graders,\n '11th': math_score_eleventh_graders,\n '12th': math_score_twelfth_graders\n})\nmean_math_school.head()", "_____no_output_____" ], [ "# Combine each Series for average reading scores by school into single data frame.\nmean_reading_school = pd.DataFrame({\n '9th': reading_score_ninth_graders,\n '10th': reading_score_tenth_graders,\n '11th': reading_score_eleventh_graders,\n '12th': reading_score_twelfth_graders\n})\nmean_reading_school.head()", "_____no_output_____" ], [ "mean_math_school.dtypes", "_____no_output_____" ], [ "# Format each grade column.\nmean_math_school[\"9th\"] = mean_math_school[\"9th\"].map(\"{:,.1f}\".format)\nmean_math_school[\"10th\"] = mean_math_school[\"10th\"].map(\"{:,.1f}\".format)\nmean_math_school[\"11th\"] = mean_math_school[\"11th\"].map(\"{:,.1f}\".format)\nmean_math_school[\"12th\"] = mean_math_school[\"12th\"].map(\"{:,.1f}\".format)\n\nmean_reading_school[\"9th\"] = mean_reading_school[\"9th\"].map(\"{:,.1f}\".format)\nmean_reading_school[\"10th\"] = mean_reading_school[\"10th\"].map(\"{:,.1f}\".format)\nmean_reading_school[\"11th\"] = mean_reading_school[\"11th\"].map(\"{:,.1f}\".format)\nmean_reading_school[\"12th\"] = 
mean_reading_school[\"12th\"].map(\"{:,.1f}\".format)", "_____no_output_____" ], [ "# Remove the index.\nmean_math_school.index.name = None\n\n\n# Display the data frame\nmean_math_school", "_____no_output_____" ], [ "## Remove the index.\nmean_reading_school.index.name = None\n\n# Display the data frame\nmean_reading_school", "_____no_output_____" ] ], [ [ "## Scores by School Spending", "_____no_output_____" ] ], [ [ "# Establish the spending bins and group names.\nspending_bins = [0, 585, 630, 645, 675]\ngroup_names = [\"<$584\", \"$585-629\", \"$630-644\", \"$645-675\"]\n\n# Categorize spending based on the bins.\nper_school_summary_df[\"Spending Ranges (Per Student)\"] = pd.cut(per_school_capita, spending_bins, labels=group_names)\n\nper_school_summary_df", "_____no_output_____" ], [ "# Calculate averages for the desired columns. \nspending_math_scores = per_school_summary_df.groupby([\"Spending Ranges (Per Student)\"]).mean()[\"Average Math Score\"]\n\nspending_reading_scores = per_school_summary_df.groupby([\"Spending Ranges (Per Student)\"]).mean()[\"Average Reading Score\"]\n\nspending_passing_math = per_school_summary_df.groupby([\"Spending Ranges (Per Student)\"]).mean()[\"% Passing Math\"]\n\nspending_passing_reading = per_school_summary_df.groupby([\"Spending Ranges (Per Student)\"]).mean()[\"% Passing Reading\"]\n\noverall_passing_spending = per_school_summary_df.groupby([\"Spending Ranges (Per Student)\"]).mean()[\"% Overall Passing\"]", "_____no_output_____" ], [ "# Create the DataFrame\nspending_summary_df = pd.DataFrame({\n \"Average Math Score\" : spending_math_scores,\n \"Average Reading Score\": spending_reading_scores,\n \"% Passing Math\": spending_passing_math,\n \"% Passing Reading\": spending_passing_reading,\n \"% Overall Passing\": overall_passing_spending})\n\nspending_summary_df", "_____no_output_____" ], [ "# Format the DataFrame \nspending_summary_df[\"Average Math Score\"] = spending_summary_df[\"Average Math Score\"].map(\"{:.1f}\".format)\nspending_summary_df[\"Average Reading Score\"] = spending_summary_df[\"Average Reading Score\"].map(\"{:.1f}\".format)\nspending_summary_df[\"% Passing Math\"] = spending_summary_df[\"% Passing Math\"].map(\"{:.0f}\".format)\nspending_summary_df[\"% Passing Reading\"] = spending_summary_df[\"% Passing Reading\"].map(\"{:.0f}\".format)\nspending_summary_df[\"% Overall Passing\"] = spending_summary_df[\"% Overall Passing\"].map(\"{:.0f}\".format)\nspending_summary_df", "_____no_output_____" ] ], [ [ "## Scores by School Size", "_____no_output_____" ] ], [ [ "# Establish the bins.\nsize_bins = [0, 1000, 2000, 5000]\ngroup_names = [\"Small (<1000)\", \"Medium (1000-2000)\", \"Large (2000-5000)\"]\n\n# Categorize spending based on the bins.\nper_school_summary_df[\"School Size\"] = pd.cut(per_school_summary_df['Total Students'], size_bins, labels=group_names)\n\nper_school_summary_df", "_____no_output_____" ], [ "per_school_summary_df[per_school_summary_df.index == 'Thomas High School']", "_____no_output_____" ], [ "# Calculate averages for the desired columns. 
\n# Calculate averages for the desired columns.\nsize_math_scores = per_school_summary_df.groupby([\"School Size\"]).mean()[\"Average Math Score\"]\n\nsize_reading_scores = per_school_summary_df.groupby([\"School Size\"]).mean()[\"Average Reading Score\"]\n\nsize_passing_math = per_school_summary_df.groupby([\"School Size\"]).mean()[\"% Passing Math\"]\n\nsize_passing_reading = per_school_summary_df.groupby([\"School Size\"]).mean()[\"% Passing Reading\"]\n\nsize_overall_passing = per_school_summary_df.groupby([\"School Size\"]).mean()[\"% Overall Passing\"]", "_____no_output_____" ], [ "# Assemble into DataFrame. \nsize_summary_df = pd.DataFrame({\n \"Average Math Score\" : size_math_scores,\n \"Average Reading Score\": size_reading_scores,\n \"% Passing Math\": size_passing_math,\n \"% Passing Reading\": size_passing_reading,\n \"% Overall Passing\": size_overall_passing})\n\nsize_summary_df", "_____no_output_____" ], [ "# Format the DataFrame \n# Formatting.\nsize_summary_df[\"Average Math Score\"] = size_summary_df[\"Average Math Score\"].map(\"{:.2f}\".format)\n\nsize_summary_df[\"Average Reading Score\"] = size_summary_df[\"Average Reading Score\"].map(\"{:.2f}\".format)\n\nsize_summary_df[\"% Passing Math\"] = size_summary_df[\"% Passing Math\"].map(\"{:.2f}\".format)\n\nsize_summary_df[\"% Passing Reading\"] = size_summary_df[\"% Passing Reading\"].map(\"{:.2f}\".format)\n\nsize_summary_df[\"% Overall Passing\"] = size_summary_df[\"% Overall Passing\"].map(\"{:.2f}\".format)\n\nsize_summary_df", "_____no_output_____" ] ], [ [ "## Scores by School Type", "_____no_output_____" ] ], [ [ "# Calculate averages for the desired columns. \ntype_math_scores = per_school_summary_df.groupby([\"School Type\"]).mean()[\"Average Math Score\"]\n\ntype_reading_scores = per_school_summary_df.groupby([\"School Type\"]).mean()[\"Average Reading Score\"]\n\ntype_passing_math = per_school_summary_df.groupby([\"School Type\"]).mean()[\"% Passing Math\"]\n\ntype_passing_reading = per_school_summary_df.groupby([\"School Type\"]).mean()[\"% Passing Reading\"]\n\ntype_overall_passing = per_school_summary_df.groupby([\"School Type\"]).mean()[\"% Overall Passing\"]", "_____no_output_____" ], [ "# Assemble into DataFrame. \ntype_summary_df = pd.DataFrame({\n \"Average Math Score\" : type_math_scores,\n \"Average Reading Score\": type_reading_scores,\n \"% Passing Math\": type_passing_math,\n \"% Passing Reading\": type_passing_reading,\n \"% Overall Passing\": type_overall_passing})\n\ntype_summary_df", "_____no_output_____" ], [ "# # Format the DataFrame \ntype_summary_df[\"Average Math Score\"] = type_summary_df[\"Average Math Score\"].map(\"{:.1f}\".format)\n\ntype_summary_df[\"Average Reading Score\"] = type_summary_df[\"Average Reading Score\"].map(\"{:.1f}\".format)\n\ntype_summary_df[\"% Passing Math\"] = type_summary_df[\"% Passing Math\"].map(\"{:.0f}\".format)\n\ntype_summary_df[\"% Passing Reading\"] = type_summary_df[\"% Passing Reading\"].map(\"{:.0f}\".format)\n\ntype_summary_df[\"% Overall Passing\"] = type_summary_df[\"% Overall Passing\"].map(\"{:.0f}\".format)\n\ntype_summary_df", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
4a88302f70b274b096b03d75bd5434e1769a8506
10,338
ipynb
Jupyter Notebook
components/gcp/ml_engine/batch_predict/sample.ipynb
JohnPaton/pipelines
d673a1f954ff4f5a54336cb6f9e8748a9ca5502d
[ "Apache-2.0" ]
null
null
null
components/gcp/ml_engine/batch_predict/sample.ipynb
JohnPaton/pipelines
d673a1f954ff4f5a54336cb6f9e8748a9ca5502d
[ "Apache-2.0" ]
null
null
null
components/gcp/ml_engine/batch_predict/sample.ipynb
JohnPaton/pipelines
d673a1f954ff4f5a54336cb6f9e8748a9ca5502d
[ "Apache-2.0" ]
null
null
null
36.659574
338
0.619269
[ [ [ "# Batch predicting using Cloud Machine Learning Engine\nA Kubeflow Pipeline component to submit a batch prediction job against a trained model to Cloud ML Engine service.\n\n## Intended use\nUse the component to run a batch prediction job against a deployed model in Cloud Machine Learning Engine. The prediction output will be stored in a Cloud Storage bucket.\n\n## Runtime arguments\nName | Description | Type | Optional | Default\n:--- | :---------- | :--- | :------- | :------\nproject_id | The ID of the parent project of the job. | GCPProjectID | No |\nmodel_path | Required. The path to the model. It can be one of the following paths:<ul><li>`projects/[PROJECT_ID]/models/[MODEL_ID]`</li><li>`projects/[PROJECT_ID]/models/[MODEL_ID]/versions/[VERSION_ID]`</li><li>Cloud Storage path of a model file.</li></ul> | String | No |\ninput_paths | The Cloud Storage location of the input data files. May contain wildcards. For example: `gs://foo/*.csv` | List | No |\ninput_data_format | The format of the input data files. See [DataFormat](https://cloud.google.com/ml-engine/reference/rest/v1/projects.jobs#DataFormat). | String | No |\noutput_path | The Cloud Storage location for the output data. | GCSPath | No |\nregion | The region in Compute Engine where the prediction job is run. | GCPRegion | No |\noutput_data_format | The format of the output data files. See [DataFormat](https://cloud.google.com/ml-engine/reference/rest/v1/projects.jobs#DataFormat). | String | Yes | `JSON`\nprediction_input | The JSON input parameters to create a prediction job. See [PredictionInput](https://cloud.google.com/ml-engine/reference/rest/v1/projects.jobs#PredictionInput) to know more. | Dict | Yes | ` `\njob_id_prefix | The prefix of the generated job id. | String | Yes | ` `\nwait_interval | A time-interval to wait for in case the operation has a long run time. | Integer | Yes | `30`\n\n## Output\nName | Description | Type\n:--- | :---------- | :---\njob_id | The ID of the created batch job. | String\noutput_path | The output path of the batch prediction job | GCSPath\n\n\n## Cautions & requirements\n\nTo use the component, you must:\n* Setup cloud environment by following the [guide](https://cloud.google.com/ml-engine/docs/tensorflow/getting-started-training-prediction#setup).\n* The component is running under a secret of [Kubeflow user service account](https://www.kubeflow.org/docs/started/getting-started-gke/#gcp-service-accounts) in a Kubeflow cluster. For example:\n\n```python\nmlengine_predict_op(...).apply(gcp.use_gcp_secret('user-gcp-sa'))\n\n```\n* Grant Kubeflow user service account the read access to the Cloud Storage buckets which contains the input data.\n* Grant Kubeflow user service account the write access to the Cloud Storage bucket of the output directory.\n\n\n## Detailed Description\n\nThe component accepts following input data:\n* A trained model: it can be a model file in Cloud Storage, or a deployed model or version in Cloud Machine Learning Engine. The path to the model is specified by the `model_path` parameter.\n* Input data: the data will be used to make predictions against the input trained model. The data can be in [multiple formats](https://cloud.google.com/ml-engine/reference/rest/v1/projects.jobs#DataFormat). The path of the data is specified by `input_paths` parameter and the format is specified by `input_data_format` parameter.\n\nHere are the steps to use the component in a pipeline:\n1. 
Install KFP SDK\n", "_____no_output_____" ] ], [ [ "%%capture --no-stderr\n\nKFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.14/kfp.tar.gz'\n!pip3 install $KFP_PACKAGE --upgrade", "_____no_output_____" ] ], [ [ "2. Load the component using KFP SDK", "_____no_output_____" ] ], [ [ "import kfp.components as comp\n\nmlengine_batch_predict_op = comp.load_component_from_url(\n 'https://raw.githubusercontent.com/kubeflow/pipelines/d2f5cc92a46012b9927209e2aaccab70961582dc/components/gcp/ml_engine/batch_predict/component.yaml')\nhelp(mlengine_batch_predict_op)", "_____no_output_____" ] ], [ [ "For more information about the component, please checkout:\n* [Component python code](https://github.com/kubeflow/pipelines/blob/master/component_sdk/python/kfp_component/google/ml_engine/_batch_predict.py)\n* [Component docker file](https://github.com/kubeflow/pipelines/blob/master/components/gcp/container/Dockerfile)\n* [Sample notebook](https://github.com/kubeflow/pipelines/blob/master/components/gcp/ml_engine/batch_predict/sample.ipynb)\n* [Cloud Machine Learning Engine job REST API](https://cloud.google.com/ml-engine/reference/rest/v1/projects.jobs)\n\n### Sample Code\n\nNote: the sample code below works in both IPython notebook or python code directly.\n\nIn this sample, we batch predict against a pre-built trained model from `gs://ml-pipeline-playground/samples/ml_engine/census/trained_model/` and use the test data from `gs://ml-pipeline-playground/samples/ml_engine/census/test.json`. \n\n#### Inspect the test data", "_____no_output_____" ] ], [ [ "!gsutil cat gs://ml-pipeline-playground/samples/ml_engine/census/test.json", "_____no_output_____" ] ], [ [ "#### Set sample parameters", "_____no_output_____" ] ], [ [ "# Required Parameters\nPROJECT_ID = '<Please put your project ID here>'\nGCS_WORKING_DIR = 'gs://<Please put your GCS path here>' # No ending slash", "_____no_output_____" ], [ "# Optional Parameters\nEXPERIMENT_NAME = 'CLOUDML - Batch Predict'\nOUTPUT_GCS_PATH = GCS_WORKING_DIR + '/batch_predict/output/'", "_____no_output_____" ] ], [ [ "#### Example pipeline that uses the component", "_____no_output_____" ] ], [ [ "import kfp.dsl as dsl\nimport kfp.gcp as gcp\nimport json\[email protected](\n name='CloudML batch predict pipeline',\n description='CloudML batch predict pipeline'\n)\ndef pipeline(\n project_id = PROJECT_ID, \n model_path = 'gs://ml-pipeline-playground/samples/ml_engine/census/trained_model/', \n input_paths = '[\"gs://ml-pipeline-playground/samples/ml_engine/census/test.json\"]', \n input_data_format = 'JSON', \n output_path = OUTPUT_GCS_PATH, \n region = 'us-central1', \n output_data_format='', \n prediction_input = json.dumps({\n 'runtimeVersion': '1.10'\n }), \n job_id_prefix='',\n wait_interval='30'):\n mlengine_batch_predict_op(\n project_id=project_id, \n model_path=model_path, \n input_paths=input_paths, \n input_data_format=input_data_format, \n output_path=output_path, \n region=region, \n output_data_format=output_data_format, \n prediction_input=prediction_input, \n job_id_prefix=job_id_prefix,\n wait_interval=wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa'))", "_____no_output_____" ] ], [ [ "#### Compile the pipeline", "_____no_output_____" ] ], [ [ "pipeline_func = pipeline\npipeline_filename = pipeline_func.__name__ + '.zip'\nimport kfp.compiler as compiler\ncompiler.Compiler().compile(pipeline_func, pipeline_filename)", "_____no_output_____" ] ], [ [ "#### Submit the pipeline for execution", "_____no_output_____" ] ], [ [ "#Specify 
pipeline argument values\narguments = {}\n\n#Get or create an experiment and submit a pipeline run\nimport kfp\nclient = kfp.Client()\nexperiment = client.create_experiment(EXPERIMENT_NAME)\n\n#Submit a pipeline run\nrun_name = pipeline_func.__name__ + ' run'\nrun_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)", "_____no_output_____" ] ], [ [ "#### Inspect prediction results", "_____no_output_____" ] ], [ [ "OUTPUT_FILES_PATTERN = OUTPUT_GCS_PATH + '*'\n!gsutil cat OUTPUT_FILES_PATTERN", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a88366e7cd8a990e306b15a63fd327e9f35aae9
27,754
ipynb
Jupyter Notebook
10 Steps to Become a Data Scientist.ipynb
cboychinedu/Data_Analysis_Workflow
6912a032ae7cc70a77a749cf3aa43ea1bda637e8
[ "Apache-2.0" ]
null
null
null
10 Steps to Become a Data Scientist.ipynb
cboychinedu/Data_Analysis_Workflow
6912a032ae7cc70a77a749cf3aa43ea1bda637e8
[ "Apache-2.0" ]
null
null
null
10 Steps to Become a Data Scientist.ipynb
cboychinedu/Data_Analysis_Workflow
6912a032ae7cc70a77a749cf3aa43ea1bda637e8
[ "Apache-2.0" ]
null
null
null
29.153361
429
0.592563
[ [ [ " ## <div align=\"center\"> 10 Steps to Become a Data Scientist +20Q</div>\n <div align=\"center\">**quite practical and far from any theoretical concepts**</div>\n<div style=\"text-align:center\">last update: <b>15/01/2019</b></div>\n<img src=\"http://s9.picofile.com/file/8338833934/DS.png\"/>", "_____no_output_____" ], [ "\n\n---------------------------------------------------------------------\nFork and Run this course on GitHub:\n> #### [ GitHub](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist)\n\n\n-------------------------------------------------------------------------------------------------------------\n <b>I hope you find this kernel helpful and some <font color=\"red\"> UPVOTES</font> would be very much appreciated.</b>\n \n -----------\n", "_____no_output_____" ], [ " <a id=\"top\"></a> <br>\n**Notebook Content**\n\n [Introduction](#Introduction)\n1. [Python](#Python)\n1. [Python Packages](#PythonPackages)\n1. [Mathematics and Linear Algebra](#Algebra)\n1. [Programming & Analysis Tools](#Programming)\n1. [Big Data](#BigData)\n1. [Data visualization](#Datavisualization)\n1. [Data Cleaning](#DataCleaning)\n1. [How to solve Problem?](#Howto)\n1. [Machine Learning](#MachineLearning)\n1. [Deep Learning](#DeepLearning)", "_____no_output_____" ], [ " <a id=\"Introduction\"></a> <br>\n# Introduction\nIf you Read and Follow **Job Ads** to hire a machine learning expert or a data scientist, you find that some skills you should have to get the job. In this Kernel, I want to review **10 skills** that are essentials to get the job. In fact, this kernel is a reference for **10 other kernels**, which you can learn with them, all of the skills that you need. \n\nWe have used two well-known DataSets **Titanic** and **House prices** for starting but when you learned python and python packages, you can start using other datasets too. \n\n**Ready to learn**! you will learn 10 skills as data scientist: \n\n1. [Learn Python](https://www.kaggle.com/mjbahmani/the-data-scientist-s-toolbox-tutorial-1)\n1. [Learn python packages](https://www.kaggle.com/mjbahmani/the-data-scientist-s-toolbox-tutorial-2) \n1. [Linear Algebra for Data Scientists](https://www.kaggle.com/mjbahmani/linear-algebra-for-data-scientists)\n1. [Programming & Analysis Tools](https://www.kaggle.com/mjbahmani/machine-learning-workflow-for-house-prices)\n1. [Big Data](https://www.kaggle.com/mjbahmani/a-data-science-framework-for-quora)\n1. [Top 5 Data Visualization Libraries Tutorial](https://www.kaggle.com/mjbahmani/top-5-data-visualization-libraries-tutorial)\n1. [How to solve Problem?](https://www.kaggle.com/mjbahmani/a-data-science-framework-for-quora)\n1. [Data Cleaning](https://www.kaggle.com/mjbahmani/some-eda-for-elo)\n1. [Machine Learning](https://www.kaggle.com/mjbahmani/a-comprehensive-ml-workflow-with-python)\n1. [Deep Learning](https://www.kaggle.com/mjbahmani/top-5-deep-learning-frameworks-tutorial) \n\nThanks to **Kaggle team** due to provide a great professional community for Data Scientists.\n###### [go to top](#top)", "_____no_output_____" ], [ " <a id=\"1\"></a> <br>\n# 1-Python\nThe first step in this course for beginners is python learning. 
It takes just **10 hours** to learn Python.\n\nFor reading this section, **please** fork and run the following kernel:\n\n[Learn Python](https://www.kaggle.com/mjbahmani/the-data-scientist-s-toolbox-tutorial-1)\n \n ###### [go to top](#top)", "_____no_output_____" ], [ "<a id=\"PythonPackages\"></a> <br>\n# 2-Python Packages\nIn the second step, we will learn the necessary libraries that are essential for any data scientist.\n1. Numpy\n1. Pandas\n1. Matplotlib\n1. Seaborn\n1. TensorFlow\n1. NLTK\n1. Sklearn\nand so on\n\n<img src=\"http://s8.picofile.com/file/8338227868/packages.png\">\n\nFor reading this section, **please** fork and run this kernel:\n\n\n\n1. [The data scientist's toolbox tutorial 1](https://www.kaggle.com/mjbahmani/the-data-scientist-s-toolbox-tutorial-1)\n\n1. [The data scientist's toolbox tutorial 2](https://www.kaggle.com/mjbahmani/the-data-scientist-s-toolbox-tutorial-2)\n\n###### [go to top](#top)", "_____no_output_____" ], [ "<a id=\"Algebra\"></a> <br>\n## 3- Mathematics and Linear Algebra\nLinear algebra is the branch of mathematics that deals with vector spaces. A good understanding of Linear Algebra is essential for analyzing Machine Learning algorithms, especially for Deep Learning, where so much happens behind the curtain. You have my word that I will try to keep mathematical formulas & derivations out of this completely mathematical topic, and I will try to cover all of the subjects that you need as a data scientist.\n\n<img src=\" https://s3.amazonaws.com/www.mathnasium.com/upload/824/images/algebra.jpg \" height=\"300\" width=\"300\">\n\nFor reading this section, **please** fork and run this kernel:\n\n[Linear Algebra for Data Scientists](https://www.kaggle.com/mjbahmani/linear-algebra-for-data-scientists)\n###### [go to top](#top)", "_____no_output_____" ], [ "<a id=\"Programming\"></a> <br>\n## 4- Programming & Analysis Tools\n\nThis section is not completed yet, but for an alternative, **please** fork and run this kernel:\n\n[Programming & Analysis Tools](https://www.kaggle.com/mjbahmani/machine-learning-workflow-for-house-prices)\n\n###### [go to top](#top)", "_____no_output_____" ], [ "<a id=\"BigData\"></a> <br>\n## 5- Big Data\n\nThis section is not completed yet, but for an alternative, **please** fork and run this kernel:\n\n[Big Data](https://www.kaggle.com/mjbahmani/a-data-science-framework-for-quora)\n", "_____no_output_____" ], [ "<a id=\"Datavisualization\"></a> <br>\n## 6- Data Visualization\nFor reading this section, **please** fork and upvote this kernel:\n\n[Top 5 Data Visualization Libraries Tutorial](https://www.kaggle.com/mjbahmani/top-5-data-visualization-libraries-tutorial)", "_____no_output_____" ], [ "<a id=\"DataCleaning\"></a> <br>\n## 7- Data Cleaning\nAnother important step on the way to specialization is certainly learning how to clean the data.\nIn this section, we will do this on the Elo data set.\nFor reading this section, **please** fork and upvote this kernel:\n\n[Data Cleaning](https://www.kaggle.com/mjbahmani/some-eda-for-elo)", "_____no_output_____" ], [ "<a id=\"Howto\"></a> <br>\n## 8- How to Solve a Problem?\nIf you have already read some [machine learning books](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist/tree/master/Ebooks), 
you may have noticed that there are different ways to stream data into machine learning.\n\nMost of these books share the following steps (checklist):\n* Define the Problem (look at the big picture)\n* Specify Inputs & Outputs\n* Data Collection\n* Exploratory data analysis\n* Data Preprocessing\n* Model Design, Training, and Offline Evaluation\n* Model Deployment, Online Evaluation, and Monitoring\n* Model Maintenance, Diagnosis, and Retraining\n\n**You can see my workflow in the image below**:\n <img src=\"http://s9.picofile.com/file/8338227634/workflow.png\" />\n## 8-1 Real-world Applications vs. Competitions\nJust a simple comparison between real-world apps and competitions:\n<img src=\"http://s9.picofile.com/file/8339956300/reallife.png\" height=\"600\" width=\"500\" />\n**You should feel free to adapt this checklist to your needs**\n \n## 8-2 Problem Definition\nI think one of the important things when you start a new machine learning project is defining your problem. That means you should understand the business problem (**Problem Formalization**).\n\nProblem Definition has four steps that are illustrated in the picture below:\n<img src=\"http://s8.picofile.com/file/8338227734/ProblemDefination.png\">\n \n### 8-2-1 Problem Feature\nThe sinking of the Titanic is one of the most infamous shipwrecks in history. **On April 15, 1912**, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing **1502 out of 2224** passengers and crew. That's why the name DieTanic. This is a very unforgettable disaster that no one in the world can forget.\n\nIt took about $7.5 million to build the Titanic, and it sank beneath the ocean due to a collision. The Titanic Dataset is a very good dataset for beginners to start a journey in data science and participate in competitions on Kaggle.\n\nWe will use the classic Titanic data set. This dataset contains information about **11 different variables**:\n<img src=\"http://s9.picofile.com/file/8340453092/Titanic_feature.png\" height=\"500\" width=\"500\">\n\n* Survival\n* Pclass\n* Name\n* Sex\n* Age\n* SibSp\n* Parch\n* Ticket\n* Fare\n* Cabin\n* Embarked\n\n<font color='red'><b>Question</b></font>\n1. It's your turn: what are the House Prices dataset's features?\n\n### 8-2-2 Aim\n\nIt is your job to predict if a passenger survived the sinking of the Titanic or not. For each PassengerId in the test set, you must predict a 0 or 1 value for the Survived variable.\n\n \n### 8-2-3 Variables\n\n1. **Age** ==>> Age is fractional if less than 1. If the age is estimated, it is in the form of xx.5\n\n2. **Sibsp** ==>> The dataset defines family relations in this way...\n\n    a. Sibling = brother, sister, stepbrother, stepsister\n\n    b. Spouse = husband, wife (mistresses and fiancés were ignored)\n\n3. **Parch** ==>> The dataset defines family relations in this way...\n\n    a. Parent = mother, father\n\n    b. Child = daughter, son, stepdaughter, stepson\n\n    c. Some children travelled only with a nanny, therefore parch=0 for them.\n\n4. **Pclass** ==>> A proxy for socio-economic status (SES)\n\n * 1st = Upper\n * 2nd = Middle\n * 3rd = Lower\n \n5. **Embarked** ==>> nominal datatype \n6. **Name** ==>> nominal datatype. It could be used in feature engineering to derive the gender from the title\n7. **Sex** ==>> nominal datatype \n8. **Ticket** ==>> has no impact on the outcome variable. Thus, it will be excluded from the analysis\n9. **Cabin** ==>> is a nominal datatype that can be used in feature engineering\n10. **Fare** ==>> Indicating the fare\n11. 
**PassengerID** ==>> has no impact on the outcome variable. Thus, it will be excluded from the analysis\n12. **Survival** ==>> is the **[dependent variable](http://www.dailysmarty.com/posts/difference-between-independent-and-dependent-variables-in-machine-learning)**, 0 or 1\n\n\n**<< Note >>**\n\n> You must answer the following question:\nHow does your company expect to use and benefit from your model?\n\nFor reading this section, **please** fork and upvote this kernel:\n\n[How to solve Problem?](https://www.kaggle.com/mjbahmani/a-data-science-framework-for-quora)\n###### [Go to top](#top)", "_____no_output_____" ], [ "<a id=\"MachineLearning\"></a> <br>\n## 9- Machine Learning\nFor reading this section, **please** fork and upvote this kernel:\n\n[A Comprehensive ML Workflow with Python](https://www.kaggle.com/mjbahmani/a-comprehensive-ml-workflow-with-python)\n\n", "_____no_output_____" ], [ "<a id=\"DeepLearning\"></a> <br>\n## 10- Deep Learning\n\nFor reading this section, **please** fork and upvote this kernel:\n\n[A-Comprehensive-Deep-Learning-Workflow-with-Python](https://www.kaggle.com/mjbahmani/a-comprehensive-deep-learning-workflow-with-python)\n\n---------------------------\n", "_____no_output_____" ], [ "# <div align=\"center\"> 50 machine learning questions & answers for Beginners </div>\nIf you are studying this kernel, you're probably at the beginning of this journey. Here is some useful Python code you need to get started.", "_____no_output_____" ] ], [ [ "import matplotlib.animation as animation\nfrom matplotlib.figure import Figure\nimport plotly.figure_factory as ff\nimport matplotlib.pylab as pylab\nfrom ipywidgets import interact\nimport plotly.graph_objs as go\nimport plotly.offline as py\nfrom random import randint\nfrom plotly import tools\nimport seaborn as sns\nimport pandas as pd\nimport numpy as np\nimport matplotlib\nimport warnings\nimport string\nimport numpy\nimport csv\nimport os", "_____no_output_____" ] ], [ [ "## 1- How to import your data?", "_____no_output_____" ], [ "What do you have in your data folder?", "_____no_output_____" ] ], [ [ "print(os.listdir(\"../input/\"))", "_____no_output_____" ] ], [ [ "Import all of your data", "_____no_output_____" ] ], [ [ "titanic_train=pd.read_csv('../input/train.csv')\ntitanic_test=pd.read_csv('../input/test.csv')", "_____no_output_____" ] ], [ [ "Or import just 10% of your data", "_____no_output_____" ] ], [ [ "titanic_train2=pd.read_csv('../input/train.csv',nrows=1000)\n ", "_____no_output_____" ] ], [ [ "How to see the size of your data:", "_____no_output_____" ] ], [ [ "print(\"Train: rows:{} columns:{}\".format(titanic_train.shape[0], titanic_train.shape[1]))", "_____no_output_____" ] ], [ [ "**For reading more about how to import your data, you can visit: **[this Kernel](https://www.kaggle.com/dansbecker/finding-your-files-in-kaggle-kernels)", "_____no_output_____" ], [ "## 2- How to check for missing data?", "_____no_output_____" ] ], [ [ "titanic_train.isna().sum()", "_____no_output_____" ] ], [ [ "or you can use the code below", "_____no_output_____" ] ], [ [ "total = titanic_train.isnull().sum().sort_values(ascending=False)\npercent = (titanic_train.isnull().sum()/titanic_train.isnull().count()).sort_values(ascending=False)\nmissing_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])\nmissing_data.head(20)", "_____no_output_____" ] ], [ [ "## 3- How to view the statistical characteristics of the data?", "_____no_output_____" ] ], [ [ "titanic_train.describe()", "_____no_output_____" ] ], [ [ "or just for one 
column", "_____no_output_____" ] ], [ [ "titanic_train['Age'].describe()", "_____no_output_____" ] ], [ [ "with a another shape", "_____no_output_____" ] ], [ [ "titanic_train.Age.describe()", "_____no_output_____" ] ], [ [ "## 4- How check the column's name?", "_____no_output_____" ] ], [ [ "titanic_train.columns", "_____no_output_____" ] ], [ [ "or you can the check the column name with another ways too", "_____no_output_____" ] ], [ [ "titanic_train.head()", "_____no_output_____" ] ], [ [ "## 5- how to view randomly your data set ?", "_____no_output_____" ] ], [ [ "titanic_train.sample(5)", "_____no_output_____" ] ], [ [ "## 6-How random row selection in Pandas dataframe?", "_____no_output_____" ] ], [ [ "titanic_train.sample(frac=0.007)", "_____no_output_____" ] ], [ [ "## 7- How to copy a column and drop it ?", "_____no_output_____" ] ], [ [ "PassengerId=titanic_train['PassengerId'].copy()\n", "_____no_output_____" ], [ "PassengerId.head()", "_____no_output_____" ], [ "type(PassengerId)", "_____no_output_____" ], [ "titanic_train=titanic_train.drop('PassengerId',1)", "_____no_output_____" ], [ "titanic_train.head()", "_____no_output_____" ], [ "titanic_train=pd.read_csv('../input/train.csv')", "_____no_output_____" ] ], [ [ "## 8- How to check out last 5 row of the dataset?\n", "_____no_output_____" ], [ "we use tail() function", "_____no_output_____" ] ], [ [ "titanic_train.tail() ", "_____no_output_____" ] ], [ [ "\n# 9- How to concatenation operations along an axis?", "_____no_output_____" ] ], [ [ "all_data = pd.concat((titanic_train.loc[:,'Pclass':'Embarked'],\n titanic_test.loc[:,'Pclass':'Embarked']))", "_____no_output_____" ], [ "all_data.head()", "_____no_output_____" ], [ "titanic_train.shape", "_____no_output_____" ], [ "titanic_test.shape", "_____no_output_____" ], [ "all_data.shape", "_____no_output_____" ] ], [ [ "## 10- How to see unique values for a culomns?", "_____no_output_____" ] ], [ [ "titanic_train['Sex'].unique()\n", "_____no_output_____" ], [ "titanic_train['Cabin'].unique()\n", "_____no_output_____" ], [ "titanic_train['Pclass'].unique()\n", "_____no_output_____" ] ], [ [ "## 11- How to perform some query on your datasets?", "_____no_output_____" ] ], [ [ "titanic_train[titanic_train['Age']>70]", "_____no_output_____" ], [ "titanic_train[titanic_train['Pclass']==1]", "_____no_output_____" ] ], [ [ "---------------------------------------------------------------------\nFork and Run this kernel on GitHub:\n> ###### [ GitHub](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist)\n\n \n\n-------------------------------------------------------------------------------------------------------------\n <b>I hope you find this kernel helpful and some <font color=\"red\">UPVOTES</font> would be very much appreciated</b>\n \n -----------", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
4a883d57338ac0cb3975b80fdffc311353582850
11,059
ipynb
Jupyter Notebook
dementia_optima/models/.ipynb_checkpoints/notebook_script-checkpoint.ipynb
SDM-TIB/dementia_mmse
bf22947aa06350edd454794eb2b926f082d2dcf2
[ "MIT" ]
null
null
null
dementia_optima/models/.ipynb_checkpoints/notebook_script-checkpoint.ipynb
SDM-TIB/dementia_mmse
bf22947aa06350edd454794eb2b926f082d2dcf2
[ "MIT" ]
null
null
null
dementia_optima/models/.ipynb_checkpoints/notebook_script-checkpoint.ipynb
SDM-TIB/dementia_mmse
bf22947aa06350edd454794eb2b926f082d2dcf2
[ "MIT" ]
null
null
null
44.592742
295
0.55656
[ [ [ "#!/usr/bin/env python\n# coding: utf-8\n\n# ------\n# **Dementia Patients -- Analysis and Prediction**\n### ***Author : Akhilesh Vyas***\n### ****Date : Januaray, 2020****\n\nimport numpy as np\nimport pandas as pd\nfrom sklearn.ensemble import RandomForestClassifier\nimport pickle\nimport re", "_____no_output_____" ], [ "# define path\ndata_path = '../../../datalcdem/data/optima/dementia_18July/class_fast_normal_slow_api_inputs/'\nresult_path = '../../../datalcdem/data/optima/dementia_18July/class_fast_normal_slow_api_inputs/results/'\n\n# define\nbest_params_rf_dict = {'n_estimators': 25, 'min_samples_split': 3, 'min_samples_leaf': 1, 'max_features': 'log2', 'max_depth': 8, 'bootstrap': False}\nclass_names_dict = {'Slow':0, 'Slow_MiS':1, 'Normal':2, 'Normal_MiS':3, 'Fast':4, 'Fast_MiS':5}\n\n\n# In[3]:\n\n\ndef replaceNamesComandMed(col):\n #col1 = re.sub(r'[aA-zZ]*_cui_', 'Episode', col)\n if 'Medication_cui_' in col:\n return 'Medication_cui_'+str('0_')+col.split('_')[-1]\n if 'Comorbidity_cui_' in col:\n return 'Comorbidity_cui_'+str('0_')+col.split('_')[-1]\n if 'Age_At_Episode' in col:\n # print ('Age_At_Episode_'+str('0'))\n return 'Age_At_Episode_'+str('0')\n return col\n\ndef replacewithEpisode(col):\n if 'Medication_cui_' in col:\n return 'Episode'+str(col.split('_')[-2])+'_Med_'+col.split('_')[-1]\n if 'Comorbidity_cui_' in col:\n return 'Episode'+str(col.split('_')[-2])+'_Com_'+col.split('_')[-1]\n if 'Age_At_Episode' in col:\n return 'Episode'+str(col.split('_')[-1])+'_Age'\n return col\n\n\n# In[4]:\n\n\n# read dataframes\ndf_fea_all = pd.read_csv(data_path+'final_features_file_without_feature_selection_smote.csv')\n# print (df_fea_all.shape)\ndf_fea_rfecv = pd.read_csv(data_path+'final_features_file_with_feature_selection_rfecv.csv')\ndf_fea_rfecv.rename(columns={col:col.replace('_TFV_', '_') for col in df_fea_all.columns}, inplace=True)\ndf_fea_rfecv.rename(columns={col:replaceNamesComandMed(col) for col in df_fea_rfecv.columns.tolist() if not bool(re.search(r'_[1-9]_?',col))}, inplace=True)\ndf_fea_rfecv.rename(columns={col:replacewithEpisode(col) for col in df_fea_rfecv.columns.tolist()}, inplace=True)\ndf_fea_rfecv.rename(columns={'CAMDEX SCORES: MINI MENTAL SCORE_CATEGORY_Mild':'Initial_MMSE_Score_Mild', \n 'CAMDEX SCORES: MINI MENTAL SCORE_CATEGORY_Moderate':'Initial_MMSE_Score_Moderate'}, inplace=True)\n# print(df_fea_rfecv.shape)\n\n# read object\ndata_p_i = pickle.load(open(data_path + 'data_p_i.pickle', 'rb'))\ntarget_p_i = pickle.load(open(data_path + 'target_p_i.pickle', 'rb')) \nrfecv_support_ = pickle.load(open(data_path + 'rfecv.support_.pickle', 'rb'))\n# print(data_p_i.shape, target_p_i.shape, rfecv_support_.shape)\n\n#read dictionary\n# Treatment data\ntreatmnt_df = pd.read_csv(data_path+'Treatments.csv')\n# print(treatmnt_df.head(5))\ntreatmnt_dict = dict(zip(treatmnt_df['name'], treatmnt_df['CUI_ID']))\n# print ('\\n Unique Treatment data size: {}\\n'.format(len(treatmnt_dict)))\n\n# Comorbidities data\ncomorb_df = pd.read_csv(data_path+'comorbidities.csv')\n# print(comorb_df.head(5))\ncomorb_dict = dict(zip(comorb_df['name'], comorb_df['CUI_ID']))\n# print ('\\n Unique Comorbidities data size: {}\\n'.format(len(comorb_dict)))\n\n\n# In[5]:\n# for i, j in zip(df_fea_rfecv.columns.to_list(), pd.read_csv(data_path+'final_features_file_with_feature_selection_rfecv.csv').columns.tolist()):\n# print (j,' ', i)\n# In[6]:\n\n\n# Classification Model\ndata_p_grid = data_p_i[:,rfecv_support_]\nrf_bp = 
best_params_rf_dict\n\nrf_classifier=RandomForestClassifier(n_estimators=rf_bp[\"n_estimators\"],\n min_samples_split=rf_bp['min_samples_split'],\n min_samples_leaf=rf_bp['min_samples_leaf'],\n max_features=rf_bp['max_features'],\n max_depth=rf_bp['max_depth'],\n bootstrap=rf_bp['bootstrap'])\nrf_classifier.fit(data_p_grid, target_p_i)\n\n\n# Example for testing classification model\n'''st_ix_p = 21\nend_ix_p = 22\np = data_p_grid[st_ix_p:end_ix_p]\nt = target_p_i[st_ix_p:end_ix_p]\nprint ('Mean Accuracy: ', rf_classifier.score(data_p_grid, target_p_i)*100)\nprint ('Predict Probability: ', rf_classifier.predict_proba(p)*100)\nprint ('Prediction: ', rf_classifier.predict(p))\nprint ('Target: ', t)'''\n\n\n# In[48]:\npatient_data_in = {'gender':['Female'], 'dementia':['True'], 'smoker':['no_smoker'], 'alcohol':'mild_drinking', 'education':['medium'], 'bmi':23, 'weight':60,\n 'apoe':['E3E3'], 'Initial_MMSE_Score':['Mild'], 'Episode1_Com':['C0002965', 'C0042847'], 'Episode1_Med':['C0014695', 'C0039943'], 'Episode1_Age':67, 'Episode2_Com':['C0032533'], \n 'Episode2_Med':['C1166521'], 'Episode2_Age':69}\n\n\ndef create_patient_feature_vector(patient_data_in):\n print ('create_patient_feature_vector')\n patient_data = pd.DataFrame(data=np.zeros(shape=df_fea_rfecv.iloc[0:1, 0:-1].shape), columns=df_fea_rfecv.columns[0:-1])\n print (patient_data.shape)\n for key,value in patient_data_in.items():\n try:\n if type(value)==list:\n if len(value)>0:\n print (key, value)\n for i in value:\n if key+'_'+str(i) in patient_data.columns.tolist():\n print ('##############', key+'_'+str(i))\n patient_data.at[0, key+'_'+str(i)] = 1.0 \n elif key+'_'+str(value) in patient_data.columns.tolist():\n print ('#########', key+'_'+str(value))\n patient_data.at[0, key+'_'+str(value)] = 1.0\n elif key in patient_data.columns.tolist():\n print ('########',key)\n patient_data.at[0, key] = value\n except Exception as e:\n template = \"An exception of type {0} occurred. Arguments:\\n{1!r}\"\n message = template.format(type(e).__name__, e.args)\n print (message)\n return None\n \n return patient_data\n\n\ndef result(patient_vect):\n print (patient_vect.shape)\n print (patient_vect)\n print ('result')\n try:\n print ('Predict Probability: ', rf_classifier.predict_proba(patient_vect)*100)\n print ('Prediction: ', rf_classifier.predict(patient_vect))\n predicted_class = list(class_names_dict.keys())[rf_classifier.predict(patient_vect)[0]]\n class_probability = {i:j for i, j in zip(list(class_names_dict.keys()), rf_classifier.predict_proba(patient_vect)[0])}\n response= { 'modelName': 'Classification of Dementia Patient Progression',\n 'class_probability': class_probability,\n 'predicted_class': predicted_class}\n return response\n \n except Exception as e:\n template = \"An exception of type {0} occurred. 
Arguments:\\n{1!r}\"\n message = template.format(type(e).__name__, e.args)\n print (message)\n return None\n\n\ndef main():\n # patient_data_in to be get by Web API\n try:\n print ('Shape of patient vector with all selected Feature', df_fea_all.shape)\n print('Shape of patient vector with selected Feature', df_fea_rfecv.shape)\n print(data_p_i.shape, target_p_i.shape, rfecv_support_.shape)\n print('Medication:\\n', treatmnt_df.head(5))\n print ('\\n Unique Treatment data size: {}\\n'.format(len(treatmnt_dict)))\n print('Comorbidities\\n', comorb_df.head(5))\n print ('\\n Unique Comorbidities data size: {}\\n'.format(len(comorb_dict)))\n pat_df = create_patient_feature_vector(patient_data_in)\n print ('pat_df', pat_df)\n patient_data_fea = pat_df.values.reshape(1,-1)\n response = result(patient_data_fea)\n print (response)\n return response\n except Exception as e:\n template = \"An exception of type {0} occurred. Arguments:\\n{1!r}\"\n message = template.format(type(e).__name__, e.args)\n print (message)\n return None\n \n\nprint(\"\\n\\n######## Predictive Model for Dementia Patients#############\\n\\n\")\nmain()\n", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code" ] ]
4a883e742ed8c04e66a71065f3feab24371706f0
23,890
ipynb
Jupyter Notebook
Keyword Extraction with TF-IDF and SKlearn.ipynb
Jerry671/natural-language-project
943415ac6085a74363b4f7e881454ebcfccc7689
[ "Apache-2.0" ]
null
null
null
Keyword Extraction with TF-IDF and SKlearn.ipynb
Jerry671/natural-language-project
943415ac6085a74363b4f7e881454ebcfccc7689
[ "Apache-2.0" ]
null
null
null
Keyword Extraction with TF-IDF and SKlearn.ipynb
Jerry671/natural-language-project
943415ac6085a74363b4f7e881454ebcfccc7689
[ "Apache-2.0" ]
1
2020-11-28T07:41:42.000Z
2020-11-28T07:41:42.000Z
38.346709
879
0.603684
[ [ [ "## Extracting Important Keywords from Text with TF-IDF and Python's Scikit-Learn \n\nBack in 2006, when I had to use TF-IDF for keyword extraction in Java, I ended up writing all of the code from scratch as Data Science nor GitHub were a thing back then and libraries were just limited. The world is much different today. You have several [libraries](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfTransformer.html#sklearn.feature_extraction.text.TfidfTransformer) and [open-source code on Github](https://github.com/topics/tf-idf?o=desc&s=forks) that provide a decent implementation of TF-IDF. If you don't need a lot of control over how the TF-IDF math is computed then I would highly recommend re-using libraries from known packages such as [Spark's MLLib](https://spark.apache.org/docs/2.2.0/mllib-feature-extraction.html) or [Python's scikit-learn](http://scikit-learn.org/stable/). \n\nThe one problem that I noticed with these libraries is that they are meant as a pre-step for other tasks like clustering, topic modeling and text classification. [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) can actually be used to extract important keywords from a document to get a sense of what characterizes a document. For example, if you are dealing with wikipedia articles, you can use tf-idf to extract words that are unique to a given article. These keywords can be used as a very simple summary of the document, it can be used for text-analytics (when we look at these keywords in aggregate), as candidate labels for a document and more. \n\nIn this article, I will show you how you can use scikit-learn to extract top keywords for a given document using its tf-idf modules. We will specifically do this on a stackoverflow dataset. ", "_____no_output_____" ], [ "## Dataset\nSince we used some pretty clean user reviews in some of my previous tutorials, in this example, we will be using a Stackoverflow dataset which is slightly noisier and simulates what you could be dealing with in real life. You can find this dataset in [my tutorial repo](https://github.com/kavgan/data-science-tutorials/tree/master/tf-idf/data). Notice that there are two files, the larger file with (20,000 posts)[https://github.com/kavgan/data-science-tutorials/tree/master/tf-idf/data] is used to compute the Inverse Document Frequency (IDF) and the smaller file with [500 posts](https://github.com/kavgan/data-science-tutorials/tree/master/tf-idf/data) would be used as a test set for us to extract keywords from. This dataset is based on the publicly available [Stackoverflow dump on Google's Big Query](https://cloud.google.com/bigquery/public-data/stackoverflow).\n\nLet's take a peek at our dataset. The code below reads a one per line json string from `data/stackoverflow-data-idf.json` into a pandas data frame and prints out its schema and total number of posts. Here, `lines=True` simply means we are treating each line in the text file as a separate json string. 
With this, the json in line 1 is not related to the json in line 2.", "_____no_output_____" ] ], [ [ "import pandas as pd\n\n# read json into a dataframe\ndf_idf=pd.read_json(\"data/stackoverflow-data-idf.json\",lines=True)\n\n# print schema\nprint(\"Schema:\\n\\n\",df_idf.dtypes)\nprint(\"Number of questions,columns=\",df_idf.shape)\n", "Schema:\n\n accepted_answer_id float64\nanswer_count int64\nbody object\ncomment_count int64\ncommunity_owned_date object\ncreation_date object\nfavorite_count float64\nid int64\nlast_activity_date object\nlast_edit_date object\nlast_editor_display_name object\nlast_editor_user_id float64\nowner_display_name object\nowner_user_id float64\npost_type_id int64\nscore int64\ntags object\ntitle object\nview_count int64\ndtype: object\nNumber of questions,columns= (20000, 19)\n" ] ], [ [ "Take note that this stackoverflow dataset contains 19 fields including post title, body, tags, dates and other metadata which we don't quite need for this tutorial. What we are mostly interested in for this tutorial is the `body` and `title` which is our source of text. We will now create a field that combines both body and title so we have it in one field. We will also print the second `text` entry in our new field just to see what the text looks like.", "_____no_output_____" ] ], [ [ "import re\ndef pre_process(text):\n \n # lowercase\n text=text.lower()\n \n #remove tags\n text=re.sub(\"</?.*?>\",\" <> \",text)\n \n # remove special characters and digits\n text=re.sub(\"(\\\\d|\\\\W)+\",\" \",text)\n \n return text\n\ndf_idf['text'] = df_idf['title'] + df_idf['body']\ndf_idf['text'] = df_idf['text'].apply(lambda x:pre_process(x))\n\n#show the first 'text'\ndf_idf['text'][2]", "_____no_output_____" ] ], [ [ "Hmm, doesn't look very pretty with all the html in there, but that's the point. Even in such a mess we can extract some great stuff out of this. While you can eliminate all code from the text, we will keep the code sections for this tutorial for the sake of simplicity. ", "_____no_output_____" ], [ "## Creating the IDF\n\n### CountVectorizer to create a vocabulary and generate word counts\nThe next step is to start the counting process. We can use the CountVectorizer to create a vocabulary from all the text in our `df_idf['text']` and generate counts for each row in `df_idf['text']`. The result of the last two lines is a sparse matrix representation of the counts, meaning each column represents a word in the vocabulary and each row represents the document in our dataset where the values are the word counts. Note that with this representation, counts of some words could be 0 if the word did not appear in the corresponding document.", "_____no_output_____" ] ], [ [ "from sklearn.feature_extraction.text import CountVectorizer\nimport re\n\ndef get_stop_words(stop_file_path):\n \"\"\"load stop words \"\"\"\n \n with open(stop_file_path, 'r', encoding=\"utf-8\") as f:\n stopwords = f.readlines()\n stop_set = set(m.strip() for m in stopwords)\n return frozenset(stop_set)\n\n#load a set of stop words\nstopwords=get_stop_words(\"resources/stopwords.txt\")\n\n#get the text column \ndocs=df_idf['text'].tolist()\n\n#create a vocabulary of words, \n#ignore words that appear in 85% of documents, \n#eliminate stop words\ncv=CountVectorizer(max_df=0.85,stop_words=stopwords)\nword_count_vector=cv.fit_transform(docs)", "_____no_output_____" ] ], [ [ "Now let's check the shape of the resulting vector. 
Notice that the shape below is `(20000,149391)` because we have 20,000 documents in our dataset (the rows) and the vocabulary size is `149391` meaning we have `149391` unique words (the columns) in our dataset minus the stopwords. In some of the text mining applications, such as clustering and text classification we limit the size of the vocabulary. It's really easy to do this by setting `max_features=vocab_size` when instantiating CountVectorizer.", "_____no_output_____" ] ], [ [ "word_count_vector.shape", "_____no_output_____" ] ], [ [ "Let's limit our vocabulary size to 10,000", "_____no_output_____" ] ], [ [ "cv=CountVectorizer(max_df=0.85,stop_words=stopwords,max_features=10000)\nword_count_vector=cv.fit_transform(docs)\nword_count_vector.shape", "_____no_output_____" ] ], [ [ "Now, let's look at 10 words from our vocabulary. Sweet, these are mostly programming related.", "_____no_output_____" ] ], [ [ "list(cv.vocabulary_.keys())[:10]", "_____no_output_____" ] ], [ [ "We can also get the vocabulary by using `get_feature_names()`", "_____no_output_____" ] ], [ [ "list(cv.get_feature_names())[2000:2015]", "_____no_output_____" ] ], [ [ "### TfidfTransformer to Compute Inverse Document Frequency (IDF) \nIn the code below, we are essentially taking the sparse matrix from CountVectorizer to generate the IDF when you invoke `fit`. An extremely important point to note here is that the IDF should be based on a large corpora and should be representative of texts you would be using to extract keywords. I've seen several articles on the Web that compute the IDF using a handful of documents. To understand why IDF should be based on a fairly large collection, please read this [page from Standford's IR book](https://nlp.stanford.edu/IR-book/html/htmledition/inverse-document-frequency-1.html).", "_____no_output_____" ] ], [ [ "from sklearn.feature_extraction.text import TfidfTransformer\n\ntfidf_transformer=TfidfTransformer(smooth_idf=True,use_idf=True)\ntfidf_transformer.fit(word_count_vector)", "_____no_output_____" ] ], [ [ "Let's look at some of the IDF values:", "_____no_output_____" ] ], [ [ "tfidf_transformer.idf_", "_____no_output_____" ] ], [ [ "## Computing TF-IDF and Extracting Keywords", "_____no_output_____" ], [ "Once we have our IDF computed, we are now ready to compute TF-IDF and extract the top keywords. In this example, we will extract top keywords for the questions in `data/stackoverflow-test.json`. This data file has 500 questions with fields identical to that of `data/stackoverflow-data-idf.json` as we saw above. 
We will start by reading our test file, extracting the necessary fields (title and body) and get the texts into a list.", "_____no_output_____" ] ], [ [ "# read test docs into a dataframe and concatenate title and body\ndf_test=pd.read_json(\"data/stackoverflow-test.json\",lines=True)\ndf_test['text'] = df_test['title'] + df_test['body']\ndf_test['text'] =df_test['text'].apply(lambda x:pre_process(x))\n\n# get test docs into a list\ndocs_test=df_test['text'].tolist()\ndocs_title=df_test['title'].tolist()\ndocs_body=df_test['body'].tolist()", "_____no_output_____" ], [ "def sort_coo(coo_matrix):\n tuples = zip(coo_matrix.col, coo_matrix.data)\n return sorted(tuples, key=lambda x: (x[1], x[0]), reverse=True)\n\ndef extract_topn_from_vector(feature_names, sorted_items, topn=10):\n \"\"\"get the feature names and tf-idf score of top n items\"\"\"\n \n #use only topn items from vector\n sorted_items = sorted_items[:topn]\n\n score_vals = []\n feature_vals = []\n\n for idx, score in sorted_items:\n fname = feature_names[idx]\n \n #keep track of feature name and its corresponding score\n score_vals.append(round(score, 3))\n feature_vals.append(feature_names[idx])\n\n #create a tuples of feature,score\n #results = zip(feature_vals,score_vals)\n results= {}\n for idx in range(len(feature_vals)):\n results[feature_vals[idx]]=score_vals[idx]\n \n return results", "_____no_output_____" ] ], [ [ "The next step is to compute the tf-idf value for a given document in our test set by invoking `tfidf_transformer.transform(...)`. This generates a vector of tf-idf scores. Next, we sort the words in the vector in descending order of tf-idf values and then iterate over to extract the top-n items with the corresponding feature names, In the example below, we are extracting keywords for the first document in our test set. \n\nThe `sort_coo(...)` method essentially sorts the values in the vector while preserving the column index. Once you have the column index then its really easy to look-up the corresponding word value as you would see in `extract_topn_from_vector(...)` where we do `feature_vals.append(feature_names[idx])`.", "_____no_output_____" ] ], [ [ "# you only needs to do this once\nfeature_names=cv.get_feature_names()\n\n# get the document that we want to extract keywords from\ndoc=docs_test[0]\n\n#generate tf-idf for the given document\ntf_idf_vector=tfidf_transformer.transform(cv.transform([doc]))\n\n#sort the tf-idf vectors by descending order of scores\nsorted_items=sort_coo(tf_idf_vector.tocoo())\n\n#extract only the top n; n here is 10\nkeywords=extract_topn_from_vector(feature_names,sorted_items,10)\n\n# now print the results\nprint(\"\\n=====Title=====\")\nprint(docs_title[0])\nprint(\"\\n=====Body=====\")\nprint(docs_body[0])\nprint(\"\\n===Keywords===\")\nfor k in keywords:\n print(k,keywords[k])", "\n=====Title=====\nIntegrate War-Plugin for m2eclipse into Eclipse Project\n\n=====Body=====\n<p>I set up a small web project with JSF and Maven. Now I want to deploy on a Tomcat server. Is there a possibility to automate that like a button in Eclipse that automatically deploys the project to Tomcat?</p>\n\n<p>I read about a the <a href=\"http://maven.apache.org/plugins/maven-war-plugin/\" rel=\"nofollow noreferrer\">Maven War Plugin</a> but I couldn't find a tutorial how to integrate that into my process (eclipse/m2eclipse).</p>\n\n<p>Can you link me to help or try to explain it. 
Thanks.</p>\n\n===Keywords===\neclipse 0.593\nwar 0.317\nintegrate 0.281\nmaven 0.273\ntomcat 0.27\nproject 0.239\nplugin 0.214\nautomate 0.157\njsf 0.152\npossibility 0.146\n" ] ], [ [ "From the keywords above, the top keywords actually make sense, it talks about `eclipse`, `maven`, `integrate`, `war` and `tomcat` which are all unique to this specific question. There are a couple of kewyords that could have been eliminated such as `possibility` and perhaps even `project` and you can do this by adding more common words to your stop list and you can even create your own set of stop list, very specific to your domain as [described here](http://kavita-ganesan.com/tips-for-constructing-custom-stop-word-lists/).\n\n", "_____no_output_____" ] ], [ [ "# put the common code into several methods\ndef get_keywords(idx):\n\n #generate tf-idf for the given document\n tf_idf_vector=tfidf_transformer.transform(cv.transform([docs_test[idx]]))\n\n #sort the tf-idf vectors by descending order of scores\n sorted_items=sort_coo(tf_idf_vector.tocoo())\n\n #extract only the top n; n here is 10\n keywords=extract_topn_from_vector(feature_names,sorted_items,10)\n \n return keywords\n\ndef print_results(idx,keywords):\n # now print the results\n print(\"\\n=====Title=====\")\n print(docs_title[idx])\n print(\"\\n=====Body=====\")\n print(docs_body[idx])\n print(\"\\n===Keywords===\")\n for k in keywords:\n print(k,keywords[k])\n\n", "_____no_output_____" ] ], [ [ "Now let's look at keywords generated for a much longer question: \n", "_____no_output_____" ] ], [ [ "idx=120\nkeywords=get_keywords(idx)\nprint_results(idx,keywords)", "\n=====Title=====\nSQL Import Wizard - Error\n\n=====Body=====\n<p>I have a CSV file that I'm trying to import into SQL Management Server Studio.</p>\n\n<p>In Excel, the column giving me trouble looks like this:\n<a href=\"https://i.stack.imgur.com/pm0uS.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/pm0uS.png\" alt=\"enter image description here\"></a></p>\n\n<p>Tasks > import data > Flat Source File > select file</p>\n\n<p><a href=\"https://i.stack.imgur.com/G4b6I.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/G4b6I.png\" alt=\"enter image description here\"></a></p>\n\n<p>I set the data type for this column to DT_NUMERIC, adjust the DataScale to 2 in order to get 2 decimal places, but when I click over to Preview, I see that it's clearly not recognizing the numbers appropriately:</p>\n\n<p><a href=\"https://i.stack.imgur.com/NZhiQ.png\" rel=\"nofollow noreferrer\"><img src=\"https://i.stack.imgur.com/NZhiQ.png\" alt=\"enter image description here\"></a></p>\n\n<p>The column mapping for this column is set to type = decimal; precision 18; scale 2.</p>\n\n<p>Error message: Data Flow Task 1: Data conversion failed. The data conversion for column \"Amount\" returned status value 2 and status text \"The value could not be converted because of a potential loss of data.\".\n (SQL Server Import and Export Wizard)</p>\n\n<p>Can someone identify where I'm going wrong here? Thanks!</p>\n\n===Keywords===\ncolumn 0.365\nimport 0.286\ndata 0.283\nwizard 0.27\ndecimal 0.227\nconversion 0.224\nsql 0.217\nstatus 0.164\nfile 0.147\nappropriately 0.142\n" ] ], [ [ "Whoala! Now you can extract important keywords from any type of text! ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
4a883f95f30b4ea02187064fa770de046e233e36
338,589
ipynb
Jupyter Notebook
Population_Segmentation/Pop_Segmentation_Solution.ipynb
manuelzander/ML_SageMaker_Studies
db8dbaa3c198b5073d0c06fdafb0641e69b9c371
[ "MIT" ]
null
null
null
Population_Segmentation/Pop_Segmentation_Solution.ipynb
manuelzander/ML_SageMaker_Studies
db8dbaa3c198b5073d0c06fdafb0641e69b9c371
[ "MIT" ]
null
null
null
Population_Segmentation/Pop_Segmentation_Solution.ipynb
manuelzander/ML_SageMaker_Studies
db8dbaa3c198b5073d0c06fdafb0641e69b9c371
[ "MIT" ]
null
null
null
60.354545
18,048
0.687358
[ [ [ "# Population Segmentation with SageMaker\n\nIn this notebook, you'll employ two, unsupervised learning algorithms to do **population segmentation**. Population segmentation aims to find natural groupings in population data that reveal some feature-level similarities between different regions in the US.\n\nUsing **principal component analysis** (PCA) you will reduce the dimensionality of the original census data. Then, you'll use **k-means clustering** to assign each US county to a particular cluster based on where a county lies in component space. How each cluster is arranged in component space can tell you which US counties are most similar and what demographic traits define that similarity; this information is most often used to inform targeted, marketing campaigns that want to appeal to a specific group of people. This cluster information is also useful for learning more about a population by revealing patterns between regions that you otherwise may not have noticed.\n\n### US Census Data\n\nYou'll be using data collected by the [US Census](https://en.wikipedia.org/wiki/United_States_Census), which aims to count the US population, recording demographic traits about labor, age, population, and so on, for each county in the US. The bulk of this notebook was taken from an existing SageMaker example notebook and [blog post](https://aws.amazon.com/blogs/machine-learning/analyze-us-census-data-for-population-segmentation-using-amazon-sagemaker/), and I've broken it down further into demonstrations and exercises for you to complete.\n\n### Machine Learning Workflow\n\nTo implement population segmentation, you'll go through a number of steps:\n* Data loading and exploration\n* Data cleaning and pre-processing \n* Dimensionality reduction with PCA\n* Feature engineering and data transformation\n* Clustering transformed data with k-means\n* Extracting trained model attributes and visualizing k clusters\n\nThese tasks make up a complete, machine learning workflow from data loading and cleaning to model deployment. Each exercise is designed to give you practice with part of the machine learning workflow, and to demonstrate how to use SageMaker tools, such as built-in data management with S3 and built-in algorithms.\n\n---", "_____no_output_____" ], [ "First, import the relevant libraries into this SageMaker notebook. ", "_____no_output_____" ] ], [ [ "# data managing and display libs\nimport pandas as pd\nimport numpy as np\nimport os\nimport io\n\nimport matplotlib.pyplot as plt\nimport matplotlib\n%matplotlib inline ", "_____no_output_____" ], [ "# sagemaker libraries\nimport boto3\nimport sagemaker", "_____no_output_____" ] ], [ [ "## Loading the Data from Amazon S3\n\nThis particular dataset is already in an Amazon S3 bucket; you can load the data by pointing to this bucket and getting a data file by name. \n\n> You can interact with S3 using a `boto3` client.", "_____no_output_____" ] ], [ [ "# boto3 client to get S3 data\ns3_client = boto3.client('s3')\n# S3 bucket name\nbucket_name='aws-ml-blog-sagemaker-census-segmentation'", "_____no_output_____" ] ], [ [ "Take a look at the contents of this bucket; get a list of objects that are contained within the bucket and print out the names of the objects. 
You should see that there is one file, 'Census_Data_for_SageMaker.csv'.", "_____no_output_____" ] ], [ [ "# get a list of objects in the bucket\nobj_list=s3_client.list_objects(Bucket=bucket_name)\n\n# print object(s)in S3 bucket\nfiles=[]\nfor contents in obj_list['Contents']:\n files.append(contents['Key'])\n \nprint(files)", "['Census_Data_for_SageMaker.csv']\n" ], [ "# there is one file --> one key\nfile_name=files[0]\n\nprint(file_name)", "Census_Data_for_SageMaker.csv\n" ] ], [ [ "Retrieve the data file from the bucket with a call to `client.get_object()`.", "_____no_output_____" ] ], [ [ "# get an S3 object by passing in the bucket and file name\ndata_object = s3_client.get_object(Bucket=bucket_name, Key=file_name)\n\n# what info does the object contain?\ndisplay(data_object)", "_____no_output_____" ], [ "# information is in the \"Body\" of the object\ndata_body = data_object[\"Body\"].read()\nprint('Data type: ', type(data_body))", "Data type: <class 'bytes'>\n" ] ], [ [ "This is a `bytes` datatype, which you can read it in using [io.BytesIO(file)](https://docs.python.org/3/library/io.html#binary-i-o).", "_____no_output_____" ] ], [ [ "# read in bytes data\ndata_stream = io.BytesIO(data_body)\n\n# create a dataframe\ncounties_df = pd.read_csv(data_stream, header=0, delimiter=\",\") \ncounties_df.head()", "_____no_output_____" ] ], [ [ "## Exploratory Data Analysis (EDA)\n\nNow that you've loaded in the data, it is time to clean it up, explore it, and pre-process it. Data exploration is one of the most important parts of the machine learning workflow because it allows you to notice any initial patterns in data distribution and features that may inform how you proceed with modeling and clustering the data.\n\n### EXERCISE: Explore data & drop any incomplete rows of data\n\nWhen you first explore the data, it is good to know what you are working with. How many data points and features are you starting with, and what kind of information can you get at a first glance? In this notebook, you're required to use complete data points to train a model. So, your first exercise will be to investigate the shape of this data and implement a simple, data cleaning step: dropping any incomplete rows of data.\n\nYou should be able to answer the **question**: How many data points and features are in the original, provided dataset? (And how many points are left after dropping any incomplete rows?)", "_____no_output_____" ] ], [ [ "# print out stats about data\n# rows = data, cols = features\nprint('(orig) rows, cols: ', counties_df.shape)\n\n# drop any incomplete data\nclean_counties_df = counties_df.dropna(axis=0)\nprint('(clean) rows, cols: ', clean_counties_df.shape)", "(orig) rows, cols: (3220, 37)\n(clean) rows, cols: (3218, 37)\n" ] ], [ [ "### EXERCISE: Create a new DataFrame, indexed by 'State-County'\n\nEventually, you'll want to feed these features into a machine learning model. Machine learning models need numerical data to learn from and not categorical data like strings (State, County). So, you'll reformat this data such that it is indexed by region and you'll also drop any features that are not useful for clustering.\n\nTo complete this task, perform the following steps, using your *clean* DataFrame, generated above:\n1. Combine the descriptive columns, 'State' and 'County', into one, new categorical column, 'State-County'. \n2. Index the data by this unique State-County name.\n3. 
After doing this, drop the old State and County columns and the CensusId column, which does not give us any meaningful demographic information.\n\nAfter completing this task, you should have a DataFrame with 'State-County' as the index, and 34 columns of numerical data for each county. You should get a resultant DataFrame that looks like the following (truncated for display purposes):\n```\n TotalPop\t Men\t Women\tHispanic\t...\n \nAlabama-Autauga\t55221\t 26745\t28476\t2.6 ...\nAlabama-Baldwin\t195121\t95314\t99807\t4.5 ...\nAlabama-Barbour\t26932\t 14497\t12435\t4.6 ...\n...\n\n```", "_____no_output_____" ] ], [ [ "# index data by 'State-County'\nclean_counties_df.index=clean_counties_df['State'] + \"-\" + clean_counties_df['County']\nclean_counties_df.head()", "_____no_output_____" ], [ "# drop the old State and County columns, and the CensusId column\n# clean df should be modified or created anew\ndrop=[\"CensusId\" , \"State\" , \"County\"]\nclean_counties_df = clean_counties_df.drop(columns=drop)\nclean_counties_df.head()", "_____no_output_____" ] ], [ [ "Now, what features do you have to work with?", "_____no_output_____" ] ], [ [ "# features\nfeatures_list = clean_counties_df.columns.values\nprint('Features: \\n', features_list)", "Features: \n ['TotalPop' 'Men' 'Women' 'Hispanic' 'White' 'Black' 'Native' 'Asian'\n 'Pacific' 'Citizen' 'Income' 'IncomeErr' 'IncomePerCap' 'IncomePerCapErr'\n 'Poverty' 'ChildPoverty' 'Professional' 'Service' 'Office' 'Construction'\n 'Production' 'Drive' 'Carpool' 'Transit' 'Walk' 'OtherTransp'\n 'WorkAtHome' 'MeanCommute' 'Employed' 'PrivateWork' 'PublicWork'\n 'SelfEmployed' 'FamilyWork' 'Unemployment']\n" ] ], [ [ "## Visualizing the Data\n\nIn general, you can see that features come in a variety of ranges, mostly percentages from 0-100, and counts that are integer values in a large range. Let's visualize the data in some of our feature columns and see what the distribution, over all counties, looks like.\n\nThe below cell displays **histograms**, which show the distribution of data points over discrete feature ranges. The x-axis represents the different bins; each bin is defined by a specific range of values that a feature can take, say between the values 0-5 and 5-10, and so on. The y-axis is the frequency of occurrence or the number of county data points that fall into each bin. I find it helpful to use the y-axis values for relative comparisons between different features.\n\nBelow, I'm plotting a histogram comparing methods of commuting to work over all of the counties. I just copied these feature names from the list of column names, printed above. I also know that all of these features are represented as percentages (%) in the original data, so the x-axes of these plots will be comparable.", "_____no_output_____" ] ], [ [ "# transportation (to work)\ntransport_list = ['Drive', 'Carpool', 'Transit', 'Walk', 'OtherTransp']\nn_bins = 50 # can decrease to get a wider bin (or vice versa)\n\nfor column_name in transport_list:\n ax=plt.subplots(figsize=(6,3))\n # get data by column_name and display a histogram\n ax = plt.hist(clean_counties_df[column_name], bins=n_bins)\n title=\"Histogram of \" + column_name\n plt.title(title, fontsize=12)\n plt.show()", "_____no_output_____" ] ], [ [ "### EXERCISE: Create histograms of your own\n\nCommute transportation method is just one category of features. If you take a look at the 34 features, you can see data on profession, race, income, and more. 
Display a set of histograms that interest you!\n", "_____no_output_____" ] ], [ [ "# create a list of features that you want to compare or examine\n# employment types\nmy_list = ['PrivateWork', 'PublicWork', 'SelfEmployed', 'FamilyWork', 'Unemployment']\nn_bins = 30 # define n_bins\n\n# histogram creation code is similar to above\nfor column_name in my_list:\n ax=plt.subplots(figsize=(6,3))\n # get data by column_name and display a histogram\n ax = plt.hist(clean_counties_df[column_name], bins=n_bins)\n title=\"Histogram of \" + column_name\n plt.title(title, fontsize=12)\n plt.show()", "_____no_output_____" ] ], [ [ "### EXERCISE: Normalize the data\n\nYou need to standardize the scale of the numerical columns in order to consistently compare the values of different features. You can use a [MinMaxScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html) to transform the numerical values so that they all fall between 0 and 1.", "_____no_output_____" ] ], [ [ "# scale numerical features into a normalized range, 0-1\n\nfrom sklearn.preprocessing import MinMaxScaler\n\nscaler=MinMaxScaler()\n# store them in this dataframe\ncounties_scaled=pd.DataFrame(scaler.fit_transform(clean_counties_df.astype(float)))\n\n# get same features and State-County indices\ncounties_scaled.columns=clean_counties_df.columns\ncounties_scaled.index=clean_counties_df.index\n\ncounties_scaled.head()", "_____no_output_____" ], [ "counties_scaled.describe()", "_____no_output_____" ] ], [ [ "---\n# Data Modeling\n\n\nNow, the data is ready to be fed into a machine learning model!\n\nEach data point has 34 features, which means the data is 34-dimensional. Clustering algorithms rely on finding clusters in n-dimensional feature space. For higher dimensions, an algorithm like k-means has a difficult time figuring out which features are most important, and the result is, often, noisier clusters.\n\nSome dimensions are not as important as others. For example, if every county in our dataset has the same rate of unemployment, then that particular feature doesn’t give us any distinguishing information; it will not help t separate counties into different groups because its value doesn’t *vary* between counties.\n\n> Instead, we really want to find the features that help to separate and group data. We want to find features that cause the **most variance** in the dataset!\n\nSo, before I cluster this data, I’ll want to take a dimensionality reduction step. My aim will be to form a smaller set of features that will better help to separate our data. The technique I’ll use is called PCA or **principal component analysis**\n\n## Dimensionality Reduction\n\nPCA attempts to reduce the number of features within a dataset while retaining the “principal components”, which are defined as *weighted*, linear combinations of existing features that are designed to be linearly independent and account for the largest possible variability in the data! You can think of this method as taking many features and combining similar or redundant features together to form a new, smaller feature set.\n\nWe can reduce dimensionality with the built-in SageMaker model for PCA.", "_____no_output_____" ], [ "### Roles and Buckets\n\n> To create a model, you'll first need to specify an IAM role, and to save the model attributes, you'll need to store them in an S3 bucket.\n\nThe `get_execution_role` function retrieves the IAM role you created at the time you created your notebook instance. 
Roles are essentially used to manage permissions and you can read more about that [in this documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html). For now, know that we have a FullAccess notebook, which allowed us to access and download the census data stored in S3.\n\nYou must specify a bucket name for an S3 bucket in your account where you want SageMaker model parameters to be stored. Note that the bucket must be in the same region as this notebook. You can get a default S3 bucket, which automatically creates a bucket for you and in your region, by storing the current SageMaker session and calling `session.default_bucket()`.", "_____no_output_____" ] ], [ [ "from sagemaker import get_execution_role\n\nsession = sagemaker.Session() # store the current SageMaker session\n\n# get IAM role\nrole = get_execution_role()\nprint(role)", "arn:aws:iam::588771170433:role/service-role/AmazonSageMaker-ExecutionRole-20200507T142053\n" ], [ "# get default bucket\nbucket_name = session.default_bucket()\nprint(bucket_name)\nprint()", "sagemaker-eu-west-1-588771170433\n\n" ] ], [ [ "## Define a PCA Model\n\nTo create a PCA model, I'll use the built-in SageMaker resource. A SageMaker estimator requires a number of parameters to be specified; these define the type of training instance to use and the model hyperparameters. A PCA model requires the following constructor arguments:\n\n* role: The IAM role, which was specified, above.\n* train_instance_count: The number of training instances (typically, 1).\n* train_instance_type: The type of SageMaker instance for training.\n* num_components: An integer that defines the number of PCA components to produce.\n* sagemaker_session: The session used to train on SageMaker.\n\nDocumentation on the PCA model can be found [here](http://sagemaker.readthedocs.io/en/latest/pca.html).\n\nBelow, I first specify where to save the model training data, the `output_path`.", "_____no_output_____" ] ], [ [ "# define location to store model artifacts\nprefix = 'counties'\n\noutput_path='s3://{}/{}/'.format(bucket_name, prefix)\n\nprint('Training artifacts will be uploaded to: {}'.format(output_path))", "Training artifacts will be uploaded to: s3://sagemaker-eu-west-1-588771170433/counties/\n" ], [ "# define a PCA model\nfrom sagemaker import PCA\n\n# this is current features - 1\n# you'll select only a portion of these to use, later\nN_COMPONENTS=33\n\npca_SM = PCA(role=role,\n train_instance_count=1,\n train_instance_type='ml.c4.xlarge',\n output_path=output_path, # specified, above\n num_components=N_COMPONENTS, \n sagemaker_session=session)\n", "_____no_output_____" ] ], [ [ "### Convert data into a RecordSet format\n\nNext, prepare the data for a built-in model by converting the DataFrame to a numpy array of float values.\n\nThe *record_set* function in the SageMaker PCA model converts a numpy array into a **RecordSet** format that is the required format for the training input data. This is a requirement for _all_ of SageMaker's built-in models. The use of this data type is one of the reasons that allows training of models within Amazon SageMaker to perform faster, especially for large datasets.", "_____no_output_____" ] ], [ [ "# convert df to np array\ntrain_data_np = counties_scaled.values.astype('float32')\n\n# convert to RecordSet format\nformatted_train_data = pca_SM.record_set(train_data_np)", "_____no_output_____" ] ], [ [ "## Train the model\n\nCall the fit function on the PCA model, passing in our formatted, training data. 
This spins up a training instance to perform the training job.\n\nNote that it takes the longest to launch the specified training instance; the fitting itself doesn't take much time.", "_____no_output_____" ] ], [ [ "%%time\n\n# train the PCA mode on the formatted data\npca_SM.fit(formatted_train_data)", "2020-05-10 09:32:14 Starting - Starting the training job...\n2020-05-10 09:32:15 Starting - Launching requested ML instances......\n2020-05-10 09:33:22 Starting - Preparing the instances for training......\n2020-05-10 09:34:38 Downloading - Downloading input data\n2020-05-10 09:34:38 Training - Downloading the training image..\u001b[34mDocker entrypoint called with argument(s): train\u001b[0m\n\u001b[34mRunning default environment configuration script\u001b[0m\n\u001b[34m[05/10/2020 09:34:59 INFO 140257138956096] Reading default configuration from /opt/amazon/lib/python2.7/site-packages/algorithm/resources/default-conf.json: {u'_num_gpus': u'auto', u'_log_level': u'info', u'subtract_mean': u'true', u'force_dense': u'true', u'epochs': 1, u'algorithm_mode': u'regular', u'extra_components': u'-1', u'_kvstore': u'dist_sync', u'_num_kv_servers': u'auto'}\u001b[0m\n\u001b[34m[05/10/2020 09:34:59 INFO 140257138956096] Reading provided configuration from /opt/ml/input/config/hyperparameters.json: {u'feature_dim': u'34', u'mini_batch_size': u'500', u'num_components': u'33'}\u001b[0m\n\u001b[34m[05/10/2020 09:34:59 INFO 140257138956096] Final configuration: {u'num_components': u'33', u'_num_gpus': u'auto', u'_log_level': u'info', u'subtract_mean': u'true', u'force_dense': u'true', u'epochs': 1, u'algorithm_mode': u'regular', u'feature_dim': u'34', u'extra_components': u'-1', u'_kvstore': u'dist_sync', u'_num_kv_servers': u'auto', u'mini_batch_size': u'500'}\u001b[0m\n\u001b[34m[05/10/2020 09:34:59 WARNING 140257138956096] Loggers have already been setup.\u001b[0m\n\u001b[34m[05/10/2020 09:35:01 INFO 140257138956096] Launching parameter server for role scheduler\u001b[0m\n\u001b[34m[05/10/2020 09:35:01 INFO 140257138956096] {'ECS_CONTAINER_METADATA_URI': 'http://169.254.170.2/v3/f3af5ae2-b47f-45d3-83e2-0922a89d765a', 'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION_VERSION': '2', 'PATH': '/opt/amazon/bin:/usr/local/nvidia/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/amazon/bin:/opt/amazon/bin', 'SAGEMAKER_HTTP_PORT': '8080', 'HOME': '/root', 'PYTHONUNBUFFERED': 'TRUE', 'CANONICAL_ENVROOT': '/opt/amazon', 'LD_LIBRARY_PATH': '/opt/amazon/lib/python2.7/site-packages/cv2/../../../../lib:/usr/local/nvidia/lib64:/opt/amazon/lib', 'LANG': 'en_US.utf8', 'DMLC_INTERFACE': 'eth0', 'SHLVL': '1', 'AWS_REGION': 'eu-west-1', 'NVIDIA_VISIBLE_DEVICES': 'void', 'TRAINING_JOB_NAME': 'pca-2020-05-10-09-32-14-469', 'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION': 'cpp', 'ENVROOT': '/opt/amazon', 'SAGEMAKER_DATA_PATH': '/opt/ml', 'NVIDIA_DRIVER_CAPABILITIES': 'compute,utility', 'NVIDIA_REQUIRE_CUDA': 'cuda>=9.0', 'OMP_NUM_THREADS': '2', 'HOSTNAME': 'ip-10-0-135-164.eu-west-1.compute.internal', 'AWS_CONTAINER_CREDENTIALS_RELATIVE_URI': '/v2/credentials/c75ddace-c577-4e16-ba3a-17fccfab741c', 'PWD': '/', 'TRAINING_JOB_ARN': 'arn:aws:sagemaker:eu-west-1:588771170433:training-job/pca-2020-05-10-09-32-14-469', 'AWS_EXECUTION_ENV': 'AWS_ECS_EC2'}\u001b[0m\n\u001b[34m[05/10/2020 09:35:01 INFO 140257138956096] envs={'ECS_CONTAINER_METADATA_URI': 'http://169.254.170.2/v3/f3af5ae2-b47f-45d3-83e2-0922a89d765a', 'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION_VERSION': '2', 'DMLC_NUM_WORKER': '1', 'DMLC_PS_ROOT_PORT': '9000', 
'PATH': '/opt/amazon/bin:/usr/local/nvidia/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/amazon/bin:/opt/amazon/bin', 'SAGEMAKER_HTTP_PORT': '8080', 'HOME': '/root', 'PYTHONUNBUFFERED': 'TRUE', 'CANONICAL_ENVROOT': '/opt/amazon', 'LD_LIBRARY_PATH': '/opt/amazon/lib/python2.7/site-packages/cv2/../../../../lib:/usr/local/nvidia/lib64:/opt/amazon/lib', 'LANG': 'en_US.utf8', 'DMLC_INTERFACE': 'eth0', 'SHLVL': '1', 'DMLC_PS_ROOT_URI': '10.0.135.164', 'AWS_REGION': 'eu-west-1', 'NVIDIA_VISIBLE_DEVICES': 'void', 'TRAINING_JOB_NAME': 'pca-2020-05-10-09-32-14-469', 'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION': 'cpp', 'ENVROOT': '/opt/amazon', 'SAGEMAKER_DATA_PATH': '/opt/ml', 'NVIDIA_DRIVER_CAPABILITIES': 'compute,utility', 'NVIDIA_REQUIRE_CUDA': 'cuda>=9.0', 'OMP_NUM_THREADS': '2', 'HOSTNAME': 'ip-10-0-135-164.eu-west-1.compute.internal', 'AWS_CONTAINER_CREDENTIALS_RELATIVE_URI': '/v2/credentials/c75ddace-c577-4e16-ba3a-17fccfab741c', 'DMLC_ROLE': 'scheduler', 'PWD': '/', 'DMLC_NUM_SERVER': '1', 'TRAINING_JOB_ARN': 'arn:aws:sagemaker:eu-west-1:588771170433:training-job/pca-2020-05-10-09-32-14-469', 'AWS_EXECUTION_ENV': 'AWS_ECS_EC2'}\u001b[0m\n\u001b[34m[05/10/2020 09:35:01 INFO 140257138956096] Launching parameter server for role server\u001b[0m\n\u001b[34m[05/10/2020 09:35:01 INFO 140257138956096] {'ECS_CONTAINER_METADATA_URI': 'http://169.254.170.2/v3/f3af5ae2-b47f-45d3-83e2-0922a89d765a', 'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION_VERSION': '2', 'PATH': '/opt/amazon/bin:/usr/local/nvidia/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/amazon/bin:/opt/amazon/bin', 'SAGEMAKER_HTTP_PORT': '8080', 'HOME': '/root', 'PYTHONUNBUFFERED': 'TRUE', 'CANONICAL_ENVROOT': '/opt/amazon', 'LD_LIBRARY_PATH': '/opt/amazon/lib/python2.7/site-packages/cv2/../../../../lib:/usr/local/nvidia/lib64:/opt/amazon/lib', 'LANG': 'en_US.utf8', 'DMLC_INTERFACE': 'eth0', 'SHLVL': '1', 'AWS_REGION': 'eu-west-1', 'NVIDIA_VISIBLE_DEVICES': 'void', 'TRAINING_JOB_NAME': 'pca-2020-05-10-09-32-14-469', 'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION': 'cpp', 'ENVROOT': '/opt/amazon', 'SAGEMAKER_DATA_PATH': '/opt/ml', 'NVIDIA_DRIVER_CAPABILITIES': 'compute,utility', 'NVIDIA_REQUIRE_CUDA': 'cuda>=9.0', 'OMP_NUM_THREADS': '2', 'HOSTNAME': 'ip-10-0-135-164.eu-west-1.compute.internal', 'AWS_CONTAINER_CREDENTIALS_RELATIVE_URI': '/v2/credentials/c75ddace-c577-4e16-ba3a-17fccfab741c', 'PWD': '/', 'TRAINING_JOB_ARN': 'arn:aws:sagemaker:eu-west-1:588771170433:training-job/pca-2020-05-10-09-32-14-469', 'AWS_EXECUTION_ENV': 'AWS_ECS_EC2'}\u001b[0m\n\u001b[34m[05/10/2020 09:35:01 INFO 140257138956096] envs={'ECS_CONTAINER_METADATA_URI': 'http://169.254.170.2/v3/f3af5ae2-b47f-45d3-83e2-0922a89d765a', 'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION_VERSION': '2', 'DMLC_NUM_WORKER': '1', 'DMLC_PS_ROOT_PORT': '9000', 'PATH': '/opt/amazon/bin:/usr/local/nvidia/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/amazon/bin:/opt/amazon/bin', 'SAGEMAKER_HTTP_PORT': '8080', 'HOME': '/root', 'PYTHONUNBUFFERED': 'TRUE', 'CANONICAL_ENVROOT': '/opt/amazon', 'LD_LIBRARY_PATH': '/opt/amazon/lib/python2.7/site-packages/cv2/../../../../lib:/usr/local/nvidia/lib64:/opt/amazon/lib', 'LANG': 'en_US.utf8', 'DMLC_INTERFACE': 'eth0', 'SHLVL': '1', 'DMLC_PS_ROOT_URI': '10.0.135.164', 'AWS_REGION': 'eu-west-1', 'NVIDIA_VISIBLE_DEVICES': 'void', 'TRAINING_JOB_NAME': 'pca-2020-05-10-09-32-14-469', 'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION': 'cpp', 'ENVROOT': '/opt/amazon', 'SAGEMAKER_DATA_PATH': '/opt/ml', 
'NVIDIA_DRIVER_CAPABILITIES': 'compute,utility', 'NVIDIA_REQUIRE_CUDA': 'cuda>=9.0', 'OMP_NUM_THREADS': '2', 'HOSTNAME': 'ip-10-0-135-164.eu-west-1.compute.internal', 'AWS_CONTAINER_CREDENTIALS_RELATIVE_URI': '/v2/credentials/c75ddace-c577-4e16-ba3a-17fccfab741c', 'DMLC_ROLE': 'server', 'PWD': '/', 'DMLC_NUM_SERVER': '1', 'TRAINING_JOB_ARN': 'arn:aws:sagemaker:eu-west-1:588771170433:training-job/pca-2020-05-10-09-32-14-469', 'AWS_EXECUTION_ENV': 'AWS_ECS_EC2'}\u001b[0m\n\u001b[34m[05/10/2020 09:35:01 INFO 140257138956096] Environment: {'ECS_CONTAINER_METADATA_URI': 'http://169.254.170.2/v3/f3af5ae2-b47f-45d3-83e2-0922a89d765a', 'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION_VERSION': '2', 'DMLC_PS_ROOT_PORT': '9000', 'DMLC_NUM_WORKER': '1', 'SAGEMAKER_HTTP_PORT': '8080', 'PATH': '/opt/amazon/bin:/usr/local/nvidia/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/amazon/bin:/opt/amazon/bin', 'PYTHONUNBUFFERED': 'TRUE', 'CANONICAL_ENVROOT': '/opt/amazon', 'LD_LIBRARY_PATH': '/opt/amazon/lib/python2.7/site-packages/cv2/../../../../lib:/usr/local/nvidia/lib64:/opt/amazon/lib', 'LANG': 'en_US.utf8', 'DMLC_INTERFACE': 'eth0', 'SHLVL': '1', 'DMLC_PS_ROOT_URI': '10.0.135.164', 'AWS_REGION': 'eu-west-1', 'NVIDIA_VISIBLE_DEVICES': 'void', 'TRAINING_JOB_NAME': 'pca-2020-05-10-09-32-14-469', 'HOME': '/root', 'PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION': 'cpp', 'ENVROOT': '/opt/amazon', 'SAGEMAKER_DATA_PATH': '/opt/ml', 'NVIDIA_DRIVER_CAPABILITIES': 'compute,utility', 'NVIDIA_REQUIRE_CUDA': 'cuda>=9.0', 'OMP_NUM_THREADS': '2', 'HOSTNAME': 'ip-10-0-135-164.eu-west-1.compute.internal', 'AWS_CONTAINER_CREDENTIALS_RELATIVE_URI': '/v2/credentials/c75ddace-c577-4e16-ba3a-17fccfab741c', 'DMLC_ROLE': 'worker', 'PWD': '/', 'DMLC_NUM_SERVER': '1', 'TRAINING_JOB_ARN': 'arn:aws:sagemaker:eu-west-1:588771170433:training-job/pca-2020-05-10-09-32-14-469', 'AWS_EXECUTION_ENV': 'AWS_ECS_EC2'}\u001b[0m\n\u001b[34mProcess 61 is a shell:scheduler.\u001b[0m\n\u001b[34mProcess 70 is a shell:server.\u001b[0m\n\u001b[34mProcess 1 is a worker.\u001b[0m\n\u001b[34m[05/10/2020 09:35:01 INFO 140257138956096] Using default worker.\u001b[0m\n\u001b[34m[05/10/2020 09:35:01 INFO 140257138956096] Loaded iterator creator application/x-recordio-protobuf for content type ('application/x-recordio-protobuf', '1.0')\u001b[0m\n\u001b[34m[05/10/2020 09:35:01 INFO 140257138956096] Loaded iterator creator application/x-labeled-vector-protobuf for content type ('application/x-labeled-vector-protobuf', '1.0')\u001b[0m\n\u001b[34m[05/10/2020 09:35:01 INFO 140257138956096] Loaded iterator creator protobuf for content type ('protobuf', '1.0')\u001b[0m\n\u001b[34m[05/10/2020 09:35:01 INFO 140257138956096] Create Store: dist_sync\u001b[0m\n" ] ], [ [ "## Accessing the PCA Model Attributes\n\nAfter the model is trained, we can access the underlying model parameters.\n\n### Unzip the Model Details\n\nNow that the training job is complete, you can find the job under **Jobs** in the **Training** subsection in the Amazon SageMaker console. You can find the job name listed in the training jobs. Use that job name in the following code to specify which model to examine.\n\nModel artifacts are stored in S3 as a TAR file; a compressed file in the output path we specified + 'output/model.tar.gz'. 
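> *Side note: the extraction below shells out with `os.system`; if you prefer to stay in Python, the standard-library `tarfile` module does the same job. An optional, equivalent sketch (run it after the download in the next cell):*

```python
import tarfile

# equivalent to `tar -zxvf model.tar.gz`, without shelling out
with tarfile.open('model.tar.gz', 'r:gz') as tar:
    tar.extractall()
```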
The artifacts stored here can be used to deploy a trained model.", "_____no_output_____" ] ], [ [ "# Get the name of the training job, it's suggested that you copy-paste\n# from the notebook or from a specific job in the AWS console\ntraining_job_name='pca-2020-05-10-09-32-14-469'\n\n# where the model is saved, by default\nmodel_key = os.path.join(prefix, training_job_name, 'output/model.tar.gz')\nprint(model_key)\n\n# download and unzip model\nboto3.resource('s3').Bucket(bucket_name).download_file(model_key, 'model.tar.gz')\n\n# unzipping as model_algo-1\nos.system('tar -zxvf model.tar.gz')\nos.system('unzip model_algo-1')", "counties/pca-2020-05-10-09-32-14-469/output/model.tar.gz\n" ] ], [ [ "### MXNet Array\n\nMany of the Amazon SageMaker algorithms use MXNet for computational speed, including PCA, and so the model artifacts are stored as an array. After the model is unzipped and decompressed, we can load the array using MXNet.\n\nYou can take a look at the MXNet [documentation, here](https://aws.amazon.com/mxnet/).", "_____no_output_____" ] ], [ [ "import mxnet as mx\n\n# loading the unzipped artifacts\npca_model_params = mx.ndarray.load('model_algo-1')\n\n# what are the params\nprint(pca_model_params)", "{'s': \n[1.7896362e-02 3.0864021e-02 3.2130770e-02 3.5486195e-02 9.4831578e-02\n 1.2699370e-01 4.0288666e-01 1.4084760e+00 1.5100485e+00 1.5957943e+00\n 1.7783760e+00 2.1662524e+00 2.2966361e+00 2.3856051e+00 2.6954880e+00\n 2.8067985e+00 3.0175958e+00 3.3952675e+00 3.5731301e+00 3.6966958e+00\n 4.1890211e+00 4.3457499e+00 4.5410376e+00 5.0189657e+00 5.5786467e+00\n 5.9809699e+00 6.3925138e+00 7.6952214e+00 7.9913125e+00 1.0180052e+01\n 1.1718245e+01 1.3035975e+01 1.9592180e+01]\n<NDArray 33 @cpu(0)>, 'v': \n[[ 2.46869749e-03 2.56468095e-02 2.50773830e-03 ... -7.63925165e-02\n 1.59879066e-02 5.04589686e-03]\n [-2.80601848e-02 -6.86634064e-01 -1.96283013e-02 ... -7.59587288e-02\n 1.57304872e-02 4.95312130e-03]\n [ 3.25766727e-02 7.17300594e-01 2.40726061e-02 ... -7.68136829e-02\n 1.62378680e-02 5.13597298e-03]\n ...\n [ 1.12151138e-01 -1.17030945e-02 -2.88011521e-01 ... 1.39890045e-01\n -3.09406728e-01 -6.34506866e-02]\n [ 2.99992133e-02 -3.13433539e-03 -7.63589665e-02 ... 4.17341813e-02\n -7.06735924e-02 -1.42857227e-02]\n [ 7.33537527e-05 3.01008171e-04 -8.00925500e-06 ... 6.97060227e-02\n 1.20169498e-01 2.33626723e-01]]\n<NDArray 34x33 @cpu(0)>, 'mean': \n[[0.00988273 0.00986636 0.00989863 0.11017046 0.7560245 0.10094159\n 0.0186819 0.02940491 0.0064698 0.01154038 0.31539047 0.1222766\n 0.3030056 0.08220861 0.256217 0.2964254 0.28914267 0.40191284\n 0.57868284 0.2854676 0.28294644 0.82774544 0.34378946 0.01576072\n 0.04649627 0.04115358 0.12442778 0.47014 0.00980645 0.7608103\n 0.19442631 0.21674445 0.0294168 0.22177474]]\n<NDArray 1x34 @cpu(0)>}\n" ] ], [ [ "## PCA Model Attributes\n\nThree types of model attributes are contained within the PCA model.\n\n* **mean**: The mean that was subtracted from a component in order to center it.\n* **v**: The makeup of the principal components; (same as ‘components_’ in an sklearn PCA model).\n* **s**: The singular values of the components for the PCA transformation. This does not exactly give the % variance from the original feature space, but can give the % variance from the projected feature space.\n \nWe are only interested in v and s. \n\nFrom s, we can get an approximation of the data variance that is covered in the first `n` principal components. 
The approximate explained variance is given by the formula: the sum of squared s values for all top n components over the sum over squared s values for _all_ components:\n\n\\begin{equation*}\n\\frac{\\sum_{n}^{ } s_n^2}{\\sum s^2}\n\\end{equation*}\n\nFrom v, we can learn more about the combinations of original features that make up each principal component.\n", "_____no_output_____" ] ], [ [ "# get selected params\ns=pd.DataFrame(pca_model_params['s'].asnumpy())\nv=pd.DataFrame(pca_model_params['v'].asnumpy())", "_____no_output_____" ] ], [ [ "## Data Variance\n\nOur current PCA model creates 33 principal components, but when we create new dimensionality-reduced training data, we'll only select a few, top n components to use. To decide how many top components to include, it's helpful to look at how much **data variance** the components capture. For our original, high-dimensional data, 34 features captured 100% of our data variance. If we discard some of these higher dimensions, we will lower the amount of variance we can capture.\n\n### Tradeoff: dimensionality vs. data variance\n\nAs an illustrative example, say we have original data in three dimensions. So, three dimensions capture 100% of our data variance; these dimensions cover the entire spread of our data. The below images are taken from the PhD thesis, [“Approaches to analyse and interpret biological profile data”](https://publishup.uni-potsdam.de/opus4-ubp/frontdoor/index/index/docId/696) by Matthias Scholz, (2006, University of Potsdam, Germany).\n\n<img src='notebook_ims/3d_original_data.png' width=35% />\n\nNow, you may also note that most of this data seems related; it falls close to a 2D plane, and just by looking at the spread of the data, we can visualize that the original, three dimensions have some correlation. So, we can instead choose to create two new dimensions, made up of linear combinations of the original, three dimensions. These dimensions are represented by the two axes/lines, centered in the data. \n\n<img src='notebook_ims/pca_2d_dim_reduction.png' width=70% />\n\nIf we project this in a new, 2D space, we can see that we still capture most of the original data variance using *just* two dimensions. There is a tradeoff between the amount of variance we can capture and the number of component-dimensions we use to represent our data.\n\nWhen we select the top n components to use in a new data model, we'll typically want to include enough components to capture about 80-90% of the original data variance. In this project, we are looking at generalizing over a lot of data and we'll aim for about 80% coverage.", "_____no_output_____" ], [ "**Note**: The _top_ principal components, with the largest s values, are actually at the end of the s DataFrame. Let's print out the s values for the top n, principal components.", "_____no_output_____" ] ], [ [ "# looking at top 5 components\nn_principal_components = 5\n\nstart_idx = N_COMPONENTS - n_principal_components # 33-n\n\n# print a selection of s\nprint(s.iloc[start_idx:, :])", " 0\n28 7.991313\n29 10.180052\n30 11.718245\n31 13.035975\n32 19.592180\n" ] ], [ [ "### EXERCISE: Calculate the explained variance\n\nIn creating new training data, you'll want to choose the top n principal components that account for at least 80% data variance. \n\nComplete a function, `explained_variance` that takes in the entire array `s` and a number of top principal components to consider. Then return the approximate, explained variance for those top n components. 
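> *A quick numeric illustration, using made-up singular values rather than the ones from this model: if `s = [1, 2, 5]` and you keep only the single top component, the captured variance would be*

\begin{equation*}
\frac{5^2}{1^2 + 2^2 + 5^2} = \frac{25}{30} \approx 0.83
\end{equation*}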
\n\nFor example, to calculate the explained variance for the top 5 components, calculate s squared for *each* of the top 5 components, add those up and normalize by the sum of *all* squared s values, according to this formula:\n\n\\begin{equation*}\n\\frac{\\sum_{5}^{ } s_n^2}{\\sum s^2}\n\\end{equation*}\n\n> Using this function, you should be able to answer the **question**: What is the smallest number of principal components that captures at least 80% of the total variance in the dataset?", "_____no_output_____" ] ], [ [ "# Calculate the explained variance for the top n principal components\n# you may assume you have access to the global var N_COMPONENTS\ndef explained_variance(s, n_top_components):\n '''Calculates the approx. data variance that n_top_components captures.\n :param s: A dataframe of singular values for top components; \n the top value is in the last row.\n :param n_top_components: An integer, the number of top components to use.\n :return: The expected data variance covered by the n_top_components.'''\n \n start_idx = N_COMPONENTS - n_top_components ## 33-3 = 30, for example\n # calculate approx variance\n exp_variance = np.square(s.iloc[start_idx:,:]).sum()/np.square(s).sum()\n \n return exp_variance[0]\n", "_____no_output_____" ] ], [ [ "### Test Cell\n\nTest out your own code by seeing how it responds to different inputs; does it return a reasonable value for the single, top component? What about for the top 5 components?", "_____no_output_____" ] ], [ [ "# test cell\nn_top_components = 7 # select a value for the number of top components\n\n# calculate the explained variance\nexp_variance = explained_variance(s, n_top_components)\nprint('Explained variance: ', exp_variance)", "Explained variance: 0.80167246\n" ] ], [ [ "As an example, you should see that the top principal component accounts for about 32% of our data variance! Next, you may be wondering what makes up this (and other components); what linear combination of features make these components so influential in describing the spread of our data?\n\nBelow, let's take a look at our original features and use that as a reference.", "_____no_output_____" ] ], [ [ "# features\nfeatures_list = counties_scaled.columns.values\nprint('Features: \\n', features_list)", "Features: \n ['TotalPop' 'Men' 'Women' 'Hispanic' 'White' 'Black' 'Native' 'Asian'\n 'Pacific' 'Citizen' 'Income' 'IncomeErr' 'IncomePerCap' 'IncomePerCapErr'\n 'Poverty' 'ChildPoverty' 'Professional' 'Service' 'Office' 'Construction'\n 'Production' 'Drive' 'Carpool' 'Transit' 'Walk' 'OtherTransp'\n 'WorkAtHome' 'MeanCommute' 'Employed' 'PrivateWork' 'PublicWork'\n 'SelfEmployed' 'FamilyWork' 'Unemployment']\n" ] ], [ [ "## Component Makeup\n\nWe can now examine the makeup of each PCA component based on **the weightings of the original features that are included in the component**. 
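> *For intuition about what these weightings mean: up to the mean-centering described earlier, a county's value along component k is the dot product of its normalized feature vector with that component's weight vector,*

\begin{equation*}
c_k = (x - \mu) \cdot v_k
\end{equation*}

> *so the features with the largest positive or negative weights are the ones that move a county furthest along that component.*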
The following code shows the feature-level makeup of the first component.\n\nNote that the components are again ordered from smallest to largest and so I am getting the correct rows by calling N_COMPONENTS-1 to get the top, 1, component.", "_____no_output_____" ] ], [ [ "import seaborn as sns\n\ndef display_component(v, features_list, component_num, n_weights=10):\n \n # get index of component (last row - component_num)\n row_idx = N_COMPONENTS-component_num\n\n # get the list of weights from a row in v, dataframe\n v_1_row = v.iloc[:, row_idx]\n v_1 = np.squeeze(v_1_row.values)\n\n # match weights to features in counties_scaled dataframe, using list comporehension\n comps = pd.DataFrame(list(zip(v_1, features_list)), \n columns=['weights', 'features'])\n\n # we'll want to sort by the largest n_weights\n # weights can be neg/pos and we'll sort by magnitude\n comps['abs_weights']=comps['weights'].apply(lambda x: np.abs(x))\n sorted_weight_data = comps.sort_values('abs_weights', ascending=False).head(n_weights)\n\n # display using seaborn\n ax=plt.subplots(figsize=(10,6))\n ax=sns.barplot(data=sorted_weight_data, \n x=\"weights\", \n y=\"features\", \n palette=\"Blues_d\")\n ax.set_title(\"PCA Component Makeup, Component #\" + str(component_num))\n plt.show()\n", "_____no_output_____" ], [ "# display makeup of first component\nnum=2\ndisplay_component(v, counties_scaled.columns.values, component_num=num, n_weights=10)", "_____no_output_____" ] ], [ [ "# Deploying the PCA Model\n\nWe can now deploy this model and use it to make \"predictions\". Instead of seeing what happens with some test data, we'll actually want to pass our training data into the deployed endpoint to create principal components for each data point. \n\nRun the cell below to deploy/host this model on an instance_type that we specify.", "_____no_output_____" ] ], [ [ "%%time\n# this takes a little while, around 7mins\npca_predictor = pca_SM.deploy(initial_instance_count=1, \n instance_type='ml.t2.medium')", "-----------------!CPU times: user 279 ms, sys: 27.5 ms, total: 307 ms\nWall time: 8min 32s\n" ] ], [ [ "We can pass the original, numpy dataset to the model and transform the data using the model we created. 
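> *A practical aside: the ~3,000 rows here fit comfortably in a single request, but for a much larger dataset you may want to send rows to the endpoint in chunks. A small, optional sketch of that pattern (the number of chunks is arbitrary):*

```python
import numpy as np

# optional: send the training data to the endpoint in smaller batches
train_pca = []
for batch in np.array_split(train_data_np, 10):
    train_pca.extend(pca_predictor.predict(batch))
```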
Then we can take the largest n components to reduce the dimensionality of our data.", "_____no_output_____" ] ], [ [ "# pass np train data to the PCA model\ntrain_pca = pca_predictor.predict(train_data_np)", "_____no_output_____" ], [ "# check out the first item in the produced training features\ndata_idx = 0\nprint(train_pca[data_idx])", "label {\n key: \"projection\"\n value {\n float32_tensor {\n values: 0.0002009272575378418\n values: 0.0002455431967973709\n values: -0.0005782842636108398\n values: -0.0007815659046173096\n values: -0.00041911262087523937\n values: -0.0005133943632245064\n values: -0.0011316537857055664\n values: 0.0017268601804971695\n values: -0.005361668765544891\n values: -0.009066537022590637\n values: -0.008141040802001953\n values: -0.004735097289085388\n values: -0.00716288760304451\n values: 0.0003725700080394745\n values: -0.01208949089050293\n values: 0.02134685218334198\n values: 0.0009293854236602783\n values: 0.002417147159576416\n values: -0.0034637749195098877\n values: 0.01794189214706421\n values: -0.01639425754547119\n values: 0.06260128319263458\n values: 0.06637358665466309\n values: 0.002479255199432373\n values: 0.10011336207389832\n values: -0.1136140376329422\n values: 0.02589476853609085\n values: 0.04045158624649048\n values: -0.01082391943782568\n values: 0.1204797774553299\n values: -0.0883558839559555\n values: 0.16052711009979248\n values: -0.06027412414550781\n }\n }\n}\n\n" ] ], [ [ "### EXERCISE: Create a transformed DataFrame\n\nFor each of our data points, get the top n component values from the list of component data points, returned by our predictor above, and put those into a new DataFrame.\n\nYou should end up with a DataFrame that looks something like the following:\n```\n c_1\t c_2\t c_3\t c_4\t c_5\t ...\nAlabama-Autauga\t-0.060274\t0.160527\t-0.088356\t 0.120480\t-0.010824\t...\nAlabama-Baldwin\t-0.149684\t0.185969\t-0.145743\t-0.023092\t-0.068677\t...\nAlabama-Barbour\t0.506202\t 0.296662\t 0.146258\t 0.297829\t0.093111\t...\n...\n```", "_____no_output_____" ] ], [ [ "# create dimensionality-reduced data\ndef create_transformed_df(train_pca, counties_scaled, n_top_components):\n ''' Return a dataframe of data points with component features. \n The dataframe should be indexed by State-County and contain component values.\n :param train_pca: A list of pca training data, returned by a PCA model.\n :param counties_scaled: A dataframe of normalized, original features.\n :param n_top_components: An integer, the number of top components to use.\n :return: A dataframe, indexed by State-County, with n_top_component values as columns. \n '''\n # create new dataframe to add data to\n counties_transformed=pd.DataFrame()\n\n # for each of our new, transformed data points\n # append the component values to the dataframe\n for data in train_pca:\n # get component values for each data point\n components=data.label['projection'].float32_tensor.values\n counties_transformed=counties_transformed.append([list(components)])\n\n # index by county, just like counties_scaled\n counties_transformed.index=counties_scaled.index\n\n # keep only the top n components\n start_idx = N_COMPONENTS - n_top_components\n counties_transformed = counties_transformed.iloc[:,start_idx:]\n \n # reverse columns, component order \n return counties_transformed.iloc[:, ::-1]\n ", "_____no_output_____" ] ], [ [ "Now we can create a dataset where each county is described by the top n principle components that we analyzed earlier. 
Each of these components is a linear combination of the original feature space. We can interpret each of these components by analyzing the makeup of the component, shown previously.", "_____no_output_____" ] ], [ [ "# specify top n\ntop_n = 7\n\n# call your function and create a new dataframe\ncounties_transformed = create_transformed_df(train_pca, counties_scaled, n_top_components=top_n)\n\n# add descriptive columns\nPCA_list=['c_1', 'c_2', 'c_3', 'c_4', 'c_5', 'c_6', 'c_7']\ncounties_transformed.columns=PCA_list \n\n# print result\ncounties_transformed.head()", "_____no_output_____" ] ], [ [ "### Delete the Endpoint!\n\nNow that we've deployed the model and created our new, transformed training data, we no longer need the PCA endpoint.\n\nAs a clean up step, you should always delete your endpoints after you are done using them (and if you do not plan to deploy them to a website, for example).", "_____no_output_____" ] ], [ [ "# delete predictor endpoint\nsession.delete_endpoint(pca_predictor.endpoint)", "_____no_output_____" ] ], [ [ "---\n# Population Segmentation \n\nNow, you’ll use the unsupervised clustering algorithm, k-means, to segment counties using their PCA attributes, which are in the transformed DataFrame we just created. K-means is a clustering algorithm that identifies clusters of similar data points based on their component makeup. Since we have ~3000 counties and 34 attributes in the original dataset, the large feature space may have made it difficult to cluster the counties effectively. Instead, we have reduced the feature space to 7 PCA components, and we’ll cluster on this transformed dataset.", "_____no_output_____" ], [ "### EXERCISE: Define a k-means model\n\nYour task will be to instantiate a k-means model. A `KMeans` estimator requires a number of parameters to be instantiated, which allow us to specify the type of training instance to use, and the model hyperparameters. \n\nYou can read about the required parameters, in the [`KMeans` documentation](https://sagemaker.readthedocs.io/en/stable/kmeans.html); note that not all of the possible parameters are required.\n", "_____no_output_____" ], [ "### Choosing a \"Good\" K\n\nOne method for choosing a \"good\" k, is to choose based on empirical data. A bad k would be one so *high* that only one or two very close data points are near it, and another bad k would be one so *low* that data points are really far away from the centers.\n\nYou want to select a k such that data points in a single cluster are close together but that there are enough clusters to effectively separate the data. You can approximate this separation by measuring how close your data points are to each cluster center; the average centroid distance between cluster points and a centroid. After trying several values for k, the centroid distance typically reaches some \"elbow\"; it stops decreasing at a sharp rate and this indicates a good value of k. The graph below indicates the average centroid distance for value of k between 5 and 12.\n\n<img src='notebook_ims/elbow_graph.png' width=50% />\n\nA distance elbow can be seen around 8 when the distance starts to increase and then decrease at a slower rate. 
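> *If you want to produce a curve like this yourself, one inexpensive option is to approximate it locally with scikit-learn on the transformed component data, rather than launching a separate SageMaker training job for every value of k. A rough, illustrative sketch (it assumes the `counties_transformed` DataFrame built above; the exact distances will not match the graph shown here):*

```python
from sklearn.cluster import KMeans as SkKMeans  # aliased so it doesn't clash with sagemaker.KMeans below
import matplotlib.pyplot as plt

ks = list(range(5, 13))
avg_sq_distances = []
for k in ks:
    sk_model = SkKMeans(n_clusters=k, random_state=1).fit(counties_transformed.values)
    # inertia_ is the total squared distance to the closest center; average it per county
    avg_sq_distances.append(sk_model.inertia_ / len(counties_transformed))

plt.plot(ks, avg_sq_distances, marker='o')
plt.xlabel('k')
plt.ylabel('avg. squared centroid distance')
plt.show()
```

> *However the curve is produced, the feature to look for is the elbow near k = 8.*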
This indicates that there is enough separation to distinguish the data points in each cluster, but also that you included enough clusters so that the data points aren’t *extremely* far away from each cluster.", "_____no_output_____" ] ], [ [ "# define a KMeans estimator\nfrom sagemaker import KMeans\n\nNUM_CLUSTERS = 8\n\nkmeans = KMeans(role=role,\n train_instance_count=1,\n train_instance_type='ml.c4.xlarge',\n output_path=output_path, # using the same output path as was defined, earlier \n k=NUM_CLUSTERS)\n", "_____no_output_____" ] ], [ [ "### EXERCISE: Create formatted, k-means training data\n\nJust as before, you should convert the `counties_transformed` df into a numpy array and then into a RecordSet. This is the required format for passing training data into a `KMeans` model.", "_____no_output_____" ] ], [ [ "# convert the transformed dataframe into record_set data\nkmeans_train_data_np = counties_transformed.values.astype('float32')\nkmeans_formatted_data = kmeans.record_set(kmeans_train_data_np)", "_____no_output_____" ] ], [ [ "### EXERCISE: Train the k-means model\n\nPass in the formatted training data and train the k-means model.", "_____no_output_____" ] ], [ [ "%%time\n# train kmeans\nkmeans.fit(kmeans_formatted_data)", "2020-05-10 09:48:20 Starting - Starting the training job...\n2020-05-10 09:48:22 Starting - Launching requested ML instances...\n2020-05-10 09:49:19 Starting - Preparing the instances for training.........\n2020-05-10 09:50:44 Downloading - Downloading input data\n2020-05-10 09:50:44 Training - Downloading the training image...\n2020-05-10 09:51:10 Uploading - Uploading generated training model\u001b[34mDocker entrypoint called with argument(s): train\u001b[0m\n\u001b[34mRunning default environment configuration script\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] Reading default configuration from /opt/amazon/lib/python2.7/site-packages/algorithm/resources/default-input.json: {u'_enable_profiler': u'false', u'_tuning_objective_metric': u'', u'_num_gpus': u'auto', u'local_lloyd_num_trials': u'auto', u'_log_level': u'info', u'_kvstore': u'auto', u'local_lloyd_init_method': u'kmeans++', u'force_dense': u'true', u'epochs': u'1', u'init_method': u'random', u'local_lloyd_tol': u'0.0001', u'local_lloyd_max_iter': u'300', u'_disable_wait_to_read': u'false', u'extra_center_factor': u'auto', u'eval_metrics': u'[\"msd\"]', u'_num_kv_servers': u'1', u'mini_batch_size': u'5000', u'half_life_time_size': u'0', u'_num_slices': u'1'}\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] Reading provided configuration from /opt/ml/input/config/hyperparameters.json: {u'feature_dim': u'7', u'k': u'8', u'force_dense': u'True'}\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] Final configuration: {u'_tuning_objective_metric': u'', u'extra_center_factor': u'auto', u'local_lloyd_init_method': u'kmeans++', u'force_dense': u'True', u'epochs': u'1', u'feature_dim': u'7', u'local_lloyd_tol': u'0.0001', u'_disable_wait_to_read': u'false', u'eval_metrics': u'[\"msd\"]', u'_num_kv_servers': u'1', u'mini_batch_size': u'5000', u'_enable_profiler': u'false', u'_num_gpus': u'auto', u'local_lloyd_num_trials': u'auto', u'_log_level': u'info', u'init_method': u'random', u'half_life_time_size': u'0', u'local_lloyd_max_iter': u'300', u'_kvstore': u'auto', u'k': u'8', u'_num_slices': u'1'}\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 WARNING 140623203432256] Loggers have already been setup.\u001b[0m\n\u001b[34mProcess 1 is a 
worker.\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] Using default worker.\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] Loaded iterator creator application/x-recordio-protobuf for content type ('application/x-recordio-protobuf', '1.0')\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] Create Store: local\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] nvidia-smi took: 0.0251808166504 secs to identify 0 gpus\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] Number of GPUs being used: 0\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] Setting up with params: {u'_tuning_objective_metric': u'', u'extra_center_factor': u'auto', u'local_lloyd_init_method': u'kmeans++', u'force_dense': u'True', u'epochs': u'1', u'feature_dim': u'7', u'local_lloyd_tol': u'0.0001', u'_disable_wait_to_read': u'false', u'eval_metrics': u'[\"msd\"]', u'_num_kv_servers': u'1', u'mini_batch_size': u'5000', u'_enable_profiler': u'false', u'_num_gpus': u'auto', u'local_lloyd_num_trials': u'auto', u'_log_level': u'info', u'init_method': u'random', u'half_life_time_size': u'0', u'local_lloyd_max_iter': u'300', u'_kvstore': u'auto', u'k': u'8', u'_num_slices': u'1'}\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] 'extra_center_factor' was set to 'auto', evaluated to 10.\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] Number of GPUs being used: 0\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] number of center slices 1\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 WARNING 140623203432256] Batch size 5000 is bigger than the first batch data. Effective batch size used to initialize is 3218\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 1, \"sum\": 1.0, \"min\": 1}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 1, \"sum\": 1.0, \"min\": 1}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 3218, \"sum\": 3218.0, \"min\": 3218}, \"Total Batches Seen\": {\"count\": 1, \"max\": 1, \"sum\": 1.0, \"min\": 1}, \"Total Records Seen\": {\"count\": 1, \"max\": 3218, \"sum\": 3218.0, \"min\": 3218}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 3218, \"sum\": 3218.0, \"min\": 3218}, \"Reset Count\": {\"count\": 1, \"max\": 0, \"sum\": 0.0, \"min\": 0}}, \"EndTime\": 1589104267.281982, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"init_train_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"AWS/KMeansWebscale\"}, \"StartTime\": 1589104267.28195}\n\u001b[0m\n\u001b[34m[2020-05-10 09:51:07.282] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 0, \"duration\": 37, \"num_examples\": 1, \"num_bytes\": 167336}\u001b[0m\n\u001b[34m[2020-05-10 09:51:07.335] [tensorio] [info] epoch_stats={\"data_pipeline\": \"/opt/ml/input/data/train\", \"epoch\": 1, \"duration\": 51, \"num_examples\": 1, \"num_bytes\": 167336}\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] processed a total of 3218 examples\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] #progress_metric: host=algo-1, completed 100 % of epochs\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"Max Batches Seen Between Resets\": {\"count\": 1, \"max\": 1, \"sum\": 1.0, \"min\": 1}, \"Number of Batches Since Last Reset\": {\"count\": 1, \"max\": 1, \"sum\": 1.0, \"min\": 1}, \"Number of Records Since Last Reset\": {\"count\": 1, \"max\": 3218, \"sum\": 3218.0, \"min\": 3218}, \"Total 
Batches Seen\": {\"count\": 1, \"max\": 2, \"sum\": 2.0, \"min\": 2}, \"Total Records Seen\": {\"count\": 1, \"max\": 6436, \"sum\": 6436.0, \"min\": 6436}, \"Max Records Seen Between Resets\": {\"count\": 1, \"max\": 3218, \"sum\": 3218.0, \"min\": 3218}, \"Reset Count\": {\"count\": 1, \"max\": 1, \"sum\": 1.0, \"min\": 1}}, \"EndTime\": 1589104267.335953, \"Dimensions\": {\"Host\": \"algo-1\", \"Meta\": \"training_data_iter\", \"Operation\": \"training\", \"Algorithm\": \"AWS/KMeansWebscale\", \"epoch\": 0}, \"StartTime\": 1589104267.282372}\n\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] #throughput_metric: host=algo-1, train throughput=59897.6221249 records/second\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 WARNING 140623203432256] wait_for_all_workers will not sync workers since the kv store is not running distributed\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] shrinking 80 centers into 8\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] local kmeans attempt #0. Current mean square distance 0.065400\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] local kmeans attempt #1. Current mean square distance 0.063747\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] local kmeans attempt #2. Current mean square distance 0.064844\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] local kmeans attempt #3. Current mean square distance 0.062158\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] local kmeans attempt #4. Current mean square distance 0.066447\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] local kmeans attempt #5. Current mean square distance 0.064507\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] local kmeans attempt #6. Current mean square distance 0.065868\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] local kmeans attempt #7. Current mean square distance 0.062306\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] local kmeans attempt #8. Current mean square distance 0.061626\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] local kmeans attempt #9. Current mean square distance 0.066844\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] finished shrinking process. 
Mean Square Distance = 0\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] #quality_metric: host=algo-1, train msd <loss>=0.0616257153451\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] compute all data-center distances: inner product took: 39.9807%, (0.023814 secs)\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] collect from kv store took: 11.5756%, (0.006895 secs)\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] splitting centers key-value pair took: 11.4483%, (0.006819 secs)\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] compute all data-center distances: point norm took: 8.5235%, (0.005077 secs)\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] batch data loading with context took: 8.4466%, (0.005031 secs)\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] predict compute msd took: 7.8002%, (0.004646 secs)\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] gradient: one_hot took: 7.1217%, (0.004242 secs)\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] gradient: cluster size took: 2.2159%, (0.001320 secs)\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] gradient: cluster center took: 1.5142%, (0.000902 secs)\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] update state and report convergance took: 0.7521%, (0.000448 secs)\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] update set-up time took: 0.3610%, (0.000215 secs)\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] compute all data-center distances: center norm took: 0.2318%, (0.000138 secs)\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] predict minus dist took: 0.0284%, (0.000017 secs)\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] TOTAL took: 0.0595636367798\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] Number of GPUs being used: 0\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"finalize.time\": {\"count\": 1, \"max\": 317.5690174102783, \"sum\": 317.5690174102783, \"min\": 317.5690174102783}, \"initialize.time\": {\"count\": 1, \"max\": 32.84287452697754, \"sum\": 32.84287452697754, \"min\": 32.84287452697754}, \"model.serialize.time\": {\"count\": 1, \"max\": 0.15091896057128906, \"sum\": 0.15091896057128906, \"min\": 0.15091896057128906}, \"update.time\": {\"count\": 1, \"max\": 53.37214469909668, \"sum\": 53.37214469909668, \"min\": 53.37214469909668}, \"epochs\": {\"count\": 1, \"max\": 1, \"sum\": 1.0, \"min\": 1}, \"state.serialize.time\": {\"count\": 1, \"max\": 1.8079280853271484, \"sum\": 1.8079280853271484, \"min\": 1.8079280853271484}, \"_shrink.time\": {\"count\": 1, \"max\": 315.69600105285645, \"sum\": 315.69600105285645, \"min\": 315.69600105285645}}, \"EndTime\": 1589104267.655961, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/KMeansWebscale\"}, \"StartTime\": 1589104267.243674}\n\u001b[0m\n\u001b[34m[05/10/2020 09:51:07 INFO 140623203432256] Test data is not provided.\u001b[0m\n\u001b[34m#metrics {\"Metrics\": {\"totaltime\": {\"count\": 1, \"max\": 473.89698028564453, \"sum\": 473.89698028564453, \"min\": 473.89698028564453}, \"setuptime\": {\"count\": 1, \"max\": 12.415170669555664, \"sum\": 12.415170669555664, \"min\": 12.415170669555664}}, \"EndTime\": 1589104267.656309, \"Dimensions\": {\"Host\": \"algo-1\", \"Operation\": \"training\", \"Algorithm\": \"AWS/KMeansWebscale\"}, \"StartTime\": 1589104267.656055}\n\u001b[0m\n" ] ], [ [ "### 
EXERCISE: Deploy the k-means model\n\nDeploy the trained model to create a `kmeans_predictor`.\n", "_____no_output_____" ] ], [ [ "%%time\n# deploy the model to create a predictor\nkmeans_predictor = kmeans.deploy(initial_instance_count=1, \n instance_type='ml.t2.medium')", "---------------!CPU times: user 260 ms, sys: 14.3 ms, total: 275 ms\nWall time: 7min 31s\n" ] ], [ [ "### EXERCISE: Pass in the training data and assign predicted cluster labels\n\nAfter deploying the model, you can pass in the k-means training data, as a numpy array, and get resultant, predicted cluster labels for each data point.", "_____no_output_____" ] ], [ [ "# get the predicted clusters for all the kmeans training data\ncluster_info=kmeans_predictor.predict(kmeans_train_data_np)", "_____no_output_____" ] ], [ [ "## Exploring the resultant clusters\n\nThe resulting predictions should give you information about the cluster that each data point belongs to.\n\nYou should be able to answer the **question**: which cluster does a given data point belong to?", "_____no_output_____" ] ], [ [ "# print cluster info for first data point\ndata_idx = 0\n\nprint('County is: ', counties_transformed.index[data_idx])\nprint()\nprint(cluster_info[data_idx])", "County is: Alabama-Autauga\n\nlabel {\n key: \"closest_cluster\"\n value {\n float32_tensor {\n values: 0.0\n }\n }\n}\nlabel {\n key: \"distance_to_cluster\"\n value {\n float32_tensor {\n values: 0.28176048398017883\n }\n }\n}\n\n" ] ], [ [ "### Visualize the distribution of data over clusters\n\nGet the cluster labels for each of our data points (counties) and visualize the distribution of points over each cluster.", "_____no_output_____" ] ], [ [ "# get all cluster labels\ncluster_labels = [c.label['closest_cluster'].float32_tensor.values[0] for c in cluster_info]", "_____no_output_____" ], [ "# count up the points in each cluster\ncluster_df = pd.DataFrame(cluster_labels)[0].value_counts()\n\nprint(cluster_df)", "0.0 883\n6.0 637\n7.0 437\n1.0 429\n4.0 367\n5.0 190\n3.0 185\n2.0 90\nName: 0, dtype: int64\n" ], [ "# another method of visualizing the distribution\n# display a histogram of cluster counts\nax =plt.subplots(figsize=(6,3))\nax = plt.hist(cluster_labels, bins=8, range=(-0.5, 7.5), color='blue', rwidth=0.5)\n\ntitle=\"Histogram of Cluster Counts\"\nplt.title(title, fontsize=12)\nplt.show()", "_____no_output_____" ] ], [ [ "Now, you may be wondering, what do each of these clusters tell us about these data points? To improve explainability, we need to access the underlying model to get the cluster centers. These centers will help describe which features characterize each cluster.", "_____no_output_____" ], [ "### Delete the Endpoint!\n\nNow that you've deployed the k-means model and extracted the cluster labels for each data point, you no longer need the k-means endpoint.", "_____no_output_____" ] ], [ [ "# delete kmeans endpoint\nsession.delete_endpoint(kmeans_predictor.endpoint)", "_____no_output_____" ] ], [ [ "---\n# Model Attributes & Explainability\n\nExplaining the result of the modeling is an important step in making use of our analysis. 
By combining PCA and k-means, and the information contained in the model attributes within a SageMaker trained model, you can learn about a population and remark on some patterns you've found, based on the data.", "_____no_output_____" ], [ "### EXERCISE: Access the k-means model attributes\n\nExtract the k-means model attributes from where they are saved as a TAR file in an S3 bucket.\n\nYou'll need to access the model by the k-means training job name, and then unzip the file into `model_algo-1`. Then you can load that file using MXNet, as before.", "_____no_output_____" ] ], [ [ "# download and unzip the kmeans model file\nkmeans_job_name = 'kmeans-2020-05-10-09-48-20-293'\n\nmodel_key = os.path.join(prefix, kmeans_job_name, 'output/model.tar.gz')\n\n# download the model file\nboto3.resource('s3').Bucket(bucket_name).download_file(model_key, 'model.tar.gz')\nos.system('tar -zxvf model.tar.gz')\nos.system('unzip model_algo-1')\n", "_____no_output_____" ], [ "# get the trained kmeans params using mxnet\nkmeans_model_params = mx.ndarray.load('model_algo-1')\n\nprint(kmeans_model_params)", "[\n[[-1.9876230e-01 1.0740490e-01 -2.2987928e-04 -8.1070364e-02\n -4.4613130e-02 -4.2198751e-02 -6.1457786e-03]\n [ 3.2303393e-01 2.0507385e-01 4.4284604e-02 2.3280022e-01\n 7.5355567e-02 -4.6409063e-02 3.3777352e-02]\n [ 1.3336301e+00 -2.1398336e-01 -1.7093013e-01 -4.3430179e-01\n -1.3161512e-01 1.3510802e-01 1.7220131e-01]\n [ 1.3628066e-01 -3.2483786e-01 1.1368125e-01 1.4699332e-01\n -1.4515534e-01 1.5164236e-02 -1.2911958e-01]\n [-1.3143247e-01 3.5041921e-02 -3.8888896e-01 6.7485303e-02\n -2.8044736e-02 7.2303727e-02 -7.9605849e-03]\n [ 3.5180914e-01 -2.4120869e-01 -1.1533824e-01 -1.9837452e-01\n 1.6968910e-01 -1.2472682e-01 -5.9172992e-02]\n [-3.4084089e-02 9.5869534e-02 1.6421616e-01 -6.8643674e-02\n -1.4964260e-02 1.0375764e-01 -2.8086459e-02]\n [-2.5730219e-01 -2.7023512e-01 6.1645910e-02 4.1565642e-02\n 3.4234852e-02 -2.7624033e-02 5.3338524e-02]]\n<NDArray 8x7 @cpu(0)>]\n" ] ], [ [ "There is only 1 set of model parameters contained within the k-means model: the cluster centroid locations in PCA-transformed, component space.\n\n* **centroids**: The location of the centers of each cluster in component space, identified by the k-means algorithm. \n", "_____no_output_____" ] ], [ [ "# get all the centroids\ncluster_centroids=pd.DataFrame(kmeans_model_params[0].asnumpy())\ncluster_centroids.columns=counties_transformed.columns\n\ndisplay(cluster_centroids)", "_____no_output_____" ] ], [ [ "### Visualizing Centroids in Component Space\n\nYou can't visualize 7-dimensional centroids in space, but you can plot a heatmap of the centroids and their location in the transformed feature space. \n\nThis gives you insight into what characteristics define each cluster. Often with unsupervised learning, results are hard to interpret. This is one way to make use of the results of PCA + clustering techniques, together. Since you were able to examine the makeup of each PCA component, you can understand what each centroid represents in terms of the PCA components.", "_____no_output_____" ] ], [ [ "# generate a heatmap in component space, using the seaborn library\nplt.figure(figsize = (12,9))\nax = sns.heatmap(cluster_centroids.T, cmap = 'YlGnBu')\nax.set_xlabel(\"Cluster\")\nplt.yticks(fontsize = 16)\nplt.xticks(fontsize = 16)\nax.set_title(\"Attribute Value by Centroid\")\nplt.show()", "_____no_output_____" ] ], [ [ "If you've forgotten what each component corresponds to at an original-feature-level, that's okay! 
You can use the previously defined `display_component` function to see the feature-level makeup.", "_____no_output_____" ] ], [ [ "# what do each of these components mean again?\n# let's use the display function, from above\ncomponent_num=4\ndisplay_component(v, counties_scaled.columns.values, component_num=component_num)", "_____no_output_____" ] ], [ [ "### Natural Groupings\n\nYou can also map the cluster labels back to each individual county and examine which counties are naturally grouped together.", "_____no_output_____" ] ], [ [ "# add a 'labels' column to the dataframe\ncounties_transformed['labels']=list(map(int, cluster_labels))\n\n# sort by cluster label 0-6\nsorted_counties = counties_transformed.sort_values('labels', ascending=True)\n# view some pts in cluster 0\nsorted_counties.head(20)", "_____no_output_____" ] ], [ [ "You can also examine one of the clusters in more detail, like cluster 1, for example. A quick glance at the location of the centroid in component space (the heatmap) tells us that it has the highest value for the `comp_6` attribute. You can now see which counties fit that description.", "_____no_output_____" ] ], [ [ "# get all counties with label == 1\ncluster=counties_transformed[counties_transformed['labels']==1]\ncluster.head()", "_____no_output_____" ] ], [ [ "## Final Cleanup!\n\n* Double check that you have deleted all your endpoints.\n* I'd also suggest manually deleting your S3 bucket, models, and endpoint configurations directly from your AWS console.\n\nYou can find thorough cleanup instructions, [in the documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-cleanup.html).", "_____no_output_____" ], [ "---\n# Conclusion\n\nYou have just walked through a machine learning workflow for unsupervised learning, specifically, for clustering a dataset using k-means after reducing the dimensionality using PCA. By accessing the underlying models created within SageMaker, you were able to improve the explainability of your model and draw insights from the resultant clusters. \n\nUsing these techniques, you have been able to better understand the essential characteristics of different counties in the US and segment them into similar groups, accordingly.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
4a885801143c586c9b45d6141a1c9e5ddec08a24
131,859
ipynb
Jupyter Notebook
THAI-Sfc-Temp-Hist.ipynb
projectcuisines/gcm_ana
cd9f7d47dd4a9088bcd7556b4955d9b8e09b9741
[ "MIT" ]
1
2021-09-29T18:03:56.000Z
2021-09-29T18:03:56.000Z
THAI-Sfc-Temp-Hist.ipynb
projectcuisines/gcm_ana
cd9f7d47dd4a9088bcd7556b4955d9b8e09b9741
[ "MIT" ]
null
null
null
THAI-Sfc-Temp-Hist.ipynb
projectcuisines/gcm_ana
cd9f7d47dd4a9088bcd7556b4955d9b8e09b9741
[ "MIT" ]
null
null
null
549.4125
125,136
0.939966
[ [ [ "# Histograms of time-mean surface temperature", "_____no_output_____" ], [ "## Import the libraries", "_____no_output_____" ] ], [ [ "# Data analysis and viz libraries\nimport aeolus.plot as aplt\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport xarray as xr", "_____no_output_____" ], [ "# Local modules\nfrom calc import sfc_temp\nimport mypaths\nfrom names import names\nfrom commons import MODELS\nimport const_ben1_hab1 as const\nfrom plot_func import (\n KW_MAIN_TTL,\n KW_SBPLT_LABEL,\n figsave,\n)", "_____no_output_____" ], [ "plt.style.use(\"paper.mplstyle\")", "_____no_output_____" ] ], [ [ "## Load the data", "_____no_output_____" ], [ "Load the time-averaged data previously preprocessed.", "_____no_output_____" ] ], [ [ "THAI_cases = [\"Hab1\", \"Hab2\"]", "_____no_output_____" ], [ "# Load data\ndatasets = {} # Create an empty dictionary to store all data\n# for each of the THAI cases, create a nested directory for models\nfor THAI_case in THAI_cases:\n datasets[THAI_case] = {}\n for model_key in MODELS.keys():\n datasets[THAI_case][model_key] = xr.open_dataset(\n mypaths.datadir / model_key / f\"{THAI_case}_time_mean_{model_key}.nc\"\n )", "_____no_output_____" ], [ "bin_step = 10\nbins = np.arange(170, 321, bin_step)\nbin_mid = (bins[:-1] + bins[1:]) * 0.5\nt_sfc_step = abs(bins - const.t_melt).max()", "_____no_output_____" ], [ "ncols = 1\nnrows = 2\n\nwidth = 0.75 * bin_step / len(MODELS)\n\nfig, axs = plt.subplots(ncols=ncols, nrows=nrows, figsize=(ncols * 8, nrows * 4.5))\niletters = aplt.subplot_label_generator()\nfor THAI_case, ax in zip(THAI_cases, axs.flat):\n ax.set_title(f\"{next(iletters)}\", **KW_SBPLT_LABEL)\n ax.set_xlim(bins[0], bins[-1])\n ax.set_xticks(bins)\n ax.grid(axis=\"x\")\n if ax.get_subplotspec().is_last_row():\n ax.set_xlabel(\"Surface temperature [$K$]\")\n ax.set_title(THAI_case, **KW_MAIN_TTL)\n # ax2 = ax.twiny()\n # ax2.set_xlim(bins[0], bins[-1])\n # ax2.axvline(const.t_melt, color=\"k\", linestyle=\"--\")\n # ax2.set_xticks([const.t_melt])\n # ax2.set_xticklabels([const.t_melt])\n # ax.vlines(const.t_melt, ymin=0, ymax=38.75, color=\"k\", linestyle=\"--\")\n # ax.vlines(const.t_melt, ymin=41.5, ymax=45, color=\"k\", linestyle=\"--\")\n # ax.text(const.t_melt, 40, f\"{const.t_melt:.2f}\", ha=\"center\", va=\"center\", fontsize=\"small\")\n ax.imshow(\n np.linspace(0, 1, 100).reshape(1, -1),\n extent=[const.t_melt - t_sfc_step, const.t_melt + t_sfc_step, 0, 45],\n aspect=\"auto\",\n cmap=\"seismic\",\n alpha=0.25,\n )\n ax.set_ylim([0, 45])\n if ax.get_subplotspec().is_first_col():\n ax.set_ylabel(\"Area fraction [%]\")\n\n for i, (model_key, model_dict) in zip([-3, -1, 1, 3], MODELS.items()):\n model_names = names[model_key]\n ds = datasets[THAI_case][model_key]\n arr = sfc_temp(ds, model_key, const)\n weights = xr.broadcast(np.cos(np.deg2rad(arr.latitude)), arr)[0].values.ravel()\n # tot_pnts = arr.size\n hist, _ = np.histogram(\n arr.values.ravel(), bins=bins, weights=weights, density=True\n )\n hist *= 100 * bin_step\n # hist = hist / tot_pnts * 100\n # hist[hist==0] = np.nan\n\n ax.bar(\n bin_mid + (i * width / 2),\n hist,\n width=width,\n facecolor=model_dict[\"color\"],\n edgecolor=\"none\",\n alpha=0.8,\n label=model_dict[\"title\"],\n )\n ax.legend(loc=\"upper left\")\nfig.tight_layout()\nfig.align_labels()\n\nfigsave(\n fig,\n mypaths.plotdir / f\"{'_'.join(THAI_cases)}__hist__t_sfc_weighted\",\n)", "Saved to ../plots/Hab1_Hab2__hist__t_sfc_weighted.png\n" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ] ]
4a8858d1b4f1cff5223158d2e4ca196d2fcb6209
913,680
ipynb
Jupyter Notebook
Near2Far_ZonePlate.ipynb
flexcompute/tidy3d-notebooks
4ac6f18d1fa75f98e43a03704639f7d10a1d665c
[ "MIT" ]
5
2021-06-11T16:58:57.000Z
2022-03-15T20:50:28.000Z
Near2Far_ZonePlate.ipynb
flexcompute/tidy3d-notebooks
4ac6f18d1fa75f98e43a03704639f7d10a1d665c
[ "MIT" ]
null
null
null
Near2Far_ZonePlate.ipynb
flexcompute/tidy3d-notebooks
4ac6f18d1fa75f98e43a03704639f7d10a1d665c
[ "MIT" ]
1
2021-12-21T09:33:14.000Z
2021-12-21T09:33:14.000Z
1,169.884763
425,368
0.957682
[ [ [ "# Near to far field transformation\n\nSee on [github](https://github.com/flexcompute/tidy3d-notebooks/blob/main/Near2Far_ZonePlate.ipynb), run on [colab](https://colab.research.google.com/github/flexcompute/tidy3d-notebooks/blob/main/Near2Far_ZonePlate.ipynb), or just follow along with the output below.\n\nThis tutorial will show you how to solve for electromagnetic fields far away from your structure using field information stored on a nearby surface.\n\nThis technique is called a 'near field to far field transformation' and is very useful for reducing the simulation size needed for structures involving lots of empty space.\n\nAs an example, we will simulate a simple zone plate lens with a very thin domain size to get the transmitted fields measured just above the structure. Then, we'll show how to use the `Near2Far` feature from `tidy3D` to extrapolate to the fields at the focal plane above the lens.", "_____no_output_____" ] ], [ [ "# get the most recent version of tidy3d\n!pip install -q --upgrade tidy3d\n\n# make sure notebook plots inline\n%matplotlib inline\n\n# standard python imports\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport sys\n\n# import client side tidy3d\nimport tidy3d as td\nfrom tidy3d import web", "_____no_output_____" ] ], [ [ "## Problem Setup\nBelow is a rough sketch of the setup of a near field to far field transformation.\n\nThe transmitted near fields are measured just above the metalens on the blue line, and the near field to far field transformation is then used to project the fields to the focal plane above at the red line.\n\n<img src=\"img/n2f_diagram.png\" width=800>", "_____no_output_____" ], [ "## Define Simulation Parameters\n\nAs always, we first need to define our simulation parameters. As a reminder, all length units in `tidy3D` are specified in microns.", "_____no_output_____" ] ], [ [ "# 1 nanometer in units of microns (for conversion)\nnm = 1e-3\n\n# free space central wavelength\nwavelength = 1.0\n\n# numerical aperture\nNA = 0.8\n\n# thickness of lens features\nH = 200 * nm\n\n# space between bottom PML and substrate (-z)\n# and the space between lens structure and top pml (+z)\nspace_below_sub = 1.5 * wavelength\n\n# thickness of substrate (um)\nthickness_sub = wavelength / 2\n\n# side length (xy plane) of entire metalens (um)\nlength_xy = 40 * wavelength\n\n# Lens and substrate refractive index\nn_TiO2 = 2.40\nn_SiO2 = 1.46\n\n# define material properties\nair = td.Medium(epsilon=1.0)\nSiO2 = td.Medium(epsilon=n_SiO2**2)\nTiO2 = td.Medium(epsilon=n_TiO2**2)\n\n# resolution of simulation (15 or more grids per wavelength is adequate)\ngrids_per_wavelength = 20\n\n# Number of PML layers to use around edges of simulation, choose thickness of one wavelength to be safe\nnpml = grids_per_wavelength", "_____no_output_____" ] ], [ [ "## Process Geometry\n\nNext we perform some conversions based on these parameters to define the simulation.", "_____no_output_____" ] ], [ [ "# grid size (um)\ndl = wavelength / grids_per_wavelength\n\n# because the wavelength is in microns, use builtin td.C_0 (um/s) to get frequency in Hz\nf0 = td.C_0 / wavelength\n\n# Define PML layers, for this application we surround the whole structure in PML to isolate the fields\npml_layers = [npml, npml, npml]\n\n# domain size in z, note, we're just simulating a thin slice: (space -> substrate -> lens thickness -> space)\nlength_z = space_below_sub + thickness_sub + H + space_below_sub\n\n# construct simulation size array\nsim_size = np.array([length_xy, 
length_xy, length_z])", "_____no_output_____" ] ], [ [ "## Create Geometry\n\nNow we create the ring metalens programatically", "_____no_output_____" ] ], [ [ "# define substrate\nsubstrate = td.Box(\n center=[0, 0, -length_z/2 + space_below_sub + thickness_sub / 2.0],\n size=[td.inf, td.inf, thickness_sub],\n material=SiO2)\n\n# create a running list of structures\ngeometry = [substrate]\n\n# focal length\nfocal_length = length_xy / 2 / NA * np.sqrt(1 - NA**2)\n\n# location from center for edge of the n-th inner ring, see https://en.wikipedia.org/wiki/Zone_plate\ndef edge(n):\n return np.sqrt(n * wavelength * focal_length + n**2 * wavelength**2 / 4)\n\n# loop through the ring indeces until it's too big and add each to geometry list\nn = 1\nr = edge(n)\n\nwhile r < 2 * length_xy:\n # progressively wider cylinders, material alternating between air and TiO2 \n cyl = td.Cylinder(\n center = [0,0,-length_z/2 + space_below_sub + thickness_sub + H / 2],\n axis='z',\n radius=r,\n height=H,\n material=TiO2 if n % 2 == 0 else air,\n name=f'cylinder_n={n}'\n )\n geometry.append(cyl)\n n += 1\n r = edge(n)\n\n# reverse geometry list so that inner, smaller rings are added last and therefore override larger rings.\ngeometry.reverse()", "_____no_output_____" ] ], [ [ "## Create Source\n\nCreate a plane wave incident from below the metalens", "_____no_output_____" ] ], [ [ "# Bandwidth in Hz\nfwidth = f0 / 10.0\n\n# Gaussian source offset; the source peak is at time t = offset/fwidth\noffset = 4.\n\n# time dependence of source\ngaussian = td.GaussianPulse(f0, fwidth, offset=offset, phase=0)\n\nsource = td.PlaneWave(\n source_time=gaussian,\n injection_axis='+z',\n position=-length_z/2 + space_below_sub / 2, # halfway between PML and substrate\n polarization='x')\n\n# Simulation run time\nrun_time = 40 / fwidth", "_____no_output_____" ] ], [ [ "## Create Monitor\n\nCreate a near field monitor to measure the fields just above the metalens", "_____no_output_____" ] ], [ [ "# place it halfway between top of lens and PML\nmonitor_near = td.FreqMonitor(\n center=[0., 0., -length_z/2 + space_below_sub + thickness_sub + H + space_below_sub / 2],\n size=[length_xy, length_xy, 0],\n freqs=[f0],\n name='near_field')", "_____no_output_____" ] ], [ [ "## Create Simulation\n\nPut everything together and define a simulation object\n", "_____no_output_____" ] ], [ [ "sim = td.Simulation(size=sim_size,\n mesh_step=[dl, dl, dl],\n structures=geometry,\n sources=[source],\n monitors=[monitor_near],\n run_time=run_time,\n pml_layers=pml_layers)", "Initializing simulation...\nMesh step (micron): [5.00e-02, 5.00e-02, 5.00e-02].\nSimulation domain in number of grid points: [840, 840, 114].\nTotal number of computational grid points: 8.04e+07.\nTotal number of time steps: 15397.\nEstimated data size (GB) of monitor near_field: 0.0307.\n" ] ], [ [ "## Visualize Geometry\n\nLets take a look and make sure everything is defined properly\n", "_____no_output_____" ] ], [ [ "fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(24, 8))\n\n# Time the visualization of the 2D plane\nsim.viz_eps_2D(normal='x', position=0.1, ax=ax1);\nsim.viz_eps_2D(normal='z', position=-length_z/2 + space_below_sub + thickness_sub + H / 2, ax=ax2);", "_____no_output_____" ] ], [ [ "## Run Simulation\n\nNow we can run the simulation and download the results\n", "_____no_output_____" ] ], [ [ "# Run simulation\nproject = web.new_project(sim.export(), task_name='near2far_docs')\nweb.monitor_project(project['taskId'])", "Uploading the json file...\nProject 
'near2far_docs-16b6e151d8baf288' status: success... \n\n" ], [ "# download and load the results\nprint('Downloading results')\nweb.download_results(project['taskId'], target_folder='output')\nsim.load_results('output/monitor_data.hdf5')\n\n# print stats from the logs\nwith open(\"output/tidy3d.log\") as f:\n print(f.read())", "Downloading results\nApplying source normalization to all frequency monitors using source index 0.\nSimulation domain Nx, Ny, Nz: [840, 840, 114]\nApplied symmetries: [0, 0, 0]\nNumber of computational grid points: 8.0438e+07.\nUsing subpixel averaging: True\nNumber of time steps: 15397\nAutomatic shutoff factor: 1.00e-05\nTime step (s): 8.6662e-17\n\nCompute source modes time (s): 0.0573\nCompute monitor modes time (s): 0.0699\n\nRest of setup time (s): 10.1887\n\nStarting solver...\n- Time step 245 / time 2.12e-14s ( 1 % done), field decay: 1.00e+00\n- Time step 615 / time 5.33e-14s ( 4 % done), field decay: 6.31e-02\n- Time step 1231 / time 1.07e-13s ( 8 % done), field decay: 8.24e-03\n- Time step 1847 / time 1.60e-13s ( 12 % done), field decay: 2.83e-03\n- Time step 2463 / time 2.13e-13s ( 16 % done), field decay: 5.84e-04\n- Time step 3079 / time 2.67e-13s ( 20 % done), field decay: 1.30e-04\n- Time step 3695 / time 3.20e-13s ( 24 % done), field decay: 4.33e-05\n- Time step 4311 / time 3.74e-13s ( 28 % done), field decay: 2.61e-05\n- Time step 4927 / time 4.27e-13s ( 32 % done), field decay: 1.64e-05\n- Time step 5542 / time 4.80e-13s ( 36 % done), field decay: 1.39e-05\n- Time step 6158 / time 5.34e-13s ( 40 % done), field decay: 1.20e-05\n- Time step 6774 / time 5.87e-13s ( 44 % done), field decay: 1.13e-05\n- Time step 7390 / time 6.40e-13s ( 48 % done), field decay: 1.09e-05\n- Time step 8006 / time 6.94e-13s ( 52 % done), field decay: 1.04e-05\n- Time step 8622 / time 7.47e-13s ( 56 % done), field decay: 9.66e-06\nField decay smaller than shutoff factor, exiting solver.\n\nSolver time (s): 346.5106\nPost-processing time (s): 0.3770\n\n" ] ], [ [ "## Visualization \n\nLet's inspect the near field using the Tidy3D builtin field visualization methods. 
\nFor more details see the documentation of [viz_field_2D](https://simulation.cloud/docs/html/generated/tidy3d.Simulation.viz_field_2D.html#tidy3d.Simulation.viz_field_2D).", "_____no_output_____" ] ], [ [ "fig, axes = plt.subplots(1, 3, figsize=(15, 4))\nfor ax, val in zip(axes, ('re', 'abs', 'int')):\n im = sim.viz_field_2D(monitor_near, eps_alpha=0, comp='x', val=val, cbar=True, ax=ax)\nplt.show()", "_____no_output_____" ] ], [ [ "## Setting Up Near 2 Far\n\nTo set up near to far, we first need to grab the data from the nearfield monitor.", "_____no_output_____" ] ], [ [ "# near field monitor data dictionary\nmonitor_data = sim.data(monitor_near)\n\n# grab the raw data for plotting later\nxs = monitor_data['xmesh']\nys =monitor_data['ymesh']\nE_near = np.squeeze(monitor_data['E'])", "_____no_output_____" ] ], [ [ "Then, we create a `td.Near2Far` object using the monitor data dictionary as follows.\n\nThis object just stores near field data and provides [various methods](https://simulation.cloud/docs/html/generated/tidy3d.Near2Far.html#tidy3d.Near2Far) for looking at various far field quantities.", "_____no_output_____" ] ], [ [ "# from near2far_tidy3d import Near2Far\nn2f = td.Near2Far(monitor_data)", "_____no_output_____" ] ], [ [ "## Getting Far Field Data\n\nWith the `Near2Far` object initialized, we just need to call one of it's methods to get a far field quantity.\n\nFor this example, we use `Near2Far.get_fields_cartesian(x,y,z)` to get the fields at an `x,y,z` point relative to the monitor center.\n\nBelow, we scan through x and y points in a plane located at `z=z0` and record the far fields.", "_____no_output_____" ] ], [ [ "# points to project to\nnum_far = 40\nxs_far = 4 * wavelength * np.linspace(-0.5, 0.5, num_far)\nys_far = 4 * wavelength * np.linspace(-0.5, 0.5, num_far)\n\n# get a mesh in cartesian, convert to spherical\nNx, Ny = len(xs), len(ys)\n\n# initialize the far field values\nE_far = np.zeros((3, num_far, num_far), dtype=complex)\nH_far = np.zeros((3, num_far, num_far), dtype=complex) \n\n# loop through points in the output plane\nfor i in range(num_far):\n sys.stdout.write(\" \\rGetting far fields, %2d%% done\"%(100*i/(num_far + 1)))\n sys.stdout.flush()\n x = xs_far[i]\n for j in range(num_far):\n y = ys_far[j]\n\n # compute and store the outputs from projection function at the focal plane\n E, H = n2f.get_fields_cartesian(x, y, focal_length)\n E_far[:, i, j] = E\n H_far[:, i, j] = H\nsys.stdout.write(\"\\nDone!\")", "Getting far fields, 95% done\nDone!" 
] ], [ [ "## Plot Results\nNow we can plot the near and far fields together", "_____no_output_____" ] ], [ [ "# plot everything\nf, ((ax1, ax2, ax3),\n (ax4, ax5, ax6)) = plt.subplots(2, 3, tight_layout=True, figsize=(10, 5))\n\ndef pmesh(xs, ys, array, ax, cmap):\n im = ax.pcolormesh(xs, ys, array.T, cmap=cmap, shading='auto')\n return im\n\nim1 = pmesh(xs, ys, np.real(E_near[0]), ax=ax1, cmap='RdBu')\nim2 = pmesh(xs, ys, np.real(E_near[1]), ax=ax2, cmap='RdBu')\nim3 = pmesh(xs, ys, np.real(E_near[2]), ax=ax3, cmap='RdBu')\nim4 = pmesh(xs_far, ys_far, np.real(E_far[0]), ax=ax4, cmap='RdBu')\nim5 = pmesh(xs_far, ys_far, np.real(E_far[1]), ax=ax5, cmap='RdBu')\nim6 = pmesh(xs_far, ys_far, np.real(E_far[2]), ax=ax6, cmap='RdBu')\n\n\nax1.set_title('near field $E_x(x,y)$')\nax2.set_title('near field $E_y(x,y)$')\nax3.set_title('near field $E_z(x,y)$')\nax4.set_title('far field $E_x(x,y)$')\nax5.set_title('far field $E_y(x,y)$')\nax6.set_title('far field $E_z(x,y)$')\n\nplt.colorbar(im1, ax=ax1)\nplt.colorbar(im2, ax=ax2)\nplt.colorbar(im3, ax=ax3)\nplt.colorbar(im4, ax=ax4)\nplt.colorbar(im5, ax=ax5)\nplt.colorbar(im6, ax=ax6)\n\nplt.show()", "_____no_output_____" ], [ "# we can also use the far field data and plot the field intensity to see the focusing effect\n\nintensity_far = np.sum(np.square(np.abs(E_far)), axis=0)\n\n_, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))\n\nim1 = pmesh(xs_far, ys_far, intensity_far, ax=ax1, cmap='magma')\nim2 = pmesh(xs_far, ys_far, np.sqrt(intensity_far), ax=ax2, cmap='magma')\n\nax1.set_title('$|E(x,y)|^2$')\nax2.set_title('$|E(x,y)|$')\n\nplt.colorbar(im1, ax=ax1)\nplt.colorbar(im2, ax=ax2)\nplt.show()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
4a8866971ed9a2b3daf64641aeed6a5959d9c1cb
72,530
ipynb
Jupyter Notebook
docs/tutorials/01_introduction_ibm_cloud_runtime.ipynb
rathishcholarajan/qiskit-ibm-runtime
315a088a844dc8aa4452bde6136b53694dfb3220
[ "Apache-2.0" ]
null
null
null
docs/tutorials/01_introduction_ibm_cloud_runtime.ipynb
rathishcholarajan/qiskit-ibm-runtime
315a088a844dc8aa4452bde6136b53694dfb3220
[ "Apache-2.0" ]
null
null
null
docs/tutorials/01_introduction_ibm_cloud_runtime.ipynb
rathishcholarajan/qiskit-ibm-runtime
315a088a844dc8aa4452bde6136b53694dfb3220
[ "Apache-2.0" ]
null
null
null
96.577896
25,160
0.840618
[ [ [ "![qiskit_header.png](attachment:qiskit_header.png)", "_____no_output_____" ], [ "# Qiskit Runtime on IBM Cloud", "_____no_output_____" ], [ "Qiskit Runtime is now part of the IBM Quantum Services on IBM Cloud. To use this service, you'll need to create an IBM Cloud account and a quantum service instance. [This guide](https://cloud.ibm.com/docs/account?topic=account-account-getting-started) contains step-by-step instructions on setting up the account and [this page](https://cloud.ibm.com/docs/quantum-computing?topic=quantum-computing-quickstart) explains how to create a service instance, including directions to find your IBM Cloud API key and Cloud Resource Name (CRN), which you will need later in this tutorial. \n\n\n\nThis tutorial assumes that you know how to use Qiskit, including using it to create circuits. If you are not familiar with Qiskit, the [Qiskit Textbook](https://qiskit.org/textbook/preface.html) is a great resource to learn about both Qiskit and quantum computation in general. ", "_____no_output_____" ], [ "# qiskit-ibm-runtime", "_____no_output_____" ], [ "Once you have an IBM Cloud account and service instance set up, you can use `qiskit-ibm-runtime` to access Qiskit Runtime on IBM Cloud. `qiskit-ibm-runtime` provides the interface to interact with Qiskit Runtime. You can, for example, use it to query and execute runtime programs.", "_____no_output_____" ], [ "## Installation", "_____no_output_____" ], [ "You can install the `qiskit-ibm-runtime` package using pip:\n\n```\npip install qiskit-ibm-runtime\n```", "_____no_output_____" ], [ "## Account initialization", "_____no_output_____" ], [ "Before you can start using Qiskit Runtime, you need to initialize your account by calling `QiskitRuntimeService` with your IBM Cloud API key and the CRN or service name of your service instance.\n\nYou can also choose to save your credentials on disk (in the `$HOME/.qiskit/qiskit-ibm.json` file). By doing so, you only need to use `QiskitRuntimeService()` in the future to initialize your account.\n\nFor more information about account management, such as how to delete or view an account, see [04_account_management.ipynb](04_account_management.ipynb). ", "_____no_output_____" ], [ "<div class=\"alert alert-block alert-info\">\n<b>Note:</b> Account credentials are saved in plain text, so only do so if you are using a trusted device. 
\n</div>", "_____no_output_____" ] ], [ [ "from qiskit_ibm_runtime import QiskitRuntimeService\n\n# Save account on disk.\n# QiskitRuntimeService.save_account(channel=\"ibm_cloud\", token=<IBM Cloud API key>, instance=<IBM Cloud CRN> or <IBM Cloud service name>)\n\nservice = QiskitRuntimeService()", "_____no_output_____" ] ], [ [ "The `<IBM Cloud API key>` in the example above is your IBM Cloud API key and looks something like\n\n```\nkYgdggnD-qx5k2u0AAFUKv3ZPW_avg0eQ9sK75CCW7hw\n```\n\nThe `<IBM Cloud CRN>` is the Cloud Resource Name and looks something like\n\n```\ncrn:v1:bluemix:public:quantum-computing:us-east:a/b947c1c5a9378d64aed96696e4d76e8e:a3a7f181-35aa-42c8-94d6-7c8ed6e1a94b::\n```\n\nThe `<IBM Cloud service name>` is user-provided and defaults to something like\n```\nQuantum Services-9p\n```\nIf you choose to set `instance` to the service name, the initialization time of the `QiskitRuntimeService` is slightly higher because the required `CRN` value is internally resolved via IBM Cloud APIs.", "_____no_output_____" ], [ "## Listing programs <a name='listing_program'>", "_____no_output_____" ], [ "There are three methods that can be used to find metadata of available programs:\n\n- `pprint_programs()`: pretty prints summary metadata of available programs\n- `programs()`: returns a list of `RuntimeProgram` instances\n- `program()`: returns a single `RuntimeProgram` instance\n\nThe metadata of a runtime program includes its ID, name, description, maximum execution time, backend requirements, input parameters, and return values. Maximum execution time is the maximum amount of time, in seconds, a program can run before being forcibly terminated.", "_____no_output_____" ], [ "To print the summary metadata of the programs (by default first 20 programs are displayed):", "_____no_output_____" ] ], [ [ "service.pprint_programs(limit=None)", "==================================================\nhello-world:\n Name: hello-world\n Description: Get started by running this simple test program.\n==================================================\nsampler:\n Name: sampler\n Description: Generates quasi-probabilities by sampling quantum circuits.\n==================================================\nestimator:\n Name: estimator\n Description: Calculates expectation values of quantum operators.\n" ] ], [ [ "You can use the `limit` and `skip` parameters in `pprint_programs()` and `programs()` to page through all programs. \n\nYou can pass `detailed=True` parameter to `pprint_programs()` to view all the metadata for the programs. \n\nThe program metadata, once fetched, is cached for performance reasons. But you can pass the `refresh=True` parameter to get the latest data from the server. ", "_____no_output_____" ], [ "To print the metadata of the program `sampler`:", "_____no_output_____" ] ], [ [ "program = service.program(\"sampler\")\nprint(program)", "sampler:\n Name: sampler\n Description: Generates quasi-probabilities by sampling quantum circuits.\n Creation date: 2021-10-26T14:41:57Z\n Update date: 2022-05-11T14:35:25.79967Z\n Max execution time: 10000\n Backend requirements:\n none\n Input parameters:\n Properties:\n - circuit_indices:\n Description: Indices of the circuits to evaluate.\n Type: array\n Required: True\n - circuits:\n Description: A single or list of QuantumCircuits.\n Type: ['array', 'object']\n Required: True\n - parameter_values:\n Description: A list of concrete parameters to be bound for each circuit. 
If specified, it must have the same length as circuit_indices.\n Type: array\n Required: False\n - parameters:\n Description: Parameters of the quantum circuits.\n Type: array\n Required: False\n - run_options:\n Description: A collection of key-value pairs identifying the execution options, such as shots.\n Type: object\n Required: False\n - skip_transpilation:\n Default: False\n Description: Whether to skip circuit transpilation.\n Type: boolean\n Required: False\n Interim results:\n none\n Returns:\n Properties:\n - metadata:\n Description: Additional metadata.\n Type: array\n Required: False\n - quasi_dists:\n Description: List of quasi-probabilities returned for each circuit.\n Type: array\n Required: False\n" ] ], [ [ "As you can see from above, the primitive `sampler` calculates the distributions generated by given circuits executed on the target backend. It takes a number of parameters, but `circuits` and `circuit_indices` are the only required parameters. When the program finishes, it returns the quasi-probabilities for each circuit.", "_____no_output_____" ], [ "## Invoking a runtime program <a name='invoking_program'>", "_____no_output_____" ], [ "You can use the [QiskitRuntimeService.run()](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.QiskitRuntimeService.html#qiskit_ibm_runtime.QiskitRuntimeService.run) method to invoke a runtime program. This method takes the following parameters:\n\n- `program_id`: ID of the program to run.\n- `inputs`: Program input parameters. These input values are passed to the runtime program.\n- `options`: Runtime options. These options control the execution environment. Currently the only available option is `backend_name`, which is optional for cloud runtime.\n- `result_decoder`: Optional class used to decode the job result.", "_____no_output_____" ], [ "Below is an example of invoking the `sampler` program.", "_____no_output_____" ], [ "First we need to construct a circuit as the input to `sampler` using Qiskit.", "_____no_output_____" ] ], [ [ "from qiskit import QuantumCircuit\n\nN = 6\nqc = QuantumCircuit(N)\n\nqc.x(range(0, N))\nqc.h(range(0, N))\n\nfor kk in range(N // 2, 0, -1):\n qc.ch(kk, kk - 1)\nfor kk in range(N // 2, N - 1):\n qc.ch(kk, kk + 1)\nqc.measure_all()\nqc.draw(\"mpl\", fold=-1)", "_____no_output_____" ] ], [ [ "We now use this circuit as the input to `sampler`:", "_____no_output_____" ] ], [ [ "# Specify the program inputs here.\nprogram_inputs = {\n \"circuits\": qc,\n \"circuit_indices\": [0],\n}\n\njob = service.run(\n program_id=\"sampler\",\n inputs=program_inputs,\n)\n\n# Printing the job ID in case we need to retrieve it later.\nprint(f\"Job ID: {job.job_id}\")\n\n# Get the job result - this is blocking and control may not return immediately.\nresult = job.result()\nprint(result)\n# see which backend the job was executed on\nprint(job.backend) ", "Job ID: caat1colee5v49iqrde0\n{'quasi_dists': [{'100000': 0.013671875, '100001': 0.01171875, '000001': 0.0185546875, '100011': 0.0361328125, '110111': 0.1416015625, '111111': 0.478515625, '110011': 0.05859375, '110000': 0.0322265625, '110001': 0.0263671875, '000000': 0.0107421875, '000011': 0.02734375, '000111': 0.078125, '100111': 0.06640625}], 'metadata': [{'header_metadata': {}, 'shots': 1024}]}\n<IBMBackend('ibmq_qasm_simulator')>\n" ] ], [ [ "### Runtime job", "_____no_output_____" ], [ "The `run()` method returns a 
[RuntimeJob](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.RuntimeJob.html#qiskit_ibm_runtime.RuntimeJob) instance, which represents the asynchronous execution instance of the program. \n\nSome of the `RuntimeJob` methods:\n\n- `status()`: Return job status.\n- `result()`: Wait for the job to finish and return the final result.\n- `cancel()`: Cancel the job.\n- `wait_for_final_state()`: Wait for the job to finish.\n- `logs()`: Return job logs.\n- `error_message()`: Returns the reason if the job failed and `None` otherwise.\n\nSome of the `RuntimeJob` attributes:\n\n- `job_id`: Unique identifier of the job.\n- `backend`: The backend where the job is run.\n- `program_id`: ID of the program the execution is for.\n\n\nRefer to the [RuntimeJob API documentation](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.RuntimeJob.html#qiskit_ibm_runtime.RuntimeJob) for a full list of methods and usage. ", "_____no_output_____" ], [ "<div class=\"alert alert-block alert-info\">\n<b>Note:</b> To ensure fairness, there is a maximum execution time for each Qiskit Runtime job. Refer to <a href=\"https://qiskit.org/documentation/partners/qiskit_ibm_runtime/max_time.html#qiskit-runtime-on-ibm-cloud\">this documentation</a> on what the time limit is.\n</div>", "_____no_output_____" ], [ "## Selecting a backend", "_____no_output_____" ], [ "A **backend** is a quantum device or simulator capable of running quantum circuits or pulse schedules.\n\nIn the example above, we invoked a runtime program without specifying which backend it should run on. In this case the server automatically picks the one that is the least busy. Alternatively, you can choose a specific backend to run your program. \n\nTo list all the backends you have access to:", "_____no_output_____" ] ], [ [ "service.backends()", "_____no_output_____" ] ], [ [ "The [QiskitRuntimeService.backends()](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.QiskitRuntimeService.html#qiskit_ibm_runtime.QiskitRuntimeService.backends) method also takes filters. For example, to find all real devices that have at least five qubits:", "_____no_output_____" ] ], [ [ "service.backends(simulator=False, min_num_qubits=5)", "_____no_output_____" ] ], [ [ "[QiskitRuntimeService.backends()](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.QiskitRuntimeService.html#qiskit_ibm_runtime.QiskitRuntimeService.backends) returns a list of [IBMBackend](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.IBMBackend.html#qiskit_ibm_runtime.IBMBackend) instances. Each instance represents a particular backend. 
Attributes and methods of an `IBMBackend` provide more information about the backend, such as its qubit count, error rate, and status.\n\nFor more information about backends, such as commonly used attributes, see [03_backends.ipynb](03_backends.ipynb).", "_____no_output_____" ], [ "Once you select a backend to use, you can specify the name of the backend in the `options` parameter:", "_____no_output_____" ] ], [ [ "# Specify the program inputs here.\nprogram_inputs = {\n \"circuits\": qc,\n \"circuit_indices\": [0],\n}\n\n# Specify the backend name.\noptions = {\"backend_name\": \"ibmq_qasm_simulator\"}\n\njob = service.run(\n program_id=\"sampler\",\n options=options,\n inputs=program_inputs,\n)\n\n# Printing the job ID in case we need to retrieve it later.\nprint(f\"Job ID: {job.job_id}\")\n\n# Get the job result - this is blocking and control may not return immediately.\nresult = job.result()\nprint(result)", "Job ID: caat1eglee5v49iqrdf0\n{'quasi_dists': [{'100000': 0.0146484375, '000001': 0.021484375, '100011': 0.0302734375, '110111': 0.123046875, '111111': 0.494140625, '110000': 0.0283203125, '110011': 0.072265625, '000000': 0.0126953125, '100111': 0.0654296875, '000111': 0.0595703125, '000011': 0.0380859375, '100001': 0.0185546875, '110001': 0.021484375}], 'metadata': [{'header_metadata': {}, 'shots': 1024}]}\n" ] ], [ [ "## Retrieving previously run jobs", "_____no_output_____" ], [ "You can use the [QiskitRuntimeService.job()](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.QiskitRuntimeService.html#qiskit_ibm_runtime.QiskitRuntimeService.job) method to retrieve a previously executed runtime job. Attributes of this [RuntimeJob](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.RuntimeJob.html#qiskit_ibm_runtime.RuntimeJob) instance can tell you about the execution:", "_____no_output_____" ] ], [ [ "retrieved_job = service.job(job.job_id)\nprint(\n f\"Job {retrieved_job.job_id} is an execution instance of runtime program {retrieved_job.program_id}.\"\n)\nprint(\n f\"This job ran on backend {retrieved_job.backend} and had input parameters {retrieved_job.inputs}\"\n)", "Job caat1eglee5v49iqrdf0 is an execution instance of runtime program sampler.\nThis job ran on backend <IBMBackend('ibmq_qasm_simulator')> and had input parameters {'circuits': <qiskit.circuit.quantumcircuit.QuantumCircuit object at 0x7f9b509829a0>, 'circuit_indices': [0]}\n" ] ], [ [ "Similarly, you can use [QiskitRuntimeService.jobs()](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.QiskitRuntimeService.html#qiskit_ibm_runtime.QiskitRuntimeService.jobs) to get a list of jobs. You can specify a limit on how many jobs to return. The default limit is 10:", "_____no_output_____" ] ], [ [ "retrieved_jobs = service.jobs(limit=1)\nfor rjob in retrieved_jobs:\n print(rjob.job_id)", "caat1eglee5v49iqrdf0\n" ] ], [ [ "## Deleting a job", "_____no_output_____" ], [ "You can use the [QiskitRuntimeService.delete_job()](https://qiskit.org/documentation/partners/qiskit_ibm_runtime/stubs/qiskit_ibm_runtime.QiskitRuntimeService.html#qiskit_ibm_runtime.QiskitRuntimeService.delete_job) method to delete a job. You can only delete your own jobs, and this action cannot be reversed. 
", "_____no_output_____" ] ], [ [ "service.delete_job(job.job_id)", "_____no_output_____" ] ], [ [ "# Next steps", "_____no_output_____" ], [ "There are additional tutorials in this directory:\n\n- [02_introduction_ibm_quantum_runtime.ipynb](02_introduction_ibm_quantum_runtime.ipynb) is the corresponding tutorial on using Qiskit Runtime on IBM Quantum. You can skip this tutorial if you don't plan on using Qiskit Runtime on IBM Quantum.\n- [03_backends.ipynb](03_backends.ipynb) describes how to find a target backend for the Qiskit Runtime program you want to invoke. \n- [04_account_management.ipynb](04_account_management.ipynb) describes how to save, load, and delete your account credentials on disk.\n- [qiskit_runtime_vqe_program.ipynb](sample_vqe_program/qiskit_runtime_vqe_program.ipynb) goes into more details on uploading a real-world program (VQE). \n- [qka.ipynb](qka.ipynb), [vqe.ipynb](vqe.ipynb), and [qiskit_runtime_expval_program.ipynb](sample_expval_program/qiskit_runtime_expval_program.ipynb) describe how to use the public programs `qka`, `vqe`, and `sample-expval`, respectively. These programs are currently only available in Qiskit Runtime on IBM Quantum.", "_____no_output_____" ] ], [ [ "from qiskit.tools.jupyter import *\n\n%qiskit_copyright", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ] ]
4a886a92d74833f280221ec2fa2198cecdaab393
148,112
ipynb
Jupyter Notebook
SHDKY/V_model/V-H3.ipynb
biljiang/pyprojects
10095c6b8f2f32831e8a36e122d1799f135dc5df
[ "MIT" ]
null
null
null
SHDKY/V_model/V-H3.ipynb
biljiang/pyprojects
10095c6b8f2f32831e8a36e122d1799f135dc5df
[ "MIT" ]
null
null
null
SHDKY/V_model/V-H3.ipynb
biljiang/pyprojects
10095c6b8f2f32831e8a36e122d1799f135dc5df
[ "MIT" ]
null
null
null
100.483039
71,544
0.789112
[ [ [ "import numpy as np\nimport pandas as pd\nimport pathlib as pl\nfrom datetime import datetime\nimport matplotlib.pyplot as plot\nfrom pandas import DataFrame, Series", "_____no_output_____" ], [ "# read from csv files\n\ndatapath=pl.Path(\"../csvdata\")\n\nfile_list=[]\ndfs=[]\nfor x in datapath.glob(\"*H.csv\"):\n print(\"Reading \"+x.name)\n f=pd.read_csv(x,index_col=0,na_values=\" \")\n dfs.append(f)\n file_list.append(x.name)", "Reading 110kVChangLiu1-H.csv\nReading 110kVChangLiu3-H.csv\nReading 35kVJiangChuan4-H.csv\nReading 110kVChangLiu4-H.csv\nReading 110kVJiangChuan3-H.csv\nReading 110kVJiangChuan2-H.csv\nReading 10kVWanKe2-H.csv\nReading 110kVChangLiu2-H.csv\nReading 110kVJiangChuan1-H.csv\nReading 35kVJiangChuan5-H.csv\nReading 10kVQinZhou4-H.csv\nReading 10kVQinZhou3-H.csv\nReading 35kVKunYang1-H.csv\nReading 10kVWanKe1-H.csv\nReading 10kVKunYang3-H.csv\n" ], [ "f.columns", "_____no_output_____" ], [ "tr_data=DataFrame()\ni=0\nfor df in dfs:\n for c in df.columns:\n if c.find(\"Avg[Va H3]\")!=-1:\n print (c)\n tr_data[c]=df[c]\n i+=1\nprint(i)", " Changliu1 - Avg[Va H3] (V)\n Changliu3 - Avg[Va H3] (V)\n JiangChuan4 - Avg[Va H3] (V)\n Changliu4 - Avg[Va H3] (V)\n JiangChuan3 - Avg[Va H3] (V)\n JiangChuan2 - Avg[Va H3] (V)\n WanKe2 - Avg[Va H3] (V)\n Changliu2 - Avg[Va H3] (V)\n JiangChuan1 - Avg[Va H3] (V)\n JiangChuan5 - Avg[Va H3] (V)\n QinZhou4 - Avg[Va H3] (V)\n QinZhou3 - Avg[Va H3] (V)\n KunYang1 - Avg[Va H3] (V)\n WanKe1 - Avg[Va H3] (V)\n KunYang3 - Avg[Va H3] (V)\n15\n" ], [ "train_data=tr_data.dropna()", "_____no_output_____" ], [ "len(train_data)", "_____no_output_____" ], [ "train_data=train_data.dropna(axis=1)", "_____no_output_____" ], [ "corMat=DataFrame(train_data.corr())\nplot.pcolor(corMat)\nplot.show()", "_____no_output_____" ], [ "train_data.describe()", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split\ntrain,test= train_test_split(train_data,test_size=0.2,random_state=0)", "_____no_output_____" ], [ "MAX= train.max()\nMIN = train.min()", "_____no_output_____" ], [ "train_s=(train-MIN)/(MAX-MIN)\ntest_s=(test-MIN)/(MAX-MIN)", "_____no_output_____" ], [ "train_s=train_s.fillna(0)\ntest_s=test_s.fillna(0)", "_____no_output_____" ], [ "# generate X_train,Y_train,X_test,Y_test for all target features\nX_train=train_s.copy();Y_train=DataFrame()\n\nfor c in train_s.columns:\n if c.find(\"WanKe1\") !=-1 :\n Y_train[c]=train_s[c]\n X_train=X_train.drop(c,axis=1)\n\n \nX_test=test_s.copy();Y_test=DataFrame()\n\nfor c in test_s.columns:\n if c.find(\"WanKe1\") !=-1 :\n Y_test[c]=test_s[c]\n X_test=X_test.drop(c,axis=1)", "_____no_output_____" ], [ "###### network from keras for SHDKY data simulation ###########\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Activation,Input\nimport keras\n\n\nmodel = Sequential()\n\nmodel.add(Dense(28, input_dim=14,kernel_initializer=\"normal\"))\nmodel.add(Activation('relu'))\nmodel.add(Dense(7, activation='relu',kernel_initializer=\"normal\"))\nmodel.add(Dense(1, activation='linear',kernel_initializer=\"normal\"))", "/home/techstar/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. 
In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\nUsing TensorFlow backend.\n" ], [ "model.compile(loss='mean_squared_error', \n optimizer=keras.optimizers.SGD(lr=0.02))", "_____no_output_____" ], [ "model.fit(X_train, Y_train, epochs=30,batch_size=30,\n shuffle=True,verbose=2,validation_split=0.2) ", "Train on 2111 samples, validate on 528 samples\nEpoch 1/30\n - 0s - loss: 0.0021 - val_loss: 0.0024\nEpoch 2/30\n - 0s - loss: 0.0021 - val_loss: 0.0024\nEpoch 3/30\n - 0s - loss: 0.0020 - val_loss: 0.0024\nEpoch 4/30\n - 0s - loss: 0.0021 - val_loss: 0.0024\nEpoch 5/30\n - 0s - loss: 0.0021 - val_loss: 0.0024\nEpoch 6/30\n - 0s - loss: 0.0021 - val_loss: 0.0024\nEpoch 7/30\n - 0s - loss: 0.0021 - val_loss: 0.0024\nEpoch 8/30\n - 0s - loss: 0.0020 - val_loss: 0.0025\nEpoch 9/30\n - 0s - loss: 0.0020 - val_loss: 0.0024\nEpoch 10/30\n - 0s - loss: 0.0020 - val_loss: 0.0024\nEpoch 11/30\n - 0s - loss: 0.0021 - val_loss: 0.0024\nEpoch 12/30\n - 0s - loss: 0.0020 - val_loss: 0.0024\nEpoch 13/30\n - 0s - loss: 0.0020 - val_loss: 0.0024\nEpoch 14/30\n - 0s - loss: 0.0021 - val_loss: 0.0024\nEpoch 15/30\n - 0s - loss: 0.0020 - val_loss: 0.0024\nEpoch 16/30\n - 0s - loss: 0.0021 - val_loss: 0.0024\nEpoch 17/30\n - 0s - loss: 0.0020 - val_loss: 0.0024\nEpoch 18/30\n - 0s - loss: 0.0020 - val_loss: 0.0024\nEpoch 19/30\n - 0s - loss: 0.0021 - val_loss: 0.0025\nEpoch 20/30\n - 0s - loss: 0.0020 - val_loss: 0.0025\nEpoch 21/30\n - 0s - loss: 0.0020 - val_loss: 0.0024\nEpoch 22/30\n - 0s - loss: 0.0020 - val_loss: 0.0024\nEpoch 23/30\n - 0s - loss: 0.0020 - val_loss: 0.0024\nEpoch 24/30\n - 0s - loss: 0.0020 - val_loss: 0.0024\nEpoch 25/30\n - 0s - loss: 0.0020 - val_loss: 0.0024\nEpoch 26/30\n - 0s - loss: 0.0021 - val_loss: 0.0024\nEpoch 27/30\n - 0s - loss: 0.0020 - val_loss: 0.0024\nEpoch 28/30\n - 0s - loss: 0.0021 - val_loss: 0.0025\nEpoch 29/30\n - 0s - loss: 0.0020 - val_loss: 0.0024\nEpoch 30/30\n - 0s - loss: 0.0020 - val_loss: 0.0024\n" ], [ "#((model.predict(X_test)-Y_test)/Y_test)", "_____no_output_____" ], [ "((model.predict(X_test)-Y_test)).describe()", "_____no_output_____" ], [ "c=Y_test.columns\nR= pd.DataFrame(model.predict(X_test),columns=['V_pred'])\nR.index = Y_test.index\nR= R.join(Y_test)\nR= R*(MAX[c]-MIN[c]).values[0]+MIN[c].values[0]\n", "_____no_output_____" ], [ "R_show=R.iloc[200:300,:]\n\nimport pylab\npylab.rcParams['figure.figsize'] = (12.0, 4.0)\n\nR_show.plot(alpha =0.5,figsize = (12,4))", "_____no_output_____" ], [ "R[\"diff\"]=(R[\"V_pred\"]-R[c[0]])\nR[\"diff_ptg\"]=(R[\"V_pred\"]-R[c[0]])/R[c[0]]", "_____no_output_____" ], [ "R.describe()", "_____no_output_____" ], [ "(R.diff_ptg.quantile(0.05),R.diff_ptg.quantile(0.95))", "_____no_output_____" ], [ "R[:6]", "_____no_output_____" ], [ "R.corr()", "_____no_output_____" ], [ "train_data.corr()[c[0]].sort_values()", "_____no_output_____" ], [ "model.save(\"model_h5/M_VH3.h5\")", "_____no_output_____" ], [ "test[c][:6]", "_____no_output_____" ], [ "R[c][:6]", "_____no_output_____" ], [ "R[c].hist(bins=100)", "_____no_output_____" ], [ "R['V_pred'].hist(bins=100)", "_____no_output_____" ], [ "R.diff_ptg.hist(bins=100)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a886dbbf8c62fd6298e6e2a135133294969ba3a
19,613
ipynb
Jupyter Notebook
tutorials/datasci/tour/data_visualization.ipynb
RAPIDSAcademy/rapidsacademy
3a557c9bd14c56bf24acf7ecd9a59ff0dec0b178
[ "BSD-3-Clause" ]
27
2020-05-28T00:13:08.000Z
2022-03-28T15:52:38.000Z
tutorials/datasci/tour/data_visualization.ipynb
RAPIDSAcademy/rapidsacademy
3a557c9bd14c56bf24acf7ecd9a59ff0dec0b178
[ "BSD-3-Clause" ]
1
2020-08-03T18:51:11.000Z
2020-08-03T18:51:11.000Z
tutorials/datasci/tour/data_visualization.ipynb
RAPIDSAcademy/rapidsacademy
3a557c9bd14c56bf24acf7ecd9a59ff0dec0b178
[ "BSD-3-Clause" ]
8
2020-06-03T07:57:41.000Z
2021-01-17T10:45:27.000Z
32.797659
306
0.564473
[ [ [ "# Data Visualization\n\nThe RAPIDS AI ecosystem and `cudf.DataFrame` are built on a series of standards that simplify interoperability with established and emerging data science tools.\n\nWith a growing number of libraries adding GPU support, and a `cudf.DataFrame`’s ability to convert `.to_pandas()`, a large portion of the Python Visualization ([PyViz](pyviz.org/tools.html)) stack is immediately available to display your data. \n\nIn this Notebook, we’ll walk through some of the data visualization possibilities with BlazingSQL. \n\nBlog post: [Data Visualization with BlazingSQL](https://blog.blazingdb.com/data-visualization-with-blazingsql-12095862eb73?source=friends_link&sk=94fc5ee25f2a3356b4a9b9a49fd0f3a1)\n\n#### Overview \n- [Matplotlib](#Matplotlib)\n- [Datashader](#Datashader)\n- [HoloViews](#HoloViews)\n- [cuxfilter](#cuxfilter)", "_____no_output_____" ] ], [ [ "from blazingsql import BlazingContext\nbc = BlazingContext()", "_____no_output_____" ] ], [ [ "### Dataset\n\nThe data we’ll be using for this demo comes from the [NYC Taxi dataset](https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page) and is stored in a public AWS S3 bucket.", "_____no_output_____" ] ], [ [ "bc.s3('blazingsql-colab', bucket_name='blazingsql-colab')\n\nbc.create_table('taxi', 's3://blazingsql-colab/yellow_taxi/taxi_data.parquet')", "_____no_output_____" ] ], [ [ "Let's give the data a quick look to get a clue what we're looking at.", "_____no_output_____" ] ], [ [ "bc.sql('select * from taxi').tail()", "_____no_output_____" ] ], [ [ "## Matplotlib \n\n[GitHub](https://github.com/matplotlib/matplotlib)\n\n> _Matplotlib is a comprehensive library for creating static, animated, and interactive visualizations in Python._\n\nBy calling the `.to_pandas()` method, we can convert a `cudf.DataFrame` into a `pandas.DataFrame` and instantly access Matplotlib with `.plot()`.\n\nFor example, **does the `passenger_count` influence the `tip_amount`?**", "_____no_output_____" ] ], [ [ "bc.sql('SELECT * FROM taxi').to_pandas().plot(kind='scatter', x='passenger_count', y='tip_amount')", "_____no_output_____" ] ], [ [ "Other than the jump from 0 to 1 or outliers at 5 and 6, having more passengers might not be a good deal for the driver's `tip_amount`.", "_____no_output_____" ], [ "Let's see what demand is like. Based on dropoff time, **how many riders were transported by hour?** i.e. column `7` will be the total number of passengers dropped off from 7:00 AM through 7:59 AM for all days in this time period.", "_____no_output_____" ] ], [ [ "riders_by_hour = '''\n select\n sum(passenger_count) as sum_riders,\n hour(cast(tpep_dropoff_datetime || '.0' as TIMESTAMP)) as hour_of_the_day\n from\n taxi\n group by\n hour(cast(tpep_dropoff_datetime || '.0' as TIMESTAMP))\n order by\n hour(cast(tpep_dropoff_datetime || '.0' as TIMESTAMP))\n '''\nbc.sql(riders_by_hour).to_pandas().plot(kind='bar', x='hour_of_the_day', y='sum_riders', title='Sum Riders by Hour', figsize=(12, 6))", "_____no_output_____" ] ], [ [ "Looks like the morning gets started around 6:00 AM, and builds up to a sustained lunchtime double peak from 12:00 PM - 3:00 PM. After a quick 3:00 PM - 5:00 PM siesta, we're right back for prime time from 6:00 PM to 8:00 PM. 
It's downhill from there, but tomorrow is a new day!", "_____no_output_____" ] ], [ [ "solo_rate = len(bc.sql('select * from taxi where passenger_count = 1')) / len(bc.sql('select * from taxi')) * 100\n\nprint(f'{solo_rate}% of rides have only 1 passenger.')", "_____no_output_____" ] ], [ [ "The overwhelming majority of rides have just 1 passenger. How consistent is this solo rider rate? **What's the average `passenger_count` per trip by hour?** \n\nAnd maybe time of day plays a role in `tip_amount` as well, **what's the average `tip_amount` per trip by hour?**\n\nWe can run both queries in the same cell and the results will display inline.", "_____no_output_____" ] ], [ [ "xticks = [n for n in range(24)]\n\navg_riders_by_hour = '''\n select\n avg(passenger_count) as avg_passenger_count,\n hour(dropoff_ts) as hour_of_the_day\n from (\n select\n passenger_count, \n cast(tpep_dropoff_datetime || '.0' as TIMESTAMP) dropoff_ts\n from\n taxi\n )\n group by\n hour(dropoff_ts)\n order by\n hour(dropoff_ts)\n '''\nbc.sql(avg_riders_by_hour).to_pandas().plot(kind='line', x='hour_of_the_day', y='avg_passenger_count', title='Avg. # Riders per Trip by Hour', xticks=xticks, figsize=(12, 6))\n\navg_tip_by_hour = '''\n select\n avg(tip_amount) as avg_tip_amount,\n hour(dropoff_ts) as hour_of_the_day\n from (\n select\n tip_amount, \n cast(tpep_dropoff_datetime || '.0' as TIMESTAMP) dropoff_ts\n from\n taxi\n )\n group by\n hour(dropoff_ts)\n order by\n hour(dropoff_ts)\n '''\nbc.sql(avg_tip_by_hour).to_pandas().plot(kind='line', x='hour_of_the_day', y='avg_tip_amount', title='Avg. Tip ($) per Trip by Hour', xticks=xticks, figsize=(12, 6))", "_____no_output_____" ] ], [ [ "Interestingly, they almost resemble each other from 8:00 PM to 9:00 AM, but where average `passenger_count` continues to rise until 3:00 PM, average `tip_amount` takes a dip until 3:00 PM. \n\nFrom 3:00 PM - 8:00 PM average `tip_amount` starts rising and average `passenger_count` waits patiently for it to catch up.\n\nAverage `tip_amount` peaks at midnight, and bottoms out at 5:00 AM. Average `passenger_count` is highest around 3:00 AM, and lowest at 6:00 AM.", "_____no_output_____" ], [ "## Datashader\n \n[GitHub](https://github.com/holoviz/datashader)\n\n> Datashader is a data rasterization pipeline for automating the process of creating meaningful representations of large amounts of data.\n\nAs of [holoviz/datashader#793](https://github.com/holoviz/datashader/pull/793), the following Datashader features accept `cudf.DataFrame` and `dask_cudf.DataFrame` input:\n\n- `Canvas.points`, `Canvas.line` and `Canvas.area` rasterization\n- All reduction operations except `var` and `std`. 
\n- `transfer_functions.shade` (both 2D and 3D) inputs\n\n#### Colorcet\n\n[GitHub](https://github.com/holoviz/colorcet)\n\n> Colorcet is a collection of perceptually uniform colormaps for use with Python plotting programs like bokeh, matplotlib, holoviews, and datashader based on the set of perceptually uniform colormaps created by Peter Kovesi at the Center for Exploration Targeting.", "_____no_output_____" ] ], [ [ "from datashader import Canvas, transfer_functions as tf\nfrom colorcet import fire", "_____no_output_____" ] ], [ [ "**Do dropoff locations change based on the time of day?** Let's say 6AM-4PM vs 6PM-4AM.", "_____no_output_____" ], [ "Dropoffs from 6:00 AM to 4:00 PM", "_____no_output_____" ] ], [ [ "query = '''\n select \n dropoff_x, dropoff_y \n from \n taxi \n where \n hour(cast(tpep_pickup_datetime || '.0' as TIMESTAMP)) BETWEEN 6 AND 15\n '''\nnyc = Canvas().points(bc.sql(query), 'dropoff_x', 'dropoff_y')\ntf.set_background(tf.shade(nyc, cmap=fire), \"black\")", "_____no_output_____" ] ], [ [ "Dropoffs from 6:00 PM to 4:00 AM", "_____no_output_____" ] ], [ [ "query = '''\n select \n dropoff_x, dropoff_y \n from \n taxi \n where \n hour(cast(tpep_pickup_datetime || '.0' as TIMESTAMP)) BETWEEN 18 AND 23\n OR hour(cast(tpep_pickup_datetime || '.0' as TIMESTAMP)) BETWEEN 0 AND 3\n '''\nnyc = Canvas().points(bc.sql(query), 'dropoff_x', 'dropoff_y')\ntf.set_background(tf.shade(nyc, cmap=fire), \"black\")", "_____no_output_____" ] ], [ [ "While Manhattan makes up the majority of the dropoff geography from 6:00 AM to 4:00 PM, Midtown's spark grows and spreads deeper into Brooklyn and Queens in the 6:00 PM to 4:00 AM window. \n\nConsistent with the more decentralized look across the map, dropoffs near LaGuardia Airport (upper-middle right side) also die down relative to surrounding areas as the night rolls in.", "_____no_output_____" ], [ "## HoloViews \n\n[GitHub](https://github.com/holoviz/holoviews)\n\n> HoloViews is an open-source Python library designed to make data analysis and visualization seamless and simple. 
With HoloViews, you can usually express what you want to do in very few lines of code, letting you focus on what you are trying to explore and convey, not on the process of plotting.\n\nBy calling the `.to_pandas()` method, we can convert a `cudf.DataFrame` into a `pandas.DataFrame` and hand off to HoloViews or other CPU visualization packages.", "_____no_output_____" ] ], [ [ "from holoviews import extension, opts\nfrom holoviews import Scatter, Dimension\nimport holoviews.operation.datashader as hd\n\nextension('bokeh')\nopts.defaults(opts.Scatter(height=425, width=425), opts.RGB(height=425, width=425))\n\ncmap = [(49,130,189), (107,174,214), (123,142,216), (226,103,152), (255,0,104), (50,50,50)]", "_____no_output_____" ] ], [ [ "With HoloViews, we can easily explore the relationship of multiple scatter plots by saving them as variables and displaying them side-by-side with the same code cell.\n\nFor example, let's reexamine `passenger_count` vs `tip_amount` next to a new `holoviews.Scatter` of `fare_amount` vs `tip_amount`.\n\n**Does `passenger_count` affect `tip_amount`?**", "_____no_output_____" ] ], [ [ "s = Scatter(bc.sql('select passenger_count, tip_amount from taxi').to_pandas(), 'passenger_count', 'tip_amount')\n\n# 0-6 passengers, $0-$100 tip\nranged = s.redim.range(passenger_count=(-0.5, 6.5), tip_amount=(0, 100))\nshaded = hd.spread(hd.datashade(ranged, x_sampling=0.25, cmap=cmap))\n\nriders_v_tip = shaded.redim.label(passenger_count=\"Passenger Count\", tip_amount=\"Tip ($)\")", "_____no_output_____" ] ], [ [ "**How do `fare_amount` and `tip_amount` relate?**", "_____no_output_____" ] ], [ [ "s = Scatter(bc.sql('select fare_amount, tip_amount from taxi').to_pandas(), 'fare_amount', 'tip_amount')\n\n# 0-30 miles, $0-$60 tip\nranged = s.redim.range(fare_amount=(0, 100), tip_amount=(0, 100))\nshaded = hd.spread(hd.datashade(ranged, cmap=cmap))\n\nfare_v_tip = shaded.redim.label(fare_amount=\"Fare Amount ($)\", tip_amount=\"Tip ($)\")", "_____no_output_____" ] ], [ [ "Display the answers to both side by side.", "_____no_output_____" ] ], [ [ "riders_v_tip + fare_v_tip", "_____no_output_____" ] ], [ [ "## cuxfilter\n\n[GitHub](https://github.com/rapidsai/cuxfilter)\n\n> cuxfilter (ku-cross-filter) is a RAPIDS framework to connect web visualizations to GPU accelerated crossfiltering. 
Inspired by the javascript version of the original, it enables interactive and super fast multi-dimensional filtering of 100 million+ row tabular datasets via cuDF.\n\ncuxfilter allows us to culminate these charts into a dashboard.", "_____no_output_____" ] ], [ [ "import cuxfilter", "_____no_output_____" ] ], [ [ "Create `cuxfilter.DataFrame` from a `cudf.DataFrame`.", "_____no_output_____" ] ], [ [ "cux_df = cuxfilter.DataFrame.from_dataframe(bc.sql('SELECT passenger_count, tip_amount, dropoff_x, dropoff_y FROM taxi'))", "_____no_output_____" ] ], [ [ "Create some charts & define a dashboard object.", "_____no_output_____" ] ], [ [ "chart_0 = cuxfilter.charts.datashader.scatter_geo(x='dropoff_x', y='dropoff_y')\n\nchart_1 = cuxfilter.charts.bokeh.bar('passenger_count', add_interaction=False)\n\nchart_2 = cuxfilter.charts.datashader.heatmap(x='passenger_count', y='tip_amount', x_range=[-0.5, 6.5], y_range=[0, 100], \n color_palette=cmap, title='Passenger Count vs Tip Amount ($)')", "_____no_output_____" ], [ "dashboard = cux_df.dashboard([chart_0, chart_1, chart_2], title='NYC Yellow Cab')", "_____no_output_____" ] ], [ [ "Display charts in Notebook with `.view()`.", "_____no_output_____" ] ], [ [ "chart_0.view()", "_____no_output_____" ], [ "chart_2.view()", "_____no_output_____" ] ], [ [ "## Multi-GPU Data Visualization\n\nPackages like Datashader and cuxfilter support dask_cudf distributed objects (Series, DataFrame).", "_____no_output_____" ] ], [ [ "from dask_cuda import LocalCUDACluster\nfrom dask.distributed import Client\n\ncluster = LocalCUDACluster()\nclient = Client(cluster)\n\nbc = BlazingContext(dask_client=client, network_interface='lo')", "_____no_output_____" ], [ "bc.s3('blazingsql-colab', bucket_name='blazingsql-colab')\n\nbc.create_table('distributed_taxi', 's3://blazingsql-colab/yellow_taxi/taxi_data.parquet')", "_____no_output_____" ] ], [ [ "Dropoffs from 6:00 PM to 4:00 AM", "_____no_output_____" ] ], [ [ "query = '''\n select \n dropoff_x, dropoff_y \n from \n distributed_taxi \n where \n hour(cast(tpep_pickup_datetime || '.0' as TIMESTAMP)) BETWEEN 18 AND 23\n OR hour(cast(tpep_pickup_datetime || '.0' as TIMESTAMP)) BETWEEN 0 AND 3\n '''\n\nnyc = Canvas().points(bc.sql(query), 'dropoff_x', 'dropoff_y')\n\ntf.set_background(tf.shade(nyc, cmap=fire), \"black\")", "_____no_output_____" ] ], [ [ "## That's the Data Vizualization Tour!\n\nYou've seen the basics of Data Visualization in BlazingSQL Notebooks and how to utilize it. Now is a good time to experiment with your own data and see how to parse, clean, and extract meaningful insights from it.\n\nWe'll now get into how to run Machine Learning with popular Python and GPU-accelerated Python packages.\n\nContinue to the [Machine Learning introductory Notebook](machine_learning.ipynb)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
4a88955533bc28fff4ab09b8ec9b7e1af659d087
525,887
ipynb
Jupyter Notebook
src/MachineLearning/original/Bgt2Vec.ipynb
naseebth/Budget_Text_Analysis
cac0210b8b4b998fe798da92a9bbdd10eb1c4773
[ "MIT" ]
null
null
null
src/MachineLearning/original/Bgt2Vec.ipynb
naseebth/Budget_Text_Analysis
cac0210b8b4b998fe798da92a9bbdd10eb1c4773
[ "MIT" ]
null
null
null
src/MachineLearning/original/Bgt2Vec.ipynb
naseebth/Budget_Text_Analysis
cac0210b8b4b998fe798da92a9bbdd10eb1c4773
[ "MIT" ]
null
null
null
384.983163
402,872
0.929401
[ [ [ "# Bgt2Vec", "_____no_output_____" ], [ "Original code is generated from © Yuriy Guts, 2016", "_____no_output_____" ], [ "## Imports", "_____no_output_____" ] ], [ [ "from __future__ import absolute_import, division, print_function", "_____no_output_____" ], [ "import codecs\nimport glob\nimport logging\nimport multiprocessing\nimport os\nimport pprint\nimport re", "_____no_output_____" ], [ "import nltk\nimport gensim.models.word2vec as w2v\nimport sklearn.manifold\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport seaborn as sns", "_____no_output_____" ], [ "%pylab inline", "Populating the interactive namespace from numpy and matplotlib\n" ] ], [ [ "**Set up logging**", "_____no_output_____" ] ], [ [ "logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)", "_____no_output_____" ] ], [ [ "**Download NLTK tokenizer models (only the first time)**", "_____no_output_____" ] ], [ [ "nltk.download(\"punkt\")", "[nltk_data] Downloading package punkt to\n[nltk_data] C:\\Users\\Sultan\\AppData\\Roaming\\nltk_data...\n[nltk_data] Package punkt is already up-to-date!\n" ] ], [ [ "## Prepare Corpus", "_____no_output_____" ] ], [ [ "# change the current directory to read the data\nos.chdir(r\"C:\\Users\\Sultan\\Desktop\\data\\PreprocessedData\") ", "_____no_output_____" ], [ "df = pd.read_csv('CombinedData.csv', engine='python')", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "# Rename col 0\ndf.columns = ['word','organization','year']\ndf.head()", "_____no_output_____" ], [ "corpus = df.word\n# Join the elements and sperate them by a single space\ncorpus = ' '.join(word for word in corpus)", "_____no_output_____" ], [ "corpus[:196]", "_____no_output_____" ], [ "# change the current directory to read the data\nos.chdir(r\"C:\\Users\\Sultan\\Desktop\\data\\PreprocessedData\\TextFiles\") \n\n# Creating a text file\ntext_data = open(\"CombinedData.txt\",\"a\") \n\n# Writing the string to the file\ntext_data.write(corpus)\n\n# Closing the file\ntext_data.close() ", "_____no_output_____" ] ], [ [ "**Load files**", "_____no_output_____" ] ], [ [ "bgt_filename = \"CombinedData.txt\"", "_____no_output_____" ], [ "corpus_raw = u\"\"\nprint(\"Reading '{0}'...\".format(bgt_filename))\nwith codecs.open(bgt_filename, \"r\", \"utf-8\") as book_file:\n corpus_raw += book_file.read()\n print(\"Corpus is now {0} characters long\".format(len(corpus_raw)))\n print()", "Reading 'CombinedData.txt'...\nCorpus is now 133965480 characters long\n\n" ] ], [ [ "**Split the corpus into sentences**", "_____no_output_____" ] ], [ [ "tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')", "_____no_output_____" ], [ "raw_sentences = tokenizer.tokenize(corpus_raw)", "_____no_output_____" ], [ "def sentence_to_wordlist(raw):\n clean = re.sub(\"[^a-zA-Z]\",\" \", raw)\n words = clean.split()\n return words", "_____no_output_____" ], [ "sentences = []\nfor raw_sentence in raw_sentences:\n if len(raw_sentence) > 0:\n sentences.append(sentence_to_wordlist(raw_sentence))", "_____no_output_____" ], [ "token_count = sum([len(sentence) for sentence in sentences])\nprint(\"corpus contains {0:,} tokens\".format(token_count))", "corpus contains 15,922,096 tokens\n" ] ], [ [ "## Train Word2Vec", "_____no_output_____" ] ], [ [ "# Dimensionality of the resulting word vectors.\nnum_features = 300\n\n# Minimum word count threshold.\nmin_word_count = 3\n\n# Number of threads to run in parallel.\nnum_workers = multiprocessing.cpu_count()\n\n# Context 
window length.\ncontext_size = 7\n\n# Downsample setting for frequent words.\ndownsampling = 1e-3\n\n# Seed for the RNG, to make the results reproducible.\nseed = 1", "_____no_output_____" ], [ "bgt2vec = w2v.Word2Vec(\n sg=1,\n seed=seed,\n workers=num_workers,\n size=num_features,\n min_count=min_word_count,\n window=context_size,\n sample=downsampling\n \n)", "_____no_output_____" ], [ "bgt2vec.build_vocab(sentences)", "2019-12-05 16:31:45,407 : INFO : collecting all words and their counts\n2019-12-05 16:31:45,423 : INFO : PROGRESS: at sentence #0, processed 0 words, keeping 0 word types\n2019-12-05 16:31:52,184 : INFO : collected 34079 word types from a corpus of 15922096 raw words and 1 sentences\n2019-12-05 16:31:52,186 : INFO : Loading a fresh vocabulary\n2019-12-05 16:31:53,706 : INFO : effective_min_count=3 retains 34079 unique words (100% of original 34079, drops 0)\n2019-12-05 16:31:53,706 : INFO : effective_min_count=3 leaves 15922096 word corpus (100% of original 15922096, drops 0)\n2019-12-05 16:31:54,076 : INFO : deleting the raw counts dictionary of 34079 items\n2019-12-05 16:31:54,076 : INFO : sample=0.001 downsamples 42 most-common words\n2019-12-05 16:31:54,076 : INFO : downsampling leaves estimated 14607448 word corpus (91.7% of prior 15922096)\n2019-12-05 16:31:54,361 : INFO : estimated required memory for 34079 words and 300 dimensions: 98829100 bytes\n2019-12-05 16:31:54,361 : INFO : resetting layer weights\n" ], [ "print(\"Word2Vec vocabulary length:\", len(bgt2vec.wv.vocab))", "Word2Vec vocabulary length: 34079\n" ] ], [ [ "**Start training**", "_____no_output_____" ] ], [ [ "bgt2vec.train(sentences, total_examples=bgt2vec.corpus_count, epochs=50)", "2019-12-05 16:31:56,050 : INFO : training model with 3 workers on 34079 vocabulary and 300 features, using sg=1 hs=0 sample=0.001 negative=5 window=7\n2019-12-05 16:31:56,174 : INFO : worker thread finished; awaiting finish of 2 more threads\n2019-12-05 16:31:56,178 : INFO : worker thread finished; awaiting finish of 1 more threads\n2019-12-05 16:31:56,783 : INFO : worker thread finished; awaiting finish of 0 more threads\n2019-12-05 16:31:56,798 : INFO : EPOCH - 1 : training on 15922096 raw words (10000 effective words) took 0.7s, 14071 effective words/s\n2019-12-05 16:31:56,828 : INFO : worker thread finished; awaiting finish of 2 more threads\n2019-12-05 16:31:56,833 : INFO : worker thread finished; awaiting finish of 1 more threads\n2019-12-05 16:31:57,336 : INFO : worker thread finished; awaiting finish of 0 more threads\n2019-12-05 16:31:57,338 : INFO : EPOCH - 2 : training on 15922096 raw words (10000 effective words) took 0.5s, 19150 effective words/s\n2019-12-05 16:31:57,364 : INFO : worker thread finished; awaiting finish of 2 more threads\n2019-12-05 16:31:57,380 : INFO : worker thread finished; awaiting finish of 1 more threads\n2019-12-05 16:31:57,884 : INFO : worker thread finished; awaiting finish of 0 more threads\n2019-12-05 16:31:57,884 : INFO : EPOCH - 3 : training on 15922096 raw words (10000 effective words) took 0.5s, 18365 effective words/s\n2019-12-05 16:31:57,920 : INFO : worker thread finished; awaiting finish of 2 more threads\n2019-12-05 16:31:57,939 : INFO : worker thread finished; awaiting finish of 1 more threads\n2019-12-05 16:31:58,416 : INFO : worker thread finished; awaiting finish of 0 more threads\n2019-12-05 16:31:58,416 : INFO : EPOCH - 4 : training on 15922096 raw words (10000 effective words) took 0.5s, 19065 effective words/s\n2019-12-05 16:31:58,446 : INFO : worker thread 
finished; awaiting finish of 2 more threads\n2019-12-05 16:31:58,454 : INFO : worker thread finished; awaiting finish of 1 more threads\n2019-12-05 16:31:58,941 : INFO : worker thread finished; awaiting finish of 0 more threads\n2019-12-05 16:31:58,941 : INFO : EPOCH - 5 : training on 15922096 raw words (10000 effective words) took 0.5s, 19742 effective words/s\n2019-12-05 16:31:58,970 : INFO : worker thread finished; awaiting finish of 2 more threads\n2019-12-05 16:31:58,976 : INFO : worker thread finished; awaiting finish of 1 more threads\n2019-12-05 16:31:59,442 : INFO : worker thread finished; awaiting finish of 0 more threads\n2019-12-05 16:31:59,458 : INFO : EPOCH - 6 : training on 15922096 raw words (10000 effective words) took 0.5s, 20112 effective words/s\n2019-12-05 16:31:59,477 : INFO : worker thread finished; awaiting finish of 2 more threads\n2019-12-05 16:31:59,486 : INFO : worker thread finished; awaiting finish of 1 more threads\n2019-12-05 16:31:59,943 : INFO : worker thread finished; awaiting finish of 0 more threads\n2019-12-05 16:31:59,959 : INFO : EPOCH - 7 : training on 15922096 raw words (10000 effective words) took 0.5s, 20305 effective words/s\n2019-12-05 16:31:59,977 : INFO : worker thread finished; awaiting finish of 2 more threads\n2019-12-05 16:31:59,989 : INFO : worker thread finished; awaiting finish of 1 more threads\n2019-12-05 16:32:00,460 : INFO : worker thread finished; awaiting finish of 0 more threads\n2019-12-05 16:32:00,460 : INFO : EPOCH - 8 : training on 15922096 raw words (10000 effective words) took 0.5s, 20057 effective words/s\n2019-12-05 16:32:00,486 : INFO : worker thread finished; awaiting finish of 2 more threads\n2019-12-05 16:32:00,492 : INFO : worker thread finished; awaiting finish of 1 more threads\n2019-12-05 16:32:00,993 : INFO : worker thread finished; awaiting finish of 0 more threads\n2019-12-05 16:32:00,993 : INFO : EPOCH - 9 : training on 15922096 raw words (10000 effective words) took 0.5s, 19342 effective words/s\n2019-12-05 16:32:01,013 : INFO : worker thread finished; awaiting finish of 2 more threads\n2019-12-05 16:32:01,024 : INFO : worker thread finished; awaiting finish of 1 more threads\n2019-12-05 16:32:01,510 : INFO : worker thread finished; awaiting finish of 0 more threads\n2019-12-05 16:32:01,510 : INFO : EPOCH - 10 : training on 15922096 raw words (10000 effective words) took 0.5s, 19712 effective words/s\n2019-12-05 16:32:01,533 : INFO : worker thread finished; awaiting finish of 2 more threads\n2019-12-05 16:32:01,542 : INFO : worker thread finished; awaiting finish of 1 more threads\n2019-12-05 16:32:02,026 : INFO : worker thread finished; awaiting finish of 0 more threads\n2019-12-05 16:32:02,026 : INFO : EPOCH - 11 : training on 15922096 raw words (10000 effective words) took 0.5s, 19448 effective words/s\n2019-12-05 16:32:02,058 : INFO : worker thread finished; awaiting finish of 2 more threads\n2019-12-05 16:32:02,065 : INFO : worker thread finished; awaiting finish of 1 more threads\n2019-12-05 16:32:02,564 : INFO : worker thread finished; awaiting finish of 0 more threads\n2019-12-05 16:32:02,564 : INFO : EPOCH - 12 : training on 15922096 raw words (10000 effective words) took 0.5s, 19325 effective words/s\n2019-12-05 16:32:02,585 : INFO : worker thread finished; awaiting finish of 2 more threads\n2019-12-05 16:32:02,591 : INFO : worker thread finished; awaiting finish of 1 more threads\n2019-12-05 16:32:03,113 : INFO : worker thread finished; awaiting finish of 0 more threads\n2019-12-05 16:32:03,113 : 
INFO : EPOCH - 13 : training on 15922096 raw words (10000 effective words) took 0.5s, 18574 effective words/s\n2019-12-05 16:32:03,136 : INFO : worker thread finished; awaiting finish of 2 more threads\n2019-12-05 16:32:03,143 : INFO : worker thread finished; awaiting finish of 1 more threads\n2019-12-05 16:32:03,614 : INFO : worker thread finished; awaiting finish of 0 more threads\n2019-12-05 16:32:03,614 : INFO : EPOCH - 14 : training on 15922096 raw words (10000 effective words) took 0.5s, 19959 effective words/s\n2019-12-05 16:32:03,646 : INFO : worker thread finished; awaiting finish of 2 more threads\n2019-12-05 16:32:03,653 : INFO : worker thread finished; awaiting finish of 1 more threads\n2019-12-05 16:32:04,130 : INFO : worker thread finished; awaiting finish of 0 more threads\n2019-12-05 16:32:04,130 : INFO : EPOCH - 15 : training on 15922096 raw words (10000 effective words) took 0.5s, 19937 effective words/s\n2019-12-05 16:32:04,156 : INFO : worker thread finished; awaiting finish of 2 more threads\n2019-12-05 16:32:04,163 : INFO : worker thread finished; awaiting finish of 1 more threads\n2019-12-05 16:32:04,631 : INFO : worker thread finished; awaiting finish of 0 more threads\n2019-12-05 16:32:04,631 : INFO : EPOCH - 16 : training on 15922096 raw words (10000 effective words) took 0.5s, 20117 effective words/s\n2019-12-05 16:32:04,678 : INFO : worker thread finished; awaiting finish of 2 more threads\n2019-12-05 16:32:04,686 : INFO : worker thread finished; awaiting finish of 1 more threads\n2019-12-05 16:32:05,157 : INFO : worker thread finished; awaiting finish of 0 more threads\n2019-12-05 16:32:05,173 : INFO : EPOCH - 17 : training on 15922096 raw words (10000 effective words) took 0.5s, 19231 effective words/s\n2019-12-05 16:32:05,197 : INFO : worker thread finished; awaiting finish of 2 more threads\n2019-12-05 16:32:05,203 : INFO : worker thread finished; awaiting finish of 1 more threads\n2019-12-05 16:32:05,727 : INFO : worker thread finished; awaiting finish of 0 more threads\n2019-12-05 16:32:05,727 : INFO : EPOCH - 18 : training on 15922096 raw words (10000 effective words) took 0.5s, 18321 effective words/s\n2019-12-05 16:32:05,750 : INFO : worker thread finished; awaiting finish of 2 more threads\n2019-12-05 16:32:05,756 : INFO : worker thread finished; awaiting finish of 1 more threads\n2019-12-05 16:32:06,244 : INFO : worker thread finished; awaiting finish of 0 more threads\n2019-12-05 16:32:06,244 : INFO : EPOCH - 19 : training on 15922096 raw words (10000 effective words) took 0.5s, 19347 effective words/s\n2019-12-05 16:32:06,294 : INFO : worker thread finished; awaiting finish of 2 more threads\n2019-12-05 16:32:06,306 : INFO : worker thread finished; awaiting finish of 1 more threads\n2019-12-05 16:32:06,845 : INFO : worker thread finished; awaiting finish of 0 more threads\n2019-12-05 16:32:06,845 : INFO : EPOCH - 20 : training on 15922096 raw words (10000 effective words) took 0.6s, 17447 effective words/s\n" ] ], [ [ "**Save to file, can be useful later**", "_____no_output_____" ] ], [ [ "if not os.path.exists(\"trained\"):\n os.makedirs(\"trained\")", "_____no_output_____" ], [ "bgt2vec.save(os.path.join(\"trained\", \"bgt2vec.w2v\"))", "2019-12-05 16:32:21,478 : INFO : saving Word2Vec object under trained\\bgt2vec.w2v, separately None\n2019-12-05 16:32:21,484 : INFO : not storing attribute vectors_norm\n2019-12-05 16:32:21,488 : INFO : not storing attribute cum_table\n2019-12-05 16:32:27,553 : INFO : saved trained\\bgt2vec.w2v\n" ] ], [ [ "## 
Explore the trained model.", "_____no_output_____" ] ], [ [ "thrones2vec = w2v.Word2Vec.load(os.path.join(\"trained\", \"bgt2vec.w2v\"))", "2019-12-05 16:32:27,684 : INFO : loading Word2Vec object from trained\\bgt2vec.w2v\n2019-12-05 16:32:29,867 : INFO : loading wv recursively from trained\\bgt2vec.w2v.wv.* with mmap=None\n2019-12-05 16:32:29,867 : INFO : setting ignored attribute vectors_norm to None\n2019-12-05 16:32:29,867 : INFO : loading vocabulary recursively from trained\\bgt2vec.w2v.vocabulary.* with mmap=None\n2019-12-05 16:32:29,884 : INFO : loading trainables recursively from trained\\bgt2vec.w2v.trainables.* with mmap=None\n2019-12-05 16:32:29,888 : INFO : setting ignored attribute cum_table to None\n2019-12-05 16:32:29,957 : INFO : loaded trained\\bgt2vec.w2v\n" ] ], [ [ "### Compress the word vectors into 2D space and plot them", "_____no_output_____" ] ], [ [ "tsne = sklearn.manifold.TSNE(n_components=2, random_state=0)", "_____no_output_____" ], [ "all_word_vectors_matrix = bgt2vec.wv.syn0", "C:\\Users\\Sultan\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:1: DeprecationWarning: Call to deprecated `syn0` (Attribute will be removed in 4.0.0, use self.vectors instead).\n \"\"\"Entry point for launching an IPython kernel.\n" ] ], [ [ "**Train t-SNE**", "_____no_output_____" ] ], [ [ "all_word_vectors_matrix_2d = tsne.fit_transform(all_word_vectors_matrix)", "_____no_output_____" ] ], [ [ "**Plot the big picture**", "_____no_output_____" ] ], [ [ "points = pd.DataFrame(\n [\n (word, coords[0], coords[1])\n for word, coords in [\n (word, all_word_vectors_matrix_2d[bgt2vec.wv.vocab[word].index])\n for word in bgt2vec.wv.vocab\n ]\n ],\n columns=[\"word\", \"x\", \"y\"]\n)", "_____no_output_____" ], [ "points.head(10)", "_____no_output_____" ], [ "sns.set_context(\"poster\")", "_____no_output_____" ], [ "points.plot.scatter(\"x\", \"y\", s=10, figsize=(20, 12))", "_____no_output_____" ] ], [ [ "**Zoom in to some interesting places**", "_____no_output_____" ] ], [ [ "def plot_region(x_bounds, y_bounds):\n slice = points[\n (x_bounds[0] <= points.x) &\n (points.x <= x_bounds[1]) & \n (y_bounds[0] <= points.y) &\n (points.y <= y_bounds[1])\n ]\n \n ax = slice.plot.scatter(\"x\", \"y\", s=35, figsize=(10, 8))\n for i, point in slice.iterrows():\n ax.text(point.x + 0.005, point.y + 0.005, point.word, fontsize=11)", "_____no_output_____" ] ], [ [ "**words related endup together**", "_____no_output_____" ] ], [ [ "plot_region(x_bounds=(5, 10), y_bounds=(-0.5, -0.1))", "_____no_output_____" ] ], [ [ "### Explore semantic similarities between words", "_____no_output_____" ], [ "**Words closest to the given word**", "_____no_output_____" ] ], [ [ "bgt2vec.most_similar(\"guilford\")", "C:\\Users\\Sultan\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:1: DeprecationWarning: Call to deprecated `most_similar` (Method will be removed in 4.0.0, use self.wv.most_similar() instead).\n \"\"\"Entry point for launching an IPython kernel.\n2019-12-05 17:35:22,133 : INFO : precomputing L2-norms of word weight vectors\n" ], [ "bgt2vec.most_similar(\"year\")", "C:\\Users\\Sultan\\Anaconda3\\lib\\site-packages\\ipykernel_launcher.py:1: DeprecationWarning: Call to deprecated `most_similar` (Method will be removed in 4.0.0, use self.wv.most_similar() instead).\n \"\"\"Entry point for launching an IPython kernel.\n" ] ], [ [ "**Linear relationships between word pairs**", "_____no_output_____" ] ], [ [ "def nearest_similarity_cosmul(start1, end1, end2):\n similarities = 
bgt2vec.most_similar_cosmul(\n positive=[end2, start1],\n negative=[end1]\n )\n start2 = similarities[0][0]\n print(\"{start1} is related to {end1}, as {start2} is related to {end2}\".format(**locals()))\n return start2", "_____no_output_____" ], [ "nearest_similarity_cosmul(\"guilford\",\"county\",\"year\")", "guilford is related to county, as fiscal is related to year\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
4a88a2cbf39cea1ae86939e92d61d3a87082c276
39,147
ipynb
Jupyter Notebook
examples/notebooks/demo.ipynb
YaLTeR/catalyst
4b875b50b3c63ac2dac1f19399af0c016dfb4e2f
[ "Apache-2.0" ]
null
null
null
examples/notebooks/demo.ipynb
YaLTeR/catalyst
4b875b50b3c63ac2dac1f19399af0c016dfb4e2f
[ "Apache-2.0" ]
null
null
null
examples/notebooks/demo.ipynb
YaLTeR/catalyst
4b875b50b3c63ac2dac1f19399af0c016dfb4e2f
[ "Apache-2.0" ]
null
null
null
30.655442
173
0.511355
[ [ [ "# Demo\n\nMinimal working examples with Catalyst.\n- ML - Projector, aka \"Linear regression is my profession\"\n- CV - mnist classification, autoencoder, variational autoencoder\n- GAN - mnist again :)\n- NLP - sentiment analysis\n- RecSys - movie recommendations", "_____no_output_____" ] ], [ [ "! pip install -U torch==1.4.0 torchvision==0.5.0 torchtext==0.5.0 catalyst==20.05 pandas==1.0.1 tqdm==4.43", "_____no_output_____" ], [ "# for tensorboard integration\n# !pip install tensorflow\n# %load_ext tensorboard\n# %tensorboard --logdir ./logs", "_____no_output_____" ], [ "import torch\nimport torchvision\nimport torchtext\nimport catalyst\n\nprint(\n \"torch\", torch.__version__, \"\\n\",\n \"torchvision\", torchvision.__version__, \"\\n\",\n \"torchtext\", torchtext.__version__, \"\\n\",\n \"catalyst\", catalyst.__version__,\n)", "_____no_output_____" ] ], [ [ "---", "_____no_output_____" ], [ "# ML - Projector", "_____no_output_____" ] ], [ [ "import torch\nfrom torch.utils.data import DataLoader, TensorDataset\nfrom catalyst.dl import SupervisedRunner\n\n# experiment setup\nlogdir = \"./logdir\"\nnum_epochs = 8\n\n# data\nnum_samples, num_features = int(1e4), int(1e1)\nX, y = torch.rand(num_samples, num_features), torch.rand(num_samples)\ndataset = TensorDataset(X, y)\nloader = DataLoader(dataset, batch_size=32, num_workers=1)\nloaders = {\"train\": loader, \"valid\": loader}\n\n# model, criterion, optimizer, scheduler\nmodel = torch.nn.Linear(num_features, 1)\ncriterion = torch.nn.MSELoss()\noptimizer = torch.optim.Adam(model.parameters())\nscheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, [3, 6])\n\n# model training\nrunner = SupervisedRunner()\nrunner.train(\n model=model,\n criterion=criterion,\n optimizer=optimizer,\n scheduler=scheduler,\n loaders=loaders,\n logdir=logdir,\n num_epochs=num_epochs,\n verbose=True,\n)", "_____no_output_____" ] ], [ [ "---", "_____no_output_____" ], [ "# MNIST - classification", "_____no_output_____" ] ], [ [ "import os\nimport torch\nfrom torch.nn import functional as F\nfrom torch.utils.data import DataLoader\nfrom torchvision.datasets import MNIST\nfrom torchvision import transforms\n\nmodel = torch.nn.Linear(28 * 28, 10)\noptimizer = torch.optim.Adam(model.parameters(), lr=0.02)\n\nloaders = {\n \"train\": DataLoader(\n MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor()), \n batch_size=32),\n \"valid\": DataLoader(\n MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor()), \n batch_size=32),\n}", "_____no_output_____" ], [ "from catalyst import dl\nfrom catalyst.utils import metrics\n\nclass CustomRunner(dl.Runner):\n def _handle_batch(self, batch):\n x, y = batch\n y_hat = self.model(x.view(x.size(0), -1))\n loss = F.cross_entropy(y_hat, y)\n accuracy01, accuracy03, accuracy05 = metrics.accuracy(y_hat, y, topk=(1, 3, 5))\n \n self.state.batch_metrics = {\n \"loss\": loss,\n \"accuracy01\": accuracy01,\n \"accuracy03\": accuracy03,\n \"accuracy05\": accuracy05,\n }\n \n if self.state.is_train_loader:\n loss.backward()\n self.state.optimizer.step()\n self.state.optimizer.zero_grad()\n ", "_____no_output_____" ], [ "runner = CustomRunner()\nrunner.train(\n model=model, \n optimizer=optimizer, \n loaders=loaders, \n num_epochs=1,\n verbose=True,\n timeit=False,\n logdir=\"./logs_custom\"\n)", "_____no_output_____" ] ], [ [ "---", "_____no_output_____" ], [ "# MNIST - classification with AutoEncoder", "_____no_output_____" ] ], [ [ "import os\nimport torch\nfrom torch import 
nn\nfrom torch.nn import functional as F\nfrom torch.utils.data import DataLoader\nfrom torchvision.datasets import MNIST\nfrom torchvision import transforms\n\nclass ClassifyAE(nn.Module):\n def __init__(self, in_features, hid_features, out_features):\n super().__init__()\n self.encoder = nn.Sequential(nn.Linear(in_features, hid_features), nn.Tanh())\n self.decoder = nn.Sequential(nn.Linear(hid_features, in_features), nn.Sigmoid())\n self.clf = nn.Linear(hid_features, out_features)\n \n def forward(self, x):\n z = self.encoder(x)\n y_hat = self.clf(z)\n x_ = self.decoder(z)\n return y_hat, x_\n \n\nmodel = ClassifyAE(28 * 28, 128, 10)\noptimizer = torch.optim.Adam(model.parameters(), lr=0.02)\n\nloaders = {\n \"train\": DataLoader(\n MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor()), \n batch_size=32),\n \"valid\": DataLoader(\n MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor()), \n batch_size=32),\n}", "_____no_output_____" ], [ "from catalyst import dl\nfrom catalyst.utils import metrics\n\nclass CustomRunner(dl.Runner):\n def _handle_batch(self, batch):\n x, y = batch\n x = x.view(x.size(0), -1)\n y_hat, x_ = self.model(x)\n loss_clf = F.cross_entropy(y_hat, y)\n loss_ae = F.mse_loss(x_, x)\n loss = loss_clf + loss_ae\n accuracy01, accuracy03, accuracy05 = metrics.accuracy(y_hat, y, topk=(1, 3, 5))\n \n self.state.batch_metrics = {\n \"loss_clf\": loss_clf,\n \"loss_ae\": loss_ae,\n \"loss\": loss,\n \"accuracy01\": accuracy01,\n \"accuracy03\": accuracy03,\n \"accuracy05\": accuracy05,\n }\n \n if self.state.is_train_loader:\n loss.backward()\n self.state.optimizer.step()\n self.state.optimizer.zero_grad()\n ", "_____no_output_____" ], [ "runner = CustomRunner()\nrunner.train(\n model=model, \n optimizer=optimizer, \n loaders=loaders,\n num_epochs=1,\n verbose=True,\n timeit=False,\n logdir=\"./logs_custom_ae\"\n)", "_____no_output_____" ] ], [ [ "---", "_____no_output_____" ], [ "# MNIST - classification with Variational AutoEncoder", "_____no_output_____" ] ], [ [ "import os\nimport numpy as np\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\nfrom torch.utils.data import DataLoader\nfrom torchvision.datasets import MNIST\nfrom torchvision import transforms\n\n\nLOG_SCALE_MAX = 2\nLOG_SCALE_MIN = -10\n\n\ndef normal_sample(mu, sigma):\n \"\"\"\n Sample from multivariate Gaussian distribution z ~ N(z|mu,sigma)\n while supporting backpropagation through its mean and variance.\n \"\"\"\n return mu + sigma * torch.randn_like(sigma)\n\n\ndef normal_logprob(mu, sigma, z):\n \"\"\"\n Probability density function of multivariate Gaussian distribution\n N(z|mu,sigma).\n \"\"\"\n normalization_constant = (-sigma.log() - 0.5 * np.log(2 * np.pi))\n square_term = -0.5 * ((z - mu) / sigma)**2\n logprob_vec = normalization_constant + square_term\n logprob = logprob_vec.sum(1)\n return logprob\n\n\nclass ClassifyVAE(torch.nn.Module):\n def __init__(self, in_features, hid_features, out_features):\n super().__init__()\n self.encoder = torch.nn.Linear(in_features, hid_features * 2)\n self.decoder = nn.Sequential(nn.Linear(hid_features, in_features), nn.Sigmoid())\n self.clf = torch.nn.Linear(hid_features, out_features)\n \n def forward(self, x, deterministic=False):\n z = self.encoder(x)\n bs, z_dim = z.shape\n\n loc, log_scale = z[:, :z_dim // 2], z[:, z_dim // 2:]\n log_scale = torch.clamp(log_scale, LOG_SCALE_MIN, LOG_SCALE_MAX)\n scale = torch.exp(log_scale)\n z_ = loc if deterministic else normal_sample(loc, 
scale)\n z_logprob = normal_logprob(loc, scale, z_)\n z_ = z_.view(bs, -1)\n x_ = self.decoder(z_)\n y_hat = self.clf(z_)\n\n return y_hat, x_, z_logprob, loc, log_scale\n \n\nmodel = ClassifyVAE(28 * 28, 64, 10)\noptimizer = torch.optim.Adam(model.parameters(), lr=0.02)\n\nloaders = {\n \"train\": DataLoader(\n MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor()), \n batch_size=32),\n \"valid\": DataLoader(\n MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor()), \n batch_size=32),\n}", "_____no_output_____" ], [ "from catalyst import dl\nfrom catalyst.utils import metrics\n\nclass CustomRunner(dl.Runner):\n def _handle_batch(self, batch):\n kld_regularization = 0.1\n logprob_regularization = 0.01\n \n x, y = batch\n x = x.view(x.size(0), -1)\n y_hat, x_, z_logprob, loc, log_scale = self.model(x)\n \n loss_clf = F.cross_entropy(y_hat, y)\n loss_ae = F.mse_loss(x_, x)\n loss_kld = -0.5 * torch.mean(\n 1 + log_scale - loc.pow(2) - log_scale.exp()\n ) * kld_regularization\n loss_logprob = torch.mean(z_logprob) * logprob_regularization\n loss = loss_clf + loss_ae + loss_kld + loss_logprob\n accuracy01, accuracy03, accuracy05 = metrics.accuracy(y_hat, y, topk=(1, 3, 5))\n \n self.state.batch_metrics = {\n \"loss_clf\": loss_clf,\n \"loss_ae\": loss_ae,\n \"loss_kld\": loss_kld,\n \"loss_logprob\": loss_logprob,\n \"loss\": loss,\n \"accuracy01\": accuracy01,\n \"accuracy03\": accuracy03,\n \"accuracy05\": accuracy05,\n }\n \n if self.state.is_train_loader:\n loss.backward()\n self.state.optimizer.step()\n self.state.optimizer.zero_grad()\n ", "_____no_output_____" ], [ "runner = CustomRunner()\nrunner.train(\n model=model, \n optimizer=optimizer, \n loaders=loaders,\n num_epochs=1,\n verbose=True,\n timeit=False,\n logdir=\"./logs_custom_vae\"\n)", "_____no_output_____" ] ], [ [ "---", "_____no_output_____" ], [ "# MNIST - segmentation with classification auxiliary task", "_____no_output_____" ] ], [ [ "import os\nimport torch\nfrom torch import nn\nfrom torch.nn import functional as F\nfrom torch.utils.data import DataLoader\nfrom torchvision.datasets import MNIST\nfrom torchvision import transforms\n\nclass ClassifyUnet(nn.Module):\n def __init__(self, in_channels, in_hw, out_features):\n super().__init__()\n self.encoder = nn.Sequential(nn.Conv2d(in_channels, in_channels, 3, 1, 1), nn.Tanh())\n self.decoder = nn.Conv2d(in_channels, in_channels, 3, 1, 1)\n self.clf = nn.Linear(in_channels * in_hw * in_hw, out_features)\n\n def forward(self, x):\n z = self.encoder(x)\n z_ = z.view(z.size(0), -1)\n y_hat = self.clf(z_)\n x_ = self.decoder(z)\n return y_hat, x_\n \n\nmodel = ClassifyUnet(1, 28, 10)\noptimizer = torch.optim.Adam(model.parameters(), lr=0.02)\n\nloaders = {\n \"train\": DataLoader(\n MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor()), \n batch_size=32),\n \"valid\": DataLoader(\n MNIST(os.getcwd(), train=False, download=True, transform=transforms.ToTensor()), \n batch_size=32),\n}", "_____no_output_____" ], [ "from catalyst import dl\nfrom catalyst.utils import metrics\n\nclass CustomRunner(dl.Runner):\n def _handle_batch(self, batch):\n x, y = batch\n x_noise = (x + torch.rand_like(x)).clamp_(0, 1)\n y_hat, x_ = self.model(x_noise)\n\n loss_clf = F.cross_entropy(y_hat, y)\n iou = metrics.iou(x_, x)\n loss_iou = 1 - iou\n loss = loss_clf + loss_iou\n accuracy01, accuracy03, accuracy05 = metrics.accuracy(y_hat, y, topk=(1, 3, 5))\n \n self.state.batch_metrics = {\n \"loss_clf\": loss_clf,\n 
\"loss_iou\": loss_iou,\n \"loss\": loss,\n \"iou\": iou,\n \"accuracy01\": accuracy01,\n \"accuracy03\": accuracy03,\n \"accuracy05\": accuracy05,\n }\n \n if self.state.is_train_loader:\n loss.backward()\n self.state.optimizer.step()\n self.state.optimizer.zero_grad()\n ", "_____no_output_____" ], [ "runner = CustomRunner()\nrunner.train(\n model=model, \n optimizer=optimizer, \n loaders=loaders,\n num_epochs=1,\n verbose=True,\n timeit=False,\n logdir=\"./logs_custom_unet\"\n)", "_____no_output_____" ] ], [ [ "---", "_____no_output_____" ], [ "# GAN", "_____no_output_____" ] ], [ [ "import torch\nfrom torch import nn\nfrom torch.nn import functional as F\nfrom catalyst.contrib.nn.modules import GlobalMaxPool2d, Flatten, Lambda\n\n# Create the discriminator\ndiscriminator = nn.Sequential(\n nn.Conv2d(1, 64, (3, 3), stride=(2, 2), padding=1),\n nn.LeakyReLU(0.2, inplace=True),\n nn.Conv2d(64, 128, (3, 3), stride=(2, 2), padding=1),\n nn.LeakyReLU(0.2, inplace=True),\n GlobalMaxPool2d(),\n Flatten(),\n nn.Linear(128, 1)\n)\n\n# Create the generator\nlatent_dim = 128\ngenerator = nn.Sequential(\n # We want to generate 128 coefficients to reshape into a 7x7x128 map\n nn.Linear(128, 128 * 7 * 7),\n nn.LeakyReLU(0.2, inplace=True),\n Lambda(lambda x: x.view(x.size(0), 128, 7, 7)),\n nn.ConvTranspose2d(128, 128, (4, 4), stride=(2, 2), padding=1),\n nn.LeakyReLU(0.2, inplace=True),\n nn.ConvTranspose2d(128, 128, (4, 4), stride=(2, 2), padding=1),\n nn.LeakyReLU(0.2, inplace=True),\n nn.Conv2d(128, 1, (7, 7), padding=3),\n nn.Sigmoid(),\n)\n\n# Final model\nmodel = {\n \"generator\": generator,\n \"discriminator\": discriminator,\n}\n\noptimizer = {\n \"generator\": torch.optim.Adam(generator.parameters(), lr=0.0003, betas=(0.5, 0.999)),\n \"discriminator\": torch.optim.Adam(discriminator.parameters(), lr=0.0003, betas=(0.5, 0.999)),\n}", "_____no_output_____" ], [ "from catalyst import dl\n\nclass CustomRunner(dl.Runner):\n\n def _handle_batch(self, batch):\n real_images, _ = batch\n batch_metrics = {}\n \n # Sample random points in the latent space\n batch_size = real_images.shape[0]\n random_latent_vectors = torch.randn(batch_size, latent_dim).to(self.device)\n \n # Decode them to fake images\n generated_images = self.model[\"generator\"](random_latent_vectors).detach()\n # Combine them with real images\n combined_images = torch.cat([generated_images, real_images])\n \n # Assemble labels discriminating real from fake images\n labels = torch.cat([\n torch.ones((batch_size, 1)), torch.zeros((batch_size, 1))\n ]).to(self.device)\n # Add random noise to the labels - important trick!\n labels += 0.05 * torch.rand(labels.shape).to(self.device)\n \n # Train the discriminator\n predictions = self.model[\"discriminator\"](combined_images)\n batch_metrics[\"loss_discriminator\"] = \\\n F.binary_cross_entropy_with_logits(predictions, labels)\n \n # Sample random points in the latent space\n random_latent_vectors = torch.randn(batch_size, latent_dim).to(self.device)\n # Assemble labels that say \"all real images\"\n misleading_labels = torch.zeros((batch_size, 1)).to(self.device)\n \n # Train the generator\n generated_images = self.model[\"generator\"](random_latent_vectors)\n predictions = self.model[\"discriminator\"](generated_images)\n batch_metrics[\"loss_generator\"] = \\\n F.binary_cross_entropy_with_logits(predictions, misleading_labels)\n \n self.state.batch_metrics.update(**batch_metrics)", "_____no_output_____" ], [ "\nimport os\nimport torchvision.transforms as transforms\nfrom 
torch.utils.data import DataLoader\nfrom torchvision.datasets import MNIST\n\nloaders = {\n \"train\": DataLoader(\n MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor()), \n batch_size=64),\n}\n\nrunner = CustomRunner()\nrunner.train(\n model=model, \n optimizer=optimizer,\n loaders=loaders,\n callbacks=[\n dl.OptimizerCallback(\n optimizer_key=\"generator\", \n metric_key=\"loss_generator\"\n ),\n dl.OptimizerCallback(\n optimizer_key=\"discriminator\", \n metric_key=\"loss_discriminator\"\n ),\n ],\n main_metric=\"loss_generator\",\n num_epochs=20,\n verbose=True,\n logdir=\"./logs_gan\",\n)\n", "_____no_output_____" ] ], [ [ "# NLP", "_____no_output_____" ] ], [ [ "import torch\nfrom torch import nn, optim\nimport torch.nn.functional as F \n\nimport torchtext\nfrom torchtext.datasets import text_classification", "_____no_output_____" ], [ "NGRAMS = 2\nimport os\nif not os.path.isdir('./data'):\n os.mkdir('./data')\nif not os.path.isdir('./data/nlp'):\n os.mkdir('./data/nlp')\ntrain_dataset, valid_dataset = text_classification.DATASETS['AG_NEWS'](\n root='./data/nlp', ngrams=NGRAMS, vocab=None)", "_____no_output_____" ], [ "VOCAB_SIZE = len(train_dataset.get_vocab())\nEMBED_DIM = 32\nNUM_CLASS = len(train_dataset.get_labels())\nBATCH_SIZE = 32", "_____no_output_____" ], [ "def generate_batch(batch):\n label = torch.tensor([entry[0] for entry in batch])\n text = [entry[1] for entry in batch]\n offsets = [0] + [len(entry) for entry in text]\n # torch.Tensor.cumsum returns the cumulative sum\n # of elements in the dimension dim.\n # torch.Tensor([1.0, 2.0, 3.0]).cumsum(dim=0)\n\n offsets = torch.tensor(offsets[:-1]).cumsum(dim=0)\n text = torch.cat(text)\n output = {\n \"text\": text,\n \"offsets\": offsets,\n \"label\": label\n }\n return output\n\ntrain_loader = torch.utils.data.DataLoader(\n train_dataset, \n batch_size=BATCH_SIZE, \n shuffle=True,\n collate_fn=generate_batch,\n)\n\nvalid_loader = torch.utils.data.DataLoader(\n valid_dataset, \n batch_size=BATCH_SIZE, \n shuffle=False,\n collate_fn=generate_batch,\n)", "_____no_output_____" ], [ "class TextSentiment(nn.Module):\n def __init__(self, vocab_size, embed_dim, num_class):\n super().__init__()\n self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, sparse=True)\n self.fc = nn.Linear(embed_dim, num_class)\n self.init_weights()\n\n def init_weights(self):\n initrange = 0.5\n self.embedding.weight.data.uniform_(-initrange, initrange)\n self.fc.weight.data.uniform_(-initrange, initrange)\n self.fc.bias.data.zero_()\n\n def forward(self, text, offsets):\n embedded = self.embedding(text, offsets)\n return self.fc(embedded)", "_____no_output_____" ], [ "model = TextSentiment(VOCAB_SIZE, EMBED_DIM, NUM_CLASS)\ncriterion = torch.nn.CrossEntropyLoss()\noptimizer = torch.optim.SGD(model.parameters(), lr=4.0)\nscheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1, gamma=0.9)", "_____no_output_____" ], [ "from catalyst.dl import SupervisedRunner, \\\n CriterionCallback, AccuracyCallback\n\n# input_keys - which key from dataloader we need to pass to the model\nrunner = SupervisedRunner(input_key=[\"text\", \"offsets\"])\n\nrunner.train(\n model=model, \n criterion=criterion,\n optimizer=optimizer, \n scheduler=scheduler,\n loaders={'train': train_loader, 'valid': valid_loader},\n logdir=\"./logs_nlp\",\n num_epochs=3,\n verbose=True,\n # input_key - which key from dataloader we need to pass to criterion as target label\n callbacks=[\n CriterionCallback(input_key=\"label\"),\n 
AccuracyCallback(input_key=\"label\")\n ]\n)", "_____no_output_____" ] ], [ [ "# RecSys", "_____no_output_____" ] ], [ [ "import time\nimport os\nimport requests\nimport tqdm\n\nimport numpy as np\nimport pandas as pd\nimport scipy.sparse as sp\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F \nimport torch.utils.data as td\nimport torch.optim as to\n\nimport matplotlib.pyplot as pl\nimport seaborn as sns", "_____no_output_____" ], [ "# Configuration\n\n# The directory to store the data\ndata_dir = \"data/recsys\"\n\ntrain_rating = \"ml-1m.train.rating\"\ntest_negative = \"ml-1m.test.negative\"\n\n# NCF config\ntrain_negative_samples = 4\ntest_negative_samples = 99\nembedding_dim = 64\nhidden_dim = 32\n\n# Training config\nbatch_size = 256\nepochs = 10 # Original implementation uses 20\ntop_k=10", "_____no_output_____" ], [ "if not os.path.isdir('./data'):\n os.mkdir('./data')\nif not os.path.isdir('./data/recsys'):\n os.mkdir('./data/recsys')\n \nfor file_name in [train_rating, test_negative]:\n file_path = os.path.join(data_dir, file_name)\n if os.path.exists(file_path):\n print(\"Skip loading \" + file_name)\n continue\n with open(file_path, \"wb\") as tf:\n print(\"Load \" + file_name)\n r = requests.get(\"https://raw.githubusercontent.com/hexiangnan/neural_collaborative_filtering/master/Data/\" + file_name, allow_redirects=True)\n tf.write(r.content)", "_____no_output_____" ], [ "def preprocess_train():\n train_data = pd.read_csv(os.path.join(data_dir, train_rating), sep='\\t', header=None, names=['user', 'item'], usecols=[0, 1], dtype={0: np.int32, 1: np.int32})\n\n user_num = train_data['user'].max() + 1\n item_num = train_data['item'].max() + 1\n\n train_data = train_data.values.tolist()\n\n # Convert ratings as a dok matrix\n train_mat = sp.dok_matrix((user_num, item_num), dtype=np.float32)\n for user, item in train_data:\n train_mat[user, item] = 1.0\n \n return train_data, train_mat, user_num, item_num\n\n\ntrain_data, train_mat, user_num, item_num = preprocess_train()", "_____no_output_____" ], [ "def preprocess_test():\n test_data = []\n with open(os.path.join(data_dir, test_negative)) as tnf:\n for line in tnf:\n parts = line.split('\\t')\n assert len(parts) == test_negative_samples + 1\n \n user, positive = eval(parts[0])\n test_data.append([user, positive])\n \n for negative in parts[1:]:\n test_data.append([user, int(negative)])\n\n return test_data\n\n\nvalid_data = preprocess_test()", "_____no_output_____" ], [ "class NCFDataset(td.Dataset):\n \n def __init__(self, positive_data, item_num, positive_mat, negative_samples=0):\n super(NCFDataset, self).__init__()\n self.positive_data = positive_data\n self.item_num = item_num\n self.positive_mat = positive_mat\n self.negative_samples = negative_samples\n \n self.reset()\n \n def reset(self):\n print(\"Resetting dataset\")\n if self.negative_samples > 0:\n negative_data = self.sample_negatives()\n data = self.positive_data + negative_data\n labels = [1] * len(self.positive_data) + [0] * len(negative_data)\n else:\n data = self.positive_data\n labels = [0] * len(self.positive_data)\n \n self.data = np.concatenate([\n np.array(data), \n np.array(labels)[:, np.newaxis]], \n axis=1\n )\n \n\n def sample_negatives(self):\n negative_data = []\n for user, positive in self.positive_data:\n for _ in range(self.negative_samples):\n negative = np.random.randint(self.item_num)\n while (user, negative) in self.positive_mat:\n negative = np.random.randint(self.item_num)\n \n negative_data.append([user, 
negative])\n\n return negative_data\n\n def __len__(self):\n return len(self.data)\n\n def __getitem__(self, idx):\n user, item, label = self.data[idx]\n output = {\n \"user\": user,\n \"item\": item,\n \"label\": np.float32(label),\n }\n return output\n\n \nclass SamplerWithReset(td.RandomSampler):\n def __iter__(self):\n self.data_source.reset()\n return super().__iter__()", "_____no_output_____" ], [ "train_dataset = NCFDataset(\n train_data, \n item_num, \n train_mat, \n train_negative_samples\n)\ntrain_loader = td.DataLoader(\n train_dataset, \n batch_size=batch_size, \n shuffle=False, \n num_workers=4,\n sampler=SamplerWithReset(train_dataset)\n)\n\nvalid_dataset = NCFDataset(valid_data, item_num, train_mat)\nvalid_loader = td.DataLoader(\n valid_dataset, \n batch_size=test_negative_samples+1, \n shuffle=False, \n num_workers=0\n)", "_____no_output_____" ], [ "class Ncf(nn.Module):\n \n def __init__(self, user_num, item_num, embedding_dim, hidden_dim):\n super(Ncf, self).__init__()\n \n self.user_embeddings = nn.Embedding(user_num, embedding_dim)\n self.item_embeddings = nn.Embedding(item_num, embedding_dim)\n\n self.layers = nn.Sequential(\n nn.Linear(2 * embedding_dim, hidden_dim),\n nn.ReLU(),\n nn.Linear(hidden_dim, hidden_dim),\n nn.ReLU(),\n nn.Linear(hidden_dim, 1)\n )\n\n self.initialize()\n\n def initialize(self):\n nn.init.normal_(self.user_embeddings.weight, std=0.01)\n nn.init.normal_(self.item_embeddings.weight, std=0.01)\n\n for layer in self.layers:\n if isinstance(layer, nn.Linear):\n nn.init.xavier_uniform_(layer.weight)\n layer.bias.data.zero_()\n \n def forward(self, user, item):\n user_embedding = self.user_embeddings(user)\n item_embedding = self.item_embeddings(item)\n concat = torch.cat((user_embedding, item_embedding), -1)\n return self.layers(concat).view(-1)\n \n def name(self):\n return \"Ncf\"", "_____no_output_____" ], [ "def hit_metric(recommended, actual):\n return int(actual in recommended)\n\n\ndef dcg_metric(recommended, actual):\n if actual in recommended:\n index = recommended.index(actual)\n return np.reciprocal(np.log2(index + 2))\n return 0", "_____no_output_____" ], [ "model = Ncf(user_num, item_num, embedding_dim, hidden_dim)\ncriterion = nn.BCEWithLogitsLoss()\noptimizer = to.Adam(model.parameters())", "_____no_output_____" ], [ "from catalyst.dl import Callback, CallbackOrder, State\n\nclass NdcgLoaderMetricCallback(Callback):\n def __init__(self):\n super().__init__(CallbackOrder.Metric)\n\n def on_batch_end(self, state: State):\n item = state.input[\"item\"]\n predictions = state.output[\"logits\"]\n\n _, indices = torch.topk(predictions, top_k)\n recommended = torch.take(item, indices).cpu().numpy().tolist()\n\n item = item[0].item()\n state.batch_metrics[\"hits\"] = hit_metric(recommended, item)\n state.batch_metrics[\"dcgs\"] = dcg_metric(recommended, item)", "_____no_output_____" ], [ "from catalyst.dl import SupervisedRunner, CriterionCallback\n\n# input_keys - which key from dataloader we need to pass to the model\nrunner = SupervisedRunner(input_key=[\"user\", \"item\"])\n\nrunner.train(\n model=model, \n criterion=criterion,\n optimizer=optimizer, \n loaders={'train': train_loader, 'valid': valid_loader},\n logdir=\"./logs_recsys\",\n num_epochs=3,\n verbose=True,\n # input_key - which key from dataloader we need to pass to criterion as target label\n callbacks=[\n CriterionCallback(input_key=\"label\"),\n NdcgLoaderMetricCallback()\n ]\n)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a88cac58d80e54f2f7c26115a338eb56d014544
257,200
ipynb
Jupyter Notebook
3_5_weight-initialization/weight_initialization_exercise.ipynb
JasmineMou/udacity_DL
5e33a2aae8a062948cc17e147ec25de334f0ed41
[ "MIT" ]
null
null
null
3_5_weight-initialization/weight_initialization_exercise.ipynb
JasmineMou/udacity_DL
5e33a2aae8a062948cc17e147ec25de334f0ed41
[ "MIT" ]
null
null
null
3_5_weight-initialization/weight_initialization_exercise.ipynb
JasmineMou/udacity_DL
5e33a2aae8a062948cc17e147ec25de334f0ed41
[ "MIT" ]
null
null
null
289.966178
54,368
0.914176
[ [ [ "# Weight Initialization\nIn this lesson, you'll learn how to find good initial weights for a neural network. Weight initialization happens once, when a model is created and before it trains. Having good initial weights can place the neural network close to the optimal solution. This allows the neural network to come to the best solution quicker. \n\n<img src=\"notebook_ims/neuron_weights.png\" width=40%/>\n\n\n## Initial Weights and Observing Training Loss\n\nTo see how different weights perform, we'll test on the same dataset and neural network. That way, we know that any changes in model behavior are due to the weights and not any changing data or model structure. \n> We'll instantiate at least two of the same models, with _different_ initial weights and see how the training loss decreases over time, such as in the example below. \n\n<img src=\"notebook_ims/loss_comparison_ex.png\" width=60%/>\n\nSometimes the differences in training loss, over time, will be large and other times, certain weights offer only small improvements.\n\n### Dataset and Model\n\nWe'll train an MLP to classify images from the [Fashion-MNIST database](https://github.com/zalandoresearch/fashion-mnist) to demonstrate the effect of different initial weights. As a reminder, the FashionMNIST dataset contains images of clothing types; `classes = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']`. The images are normalized so that their pixel values are in a range [0.0 - 1.0). Run the cell below to download and load the dataset.\n\n---\n#### EXERCISE\n\n[Link to normalized distribution, exercise code](#normalex)\n\n---", "_____no_output_____" ], [ "### Import Libraries and Load [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)", "_____no_output_____" ] ], [ [ "import torch\nimport numpy as np\nfrom torchvision import datasets\nimport torchvision.transforms as transforms\nfrom torch.utils.data.sampler import SubsetRandomSampler\n\n# number of subprocesses to use for data loading\nnum_workers = 0\n# how many samples per batch to load\nbatch_size = 100\n# percentage of training set to use as validation\nvalid_size = 0.2\n\n# convert data to torch.FloatTensor\ntransform = transforms.ToTensor()\n\n# choose the training and test datasets\ntrain_data = datasets.FashionMNIST(root='data', train=True,\n download=True, transform=transform)\ntest_data = datasets.FashionMNIST(root='data', train=False,\n download=True, transform=transform)\n\n# obtain training indices that will be used for validation\nnum_train = len(train_data)\nindices = list(range(num_train))\nnp.random.shuffle(indices)\nsplit = int(np.floor(valid_size * num_train))\ntrain_idx, valid_idx = indices[split:], indices[:split]\n\n# define samplers for obtaining training and validation batches\ntrain_sampler = SubsetRandomSampler(train_idx)\nvalid_sampler = SubsetRandomSampler(valid_idx)\n\n# prepare data loaders (combine dataset and sampler)\ntrain_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,\n sampler=train_sampler, num_workers=num_workers)\nvalid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, \n sampler=valid_sampler, num_workers=num_workers)\ntest_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, \n num_workers=num_workers)\n\n# specify the image classes\nclasses = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', \n 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']", "Downloading 
http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz\nDownloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-labels-idx1-ubyte.gz\nDownloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-images-idx3-ubyte.gz\nDownloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/t10k-labels-idx1-ubyte.gz\nProcessing...\nDone!\n" ] ], [ [ "### Visualize Some Training Data", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\n%matplotlib inline\n \n# obtain one batch of training images\ndataiter = iter(train_loader)\nimages, labels = dataiter.next()\nimages = images.numpy()\n\n# plot the images in the batch, along with the corresponding labels\nfig = plt.figure(figsize=(25, 4))\nfor idx in np.arange(20):\n ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])\n ax.imshow(np.squeeze(images[idx]), cmap='gray')\n ax.set_title(classes[labels[idx]])", "_____no_output_____" ] ], [ [ "## Define the Model Architecture\n\nWe've defined the MLP that we'll use for classifying the dataset.\n\n### Neural Network\n<img style=\"float: left\" src=\"notebook_ims/neural_net.png\" width=50%/>\n\n\n* A 3 layer MLP with hidden dimensions of 256 and 128. \n\n* This MLP accepts a flattened image (784-value long vector) as input and produces 10 class scores as output.\n---\nWe'll test the effect of different initial weights on this 3 layer neural network with ReLU activations and an Adam optimizer. \n\nThe lessons you learn apply to other neural networks, including different activations and optimizers.", "_____no_output_____" ], [ "---\n## Initialize Weights\nLet's start looking at some initial weights.\n### All Zeros or Ones\nIf you follow the principle of [Occam's razor](https://en.wikipedia.org/wiki/Occam's_razor), you might think setting all the weights to 0 or 1 would be the best solution. This is not the case.\n\nWith every weight the same, all the neurons at each layer are producing the same output. This makes it hard to decide which weights to adjust.\n\nLet's compare the loss with all ones and all zero weights by defining two models with those constant weights.\n\nBelow, we are using PyTorch's [nn.init](https://pytorch.org/docs/stable/nn.html#torch-nn-init) to initialize each Linear layer with a constant weight. The init library provides a number of weight initialization functions that give you the ability to initialize the weights of each layer according to layer type.\n\nIn the case below, we look at every layer/module in our model. 
If it is a Linear layer (as all three layers are for this MLP), then we initialize those layer weights to be a `constant_weight` with bias=0 using the following code:\n>```\nif isinstance(m, nn.Linear):\n nn.init.constant_(m.weight, constant_weight)\n nn.init.constant_(m.bias, 0)\n```\n\nThe `constant_weight` is a value that you can pass in when you instantiate the model.", "_____no_output_____" ] ], [ [ "import torch.nn as nn\nimport torch.nn.functional as F\n\n# define the NN architecture\nclass Net(nn.Module):\n def __init__(self, hidden_1=256, hidden_2=128, constant_weight=None):\n super(Net, self).__init__()\n # linear layer (784 -> hidden_1)\n self.fc1 = nn.Linear(28 * 28, hidden_1)\n # linear layer (hidden_1 -> hidden_2)\n self.fc2 = nn.Linear(hidden_1, hidden_2)\n # linear layer (hidden_2 -> 10)\n self.fc3 = nn.Linear(hidden_2, 10)\n # dropout layer (p=0.2)\n self.dropout = nn.Dropout(0.2)\n \n # initialize the weights to a specified, constant value\n if(constant_weight is not None):\n for m in self.modules():\n if isinstance(m, nn.Linear):\n nn.init.constant_(m.weight, constant_weight)\n nn.init.constant_(m.bias, 0)\n \n \n def forward(self, x):\n # flatten image input\n x = x.view(-1, 28 * 28)\n # add hidden layer, with relu activation function\n x = F.relu(self.fc1(x))\n # add dropout layer\n x = self.dropout(x)\n # add hidden layer, with relu activation function\n x = F.relu(self.fc2(x))\n # add dropout layer\n x = self.dropout(x)\n # add output layer\n x = self.fc3(x)\n return x\n", "_____no_output_____" ] ], [ [ "### Compare Model Behavior\n\nBelow, we are using `helpers.compare_init_weights` to compare the training and validation loss for the two models we defined above, `model_0` and `model_1`. This function takes in a list of models (each with different initial weights), the name of the plot to produce, and the training and validation dataset loaders. For each given model, it will plot the training loss for the first 100 batches and print out the validation accuracy after 2 training epochs. *Note: if you've used a small batch_size, you may want to increase the number of epochs here to better compare how models behave after seeing a few hundred images.* \n\nWe plot the loss over the first 100 batches to better judge which model weights performed better at the start of training. **I recommend that you take a look at the code in `helpers.py` to look at the details behind how the models are trained, validated, and compared.**\n\nRun the cell below to see the difference between weights of all zeros against all ones.", "_____no_output_____" ] ], [ [ "# initialize two NN's with 0 and 1 constant weights\nmodel_0 = Net(constant_weight=0)\nmodel_1 = Net(constant_weight=1)", "_____no_output_____" ], [ "import helpers\n\n# put them in list form to compare\nmodel_list = [(model_0, 'All Zeros'),\n (model_1, 'All Ones')]\n\n\n# plot the loss over the first 100 batches\nhelpers.compare_init_weights(model_list, \n 'All Zeros vs All Ones', \n train_loader,\n valid_loader)", "_____no_output_____" ] ], [ [ "As you can see the accuracy is close to guessing for both zeros and ones, around 10%.\n\nThe neural network is having a hard time determining which weights need to be changed, since the neurons have the same output for each layer. To avoid neurons with the same output, let's use unique weights. 
We can also randomly select these weights to avoid being stuck in a local minimum for each run.\n\nA good solution for getting these random weights is to sample from a uniform distribution.", "_____no_output_____" ], [ "### Uniform Distribution\nA [uniform distribution](https://en.wikipedia.org/wiki/Uniform_distribution_(continuous%29) has the equal probability of picking any number from a set of numbers. We'll be picking from a continuous distribution, so the chance of picking the same number is low. We'll use NumPy's `np.random.uniform` function to pick random numbers from a uniform distribution.\n\n>#### [`np.random_uniform(low=0.0, high=1.0, size=None)`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.uniform.html)\n>Outputs random values from a uniform distribution.\n\n>The generated values follow a uniform distribution in the range [low, high). The lower bound minval is included in the range, while the upper bound maxval is excluded.\n\n>- **low:** The lower bound on the range of random values to generate. Defaults to 0.\n- **high:** The upper bound on the range of random values to generate. Defaults to 1.\n- **size:** An int or tuple of ints that specify the shape of the output array.\n\nWe can visualize the uniform distribution by using a histogram. Let's map the values from `np.random_uniform(-3, 3, [1000])` to a histogram using the `helper.hist_dist` function. This will be `1000` random float values from `-3` to `3`, excluding the value `3`.", "_____no_output_____" ] ], [ [ "helpers.hist_dist('Random Uniform (low=-3, high=3)', np.random.uniform(-3, 3, [1000]))", "_____no_output_____" ] ], [ [ "The histogram used 500 buckets for the 1000 values. Since the chance for any single bucket is the same, there should be around 2 values for each bucket. That's exactly what we see with the histogram. Some buckets have more and some have less, but they trend around 2.\n\nNow that you understand the uniform function, let's use PyTorch's `nn.init` to apply it to a model's initial weights.\n\n### Uniform Initialization, Baseline\n\n\nLet's see how well the neural network trains using a uniform weight initialization, where `low=0.0` and `high=1.0`. Below, I'll show you another way (besides in the Net class code) to initialize the weights of a network. To define weights outside of the model definition, you can:\n>1. Define a function that assigns weights by the type of network layer, *then* \n2. Apply those weights to an initialized model using `model.apply(fn)`, which applies a function to each model layer.\n\nThis time, we'll use `weight.data.uniform_` to initialize the weights of our model, directly.", "_____no_output_____" ] ], [ [ "# takes in a module and applies the specified weight initialization\ndef weights_init_uniform(m):\n classname = m.__class__.__name__\n # for every Linear layer in a model..\n if classname.find('Linear') != -1:\n # apply a uniform distribution to the weights and a bias=0\n m.weight.data.uniform_(0.0, 1.0)\n m.bias.data.fill_(0)", "_____no_output_____" ], [ "# create a new model with these weights\nmodel_uniform = Net()\nmodel_uniform.apply(weights_init_uniform)", "_____no_output_____" ], [ "# evaluate behavior \nhelpers.compare_init_weights([(model_uniform, 'Uniform Weights')], \n 'Uniform Baseline', \n train_loader,\n valid_loader)", "_____no_output_____" ] ], [ [ "---\nThe loss graph is showing the neural network is learning, which it didn't with all zeros or all ones. 
We're headed in the right direction!\n\n## General rule for setting weights\nThe general rule for setting the weights in a neural network is to set them to be close to zero without being too small. \n>Good practice is to start your weights in the range of $[-y, y]$ where $y=1/\\sqrt{n}$ \n($n$ is the number of inputs to a given neuron).\n\nLet's see if this holds true; let's create a baseline to compare with and center our uniform range over zero by shifting it over by 0.5. This will give us the range [-0.5, 0.5).", "_____no_output_____" ] ], [ [ "# takes in a module and applies the specified weight initialization\ndef weights_init_uniform_center(m):\n classname = m.__class__.__name__\n # for every Linear layer in a model..\n if classname.find('Linear') != -1:\n # apply a centered, uniform distribution to the weights\n m.weight.data.uniform_(-0.5, 0.5)\n m.bias.data.fill_(0)\n\n# create a new model with these weights\nmodel_centered = Net()\nmodel_centered.apply(weights_init_uniform_center)", "_____no_output_____" ] ], [ [ "Then let's create a distribution and model that uses the **general rule** for weight initialization; using the range $[-y, y]$, where $y=1/\\sqrt{n}$ .\n\nAnd finally, we'll compare the two models.", "_____no_output_____" ] ], [ [ "# takes in a module and applies the specified weight initialization\ndef weights_init_uniform_rule(m):\n classname = m.__class__.__name__\n # for every Linear layer in a model..\n if classname.find('Linear') != -1:\n # get the number of the inputs\n n = m.in_features\n y = 1.0/np.sqrt(n)\n m.weight.data.uniform_(-y, y)\n m.bias.data.fill_(0)\n\n# create a new model with these weights\nmodel_rule = Net()\nmodel_rule.apply(weights_init_uniform_rule)", "_____no_output_____" ], [ "# compare these two models\nmodel_list = [(model_centered, 'Centered Weights [-0.5, 0.5)'), \n (model_rule, 'General Rule [-y, y)')]\n\n# evaluate behavior \nhelpers.compare_init_weights(model_list, \n '[-0.5, 0.5) vs [-y, y)', \n train_loader,\n valid_loader)", "_____no_output_____" ] ], [ [ "This behavior is really promising! Not only is the loss decreasing, but it seems to do so very quickly for our uniform weights that follow the general rule; after only two epochs we get a fairly high validation accuracy and this should give you some intuition for why starting out with the right initial weights can really help your training process!\n\n---\n\nSince the uniform distribution has the same chance to pick *any value* in a range, what if we used a distribution that had a higher chance of picking numbers closer to 0? Let's look at the normal distribution.\n\n### Normal Distribution\nUnlike the uniform distribution, the [normal distribution](https://en.wikipedia.org/wiki/Normal_distribution) has a higher likelihood of picking number close to it's mean. 
To visualize it, let's plot values from NumPy's `np.random.normal` function to a histogram.\n\n>[np.random.normal(loc=0.0, scale=1.0, size=None)](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.normal.html)\n\n>Outputs random values from a normal distribution.\n\n>- **loc:** The mean of the normal distribution.\n- **scale:** The standard deviation of the normal distribution.\n- **shape:** The shape of the output array.", "_____no_output_____" ] ], [ [ "helpers.hist_dist('Random Normal (mean=0.0, stddev=1.0)', np.random.normal(size=[1000]))", "_____no_output_____" ] ], [ [ "Let's compare the normal distribution against the previous, rule-based, uniform distribution.\n\n<a id='normalex'></a>\n#### TODO: Define a weight initialization function that gets weights from a normal distribution \n> The normal distribution should have a mean of 0 and a standard deviation of $y=1/\\sqrt{n}$", "_____no_output_____" ] ], [ [ "## complete this function\ndef weights_init_normal(m):\n '''Takes in a module and initializes all linear layers with weight\n values taken from a normal distribution.'''\n \n classname = m.__class__.__name__\n # for every Linear layer in a model\n # m.weight.data shoud be taken from a normal distribution\n # m.bias.data should be 0\n if classname.find('Linear') != -1:\n y = 1/np.sqrt(m.in_features)\n m.weight.data.normal_(0.0, y)\n m.bias.data.fill_(0)", "_____no_output_____" ], [ "## -- no need to change code below this line -- ##\n\n# create a new model with the rule-based, uniform weights\nmodel_uniform_rule = Net()\nmodel_uniform_rule.apply(weights_init_uniform_rule)\n\n# create a new model with the rule-based, NORMAL weights\nmodel_normal_rule = Net()\nmodel_normal_rule.apply(weights_init_normal)", "_____no_output_____" ], [ "# compare the two models\nmodel_list = [(model_uniform_rule, 'Uniform Rule [-y, y)'), \n (model_normal_rule, 'Normal Distribution')]\n\n# evaluate behavior \nhelpers.compare_init_weights(model_list, \n 'Uniform vs Normal', \n train_loader,\n valid_loader)", "_____no_output_____" ] ], [ [ "The normal distribution gives us pretty similar behavior compared to the uniform distribution, in this case. This is likely because our network is so small; a larger neural network will pick more weight values from each of these distributions, magnifying the effect of both initialization styles. 
In general, a normal distribution will result in better performance for a model.\n", "_____no_output_____" ], [ "---\n\n### Automatic Initialization\n\nLet's quickly take a look at what happens *without any explicit weight initialization*.", "_____no_output_____" ] ], [ [ "## Instantiate a model with _no_ explicit weight initialization \n### Create fresh model instances so we don't accidentally keep training the earlier ones for more epochs.\n\nmodel_no = Net()\nmodel_centered = Net()\nmodel_uniform_rule = Net()\nmodel_normal_rule = Net()\n\nmodel_centered.apply(weights_init_uniform_center)\nmodel_uniform_rule.apply(weights_init_uniform_rule)\nmodel_normal_rule.apply(weights_init_normal)", "_____no_output_____" ], [ "## evaluate the behavior using helpers.compare_init_weights\n\nmodel_list = [(model_no, 'No explicit init.'), \n               (model_centered, 'Centered Weights [-0.5, 0.5)'), \n               (model_uniform_rule, 'Uniform Rule [-y, y)'), \n               (model_normal_rule, 'Normal Distribution (m=0, std=y)')]\n\n# evaluate behavior \nhelpers.compare_init_weights(model_list, \n                            'no vs centered vs uniform vs normal', \n                            train_loader,\n                            valid_loader)", "_____no_output_____" ] ], [ [ "As you complete this exercise, keep in mind these questions:\n* What initialization strategy has the lowest training loss after two epochs? What about highest validation accuracy?\n* After testing all these initial weight options, which would you decide to use in a final classification model?", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ] ]
4a88cc354b1c5fca8cc882170617a38a2f1e8038
1,395
ipynb
Jupyter Notebook
chapter1/homework/computer/201611680559.ipynb
hpishacker/python_tutorial
9005f0db9dae10bdc1d1c3e9e5cf2268036cd5bd
[ "MIT" ]
76
2017-09-26T01:07:26.000Z
2021-02-23T03:06:25.000Z
chapter1/homework/computer/201611680559.ipynb
hpishacker/python_tutorial
9005f0db9dae10bdc1d1c3e9e5cf2268036cd5bd
[ "MIT" ]
5
2017-12-10T08:40:11.000Z
2020-01-10T03:39:21.000Z
chapter1/homework/computer/201611680559.ipynb
hacker-14/python_tutorial
4a110b12aaab1313ded253f5207ff263d85e1b56
[ "MIT" ]
112
2017-09-26T01:07:30.000Z
2021-11-25T19:46:51.000Z
15.163043
34
0.460215
[ [ [ "print('Hello World')", "Hello World\n" ], [ "print(100)", "100\n" ], [ "print(123.7)", "123.7\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
4a88d0d92ba4baba903fa0eaa2fe2261defb4ec0
32,907
ipynb
Jupyter Notebook
notebooks/AWH-Geo.ipynb
AWH-GlobalPotential-X/AWH-Geo
1bc0fc690b5ca04b96aabc2b037ed19d7770d5dd
[ "Apache-2.0" ]
31
2021-05-03T10:00:45.000Z
2022-03-25T20:30:23.000Z
notebooks/AWH-Geo.ipynb
AWH-GlobalPotential-X/AWH-Geo
1bc0fc690b5ca04b96aabc2b037ed19d7770d5dd
[ "Apache-2.0" ]
null
null
null
notebooks/AWH-Geo.ipynb
AWH-GlobalPotential-X/AWH-Geo
1bc0fc690b5ca04b96aabc2b037ed19d7770d5dd
[ "Apache-2.0" ]
3
2021-11-04T08:52:35.000Z
2021-11-11T16:54:49.000Z
51.417188
245
0.547361
[ [ [ "<a href=\"https://colab.research.google.com/github/AWH-GlobalPotential-X/AWH-Geo/blob/master/notebooks/AWH-Geo.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "Welcome to AWH-Geo\n\nThis tool requires a [Google Drive](https://drive.google.com/drive/my-drive) and [Earth Engine](https://developers.google.com/earth-engine/) Account.\n\n[Start here](https://drive.google.com/drive/u/1/folders/1EzuqsbADrtdXChcpHqygTh7SuUw0U_QB) to create a new Output Table from the template:\n 1. Right-click on \"OutputTable_TEMPLATE\" file > Make a Copy to your own Drive folder\n 2. Rename the new file \"OuputTable_CODENAME\" with CODENAME (max 83 characters!) as a unique output table code. If including a date in the code, use the YYYYMMDD date format.\n 3. Enter in the output values in L/hr to each cell in each of the 10%-interval rH bins... interpolate in Sheets as necessary.\n\nThen, click \"Connect\" at the top right of this notebook.\n\nThen run each of the code blocks below, following instructions. For \"OutputTableCode\" inputs, use the CODENAME you created in Sheets.\n\n\n", "_____no_output_____" ] ], [ [ "#@title Basic setup and earthengine access.\n\nprint('Welcome to AWH-Geo')\n\n# import, authenticate, then initialize EarthEngine module ee\n# https://developers.google.com/earth-engine/python_install#package-import\nimport ee \nprint('Make sure the EE version is v0.1.215 or greater...')\nprint('Current EE version = v' + ee.__version__)\nprint('')\nee.Authenticate()\nee.Initialize()\n\nworldGeo = ee.Geometry.Polygon( # Created for some masking and geo calcs\n coords=[[-180,-90],[-180,0],[-180,90],[-30,90],[90,90],[180,90],\n [180,0],[180,-90],[30,-90],[-90,-90],[-180,-90]],\n geodesic=False,\n proj='EPSG:4326'\n)\n\n\n", "_____no_output_____" ], [ "#@title Test Earth Engine connection (see Mt Everest elev and a green map)\n# Print the elevation of Mount Everest.\ndem = ee.Image('USGS/SRTMGL1_003')\nxy = ee.Geometry.Point([86.9250, 27.9881])\nelev = dem.sample(xy, 30).first().get('elevation').getInfo()\nprint('Mount Everest elevation (m):', elev)\n\n# Access study assets\nfrom IPython.display import Image\njmpGeofabric_image = ee.Image('users/awhgeoglobal/jmpGeofabric_image') # access to study folder in EE\nImage(url=jmpGeofabric_image.getThumbUrl({'min': 0, 'max': 1, 'dimensions': 512,\n 'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5']}))\n", "_____no_output_____" ], [ "#@title Set up access to Google Sheets (follow instructions)\nfrom google.colab import auth\nauth.authenticate_user()\n\n# gspread is module to access Google Sheets through python\n# https://gspread.readthedocs.io/en/latest/index.html\nimport gspread\nfrom oauth2client.client import GoogleCredentials\ngc = gspread.authorize(GoogleCredentials.get_application_default()) # get credentials", "_____no_output_____" ], [ "#@title STEP 1: Export timeseries for given OutputTable: enter CODENAME (without \"OutputTable_\" prefix) below\n\nOutputTableCode = \"\" #@param {type:\"string\"}\nStartYear = 2010 #@param {type:\"integer\"}\nEndYear = 2020 #@param {type:\"integer\"}\nExportWeekly_1_or_0 = 0#@param {type:\"integer\"}\n\nee_username = ee.String(ee.Dictionary(ee.List(ee.data.getAssetRoots()).get(0)).get('id'))\nee_username = ee_username.getInfo()\n\nyears = list(range(StartYear,EndYear))\nprint('Time Period: ', years)\n\ndef timeseriesExport(outputTable_code):\n \n \"\"\"\n This script runs the output table value over 
the climate variables using the \n nearest lookup values, worldwide, every three hours during a user-determined \n period. It then resamples the temporal interval by averaging the hourly output \n over semi-week periods. It then converts the resulting image collection into a \n single image with several bands, each of which representing one (hourly or \n semi-week) interval. Finally, it exports this image over 3-month tranches and \n saves each as an EE Image Assets with appropriate names corresponding to the \n tranche's time period. \n \"\"\"\n \n # print the output table code from user input for confirmation\n print('outputTable code:', outputTable_code)\n\n # CLIMATE DATA PRE-PROCESSING\n # ERA5-Land climate dataset used for worldwide (derived) climate metrics\n # https://www.ecmwf.int/en/era5-land\n # era5-land HOURLY images in EE catalog\n era5Land = ee.ImageCollection('ECMWF/ERA5_LAND/HOURLY') \n # print('era5Land',era5Land.limit(50)) # print some data for inspection (debug)\n era5Land_proj = era5Land.first().projection() # get ERA5-Land projection & scale for export\n era5Land_scale = era5Land_proj.nominalScale()\n print('era5Land_scale (should be ~11132):',era5Land_scale.getInfo())\n era5Land_filtered = era5Land.filterDate( # ERA5-Land climate data\n str(StartYear-1) + '-12-31', str(EndYear) + '-01-01').select( # filter by date\n # filter by ERA5-Land image collection bands \n [\n 'dewpoint_temperature_2m', # K (https://apps.ecmwf.int/codes/grib/param-db?id=168)\n 'surface_solar_radiation_downwards', # J/m^2 (Accumulated value. Divide by 3600 to get W/m^2 over hourly interval https://apps.ecmwf.int/codes/grib/param-db?id=176)\n 'temperature_2m' # K\n ]) \n # print('era5Land_filtered',era5Land_filtered.limit(50))\n\n print('Wait... retrieving data from sheets takes a couple minutes')\n\n # COLLECT OUTPUT TABLE DATA FROM SHEETS INTO PYTHON ARRAYS\n # gspread function which will look in list of gSheets accessible to user\n # in Earth Engine, an array is a list of lists.\n # loop through worksheet tabs and build a list of lists of lists (3 dimensional)\n # to organize output values [L/hr] by the 3 physical variables in the following\n # order: by temperature (first nesting leve), ghi (second nesting level), then\n # rH (third nesting level).\n spreadsheet = gc.open('OutputTable_' + outputTable_code) \n outputArray = list() # create empty array\n rH_labels = ['rH0','rH10','rH20','rH30','rH40','rH50', # worksheet tab names\n 'rH60','rH70','rH80','rH90','rH100']\n for rH in rH_labels: # loop to create 3-D array (list of lists of lists)\n rH_interval_array = list() \n worksheet = spreadsheet.worksheet(rH)\n for x in list(range(7,26)): # relevant ranges in output table sheet\n rH_interval_array.append([float(y) for y in worksheet.row_values(x)])\n outputArray.append(rH_interval_array)\n # print('Output Table values:', outputArray) # for debugging\n # create an array image in EE (each pixel is a multi-dimensional matrix)\n outputImage_arrays = ee.Image(ee.Array(outputArray)) # values are in [L/hr]\n\n def processTimeseries(i): # core processing algorithm with lookups to outputTable\n\n \"\"\"\n This is the core AWH-Geo algorithm to convert image-based input climate data \n into an image of AWG device output [L/time] based on a given output lookup table.\n It runs across the ERA5-Land image collection timeseries and runs the lookup table \n on each pixel of each image representing each hourly climate timestep.\n \"\"\"\n\n i = ee.Image(i) # cast as image\n i = 
i.updateMask(i.select('temperature_2m').mask()) # ensure mask is applied to all bands\n timestamp_millis = ee.Date(i.get('system:time_start'))\n i_previous = ee.Image(era5Land_filtered.filterDate(\n timestamp_millis.advance(-1,'hour')).first())\n rh = ee.Image().expression( # relative humidity calculation [%]\n # from http://bmcnoldy.rsmas.miami.edu/Humidity.html\n '100 * (e**((17.625 * Td) / (To + Td)) / e**((17.625 * T) / (To + T)))', {\n 'e': 2.718281828459045, # Euler's constant\n 'T': i.select('temperature_2m').subtract(273.15), # temperature K converted to Celsius [°C]\n 'Td': i.select('dewpoint_temperature_2m').subtract(273.15), # dewpoint temperature K converted to Celsius [°C]\n 'To': 243.04 # reference temperature [K]\n }).rename('rh')\n ghi = ee.Image(ee.Algorithms.If( # because this parameter in ERA5 is cumulative in J/m^2...\n condition=ee.Number(timestamp_millis.get('hour')).eq(1), # ...from last obseration...\n trueCase=i.select('surface_solar_radiation_downwards'), # ...current value must be...\n falseCase=i.select('surface_solar_radiation_downwards').subtract( # ...subtracted from last...\n i_previous.select('surface_solar_radiation_downwards')) # ... then divided by seconds\n )).divide(3600).rename('ghi') # solar global horizontal irradiance [W/m^2]\n temp = i.select('temperature_2m'\n ).subtract(273.15).rename('temp') # temperature K converted to Celsius [°C]\n rhClamp = rh.clamp(0.1,100) # relative humdity clamped to output table range [%]\n ghiClamp = ghi.clamp(0.1,1300) # global horizontal irradiance clamped to range [W/m^2]\n tempClamp = temp.clamp(0.1,45) # temperature clamped to output table range [°C]\n # convert climate variables to lookup integers\n rhLookup = rhClamp.divide(10\n ).round().int().rename('rhLookup') # rH lookup interval\n tempLookup = tempClamp.divide(2.5\n ).round().int().rename('tempLookup') # temp lookup interval\n ghiLookup = ghiClamp.divide(100\n ).add(1).round().int().rename('ghiLookup') # ghi lookup interval\n # combine lookup values in a 3-band image\n xyzLookup = ee.Image(rhLookup).addBands(tempLookup).addBands(ghiLookup) \n # lookup values in 3D array for each pixel to return AWG output from table [L/hr]\n # set output to 0 if temperature is less than 0 deg C\n output = outputImage_arrays.arrayGet(xyzLookup).multiply(temp.gt(0))\n nightMask = ghi.gt(0.5) # mask pixels which have no incident sunlight\n\n return ee.Image(output.rename('O').addBands( # return image of output labeled \"O\" [L/hr]\n rh.updateMask(nightMask)).addBands(\n ghi.updateMask(nightMask)).addBands(\n temp.updateMask(nightMask)).setMulti({ # add physical variables as bands\n 'system:time_start': timestamp_millis # set time as property\n })).updateMask(1) # close partial masks at continental edges\n\n def outputHourly_export(timeStart, timeEnd, year):\n\n \"\"\"\n Run the lookup processing function (from above) across the entire climate \n timeseries at the finest temporal interval (1 hr for ERA5-Land). 
Convert the \n resulting image collection as a single image with a band for each timestep \n to allow for export as an Earth Engine asset (you cannot export/save image\n collections as assets).\n \"\"\"\n\n # filter ERA5-Land climate data by time\n era5Land_filtered_section = era5Land_filtered.filterDate(timeStart, timeEnd)\n\n # print('era5Land_filtered_section',era5Land_filtered_section.limit(1).getInfo())\n\n outputHourly = era5Land_filtered_section.map(processTimeseries) \n # outputHourly_toBands_pre = outputHourly.select(['ghi']).toBands()\n outputHourly_toBands_pre = outputHourly.select(['O']).toBands()\n outputHourly_toBands = outputHourly_toBands_pre.select(\n # input climate variables as multiband image with each band representing timestep\n outputHourly_toBands_pre.bandNames(), \n # rename bands by timestamp\n outputHourly_toBands_pre.bandNames().map(\n lambda name: ee.String('H').cat( # \"H\" for hourly\n ee.String(name).replace('T','')\n )\n )\n )\n\n # notify user of export\n print('Exporting outputHourly year:', year)\n task = ee.batch.Export.image.toAsset(\n image=ee.Image(outputHourly_toBands),\n region=worldGeo,\n description='O_hourly_' + outputTable_code + '_' + year,\n assetId=ee_username + '/O_hourly_' + outputTable_code + '_' + year,\n scale=era5Land_scale.getInfo(),\n crs='EPSG:4326',\n crsTransform=[0.1,0,-180.05,0,-0.1,90.05],\n maxPixels=1e10,\n maxWorkers=2000\n )\n task.start()\n \n # run timeseries export on entire hourly ERA5-Land for each yearly tranche\n\n for y in years:\n y = str(y)\n outputHourly_export(y + '-01-01', y + '-04-01', y + 'a')\n outputHourly_export(y + '-04-01', y + '-07-01', y + 'b')\n outputHourly_export(y + '-07-01', y + '-10-01', y + 'c')\n outputHourly_export(y + '-10-01', str(int(y)+1) + '-01-01', y + 'd')\n\n def outputWeekly_export(timeStart, timeEnd, year):\n\n era5Land_filtered_section = era5Land_filtered.filterDate(timeStart, timeEnd) # filter ERA5-Land climate data by time\n \n outputHourly = era5Land_filtered_section.map(processTimeseries)\n \n # resample values over time by 2-week aggregations\n # Define a time interval\n start = ee.Date(timeStart)\n end = ee.Date(timeEnd)\n # Number of years, in DAYS_PER_RANGE-day increments.\n DAYS_PER_RANGE = 14\n # DateRangeCollection, which contains the ranges we're interested in.\n drc = ee.call(\"BetterDateRangeCollection\",\n start,\n end, \n DAYS_PER_RANGE, \n \"day\",\n True)\n # This filter will join images with the date range that contains their start time.\n filter = ee.Filter.dateRangeContains(\"date_range\", None, \"system:time_start\")\n # Save all of the matching values under \"matches\".\n join = ee.Join.saveAll(\"matches\")\n # Do the join.\n joinedResult = join.apply(drc, outputHourly, filter)\n # print('joinedResult',joinedResult)\n \n # Map over the functions, and add the mean of the matches as \"meanForRange\".\n joinedResult = joinedResult.map(\n lambda e: e.set(\"meanForRange\", ee.ImageCollection.fromImages(e.get(\"matches\")).mean())\n )\n # print('joinedResult',joinedResult)\n\n # roll resampled images into new image collection\n outputWeekly = ee.ImageCollection(joinedResult.map(\n lambda f: ee.Image(f.get('meanForRange'))\n ))\n # print('outputWeekly',outputWeekly.getInfo())\n\n # convert image collection into image with many bands which can be saved as EE asset\n outputWeekly_toBands_pre = outputWeekly.toBands()\n outputWeekly_toBands = outputWeekly_toBands_pre.select(\n outputWeekly_toBands_pre.bandNames(), # input climate variables as multiband image with each 
band representing timestep\n outputWeekly_toBands_pre.bandNames().map(\n lambda name: ee.String('W').cat(name)\n )\n )\n \n task = ee.batch.Export.image.toAsset(\n image=ee.Image(outputWeekly_toBands),\n region=worldGeo,\n description='O_weekly_' + outputTable_code + '_' + year,\n assetId=ee_username + '/O_weekly_' + outputTable_code + '_' + year,\n scale=era5Land_scale.getInfo(),\n crs='EPSG:4326',\n crsTransform=[0.1,0,-180.05,0,-0.1,90.05],\n maxPixels=1e10,\n maxWorkers=2000\n )\n if ExportWeekly_1_or_0 == 1:\n task.start() # remove comment hash if weekly exports are desired\n print('Exporting outputWeekly year:', year)\n\n # run semi-weekly timeseries export on ERA5-Land by year\n for y in years:\n y = str(y)\n outputWeekly_export(y + '-01-01', y + '-04-01', y + 'a')\n outputWeekly_export(y + '-04-01', y + '-07-01', y + 'b')\n outputWeekly_export(y + '-07-01', y + '-10-01', y + 'c')\n outputWeekly_export(y + '-10-01', str(int(y)+1) + '-01-01', y + 'd')\n\ntimeseriesExport(OutputTableCode)\n\nprint('Complete! Read instructions below')", "_____no_output_____" ] ], [ [ "# *Before moving on to the next step... Wait until above tasks are complete in the task manager: https://code.earthengine.google.com/*\n(right pane, tab \"tasks\", click \"refresh\"; the should show up once the script prints \"Exporting...\")\n\n", "_____no_output_____" ] ], [ [ "#@title Re-instate earthengine access (follow instructions)\n\nprint('Welcome Back to AWH-Geo')\nprint('')\n\n# import, authenticate, then initialize EarthEngine module ee\n# https://developers.google.com/earth-engine/python_install#package-import\nimport ee \nprint('Make sure the EE version is v0.1.215 or greater...')\nprint('Current EE version = v' + ee.__version__)\nprint('')\nee.Authenticate()\nee.Initialize()\n\nworldGeo = ee.Geometry.Polygon( # Created for some masking and geo calcs\n coords=[[-180,-90],[-180,0],[-180,90],[-30,90],[90,90],[180,90],\n [180,0],[180,-90],[30,-90],[-90,-90],[-180,-90]],\n geodesic=False,\n proj='EPSG:4326'\n)\n\n", "_____no_output_____" ], [ "#@title STEP 2: Export statistical results for given OutputTable: enter CODENAME (without \"OutputTable_\" prefix) below\n\nee_username = ee.String(ee.Dictionary(ee.List(ee.data.getAssetRoots()).get(0)).get('id'))\nee_username = ee_username.getInfo()\n\nOutputTableCode = \"\" #@param {type:\"string\"}\nStartYear = 2010 #@param {type:\"integer\"}\nEndYear = 2020 #@param {type:\"integer\"}\nSuffixName_optional = \"\" #@param {type:\"string\"}\nExportMADP90s_1_or_0 = 0#@param {type:\"integer\"}\n\nyears = list(range(StartYear,EndYear))\nprint('Time Period: ', years)\n\ndef generateStats(outputTable_code):\n \n \"\"\"\n This function generates single images which contain time-aggregated output \n statistics including overall mean and shortfall metrics such as MADP90s. 
\n \"\"\"\n \n # CLIMATE DATA PRE-PROCESSING\n # ERA5-Land climate dataset used for worldwide (derived) climate metrics\n # https://www.ecmwf.int/en/era5-land\n # era5-land HOURLY images in EE catalog\n era5Land = ee.ImageCollection('ECMWF/ERA5_LAND/HOURLY') \n # print('era5Land',era5Land.limit(50)) # print some data for inspection (debug)\n era5Land_proj = era5Land.first().projection() # get ERA5-Land projection & scale for export\n era5Land_scale = era5Land_proj.nominalScale()\n \n # setup the image collection timeseries to chart\n # unravel and concatenate all the image stages into a single image collection\n \n def unravel(i): # function to \"unravel\" image bands into an image collection\n def setDate(bandName): # loop over band names in image and return a LIST of ... \n dateCode = ee.Date.parse( # ... images, one for each band\n format='yyyyMMddHH',\n date=ee.String(ee.String(bandName).split('_').get(0)).slice(1) # get date periods from band name\n )\n return i.select([bandName]).rename('O').set('system:time_start',dateCode)\n i = ee.Image(i)\n return i.bandNames().map(setDate) # returns a LIST of images\n\n yearCode_list = ee.List(sum([[ # each image units in [L/hr]\n unravel(ee.Image(ee_username + '/O_hourly_' + outputTable_code + '_' + str(y)+'a')),\n unravel(ee.Image(ee_username + '/O_hourly_' + outputTable_code + '_' + str(y)+'b')),\n unravel(ee.Image(ee_username + '/O_hourly_' + outputTable_code + '_' + str(y)+'c')),\n unravel(ee.Image(ee_username + '/O_hourly_' + outputTable_code + '_' + str(y)+'d')) \n ] for y in years], [])).flatten()\n\n outputTimeseries = ee.ImageCollection(yearCode_list)\n\n Od_overallMean = outputTimeseries.mean().multiply(24).rename('Od') # hourly output x 24 = mean daily output [L/day]\n \n # export overall daily mean\n task = ee.batch.Export.image.toAsset(\n image=Od_overallMean,\n region=worldGeo,\n description='Od_overallMean_' + outputTable_code + SuffixName_optional,\n assetId=ee_username + '/Od_overallMean_' + outputTable_code + SuffixName_optional,\n scale=era5Land_scale.getInfo(),\n crs='EPSG:4326',\n crsTransform=[0.1,0,-180.05,0,-0.1,90.05],\n maxPixels=1e10,\n maxWorkers=2000\n )\n task.start()\n print('Exporting Od_overallMean_' + outputTable_code + SuffixName_optional)\n\n ## run the moving average function over the timeseries using DAILY averages\n\n # start and end dates over which to calculate aggregate statistics\n startDate = ee.Date(str(StartYear) + '-01-01')\n endDate = ee.Date(str(EndYear) + '-01-01')\n\n # resample values over time by daily aggregations\n # Number of years, in DAYS_PER_RANGE-day increments.\n DAYS_PER_RANGE = 1\n # DateRangeCollection, which contains the ranges we're interested in.\n drc = ee.call('BetterDateRangeCollection',\n startDate,\n endDate, \n DAYS_PER_RANGE, \n 'day',\n True)\n # This filter will join images with the date range that contains their start time.\n filter = ee.Filter.dateRangeContains('date_range', None, 'system:time_start')\n # Save all of the matching values under \"matches\".\n join = ee.Join.saveAll('matches')\n # Do the join.\n joinedResult = join.apply(drc, outputTimeseries, filter)\n # print('joinedResult',joinedResult)\n \n # Map over the functions, and add the mean of the matches as \"meanForRange\".\n joinedResult = joinedResult.map(\n lambda e: e.set('meanForRange', ee.ImageCollection.fromImages(e.get('matches')).mean())\n )\n # print('joinedResult',joinedResult)\n\n # roll resampled images into new image collection\n outputDaily = ee.ImageCollection(joinedResult.map(\n lambda f: 
ee.Image(f.get('meanForRange')).set(\n 'system:time_start',\n ee.Date.parse('YYYYMMdd',f.get('system:index')).millis()\n )\n ))\n # print('outputDaily',outputDaily.getInfo())\n\n outputDaily_p90 = ee.ImageCollection( # collate rolling periods into new image collection of rolling average values\n outputDaily.toList(outputDaily.size())).reduce(\n ee.Reducer.percentile( # reduce image collection by percentile\n [10] # 100% - 90% = 10%\n )).multiply(24).rename('Od')\n\n task = ee.batch.Export.image.toAsset(\n image=outputDaily_p90,\n region=worldGeo,\n description='Od_DailyP90_' + outputTable_code + SuffixName_optional,\n assetId=ee_username + '/Od_DailyP90_' + outputTable_code + SuffixName_optional,\n scale=era5Land_scale.getInfo(),\n crs='EPSG:4326',\n crsTransform=[0.1,0,-180.05,0,-0.1,90.05],\n maxPixels=1e10,\n maxWorkers=2000\n )\n if ExportMADP90s_1_or_0 == 1:\n task.start()\n print('Exporting Od_DailyP90_' + outputTable_code + SuffixName_optional)\n\n def rollingStats(period): # run rolling stat function for each rolling period scenerio\n\n # collect neighboring time periods into a join\n timeFilter = ee.Filter.maxDifference(\n difference=float(period)/2 * 24 * 60 * 60 * 1000, # mid-centered window\n leftField='system:time_start', \n rightField='system:time_start'\n )\n rollingPeriod_join = ee.ImageCollection(ee.Join.saveAll('images').apply(\n primary=outputDaily, # apply the join on itself to collect images\n secondary=outputDaily, \n condition=timeFilter\n ))\n def rollingPeriod_mean(i): # get the mean across each collected periods\n i = ee.Image(i) # collected images stored in \"images\" property of each timestep image\n return ee.ImageCollection.fromImages(i.get('images')).mean()\n outputDaily_rollingMean = rollingPeriod_join.filterDate(\n startDate.advance(float(period)/2,'days'),\n endDate.advance(float(period)/-2,'days')\n ).map(rollingPeriod_mean,True)\n\n Od_p90_rolling = ee.ImageCollection( # collate rolling periods into new image collection of rolling average values\n outputDaily_rollingMean.toList(outputDaily_rollingMean.size())).reduce(\n ee.Reducer.percentile( # reduce image collection by percentile\n [10] # 100% - 90% = 10%\n )).multiply(24).rename('Od') # hourly output x 24 = mean daily output [L/day]\n\n task = ee.batch.Export.image.toAsset(\n image=Od_p90_rolling,\n region=worldGeo,\n description='Od_MADP90_'+ period + 'day_' + outputTable_code + SuffixName_optional,\n assetId=ee_username + '/Od_MADP90_'+ period + 'day_' + outputTable_code + SuffixName_optional,\n scale=era5Land_scale.getInfo(),\n crs='EPSG:4326',\n crsTransform=[0.1,0,-180.05,0,-0.1,90.05],\n maxPixels=1e10,\n maxWorkers=2000\n )\n if ExportMADP90s_1_or_0 == 1:\n task.start()\n print('Exporting Od_MADP90_' + period + 'day_' + outputTable_code + SuffixName_optional)\n \n rollingPeriods = [\n '007',\n '030',\n # '060',\n '090',\n # '180',\n ] # define custom rolling periods over which to calc MADP90 [days]\n \n for period in rollingPeriods: # execute the calculations & export\n # print(period)\n rollingStats(period)\n\ngenerateStats(OutputTableCode) # run stats function\n\nprint('Complete! Go to next step.')", "_____no_output_____" ] ], [ [ "Wait until these statistics are completed processing. Track them in the task manager: https://code.earthengine.google.com/\n\nWhen they are finished.... [Go here to see maps](https://code.earthengine.google.com/fac0cc72b2ac2e431424cbf45b2852cf)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
4a88e0fd0aeb1de7017c12874e9de2f70cd5c35d
33,703
ipynb
Jupyter Notebook
classification/k_nearest_neighbor/cancer_pred_knn.ipynb
maxxie114/Basic_ML_Research
2507ea4bb0055a56a787fb4b7eb0349c85c6a890
[ "MIT" ]
4
2021-06-12T07:38:35.000Z
2022-01-18T21:16:31.000Z
classification/k_nearest_neighbor/cancer_pred_knn.ipynb
maxxie114/Basic_ML_Research
2507ea4bb0055a56a787fb4b7eb0349c85c6a890
[ "MIT" ]
null
null
null
classification/k_nearest_neighbor/cancer_pred_knn.ipynb
maxxie114/Basic_ML_Research
2507ea4bb0055a56a787fb4b7eb0349c85c6a890
[ "MIT" ]
null
null
null
115.027304
22,626
0.810106
[ [ [ "# This is a project that uses KNN to predict breast Cancer\nfrom sklearn.datasets import load_breast_cancer\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.neighbors import KNeighborsClassifier\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "# Load Dataset\nbreast_cancer_data = load_breast_cancer()\n\n# Inspect Dataset\nprint(breast_cancer_data.data[0])\nprint(breast_cancer_data.feature_names)\nprint(breast_cancer_data.target)\nprint(breast_cancer_data.target_names)", "[1.799e+01 1.038e+01 1.228e+02 1.001e+03 1.184e-01 2.776e-01 3.001e-01\n 1.471e-01 2.419e-01 7.871e-02 1.095e+00 9.053e-01 8.589e+00 1.534e+02\n 6.399e-03 4.904e-02 5.373e-02 1.587e-02 3.003e-02 6.193e-03 2.538e+01\n 1.733e+01 1.846e+02 2.019e+03 1.622e-01 6.656e-01 7.119e-01 2.654e-01\n 4.601e-01 1.189e-01]\n['mean radius' 'mean texture' 'mean perimeter' 'mean area'\n 'mean smoothness' 'mean compactness' 'mean concavity'\n 'mean concave points' 'mean symmetry' 'mean fractal dimension'\n 'radius error' 'texture error' 'perimeter error' 'area error'\n 'smoothness error' 'compactness error' 'concavity error'\n 'concave points error' 'symmetry error' 'fractal dimension error'\n 'worst radius' 'worst texture' 'worst perimeter' 'worst area'\n 'worst smoothness' 'worst compactness' 'worst concavity'\n 'worst concave points' 'worst symmetry' 'worst fractal dimension']\n[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n 1 0 0 0 0 0 0 0 0 1 0 1 1 1 1 1 0 0 1 0 0 1 1 1 1 0 1 0 0 1 1 1 1 0 1 0 0\n 1 0 1 0 0 1 1 1 0 0 1 0 0 0 1 1 1 0 1 1 0 0 1 1 1 0 0 1 1 1 1 0 1 1 0 1 1\n 1 1 1 1 1 1 0 0 0 1 0 0 1 1 1 0 0 1 0 1 0 0 1 0 0 1 1 0 1 1 0 1 1 1 1 0 1\n 1 1 1 1 1 1 1 1 0 1 1 1 1 0 0 1 0 1 1 0 0 1 1 0 0 1 1 1 1 0 1 1 0 0 0 1 0\n 1 0 1 1 1 0 1 1 0 0 1 0 0 0 0 1 0 0 0 1 0 1 0 1 1 0 1 0 0 0 0 1 1 0 0 1 1\n 1 0 1 1 1 1 1 0 0 1 1 0 1 1 0 0 1 0 1 1 1 1 0 1 1 1 1 1 0 1 0 0 0 0 0 0 0\n 0 0 0 0 0 0 0 1 1 1 1 1 1 0 1 0 1 1 0 1 1 0 1 0 0 1 1 1 1 1 1 1 1 1 1 1 1\n 1 0 1 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 0 1 0 1 1 1 1 0 0 0 1 1\n 1 1 0 1 0 1 0 1 1 1 0 1 1 1 1 1 1 1 0 0 0 1 1 1 1 1 1 1 1 1 1 1 0 0 1 0 0\n 0 1 0 0 1 1 1 1 1 0 1 1 1 1 1 0 1 1 1 0 1 1 0 0 1 1 1 1 1 1 0 1 1 1 1 1 1\n 1 0 1 1 1 1 1 0 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 1 0 0 1 0 1 1 1 1 1 0 1 1\n 0 1 0 1 1 0 1 0 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 0 1\n 1 1 1 1 1 1 0 1 0 1 1 0 1 1 1 1 1 0 0 1 0 1 0 1 1 1 1 1 0 1 1 0 1 0 1 0 0\n 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 0 1 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1\n 1 1 1 1 1 1 1 0 0 0 0 0 0 1]\n['malignant' 'benign']\n" ], [ "# Spliting Dataset\ntraining_data, validation_data, training_labels, validation_labels = train_test_split(breast_cancer_data.data, \n breast_cancer_data.target, test_size=0.2, random_state=100)\nprint(f\"{len(training_data)},{len(training_labels)},{len(validation_data)},{len(validation_labels)}\")", "455,455,114,114\n" ], [ "# KNN training and prediction\n# Using a loop to find the best k\naccuracies = []\nfor k in range(1,101):\n classifier = KNeighborsClassifier(n_neighbors = k)\n classifier.fit(training_data, training_labels)\n # Score the model\n score = classifier.score(validation_data, validation_labels)\n print(f\"k={k}:{score}\")\n accuracies.append(score)", 
"k=1:0.9298245614035088\nk=2:0.9385964912280702\nk=3:0.9473684210526315\nk=4:0.9473684210526315\nk=5:0.9473684210526315\nk=6:0.9473684210526315\nk=7:0.9473684210526315\nk=8:0.9473684210526315\nk=9:0.956140350877193\nk=10:0.956140350877193\nk=11:0.956140350877193\nk=12:0.956140350877193\nk=13:0.956140350877193\nk=14:0.956140350877193\nk=15:0.956140350877193\nk=16:0.956140350877193\nk=17:0.956140350877193\nk=18:0.956140350877193\nk=19:0.956140350877193\nk=20:0.956140350877193\nk=21:0.956140350877193\nk=22:0.956140350877193\nk=23:0.9649122807017544\nk=24:0.9649122807017544\nk=25:0.956140350877193\nk=26:0.956140350877193\nk=27:0.956140350877193\nk=28:0.956140350877193\nk=29:0.9473684210526315\nk=30:0.9473684210526315\nk=31:0.9473684210526315\nk=32:0.9473684210526315\nk=33:0.9473684210526315\nk=34:0.9473684210526315\nk=35:0.9473684210526315\nk=36:0.9473684210526315\nk=37:0.956140350877193\nk=38:0.956140350877193\nk=39:0.956140350877193\nk=40:0.956140350877193\nk=41:0.956140350877193\nk=42:0.956140350877193\nk=43:0.956140350877193\nk=44:0.9473684210526315\nk=45:0.956140350877193\nk=46:0.9473684210526315\nk=47:0.956140350877193\nk=48:0.956140350877193\nk=49:0.956140350877193\nk=50:0.956140350877193\nk=51:0.9473684210526315\nk=52:0.9473684210526315\nk=53:0.9473684210526315\nk=54:0.956140350877193\nk=55:0.956140350877193\nk=56:0.9649122807017544\nk=57:0.9473684210526315\nk=58:0.9473684210526315\nk=59:0.9385964912280702\nk=60:0.9298245614035088\nk=61:0.9298245614035088\nk=62:0.9385964912280702\nk=63:0.9473684210526315\nk=64:0.9385964912280702\nk=65:0.9385964912280702\nk=66:0.9385964912280702\nk=67:0.9385964912280702\nk=68:0.9385964912280702\nk=69:0.9385964912280702\nk=70:0.9385964912280702\nk=71:0.9385964912280702\nk=72:0.9385964912280702\nk=73:0.9385964912280702\nk=74:0.9385964912280702\nk=75:0.9385964912280702\nk=76:0.9385964912280702\nk=77:0.9298245614035088\nk=78:0.9298245614035088\nk=79:0.9298245614035088\nk=80:0.9298245614035088\nk=81:0.9210526315789473\nk=82:0.9298245614035088\nk=83:0.9210526315789473\nk=84:0.9385964912280702\nk=85:0.9298245614035088\nk=86:0.9385964912280702\nk=87:0.9385964912280702\nk=88:0.9385964912280702\nk=89:0.9298245614035088\nk=90:0.9298245614035088\nk=91:0.9210526315789473\nk=92:0.9385964912280702\nk=93:0.9210526315789473\nk=94:0.9298245614035088\nk=95:0.9298245614035088\nk=96:0.9385964912280702\nk=97:0.9298245614035088\nk=98:0.9385964912280702\nk=99:0.9298245614035088\nk=100:0.9298245614035088\n" ], [ "# Map the k-accuracy graph\nk_list = list(range(1,101))\nplt.plot(k_list, accuracies)\nplt.xlabel(\"k\")\nplt.ylabel(\"Validation Accuracy\")\nplt.title(\"Breast Cancer Classifier Accuracy\")\nplt.show()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
4a88e4293dc6154f3e7d8c0e9174d83799d6cc55
928,388
ipynb
Jupyter Notebook
Scraping Geofenced Tweets.ipynb
gstonedata/scraped-anti-vax-tweets
634534ff384d0ef53489a89c8faa7da5ec2df331
[ "MIT" ]
null
null
null
Scraping Geofenced Tweets.ipynb
gstonedata/scraped-anti-vax-tweets
634534ff384d0ef53489a89c8faa7da5ec2df331
[ "MIT" ]
null
null
null
Scraping Geofenced Tweets.ipynb
gstonedata/scraped-anti-vax-tweets
634534ff384d0ef53489a89c8faa7da5ec2df331
[ "MIT" ]
null
null
null
1,771.732824
911,734
0.788888
[ [ [ "# Scraping Geofenced Anti Vax Tweets across Australia", "_____no_output_____" ], [ "Firstly, a new environment is created and activated.\n\nIn my initial attempt, the scraping didn't work. After some research on Stack Overflow I determined that a particular version of TWINT was currently needed to get around Twitter's suppression of the library:\n\npip3 install --upgrade git+https://github.com/himanshudabas/twint.git@origin/twint-fixes#egg=twint\n\nAdditionally, h3, shapely, asyncio, nest_asyncio were pip installed.", "_____no_output_____" ] ], [ [ "# import the necessary libraries and modules\nimport folium\nimport h3\nfrom shapely.geometry import Polygon, shape\nimport pandas as pd\nimport shapely\nimport numpy as np\nimport asyncio\nimport aiohttp\nimport nest_asyncio\nnest_asyncio.apply()\nimport twint\nimport os\nimport time", "_____no_output_____" ], [ "# coordinates of the outline of Australia, obtained from google maps, in format [latitude, longitude]\n# as a list for GeoJSON\naus_coordinates = [\n[-10.001449, 142.461773],\n[-13.955529, 144.074703],\n[-14.211275, 145.327144],\n[-18.802451, 146.733394],\n[-20.169543, 149.150386],\n[-21.912601, 149.831538],\n[-22.289226, 150.864253],\n[-24.986185, 152.819819],\n[-24.467278, 153.347163],\n[-28.661794, 153.874507],\n[-31.043641, 153.259273],\n[-32.454273, 152.611080],\n[-37.770824, 150.018307],\n[-39.317407, 146.458737],\n[-40.780646, 146.766354],\n[-40.614057, 147.777096],\n[-39.707293, 147.579342],\n[-39.622721, 148.304440],\n[-41.755025, 148.546139],\n[-43.628223, 147.996823],\n[-43.771193, 145.711667],\n[-39.567694, 143.442990],\n[-39.465991, 144.283444],\n[-40.986166, 146.494657],\n[-39.102407, 145.989286],\n[-38.589017, 144.451200],\n[-39.136501, 143.704130],\n[-38.279207, 140.056669],\n[-36.038046, 138.738310],\n[-36.463314, 136.387236],\n[-32.701853, 133.311064],\n[-31.884608, 130.938017],\n[-32.590845, 126.147978],\n[-33.153702, 124.379179],\n[-34.114131, 123.730986],\n[-34.205040, 120.061552],\n[-35.216505, 118.007109],\n[-35.000805, 116.106474],\n[-34.359361, 114.908964],\n[-33.429202, 114.914457],\n[-33.520842, 115.331937],\n[-31.996484, 115.419827],\n[-25.530107, 112.695218],\n[-21.460796, 113.705960],\n[-20.105003, 116.606351],\n[-19.111493, 120.638333],\n[-17.106727, 121.824856],\n[-14.229836, 124.923000],\n[-13.504545, 126.900539],\n[-14.527816, 129.053859],\n[-10.863039, 130.097560],\n[-10.690358, 132.646388],\n[-11.402017, 133.503322],\n[-11.875475, 135.250148],\n[-10.895406, 136.755275],\n[-15.831953, 137.436427],\n[-16.190997, 140.424708],\n[-12.371950, 140.997474],\n[-10.001449, 142.461773]\n]", "_____no_output_____" ], [ "# for GeoJSON object, the coordinates must be in format [longitude, latitude]\n\nfor coordinate in aus_coordinates:\n coordinate.reverse()", "_____no_output_____" ], [ "# Create a GeoJSON object representing a bounding box for the region of interest, e.g. 
using the reversed aus coordinates\n\ngeojson = {\n \"type\": \"FeatureCollection\",\n \"features\": [\n {\n \"type\": \"Feature\",\n \"properties\": {},\n \"geometry\": {\n \"type\": \"Polygon\",\n \"coordinates\":[\n [\n[142.461773, -10.001449],\n[144.074703, -13.955529],\n[145.327144, -14.211275],\n[146.733394, -18.802451],\n[149.150386, -20.169543],\n[149.831538, -21.912601],\n[150.864253, -22.289226],\n[152.819819, -24.986185],\n[153.347163, -24.467278],\n[153.874507, -28.661794],\n[153.259273, -31.043641],\n[152.61108, -32.454273],\n[150.018307, -37.770824],\n[146.458737, -39.317407],\n[146.766354, -40.780646],\n[147.777096, -40.614057],\n[147.579342, -39.707293],\n[148.30444, -39.622721],\n[148.546139, -41.755025],\n[147.996823, -43.628223],\n[145.711667, -43.771193],\n[143.44299, -39.567694],\n[144.283444, -39.465991],\n[146.494657, -40.986166],\n[145.989286, -39.102407],\n[144.4512, -38.589017],\n[143.70413, -39.136501],\n[140.056669, -38.279207],\n[138.73831, -36.038046],\n[136.387236, -36.463314],\n[133.311064, -32.701853],\n[130.938017, -31.884608],\n[126.147978, -32.590845],\n[124.379179, -33.153702],\n[123.730986, -34.114131],\n[120.061552, -34.20504],\n[118.007109, -35.216505],\n[116.106474, -35.000805],\n[114.908964, -34.359361],\n[114.914457, -33.429202],\n[115.331937, -33.520842],\n[115.419827, -31.996484],\n[112.695218, -25.530107],\n[113.70596, -21.460796],\n[116.606351, -20.105003],\n[120.638333, -19.111493],\n[121.824856, -17.106727],\n[124.923, -14.229836],\n[126.900539, -13.504545],\n[129.053859, -14.527816],\n[130.09756, -10.863039],\n[132.646388, -10.690358],\n[133.503322, -11.402017],\n[135.250148, -11.875475],\n[136.755275, -10.895406],\n[137.436427, -15.831953],\n[140.424708, -16.190997],\n[140.997474, -12.37195],\n[142.461773, -10.001449\n ]\n ]\n ]\n }\n }\n ]\n}", "_____no_output_____" ], [ "# Add a 0.1 degree buffer around this area\n# 0.1 degree is chosen as there is already a buffer around the country\n# but this feature is useful to have for future projects\n\ns = shape(geojson['features'][0]['geometry'])\ns = s.buffer(0.1)\nfeature = {'type': 'Feature', 'properties': {}, 'geometry': shapely.geometry.mapping(s)}\nfeature['geometry']['coordinates'] = [[[v[0], v[1]] for v in feature['geometry']['coordinates'][0]]]\nfeature = feature['geometry']\nfeature['coordinates'][0] = [[v[1], v[0]] for v in feature['coordinates'][0]]", "_____no_output_____" ], [ "# map H3 hexagons (code taken from H3 example: https://github.com/uber/h3-py-notebooks/blob/master/notebooks/usage.ipynb)\n\ndef visualize_hexagons(hexagons, color=\"red\", folium_map=None):\n \"\"\"\n hexagons is a list of hexcluster. Each hexcluster is a list of hexagons. \n eg. 
[[hex1, hex2], [hex3, hex4]]\n \"\"\"\n polylines = []\n lat = []\n lng = []\n for hex in hexagons:\n polygons = h3.h3_set_to_multi_polygon([hex], geo_json=False)\n # flatten polygons into loops.\n outlines = [loop for polygon in polygons for loop in polygon]\n polyline = [outline + [outline[0]] for outline in outlines][0]\n lat.extend(map(lambda v:v[0],polyline))\n lng.extend(map(lambda v:v[1],polyline))\n polylines.append(polyline)\n \n if folium_map is None:\n m = folium.Map(location=[sum(lat)/len(lat), sum(lng)/len(lng)], zoom_start=13, tiles='cartodbpositron')\n else:\n m = folium_map\n for polyline in polylines:\n my_PolyLine=folium.PolyLine(locations=polyline,weight=8,color=color)\n m.add_child(my_PolyLine)\n return m\n \n\ndef visualize_polygon(polyline, color):\n polyline.append(polyline[0])\n lat = [p[0] for p in polyline]\n lng = [p[1] for p in polyline]\n m = folium.Map(location=[sum(lat)/len(lat), sum(lng)/len(lng)], zoom_start=13, tiles='cartodbpositron')\n my_PolyLine=folium.PolyLine(locations=polyline,weight=8,color=color)\n m.add_child(my_PolyLine)\n return m\n\n# find all hexagons with a center that falls within our buffered area of interest from above\npolyline = feature['coordinates'][0]\npolyline.append(polyline[0])\nlat = [p[0] for p in polyline]\nlng = [p[1] for p in polyline]\nm = folium.Map(location=[sum(lat)/len(lat), sum(lng)/len(lng)], zoom_start=6, tiles='cartodbpositron')\nmy_PolyLine=folium.PolyLine(locations=polyline,weight=8,color=\"green\")\nm.add_child(my_PolyLine)\n\n# make the list of hexagon IDs in our AOI\nhexagons = list(h3.polyfill(feature, 3))\n\n# map the hexagons\npolylines = []\nlat = []\nlng = []\nfor hex in hexagons:\n polygons = h3.h3_set_to_multi_polygon([hex], geo_json=False)\n # flatten polygons into loops.\n outlines = [loop for polygon in polygons for loop in polygon]\n polyline = [outline + [outline[0]] for outline in outlines][0]\n lat.extend(map(lambda v:v[0],polyline))\n lng.extend(map(lambda v:v[1],polyline))\n polylines.append(polyline)\nfor polyline in polylines:\n my_PolyLine=folium.PolyLine(locations=polyline,weight=8,color='red')\n m.add_child(my_PolyLine)\ndisplay(m)", "_____no_output_____" ], [ "# determine how many hexagons there are\nlen(hexagons)", "_____no_output_____" ] ], [ [ "725 hexagons tile this area at this zoom level. However, notice that some of them are fully over water and Australia is very sparsely populated, so not every area will have data.", "_____no_output_____" ] ], [ [ "np.sqrt((h3.cell_area(hexagons[0], unit='km^2')/0.827)/np.pi)", "_____no_output_____" ] ], [ [ "The ratio of the area of a regular hexagon to a circumscribed circle is 0.827. Scaling the area of each H3 cell by this factor can be used to find the radius of a circle that approximately circumscribes that cell. (This is approximate because H3 cells are not regular, as they must tile a sphere. In fact, some of them are pentagons. 
However, this approximation works for these purposes.\n\nWith this method, search queries can be automatically generated for each cell:", "_____no_output_____" ] ], [ [ "for hex in hexagons:\n center = h3.h3_to_geo(hex)\n r = np.sqrt((h3.cell_area(hex, unit='km^2')/0.827)/np.pi) \n query = \"(antivax) AND geocode:\" + str(center[0]) + \",\" + str(center[1]) + \",\" + str(r) + \"km\"\n \nprint(\"Example query:\", query)", "Example query: (antivax) AND geocode:-31.501034065628865,117.4768252329552,60.695810052732014km\n" ] ], [ [ "Next, these locations and radii can be used to download tweets using TWINT.", "_____no_output_____" ] ], [ [ "# create a folder to save each hexagons csv tweet file in\n\nos.mkdir('data_by_hex')", "_____no_output_____" ], [ "# Download tweets using TWINT for each hexagon in the list\nhexagons_left = hexagons\n\n# while loop to iterate through each hexagon\nwhile len(hexagons_left) > 0:\n # remove one hexagon at a time\n hex = hexagons_left.pop(0)\n # check if a csv file already exists for each hexagon\n if not os.path.exists('data_by_hex/' + hex + '.csv'):\n # find center of the hexagon\n center = h3.h3_to_geo(hex)\n # approximate radius of hexagon/circle\n r = np.sqrt(h3.cell_area(hex, unit='km^2')*1.20919957616/np.pi) \n # query terms and geocode \n query = \"(antivax OR anti vax) AND geocode:\" + str(center[0]) + \",\" + str(center[1]) + \",\" + str(r) + \"km\"\n print(query)\n\n # configure twint\n c = twint.Config()\n\n c.Search = query\n # save as csv files\n c.Store_csv = True\n # save in directory created previously, name after hexagon\n c.Output = \"data_by_hex/\" + hex + \".csv\"\n # do not show 'live' results\n c.Hide_output = True\n # start date of search, this will search til present\n c.Since = \"2021-01-01\"\n\n try:\n twint.run.Search(c)\n except:\n os.remove('data_by_hex/' + hex + '.csv')\n time.sleep(10)\n print('error, removing this one')\n hexagons_left.append(hex)\n \n print(len(hexagons_left), \"remaining\")", "_____no_output_____" ] ], [ [ "Quite surprisingly, or perhaps not due to the population density of Australia, only 33 csv files were created. \n\nThese 33 files show that from 1/1/21 - 20/6/21 only 33 out of a potential 725 hexagons had tweets sent from them containing antivax mentions.\n\nAnalysis will be done on these files and an choropleth map with time slider will be created to highlight anti vax hotspots.", "_____no_output_____" ] ] ]
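The 0.827 hexagon-to-circumscribed-circle ratio described above can be checked on a single cell in isolation. A minimal sketch, assuming the same h3 v3 API and numpy used in the notebook; the Sydney-area coordinates and the resolution value 3 are illustrative assumptions, not values taken from the scraping run:

```python
import h3
import numpy as np

# Pick one resolution-3 cell (Sydney-ish coordinates, purely illustrative).
cell = h3.geo_to_h3(-33.87, 151.21, 3)

# Cell area in km^2, scaled up by the ~0.827 hexagon/circle area ratio,
# then solve pi * r^2 = area for the circumscribing-circle radius.
area_km2 = h3.cell_area(cell, unit='km^2')
radius_km = np.sqrt((area_km2 / 0.827) / np.pi)

# Build the same style of geocoded query used in the scraping loop.
lat, lng = h3.h3_to_geo(cell)
query = f"(antivax OR anti vax) AND geocode:{lat},{lng},{radius_km}km"
print(query)
```

For resolution-3 cells this gives radii in the tens of kilometres, consistent with the roughly 60 km radius in the example query printed by the notebook above.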
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
4a88ebeaf6365c9f305c87c1827295329cadf803
90,033
ipynb
Jupyter Notebook
Getting Started with TensorFlow 2/Week 2/MNIST.ipynb
pp-omega/TensorFlow2-for-Deep-Learning
4fb83fa2b8073f1d62710941a8339990cf303896
[ "MIT" ]
null
null
null
Getting Started with TensorFlow 2/Week 2/MNIST.ipynb
pp-omega/TensorFlow2-for-Deep-Learning
4fb83fa2b8073f1d62710941a8339990cf303896
[ "MIT" ]
null
null
null
Getting Started with TensorFlow 2/Week 2/MNIST.ipynb
pp-omega/TensorFlow2-for-Deep-Learning
4fb83fa2b8073f1d62710941a8339990cf303896
[ "MIT" ]
2
2021-08-14T10:43:39.000Z
2022-02-28T22:09:20.000Z
129.172166
37,952
0.876134
[ [ [ "# Programming Assignment", "_____no_output_____" ], [ "## CNN classifier for the MNIST dataset", "_____no_output_____" ], [ "### Instructions\n\nIn this notebook, you will write code to build, compile and fit a convolutional neural network (CNN) model to the MNIST dataset of images of handwritten digits.\n\nSome code cells are provided you in the notebook. You should avoid editing provided code, and make sure to execute the cells in order to avoid unexpected errors. Some cells begin with the line: \n\n`#### GRADED CELL ####`\n\nDon't move or edit this first line - this is what the automatic grader looks for to recognise graded cells. These cells require you to write your own code to complete them, and are automatically graded when you submit the notebook. Don't edit the function name or signature provided in these cells, otherwise the automatic grader might not function properly. Inside these graded cells, you can use any functions or classes that are imported below, but make sure you don't use any variables that are outside the scope of the function.\n\n### How to submit\n\nComplete all the tasks you are asked for in the worksheet. When you have finished and are happy with your code, press the **Submit Assignment** button at the top of this notebook.\n\n### Let's get started!\n\nWe'll start running some imports, and loading the dataset. Do not edit the existing imports in the following cell. If you would like to make further Tensorflow imports, you should add them here.", "_____no_output_____" ] ], [ [ "#### PACKAGE IMPORTS ####\n\n# Run this cell first to import all required packages. Do not make any imports elsewhere in the notebook\n\nimport tensorflow as tf\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# If you would like to make further imports from Tensorflow, add them here\n\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Flatten, Softmax, Conv2D, MaxPooling2D", "_____no_output_____" ] ], [ [ "#### The MNIST dataset\n\nIn this assignment, you will use the [MNIST dataset](http://yann.lecun.com/exdb/mnist/). It consists of a training set of 60,000 handwritten digits with corresponding labels, and a test set of 10,000 images. The images have been normalised and centred. The dataset is frequently used in machine learning research, and has become a standard benchmark for image classification models. \n\n- Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. \"Gradient-based learning applied to document recognition.\" Proceedings of the IEEE, 86(11):2278-2324, November 1998.\n\nYour goal is to construct a neural network that classifies images of handwritten digits into one of 10 classes.", "_____no_output_____" ], [ "#### Load and preprocess the data", "_____no_output_____" ] ], [ [ "# Run this cell to load the MNIST data\n\nmnist_data = tf.keras.datasets.mnist\n(train_images, train_labels), (test_images, test_labels) = mnist_data.load_data()", "_____no_output_____" ] ], [ [ "First, preprocess the data by scaling the training and test images so their values lie in the range from 0 to 1.", "_____no_output_____" ] ], [ [ "#### GRADED CELL ####\n\n# Complete the following function. 
\n# Make sure to not change the function name or arguments.\n\ndef scale_mnist_data(train_images, test_images):\n \"\"\"\n This function takes in the training and test images as loaded in the cell above, and scales them\n so that they have minimum and maximum values equal to 0 and 1 respectively.\n Your function should return a tuple (train_images, test_images) of scaled training and test images.\n \"\"\"\n train_images = train_images / 255.\n test_images = test_images / 255.\n return (train_images, test_images)\n \n ", "_____no_output_____" ], [ "# Run your function on the input data\n\nscaled_train_images, scaled_test_images = scale_mnist_data(train_images, test_images)", "_____no_output_____" ], [ "# Add a dummy channel dimension\n\nscaled_train_images = scaled_train_images[..., np.newaxis]\nscaled_test_images = scaled_test_images[..., np.newaxis]", "_____no_output_____" ] ], [ [ "#### Build the convolutional neural network model", "_____no_output_____" ], [ "We are now ready to construct a model to fit to the data. Using the Sequential API, build your CNN model according to the following spec:\n\n* The model should use the `input_shape` in the function argument to set the input size in the first layer.\n* A 2D convolutional layer with a 3x3 kernel and 8 filters. Use 'SAME' zero padding and ReLU activation functions. Make sure to provide the `input_shape` keyword argument in this first layer.\n* A max pooling layer, with a 2x2 window, and default strides.\n* A flatten layer, which unrolls the input into a one-dimensional tensor.\n* Two dense hidden layers, each with 64 units and ReLU activation functions.\n* A dense output layer with 10 units and the softmax activation function.\n\nIn particular, your neural network should have six layers.", "_____no_output_____" ] ], [ [ "#### GRADED CELL ####\n\n# Complete the following function. \n# Make sure to not change the function name or arguments.\n\ndef get_model(input_shape):\n \"\"\"\n This function should build a Sequential model according to the above specification. Ensure the \n weights are initialised by providing the input_shape argument in the first layer, given by the\n function argument.\n Your function should return the model.\n \"\"\"\n model = Sequential([\n Conv2D(8, (3, 3), padding = 'same', activation='relu', input_shape=input_shape),\n MaxPooling2D((2, 2)),\n Flatten(),\n Dense(64, activation='relu'),\n Dense(64, activation='relu'),\n Dense(10, activation='softmax')\n ])\n \n return model\n ", "_____no_output_____" ], [ "# Run your function to get the model\n\nmodel = get_model(scaled_train_images[0].shape)", "_____no_output_____" ] ], [ [ "#### Compile the model\n\nYou should now compile the model using the `compile` method. To do so, you need to specify an optimizer, a loss function and a metric to judge the performance of your model.", "_____no_output_____" ] ], [ [ "#### GRADED CELL ####\n\n# Complete the following function. \n# Make sure to not change the function name or arguments.\n\ndef compile_model(model):\n \"\"\"\n This function takes in the model returned from your get_model function, and compiles it with an optimiser,\n loss function and metric.\n Compile the model using the Adam optimiser (with default settings), the cross-entropy loss function and\n accuracy as the only metric. 
\n Your function doesn't need to return anything; the model will be compiled in-place.\n \"\"\"\n model.compile(optimizer='adam',\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\n ", "_____no_output_____" ], [ "# Run your function to compile the model\n\ncompile_model(model)", "_____no_output_____" ] ], [ [ "#### Fit the model to the training data\n\nNow you should train the model on the MNIST dataset, using the model's `fit` method. Set the training to run for 5 epochs, and return the training history to be used for plotting the learning curves.", "_____no_output_____" ] ], [ [ "#### GRADED CELL ####\n\n# Complete the following function. \n# Make sure to not change the function name or arguments.\n\ndef train_model(model, scaled_train_images, train_labels):\n \"\"\"\n This function should train the model for 5 epochs on the scaled_train_images and train_labels. \n Your function should return the training history, as returned by model.fit.\n \"\"\"\n history = model.fit(scaled_train_images, train_labels, epochs=5)\n print(history)\n return history\n ", "_____no_output_____" ], [ "# Run your function to train the model\n\nhistory = train_model(model, scaled_train_images, train_labels)", "Epoch 1/5\n1875/1875 [==============================] - 21s 11ms/step - loss: 0.2082 - accuracy: 0.9376\nEpoch 2/5\n1875/1875 [==============================] - 21s 11ms/step - loss: 0.0759 - accuracy: 0.9768\nEpoch 3/5\n1875/1875 [==============================] - 21s 11ms/step - loss: 0.0542 - accuracy: 0.9833\nEpoch 4/5\n1875/1875 [==============================] - 21s 11ms/step - loss: 0.0419 - accuracy: 0.9865\nEpoch 5/5\n1875/1875 [==============================] - 20s 11ms/step - loss: 0.0327 - accuracy: 0.9894\n<tensorflow.python.keras.callbacks.History object at 0x7fc6d66a7be0>\n" ] ], [ [ "#### Plot the learning curves\n\nWe will now plot two graphs:\n* Epoch vs accuracy\n* Epoch vs loss\n\nWe will load the model history into a pandas `DataFrame` and use the `plot` method to output the required graphs.", "_____no_output_____" ] ], [ [ "# Run this cell to load the model history into a pandas DataFrame\n\nframe = pd.DataFrame(history.history)", "_____no_output_____" ], [ "# Run this cell to make the Accuracy vs Epochs plot\n\nacc_plot = frame.plot(y=\"accuracy\", title=\"Accuracy vs Epochs\", legend=False)\nacc_plot.set(xlabel=\"Epochs\", ylabel=\"Accuracy\")", "_____no_output_____" ], [ "# Run this cell to make the Loss vs Epochs plot\n\nacc_plot = frame.plot(y=\"loss\", title = \"Loss vs Epochs\",legend=False)\nacc_plot.set(xlabel=\"Epochs\", ylabel=\"Loss\")", "_____no_output_____" ] ], [ [ "#### Evaluate the model\n\nFinally, you should evaluate the performance of your model on the test set, by calling the model's `evaluate` method.", "_____no_output_____" ] ], [ [ "#### GRADED CELL ####\n\n# Complete the following function. \n# Make sure to not change the function name or arguments.\n\ndef evaluate_model(model, scaled_test_images, test_labels):\n \"\"\"\n This function should evaluate the model on the scaled_test_images and test_labels. 
\n Your function should return a tuple (test_loss, test_accuracy).\n \"\"\"\n test_loss, test_accuracy = model.evaluate(scaled_test_images, test_labels, verbose=2)\n return (test_loss, test_accuracy)\n ", "_____no_output_____" ], [ "# Run your function to evaluate the model\n\ntest_loss, test_accuracy = evaluate_model(model, scaled_test_images, test_labels)\nprint(f\"Test loss: {test_loss}\")\nprint(f\"Test accuracy: {test_accuracy}\")", "313/313 - 2s - loss: 0.0601 - accuracy: 0.9807\nTest loss: 0.06005042791366577\nTest accuracy: 0.9807000160217285\n" ] ], [ [ "#### Model predictions\n\nLet's see some model predictions! We will randomly select four images from the test data, and display the image and label for each. \n\nFor each test image, model's prediction (the label with maximum probability) is shown, together with a plot showing the model's categorical distribution.", "_____no_output_____" ] ], [ [ "# Run this cell to get model predictions on randomly selected test images\n\nnum_test_images = scaled_test_images.shape[0]\n\nrandom_inx = np.random.choice(num_test_images, 4)\nrandom_test_images = scaled_test_images[random_inx, ...]\nrandom_test_labels = test_labels[random_inx, ...]\n\npredictions = model.predict(random_test_images)\n\nfig, axes = plt.subplots(4, 2, figsize=(16, 12))\nfig.subplots_adjust(hspace=0.4, wspace=-0.2)\n\nfor i, (prediction, image, label) in enumerate(zip(predictions, random_test_images, random_test_labels)):\n axes[i, 0].imshow(np.squeeze(image))\n axes[i, 0].get_xaxis().set_visible(False)\n axes[i, 0].get_yaxis().set_visible(False)\n axes[i, 0].text(10., -1.5, f'Digit {label}')\n axes[i, 1].bar(np.arange(len(prediction)), prediction)\n axes[i, 1].set_xticks(np.arange(len(prediction)))\n axes[i, 1].set_title(f\"Categorical distribution. Model prediction: {np.argmax(prediction)}\")\n \nplt.show()", "_____no_output_____" ] ], [ [ "Congratulations for completing this programming assignment! In the next week of the course we will take a look at including validation and regularisation in our model training, and introduce Keras callbacks.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
4a891d55cb135aaa99292734f0dee7f1e5a12fe8
598,845
ipynb
Jupyter Notebook
Rethinking_2/Chp_07.ipynb
gsahinpi/resources
cd7fc85021197e4cb31c791b8ca41e8d239fa321
[ "MIT" ]
1
2021-12-15T12:42:24.000Z
2021-12-15T12:42:24.000Z
Rethinking_2/Chp_07.ipynb
gsahinpi/resources
cd7fc85021197e4cb31c791b8ca41e8d239fa321
[ "MIT" ]
null
null
null
Rethinking_2/Chp_07.ipynb
gsahinpi/resources
cd7fc85021197e4cb31c791b8ca41e8d239fa321
[ "MIT" ]
null
null
null
128.673184
118,976
0.832311
[ [ [ "# Chapter 7", "_____no_output_____" ] ], [ [ "import arviz as az\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport pymc3 as pm\nimport statsmodels.api as sm\nimport statsmodels.formula.api as smf\n\nfrom patsy import dmatrix\nfrom scipy import stats\nfrom scipy.special import logsumexp", "_____no_output_____" ], [ "%config Inline.figure_format = 'retina'\naz.style.use(\"arviz-darkgrid\")\naz.rcParams[\"stats.hdi_prob\"] = 0.89 # set credible interval for entire notebook\nnp.random.seed(0)", "_____no_output_____" ] ], [ [ "#### Code 7.1", "_____no_output_____" ] ], [ [ "brains = pd.DataFrame.from_dict(\n {\n \"species\": [\n \"afarensis\",\n \"africanus\",\n \"habilis\",\n \"boisei\",\n \"rudolfensis\",\n \"ergaster\",\n \"sapiens\",\n ],\n \"brain\": [438, 452, 612, 521, 752, 871, 1350], # volume in cc\n \"mass\": [37.0, 35.5, 34.5, 41.5, 55.5, 61.0, 53.5], # mass in kg\n }\n)\n\nbrains", "_____no_output_____" ], [ "# Figure 7.2\n\nplt.scatter(brains.mass, brains.brain)\n\n# point labels\nfor i, r in brains.iterrows():\n if r.species == \"afarensis\":\n plt.text(r.mass + 0.5, r.brain, r.species, ha=\"left\", va=\"center\")\n elif r.species == \"sapiens\":\n plt.text(r.mass, r.brain - 25, r.species, ha=\"center\", va=\"top\")\n else:\n plt.text(r.mass, r.brain + 25, r.species, ha=\"center\")\n\nplt.xlabel(\"body mass (kg)\")\nplt.ylabel(\"brain volume (cc)\");", "_____no_output_____" ] ], [ [ "#### Code 7.2", "_____no_output_____" ] ], [ [ "brains.loc[:, \"mass_std\"] = (brains.loc[:, \"mass\"] - brains.loc[:, \"mass\"].mean()) / brains.loc[\n :, \"mass\"\n].std()\nbrains.loc[:, \"brain_std\"] = brains.loc[:, \"brain\"] / brains.loc[:, \"brain\"].max()", "_____no_output_____" ] ], [ [ "#### Code 7.3\n\nThis is modified from [Chapter 6 of 1st Edition](https://nbviewer.jupyter.org/github/pymc-devs/resources/blob/master/Rethinking/Chp_06.ipynb) (6.2 - 6.6).", "_____no_output_____" ] ], [ [ "m_7_1 = smf.ols(\"brain_std ~ mass_std\", data=brains).fit()\nm_7_1.summary()", "/Users/ksachdeva/Desktop/Dev/myoss/resources/env/lib/python3.6/site-packages/statsmodels/stats/stattools.py:71: ValueWarning: omni_normtest is not valid with less than 8 observations; 7 samples were given.\n \"samples were given.\" % int(n), ValueWarning)\n" ] ], [ [ "#### Code 7.4", "_____no_output_____" ] ], [ [ "p, cov = np.polyfit(brains.loc[:, \"mass_std\"], brains.loc[:, \"brain_std\"], 1, cov=True)\n\npost = stats.multivariate_normal(p, cov).rvs(1000)\n\naz.summary({k: v for k, v in zip(\"ba\", post.T)}, kind=\"stats\")", "_____no_output_____" ] ], [ [ "#### Code 7.5", "_____no_output_____" ] ], [ [ "1 - m_7_1.resid.var() / brains.brain_std.var()", "_____no_output_____" ] ], [ [ "#### Code 7.6", "_____no_output_____" ] ], [ [ "def R2_is_bad(model):\n return 1 - model.resid.var() / brains.brain_std.var()\n\n\nR2_is_bad(m_7_1)", "_____no_output_____" ] ], [ [ "#### Code 7.7", "_____no_output_____" ] ], [ [ "m_7_2 = smf.ols(\"brain_std ~ mass_std + I(mass_std**2)\", data=brains).fit()\nm_7_2.summary()", "/Users/ksachdeva/Desktop/Dev/myoss/resources/env/lib/python3.6/site-packages/statsmodels/stats/stattools.py:71: ValueWarning: omni_normtest is not valid with less than 8 observations; 7 samples were given.\n \"samples were given.\" % int(n), ValueWarning)\n" ] ], [ [ "#### Code 7.8", "_____no_output_____" ] ], [ [ "m_7_3 = smf.ols(\"brain_std ~ mass_std + I(mass_std**2) + I(mass_std**3)\", data=brains).fit()\nm_7_4 = smf.ols(\n \"brain_std ~ mass_std + I(mass_std**2) + I(mass_std**3) + 
I(mass_std**4)\",\n data=brains,\n).fit()\nm_7_5 = smf.ols(\n \"brain_std ~ mass_std + I(mass_std**2) + I(mass_std**3) + I(mass_std**4) + I(mass_std**5)\",\n data=brains,\n).fit()", "_____no_output_____" ] ], [ [ "#### Code 7.9", "_____no_output_____" ] ], [ [ "m_7_6 = smf.ols(\n \"brain_std ~ mass_std + I(mass_std**2) + I(mass_std**3) + I(mass_std**4) + I(mass_std**5) + I(mass_std**6)\",\n data=brains,\n).fit()", "_____no_output_____" ] ], [ [ "#### Code 7.10\n\nThe chapter gives code to produce the first panel of Figure 7.3. Here, produce the entire figure by looping over models 7.1-7.6.\n\nTo sample the posterior predictive on a new independent variable we make use of theano SharedVariable objects, as outlined [here](https://docs.pymc.io/notebooks/data_container.html)", "_____no_output_____" ] ], [ [ "models = [m_7_1, m_7_2, m_7_3, m_7_4, m_7_5, m_7_6]\nnames = [\"m_7_1\", \"m_7_2\", \"m_7_3\", \"m_7_4\", \"m_7_5\", \"m_7_6\"]\n\nmass_plot = np.linspace(33, 62, 100)\nmass_new = (mass_plot - brains.mass.mean()) / brains.mass.std()\n\nfig, axs = plt.subplots(3, 2, figsize=[6, 8.5], sharex=True, sharey=\"row\")\n\nfor model, name, ax in zip(models, names, axs.flat):\n prediction = model.get_prediction({\"mass_std\": mass_new})\n pred = prediction.summary_frame(alpha=0.11) * brains.brain.max()\n\n ax.plot(mass_plot, pred[\"mean\"])\n ax.fill_between(mass_plot, pred[\"mean_ci_lower\"], pred[\"mean_ci_upper\"], alpha=0.3)\n ax.scatter(brains.mass, brains.brain, color=\"C0\", s=15)\n\n ax.set_title(f\"{name}: R^2: {model.rsquared:.2f}\", loc=\"left\", fontsize=11)\n\n if ax.is_first_col():\n ax.set_ylabel(\"brain volume (cc)\")\n\n if ax.is_last_row():\n ax.set_xlabel(\"body mass (kg)\")\n\n if ax.is_last_row():\n ax.set_ylim(-500, 2100)\n ax.axhline(0, ls=\"dashed\", c=\"k\", lw=1)\n ax.set_yticks([0, 450, 1300])\n else:\n ax.set_ylim(300, 1600)\n ax.set_yticks([450, 900, 1300])\n\nfig.tight_layout()", "/Users/ksachdeva/Desktop/Dev/myoss/resources/env/lib/python3.6/site-packages/ipykernel_launcher.py:33: UserWarning: This figure was using constrained_layout==True, but that is incompatible with subplots_adjust and or tight_layout: setting constrained_layout==False. 
\n" ] ], [ [ "#### Code 7.11 - this is R specific notation for dropping rows", "_____no_output_____" ] ], [ [ "brains_new = brains.drop(brains.index[-1])", "_____no_output_____" ], [ "# Figure 7.4\n\n# this code taken from PyMC3 port of Rethinking/Chp_06.ipynb\n\nf, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(8, 3))\nax1.scatter(brains.mass, brains.brain, alpha=0.8)\nax2.scatter(brains.mass, brains.brain, alpha=0.8)\nfor i in range(len(brains)):\n d_new = brains.drop(brains.index[-i]) # drop each data point in turn\n\n # first order model\n m0 = smf.ols(\"brain ~ mass\", d_new).fit()\n # need to calculate regression line\n # need to add intercept term explicitly\n x = sm.add_constant(d_new.mass) # add constant to new data frame with mass\n x_pred = pd.DataFrame(\n {\"mass\": np.linspace(x.mass.min() - 10, x.mass.max() + 10, 50)}\n ) # create linspace dataframe\n x_pred2 = sm.add_constant(x_pred) # add constant to newly created linspace dataframe\n y_pred = m0.predict(x_pred2) # calculate predicted values\n ax1.plot(x_pred, y_pred, \"gray\", alpha=0.5)\n ax1.set_ylabel(\"body mass (kg)\", fontsize=12)\n ax1.set_xlabel(\"brain volume (cc)\", fontsize=12)\n ax1.set_title(\"Underfit model\")\n\n # fifth order model\n m1 = smf.ols(\n \"brain ~ mass + I(mass**2) + I(mass**3) + I(mass**4) + I(mass**5)\", data=d_new\n ).fit()\n x = sm.add_constant(d_new.mass) # add constant to new data frame with mass\n x_pred = pd.DataFrame(\n {\"mass\": np.linspace(x.mass.min() - 10, x.mass.max() + 10, 200)}\n ) # create linspace dataframe\n x_pred2 = sm.add_constant(x_pred) # add constant to newly created linspace dataframe\n y_pred = m1.predict(x_pred2) # calculate predicted values from fitted model\n ax2.plot(x_pred, y_pred, \"gray\", alpha=0.5)\n ax2.set_xlim(32, 62)\n ax2.set_ylim(-250, 2200)\n ax2.set_ylabel(\"body mass (kg)\", fontsize=12)\n ax2.set_xlabel(\"brain volume (cc)\", fontsize=12)\n ax2.set_title(\"Overfit model\")", "_____no_output_____" ] ], [ [ "#### Code 7.12", "_____no_output_____" ] ], [ [ "p = np.array([0.3, 0.7])\n-np.sum(p * np.log(p))", "_____no_output_____" ], [ "# Figure 7.5\np = np.array([0.3, 0.7])\nq = np.arange(0.01, 1, 0.01)\nDKL = np.sum(p * np.log(p / np.array([q, 1 - q]).T), 1)\n\nplt.plot(q, DKL)\nplt.xlabel(\"q[1]\")\nplt.ylabel(\"Divergence of q from p\")\nplt.axvline(0.3, ls=\"dashed\", color=\"k\")\nplt.text(0.315, 1.22, \"q = p\");", "_____no_output_____" ] ], [ [ "#### Code 7.13 & 7.14", "_____no_output_____" ] ], [ [ "n_samples = 3000\n\nintercept, slope = stats.multivariate_normal(m_7_1.params, m_7_1.cov_params()).rvs(n_samples).T\n\npred = intercept + slope * brains.mass_std.values.reshape(-1, 1)\n\nn, ns = pred.shape", "_____no_output_____" ], [ "# PyMC3 does not have a way to calculate LPPD directly, so we use the approach from 7.14\n\nsigmas = (np.sum((pred - brains.brain_std.values.reshape(-1, 1)) ** 2, 0) / 7) ** 0.5\nll = np.zeros((n, ns))\nfor s in range(ns):\n logprob = stats.norm.logpdf(brains.brain_std, pred[:, s], sigmas[s])\n ll[:, s] = logprob\n\nlppd = np.zeros(n)\nfor i in range(n):\n lppd[i] = logsumexp(ll[i]) - np.log(ns)\n\nlppd", "_____no_output_____" ] ], [ [ "#### Code 7.15", "_____no_output_____" ] ], [ [ "# make an lppd function that can be applied to all models (from code above)\ndef lppd(model, n_samples=1e4):\n n_samples = int(n_samples)\n\n pars = stats.multivariate_normal(model.params, model.cov_params()).rvs(n_samples).T\n dmat = dmatrix(\n model.model.data.design_info, brains, return_type=\"dataframe\"\n ).values # get model 
design matrix\n pred = dmat.dot(pars)\n\n n, ns = pred.shape\n\n # this approach for calculating lppd isfrom 7.14\n sigmas = (np.sum((pred - brains.brain_std.values.reshape(-1, 1)) ** 2, 0) / 7) ** 0.5\n ll = np.zeros((n, ns))\n for s in range(ns):\n logprob = stats.norm.logpdf(brains.brain_std, pred[:, s], sigmas[s])\n ll[:, s] = logprob\n\n lppd = np.zeros(n)\n for i in range(n):\n lppd[i] = logsumexp(ll[i]) - np.log(ns)\n\n return lppd", "_____no_output_____" ], [ "# model 7_6 does not work with OLS because its covariance matrix is not finite.\nlppds = np.array(list(map(lppd, models[:-1], [1000] * len(models[:-1]))))\n\nlppds.sum(1)", "_____no_output_____" ] ], [ [ "#### Code 7.16\n\nThis relies on the `sim.train.test` function in the `rethinking` package. [This](https://github.com/rmcelreath/rethinking/blob/master/R/sim_train_test.R) is the original function.\n\nThe python port of this function below is from [Rethinking/Chp_06](https://nbviewer.jupyter.org/github/pymc-devs/resources/blob/master/Rethinking/Chp_06.ipynb) Code 6.12.", "_____no_output_____" ] ], [ [ "def sim_train_test(N=20, k=3, rho=[0.15, -0.4], b_sigma=100):\n\n n_dim = 1 + len(rho)\n if n_dim < k:\n n_dim = k\n Rho = np.diag(np.ones(n_dim))\n Rho[0, 1:3:1] = rho\n i_lower = np.tril_indices(n_dim, -1)\n Rho[i_lower] = Rho.T[i_lower]\n\n x_train = stats.multivariate_normal.rvs(cov=Rho, size=N)\n x_test = stats.multivariate_normal.rvs(cov=Rho, size=N)\n\n mm_train = np.ones((N, 1))\n\n np.concatenate([mm_train, x_train[:, 1:k]], axis=1)\n\n # Using pymc3\n\n with pm.Model() as m_sim:\n vec_V = pm.MvNormal(\n \"vec_V\",\n mu=0,\n cov=b_sigma * np.eye(n_dim),\n shape=(1, n_dim),\n testval=np.random.randn(1, n_dim) * 0.01,\n )\n mu = pm.Deterministic(\"mu\", 0 + pm.math.dot(x_train, vec_V.T))\n y = pm.Normal(\"y\", mu=mu, sd=1, observed=x_train[:, 0])\n\n with m_sim:\n trace_m_sim = pm.sample(return_inferencedata=True)\n\n vec = az.summary(trace_m_sim)[\"mean\"][:n_dim]\n vec = np.array([i for i in vec]).reshape(n_dim, -1)\n\n dev_train = -2 * sum(stats.norm.logpdf(x_train, loc=np.matmul(x_train, vec), scale=1))\n\n mm_test = np.ones((N, 1))\n\n mm_test = np.concatenate([mm_test, x_test[:, 1 : k + 1]], axis=1)\n\n dev_test = -2 * sum(stats.norm.logpdf(x_test[:, 0], loc=np.matmul(mm_test, vec), scale=1))\n\n return np.mean(dev_train), np.mean(dev_test)", "_____no_output_____" ], [ "n = 20\ntries = 10\nparam = 6\nr = np.zeros(shape=(param - 1, 4))\n\ntrain = []\ntest = []\n\nfor j in range(2, param + 1):\n print(j)\n for i in range(1, tries + 1):\n tr, te = sim_train_test(N=n, k=param)\n train.append(tr), test.append(te)\n r[j - 2, :] = (\n np.mean(train),\n np.std(train, ddof=1),\n np.mean(test),\n np.std(test, ddof=1),\n )", "2\n" ] ], [ [ "#### Code 7.17\n\nDoes not apply because multi-threading is automatic in PyMC3.", "_____no_output_____" ], [ "#### Code 7.18", "_____no_output_____" ] ], [ [ "num_param = np.arange(2, param + 1)\n\nplt.figure(figsize=(10, 6))\nplt.scatter(num_param, r[:, 0], color=\"C0\")\nplt.xticks(num_param)\n\nfor j in range(param - 1):\n plt.vlines(\n num_param[j],\n r[j, 0] - r[j, 1],\n r[j, 0] + r[j, 1],\n color=\"mediumblue\",\n zorder=-1,\n alpha=0.80,\n )\n\nplt.scatter(num_param + 0.1, r[:, 2], facecolors=\"none\", edgecolors=\"k\")\n\nfor j in range(param - 1):\n plt.vlines(\n num_param[j] + 0.1,\n r[j, 2] - r[j, 3],\n r[j, 2] + r[j, 3],\n color=\"k\",\n zorder=-2,\n alpha=0.70,\n )\n\ndist = 0.20\nplt.text(num_param[1] - dist, r[1, 0] - dist, \"in\", color=\"C0\", 
fontsize=13)\nplt.text(num_param[1] + dist, r[1, 2] - dist, \"out\", color=\"k\", fontsize=13)\nplt.text(num_param[1] + dist, r[1, 2] + r[1, 3] - dist, \"+1 SD\", color=\"k\", fontsize=10)\nplt.text(num_param[1] + dist, r[1, 2] - r[1, 3] - dist, \"+1 SD\", color=\"k\", fontsize=10)\nplt.xlabel(\"Number of parameters\", fontsize=14)\nplt.ylabel(\"Deviance\", fontsize=14)\nplt.title(f\"N = {n}\", fontsize=14)\nplt.show()", "_____no_output_____" ] ], [ [ "These uncertainties are a *lot* larger than in the book... MCMC vs OLS again?", "_____no_output_____" ], [ "#### Code 7.19\n\n7.19 to 7.25 transcribed directly from 6.15-6.20 in [Chapter 6 of 1st Edition](https://nbviewer.jupyter.org/github/pymc-devs/resources/blob/master/Rethinking/Chp_06.ipynb).", "_____no_output_____" ] ], [ [ "data = pd.read_csv(\"Data/cars.csv\", sep=\",\", index_col=0)", "_____no_output_____" ], [ "with pm.Model() as m:\n a = pm.Normal(\"a\", mu=0, sd=100)\n b = pm.Normal(\"b\", mu=0, sd=10)\n sigma = pm.Uniform(\"sigma\", 0, 30)\n mu = pm.Deterministic(\"mu\", a + b * data[\"speed\"])\n dist = pm.Normal(\"dist\", mu=mu, sd=sigma, observed=data[\"dist\"])\n m = pm.sample(5000, tune=10000)", "Auto-assigning NUTS sampler...\nInitializing NUTS using jitter+adapt_diag...\nMultiprocess sampling (4 chains in 4 jobs)\nNUTS: [sigma, b, a]\n" ] ], [ [ "#### Code 7.20", "_____no_output_____" ] ], [ [ "n_samples = 1000\nn_cases = data.shape[0]\nlogprob = np.zeros((n_cases, n_samples))\n\nfor s in range(0, n_samples):\n mu = m[\"a\"][s] + m[\"b\"][s] * data[\"speed\"]\n p_ = stats.norm.logpdf(data[\"dist\"], loc=mu, scale=m[\"sigma\"][s])\n logprob[:, s] = p_", "_____no_output_____" ] ], [ [ "#### Code 7.21", "_____no_output_____" ] ], [ [ "n_cases = data.shape[0]\nlppd = np.zeros(n_cases)\nfor a in range(1, n_cases):\n lppd[a] = logsumexp(logprob[a]) - np.log(n_samples)", "_____no_output_____" ] ], [ [ "#### Code 7.22", "_____no_output_____" ] ], [ [ "pWAIC = np.zeros(n_cases)\nfor i in range(1, n_cases):\n pWAIC[i] = np.var(logprob[i])", "_____no_output_____" ] ], [ [ "#### Code 7.23", "_____no_output_____" ] ], [ [ "-2 * (sum(lppd) - sum(pWAIC))", "_____no_output_____" ] ], [ [ "#### Code 7.24", "_____no_output_____" ] ], [ [ "waic_vec = -2 * (lppd - pWAIC)\n(n_cases * np.var(waic_vec)) ** 0.5", "_____no_output_____" ] ], [ [ "#### Setup for Code 7.25+\n\nHave to reproduce m6.6-m6.8 from Code 6.13-6.17 in Chapter 6", "_____no_output_____" ] ], [ [ "# number of plants\nN = 100\n# simulate initial heights\nh0 = np.random.normal(10, 2, N)\n# assign treatments and simulate fungus and growth\ntreatment = np.repeat([0, 1], N / 2)\nfungus = np.random.binomial(n=1, p=0.5 - treatment * 0.4, size=N)\nh1 = h0 + np.random.normal(5 - 3 * fungus, size=N)\n# compose a clean data frame\nd = pd.DataFrame.from_dict({\"h0\": h0, \"h1\": h1, \"treatment\": treatment, \"fungus\": fungus})\n\nwith pm.Model() as m_6_6:\n p = pm.Lognormal(\"p\", 0, 0.25)\n\n mu = pm.Deterministic(\"mu\", p * d.h0)\n sigma = pm.Exponential(\"sigma\", 1)\n\n h1 = pm.Normal(\"h1\", mu=mu, sigma=sigma, observed=d.h1)\n\n m_6_6_trace = pm.sample(return_inferencedata=True)\n\nwith pm.Model() as m_6_7:\n a = pm.Normal(\"a\", 0, 0.2)\n bt = pm.Normal(\"bt\", 0, 0.5)\n bf = pm.Normal(\"bf\", 0, 0.5)\n\n p = a + bt * d.treatment + bf * d.fungus\n\n mu = pm.Deterministic(\"mu\", p * d.h0)\n sigma = pm.Exponential(\"sigma\", 1)\n\n h1 = pm.Normal(\"h1\", mu=mu, sigma=sigma, observed=d.h1)\n\n m_6_7_trace = pm.sample(return_inferencedata=True)\n\nwith pm.Model() as m_6_8:\n a = 
pm.Normal(\"a\", 0, 0.2)\n bt = pm.Normal(\"bt\", 0, 0.5)\n\n p = a + bt * d.treatment\n\n mu = pm.Deterministic(\"mu\", p * d.h0)\n sigma = pm.Exponential(\"sigma\", 1)\n\n h1 = pm.Normal(\"h1\", mu=mu, sigma=sigma, observed=d.h1)\n\n m_6_8_trace = pm.sample(return_inferencedata=True)", "Auto-assigning NUTS sampler...\nInitializing NUTS using jitter+adapt_diag...\nMultiprocess sampling (4 chains in 4 jobs)\nNUTS: [sigma, p]\n" ] ], [ [ "#### Code 7.25", "_____no_output_____" ] ], [ [ "az.waic(m_6_7_trace, m_6_7, scale=\"deviance\")", "/Users/ksachdeva/Desktop/Dev/myoss/resources/env/lib/python3.6/site-packages/arviz/stats/stats.py:1415: UserWarning: For one or more samples the posterior variance of the log predictive densities exceeds 0.4. This could be indication of WAIC starting to fail. \nSee http://arxiv.org/abs/1507.04544 for details\n \"For one or more samples the posterior variance of the log predictive \"\n" ] ], [ [ "#### Code 7.26", "_____no_output_____" ] ], [ [ "compare_df = az.compare(\n {\n \"m_6_6\": m_6_6_trace,\n \"m_6_7\": m_6_7_trace,\n \"m_6_8\": m_6_8_trace,\n },\n method=\"pseudo-BMA\",\n ic=\"waic\",\n scale=\"deviance\",\n)\ncompare_df", "_____no_output_____" ] ], [ [ "#### Code 7.27", "_____no_output_____" ] ], [ [ "waic_m_6_7 = az.waic(m_6_7_trace, pointwise=True, scale=\"deviance\")\nwaic_m_6_8 = az.waic(m_6_8_trace, pointwise=True, scale=\"deviance\")\n\n# pointwise values are stored in the waic_i attribute.\ndiff_m_6_7_m_6_8 = waic_m_6_7.waic_i - waic_m_6_8.waic_i\n\nn = len(diff_m_6_7_m_6_8)\n\nnp.sqrt(n * np.var(diff_m_6_7_m_6_8)).values", "/Users/ksachdeva/Desktop/Dev/myoss/resources/env/lib/python3.6/site-packages/arviz/stats/stats.py:1415: UserWarning: For one or more samples the posterior variance of the log predictive densities exceeds 0.4. This could be indication of WAIC starting to fail. \nSee http://arxiv.org/abs/1507.04544 for details\n \"For one or more samples the posterior variance of the log predictive \"\n" ] ], [ [ "#### Code 7.28", "_____no_output_____" ] ], [ [ "40.0 + np.array([-1, 1]) * 10.4 * 2.6", "_____no_output_____" ] ], [ [ "#### Code 7.29", "_____no_output_____" ] ], [ [ "az.plot_compare(compare_df);", "_____no_output_____" ] ], [ [ "#### Code 7.30", "_____no_output_____" ] ], [ [ "waic_m_6_6 = az.waic(m_6_6_trace, pointwise=True, scale=\"deviance\")\n\ndiff_m6_6_m6_8 = waic_m_6_6.waic_i - waic_m_6_8.waic_i\n\nn = len(diff_m6_6_m6_8)\n\nnp.sqrt(n * np.var(diff_m6_6_m6_8)).values", "_____no_output_____" ] ], [ [ "#### Code 7.31\n\ndSE is calculated by compare above, but `rethinking` produces a pairwise comparison. This is not implemented in `arviz`, but we can hack it together:", "_____no_output_____" ] ], [ [ "dataset_dict = {\"m_6_6\": m_6_6_trace, \"m_6_7\": m_6_7_trace, \"m_6_8\": m_6_8_trace}\n\n# compare all models\ns0 = az.compare(dataset_dict, ic=\"waic\", scale=\"deviance\")[\"dse\"]\n# the output compares each model to the 'best' model - i.e. two models are compared to one.\n# to complete a pair-wise comparison we need to compare the remaining two models.\n\n# to do this, remove the 'best' model from the input data\ndel dataset_dict[s0.index[0]]\n\n# re-run compare with the remaining two models\ns1 = az.compare(dataset_dict, ic=\"waic\", scale=\"deviance\")[\"dse\"]\n\n# s0 compares two models to one model, and s1 compares the remaining two models to each other\n# now we just nee to wrangle them together!\n\n# convert them both to dataframes, setting the name to the 'best' model in each `compare` output.\n# (i.e. 
the name is the model that others are compared to)\ndf_0 = s0.to_frame(name=s0.index[0])\ndf_1 = s1.to_frame(name=s1.index[0])\n\n# merge these dataframes to create a pairwise comparison\npd.merge(df_0, df_1, left_index=True, right_index=True)", "/Users/ksachdeva/Desktop/Dev/myoss/resources/env/lib/python3.6/site-packages/arviz/stats/stats.py:1415: UserWarning: For one or more samples the posterior variance of the log predictive densities exceeds 0.4. This could be indication of WAIC starting to fail. \nSee http://arxiv.org/abs/1507.04544 for details\n \"For one or more samples the posterior variance of the log predictive \"\n" ] ], [ [ "**Note:** this work for three models, but will get increasingly hack-y with additional models. The function below can be applied to *n* models:", "_____no_output_____" ] ], [ [ "def pairwise_compare(dataset_dict, metric=\"dse\", **kwargs):\n \"\"\"\n Calculate pairwise comparison of models in dataset_dict.\n\n Parameters\n ----------\n dataset_dict : dict\n A dict containing two ore more {'name': pymc3.backends.base.MultiTrace}\n items.\n metric : str\n The name of the matric to be calculated. Can be any valid column output\n by `arviz.compare`. Note that this may change depending on the **kwargs\n that are specified.\n kwargs\n Arguments passed to `arviz.compare`\n \"\"\"\n data_dict = dataset_dict.copy()\n dicts = []\n\n while len(data_dict) > 1:\n c = az.compare(data_dict, **kwargs)[metric]\n dicts.append(c.to_frame(name=c.index[0]))\n del data_dict[c.index[0]]\n\n return pd.concat(dicts, axis=1)", "_____no_output_____" ], [ "dataset_dict = {\"m_6_6\": m_6_6_trace, \"m_6_7\": m_6_7_trace, \"m_6_8\": m_6_8_trace}\n\npairwise_compare(dataset_dict, metric=\"dse\", ic=\"waic\", scale=\"deviance\")", "/Users/ksachdeva/Desktop/Dev/myoss/resources/env/lib/python3.6/site-packages/arviz/stats/stats.py:1415: UserWarning: For one or more samples the posterior variance of the log predictive densities exceeds 0.4. This could be indication of WAIC starting to fail. 
\nSee http://arxiv.org/abs/1507.04544 for details\n \"For one or more samples the posterior variance of the log predictive \"\n" ] ], [ [ "#### Code 7.32", "_____no_output_____" ] ], [ [ "d = pd.read_csv(\"Data/WaffleDivorce.csv\", delimiter=\";\")\n\nd[\"A\"] = stats.zscore(d[\"MedianAgeMarriage\"])\nd[\"D\"] = stats.zscore(d[\"Divorce\"])\nd[\"M\"] = stats.zscore(d[\"Marriage\"])", "_____no_output_____" ], [ "with pm.Model() as m_5_1:\n a = pm.Normal(\"a\", 0, 0.2)\n bA = pm.Normal(\"bA\", 0, 0.5)\n\n mu = a + bA * d[\"A\"]\n sigma = pm.Exponential(\"sigma\", 1)\n\n D = pm.Normal(\"D\", mu, sigma, observed=d[\"D\"])\n\n m_5_1_trace = pm.sample(return_inferencedata=True)\n\nwith pm.Model() as m_5_2:\n a = pm.Normal(\"a\", 0, 0.2)\n bM = pm.Normal(\"bM\", 0, 0.5)\n\n mu = a + bM * d[\"M\"]\n sigma = pm.Exponential(\"sigma\", 1)\n\n D = pm.Normal(\"D\", mu, sigma, observed=d[\"D\"])\n\n m_5_2_trace = pm.sample(return_inferencedata=True)\n\nwith pm.Model() as m_5_3:\n a = pm.Normal(\"a\", 0, 0.2)\n bA = pm.Normal(\"bA\", 0, 0.5)\n bM = pm.Normal(\"bM\", 0, 0.5)\n\n mu = a + bA * d[\"A\"] + bM * d[\"M\"]\n sigma = pm.Exponential(\"sigma\", 1)\n\n D = pm.Normal(\"D\", mu, sigma, observed=d[\"D\"])\n\n m_5_3_trace = pm.sample(return_inferencedata=True)", "Auto-assigning NUTS sampler...\nInitializing NUTS using jitter+adapt_diag...\nMultiprocess sampling (4 chains in 4 jobs)\nNUTS: [sigma, bA, a]\n" ] ], [ [ "#### Code 7.33", "_____no_output_____" ] ], [ [ "az.compare(\n {\"m_5_1\": m_5_1_trace, \"m_5_2\": m_5_2_trace, \"m_5_3\": m_5_3_trace},\n scale=\"deviance\",\n)", "_____no_output_____" ] ], [ [ "#### Code 7.34", "_____no_output_____" ] ], [ [ "psis_m_5_3 = az.loo(m_5_3_trace, pointwise=True, scale=\"deviance\")\nwaic_m_5_3 = az.waic(m_5_3_trace, pointwise=True, scale=\"deviance\")\n\n# Figure 7.10\nplt.scatter(psis_m_5_3.pareto_k, waic_m_5_3.waic_i)\nplt.xlabel(\"PSIS Pareto k\")\nplt.ylabel(\"WAIC\");", "/Users/ksachdeva/Desktop/Dev/myoss/resources/env/lib/python3.6/site-packages/arviz/stats/stats.py:1415: UserWarning: For one or more samples the posterior variance of the log predictive densities exceeds 0.4. This could be indication of WAIC starting to fail. 
\nSee http://arxiv.org/abs/1507.04544 for details\n \"For one or more samples the posterior variance of the log predictive \"\n" ], [ "# Figure 7.11\n\nv = np.linspace(-4, 4, 100)\n\ng = stats.norm(loc=0, scale=1)\nt = stats.t(df=2, loc=0, scale=1)\n\nfig, (ax, lax) = plt.subplots(1, 2, figsize=[8, 3.5])\n\nax.plot(v, g.pdf(v), color=\"b\")\nax.plot(v, t.pdf(v), color=\"k\")\n\nlax.plot(v, -g.logpdf(v), color=\"b\")\nlax.plot(v, -t.logpdf(v), color=\"k\");", "_____no_output_____" ] ], [ [ "#### Code 7.35", "_____no_output_____" ] ], [ [ "with pm.Model() as m_5_3t:\n a = pm.Normal(\"a\", 0, 0.2)\n bA = pm.Normal(\"bA\", 0, 0.5)\n bM = pm.Normal(\"bM\", 0, 0.5)\n\n mu = a + bA * d[\"A\"] + bM * d[\"M\"]\n sigma = pm.Exponential(\"sigma\", 1)\n\n D = pm.StudentT(\"D\", 2, mu, sigma, observed=d[\"D\"])\n\n m_5_3t_trace = pm.sample(return_inferencedata=True)", "Auto-assigning NUTS sampler...\nInitializing NUTS using jitter+adapt_diag...\nMultiprocess sampling (4 chains in 4 jobs)\nNUTS: [sigma, bM, bA, a]\n" ], [ "az.loo(m_5_3t_trace, pointwise=True, scale=\"deviance\")", "_____no_output_____" ], [ "az.plot_forest([m_5_3_trace, m_5_3t_trace], model_names=[\"m_5_3\", \"m_5_3t\"], figsize=[6, 3.5]);", "_____no_output_____" ], [ "%load_ext watermark\n%watermark -n -u -v -iv -w", "pandas 1.1.1\nnumpy 1.19.1\npymc3 3.9.3\narviz 0.9.0\nstatsmodels.api 0.11.1\nlast updated: Mon Aug 24 2020 \n\nCPython 3.6.10\nIPython 7.16.1\nwatermark 2.0.2\n" ] ] ]
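For reference, the quantities computed numerically in codes 7.12 and 7.20–7.23 above correspond to the standard definitions of information entropy, KL divergence and WAIC on the deviance scale; this is a restatement for clarity (not taken from the notebook), with $p$ and $q$ discrete distributions, $S$ posterior samples and $N$ observations:

$$H(p) = -\sum_i p_i \log p_i, \qquad D_{KL}(p \,\|\, q) = \sum_i p_i \log\frac{p_i}{q_i}$$

$$\mathrm{lppd} = \sum_{i=1}^{N} \log\!\left(\frac{1}{S}\sum_{s=1}^{S} p(y_i \mid \theta_s)\right), \qquad p_{\mathrm{WAIC}} = \sum_{i=1}^{N}\operatorname{Var}_s\big(\log p(y_i \mid \theta_s)\big), \qquad \mathrm{WAIC} = -2\,(\mathrm{lppd} - p_{\mathrm{WAIC}})$$

Note that the hand-rolled loops in codes 7.21–7.22 iterate over `range(1, n_cases)` and therefore leave the first case's contribution at zero; starting those ranges at 0 would match the definitions exactly.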
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
4a8924e5a939b3ffd9f01b52a6c8561cfcd76467
140,536
ipynb
Jupyter Notebook
src/vocabulary_histogram.ipynb
alazareva/291Peoject
73d97f5280f32afeeb831757a03fb5d7cce160da
[ "BSD-2-Clause" ]
null
null
null
src/vocabulary_histogram.ipynb
alazareva/291Peoject
73d97f5280f32afeeb831757a03fb5d7cce160da
[ "BSD-2-Clause" ]
null
null
null
src/vocabulary_histogram.ipynb
alazareva/291Peoject
73d97f5280f32afeeb831757a03fb5d7cce160da
[ "BSD-2-Clause" ]
null
null
null
44.068987
28,690
0.521546
[ [ [ "import numpy as np\nimport pickle\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n", "The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n" ], [ "def build_vocabulary(sentence_iterator, word_count_threshold=0, save_variables=False): # borrowed this function from NeuralTalk\n\n print 'preprocessing word counts and creating vocab based on word count threshold %d' % (word_count_threshold)\n length_of_longest_sentence = np.max(map(lambda x: len(x.split(' ')), sentence_iterator))\n print 'Length of the longest sentence is %s'%length_of_longest_sentence\n word_counts = {}\n number_of_sentences = 0\n length_of_sentences =list()\n for sentence in sentence_iterator:\n number_of_sentences += 1\n length_of_sentences.append(len(sentence.lower().split(' ')))\n for current_word in sentence.lower().split(' '):\n word_counts[current_word] = word_counts.get(current_word, 0) + 1\n vocab = [current_word for current_word in word_counts if word_counts[current_word] >= word_count_threshold]\n print 'filtered words from %d to %d' % (len(word_counts), len(vocab))\n\n index_to_word_list = {}\n index_to_word_list[0] = '#END#' # end token at the end of the sentence. make first dimension be end token\n word_to_index_list = {}\n word_to_index_list['#START#'] = 0 # make first vector be the start token\n current_index = 1\n\n for current_word in vocab:\n word_to_index_list[current_word] = current_index\n index_to_word_list[current_index] = current_word\n current_index += 1\n\n word_counts['#END#'] = number_of_sentences\n\n plt.subplot(2, 1, 1)\n plt.plot(length_of_sentences)\n plt.title('Word distribution')\n plt.xlabel('Sample #')\n plt.ylabel('Length of words')\n \n print np.mean(length_of_sentences)\n print np.max(length_of_sentences)\n print np.min(length_of_sentences)\n print np.median(length_of_sentences)\n \n if save_variables:\n print 'Completed processing captions. Saving work now ...'\n word_to_index_path = 'word_to_index.p'\n index_to_word_path = 'index_to_word.p'\n word_count_path = 'word_count.p'\n pickle.dump(word_to_index_list, open(word_to_index_path, \"wb\"))\n pickle.dump(index_to_word_list, open(index_to_word_path, \"wb\"))\n pickle.dump(word_counts, open(word_count_path, \"wb\"))\n\n return word_to_index_list, index_to_word_list, word_counts\n\n", "_____no_output_____" ], [ "annotation_path = 'training_set_recipes.p'\nannotation_data = pickle.load(open(annotation_path, \"rb\"))\ncaptions = annotation_data.values()\n ", "_____no_output_____" ], [ " build_vocabulary(captions)", "preprocessing word counts and creating vocab based on word count threshold 0\nLength of the longest sentence is 747\nfiltered words from 71752 to 71752\n135.501885714\n747\n11\n124.0\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
4a89302ef3a1743a5405b2a0f553f20b763c08c5
90,247
ipynb
Jupyter Notebook
testzie/svm_test.ipynb
amozie/amozie
fb7c16ce537bc5567f9c87cfc22c564a4dffc4ef
[ "Apache-2.0" ]
null
null
null
testzie/svm_test.ipynb
amozie/amozie
fb7c16ce537bc5567f9c87cfc22c564a4dffc4ef
[ "Apache-2.0" ]
null
null
null
testzie/svm_test.ipynb
amozie/amozie
fb7c16ce537bc5567f9c87cfc22c564a4dffc4ef
[ "Apache-2.0" ]
null
null
null
507.005618
86,480
0.937394
[ [ [ "import numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport itertools\n%matplotlib inline", "_____no_output_____" ], [ "from sklearn import svm", "_____no_output_____" ], [ "X = np.random.rand(1000, 2)\nY = np.where((X[:, 0]-0.5)**2/9 + (X[:, 1]-0.5)**2/6 < 0.01 + np.random.randn(1000)/400, 1, 0)", "_____no_output_____" ], [ "svc = svm.SVC(C=10, gamma=1, probability=True)", "_____no_output_____" ], [ "print(svc.fit(X, Y))\nprint(svc.score(X, Y))", "SVC(C=10, cache_size=200, class_weight=None, coef0=0.0,\n decision_function_shape=None, degree=3, gamma=1, kernel='rbf',\n max_iter=-1, probability=True, random_state=None, shrinking=True,\n tol=0.001, verbose=False)\n0.955\n" ], [ "px, py = np.meshgrid(np.linspace(0, 1), np.linspace(0, 1))\npxy = np.vstack([px.flatten(), py.flatten()]).T\npz = svc.predict_proba(pxy)[:, 1].reshape(50, 50)", "_____no_output_____" ], [ "plt.contourf(px, py, pz, cmap=plt.cm.binary_r)\nplt.colorbar()\nplt.contour(px, py, pz, [0.5], colors='k')\nplt.scatter(X[:,0], X[:,1], c=Y, marker='.')", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
4a89320f4c77927fd79fe114a41f1ba78eef57d4
659
ipynb
Jupyter Notebook
sem24/main.ipynb
Sergey-Tkachenko/MachineLearningSeminars
813711ac83343fecb20b8fe5e3a489514be41e71
[ "MIT" ]
100
2020-09-04T23:04:40.000Z
2022-03-05T16:43:54.000Z
seminars/sem24/main.ipynb
mordiggian174/MachineLearningSeminars
83e29c023fc86ed61ff07ca245773f4d0cdb0a09
[ "MIT" ]
null
null
null
seminars/sem24/main.ipynb
mordiggian174/MachineLearningSeminars
83e29c023fc86ed61ff07ca245773f4d0cdb0a09
[ "MIT" ]
67
2020-09-05T11:09:23.000Z
2022-03-31T12:20:00.000Z
16.475
34
0.517451
[ [ [ "# Онлайновое обучение", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown" ] ]
4a89446b301905a5d61a1f24aab982e285394903
294,592
ipynb
Jupyter Notebook
src/theory/Spielfelder.ipynb
LukasWallisch/game-ur-analysis
760a892cc20924cf1864b9602c5d3239a8995323
[ "MIT" ]
null
null
null
src/theory/Spielfelder.ipynb
LukasWallisch/game-ur-analysis
760a892cc20924cf1864b9602c5d3239a8995323
[ "MIT" ]
null
null
null
src/theory/Spielfelder.ipynb
LukasWallisch/game-ur-analysis
760a892cc20924cf1864b9602c5d3239a8995323
[ "MIT" ]
null
null
null
597.549696
59,478
0.939119
[ [ [ "from mpl_toolkits.axes_grid1 import make_axes_locatable\nimport random\nimport numpy as np\nfrom src.codeGameSimulation.GameUr import GameSettings\nfrom theory.helpers import labelLine,draw_squares,draw_circles,draw_stars,draw_path,draw_fives,draw_4fives,draw_4eyes,draw_steps\nimport gameBoardDisplay as gbd\n\nfrom scipy import stats\n\n\n# %config InlineBackend.figure_formats = ['svg']\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport matplotlib.style as mplstyle\nfrom matplotlib.colors import LinearSegmentedColormap\nimport matplotlib.collections as collections\nimport matplotlib.patches as mpatches\nimport matplotlib.axes as maxes\n\nfrom matplotlib import patheffects\n\n\nmplstyle.use('fast')\nmplstyle.use('default')\n\nmpl.rcParams['figure.dpi'] = 30\nmpl.rcParams['figure.figsize'] = [10, 20]\n\n\nfont = {'family' : 'normal',\n # 'weight' : 'bold',\n 'size' : 30}\n\nmpl.rc('font', **font)\n\ncolors = [\"lightgreen\", \"yellow\", \"red\"]\ncmap = LinearSegmentedColormap.from_list(\"mycmap\", colors)\n", "_____no_output_____" ], [ "def draw_gameboard(ax,noEnd=False,noStart=False, rotate=False,dimitry=False,clear=False):\n if rotate:\n draw_squares(ax,[1,2,3,4],[0,2],\"prepare\",clear)\n draw_squares(ax,list(range(1,9)),[1],\"fight\",clear)\n if dimitry:\n draw_squares(ax,[7,8],[0,2],\"fight\",clear)\n else:\n draw_squares(ax,[7,8],[0,2],\"retreat\",clear)\n draw_stars(ax,[1,7],[0,2])\n draw_stars(ax,[4],[1])\n if not clear:\n if not noStart:\n draw_circles(ax,[5],[0,2],\"start\")\n if not noEnd:\n draw_circles(ax,[6],[0,2],\"ende\")\n", "_____no_output_____" ], [ "def formatAxis(ax,xlim,ylim,xticksRange=(0,9)):\n ax.set_yticks([0, 1, 2], [\"A\", \"B\", \"C\"])\n ax.set_xticks(range(*xticksRange),range(xticksRange[0]-1,xticksRange[1]-1))\n ax.set_aspect(\"equal\", \"box\")\n ax.set_ylim(*ylim)\n ax.set_xlim(*xlim)\n ax.spines[\"top\"].set_visible(False)\n ax.spines[\"right\"].set_visible(False)\n ax.spines[\"left\"].set_color(\"gray\")\n ax.spines[\"bottom\"].set_color(\"gray\")", "_____no_output_____" ], [ "figGB, ax = plt.subplot_mosaic([[\"p0\"]], figsize=[15, 5], constrained_layout=True)\ndraw_gameboard(ax[\"p0\"],rotate=True,clear=True)\nformatAxis(ax[\"p0\"],(.4,8.6),(-0.6,2.6))", "findfont: Font family ['normal'] not found. Falling back to DejaVu Sans.\n" ], [ "figSimple, ax = plt.subplot_mosaic([[\"p0\"]], figsize=[15, 5], constrained_layout=True)\n\ndraw_gameboard(ax[\"p0\"],rotate=True)\ny = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0]\nx = [5, 4, 3, 2, 1, 1, 2, 3, 4, 5, 6, 7, 8, 8, 7, 6]\n\nx0 = y[:5] + [x_ - 0.2 for x_ in y[5:13]] + y[13:]\nx1 = [x_ + 2 for x_ in y[:5]] + [x_ + 0.2 for x_ in y[5:13]] + [x_ + 2 for x_ in y[13:]]\ndraw_path(ax[\"p0\"], x, x0, \"red\")\ndraw_path(ax[\"p0\"], x, x1, \"green\")\n\nformatAxis(ax[\"p0\"],(.4,8.6),(-0.6,2.6))\n\n\n", "findfont: Font family ['normal'] not found. 
Falling back to DejaVu Sans.\n" ], [ "figFinkel, ax = plt.subplot_mosaic([[\"p0\"]], figsize=[15, 5], constrained_layout=True)\n\ndraw_gameboard(ax[\"p0\"],rotate=True,dimitry=True, noEnd=True)\n\ndraw_circles(ax[\"p0\"],6,0,\"ende\")\n\n\ny = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 0,0,0]\nx = [5, 4, 3, 2, 1, 1, 2, 3, 4, 5, 6, 7, 7, 8, 8, 8,7,6]\n\nx0 = y[:5] + [x_ - 0.2 for x_ in y[5:14]] + y[14:15]+ [x_ + .2 for x_ in y[15:]]\nx1 = [x_ + 2 for x_ in y[:5]] + [x_ + 0.2 for x_ in y[5:14]] + y[14:15]+ [x_ - .2 for x_ in y[15:]]\ny0=x[:10]+[y_ + .2 for y_ in x[10:13]]+[y_ - .2 for y_ in x[13:16]] +x[16:]\ny1=x[:10]+[y_ - .2 for y_ in x[10:13]]+[y_ + .2 for y_ in x[13:16]] +x[16:]\ndraw_path(ax[\"p0\"], y0, x0, \"red\")\ndraw_path(ax[\"p0\"], y1, x1, \"green\")\n\nformatAxis(ax[\"p0\"],(.4,8.6),(-0.6,2.6))", "_____no_output_____" ], [ "figDimitry, ax = plt.subplot_mosaic([[\"p0\"]], figsize=[15, 5], constrained_layout=True)\n\ndraw_gameboard(ax[\"p0\"],noEnd=True,noStart=True,rotate=True,dimitry=True,clear=True)\n# draw_circles(ax[\"p0\"],0,1,\"ende\")\n\ny = [0,0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 2,2,1,1,1,1,1,1,1,1]\nx = [5, 4, 3, 2, 1, 1, 2, 3, 4, 5, 6, 7, 7, 8, 8, 8,7,7,6,5,4,3,2,1,0]\n\nx0 = y[:5] + [x_ - 0.2 for x_ in y[5:12]]+y[12:17] + [x_ + 0.2 for x_ in y[17:]]\n\n# draw_path(ax[\"p0\"],x, x0, \"red\")\n# draw_circles(ax[\"p0\"],8,1,\"turn\")\ndraw_fives(ax[\"p0\"],8,[0,2],\"small\")\ndraw_fives(ax[\"p0\"],3,[0,2],\"normal\")\ndraw_fives(ax[\"p0\"],[2,5,8],1,\"normal\")\ndraw_4fives(ax[\"p0\"],[3,6],1)\ndraw_4eyes(ax[\"p0\"],[2,4],[0,2])\ndraw_4eyes(ax[\"p0\"],7,1)\ndraw_steps(ax[\"p0\"],1,1)\n\nformatAxis(ax[\"p0\"],(.4,8.6),(-0.6,2.6))", "_____no_output_____" ], [ "figDimitryPath, ax = plt.subplot_mosaic([[\"p0\"]], figsize=[15, 5], constrained_layout=True)\n\ndraw_gameboard(ax[\"p0\"],noEnd=True,rotate=True,dimitry=True)\ndraw_circles(ax[\"p0\"],0,1,\"ende\")\n\ny = [0,0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 2,2,1,1,1,1,1,1,1,1]\nx = [5, 4, 3, 2, 1, 1, 2, 3, 4, 5, 6, 7, 7, 8, 8, 8,7,7,6,5,4,3,2,1,0]\n\nx0 = y[:5] + [x_ - 0.2 for x_ in y[5:12]]+y[12:17] + [x_ + 0.2 for x_ in y[17:]]\n\ndraw_path(ax[\"p0\"],x, x0, \"red\")\ndraw_circles(ax[\"p0\"],8,0,\"turn\")\n# draw_fives(ax[\"p0\"],8,[0,2],\"small\")\n# draw_fives(ax[\"p0\"],3,[0,2],\"normal\")\n# draw_fives(ax[\"p0\"],[2,5,8],1,\"normal\")\n# draw_4fives(ax[\"p0\"],[3,6],1)\n# draw_4eyes(ax[\"p0\"],[2,4],[0,2])\n# draw_4eyes(ax[\"p0\"],7,1)\n# draw_steps(ax[\"p0\"],1,1)\n\nformatAxis(ax[\"p0\"],(-.6,8.6),(-0.6,2.6))", "_____no_output_____" ], [ "figFinkelGameBoard, ax = plt.subplot_mosaic([[\"p0\"]], figsize=[15, 5], constrained_layout=True)\ndraw_circles(ax[\"p0\"],5,[0,2],\"start\")\ndraw_squares(ax[\"p0\"], [1, 2, 3, 4], [0, 2], \"prepare\", False)\ndraw_squares(ax[\"p0\"],list(range(1,13)),[1],\"fight\",False)\ndraw_circles(ax[\"p0\"],13,1,\"ende\")\ndraw_stars(ax[\"p0\"],[4,8,12],1)\ndraw_stars(ax[\"p0\"],1,[0,2])\nformatAxis(ax[\"p0\"],(.4,13.6),(-.6,2.6),(1,14))\n\ny = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]\nx = [5, 4, 3, 2, 1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]\n\nx0 = y[:5] + [x_ - 0.2 for x_ in y[5:]]\nx1 = [x_ + 2 for x_ in y[:5]] + [x_ + 0.2 for x_ in y[5:]]\ny0 = x[:10]+[y_ + .2 for y_ in x[10:13]]+[y_ - .2 for y_ in x[13:16]] + x[16:]\ny1 = x[:10]+[y_ - .2 for y_ in x[10:13]]+[y_ + .2 for y_ in x[13:16]] + x[16:]\ndraw_path(ax[\"p0\"], y0, x0, \"red\")\ndraw_path(ax[\"p0\"], y1, x1, \"green\")\n\n", "_____no_output_____" ], [ "figSimple_singleside, ax = 
plt.subplot_mosaic([[\"p0\"]], figsize=[15, 5], constrained_layout=True)\n\ndraw_circles(ax[\"p0\"], 5, 0, \"start\")\ndraw_squares(ax[\"p0\"], [1, 2, 3, 4], 0, \"prepare\", False)\ndraw_squares(ax[\"p0\"], list(range(1, 9)), [1], \"fight\", False)\ndraw_squares(ax[\"p0\"], list(range(7, 9)), 0, \"retreat\", False)\ndraw_circles(ax[\"p0\"], 6, 0, \"ende\")\n\ndraw_stars(ax[\"p0\"], 7, 0)\ndraw_stars(ax[\"p0\"], 4, 1)\ndraw_stars(ax[\"p0\"], 1, 0)\n\nformatAxis(ax[\"p0\"],(.4,8.6),(-0.6,1.6))\n\ny = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0]\nx = [5, 4, 3, 2, 1, 1, 2, 3, 4, 5, 6, 7, 8, 8, 7, 6]\n\ny0 = [y_ + 0.2 for y_ in y[:5]] + [y_ - 0.05 for y_ in y[5:13]] +[y_ + 0.2 for y_ in y[13:]]\ny1 = [y_ -.2 for y_ in y[:5]] + [y_ + 0.05 for y_ in y[5:13]] + [y_ -.2 for y_ in y[13:]]\nx0 = x[:4]+[x_+.2 for x_ in x[4:6]]+x[6:12]+[x_-.2 for x_ in x[12:14]]+x[14:]\nx1 = x[:4]+[x_-.2 for x_ in x[4:6]]+x[6:12]+[x_+.2 for x_ in x[12:14]]+x[14:]\ndraw_path(ax[\"p0\"], x0, y0, \"red\")\ndraw_path(ax[\"p0\"], x1, y1, \"green\")\n# ax[\"p0\"].plot(.5,.5,marker=\"o\",color=\"black\",fillstyle=\"full\",markersize=30,markeredgecolor=\"black\")\n# ax[\"p0\"].plot(8.5, .5, marker=\"o\", color=\"black\",\n# fillstyle=\"full\", markersize=30, markeredgecolor=\"black\")\n", "_____no_output_____" ], [ "figSimple_straight, ax = plt.subplot_mosaic(\n [[\"p0\"]], figsize=[35, 5], constrained_layout=True)\n\ndraw_circles(ax[\"p0\"], 0, 0, \"start\")\ndraw_squares(ax[\"p0\"], list(range(1, 5)), 0, \"prepare\", False)\ndraw_squares(ax[\"p0\"], list(range(5, 13)), 0, \"fight\", False)\ndraw_squares(ax[\"p0\"], list(range(13, 15)), 0, \"retreat\", False)\ndraw_circles(ax[\"p0\"], 15, 0, \"ende\")\n\ndraw_stars(ax[\"p0\"], [4,8,14], 0)\n\nformatAxis(ax[\"p0\"], (-.6, 15.6), (-0.6, 0.6),(0,16))\n\ny = [0]*16\nx = list(range(0,16))\n\ny0 = [y_ + 0.2 for y_ in y[:5]] + [y_ + .05 for y_ in y[5:13]] + [y_ + 0.2 for y_ in y[13:]]\ny1 = [y_ - .2 for y_ in y[:5]] + [y_ - .05 for y_ in y[5:13]] + [y_ - .2 for y_ in y[13:]]\ndraw_path(ax[\"p0\"], x, y0, \"red\")\ndraw_path(ax[\"p0\"], x, y1, \"green\")\nax[\"p0\"].spines[\"left\"].set_visible(False)\nax[\"p0\"].set_yticks([0],\"\")\nax[\"p0\"].set_xticks(range(0, 16), range(0, 16))\n# ax[\"p0\"].plot(4.5,-.5,marker=\"o\",color=\"black\",fillstyle=\"full\",markersize=30,markeredgecolor=\"black\")\n# ax[\"p0\"].plot(12.5,-.5,marker=\"o\",color=\"black\",fillstyle=\"full\",markersize=30,markeredgecolor=\"black\")", "_____no_output_____" ], [ "figGB.savefig(\"../../tex/game_ur_ba_thesis/img/Grafiken/Gameboard.png\",dpi=300,)\nfigFinkelGameBoard.savefig(\n \"../../tex/game_ur_ba_thesis/img/Grafiken/GameboardFinkel.png\", dpi=300,)\nfigSimple.savefig(\"../../tex/game_ur_ba_thesis/img/Grafiken/simplePathGameboard.png\",dpi=300,)\nfigFinkel.savefig(\"../../tex/game_ur_ba_thesis/img/Grafiken/FinkelPathGameboard.png\",dpi=300,)\nfigDimitry.savefig(\"../../tex/game_ur_ba_thesis/img/Grafiken/DimitryGameboard.png\",dpi=300,)\nfigDimitryPath.savefig(\"../../tex/game_ur_ba_thesis/img/Grafiken/DimitryPathGameboard.png\",dpi=300,)\nfigSimple_singleside.savefig( \"../../tex/game_ur_ba_thesis/img/Grafiken/Gameboard_singleside.png\", dpi=300,)\nfigSimple_straight.savefig(\"../../tex/game_ur_ba_thesis/img/Grafiken/Gameboard_straight.png\",dpi=300,)", "_____no_output_____" ], [ "y=[-1,1]*2\nx=[-1]*2+[1]*2\nlist(zip(y,x))", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a8961722a58d729488ab9f74dceb1b1b92d9854
274,076
ipynb
Jupyter Notebook
Modulo1/Clase4/DecisionTheory.ipynb
AdrianRamosDS/mgpo2021
1004b3fff386b389594fd2b0f995756dd1b93d6e
[ "MIT" ]
null
null
null
Modulo1/Clase4/DecisionTheory.ipynb
AdrianRamosDS/mgpo2021
1004b3fff386b389594fd2b0f995756dd1b93d6e
[ "MIT" ]
null
null
null
Modulo1/Clase4/DecisionTheory.ipynb
AdrianRamosDS/mgpo2021
1004b3fff386b389594fd2b0f995756dd1b93d6e
[ "MIT" ]
6
2021-08-18T01:07:56.000Z
2021-09-07T04:06:28.000Z
224.284779
52,860
0.903961
[ [ [ "# Introduction to Decision Theory using Probabilistic Graphical Models\n\n<img style=\"float: right; margin: 0px 0px 15px 15px;\" src=\"https://upload.wikimedia.org/wikipedia/commons/b/bb/Risk_aversion_curve.jpg\" width=\"400px\" height=\"300px\" />\n\n> So far, we have seen that probabilistic graphical models are useful for modeling situations that involve uncertainty. Furthermore, we will see in the next module how using inference algorithms we will also reach conclusions abount the current situation from partial evidence: predictions.\n> \n> On the other hand, we do not only want to obtain these conclusions (predictions), but actually make decisions on top of these conclusions.\n>\n> It turns out that we can actually use probabilistic graphical models to encode not only the uncertain situations, but also the decision making agents with all the policies they are allowed to implement and the possible utilities that one may obtain.\n\n> **Objetives:**\n> - To learn how to represent decision situations using PGMs.\n> - To understand the maximum expected utility principle.\n> - To learn how to measure the value of information when making a decision.\n\n> **References:**\n> - Probabilistic Graphical Models: Principles and Techniques, By Daphne Koller and Nir Friedman. Ch. 22 - 23.\n> - Probabilistic Graphical Models Specialization, offered through Coursera. Prof. Daphne Koller.\n\n\n<p style=\"text-align:right;\"> Imagen recuperada de: https://upload.wikimedia.org/wikipedia/commons/b/bb/Risk_aversion_curve.jpg.</p>\n\n___", "_____no_output_____" ], [ "# 1. Maximizing Expected Utility", "_____no_output_____" ], [ "The theoretical foundations of decision theory were established long befor probabilistic graphical models came to live. The framework of *maximum expected utility* allows to formulate and solve decision problems that involve uncertainty.\n\nBefore continuing, we should be clear that the **utility** is a numerical function that assigns numbers to the various possible outcomes, encoding the preferences of the agent. These numbers:\n\n- Do not have meanings in themselves.\n- We only know that the larger, the better, according to the preferences of the agent.\n- Usually, we compare the utility of two outcomes by means of the $\\Delta U$, which represents the strength of the \"happiness\" change from an outcome w.r.t. the other.", "_____no_output_____" ], [ "The outcomes we were talking about above can vary along *multiple dimensions*. One of those dimensions is often the monetary gain, but most of the settings consider other dimensions as well.", "_____no_output_____" ], [ "## 1.1. 
Problem formulation and maximum expected utility principle\n\nA simple **decision making situation** $\\mathcal{D}$ is defined by:\n\n- A set of possible actions $A$, with $\\mathrm{Val}(A)=\\{a_1, \\dots, a_k\\}$.\n- A set of possible states (RVs) $\\bar{X}$, with $\\mathrm{Val}(\\bar{X})=\\{\\bar{x}_1, \\dots, \\bar{x}_n\\}$.\n- A conditional distribution $P(\\bar{X}|A)$.\n- A utility function $U(\\bar{X}, A)$, which expresses the agent's preferences.\n\nThe **expected utility** on the above decision making situation, given that $A=a$, is\n\n$$EU[\\mathcal{D}[a]] = \\sum_{\\bar{X}}P(\\bar{X}|a)U(\\bar{X},a).$$\n\nFurthermore, the **maximum expected utility (MEU)** principle states that we should choose the action that maximizes the expected utility\n\n$$a^\\ast = \\arg\\max_{a\\in\\mathrm{Val}(A)} EU[\\mathcal{D}[a]].$$", "_____no_output_____" ], [ "**How can we represent the above using PGMs?**\n\nWe can use the ideas we developed for PGMs to represent the decision making situations in a very interpretable way.\n\nIn this sense, we have:\n- Random variables are represented by *ovals* and stand for the state.\n- Actions are represented by *rectangles*.\n- Utilities are represented by *diamonds*. These have no children.\n\n**Example.** Consider the decision situation $\\mathcal{D}$ where a graduate of the Master's Degree in Data Science is deciding whether to found a Data Science consultancy company or not.\n\nWhile this person does not exactly know which will be the demand of consultancy services, he/she knows that the demand will be either:\n- $m^0$: nonexistent, with probability $0.5$;\n- $m^1$: moderate, with probability $0.3$;\n- $m^2$: high, with probability $0.2$.\n\nMoreover, he/she will obtain a utility $U(M, f^0)=0$ for $M=m^0, m^1, m^2$ in the case that he/she doesn't found the company, or the following utilities in the case that he/she found the company:\n- $U(m^0, f^1)=-7$;\n- $U(m^1, f^1)=5$;\n- $U(m^2, f^1)=20$.\n\nLet's represent the graphical model corresponding to this situation:", "_____no_output_____" ] ], [ [ "from IPython.display import Image", "_____no_output_____" ], [ "# First draw in the white board, then show (first_representation)\nImage(\"figures/first_representation.png\")", "_____no_output_____" ] ], [ [ "Then, according to this:\n\n- What are the expected utilities for each action?\n - $E[\\mathcal{D}[f^0]]=0$\n - $E[\\mathcal{D}[f^1]]=0.5 \\times -7 + 0.3 \\times 5 + 0.2 \\times 20=2.$\n \n- Which is the optimal action?\n - $f^1 = \\arg \\max_{f=f^0, f^1} E[\\mathcal{D}[f]]$.", "_____no_output_____" ], [ "With `pgmpy`:", "_____no_output_____" ] ], [ [ "# Import pgmpy.factors.discrete.DiscreteFactor\nfrom pgmpy.factors.discrete import DiscreteFactor", "_____no_output_____" ], [ "# Define factors P(M), U(M,F)\nP_M = DiscreteFactor(variables=[\"M\"],\n cardinality=[3],\n values=[0.5, 0.3, 0.2])\nU_MF = DiscreteFactor(variables=[\"M\", \"F\"],\n cardinality=[3, 2],\n values=[0, -7, 0, 5, 0, 20])", "_____no_output_____" ], [ "print(P_M)", "+------+----------+\n| M | phi(M) |\n+======+==========+\n| M(0) | 0.5000 |\n+------+----------+\n| M(1) | 0.3000 |\n+------+----------+\n| M(2) | 0.2000 |\n+------+----------+\n" ], [ "print(U_MF)", "+------+------+------------+\n| M | F | phi(M,F) |\n+======+======+============+\n| M(0) | F(0) | 0.0000 |\n+------+------+------------+\n| M(0) | F(1) | -7.0000 |\n+------+------+------------+\n| M(1) | F(0) | 0.0000 |\n+------+------+------------+\n| M(1) | F(1) | 5.0000 |\n+------+------+------------+\n| M(2) | F(0) | 0.0000 
|\n+------+------+------------+\n| M(2) | F(1) | 20.0000 |\n+------+------+------------+\n" ], [ "# Find Expected Utility\nEU = (P_M * U_MF).marginalize(variables=[\"M\"],\n inplace=False)", "_____no_output_____" ], [ "print(EU)", "+------+----------+\n| F | phi(F) |\n+======+==========+\n| F(0) | 0.0000 |\n+------+----------+\n| F(1) | 2.0000 |\n+------+----------+\n" ] ], [ [ "**Multiple utility nodes**\n\nIn the above example, we only had one utility node. However, we may include as many utility nodes as wanted to reduce the number of parameters:", "_____no_output_____" ] ], [ [ "Image(\"figures/student_utility.png\")", "_____no_output_____" ] ], [ [ "Where:\n- $V_G$: Happiness with the grade itself.\n- $V_Q$: Quality of life during studies.\n- $V_S$: Value of getting a good job.\n\nThe total utility can be formulated as:\n\n$$U=V_G+V_Q+V_S.$$\n\n*Question.* If $|\\mathrm{Val}(D)| = 2$, $|\\mathrm{Val}(S)| = 2$, $|\\mathrm{Val}(G)| = 3$, and $|\\mathrm{Val}(J)| = 2$, how many parameters do you need to completely specify the utility?\n\n- $|\\mathrm{Val}(V_G)| = 3$; $|\\mathrm{Val}(V_Q)| = 4$; $|\\mathrm{Val}(V_S)| = 2$. We need $3+4+2=9$ parameters.\n\n*Question.* How many if the utility weren't decomposed?\n\n- $3\\times 2 \\times 2 \\times 2 = 24$.", "_____no_output_____" ], [ "## 1.2. Information edges and decision rules\n\nThe influence diagrams we have depicted above, also allow us to capture the notion of information available to the agent when they make their decision.\n\n**Example.** In the example of the Master's graduate deciding founding or not founding his company, let's assume that he/she has the opportunity to carry out a survey to measure the overall market demand for Data Science consultancy before making the decision.\n\nIn this sense, the graph now looks like the following:", "_____no_output_____" ] ], [ [ "# First draw in the white board, then show (second_representation)\nImage(\"figures/second_representation.png\")", "_____no_output_____" ] ], [ [ "Hence, the agent can make its decision depending on the value of the survey, which is denoted by the precense of the edge.\n\nFormally,\n\n> *Definition.* A **decision rule** $\\delta_A$ at an action node $A$ is a conditional probability $P(A|\\mathrm{Pa}A)$ (a function that maps each instatiation of $\\mathrm{Pa}A$ to a probability distribution $\\delta_A$ over $\\mathrm{Val}(A)$).", "_____no_output_____" ], [ "Given the above, **what is the expected utility with information?**\n\nFollowing the same sort of ideas we get that\n\n$$EU[\\mathcal{D}[\\delta_A]] = \\sum_{\\bar{X}, A}P_{\\delta_A}(\\bar{X},A)U(\\bar{X},A),$$\n\nwhere $P_{\\delta_A}(\\bar{X},A)$ is the joint probability distribution over $\\bar{X}$ and $A$. 
The subindex $\\delta_A$ makes reference that this joind distribution depends on the selection of the decision rule $\\delta_A$.", "_____no_output_____" ], [ "Now, following the MEU, the optimal decision rule is:\n\n$$\\delta_A^\\ast = \\arg \\max_{\\delta_A} EU[\\mathcal{D}[\\delta_A]],$$\n\nand the MEU is\n\n$$MEU(\\mathcal{D}) = \\max_{\\delta_A} EU[\\mathcal{D}[\\delta_A]].$$", "_____no_output_____" ], [ "**How can we find optimal decision rules?**\n\nIn our entrepreneur example, we have that \n\n\\begin{align}\nEU[\\mathcal{D}[\\delta_A]] &= \\sum_{\\bar{X}, A}P_{\\delta_A}(\\bar{X},A)U(\\bar{X},A) \\\\\n& = \\sum_{M,S,F} P(M)P(S|M) \\delta_F(F|S) U(M,F)\\\\\n& = \\sum_{S,F} \\delta_F(F|S) \\sum_M P(M)P(S|M)U(M,F)\\\\\n& = \\sum_{S,F} \\delta_F(F|S) \\mu(S,F)\n\\end{align}\n\n(see in the whiteboard, then show equations)", "_____no_output_____" ], [ "Thus, let's calculate $\\mu(S,F)$ using `pgmpy`:", "_____no_output_____" ] ], [ [ "print(P_M), print(U_MF)", "+------+----------+\n| M | phi(M) |\n+======+==========+\n| M(0) | 0.5000 |\n+------+----------+\n| M(1) | 0.3000 |\n+------+----------+\n| M(2) | 0.2000 |\n+------+----------+\n+------+------+------------+\n| M | F | phi(M,F) |\n+======+======+============+\n| M(0) | F(0) | 0.0000 |\n+------+------+------------+\n| M(0) | F(1) | -7.0000 |\n+------+------+------------+\n| M(1) | F(0) | 0.0000 |\n+------+------+------------+\n| M(1) | F(1) | 5.0000 |\n+------+------+------------+\n| M(2) | F(0) | 0.0000 |\n+------+------+------------+\n| M(2) | F(1) | 20.0000 |\n+------+------+------------+\n" ], [ "# We already have P(M), and U(F,M). Define P(S|M)\nP_S_given_M = DiscreteFactor(variables=[\"S\", \"M\"],\n cardinality=[3, 3],\n values=[0.6, 0.3, 0.1, 0.3, 0.4, 0.4, 0.1, 0.3, 0.5])", "_____no_output_____" ], [ "# Compute mu(F,S)\nmu_FS = (P_M * P_S_given_M * U_MF).marginalize(variables=[\"M\"], inplace=False)", "_____no_output_____" ], [ "# Print mu(F,S)\nprint(mu_FS)", "+------+------+------------+\n| S | F | phi(S,F) |\n+======+======+============+\n| S(0) | F(0) | 0.0000 |\n+------+------+------------+\n| S(0) | F(1) | -1.2500 |\n+------+------+------------+\n| S(1) | F(0) | 0.0000 |\n+------+------+------------+\n| S(1) | F(1) | 1.1500 |\n+------+------+------------+\n| S(2) | F(0) | 0.0000 |\n+------+------+------------+\n| S(2) | F(1) | 2.1000 |\n+------+------+------------+\n" ] ], [ [ "Following the MEU principle, we should select for each state (the Survey, in this case) the action that maximizes $\\mu$.\n\nIn this case:", "_____no_output_____" ] ], [ [ "Image(\"figures/table.png\")", "_____no_output_____" ] ], [ [ "Finally,\n\n$$MEU[\\mathcal{D}] = \\sum_{S,F} \\delta_F^\\ast(F|S) \\mu(S,F) = 0 + 1.15 + 2.1 = 3.25$$", "_____no_output_____" ] ], [ [ "print(mu_FS)", "+------+------+------------+\n| S | F | phi(S,F) |\n+======+======+============+\n| S(0) | F(0) | 0.0000 |\n+------+------+------------+\n| S(0) | F(1) | -1.2500 |\n+------+------+------------+\n| S(1) | F(0) | 0.0000 |\n+------+------+------------+\n| S(1) | F(1) | 1.1500 |\n+------+------+------------+\n| S(2) | F(0) | 0.0000 |\n+------+------+------------+\n| S(2) | F(1) | 2.1000 |\n+------+------+------------+\n" ], [ "print(mu_FS.maximize(variables=[\"F\"], inplace=False))", "+------+----------+\n| S | phi(S) |\n+======+==========+\n| S(0) | 0.0000 |\n+------+----------+\n| S(1) | 1.1500 |\n+------+----------+\n| S(2) | 2.1000 |\n+------+----------+\n" ], [ "# Define optimal decision rule\ndelta_F_given_S = DiscreteFactor(variables=[\"F\", \"S\"],\n 
cardinality=[2, 3],\n values=[1, 0, 0, 0, 1, 1])", "_____no_output_____" ], [ "print(delta_F_given_S)", "+------+------+------------+\n| F | S | phi(F,S) |\n+======+======+============+\n| F(0) | S(0) | 1.0000 |\n+------+------+------------+\n| F(0) | S(1) | 0.0000 |\n+------+------+------------+\n| F(0) | S(2) | 0.0000 |\n+------+------+------------+\n| F(1) | S(0) | 0.0000 |\n+------+------+------------+\n| F(1) | S(1) | 1.0000 |\n+------+------+------------+\n| F(1) | S(2) | 1.0000 |\n+------+------+------------+\n" ], [ "# Obtain MEU\nMEU = (delta_F_given_S * mu_FS).marginalize(variables=[\"F\", \"S\"], inplace=False)\nprint(MEU)", "+---------+\n| phi() |\n+=========+\n| 3.2500 |\n+---------+\n" ] ], [ [ "Without this observation the MEU was 2, and now the MEU has increased more than 50%.\n\nNice, huh?", "_____no_output_____" ], [ "___\n# 2. Utility functions\n\n\n## 2.1. Utility of money\nWe have used utility functions in all the first section assuming that they were known. In this section we study these functions in more detail.\n\n**Utility functions** are a necessary tool that enables us to compare complex scenarios that involve uncertainty or risk.\n\nThe first thing that we should understand is that utility is not the same as expected payoff.\n\n**Example.** An investor must decide between a participation in company A, where he would earn $\\$3$ million without risk, and a participation in company B, where he would earn $\\$4$ million with probability 0.8 and $\\$0$ with probability 0.2.", "_____no_output_____" ] ], [ [ "Image(\"figures/utility_first.png\")", "_____no_output_____" ] ], [ [ "Which one do you prefer?\n - The risk-free one.\n\nWhat is the expected payof of the company A?\n - $\\$3$ M.\n \nWhat is the expected payoff of the company B?\n - $0.8 \\times \\$4$ M + $0.2 \\times \\$0$ M = $\\$3.2$ M.", "_____no_output_____" ], [ "**Example.**\n\nAnother common example that reflects this fact is the well-known St. Petersburg Paradox:\n\n- A fair coin is tossed repeatedly until it comes up Heads.\n- Each toss that it doesn't come up Heads, the payoff is doubled.\n- In this sense, if the coin comes up Heads in the $n$-th toss, then the payoff will be $\\$2^n$.\n\nHow much are you willing to pay to enter to this game?\n- 20, 10, 1, 5.", "_____no_output_____" ], [ "What happens is that if we compute the expected payoff it is:\n\n- $P(\\text{comes up Heads in } n-\\text{th toss}) = \\frac{1}{2^n}$\n\n$$E[\\text{Payoff}] = \\sum_{n=1}^{\\infty}P(\\text{comes up Heads in } n-\\text{th toss}) \\text{Payoff}(n) = \\sum_{n=1}^{\\infty} \\frac{1}{2^n} 2^n = \\sum_{n=1}^{\\infty} 1 = \\infty.$$", "_____no_output_____" ], [ "With these two examples, we have shown that almost always people do not always choose to maximize their monetary gain. What this implies is that the utility of money is not the money itself.\n\nIn fact, at this point of the history and after several psychological studies, we know that utility functions of money for most people look like\n\n$$U(W) = \\alpha + \\beta \\log(W + \\gamma),$$\n\nwhich is a nice *concave* function.\n\n**How does this function look like?** (see in the whiteboard).\n\n**How does the actual form of the curve have to be the attitude towards risk?**", "_____no_output_____" ], [ "## 2.2. 
Utility of multiple attributes\n\nAll the attributes affecting the utility must be integrated into one utility function.\n\nThis may be a difficult task, since we can enter into some complex fields, far beyond math, probability, and graphs.\n\n- For instance, how do we compare human life with money?\n - A low-cost airline is considering the decision not to run maintenance plans over the aircraft at every arrival.\n - If you have a car, you don't change the tires that often (every 3 months).\n \nThere have been several attempts to address this problem:\n - Micromorts: $\\frac{1}{10^6}$ chance of death.\n - [QALY](https://en.wikipedia.org/wiki/Quality-adjusted_life_year).", "_____no_output_____" ], [ "# 3. Value of perfect information\n\nWe used influence diagrams to make decisions given a set of observations.\n\nAnother type of question that may arise is **which observations should I make before making a decision?**\n\n- Initially one may think that the more information, the better (because information is power).\n- But the answer to this question is far from being that simple.", "_____no_output_____" ], [ "For instance:\n\n- In our entrepreneur example, we saw that including the information of the survey increased the MEU significantly. However, we did not take into account the costs of performing that survey. What if the cost of performing that survey makes the money gains of the company negative or too little?\n\n- Medical diagnosis relies on tests. Some of these tests are painful, risky and/or very expensive.", "_____no_output_____" ], [ "A notion that allows us to address this question is the **Value of Perfect Information**.\n\n- The value of perfect information $\\mathrm{VPI}(A|\\bar{X})$ is the value (in utility units) of observing $\\bar{X}$ before choosing an action at the node $A$.\n\n- If $\\mathcal{D}$ is the original influence diagram, and\n\n- $\\mathcal{D}_{\\bar{X}\\to A}$ is the influence diagram with the edge(s) $\\bar{X}\\to A$, \n\n- then\n\n $$\\mathrm{VPI}(A|\\bar{X}) = MEU(\\mathcal{D}_{\\bar{X}\\to A}) - MEU(\\mathcal{D}).$$", "_____no_output_____" ], [ "In the entrepreneur example,\n\n$$\\mathrm{VPI}(F|S) = MEU(\\mathcal{D}_{S\\to F}) - MEU(\\mathcal{D})=3.25 - 2 = 1.25.$$", "_____no_output_____" ], [ "> *Theorem.* The value of perfect information satisfies:\n>\n> (i) $\\mathrm{VPI}(A|\\bar{X})\\geq 0$.\n> \n> (ii) $\\mathrm{VPI}(A|\\bar{X})= 0$ if and only if the optimal decision rule for $\\mathcal{D}$ is also optimal for $\\mathcal{D}_{\\bar{X}\\to A}$.", "_____no_output_____" ], [ "This theorem practically says that the information is valuable if and only if it changes the agent's decision in at least one case.", "_____no_output_____" ], [ "**Example.** Consider the case that you are interested in two job offers in two different companies. 
Furthermore, these two companies are startups and both are looking for funding, which highly depends on the organizational quality of the company.\n\nThis situation can be modeled as:", "_____no_output_____" ] ], [ [ "Image(\"figures/vpi.png\")", "_____no_output_____" ] ], [ [ "Let's find the MEU using `pgmpy`:", "_____no_output_____" ] ], [ [ "# Define factors\nP_C1 = DiscreteFactor(variables=[\"C1\"],\n cardinality=[3],\n values=[0.1, 0.2, 0.7])\nP_C2 = DiscreteFactor(variables=[\"C2\"],\n cardinality=[3],\n values=[0.4, 0.5, 0.1])\nP_F1_given_C1 = DiscreteFactor(variables=[\"F1\", \"C1\"],\n cardinality=[2, 3],\n values=[0.9, 0.6, 0.1, 0.1, 0.4, 0.9])\nP_F2_given_C2 = DiscreteFactor(variables=[\"F2\", \"C2\"],\n cardinality=[2, 3],\n values=[0.9, 0.6, 0.1, 0.1, 0.4, 0.9])\nU_F1F2D = DiscreteFactor(variables=[\"F1\", \"F2\", \"D\"],\n cardinality=[2, 2, 2],\n values=[0, 0, 0, 1, 1, 0, 1, 1])", "_____no_output_____" ], [ "# Obtain Expected utility\nEU = (P_C1 * P_C2 * P_F1_given_C1 * P_F2_given_C2 * U_F1F2D).marginalize(variables=[\n \"C1\", \"C2\", \"F1\", \"F2\"], inplace=False)", "_____no_output_____" ], [ "# Print EU\nprint(EU)", "+------+----------+\n| D | phi(D) |\n+======+==========+\n| D(0) | 0.7200 |\n+------+----------+\n| D(1) | 0.3300 |\n+------+----------+\n" ], [ "# Obtain MEU(D)\nMEU = EU.maximize(variables=[\"D\"], inplace=False)\nprint(MEU)", "+---------+\n| phi() |\n+=========+\n| 0.7200 |\n+---------+\n" ] ], [ [ "Now, let's say that a friend of yours already works in the Company 2, so he informs you about the organizational status of that company. What is the value of that information?", "_____no_output_____" ] ], [ [ "Image(\"figures/vpi2.png\")", "_____no_output_____" ], [ "# Obtain the factor mu(D, C2)\nmu_DC2 = (P_C1 * P_C2 * P_F1_given_C1 * P_F2_given_C2 * U_F1F2D).marginalize(\n variables=[\"C1\", \"F1\", \"F2\"], inplace=False\n)", "_____no_output_____" ], [ "# Print\nprint(mu_DC2)", "+-------+------+-------------+\n| C2 | D | phi(C2,D) |\n+=======+======+=============+\n| C2(0) | D(0) | 0.2880 |\n+-------+------+-------------+\n| C2(0) | D(1) | 0.0400 |\n+-------+------+-------------+\n| C2(1) | D(0) | 0.3600 |\n+-------+------+-------------+\n| C2(1) | D(1) | 0.2000 |\n+-------+------+-------------+\n| C2(2) | D(0) | 0.0720 |\n+-------+------+-------------+\n| C2(2) | D(1) | 0.0900 |\n+-------+------+-------------+\n" ], [ "# Select optimal decision\ndelta_D_given_C2 = DiscreteFactor(variables=[\"C2\", \"D\"],\n cardinality=[3, 2],\n values=[1, 0, 1, 0, 0, 1])", "_____no_output_____" ], [ "print(delta_D_given_C2)", "+-------+------+-------------+\n| C2 | D | phi(C2,D) |\n+=======+======+=============+\n| C2(0) | D(0) | 1.0000 |\n+-------+------+-------------+\n| C2(0) | D(1) | 0.0000 |\n+-------+------+-------------+\n| C2(1) | D(0) | 1.0000 |\n+-------+------+-------------+\n| C2(1) | D(1) | 0.0000 |\n+-------+------+-------------+\n| C2(2) | D(0) | 0.0000 |\n+-------+------+-------------+\n| C2(2) | D(1) | 1.0000 |\n+-------+------+-------------+\n" ], [ "# Obtain MEU(D_C2->D)\nMEU_ = (delta_D_given_C2 * mu_DC2).marginalize(variables=[\"D\", \"C2\"], inplace=False)", "_____no_output_____" ], [ "print(MEU_)", "+---------+\n| phi() |\n+=========+\n| 0.7380 |\n+---------+\n" ], [ "# Obtain VPI\nVPI = MEU_.values - MEU.values\nVPI", "_____no_output_____" ] ], [ [ "# Announcements\n\n## Exam of module 1.", "_____no_output_____" ], [ "<script>\n $(document).ready(function(){\n $('div.prompt').hide();\n $('div.back-to-top').hide();\n $('nav#menubar').hide();\n 
$('.breadcrumb').hide();\n $('.hidden-print').hide();\n });\n</script>\n\n<footer id=\"attribution\" style=\"float:right; color:#808080; background:#fff;\">\nCreated with Jupyter by Esteban Jiménez Rodríguez.\n</footer>", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ] ]
4a896482717cf65d70dfa0f792f38357fdc1ab2e
20,645
ipynb
Jupyter Notebook
examples/04-Exporting-ranking-models.ipynb
bschifferer/models-1
b6042dbd1b98150cc50fd7d2cb6c07033f42fd35
[ "Apache-2.0" ]
null
null
null
examples/04-Exporting-ranking-models.ipynb
bschifferer/models-1
b6042dbd1b98150cc50fd7d2cb6c07033f42fd35
[ "Apache-2.0" ]
null
null
null
examples/04-Exporting-ranking-models.ipynb
bschifferer/models-1
b6042dbd1b98150cc50fd7d2cb6c07033f42fd35
[ "Apache-2.0" ]
null
null
null
35.594828
705
0.609397
[ [ [ "# Copyright 2021 NVIDIA Corporation. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ================================", "_____no_output_____" ] ], [ [ "<img src=\"http://developer.download.nvidia.com/compute/machine-learning/frameworks/nvidia_logo.png\" style=\"width: 90px; float: right;\">\n\n# Exporting Ranking Models\n\nIn this example notebook we demonstrate how to export (save) NVTabular `workflow` and a `ranking model` for model deployment with [Merlin Systems](https://github.com/NVIDIA-Merlin/systems) library. \n\nLearning Objectives:\n\n- Export NVTabular workflow for model deployment\n- Export TensorFlow DLRM model for model deployment\n\nWe will follow the steps below:\n- Prepare the data with NVTabular and export NVTabular workflow\n- Train a DLRM model with Merlin Models and export the trained model", "_____no_output_____" ], [ "## Importing Libraries", "_____no_output_____" ], [ "Let's start with importing the libraries that we'll use in this notebook.", "_____no_output_____" ] ], [ [ "import os\n\nimport nvtabular as nvt\nfrom nvtabular.ops import *\n\nfrom merlin.models.utils.example_utils import workflow_fit_transform\nfrom merlin.schema.tags import Tags\n\nimport merlin.models.tf as mm\nfrom merlin.io.dataset import Dataset\nimport tensorflow as tf", "2022-04-21 09:36:11.187627: I tensorflow/core/platform/cpu_feature_guard.cc:152] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE3 SSE4.1 SSE4.2 AVX\nTo enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\n2022-04-21 09:36:13.394590: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 16255 MB memory: -> device: 0, name: Tesla V100-SXM2-32GB-LS, pci bus id: 0000:8a:00.0, compute capability: 7.0\n" ] ], [ [ "## Feature Engineering with NVTabular", "_____no_output_____" ], [ "We use the synthetic train and test datasets generated by mimicking the real [Ali-CCP: Alibaba Click and Conversion Prediction](https://tianchi.aliyun.com/dataset/dataDetail?dataId=408#1) dataset to build our recommender system ranking models. \n\nIf you would like to use real Ali-CCP dataset instead, you can download the training and test datasets on [tianchi.aliyun.com](https://tianchi.aliyun.com/dataset/dataDetail?dataId=408#1). 
You can then use [get_aliccp()](https://github.com/NVIDIA-Merlin/models/blob/main/merlin/datasets/ecommerce/aliccp/dataset.py#L43) function to curate the raw csv files and save them as the parquet.", "_____no_output_____" ] ], [ [ "from merlin.datasets.synthetic import generate_data\n\nDATA_FOLDER = os.environ.get(\"DATA_FOLDER\", \"/workspace/data/\")\nNUM_ROWS = os.environ.get(\"NUM_ROWS\", 1000000)\n\ntrain, valid = generate_data(\"aliccp-raw\", int(NUM_ROWS), set_sizes=(0.7, 0.3))\n\n# save the datasets as parquet files\ntrain.to_ddf().to_parquet(os.path.join(DATA_FOLDER, \"train\"))\nvalid.to_ddf().to_parquet(os.path.join(DATA_FOLDER, \"valid\"))", "_____no_output_____" ] ], [ [ "Let's define our input and output paths.", "_____no_output_____" ] ], [ [ "train_path = os.path.join(DATA_FOLDER, \"train\", \"part.0.parquet\")\nvalid_path = os.path.join(DATA_FOLDER, \"valid\", \"part.0.parquet\")\noutput_path = os.path.join(DATA_FOLDER, \"processed\")", "_____no_output_____" ] ], [ [ "After we execute `fit()` and `transform()` functions on the raw dataset applying the operators defined in the NVTabular workflow pipeline below, the processed parquet files are saved to `output_path`.", "_____no_output_____" ] ], [ [ "%%time\nuser_id = [\"user_id\"] >> Categorify() >> TagAsUserID()\nitem_id = [\"item_id\"] >> Categorify() >> TagAsItemID()\ntargets = [\"click\"] >> AddMetadata(tags=[Tags.BINARY_CLASSIFICATION, \"target\"])\n\nitem_features = [\"item_category\", \"item_shop\", \"item_brand\"] >> Categorify() >> TagAsItemFeatures()\n\nuser_features = (\n [\n \"user_shops\",\n \"user_profile\",\n \"user_group\",\n \"user_gender\",\n \"user_age\",\n \"user_consumption_2\",\n \"user_is_occupied\",\n \"user_geography\",\n \"user_intentions\",\n \"user_brands\",\n \"user_categories\",\n ]\n >> Categorify()\n >> TagAsUserFeatures()\n)\n\noutputs = user_id + item_id + item_features + user_features + targets\n\nworkflow = nvt.Workflow(outputs)\n\ntrain_dataset = nvt.Dataset(train_path)\nvalid_dataset = nvt.Dataset(valid_path)\n\nworkflow.fit(train_dataset)\nworkflow.transform(train_dataset).to_parquet(output_path=output_path + \"/train/\")\nworkflow.transform(valid_dataset).to_parquet(output_path=output_path + \"/valid/\")", "/usr/local/lib/python3.8/dist-packages/cudf/core/dataframe.py:1292: UserWarning: The deep parameter is ignored and is only included for pandas compatibility.\n warnings.warn(\n" ] ], [ [ "We save NVTabular `workflow` model in the current working directory.", "_____no_output_____" ] ], [ [ "workflow.save(\"workflow\")", "_____no_output_____" ] ], [ [ "Let's check out our saved workflow model folder.", "_____no_output_____" ] ], [ [ "!apt-get update\n!apt-get install tree", "Get:1 http://archive.ubuntu.com/ubuntu focal InRelease [265 kB]\nGet:2 http://security.ubuntu.com/ubuntu focal-security InRelease [114 kB]\nGet:3 http://archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]\nGet:4 http://archive.ubuntu.com/ubuntu focal-backports InRelease [108 kB]\nGet:5 http://archive.ubuntu.com/ubuntu focal/restricted amd64 Packages [33.4 kB]\nGet:6 http://archive.ubuntu.com/ubuntu focal/universe amd64 Packages [11.3 MB]\nGet:7 http://archive.ubuntu.com/ubuntu focal/multiverse amd64 Packages [177 kB]\nGet:8 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages [1275 kB] \nGet:9 http://archive.ubuntu.com/ubuntu focal-updates/multiverse amd64 Packages [30.3 kB]\nGet:10 http://archive.ubuntu.com/ubuntu focal-updates/restricted amd64 Packages [1214 kB]\nGet:11 
http://security.ubuntu.com/ubuntu focal-security/main amd64 Packages [1771 kB]\nGet:12 http://archive.ubuntu.com/ubuntu focal-updates/universe amd64 Packages [1153 kB]\nGet:13 http://archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages [2186 kB]\nGet:14 http://archive.ubuntu.com/ubuntu focal-backports/main amd64 Packages [51.2 kB]\nGet:15 http://archive.ubuntu.com/ubuntu focal-backports/universe amd64 Packages [26.0 kB]\nGet:16 http://security.ubuntu.com/ubuntu focal-security/restricted amd64 Packages [1139 kB]\nGet:17 http://security.ubuntu.com/ubuntu focal-security/multiverse amd64 Packages [25.8 kB]\nGet:18 http://security.ubuntu.com/ubuntu focal-security/universe amd64 Packages [870 kB]\nFetched 21.9 MB in 3s (6363 kB/s) \nReading package lists... Done\nReading package lists... Done\nBuilding dependency tree \nReading state information... Done\nThe following NEW packages will be installed:\n tree\n0 upgraded, 1 newly installed, 0 to remove and 34 not upgraded.\nNeed to get 43.0 kB of archives.\nAfter this operation, 115 kB of additional disk space will be used.\nGet:1 http://archive.ubuntu.com/ubuntu focal/universe amd64 tree amd64 1.8.0-1 [43.0 kB]\nFetched 43.0 kB in 0s (139 kB/s)\nSelecting previously unselected package tree.\n(Reading database ... 44765 files and directories currently installed.)\nPreparing to unpack .../tree_1.8.0-1_amd64.deb ...\nUnpacking tree (1.8.0-1) ...\nSetting up tree (1.8.0-1) ...\n" ], [ "!tree ./workflow", "\u001b[01;34m./workflow\u001b[00m\r\n├── \u001b[01;34mcategories\u001b[00m\r\n│   ├── unique.item_brand.parquet\r\n│   ├── unique.item_category.parquet\r\n│   ├── unique.item_id.parquet\r\n│   ├── unique.item_shop.parquet\r\n│   ├── unique.user_age.parquet\r\n│   ├── unique.user_brands.parquet\r\n│   ├── unique.user_categories.parquet\r\n│   ├── unique.user_consumption_2.parquet\r\n│   ├── unique.user_gender.parquet\r\n│   ├── unique.user_geography.parquet\r\n│   ├── unique.user_group.parquet\r\n│   ├── unique.user_id.parquet\r\n│   ├── unique.user_intentions.parquet\r\n│   ├── unique.user_is_occupied.parquet\r\n│   ├── unique.user_profile.parquet\r\n│   └── unique.user_shops.parquet\r\n├── metadata.json\r\n└── workflow.pkl\r\n\r\n1 directory, 18 files\r\n" ] ], [ [ "## Build and Train a DLRM model", "_____no_output_____" ], [ "In this example, we build, train, and export a Deep Learning Recommendation Model [(DLRM)](https://arxiv.org/abs/1906.00091) architecture. To learn more about how to train different deep learning models, how easily transition from one model to another and the seamless integration between data preparation and model training visit [03-Exploring-different-models.ipynb](https://github.com/NVIDIA-Merlin/models/blob/main/examples/03-Exploring-different-models.ipynb) notebook.", "_____no_output_____" ], [ "NVTabular workflow above exports a schema file, schema.pbtxt, of our processed dataset. 
To learn more about the schema object, schema file and `tags`, you can explore [02-Merlin-Models-and-NVTabular-integration.ipynb](02-Merlin-Models-and-NVTabular-integration.ipynb).", "_____no_output_____" ] ], [ [ "# define train and valid dataset objects\ntrain = Dataset(os.path.join(output_path, \"train\", \"*.parquet\"))\nvalid = Dataset(os.path.join(output_path, \"valid\", \"*.parquet\"))\n\n# define schema object\nschema = train.schema", "_____no_output_____" ], [ "target_column = schema.select_by_tag(Tags.TARGET).column_names[0]\ntarget_column", "_____no_output_____" ], [ "model = mm.DLRMModel(\n schema,\n embedding_dim=64,\n bottom_block=mm.MLPBlock([128, 64]),\n top_block=mm.MLPBlock([128, 64, 32]),\n prediction_tasks=mm.BinaryClassificationTask(target_column, metrics=[tf.keras.metrics.AUC()]),\n)", "_____no_output_____" ], [ "%%time\n\nmodel.compile(\"adam\", run_eagerly=False)\nmodel.fit(train, validation_data=valid, batch_size=512)", "2022-04-21 09:36:37.360581: W tensorflow/python/util/util.cc:368] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.\n" ] ], [ [ "### Save model", "_____no_output_____" ], [ "The last step of machine learning (ML)/deep learning (DL) pipeline is to deploy the ETL workflow and saved model to production. In the production setting, we want to transform the input data as done during training (ETL). We need to apply the same mean/std for continuous features and use the same categorical mapping to convert the categories to continuous integer before we use the DL model for a prediction. Therefore, we deploy the NVTabular workflow with the Tensorflow model as an ensemble model to Triton Inference using [Merlin Systems](https://github.com/NVIDIA-Merlin/systems) library very easily. The ensemble model guarantees that the same transformation is applied to the raw inputs.\n\nLet's save our DLRM model.", "_____no_output_____" ] ], [ [ "model.save(\"dlrm\")", "WARNING:absl:Found untraced functions such as sequential_block_8_layer_call_fn, sequential_block_8_layer_call_and_return_conditional_losses, binary_classification_task_layer_call_fn, binary_classification_task_layer_call_and_return_conditional_losses, click/binary_classification_task/output_layer_layer_call_fn while saving (showing 5 of 48). These functions will not be directly callable after loading.\n" ] ], [ [ "We have NVTabular wokflow and DLRM model exported, now it is time to move on to the next step: model deployment with [Merlin Systems](https://github.com/NVIDIA-Merlin/systems).", "_____no_output_____" ], [ "### Deploying the model with Merlin Systems", "_____no_output_____" ], [ "We trained and exported our ranking model and NVTabular workflow. In the next step, we will learn how to deploy our trained DLRM model into [Triton Inference Server](https://github.com/triton-inference-server/server) with [Merlin Systems](https://github.com/NVIDIA-Merlin/systems) library. NVIDIA Triton Inference Server (TIS) simplifies the deployment of AI models at scale in production. TIS provides a cloud and edge inferencing solution optimized for both CPUs and GPUs. 
It supports a number of different machine learning frameworks such as TensorFlow and PyTorch.\n\nFor the next step, visit the [Merlin Systems](https://github.com/NVIDIA-Merlin/systems) library and execute the [Serving-Ranking-Models-With-Merlin-Systems](https://github.com/NVIDIA-Merlin/systems/blob/main/examples/Getting_Started/Serving-Ranking-Models-With-Merlin-Systems.ipynb) notebook to deploy our saved DLRM and NVTabular workflow models as an ensemble to TIS and obtain prediction results for a given request. In doing so, you need to mount the saved DLRM and NVTabular workflow to the inference container following the instructions in the [README.md](https://github.com/NVIDIA-Merlin/systems/blob/main/examples/Getting_Started/README.md).", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ] ]
4a896897813d856cfb2fe10b1fef8053b50341c3
41,094
ipynb
Jupyter Notebook
_docs/nbs/T887829-MaxEnt-Gridworld.ipynb
sparsh-ai/recohut
4121f665761ffe38c9b6337eaa9293b26bee2376
[ "Apache-2.0" ]
null
null
null
_docs/nbs/T887829-MaxEnt-Gridworld.ipynb
sparsh-ai/recohut
4121f665761ffe38c9b6337eaa9293b26bee2376
[ "Apache-2.0" ]
1
2022-01-12T05:40:57.000Z
2022-01-12T05:40:57.000Z
_docs/nbs/T887829-MaxEnt-Gridworld.ipynb
RecoHut-Projects/recohut
4121f665761ffe38c9b6337eaa9293b26bee2376
[ "Apache-2.0" ]
null
null
null
41,094
41,094
0.680318
[ [ [ "# MaxEnt Gridworld\n\n> Implementation of MaxEnt-IRL model for FBER recommendation system, based on the approach of Ziebart et al. 2008 paper: Maximum Entropy Inverse Reinforcement Learning.", "_____no_output_____" ] ], [ [ "import copy\n\nimport warnings\nwarnings.filterwarnings('ignore')", "_____no_output_____" ], [ "\"\"\"\nFind the value function associated with a policy. Based on Sutton & Barto, 1998.\n\nMatthew Alger, 2015\[email protected]\n\"\"\"\n\nimport numpy as np\n\ndef value(policy, n_states, transition_probabilities, reward, discount,\n threshold=1e-2):\n \"\"\"\n Find the value function associated with a policy.\n\n policy: List of action ints for each state.\n n_states: Number of states. int.\n transition_probabilities: Function taking (state, action, state) to\n transition probabilities.\n reward: Vector of rewards for each state.\n discount: MDP discount factor. float.\n threshold: Convergence threshold, default 1e-2. float.\n -> Array of values for each state\n \"\"\"\n v = np.zeros(n_states)\n\n diff = float(\"inf\")\n while diff > threshold:\n diff = 0\n for s in range(n_states):\n vs = v[s]\n a = policy[s]\n v[s] = sum(transition_probabilities[s, a, k] *\n (reward[k] + discount * v[k])\n for k in range(n_states))\n diff = max(diff, abs(vs - v[s]))\n\n return v\n\ndef optimal_value(n_states, n_actions, transition_probabilities, reward,\n discount, threshold=1e-2):\n \"\"\"\n Find the optimal value function.\n\n n_states: Number of states. int.\n n_actions: Number of actions. int.\n transition_probabilities: Function taking (state, action, state) to\n transition probabilities.\n reward: Vector of rewards for each state.\n discount: MDP discount factor. float.\n threshold: Convergence threshold, default 1e-2. float.\n -> Array of values for each state\n \"\"\"\n\n v = np.zeros(n_states)\n\n diff = float(\"inf\")\n while diff > threshold:\n diff = 0\n for s in range(n_states):\n max_v = float(\"-inf\")\n for a in range(n_actions):\n tp = transition_probabilities[s, a, :]\n max_v = max(max_v, np.dot(tp, reward + discount*v))\n\n new_diff = abs(v[s] - max_v)\n if new_diff > diff:\n diff = new_diff\n v[s] = max_v\n\n return v\n\ndef find_policy(n_states, n_actions, transition_probabilities, reward, discount,\n threshold=1e-2, v=None, stochastic=True):\n \"\"\"\n Find the optimal policy.\n\n n_states: Number of states. int.\n n_actions: Number of actions. int.\n transition_probabilities: Function taking (state, action, state) to\n transition probabilities.\n reward: Vector of rewards for each state.\n discount: MDP discount factor. float.\n threshold: Convergence threshold, default 1e-2. float.\n v: Value function (if known). Default None.\n stochastic: Whether the policy should be stochastic. 
Default True.\n -> Action probabilities for each state or action int for each state\n (depending on stochasticity).\n \"\"\"\n\n if v is None:\n v = optimal_value(n_states, n_actions, transition_probabilities, reward,\n discount, threshold)\n\n if stochastic:\n # Get Q using equation 9.2 from Ziebart's thesis.\n Q = np.zeros((n_states, n_actions))\n for i in range(n_states):\n for j in range(n_actions):\n p = transition_probabilities[i, j, :]\n Q[i, j] = p.dot(reward + discount*v)\n Q -= Q.max(axis=1).reshape((n_states, 1)) # For numerical stability.\n Q = np.exp(Q)/np.exp(Q).sum(axis=1).reshape((n_states, 1))\n return Q\n\n def _policy(s):\n return max(list(range(n_actions)),\n key=lambda a: sum(transition_probabilities[s, a, k] *\n (reward[k] + discount * v[k])\n for k in range(n_states)))\n policy = np.array([_policy(s) for s in range(n_states)])\n return policy", "_____no_output_____" ], [ "\"\"\"\nImplements the gridworld MDP.\n\nMatthew Alger, 2015\[email protected]\n\"\"\"\n\nimport numpy as np\nimport numpy.random as rn\n\nclass Gridworld(object):\n \"\"\"\n Gridworld MDP.\n \"\"\"\n\n def __init__(self, grid_size, wind, discount):\n \"\"\"\n grid_size: Grid size. int.\n wind: Chance of moving randomly. float.\n discount: MDP discount. float.\n -> Gridworld\n \"\"\"\n\n self.actions = ((1, 0), (0, 1), (-1, 0), (0, -1))\n self.n_actions = len(self.actions)\n self.n_states = grid_size**2\n self.grid_size = grid_size\n self.wind = wind\n self.discount = discount\n\n # Preconstruct the transition probability array.\n self.transition_probability = np.array(\n [[[self._transition_probability(i, j, k)\n for k in range(self.n_states)]\n for j in range(self.n_actions)]\n for i in range(self.n_states)])\n\n def __str__(self):\n return \"Gridworld({}, {}, {})\".format(self.grid_size, self.wind,\n self.discount)\n\n def feature_vector(self, i, feature_map=\"ident\"):\n \"\"\"\n Get the feature vector associated with a state integer.\n\n i: State int.\n feature_map: Which feature map to use (default ident). String in {ident,\n coord, proxi}.\n -> Feature vector.\n \"\"\"\n\n if feature_map == \"coord\":\n f = np.zeros(self.grid_size)\n x, y = i % self.grid_size, i // self.grid_size\n f[x] += 1\n f[y] += 1\n return f\n if feature_map == \"proxi\":\n f = np.zeros(self.n_states)\n x, y = i % self.grid_size, i // self.grid_size\n for b in range(self.grid_size):\n for a in range(self.grid_size):\n dist = abs(x - a) + abs(y - b)\n f[self.point_to_int((a, b))] = dist\n return f\n # Assume identity map.\n f = np.zeros(self.n_states)\n f[i] = 1\n return f\n\n def feature_matrix(self, feature_map=\"ident\"):\n \"\"\"\n Get the feature matrix for this gridworld.\n\n feature_map: Which feature map to use (default ident). String in {ident,\n coord, proxi}.\n -> NumPy array with shape (n_states, d_states).\n \"\"\"\n\n features = []\n for n in range(self.n_states):\n f = self.feature_vector(n, feature_map)\n features.append(f)\n return np.array(features)\n\n def int_to_point(self, i):\n \"\"\"\n Convert a state int into the corresponding coordinate.\n\n i: State int.\n -> (x, y) int tuple.\n \"\"\"\n\n return (i % self.grid_size, i // self.grid_size)\n\n def point_to_int(self, p):\n \"\"\"\n Convert a coordinate into the corresponding state int.\n\n p: (x, y) tuple.\n -> State int.\n \"\"\"\n\n return p[0] + p[1]*self.grid_size\n\n def neighbouring(self, i, k):\n \"\"\"\n Get whether two points neighbour each other. 
Also returns true if they\n are the same point.\n\n i: (x, y) int tuple.\n k: (x, y) int tuple.\n -> bool.\n \"\"\"\n\n return abs(i[0] - k[0]) + abs(i[1] - k[1]) <= 1\n\n def _transition_probability(self, i, j, k):\n \"\"\"\n Get the probability of transitioning from state i to state k given\n action j.\n\n i: State int.\n j: Action int.\n k: State int.\n -> p(s_k | s_i, a_j)\n \"\"\"\n\n xi, yi = self.int_to_point(i)\n xj, yj = self.actions[j]\n xk, yk = self.int_to_point(k)\n\n if not self.neighbouring((xi, yi), (xk, yk)):\n return 0.0\n\n # Is k the intended state to move to?\n if (xi + xj, yi + yj) == (xk, yk):\n return 1 - self.wind + self.wind/self.n_actions\n\n # If these are not the same point, then we can move there by wind.\n if (xi, yi) != (xk, yk):\n return self.wind/self.n_actions\n\n # If these are the same point, we can only move here by either moving\n # off the grid or being blown off the grid. Are we on a corner or not?\n if (xi, yi) in {(0, 0), (self.grid_size-1, self.grid_size-1),\n (0, self.grid_size-1), (self.grid_size-1, 0)}:\n # Corner.\n # Can move off the edge in two directions.\n # Did we intend to move off the grid?\n if not (0 <= xi + xj < self.grid_size and\n 0 <= yi + yj < self.grid_size):\n # We intended to move off the grid, so we have the regular\n # success chance of staying here plus an extra chance of blowing\n # onto the *other* off-grid square.\n return 1 - self.wind + 2*self.wind/self.n_actions\n else:\n # We can blow off the grid in either direction only by wind.\n return 2*self.wind/self.n_actions\n else:\n # Not a corner. Is it an edge?\n if (xi not in {0, self.grid_size-1} and\n yi not in {0, self.grid_size-1}):\n # Not an edge.\n return 0.0\n\n # Edge.\n # Can only move off the edge in one direction.\n # Did we intend to move off the grid?\n if not (0 <= xi + xj < self.grid_size and\n 0 <= yi + yj < self.grid_size):\n # We intended to move off the grid, so we have the regular\n # success chance of staying here.\n return 1 - self.wind + self.wind/self.n_actions\n else:\n # We can blow off the grid only by wind.\n return self.wind/self.n_actions\n\n def reward(self, state_int):\n \"\"\"\n Reward for being in state state_int.\n\n state_int: State integer. int.\n -> Reward.\n \"\"\"\n\n if state_int == self.n_states - 1:\n return 1\n return 0\n\n def average_reward(self, n_trajectories, trajectory_length, policy):\n \"\"\"\n Calculate the average total reward obtained by following a given policy\n over n_paths paths.\n\n policy: Map from state integers to action integers.\n n_trajectories: Number of trajectories. int.\n trajectory_length: Length of an episode. int.\n -> Average reward, standard deviation.\n \"\"\"\n\n trajectories = self.generate_trajectories(n_trajectories,\n trajectory_length, policy)\n rewards = [[r for _, _, r in trajectory] for trajectory in trajectories]\n rewards = np.array(rewards)\n\n # Add up all the rewards to find the total reward.\n total_reward = rewards.sum(axis=1)\n\n # Return the average reward and standard deviation.\n return total_reward.mean(), total_reward.std()\n\n def optimal_policy(self, state_int):\n \"\"\"\n The optimal policy for this gridworld.\n\n state_int: What state we are in. 
int.\n -> Action int.\n \"\"\"\n\n sx, sy = self.int_to_point(state_int)\n\n if sx < self.grid_size and sy < self.grid_size:\n return rn.randint(0, 2)\n if sx < self.grid_size-1:\n return 0\n if sy < self.grid_size-1:\n return 1\n raise ValueError(\"Unexpected state.\")\n\n def optimal_policy_deterministic(self, state_int):\n \"\"\"\n Deterministic version of the optimal policy for this gridworld.\n\n state_int: What state we are in. int.\n -> Action int.\n \"\"\"\n\n sx, sy = self.int_to_point(state_int)\n if sx < sy:\n return 0\n return 1\n\n def generate_trajectories(self, n_trajectories, trajectory_length, policy,\n random_start=False):\n \"\"\"\n Generate n_trajectories trajectories with length trajectory_length,\n following the given policy.\n\n n_trajectories: Number of trajectories. int.\n trajectory_length: Length of an episode. int.\n policy: Map from state integers to action integers.\n random_start: Whether to start randomly (default False). bool.\n -> [[(state int, action int, reward float)]]\n \"\"\"\n\n trajectories = []\n for _ in range(n_trajectories):\n if random_start:\n sx, sy = rn.randint(self.grid_size), rn.randint(self.grid_size)\n else:\n sx, sy = 0, 0\n\n trajectory = []\n for _ in range(trajectory_length):\n if rn.random() < self.wind:\n action = self.actions[rn.randint(0, 4)]\n else:\n # Follow the given policy.\n action = self.actions[policy(self.point_to_int((sx, sy)))]\n\n if (0 <= sx + action[0] < self.grid_size and\n 0 <= sy + action[1] < self.grid_size):\n next_sx = sx + action[0]\n next_sy = sy + action[1]\n else:\n next_sx = sx\n next_sy = sy\n\n state_int = self.point_to_int((sx, sy))\n action_int = self.actions.index(action)\n next_state_int = self.point_to_int((next_sx, next_sy))\n reward = self.reward(next_state_int)\n trajectory.append((state_int, action_int, reward))\n\n sx = next_sx\n sy = next_sy\n\n trajectories.append(trajectory)\n\n return np.array(trajectories)", "_____no_output_____" ], [ "# Quick unit test using gridworld.\n\ngw = Gridworld(3, 0.3, 0.9)\n\nv = value([gw.optimal_policy_deterministic(s) for s in range(gw.n_states)],\n gw.n_states,\n gw.transition_probability,\n [gw.reward(s) for s in range(gw.n_states)],\n gw.discount)\n\nassert np.isclose(v,\n [5.7194282, 6.46706692, 6.42589811,\n 6.46706692, 7.47058224, 7.96505174,\n 6.42589811, 7.96505174, 8.19268666], 1).all()\n\nopt_v = optimal_value(gw.n_states,\n gw.n_actions,\n gw.transition_probability,\n [gw.reward(s) for s in range(gw.n_states)],\n gw.discount)\n\nassert np.isclose(v, opt_v).all()", "_____no_output_____" ], [ "\"\"\"\nImplements maximum entropy inverse reinforcement learning (Ziebart et al., 2008)\n\nMatthew Alger, 2015\[email protected]\n\"\"\"\n\nfrom itertools import product\n\nimport numpy as np\nimport numpy.random as rn\n\ndef irl(feature_matrix, n_actions, discount, transition_probability,\n trajectories, epochs, learning_rate):\n \"\"\"\n Find the reward function for the given trajectories.\n\n feature_matrix: Matrix with the nth row representing the nth state. NumPy\n array with shape (N, D) where N is the number of states and D is the\n dimensionality of the state.\n n_actions: Number of actions A. int.\n discount: Discount factor of the MDP. float.\n transition_probability: NumPy array mapping (state_i, action, state_k) to\n the probability of transitioning from state_i to state_k under action.\n Shape (N, A, N).\n trajectories: 3D array of state/action pairs. States are ints, actions\n are ints. 
NumPy array with shape (T, L, 2) where T is the number of\n trajectories and L is the trajectory length.\n epochs: Number of gradient descent steps. int.\n learning_rate: Gradient descent learning rate. float.\n -> Reward vector with shape (N,).\n \"\"\"\n\n n_states, d_states = feature_matrix.shape\n\n # Initialise weights.\n alpha = rn.uniform(size=(d_states,))\n\n # Calculate the feature expectations \\tilde{phi}.\n feature_expectations = find_feature_expectations(feature_matrix,\n trajectories)\n\n # Gradient descent on alpha.\n for i in range(epochs):\n # print(\"i: {}\".format(i))\n r = feature_matrix.dot(alpha)\n expected_svf = find_expected_svf(n_states, r, n_actions, discount,\n transition_probability, trajectories)\n grad = feature_expectations - feature_matrix.T.dot(expected_svf)\n\n alpha += learning_rate * grad\n\n return feature_matrix.dot(alpha).reshape((n_states,))\n\ndef find_svf(n_states, trajectories):\n \"\"\"\n Find the state visitation frequency from trajectories.\n\n n_states: Number of states. int.\n trajectories: 3D array of state/action pairs. States are ints, actions\n are ints. NumPy array with shape (T, L, 2) where T is the number of\n trajectories and L is the trajectory length.\n -> State visitation frequencies vector with shape (N,).\n \"\"\"\n\n svf = np.zeros(n_states)\n\n for trajectory in trajectories:\n for state, _, _ in trajectory:\n svf[state] += 1\n\n svf /= trajectories.shape[0]\n\n return svf\n\ndef find_feature_expectations(feature_matrix, trajectories):\n \"\"\"\n Find the feature expectations for the given trajectories. This is the\n average path feature vector.\n\n feature_matrix: Matrix with the nth row representing the nth state. NumPy\n array with shape (N, D) where N is the number of states and D is the\n dimensionality of the state.\n trajectories: 3D array of state/action pairs. States are ints, actions\n are ints. NumPy array with shape (T, L, 2) where T is the number of\n trajectories and L is the trajectory length.\n -> Feature expectations vector with shape (D,).\n \"\"\"\n\n feature_expectations = np.zeros(feature_matrix.shape[1])\n\n for trajectory in trajectories:\n for state, _, _ in trajectory:\n feature_expectations += feature_matrix[state]\n\n feature_expectations /= trajectories.shape[0]\n\n return feature_expectations\n\ndef find_expected_svf(n_states, r, n_actions, discount,\n transition_probability, trajectories):\n \"\"\"\n Find the expected state visitation frequencies using algorithm 1 from\n Ziebart et al. 2008.\n\n n_states: Number of states N. int.\n alpha: Reward. NumPy array with shape (N,).\n n_actions: Number of actions A. int.\n discount: Discount factor of the MDP. float.\n transition_probability: NumPy array mapping (state_i, action, state_k) to\n the probability of transitioning from state_i to state_k under action.\n Shape (N, A, N).\n trajectories: 3D array of state/action pairs. States are ints, actions\n are ints. 
NumPy array with shape (T, L, 2) where T is the number of\n trajectories and L is the trajectory length.\n -> Expected state visitation frequencies vector with shape (N,).\n \"\"\"\n\n n_trajectories = trajectories.shape[0]\n trajectory_length = trajectories.shape[1]\n\n policy = find_policy(n_states, r, n_actions, discount,\n transition_probability)\n # policy = find_policy(n_states, n_actions,\n # transition_probability, r, discount)\n\n start_state_count = np.zeros(n_states)\n for trajectory in trajectories:\n start_state_count[trajectory[0, 0]] += 1\n p_start_state = start_state_count/n_trajectories\n\n expected_svf = np.tile(p_start_state, (trajectory_length, 1)).T\n for t in range(1, trajectory_length):\n expected_svf[:, t] = 0\n for i, j, k in product(list(range(n_states)), list(range(n_actions)), list(range(n_states))):\n expected_svf[k, t] += (expected_svf[i, t-1] *\n policy[i, j] * # Stochastic policy\n transition_probability[i, j, k])\n\n return expected_svf.sum(axis=1)\n\ndef softmax(x1, x2):\n \"\"\"\n Soft-maximum calculation, from algorithm 9.2 in Ziebart's PhD thesis.\n\n x1: float.\n x2: float.\n -> softmax(x1, x2)\n \"\"\"\n\n max_x = max(x1, x2)\n min_x = min(x1, x2)\n return max_x + np.log(1 + np.exp(min_x - max_x))\n\ndef find_policy(n_states, r, n_actions, discount,\n transition_probability):\n \"\"\"\n Find a policy with linear value iteration. Based on the code accompanying\n the Levine et al. GPIRL paper and on Ziebart's PhD thesis (algorithm 9.1).\n\n n_states: Number of states N. int.\n r: Reward. NumPy array with shape (N,).\n n_actions: Number of actions A. int.\n discount: Discount factor of the MDP. float.\n transition_probability: NumPy array mapping (state_i, action, state_k) to\n the probability of transitioning from state_i to state_k under action.\n Shape (N, A, N).\n -> NumPy array of states and the probability of taking each action in that\n state, with shape (N, A).\n \"\"\"\n\n # V = value_iteration.value(n_states, transition_probability, r, discount)\n\n # NumPy's dot really dislikes using inf, so I'm making everything finite\n # using nan_to_num.\n V = np.nan_to_num(np.ones((n_states, 1)) * float(\"-inf\"))\n\n diff = np.ones((n_states,))\n while (diff > 1e-4).all(): # Iterate until convergence.\n new_V = r.copy()\n for j in range(n_actions):\n for i in range(n_states):\n new_V[i] = softmax(new_V[i], r[i] + discount*\n np.sum(transition_probability[i, j, k] * V[k]\n for k in range(n_states)))\n\n # # This seems to diverge, so we z-score it (engineering hack).\n new_V = (new_V - new_V.mean())/new_V.std()\n\n diff = abs(V - new_V)\n V = new_V\n\n # We really want Q, not V, so grab that using equation 9.2 from the thesis.\n Q = np.zeros((n_states, n_actions))\n for i in range(n_states):\n for j in range(n_actions):\n p = np.array([transition_probability[i, j, k]\n for k in range(n_states)])\n Q[i, j] = p.dot(r + discount*V)\n\n # Softmax by row to interpret these values as probabilities.\n Q -= Q.max(axis=1).reshape((n_states, 1)) # For numerical stability.\n Q = np.exp(Q)/np.exp(Q).sum(axis=1).reshape((n_states, 1))\n return Q\n\ndef expected_value_difference(n_states, n_actions, transition_probability,\n reward, discount, p_start_state, optimal_value, true_reward):\n \"\"\"\n Calculate the expected value difference, which is a proxy to how good a\n recovered reward function is.\n\n n_states: Number of states. int.\n n_actions: Number of actions. 
int.\n transition_probability: NumPy array mapping (state_i, action, state_k) to\n the probability of transitioning from state_i to state_k under action.\n Shape (N, A, N).\n reward: Reward vector mapping state int to reward. Shape (N,).\n discount: Discount factor. float.\n p_start_state: Probability vector with the ith component as the probability\n that the ith state is the start state. Shape (N,).\n optimal_value: Value vector for the ground reward with optimal policy.\n The ith component is the value of the ith state. Shape (N,).\n true_reward: True reward vector. Shape (N,).\n -> Expected value difference. float.\n \"\"\"\n\n policy = value_iteration.find_policy(n_states, n_actions,\n transition_probability, reward, discount)\n value = value_iteration.value(policy.argmax(axis=1), n_states,\n transition_probability, true_reward, discount)\n\n evd = optimal_value.dot(p_start_state) - value.dot(p_start_state)\n return evd\n", "_____no_output_____" ], [ "\"\"\"\nRun maximum entropy inverse reinforcement learning on the gridworld MDP.\n\nMatthew Alger, 2015\[email protected]\n\"\"\"\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n\ndef main(grid_size, discount, n_trajectories, epochs, learning_rate):\n \"\"\"\n Run maximum entropy inverse reinforcement learning on the gridworld MDP.\n\n Plots the reward function.\n\n grid_size: Grid size. int.\n discount: MDP discount factor. float.\n n_trajectories: Number of sampled trajectories. int.\n epochs: Gradient descent iterations. int.\n learning_rate: Gradient descent learning rate. float.\n \"\"\"\n\n wind = 0.3\n trajectory_length = 3*grid_size\n\n gw = Gridworld(grid_size, wind, discount)\n trajectories = gw.generate_trajectories(n_trajectories,\n trajectory_length,\n gw.optimal_policy)\n feature_matrix = gw.feature_matrix()\n ground_r = np.array([gw.reward(s) for s in range(gw.n_states)])\n r = irl(feature_matrix, gw.n_actions, discount,\n gw.transition_probability, trajectories, epochs, learning_rate)\n\n plt.subplot(1, 2, 1)\n plt.pcolor(ground_r.reshape((grid_size, grid_size)))\n plt.colorbar()\n plt.title(\"Groundtruth reward\")\n plt.subplot(1, 2, 2)\n plt.pcolor(r.reshape((grid_size, grid_size)))\n plt.colorbar()\n plt.title(\"Recovered reward\")\n plt.show()\n\nif __name__ == '__main__':\n main(5, 0.01, 20, 200, 0.01)", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
4a8978970b27a7212d6e2f1c2595ac343672a656
8,574
ipynb
Jupyter Notebook
notebook/opencv_warp_affine_basic.ipynb
puyopop/python-snippets
9d70aa3b2a867dd22f5a5e6178a5c0c5081add73
[ "MIT" ]
174
2018-05-30T21:14:50.000Z
2022-03-25T07:59:37.000Z
notebook/opencv_warp_affine_basic.ipynb
puyopop/python-snippets
9d70aa3b2a867dd22f5a5e6178a5c0c5081add73
[ "MIT" ]
5
2019-08-10T03:22:02.000Z
2021-07-12T20:31:17.000Z
notebook/opencv_warp_affine_basic.ipynb
puyopop/python-snippets
9d70aa3b2a867dd22f5a5e6178a5c0c5081add73
[ "MIT" ]
53
2018-04-27T05:26:35.000Z
2022-03-25T07:59:37.000Z
19.267416
104
0.48787
[ [ [ "import cv2\nimport numpy as np\nimport math", "_____no_output_____" ], [ "img = cv2.imread('data/src/lena.jpg')", "_____no_output_____" ], [ "h, w, c = img.shape\nprint(h, w, c)", "225 400 3\n" ] ], [ [ "![](data/src/lena.jpg)", "_____no_output_____" ] ], [ [ "mat = cv2.getRotationMatrix2D((w / 2, h / 2), 45, 0.5)\nprint(mat)", "[[ 0.35355339 0.35355339 89.51456544]\n [ -0.35355339 0.35355339 143.43592168]]\n" ], [ "affine_img = cv2.warpAffine(img, mat, (w, h))\ncv2.imwrite('data/dst/opencv_affine.jpg', affine_img)", "_____no_output_____" ] ], [ [ "![](data/dst/opencv_affine.jpg)", "_____no_output_____" ] ], [ [ "affine_img_half = cv2.warpAffine(img, mat, (w, h // 2))\ncv2.imwrite('data/dst/opencv_affine_half.jpg', affine_img_half)", "_____no_output_____" ] ], [ [ "![](data/dst/opencv_affine_half.jpg)", "_____no_output_____" ] ], [ [ "affine_img_flags = cv2.warpAffine(img, mat, (w, h), flags=cv2.INTER_CUBIC)\ncv2.imwrite('data/dst/opencv_affine_flags.jpg', affine_img_flags)", "_____no_output_____" ] ], [ [ "![](data/dst/opencv_affine_flags.jpg)", "_____no_output_____" ] ], [ [ "affine_img_bv = cv2.warpAffine(img, mat, (w, h), borderValue=(0, 128, 255))\ncv2.imwrite('data/dst/opencv_affine_border_value.jpg', affine_img_bv)", "_____no_output_____" ] ], [ [ "![](data/dst/opencv_affine_border_value.jpg)", "_____no_output_____" ] ], [ [ "dst = img // 4", "_____no_output_____" ], [ "affine_img_bm_bt = cv2.warpAffine(img, mat, (w, h), borderMode=cv2.BORDER_TRANSPARENT, dst=dst)\ncv2.imwrite('data/dst/opencv_affine_border_transparent.jpg', affine_img_bm_bt)", "_____no_output_____" ] ], [ [ "![](data/dst/opencv_affine_border_transparent.jpg)", "_____no_output_____" ] ], [ [ "affine_img_bm_br = cv2.warpAffine(img, mat, (w, h), borderMode=cv2.BORDER_REPLICATE)\ncv2.imwrite('data/dst/opencv_affine_border_replicate.jpg', affine_img_bm_br)", "_____no_output_____" ] ], [ [ "![](data/dst/opencv_affine_border_replicate.jpg)", "_____no_output_____" ] ], [ [ "affine_img_bm_bw = cv2.warpAffine(img, mat, (w, h), borderMode=cv2.BORDER_WRAP)\ncv2.imwrite('data/dst/opencv_affine_border_wrap.jpg', affine_img_bm_bw)", "_____no_output_____" ] ], [ [ "![](data/dst/opencv_affine_border_wrap.jpg)", "_____no_output_____" ] ], [ [ "mat = np.array([[1, 0, 50], [0, 1, 20]], dtype=np.float32)\nprint(mat)", "[[ 1. 0. 50.]\n [ 0. 1. 20.]]\n" ], [ "affine_img_translation = cv2.warpAffine(img, mat, (w, h))\ncv2.imwrite('data/dst/opencv_affine_translation.jpg', affine_img_translation)", "_____no_output_____" ] ], [ [ "![](data/dst/opencv_affine_translation.jpg)", "_____no_output_____" ] ], [ [ "a = math.tan(math.radians(15))", "_____no_output_____" ], [ "mat = np.array([[1, a, 0], [0, 1, 0]], dtype=np.float32)\nprint(mat)", "[[1. 0.2679492 0. ]\n [0. 1. 0. ]]\n" ], [ "affine_img_skew_x = cv2.warpAffine(img, mat, (int(w + h * a), h))\ncv2.imwrite('data/dst/opencv_affine_skew_x.jpg', affine_img_skew_x)", "_____no_output_____" ] ], [ [ "![](data/dst/opencv_affine_skew_x.jpg)", "_____no_output_____" ] ], [ [ "mat = np.array([[1, 0, 0], [a, 1, 0]], dtype=np.float32)\nprint(mat)", "[[1. 0. 0. ]\n [0.2679492 1. 0. ]]\n" ], [ "affine_img_skew_y = cv2.warpAffine(img, mat, (w, int(h + w * a)))\ncv2.imwrite('data/dst/opencv_affine_skew_y.jpg', affine_img_skew_y)", "_____no_output_____" ] ], [ [ "![](data/dst/opencv_affine_skew_y.jpg)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
4a898a52fe35cb76f3a5b340080f5bcac0e369e8
3,100
ipynb
Jupyter Notebook
SQL/sqlalchemy_Basic.ipynb
corybaird/Web_development
01dff32d130655ec8b2708ae2b99fe0b3986b871
[ "MIT" ]
null
null
null
SQL/sqlalchemy_Basic.ipynb
corybaird/Web_development
01dff32d130655ec8b2708ae2b99fe0b3986b871
[ "MIT" ]
null
null
null
SQL/sqlalchemy_Basic.ipynb
corybaird/Web_development
01dff32d130655ec8b2708ae2b99fe0b3986b871
[ "MIT" ]
null
null
null
21.232877
120
0.421935
[ [ [ "# [SQlalchemy basics](https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.DataFrame.to_sql.html)", "_____no_output_____" ] ], [ [ "from sqlalchemy import create_engine\nimport pandas as pd\nengine = create_engine('sqlite://', echo=False)", "_____no_output_____" ], [ "df = pd.DataFrame({'name' : ['User 1', 'User 2', 'User 3']})\ndf", "_____no_output_____" ], [ "df.to_sql(\n 'users', #name of SQL table\n con=engine #Sql database name\n)", "_____no_output_____" ], [ "engine.execute(\"SELECT * FROM users\").fetchall()", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ] ]
4a8994e61ae222ccf4610437d5e084a1db349959
2,756
ipynb
Jupyter Notebook
community_detection/group_segments/Group_merging.ipynb
etherlabsio/hinton
eedd99f6a5fbbc38283bbf945dff4f4006a3d1f5
[ "MIT" ]
null
null
null
community_detection/group_segments/Group_merging.ipynb
etherlabsio/hinton
eedd99f6a5fbbc38283bbf945dff4f4006a3d1f5
[ "MIT" ]
null
null
null
community_detection/group_segments/Group_merging.ipynb
etherlabsio/hinton
eedd99f6a5fbbc38283bbf945dff4f4006a3d1f5
[ "MIT" ]
1
2020-04-19T11:08:02.000Z
2020-04-19T11:08:02.000Z
45.933333
1,394
0.616473
[ [ [ "import pickle\nimport networkx as nx\nwith open(\"meeting_graph\", \"rb\") as f:\n nodes, edges = pickle.load(f) \nG = nx.Graph() \nG.add_nodes_from(nodes) \nG.add_edges_from(edges)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
4a8996d8a7fc50d4e0f46796cb0e2a2d1d0ff804
10,746
ipynb
Jupyter Notebook
coursera_ai/week4/dcgan19.ipynb
mbalgabayev/claimed
b595c25a752874602054443fb8785611c5e863e4
[ "Apache-2.0" ]
308
2021-08-09T20:08:59.000Z
2022-03-31T15:24:02.000Z
coursera_ai/week4/dcgan19.ipynb
mbalgabayev/claimed
b595c25a752874602054443fb8785611c5e863e4
[ "Apache-2.0" ]
15
2021-09-12T15:06:13.000Z
2022-03-31T19:02:08.000Z
coursera_ai/week4/dcgan19.ipynb
mbalgabayev/claimed
b595c25a752874602054443fb8785611c5e863e4
[ "Apache-2.0" ]
615
2021-08-11T12:41:21.000Z
2022-03-31T18:08:12.000Z
30.87931
116
0.550903
[ [ [ "MIT License\n\nCopyright (c) 2017 Erik Linder-Norén\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.", "_____no_output_____" ] ], [ [ "#Please make sure you have at least TensorFlow version 1.12 installed, if not please uncomment and use the \n# pip command below to upgrade. When in a jupyter environment (especially IBM Watson Studio),\n# please don't forget to restart the kernel\nimport tensorflow as tf\ntf.__version__", "_____no_output_____" ], [ "!pip install --upgrade tensorflow", "_____no_output_____" ], [ "from __future__ import print_function, division\n\nfrom keras.datasets import mnist\nfrom keras.layers import Input, Dense, Reshape, Flatten, Dropout\nfrom keras.layers import BatchNormalization, Activation, ZeroPadding2D\nfrom keras.layers.advanced_activations import LeakyReLU\nfrom keras.layers.convolutional import UpSampling2D, Conv2D\nfrom keras.models import Sequential, Model\nfrom keras.optimizers import Adam\n\nimport matplotlib.pyplot as plt\n\nimport sys\n\nimport numpy as np", "_____no_output_____" ], [ "img_rows = 28\nimg_cols = 28\nchannels = 1\nlatent_dim = 100\nimg_shape = (img_rows, img_cols, channels)", "_____no_output_____" ], [ "def build_generator():\n\n model = Sequential()\n\n model.add(Dense(128 * 7 * 7, activation=\"relu\", input_dim=latent_dim))\n model.add(Reshape((7, 7, 128)))\n model.add(UpSampling2D())\n model.add(Conv2D(128, kernel_size=3, padding=\"same\"))\n model.add(BatchNormalization(momentum=0.8))\n model.add(Activation(\"relu\"))\n model.add(UpSampling2D())\n model.add(Conv2D(64, kernel_size=3, padding=\"same\"))\n model.add(BatchNormalization(momentum=0.8))\n model.add(Activation(\"relu\"))\n model.add(Conv2D(channels, kernel_size=3, padding=\"same\"))\n model.add(Activation(\"tanh\"))\n\n model.summary()\n\n noise = Input(shape=(latent_dim,))\n img = model(noise)\n\n return Model(noise, img)", "_____no_output_____" ], [ "def build_discriminator():\n\n model = Sequential()\n\n model.add(Conv2D(32, kernel_size=3, strides=2, input_shape=img_shape, padding=\"same\"))\n model.add(LeakyReLU(alpha=0.2))\n model.add(Dropout(0.25))\n model.add(Conv2D(64, kernel_size=3, strides=2, padding=\"same\"))\n model.add(ZeroPadding2D(padding=((0,1),(0,1))))\n model.add(BatchNormalization(momentum=0.8))\n model.add(LeakyReLU(alpha=0.2))\n model.add(Dropout(0.25))\n model.add(Conv2D(128, kernel_size=3, strides=2, padding=\"same\"))\n model.add(BatchNormalization(momentum=0.8))\n model.add(LeakyReLU(alpha=0.2))\n model.add(Dropout(0.25))\n model.add(Conv2D(256, kernel_size=3, 
strides=1, padding=\"same\"))\n model.add(BatchNormalization(momentum=0.8))\n model.add(LeakyReLU(alpha=0.2))\n model.add(Dropout(0.25))\n model.add(Flatten())\n model.add(Dense(1, activation='sigmoid'))\n\n model.summary()\n\n img = Input(shape=img_shape)\n validity = model(img)\n\n return Model(img, validity)", "_____no_output_____" ], [ "\n\noptimizer = Adam(0.0002, 0.5)\n\n# Build and compile the discriminator\ndiscriminator = build_discriminator()\ndiscriminator.compile(loss='binary_crossentropy',\n optimizer=optimizer,\n metrics=['accuracy'])\n\n# Build the generator\ngenerator = build_generator()\n\n# The generator takes noise as input and generates imgs\nz = Input(shape=(latent_dim,))\nimg = generator(z)\n\n# For the combined model we will only train the generator\ndiscriminator.trainable = False\n\n# The discriminator takes generated images as input and determines validity\nvalid = discriminator(img)\n\n# The combined model (stacked generator and discriminator)\n# Trains the generator to fool the discriminator\ncombined = Model(z, valid)\ncombined.compile(loss='binary_crossentropy', optimizer=optimizer)", "_____no_output_____" ], [ "def save_imgs(epoch):\n r, c = 5, 5\n noise = np.random.normal(0, 1, (r * c, latent_dim))\n gen_imgs = generator.predict(noise)\n\n # Rescale images 0 - 1\n gen_imgs = 0.5 * gen_imgs + 0.5\n\n fig, axs = plt.subplots(r, c)\n cnt = 0\n for i in range(r):\n for j in range(c):\n axs[i,j].imshow(gen_imgs[cnt, :,:,0], cmap='gray')\n axs[i,j].axis('off')\n cnt += 1\n fig.savefig(\"images/mnist_%d.png\" % epoch)\n plt.close()", "_____no_output_____" ], [ "def train(epochs, batch_size=128, save_interval=50):\n\n # Load the dataset\n (X_train, _), (_, _) = mnist.load_data()\n\n # Rescale -1 to 1\n X_train = X_train / 127.5 - 1.\n X_train = np.expand_dims(X_train, axis=3)\n\n # Adversarial ground truths\n valid = np.ones((batch_size, 1))\n fake = np.zeros((batch_size, 1))\n\n for epoch in range(epochs):\n\n # ---------------------\n # Train Discriminator\n # ---------------------\n\n # Select a random half of images\n idx = np.random.randint(0, X_train.shape[0], batch_size)\n imgs = X_train[idx]\n\n # Sample noise and generate a batch of new images\n noise = np.random.normal(0, 1, (batch_size, latent_dim))\n gen_imgs = generator.predict(noise)\n\n # Train the discriminator (real classified as ones and generated as zeros)\n d_loss_real = discriminator.train_on_batch(imgs, valid)\n d_loss_fake = discriminator.train_on_batch(gen_imgs, fake)\n d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)\n\n # ---------------------\n # Train Generator\n # ---------------------\n\n # Train the generator (wants discriminator to mistake images as real)\n g_loss = combined.train_on_batch(noise, valid)\n\n # Plot the progress\n print (\"%d [D loss: %f, acc.: %.2f%%] [G loss: %f]\" % (epoch, d_loss[0], 100*d_loss[1], g_loss))\n\n # If at save interval => save generated image samples\n if epoch % save_interval == 0:\n save_imgs(epoch)", "_____no_output_____" ], [ "!mkdir -p images", "_____no_output_____" ], [ "train(epochs=4000, batch_size=32, save_interval=50)", "_____no_output_____" ], [ "ls images", "_____no_output_____" ], [ "from IPython.display import display\nfrom PIL import Image\n\n\npath=\"images/mnist_0.png\"\ndisplay(Image.open(path))", "_____no_output_____" ], [ "\nfrom IPython.display import display\nfrom PIL import Image\n\n\npath=\"images/mnist_3950.png\"\ndisplay(Image.open(path))", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a899a988b3ab08cbbfae2b544cf09bb7bed64a4
369,065
ipynb
Jupyter Notebook
HeadPose_SageMaker_PythonSDK/HeadPose_SageMaker_PySDK-Gluon.ipynb
hpuri/deeplens-posture-test1
df0da40b2656402b615ecd66033fb885ddc2e85a
[ "Apache-2.0" ]
16
2018-03-14T13:18:20.000Z
2021-10-30T01:19:21.000Z
HeadPose_SageMaker_PythonSDK/HeadPose_SageMaker_PySDK-Gluon.ipynb
hpuri/deeplens-posture-test1
df0da40b2656402b615ecd66033fb885ddc2e85a
[ "Apache-2.0" ]
1
2021-02-15T07:43:04.000Z
2021-02-15T07:43:04.000Z
HeadPose_SageMaker_PythonSDK/HeadPose_SageMaker_PySDK-Gluon.ipynb
hpuri/deeplens-posture-test1
df0da40b2656402b615ecd66033fb885ddc2e85a
[ "Apache-2.0" ]
13
2018-02-16T07:08:58.000Z
2021-10-30T01:19:14.000Z
238.877023
116,404
0.862431
[ [ [ "# Training and hosting SageMaker Models using the Apache MXNet Gluon API\n\nWhen there is a person in front of you, your human eyes can immediately recognize what direction the person is looking at (e.g. either facing straight up to you or looking at somewhere else). The direction is defined as the head-pose. We are going to develop a deep neural learning model to estimate such a head-pose based on an input human head image. The **SageMaker Python SDK** makes it easy to train and deploy MXNet models. In this example, we train a ResNet-50 model using the Apache MXNet [Gluon API](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html) and the head-pose dataset. \n\nThe task at hand is to train a model using the head-pose dataset so that the trained model is able to classify head-pose into 9 different categories (the combinations of 3 tilt and 3 pan angles).", "_____no_output_____" ] ], [ [ "import sys\nprint(sys.version)", "3.6.2 |Anaconda custom (64-bit)| (default, Jul 20 2017, 13:51:32) \n[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)]\n" ] ], [ [ "### Setup\n\nFirst we need to define a few variables that will be needed later in the example.", "_____no_output_____" ] ], [ [ "from sagemaker import get_execution_role\n\ns3_bucket = '<your S3 bucket >'\nheadpose_folder = 'headpose'\n\n#Bucket location to save your custom code in tar.gz format.\ncustom_code_upload_location = 's3://{}/{}/customMXNetcodes'.format(s3_bucket, headpose_folder)\n\n#Bucket location where results of model training are saved.\nmodel_artifacts_location = 's3://{}/{}/artifacts'.format(s3_bucket, headpose_folder)\n\n#IAM execution role that gives SageMaker access to resources in your AWS account.\n#We can use the SageMaker Python SDK to get the role from our notebook environment. \nrole = get_execution_role()", "_____no_output_____" ] ], [ [ "### The training script\n\nThe ``EntryPt-headpose.py`` script provides all the code we need for training and hosting a SageMaker model. The script we will use is adapted and modified from Apache MXNet [MNIST tutorial](https://github.com/awslabs/amazon-sagemaker-examples/tree/master/sagemaker-python-sdk/mxnet_mnist) and [HeadPose tutorial](https://)", "_____no_output_____" ] ], [ [ "!cat EntryPt-headpose-Gluon.py", "import logging\r\n\r\nimport mxnet as mx\r\nimport numpy as np\r\nimport os\r\nimport urllib\r\nimport pickle\r\nimport sys\r\nimport cv2\r\nfrom mxnet import init\r\nfrom mxnet import gluon\r\nfrom mxnet import autograd\r\nfrom mxnet import nd\r\nfrom mxnet.gluon.model_zoo.vision import resnet50_v1\r\n\r\n### The shape of input image. 
\r\n### Aspect Ratio 1:1 # (1,3,84,84)\r\nDSHAPE = (1,3,84,84)\r\n\r\n##################################\r\n###\r\n### Helper functions\r\n###\r\n##################################\r\n\r\n#cv2 is not supported in ml.m4 instance\r\ndef shiftHSV(im, h_shift_lim=(-180, 180),\r\n s_shift_lim=(-255, 255),\r\n v_shift_lim=(-255, 255), u=0.5):\r\n if np.random.random() < u:\r\n im = cv2.cvtColor(im, cv2.COLOR_BGR2HSV)\r\n h, s, v = cv2.split(im)\r\n h_shift = np.random.uniform(h_shift_lim[0], h_shift_lim[1])\r\n h = cv2.add(h, h_shift) \r\n s_shift = np.random.uniform(s_shift_lim[0], s_shift_lim[1])\r\n s = cv2.add(s, s_shift)\r\n v_shift = np.random.uniform(v_shift_lim[0], v_shift_lim[1])\r\n v = cv2.add(v, v_shift)\r\n im = cv2.merge((h, s, v))\r\n im = cv2.cvtColor(im, cv2.COLOR_HSV2BGR)\r\n im = np.uint8(im) \r\n im = np.float32(im)\r\n return im \r\n \r\ndef load_data(path):\r\n ### #Aspect Ratio 1:1 # 6.7 GB\r\n trn_im, test_im, trn_output, test_output = pickle.load(open(find_file(path,\"HeadPoseData_trn_test_x15_py2.pkl\"), 'rb')) \r\n\r\n print(\"dataset loaded !\")\r\n return trn_im, test_im, trn_output, test_output\r\n\r\ndef find_file(root_path, file_name):\r\n for root, dirs, files in os.walk(root_path):\r\n if file_name in files:\r\n return os.path.join(root, file_name)\r\n\r\ndef download(url):\r\n filename = url.split(\"/\")[-1]\r\n if not os.path.exists(filename):\r\n urllib.urlretrieve(url, filename)\r\n\r\ndef load_model(s_fname, p_fname):\r\n \"\"\"\r\n Load model checkpoint from file.\r\n :return: (arg_params, aux_params)\r\n arg_params : dict of str to NDArray\r\n Model parameter, dict of name to NDArray of net's weights.\r\n aux_params : dict of str to NDArray\r\n Model parameter, dict of name to NDArray of net's auxiliary states.\r\n \"\"\"\r\n symbol = mx.symbol.load(s_fname)\r\n save_dict = mx.nd.load(p_fname)\r\n arg_params = {}\r\n aux_params = {}\r\n for k, v in save_dict.items():\r\n tp, name = k.split(':', 1)\r\n if tp == 'arg':\r\n arg_params[name] = v\r\n if tp == 'aux':\r\n aux_params[name] = v\r\n return symbol, arg_params, aux_params \r\n \r\n\r\ndef angles2Cat(angles_thrshld, angl_input):\r\n # angl_input: Normalized angle -90 - 90 -> -1.0 - 1.0\r\n angles_cat_temp = angles_thrshld + [angl_input]\r\n return np.argmin(np.multiply(sorted(angles_cat_temp)-angl_input,sorted(angles_cat_temp)-angl_input))\r\n\r\n# Accuracy Evaluation\r\ndef eval_acc(data_iter, net, ctx):\r\n acc = mx.metric.Accuracy()\r\n for i, (data, label) in enumerate(data_iter):\r\n data = data.as_in_context(ctx)\r\n label = label.as_in_context(ctx)\r\n\r\n output = net(data)\r\n pred = nd.argmax(output, axis=1)\r\n acc.update(preds=pred, labels=label)\r\n return acc.get()[1]\r\n\r\n# Training Loop\r\ndef train_util(output_data_dir, net, train_iter, validation_iter, loss_fn, trainer, ctx, epochs, batch_size):\r\n metric = mx.metric.create(['acc'])\r\n lst_val_acc = []\r\n lst_trn_acc = []\r\n best_accuracy = 0\r\n for epoch in range(epochs):\r\n for i, (data, label) in enumerate(train_iter):\r\n # ensure context \r\n data = data.as_in_context(ctx)\r\n label = label.as_in_context(ctx)\r\n \r\n with autograd.record():\r\n output = net(data)\r\n loss = loss_fn(output, label)\r\n\r\n loss.backward()\r\n trainer.step(data.shape[0])\r\n\r\n train_acc = eval_acc(train_iter, net, ctx)\r\n validation_acc = eval_acc(validation_iter, net, ctx)\r\n\r\n lst_trn_acc += [train_acc]\r\n lst_val_acc += [validation_acc]\r\n \r\n ### Modularize the output network. 
\r\n # Export .json and .params files\r\n # chkpt-XX-symbol.json does not come with softmax layer on the top of network. \r\n net.export('{}/chkpt-{}'.format(output_data_dir, epoch)) \r\n # Overwrite .json with the one with softmax.\r\n net_with_softmax = net(mx.sym.var('data'))\r\n net_with_softmax = mx.sym.SoftmaxOutput(data=net_with_softmax, name=\"softmax\")\r\n net_with_softmax.save('{}/chkpt-{}-symbol.json'.format(output_data_dir, epoch)) \r\n print(\"Epoch %s | training_acc %s | val_acc %s \" % (epoch, train_acc, validation_acc))\r\n \r\n if validation_acc > best_accuracy:\r\n # A network with the best validation accuracy is returned. \r\n net_best = net\r\n net_with_softmax_best = net_with_softmax\r\n best_accuracy = validation_acc\r\n \r\n return net_best, net_with_softmax_best\r\n\r\n##################################\r\n###\r\n### Training\r\n###\r\n##################################\r\n\r\ndef train(channel_input_dirs, hyperparameters, hosts, num_cpus, num_gpus, output_data_dir, model_dir, **kwargs):\r\n print(sys.version)\r\n print(sys.executable)\r\n print(sys.version_info)\r\n print(mx.__version__)\r\n\r\n '''\r\n # Load preprocessed data #\r\n Due to the memory limit of m4 instance, we only use a part of dataset to train the model. \r\n '''\r\n trn_im, test_im, trn_output, test_output = load_data(os.path.join(channel_input_dirs['dataset']))\r\n \r\n '''\r\n # Additional Data Augmentation #\r\n '''\r\n # Mirror\r\n trn_im_mirror = trn_im[:,:,:,::-1]\r\n trn_output_mirror = np.zeros(trn_output.shape)\r\n trn_output_mirror[:,0] = trn_output[:,0] \r\n trn_output_mirror[:,1] = trn_output[:,1] * -1\r\n trn_im = np.concatenate((trn_im, trn_im_mirror), axis = 0) \r\n trn_output = np.concatenate((trn_output, trn_output_mirror), axis = 0) \r\n \r\n # Color Shift\r\n for i0 in range(trn_im.shape[0]):\r\n im_temp = trn_im[i0,:,:,:]\r\n im_temp = np.transpose(im_temp, (1,2,0)) * 255 #transposing and restoring the color\r\n im_temp = shiftHSV(im_temp,\r\n h_shift_lim=(-0.5, 0.5),\r\n s_shift_lim=(-0.5, 0.5),\r\n v_shift_lim=(-0.5, 0.5))\r\n im_temp = np.transpose(im_temp, (2,0,1)) / 255 #transposing and restoring the color\r\n trn_im[i0,:,:,:] = im_temp\r\n \r\n '''\r\n # Head Pose Labeling #\r\n '''\r\n # angle class (3) i.e. 
headpose class ( 3 x 3)\r\n n_grid = 3\r\n angles_thrshld = [np.arcsin(float(a) * 2 / n_grid - 1)/np.pi * 180 / 90 for a in range(1,n_grid)]\r\n \r\n # From (normalized) angle to angle class\r\n trn_tilt_cls = []\r\n trn_pan_cls = []\r\n for i0 in range(trn_output.shape[0]):\r\n trn_tilt_cls += [angles2Cat(angles_thrshld, trn_output[i0,0])]\r\n trn_pan_cls += [angles2Cat(angles_thrshld, trn_output[i0,1])]\r\n\r\n test_tilt_cls = []\r\n test_pan_cls = []\r\n for i0 in range(test_output.shape[0]):\r\n test_tilt_cls += [angles2Cat(angles_thrshld, test_output[i0,0])]\r\n test_pan_cls += [angles2Cat(angles_thrshld, test_output[i0,1])]\r\n \r\n np_trn_tilt_cls = np.asarray(trn_tilt_cls)\r\n np_test_tilt_cls = np.asarray(test_tilt_cls)\r\n np_trn_pan_cls = np.asarray(trn_pan_cls)\r\n np_test_pan_cls = np.asarray(test_pan_cls)\r\n \r\n # From angle class to head pose class\r\n np_trn_grid_cls = np_trn_pan_cls * n_grid + np_trn_tilt_cls\r\n np_test_grid_cls = np_test_pan_cls * n_grid + np_test_tilt_cls\r\n \r\n '''\r\n Train the model \r\n '''\r\n if len(hosts) == 1:\r\n kvstore = 'device' if num_gpus > 0 else 'local'\r\n else:\r\n kvstore = 'dist_device_sync'\r\n\r\n ctx = mx.gpu() if num_gpus > 0 else mx.cpu()\r\n \r\n batch_size = 64\r\n train_iter = mx.gluon.data.DataLoader(mx.gluon.data.ArrayDataset((trn_im.astype(np.float32)-0.5) *2, np_trn_grid_cls),\r\n batch_size=batch_size, shuffle=True, last_batch='discard')\r\n test_iter = mx.gluon.data.DataLoader(mx.gluon.data.ArrayDataset((test_im.astype(np.float32)-0.5) *2 , np_test_grid_cls),\r\n batch_size=batch_size, shuffle=True, last_batch='discard')\r\n # Modify the number of output classes\r\n \r\n pretrained_net = resnet50_v1(pretrained=True, prefix = 'headpose_')\r\n net = resnet50_v1(classes=9, prefix='headpose_') \r\n net.collect_params().initialize()\r\n net.features = pretrained_net.features\r\n \r\n #net.output.initialize(init.Xavier(rnd_type='gaussian', factor_type=\"in\", magnitude=2)) # MXNet 1.1.0\r\n net.initialize(init.Xavier(rnd_type='gaussian', factor_type=\"in\", magnitude=2)) # MXNet 0.12.1\r\n \r\n net.collect_params().reset_ctx(ctx)\r\n net.hybridize()\r\n \r\n loss = gluon.loss.SoftmaxCrossEntropyLoss()\r\n trainer = gluon.Trainer(net.collect_params(), 'adam', {'learning_rate': float(hyperparameters.get(\"learning_rate\", 0.0005))})\r\n \r\n # Fine-tune the model\r\n logging.getLogger().setLevel(logging.DEBUG)\r\n\r\n num_epoch = 5\r\n \r\n print('training started')\r\n net, net_with_softmax = train_util(output_data_dir, net, train_iter, test_iter, loss, trainer, ctx, num_epoch, batch_size)\r\n print('training is done')\r\n\r\n ### Serial format v. Modular format \r\n # \"net\" is a serial (i.e. Gluon) network. \r\n # In order to save the network in the modular format, \"net\" needs to be passed to save function. \r\n \r\n return net\r\n\r\n\r\ndef save(net, model_dir):\r\n '''\r\n Save the model in the modular format. \r\n \r\n :net: serialized model returned from train\r\n :model_dir: model_dir The directory where model files are stored.\r\n \r\n DeepLens requires a model artifact in the modularized format (.json and .params)\r\n \r\n '''\r\n \r\n ### Save modularized model \r\n # Export .json and .params files\r\n # model-symbol.json does not come with softmax layer at the end. 
\r\n net.export('{}/model'.format(model_dir)) \r\n # Overwrite model-symbol.json with the one with softmax\r\n net_with_softmax = net(mx.sym.var('data'))\r\n net_with_softmax = mx.sym.SoftmaxOutput(data=net_with_softmax, name=\"softmax\")\r\n net_with_softmax.save('{}/model-symbol.json'.format(model_dir)) \r\n\r\n\r\n\r\n# ------------------------------------------------------------ #\r\n# Hosting methods #\r\n# ------------------------------------------------------------ #\r\n\r\n\r\ndef model_fn(model_dir):\r\n \"\"\"\r\n Load the model. Called once when hosting service starts.\r\n\r\n :param: model_dir The directory where model files are stored.\r\n :return: a model \r\n \"\"\"\r\n \r\n model_symbol = '{}/model-symbol.json'.format(model_dir)\r\n model_params = '{}/model-0000.params'.format(model_dir)\r\n\r\n sym, arg_params, aux_params = load_model(model_symbol, model_params)\r\n ### DSHAPR = (1,3,84,84)\r\n # The shape of input image. \r\n dshape = [('data', DSHAPE)]\r\n\r\n ctx = mx.cpu() # USE CPU to predict... \r\n net = mx.mod.Module(symbol=sym,context=ctx)\r\n net.bind(for_training=False, data_shapes=dshape)\r\n net.set_params(arg_params, aux_params)\r\n \r\n return net \r\n \r\n" ] ], [ [ "You may find a similarity between this ``EntryPt-headpose-Gluon.py`` and [Head Pose Gluon Tutorial](https://)", "_____no_output_____" ], [ "### SageMaker's MXNet estimator class", "_____no_output_____" ], [ "The SageMaker ```MXNet``` estimator allows us to run single machine or distributed training in SageMaker, using CPU or GPU-based instances.\n\nWhen we create the estimator, we pass in the filename of our training script, the name of our IAM execution role, and the S3 locations we defined in the setup section. We also provide a few other parameters. ``train_instance_count`` and ``train_instance_type`` determine the number and type of SageMaker instances that will be used for the training job. The ``hyperparameters`` parameter is a ``dict`` of values that will be passed to your training script -- you can see how to access these values in the ``EntryPt-headpose.py`` script above.\n\nFor this example, we will choose one ``ml.p2.xlarge`` instance.", "_____no_output_____" ] ], [ [ "from sagemaker.mxnet import MXNet\n\nheadpose_estimator = MXNet(entry_point='EntryPt-headpose-Gluon.py',\n role=role,\n output_path=model_artifacts_location,\n code_location=custom_code_upload_location,\n train_instance_count=1, \n train_instance_type='ml.p2.xlarge',\n hyperparameters={'learning_rate': 0.0005},\n train_max_run = 432000,\n train_volume_size=100)\n", "_____no_output_____" ] ], [ [ "The default volume size in a training instance is 30GB. However, the actual free space is much less. Make sure that you have enough free space in the training instance to download your training data (e.g. 100 GB). ", "_____no_output_____" ] ], [ [ "print(headpose_estimator.train_volume_size)", "100\n" ] ], [ [ "We name this job as **deeplens-sagemaker-headpose**. The ``base_job_name`` will be the prefix of output folders we are going to create. ", "_____no_output_____" ] ], [ [ "headpose_estimator.base_job_name = 'deeplens-sagemaker-headpose'", "_____no_output_____" ] ], [ [ "### Running the Training Job", "_____no_output_____" ], [ "After we've constructed our MXNet object, we can fit it using data stored in S3. \n\nDuring training, SageMaker makes this data stored in S3 available in the local filesystem where the headpose script is running. 
The ```EntryPt-headpose-Gluon.py``` script simply loads the train and test data from disk.", "_____no_output_____" ] ], [ [ "%%time\n'''\n# Load preprocessed data and run the training#\n'''\n\n# Head-pose dataset \"HeadPoseData_trn_test_x15_py2.pkl\" is in the following S3 folder. \ndataset_location = 's3://{}/{}/datasets'.format(s3_bucket, headpose_folder)\n\n# You can specify multiple input file directories (i.e. channel_input_dirs) in the dictionary.\n# e.g. {'dataset1': dataset1_location, 'dataset2': dataset2_location, 'dataset3': dataset3_location}\n# Start training !\nheadpose_estimator.fit({'dataset': dataset_location})", "INFO:sagemaker:Creating training-job with name: deeplens-sagemaker-headpose-2018-03-06-21-23-10-835\n" ] ], [ [ "The latest training job name is...", "_____no_output_____" ] ], [ [ "print(headpose_estimator.latest_training_job.name)", "deeplens-sagemaker-headpose-2018-03-06-21-23-10-835\n" ] ], [ [ "The training is done", "_____no_output_____" ], [ "### Creating an inference Endpoint\n\nAfter training, we use the ``MXNet estimator`` object to build and deploy an ``MXNetPredictor``. This creates a Sagemaker **Endpoint** -- a hosted prediction service that we can use to perform inference. \n\nThe arguments to the ``deploy`` function allow us to set the number and type of instances that will be used for the Endpoint. These do not need to be the same as the values we used for the training job. For example, you can train a model on a set of GPU-based instances, and then deploy the Endpoint to a fleet of CPU-based instances. Here we will deploy the model to a single ``ml.c4.xlarge`` instance. \n", "_____no_output_____" ] ], [ [ "from sagemaker.mxnet.model import MXNetModel\n\n'''\n You will find the name of training job on the top of the training log. \n e.g.\n INFO:sagemaker:Creating training-job with name: HeadPose-Gluon-YYYY-MM-DD-HH-MM-SS-XXX\n'''\ntraining_job_name = headpose_estimator.latest_training_job.name\nsagemaker_model = MXNetModel(model_data= model_artifacts_location + '/{}/output/model.tar.gz'.format(training_job_name),\n role=role,\n entry_point='EntryPt-headpose-Gluon-wo-cv2.py')\n", "_____no_output_____" ], [ "predictor = sagemaker_model.deploy(initial_instance_count=1,\n instance_type='ml.c4.xlarge')", "INFO:sagemaker:Created S3 bucket: sagemaker-us-east-1-148886336128\nINFO:sagemaker:Creating model with name: sagemaker-mxnet-py2-cpu-2018-03-06-22-29-20-394\nINFO:sagemaker:Creating endpoint with name sagemaker-mxnet-py2-cpu-2018-03-06-22-29-20-394\n" ] ], [ [ "The request handling behavior of the Endpoint is determined by the ``EntryPt-headpose-Gluon-wo-cv2.py`` script. The difference between ``EntryPt-headpose-Gluon-wo-cv2.py`` and ``EntryPt-headpose-Gluon.py`` is just OpenCV module (``cv2``). We found the inference instance does not support ``cv2``. 
If you use ``EntryPt-headpose-Gluon.py``, the inference instance will return an error `` AllTraffic did not pass the ping health check``.\n\n### Making an inference request\n\nNow our Endpoint is deployed and we have a ``predictor`` object, we can use it to classify the head-pose of our own head-torso image.\n", "_____no_output_____" ] ], [ [ "import cv2\nimport numpy as np\nimport boto3\nimport os\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nrole = get_execution_role()", "_____no_output_____" ], [ "import urllib.request\n\nsample_ims_location = 'https://s3.amazonaws.com/{}/{}/testIMs/IMG_1242.JPG'.format(s3_bucket,headpose_folder)\n\nprint(sample_ims_location)\n\ndef download(url):\n filename = url.split(\"/\")[-1]\n if not os.path.exists(filename):\n urllib.request.urlretrieve(url, filename)\n return cv2.imread(filename)\n \nim_true = download(sample_ims_location)", "https://s3.amazonaws.com/sagemaker-headpose-0000/headpose/testIMs/IMG_1242.JPG\n" ], [ "im = im_true.astype(np.float32)/255 # Normalized\n\ncrop_uly = 62\ncrop_height = 360\ncrop_ulx = 100\ncrop_width = 360\n\nim = im[crop_uly:crop_uly + crop_height, crop_ulx:crop_ulx + crop_width]\nim_crop = im\nplt.imshow(im_crop[:,:,::-1])\nplt.show()\n\nim = cv2.resize(im, (84, 84))\nplt.imshow(im[:,:,::-1])\nplt.show()\n\nim = np.swapaxes(im, 0, 2)\nim = np.swapaxes(im, 1, 2)\nim = im[np.newaxis, :]\nim = (im -0.5) * 2\nprint(im.shape)", "_____no_output_____" ] ], [ [ "Now we can use the ``predictor`` object to classify the head pose:", "_____no_output_____" ] ], [ [ "data = im\n\nprob = predictor.predict(data)\nprint('Raw prediction result:')\nprint(prob)\n\nlabeled_predictions = list(zip(range(10), prob[0]))\nprint('Labeled predictions: ')\nprint(labeled_predictions)\n\nlabeled_predictions.sort(key=lambda label_and_prob: 1.0 - label_and_prob[1])\nprint('Most likely answer: {}'.format(labeled_predictions[0]))", "Raw prediction result:\n[[0.0004842608468607068, 0.0005739308544434607, 0.79445481300354, 0.00716796750202775, 0.0009900344302877784, 0.18847312033176422, 0.0012833202490583062, 0.00021341521642170846, 0.006359193939715624]]\nLabeled predictions: \n[(0, 0.0004842608468607068), (1, 0.0005739308544434607), (2, 0.79445481300354), (3, 0.00716796750202775), (4, 0.0009900344302877784), (5, 0.18847312033176422), (6, 0.0012833202490583062), (7, 0.00021341521642170846), (8, 0.006359193939715624)]\nMost likely answer: (2, 0.79445481300354)\n" ], [ "n_grid_cls = 9\nn_tilt_cls = 3\n\npred = labeled_predictions[0][0]\n\n\n### Tilt Prediction\npred_tilt_pic = pred % n_tilt_cls\n### Pan Prediction\npred_pan_pic = pred // n_tilt_cls\n\nextent = 0, im_true.shape[1]-1, im_true.shape[0]-1, 0\nPanel_Pred = np.zeros((n_tilt_cls, n_tilt_cls))\nPanel_Pred[pred_tilt_pic, pred_pan_pic] = 1\nPanel_Pred = np.fliplr(Panel_Pred)\nPanel_Pred = np.flipud(Panel_Pred)\nplt.imshow(im_true[:,:,[2,1,0]], extent=extent)\nplt.imshow(Panel_Pred, cmap=plt.cm.Blues, alpha=.2, interpolation='nearest', extent=extent)\nplt.axis('off')\narrw_mg = 100\narrw_x_rad = 1 * (prob[0][0] + prob[0][1] + prob[0][2] - prob[0][6] -prob[0][7] - prob[0][8]) * 90 * np.pi / 180. 
\narrw_y_rad = 1 * (prob[0][0] + prob[0][3] + prob[0][6] - prob[0][2] -prob[0][5] - prob[0][8]) * 90 * np.pi / 180.\n\nplt.arrow(im_true.shape[1]//2, im_true.shape[0]//2, \n np.sin(arrw_x_rad) * arrw_mg, np.sin(arrw_y_rad) * arrw_mg, \n head_width=10, head_length=10, fc='b', ec='b')\nplt.show()", "_____no_output_____" ] ], [ [ "# (Optional) Delete the Endpoint\n\nAfter you have finished with this example, remember to delete the prediction endpoint to release the instance(s) associated with it.", "_____no_output_____" ] ], [ [ "print(\"Endpoint name: \" + predictor.endpoint)", "Endpoint name: sagemaker-mxnet-py2-cpu-2018-03-02-20-36-19-973\n" ], [ "import sagemaker\n\nsagemaker.Session().delete_endpoint(predictor.endpoint)", "INFO:sagemaker:Deleting endpoint with name: sagemaker-mxnet-py2-cpu-2018-03-02-20-36-19-973\n" ] ], [ [ "# End", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
4a89aa652b9979b64e9b15bb2f47a26fc478554c
25,311
ipynb
Jupyter Notebook
jupyter-workspace/Python[3] Numpy.ipynb
Shailesh-Pandey/Python
54eba5a0beda51e55c2393c59ce8e5dd7c0cda1d
[ "Apache-2.0" ]
null
null
null
jupyter-workspace/Python[3] Numpy.ipynb
Shailesh-Pandey/Python
54eba5a0beda51e55c2393c59ce8e5dd7c0cda1d
[ "Apache-2.0" ]
null
null
null
jupyter-workspace/Python[3] Numpy.ipynb
Shailesh-Pandey/Python
54eba5a0beda51e55c2393c59ce8e5dd7c0cda1d
[ "Apache-2.0" ]
null
null
null
18.354605
606
0.433566
[ [ [ "# Numpy", "_____no_output_____" ] ], [ [ "import numpy as np", "_____no_output_____" ] ], [ [ "## 1) Array Creation", "_____no_output_____" ], [ "### 1.1 From function", "_____no_output_____" ] ], [ [ "lst = list(range(1,100,5))\nprint(lst)", "[1, 6, 11, 16, 21, 26, 31, 36, 41, 46, 51, 56, 61, 66, 71, 76, 81, 86, 91, 96]\n" ], [ "arr = np.arange(1,100,5)\nprint(arr)\nprint(type(arr))", "[ 1 6 11 16 21 26 31 36 41 46 51 56 61 66 71 76 81 86 91 96]\n<class 'numpy.ndarray'>\n" ] ], [ [ "### 1.2 From list", "_____no_output_____" ] ], [ [ "l1 = [1,2,3,4]\narr1 = np.array(l1)\nprint(l1)\nprint(arr1)", "[1, 2, 3, 4]\n[1 2 3 4]\n" ], [ "list2d = [[1,2],[3,4],[5,6]]\narr2d = np.array(list2d)\nprint(list2d)\nprint(arr2d)\nprint(type(arr2d[0]))", "[[1, 2], [3, 4], [5, 6]]\n[[1 2]\n [3 4]\n [5 6]]\n<class 'numpy.ndarray'>\n" ], [ "#Numpy array should be homogeneous\nlist2d = [[1],[3,4],[5,6,7]]\narr2d = np.array(list2d)\nprint(list2d)\nprint(arr2d)\ntype(arr2d[0])", "[[1], [3, 4], [5, 6, 7]]\n[list([1]) list([3, 4]) list([5, 6, 7])]\n" ], [ "#Numpy array should be homogeneous\nlist2d = [[1,2],[3,4],['5','6']]\narr2d = np.array(list2d)\nprint(list2d)\nprint(arr2d)\ntype(arr2d[0])", "[[1, 2], [3, 4], ['5', '6']]\n[['1' '2']\n ['3' '4']\n ['5' '6']]\n" ] ], [ [ "### 1.3 Character array", "_____no_output_____" ] ], [ [ "charar = np.chararray((3, 3))\ncharar[:] = 'h'\nprint(charar)", "[[b'h' b'h' b'h']\n [b'h' b'h' b'h']\n [b'h' b'h' b'h']]\n" ], [ "charar2 = np.chararray((3, 3),itemsize=5)\ncharar2[:] = 'hello'\ncharar2[:] = 'hello world'\nprint(charar2)", "[[b'hello' b'hello' b'hello']\n [b'hello' b'hello' b'hello']\n [b'hello' b'hello' b'hello']]\n" ] ], [ [ "### 1.4 Some others arrays", "_____no_output_____" ] ], [ [ "#Zero matrix\nnp.zeros(4)", "_____no_output_____" ], [ "#Zero matrix\nnp.zeros((4,4))", "_____no_output_____" ], [ "#Identity matrix\nnp.eye(5)", "_____no_output_____" ], [ "np.linspace(0,100,5)", "_____no_output_____" ], [ "#[0,1] in uniform distribution\nnp.random.rand(4)", "_____no_output_____" ], [ "#in normal distribution\nnp.random.randn(4)", "_____no_output_____" ], [ "# random integers\nnp.random.randint(1,50,8)", "_____no_output_____" ] ], [ [ "## 2) Operations", "_____no_output_____" ], [ "### 2.1 Addition, Subtraction, Multiplication, Division", "_____no_output_____" ] ], [ [ "arr = np.arange(1,10,1)\nprint(arr)", "[1 2 3 4 5 6 7 8 9]\n" ], [ "#print(arr + 5)\narr1 = np.arange(10,20,1)\n#shape must be same\narr2 = np.arange(10,20,1)\nprint(arr1)\nprint(arr2)\nprint(arr1+arr2)", "[10 11 12 13 14 15 16 17 18 19]\n[10 11 12 13 14 15 16 17 18 19]\n[20 22 24 26 28 30 32 34 36 38]\n" ], [ "# In list concatenation occurs\nl1 = [1,2,3]\nl2 = [5,6,7]\nl3 = [4]\nprint(l1+l2+l3)", "[1, 2, 3, 5, 6, 7, 4]\n" ], [ "# for inf check\narr = np.arange(1,20,2)\n#arr = np.arange(0,19,2)\nprint(arr)", "[ 1 3 5 7 9 11 13 15 17 19]\n" ], [ "arr2 = np.arange(0,100,10)\nprint(arr2)", "[ 0 10 20 30 40 50 60 70 80 90]\n" ], [ "print(arr*arr2)", "[ 0 30 100 210 360 550 780 1050 1360 1710]\n" ], [ "print(arr2/arr)", "[0. 3.33333333 4. 
4.28571429 4.44444444 4.54545455\n 4.61538462 4.66666667 4.70588235 4.73684211]\n" ], [ "#division with zero handled but should be avoided\nprint(arr/arr2)\n", "[ inf 0.3 0.25 0.23333333 0.225 0.22\n 0.21666667 0.21428571 0.2125 0.21111111]\n" ], [ "s = arr/arr2\n#s[0]==float('inf')\ns[0]==np.inf\n#s[0]==np.nan\n#np.isnan(s[0])", "c:\\users\\ranger\\appdata\\local\\programs\\python\\python36\\lib\\site-packages\\ipykernel_launcher.py:1: RuntimeWarning: divide by zero encountered in true_divide\n \"\"\"Entry point for launching an IPython kernel.\n" ] ], [ [ "### 2.2 dtype shape reshape", "_____no_output_____" ] ], [ [ "arr1 = np.array([1,2,3,4,5])\narr2 = np.array([[1,2,3,4,5,6],[7,8,9,10,11,12]])", "_____no_output_____" ], [ "print(arr1.dtype)\nprint(np.result_type(arr1))", "int32\nint32\n" ], [ "print(arr2.dtype)\nprint(np.result_type(arr2))", "int32\nint32\n" ], [ "#explicitly give type\narrf = np.array([1.5,2,3,4,5], dtype=np.float64)\nprint(arrf)\nprint(arrf.dtype)", "[1.5 2. 3. 4. 5. ]\nfloat64\n" ], [ "lst = ['1','2','3']\nlst2 = [[1,2,3],[5,6,7]]\nlst+lst2+[1,2]", "_____no_output_____" ], [ "print(arr1)", "[1 2 3 4 5]\n" ], [ "arrnew = np.append(arr1, 'hi')\nprint(arrnew)\nprint(arrnew.dtype)", "['1' '2' '3' '4' '5' 'hi']\n<U11\n" ], [ "arr1 = np.array([1,2,3,4,5])\narr2 = np.array([[1,2,3,4,5,6],[7,8,9,10,11,12]])", "_____no_output_____" ], [ "print(arr1.shape)\n#print(arr2.shape)", "(5,)\n" ], [ "print(arr2)\nprint(arr2.shape)", "[[ 1 2 3 4 5 6]\n [ 7 8 9 10 11 12]]\n(2, 6)\n" ], [ "arr3 = arr2.reshape(4,3)\nprint(arr3)\n#print(arr3.shape)", "[[ 1 2 3]\n [ 4 5 6]\n [ 7 8 9]\n [10 11 12]]\n" ] ], [ [ "### 2.3 other numpy operation", "_____no_output_____" ] ], [ [ "arr1 = np.array([1,2,3,4,5])\narr2 = np.array([[1,2,3,4,5,6],[7,8,9,10,11,12]])", "_____no_output_____" ], [ "print(arr1.max())", "5\n" ], [ "print(arr2)\nprint(arr2.max())\nprint(arr2[0].max())\nprint(arr2[1].max())", "[[ 1 2 3 4 5 6]\n [ 7 8 9 10 11 12]]\n12\n6\n12\n" ], [ "arr2.min()", "_____no_output_____" ], [ "print(arr2.argmin())\n#Cannot be used to access element but only know the position\nprint(arr2.argmax())\n#arr1[arr1.argmax()]\n#arr1[arr1.argmin()]\narr2[arr2.argmax()]", "0\n11\n" ], [ "#arr2[1][5]\n\n# Way1\n#i,j = np.unravel_index(arr2.argmax(), arr2.shape)\n#print(i,j)\n#arr2[i][j]\n\n# Way2\n#arr2[np.where(arr2==arr2.max())]\n#print(np.where(arr2==arr2.max()))", "_____no_output_____" ], [ "print(arr1)", "[1 2 3 4 5]\n" ], [ "print(np.sqrt(arr1))", "[1. 1.41421356 1.73205081 2. 2.23606798]\n" ], [ "print(np.log(arr1))", "[0. 
0.69314718 1.09861229 1.38629436 1.60943791]\n" ], [ "print(np.exp(arr1))", "[ 2.71828183 7.3890561 20.08553692 54.59815003 148.4131591 ]\n" ], [ "print(np.sin(arr1))", "[ 0.84147098 0.90929743 0.14112001 -0.7568025 -0.95892427]\n" ], [ "print(np.cos(arr1))", "[ 0.54030231 -0.41614684 -0.9899925 -0.65364362 0.28366219]\n" ], [ "print(np.tan(arr1))", "[ 1.55740772 -2.18503986 -0.14254654 1.15782128 -3.38051501]\n" ] ], [ [ "### 2.4 indexing", "_____no_output_____" ] ], [ [ "arr1 = np.array([1,2,3,4,5])\narr2 = np.array([ [1,2,3,4,5,6], [7,8,9,10,11,12], [0,2,4,3,1,5] , [9,8,7,6,5,4] ])", "_____no_output_____" ], [ "print(arr1[0:2])", "[1 2]\n" ], [ "arr1[0:2] = 0\nprint(arr1)", "[0 0 3 4 5]\n" ], [ "arr1 = np.array([1,2,3,4,5])\narr = arr1.copy()\narr[:] = 0\nprint(arr)\nprint(arr1)", "[0 0 0 0 0]\n[1 2 3 4 5]\n" ], [ "print(arr2)", "[[ 1 2 3 4 5 6]\n [ 7 8 9 10 11 12]\n [ 0 2 4 3 1 5]\n [ 9 8 7 6 5 4]]\n" ], [ "# row,col\nprint(arr2[2][2])", "4\n" ], [ "print(arr2[2])", "[0 2 4 3 1 5]\n" ], [ "print(arr2[2:5])", "[[0 2 4 3 1 5]\n [9 8 7 6 5 4]]\n" ], [ "print(arr2[2][1:3])", "[2 4]\n" ], [ "print(arr1)", "[1 2 3 4 5]\n" ], [ "print(arr1>3)", "[False False False True True]\n" ], [ "print(arr1[arr1>3])", "[4 5]\n" ], [ "print(np.where(arr1>3))", "(array([3, 4], dtype=int64),)\n" ], [ "arr1[np.where(arr1>3)]", "_____no_output_____" ], [ "arr[arr1>3]", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a89b4946a31f5bb5d2d1a7b1b1cc0b54ab7c697
95,971
ipynb
Jupyter Notebook
project/analysis/wrangle.ipynb
jakehsiao/tenhou-python-bot
9a8b4016cb5a9b70392c74398f205c11953feca9
[ "MIT" ]
null
null
null
project/analysis/wrangle.ipynb
jakehsiao/tenhou-python-bot
9a8b4016cb5a9b70392c74398f205c11953feca9
[ "MIT" ]
null
null
null
project/analysis/wrangle.ipynb
jakehsiao/tenhou-python-bot
9a8b4016cb5a9b70392c74398f205c11953feca9
[ "MIT" ]
null
null
null
33.756947
182
0.362141
[ [ [ "# Main code\nimport numpy as np\nimport pandas as pd\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\nimport re\n\ndef parse_order_info(order_info, player_name):\n order_list = re.findall(r\"\\[(.*?)\\]\", order_info)[0].split(\", \")\n position = 1\n for i in range(len(order_list)):\n if player_name in order_list[i]:\n score = re.findall(r\"\\((.*?)\\)\", order_list[i])[0]\n position = i + 1\n break\n return score, position\n\n\n\ndef parse_gamelog(log_str):\n games = re.findall(r\"Game started.*?Final results.*?\\n\", log_str, re.DOTALL)\n\n games_data = []\n hands_data = []\n\n for game in games:\n try:\n log_info = re.findall(r\"\\?log=(.*)\", game)\n if log_info:\n game_index = log_info[0]\n else:\n game_index = np.random.randint(0, 100000)\n \n player_name = re.findall(r\"\\[(.*?)\\ \", re.findall(r\"Players.*?\\n\", game)[0])[0]\n\n hands = re.findall(r\"Round:.*?Cowboy: Score.*?\\n\", game, re.DOTALL)\n\n \n final_info = re.findall(r\"Final results:.*?\\n\", game)[0]\n south_info = re.findall(r\"Round: 4.*?]\", game, re.DOTALL)\n if south_info:\n south_info = south_info[0]\n else:\n # when the game have no South rounds\n south_info = final_info\n\n south_score, south_position = parse_order_info(south_info, player_name)\n final_score, final_position = parse_order_info(final_info, player_name)\n\n games_data.append([game_index, south_score, final_score, south_position, final_position])\n\n for hand in hands:\n round_num = re.findall(r\"Round: (\\d)\", hand)[0]\n result = re.findall(r\"Cowboy: Score.*?\\n\", hand)[0]\n score, score_diff, play_state, state_index = re.findall(r\"Score: (.*?) Score difference: (.*?) Play State: (.*?) Latest change index: (.*?)\\n\", result)[0]\n\n meld_times = len(re.findall(r\"With hand:\", hand))\n reached = bool(re.findall(r\"Go for it!\", hand))\n\n is_dealer = player_name == re.findall(r\"Dealer: (.*) \", hand)[0]\n\n hands_data.append([\n score, score_diff, play_state, \n state_index, meld_times, reached, \n is_dealer, game_index, round_num,\n south_position, final_position,\n ])\n \n except Exception as e:\n print(e)\n \n games_col = ['game_index',\n 'south_score',\n 'final_score',\n 'south_position',\n 'final_position']\n hands_col = ['score',\n 'score_diff',\n 'play_state',\n 'state_index',\n 'meld_times',\n 'reached',\n 'is_dealer',\n 'game_index',\n 'round_num',\n 'south_position',\n 'final_position',\n ]\n games_df = pd.DataFrame(data=np.array(games_data), columns=games_col)\n hands_df = pd.DataFrame(data=np.array(hands_data), columns=hands_col)\n return games_df, hands_df", "_____no_output_____" ], [ "# exec\nfilepath = \"../logs225/\" # CHANGE THIS\nlog_files = [\"gamelog.txt\", \"gamelog1.txt\", \"gamelog2.txt\", \"gamelog3.txt\", \"gamelog4.txt\", \"gamelog7.txt\", \"gamelog8.txt\"] # CHANGE THIS\ngames_df_list = []\nhands_df_list = []\n\nfor filename in log_files:\n with open(filepath+filename) as f:\n log_str = f.read()\n games_df, hands_df = parse_gamelog(log_str)\n games_df_list.append(games_df)\n hands_df_list.append(hands_df)\n \n# Change file name here\ngames_df_filename = \"games7\" # CHANGE THIS\nhands_df_filename = \"hands7\" # CHANGE THIS\n\npd.concat(games_df_list).to_csv(games_df_filename + \".csv\")\npd.concat(hands_df_list).to_csv(hands_df_filename + \".csv\")", "local variable 'score' referenced before assignment\n" ], [ "# Check the df\ngames = pd.read_csv(games_df_filename+\".csv\")\nhands = pd.read_csv(hands_df_filename+\".csv\")", "_____no_output_____" ], [ "games.shape, hands.shape", 
"_____no_output_____" ], [ "hands.head(20)", "_____no_output_____" ] ], [ [ "## Develop", "_____no_output_____" ] ], [ [ "origin_str = \"\"\nwith open(\"../gamelog.txt\") as f:\n origin_str = f.read()", "_____no_output_____" ] ], [ [ "## Get 1 game", "_____no_output_____" ] ], [ [ "test_str = origin_str[:50000]", "_____no_output_____" ], [ "re.findall(r\"Final result.*\\n\", test_str)", "_____no_output_____" ], [ "games = re.findall(r\"Game started.*?Final results.*?\\n\", test_str, re.DOTALL)\nlen(games)", "_____no_output_____" ], [ "test_game = games[1]", "_____no_output_____" ], [ "game_index = re.findall(r\"\\?log=(.*)\", test_game)[0]\ngame_index", "_____no_output_____" ], [ "test_hands = re.findall(r\"Round:.*?Cowboy: Score.*?\\n\", test_game, re.DOTALL)\nlen(test_hands)", "_____no_output_____" ], [ "test_hand = test_hands[0]", "_____no_output_____" ], [ "print(test_hand)", "Round: 0, Honba: 0, Dora Indicators: 1s\n2019-02-09 08:52:23 INFO: Players: [annie808 (25,000), 名字和JJ一樣長 (25,000), じゅんのすけ (25,000), 芋虎 (25,000)]\n2019-02-09 08:52:23 INFO: Dealer: annie808 (25,000)\n2019-02-09 08:52:23 INFO: Round wind: East\n2019-02-09 08:52:23 INFO: Player wind: South\n2019-02-09 08:52:28 INFO: Shanten: 4\n2019-02-09 08:52:40 INFO: Shanten: 3\n2019-02-09 08:53:16 INFO: Shanten: 2\n2019-02-09 08:53:20 INFO: Meld: Type: pon, Tiles: 333p [44, 45, 46] by 3\n2019-02-09 08:53:28 INFO: Detect a possible meld: Type: pon, Tiles: 666z [128, 129, 130]\n2019-02-09 08:53:28 INFO: This is yakuhai pon? True\n2019-02-09 08:53:28 INFO: Player has yakuhai pon? False\n2019-02-09 08:53:28 INFO: It's fine to call this meld.\n2019-02-09 08:53:29 INFO: Meld: Type: pon, Tiles: 666z [128, 129, 130] by 0\n2019-02-09 08:53:29 INFO: With hand: 399m127p12356s66z + 6z\n2019-02-09 08:53:29 INFO: Discard tile after called meld: 3m\n2019-02-09 08:53:39 INFO: Shanten: 0\n2019-02-09 08:53:39 INFO: Set the player's state from PREPARING to PROACTIVE_BADSHAPE\n2019-02-09 08:53:39 INFO: Hand: 99m127p123456s\n2019-02-09 08:53:39 INFO: Outs: 3p 1\n2019-02-09 08:53:50 INFO: This is a bad meld, don't call it.\n2019-02-09 08:53:54 INFO: Meld: Type: chankan, Tiles: 6666z [128, 129, 130, 131] by 0\n2019-02-09 08:54:15 INFO: Cowboy: Score: 23000 Score difference: -2000 Play State: PROACTIVE_BADSHAPE Latest change index: 7\n\n" ], [ "test_result = re.findall(r\"Cowboy: Score.*?\\n\", test_hand)[0]", "_____no_output_____" ], [ "test_result", "_____no_output_____" ], [ "meld_times = len(re.findall(r\"With hand:\", test_hand))\nmeld_times", "_____no_output_____" ], [ "reached = bool(re.findall(r\"Go for it!\", test_hand))\nreached", "_____no_output_____" ], [ "south_info = re.findall(r\"Round: 4.*?]\", test_game, re.DOTALL)[0]", "_____no_output_____" ], [ "final_info = re.findall(r\"Final results:.*?\\n\", test_game)[0]", "_____no_output_____" ], [ "order_info = re.findall(r\"\\[.*?\\]\", south_info)[0]", "_____no_output_____" ], [ "order_info[1:-1].split(\", \")", "_____no_output_____" ], [ "player_name = re.findall(r\"\\[.*?\\(\", re.findall(r\"Players.*?\\n\", test_game)[0])[0][1:-2]", "_____no_output_____" ], [ "position = 1\norder_list = order_info.split(\", \")\nfor i in range(len(order_list)):\n if player_name in order_list[i]:\n position = i + 1\n break", "_____no_output_____" ], [ "position", "_____no_output_____" ], [ "def parse_order_info(order_info, player_name):\n order_list = re.findall(r\"\\[(.*?)\\]\", order_info)[0].split(\", \")\n position = 1\n for i in range(len(order_list)):\n if player_name in order_list[i]:\n score = 
re.findall(r\"\\((.*?)\\)\", order_list[i])[0]\n position = i + 1\n break\n return score, position", "_____no_output_____" ], [ "parse_order_info(final_info, player_name)", "_____no_output_____" ], [ "re.findall(r\"Dealer: (.*) \", test_hand)", "_____no_output_____" ], [ "is_dealer = player_name == re.findall(r\"Dealer: (.*) \", test_hand)", "_____no_output_____" ], [ "test_result", "_____no_output_____" ], [ "re.findall(r\"Score: (.*?) Score difference: (.*?) Play State: (.*?) Latest change index: (.*?)\\n\", test_result)", "_____no_output_____" ], [ "score, score_diff, play_state, state_index = re.findall(r\"Score: (.*?) Score difference: (.*?) Play State: (.*?) Latest change index: (.*?)\\n\", test_result)[0]", "_____no_output_____" ], [ "player_name = re.findall(r\"\\[(.*?)\\ \", re.findall(r\"Players.*?\\n\", test_game)[0])[0]", "_____no_output_____" ], [ "games = re.findall(r\"Game started.*?Final results.*?\\n\", test_str, re.DOTALL)\n\ngames_data = []\nhands_data = []\n\nfor game_index, game in enumerate(games):\n player_name = re.findall(r\"\\[(.*?)\\ \", re.findall(r\"Players.*?\\n\", game)[0])[0]\n \n hands = re.findall(r\"Round:.*?Cowboy: Score.*?\\n\", test_game, re.DOTALL)\n \n south_info = re.findall(r\"Round: 4.*?]\", game, re.DOTALL)[0]\n final_info = re.findall(r\"Final results:.*?\\n\", game)[0]\n south_score, south_position = parse_order_info(south_info, player_name)\n final_score, final_position = parse_order_info(final_info, player_name)\n \n games_data.append([game_index, south_score, final_score, south_position, final_position])\n \n for hand in hands:\n result = re.findall(r\"Cowboy: Score.*?\\n\", hand)[0]\n score, score_diff, play_state, state_index = re.findall(r\"Score: (.*?) Score difference: (.*?) Play State: (.*?) Latest change index: (.*?)\\n\", result)[0]\n \n meld_times = len(re.findall(r\"With hand:\", hand))\n reached = bool(re.findall(r\"Go for it!\", hand))\n \n is_dealer = player_name == re.findall(r\"Dealer: (.*) \", hand)[0]\n \n hands_data.append([score, score_diff, play_state, state_index, meld_times, reached, is_dealer, game_index])", "_____no_output_____" ], [ "games_col = ['game_index',\n 'south_score',\n 'final_score',\n 'south_position',\n 'final_position']", "_____no_output_____" ], [ "hands_col = ['score',\n 'score_diff',\n 'play_state',\n 'state_index',\n 'meld_times',\n 'reached',\n 'is_dealer',\n 'game_index']", "_____no_output_____" ], [ "games_df = pd.DataFrame(data=np.array(games_data), columns=games_col)", "_____no_output_____" ], [ "games_df", "_____no_output_____" ], [ "def parse_order_info(order_info, player_name):\n order_list = re.findall(r\"\\[(.*?)\\]\", order_info)[0].split(\", \")\n position = 1\n for i in range(len(order_list)):\n if player_name in order_list[i]:\n score = re.findall(r\"\\((.*?)\\)\", order_list[i])[0]\n position = i + 1\n break\n return score, positionhands_df = pd.DataFrame(data=np.array(hands_data), columns=hands_col)\nhands_df", "_____no_output_____" ] ], [ [ "## Functions", "_____no_output_____" ] ], [ [ "def parse_order_info(order_info, player_name):\n order_list = re.findall(r\"\\[(.*?)\\]\", order_info)[0].split(\", \")\n position = 1\n for i in range(len(order_list)):\n if player_name in order_list[i]:\n score = re.findall(r\"\\((.*?)\\)\", order_list[i])[0]\n position = i + 1\n break\n return score, position\n\n\n\ndef parse_gamelog(log_str):\n games = re.findall(r\"Game started.*?Final results.*?\\n\", log_str, re.DOTALL)\n\n games_data = []\n hands_data = []\n\n for game_index, game in 
enumerate(games):\n try:\n player_name = re.findall(r\"\\[(.*?)\\ \", re.findall(r\"Players.*?\\n\", game)[0])[0]\n\n hands = re.findall(r\"Round:.*?Cowboy: Score.*?\\n\", game, re.DOTALL)\n\n \n final_info = re.findall(r\"Final results:.*?\\n\", game)[0]\n south_info = re.findall(r\"Round: 4.*?]\", game, re.DOTALL)\n if south_info:\n south_info = south_info[0]\n else:\n # when the game have no South rounds\n south_info = final_info\n\n south_score, south_position = parse_order_info(south_info, player_name)\n final_score, final_position = parse_order_info(final_info, player_name)\n\n games_data.append([game_index, south_score, final_score, south_position, final_position])\n\n for hand in hands:\n result = re.findall(r\"Cowboy: Score.*?\\n\", hand)[0]\n score, score_diff, play_state, state_index = re.findall(r\"Score: (.*?) Score difference: (.*?) Play State: (.*?) Latest change index: (.*?)\\n\", result)[0]\n\n meld_times = len(re.findall(r\"With hand:\", hand))\n reached = bool(re.findall(r\"Go for it!\", hand))\n\n is_dealer = player_name == re.findall(r\"Dealer: (.*) \", hand)[0]\n\n hands_data.append([score, score_diff, play_state, state_index, meld_times, reached, is_dealer, game_index, final_position])\n \n except Exception as e:\n print(e)\n \n games_col = ['game_index',\n 'south_score',\n 'final_score',\n 'south_position',\n 'final_position']\n hands_col = ['score',\n 'score_diff',\n 'play_state',\n 'state_index',\n 'meld_times',\n 'reached',\n 'is_dealer',\n 'game_index',\n 'final_position',\n ]\n games_df = pd.DataFrame(data=np.array(games_data), columns=games_col)\n hands_df = pd.DataFrame(data=np.array(hands_data), columns=hands_col)\n return games_df, hands_df", "_____no_output_____" ], [ "games_df, hands_df = parse_gamelog(origin_str)", "_____no_output_____" ], [ "gamelog2 = \"\"\nwith open(\"../gamelog2.txt\") as f:\n gamelog2 = f.read()", "_____no_output_____" ], [ "games_df2, hands_df2 = parse_gamelog(gamelog2)", "_____no_output_____" ], [ "pd.concat([games_df, games_df2]).to_csv(\"games1.csv\")", "_____no_output_____" ], [ "pd.concat([hands_df, hands_df2]).to_csv(\"hands1.csv\")", "_____no_output_____" ], [ "hands_df", "_____no_output_____" ], [ "for i in re.findall(r\"(http.*)\\n\", origin_str):\n print(i)", 
"http://tenhou.net/0/?log=2019020700gm-0089-0000-1a7bf310&tw=3\nhttp://tenhou.net/0/?log=2019020701gm-0089-0000-e0fa7774&tw=0\nhttp://tenhou.net/0/?log=2019020701gm-0089-0000-f9c7cdc1&tw=2\nhttp://tenhou.net/0/?log=2019020702gm-0089-0000-e2f3301d&tw=1\nhttp://tenhou.net/0/?log=2019020702gm-0089-0000-6712678b&tw=3\nhttp://tenhou.net/0/?log=2019020703gm-0089-0000-0ddb358f&tw=0\nhttp://tenhou.net/0/?log=2019020703gm-0089-0000-08012e66&tw=3\nhttp://tenhou.net/0/?log=2019020704gm-0089-0000-6bde1bb2&tw=3\nhttp://tenhou.net/0/?log=2019020704gm-0089-0000-468c3aa9&tw=1\nhttp://tenhou.net/0/?log=2019020705gm-0089-0000-7da238fe&tw=3\nhttp://tenhou.net/0/?log=2019020705gm-0089-0000-4e960f42&tw=2\nhttp://tenhou.net/0/?log=2019020706gm-0089-0000-530f6c59&tw=1\nhttp://tenhou.net/0/?log=2019020706gm-0089-0000-176939ed&tw=1\nhttp://tenhou.net/0/?log=2019020706gm-0089-0000-79253a84&tw=3\nhttp://tenhou.net/0/?log=2019020707gm-0089-0000-2f7a0fcf&tw=3\nhttp://tenhou.net/0/?log=2019020707gm-0089-0000-169571a3&tw=1\nhttp://tenhou.net/0/?log=2019020708gm-0089-0000-dc9ed77e&tw=0\nhttp://tenhou.net/0/?log=2019020708gm-0089-0000-7b528b6c&tw=3\nhttp://tenhou.net/0/?log=2019020708gm-0089-0000-770ca081&tw=3\nhttp://tenhou.net/0/?log=2019020709gm-0089-0000-c438d554&tw=3\nhttp://tenhou.net/0/?log=2019020709gm-0089-0000-6be5c35b&tw=2\nhttp://tenhou.net/0/?log=2019020710gm-0089-0000-2c26ef08&tw=1\nhttp://tenhou.net/0/?log=2019020710gm-0089-0000-26dff6e2&tw=0\nhttp://tenhou.net/0/?log=2019020711gm-0089-0000-47dfe04e&tw=0\nhttp://tenhou.net/0/?log=2019020712gm-0089-0000-6f80963c&tw=3\nhttp://tenhou.net/0/?log=2019020712gm-0089-0000-6b68de25&tw=3\nhttp://tenhou.net/0/?log=2019020713gm-0089-0000-13f0455b&tw=2\nhttp://tenhou.net/0/?log=2019020713gm-0089-0000-36a5b121&tw=0\nhttp://tenhou.net/0/?log=2019020714gm-0089-0000-6f10e4e4&tw=0\nhttp://tenhou.net/0/?log=2019020714gm-0089-0000-4963f18a&tw=0\nhttp://tenhou.net/0/?log=2019020715gm-0089-0000-300d9b1a&tw=2\nhttp://tenhou.net/0/?log=2019020715gm-0089-0000-00ad880d&tw=2\nhttp://tenhou.net/0/?log=2019020715gm-0089-0000-7e9087a5&tw=0\nhttp://tenhou.net/0/?log=2019020716gm-0089-0000-30a36c27&tw=2\nhttp://tenhou.net/0/?log=2019020716gm-0089-0000-78208f7f&tw=2\nhttp://tenhou.net/0/?log=2019020717gm-0089-0000-75eca3a9&tw=3\nhttp://tenhou.net/0/?log=2019020717gm-0089-0000-53d88621&tw=2\nhttp://tenhou.net/0/?log=2019020718gm-0089-0000-5384f4fa&tw=0\nhttp://tenhou.net/0/?log=2019020718gm-0089-0000-97ca52d3&tw=3\nhttp://tenhou.net/0/?log=2019020719gm-0089-0000-3568ae00&tw=3\nhttp://tenhou.net/0/?log=2019020719gm-0089-0000-35716ef7&tw=1\nhttp://tenhou.net/0/?log=2019020719gm-0089-0000-c61b982c&tw=3\nhttp://tenhou.net/0/?log=2019020720gm-0089-0000-f92af1ed&tw=1\nhttp://tenhou.net/0/?log=2019020720gm-0089-0000-7001d756&tw=3\nhttp://tenhou.net/0/?log=2019020720gm-0089-0000-15b74f19&tw=3\nhttp://tenhou.net/0/?log=2019020721gm-0089-0000-393059b2&tw=0\nhttp://tenhou.net/0/?log=2019020721gm-0089-0000-7f981f42&tw=1\nhttp://tenhou.net/0/?log=2019020722gm-0089-0000-c0a6b683&tw=0\nhttp://tenhou.net/0/?log=2019020722gm-0089-0000-fe31756e&tw=1\nhttp://tenhou.net/0/?log=2019020722gm-0089-0000-cfd3b3e3&tw=1\nhttp://tenhou.net/0/?log=2019020723gm-0089-0000-a3cd2bb0&tw=2\nhttp://tenhou.net/0/?log=2019020723gm-0089-0000-b3e68824&tw=3\nhttp://tenhou.net/0/?log=2019020800gm-0089-0000-aef6b8ae&tw=0\nhttp://tenhou.net/0/?log=2019020800gm-0089-0000-d9636146&tw=1\nhttp://tenhou.net/0/?log=2019020801gm-0089-0000-8f3117e4&tw=1\nhttp://tenhou.net/0/?log=2019020801gm-0089-0000-b7a61700&tw=0\nhttp://tenhou.net/0/?log=2
019020801gm-0089-0000-b3930eaa&tw=1\nhttp://tenhou.net/0/?log=2019020802gm-0089-0000-1cfc0443&tw=3\nhttp://tenhou.net/0/?log=2019020802gm-0089-0000-002c3abb&tw=1\nhttp://tenhou.net/0/?log=2019020803gm-0089-0000-ca72da42&tw=0\nhttp://tenhou.net/0/?log=2019020803gm-0089-0000-856dcdef&tw=2\nhttp://tenhou.net/0/?log=2019020804gm-0089-0000-54239535&tw=3\nhttp://tenhou.net/0/?log=2019020804gm-0089-0000-5c66c800&tw=0\nhttp://tenhou.net/0/?log=2019020804gm-0089-0000-2d3858fb&tw=0\nhttp://tenhou.net/0/?log=2019020805gm-0089-0000-864afe04&tw=3\nhttp://tenhou.net/0/?log=2019020805gm-0089-0000-e1296365&tw=1\nhttp://tenhou.net/0/?log=2019020806gm-0089-0000-44d3084c&tw=1\nhttp://tenhou.net/0/?log=2019020806gm-0089-0000-bfad384e&tw=3\nhttp://tenhou.net/0/?log=2019020807gm-0089-0000-e769912d&tw=2\nhttp://tenhou.net/0/?log=2019020807gm-0089-0000-5ba23428&tw=0\nhttp://tenhou.net/0/?log=2019020808gm-0089-0000-b7589262&tw=2\nhttp://tenhou.net/0/?log=2019020808gm-0089-0000-2cafcf73&tw=0\nhttp://tenhou.net/0/?log=2019020809gm-0089-0000-6f20dabb&tw=2\nhttp://tenhou.net/0/?log=2019020809gm-0089-0000-f3ed88a6&tw=3\nhttp://tenhou.net/0/?log=2019020810gm-0089-0000-d8592b24&tw=0\nhttp://tenhou.net/0/?log=2019020810gm-0089-0000-afa534f5&tw=2\nhttp://tenhou.net/0/?log=2019020810gm-0089-0000-95a2e44e&tw=3\nhttp://tenhou.net/0/?log=2019020811gm-0089-0000-c890ef7f&tw=0\nhttp://tenhou.net/0/?log=2019020811gm-0089-0000-05b8f201&tw=1\nhttp://tenhou.net/0/?log=2019020812gm-0089-0000-741c7345&tw=2\nhttp://tenhou.net/0/?log=2019020812gm-0089-0000-fd4b6223&tw=1\nhttp://tenhou.net/0/?log=2019020813gm-0089-0000-2c997751&tw=2\nhttp://tenhou.net/0/?log=2019020813gm-0089-0000-4c315e40&tw=2\nhttp://tenhou.net/0/?log=2019020814gm-0089-0000-889190e7&tw=2\nhttp://tenhou.net/0/?log=2019020814gm-0089-0000-859a6af5&tw=1\nhttp://tenhou.net/0/?log=2019020815gm-0089-0000-0f2ef038&tw=0\nhttp://tenhou.net/0/?log=2019020816gm-0089-0000-373dd8cb&tw=0\nhttp://tenhou.net/0/?log=2019020816gm-0089-0000-a6580be1&tw=2\nhttp://tenhou.net/0/?log=2019020816gm-0089-0000-268b40cc&tw=3\nhttp://tenhou.net/0/?log=2019020817gm-0089-0000-30a21efe&tw=3\nhttp://tenhou.net/0/?log=2019020818gm-0089-0000-bb5312c4&tw=1\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a89b98379f609f7695421c66886651b03739fb7
572
ipynb
Jupyter Notebook
pset_challenging_ext/exercises/nb/p87.ipynb
mottaquikarim/pydev-psets
9749e0d216ee0a5c586d0d3013ef481cc21dee27
[ "MIT" ]
5
2019-04-08T20:05:37.000Z
2019-12-04T20:48:45.000Z
pset_challenging_ext/exercises/nb/p87.ipynb
mottaquikarim/pydev-psets
9749e0d216ee0a5c586d0d3013ef481cc21dee27
[ "MIT" ]
8
2019-04-15T15:16:05.000Z
2022-02-12T10:33:32.000Z
pset_challenging_ext/exercises/nb/p87.ipynb
mottaquikarim/pydev-psets
9749e0d216ee0a5c586d0d3013ef481cc21dee27
[ "MIT" ]
2
2019-04-10T00:14:42.000Z
2020-02-26T20:35:21.000Z
22.88
115
0.54021
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
4a89be24f79246b7fcfc449701a6cfdf9dc955c7
11,583
ipynb
Jupyter Notebook
training_notebook.ipynb
AryanRaj315/CVPR_PixelSkelNetON
0b5bc9949f55cb932408b8fa9ca48371da63c0b9
[ "MIT" ]
null
null
null
training_notebook.ipynb
AryanRaj315/CVPR_PixelSkelNetON
0b5bc9949f55cb932408b8fa9ca48371da63c0b9
[ "MIT" ]
null
null
null
training_notebook.ipynb
AryanRaj315/CVPR_PixelSkelNetON
0b5bc9949f55cb932408b8fa9ca48371da63c0b9
[ "MIT" ]
null
null
null
46.332
2,242
0.620306
[ [ [ "import sys\nsys.path.insert(0, \"segmentation_models.pytorch/\")\nfrom trainer import Trainer\nimport segmentation_models_pytorch as smp\ndef TRAIN(MODEL, ENCODER, OPTIMIZER, LOSS):\n if(MODEL == 'Unet'):\n model = smp.Unet(ENCODER, encoder_weights='imagenet', classes=1, activation=None)\n elif(MODEL == 'FPN'):\n model = smp.FPN(ENCODER, encoder_weights='imagenet', classes=1, activation=None)\n elif(MODEL == 'Linknet'):\n model = smp.Linknet(ENCODER, encoder_weights='imagenet', classes=1, activation=None)\n \n model_trainer = Trainer(model = model, optim = OPTIMIZER, loss = LOSS, lr = 1e-3, bs = 8, name = ENCODER+'_'+MODEL+'_'+LOSS+'_'+OPTIMIZER)\n# model_trainer.load_model('efficientnet-b4_FPN_DICE_Ranger_best_dice.pth')\n model_trainer.seed_everything(42)\n# model_trainer.do_cutmix = False\n# model_trainer.freeze()\n# model_trainer.change_loader(crop_type=0, shape=256)\n# model_trainer.fit(10)\n model_trainer.do_cutmix = False\n model_trainer.unfreeze()\n model_trainer.change_loader(crop_type=0, shape=256)\n model_trainer.fit(20)\n model_trainer.do_cutmix = False\n model_trainer.unfreeze()\n model_trainer.change_loader(crop_type=1, shape=256)\n model_trainer.fit(20)\n model_trainer.do_cutmix = False\n model_trainer.freeze()\n model_trainer.change_loader(crop_type=1, shape=256)\n model_trainer.fit(5)\n# Train models\nModel = ['Unet']\nEncoder = ['efficientnet-b3']\nOptimizer = ['Over9000']\nLoss = ['DICE', 'BCE+DICE']\nfor model in Model:\n for encoder in Encoder:\n for optimizer in Optimizer:\n for loss in Loss:\n TRAIN(model, encoder, optimizer, loss)", "Starting epoch: 0 | phase: train | ⏰: 01:15:26\n" ] ] ]
[ "code" ]
[ [ "code" ] ]
4a89cf899d65e72592d395b8f7b989fec1b9e283
3,705
ipynb
Jupyter Notebook
examples/notebooks/03_inspector_tool.ipynb
Yisheng-Li/geemap
0594917a4acedfebb85879cfe2bcb6a406a55f39
[ "MIT" ]
1,894
2020-03-10T04:44:09.000Z
2022-03-31T08:19:15.000Z
examples/notebooks/03_inspector_tool.ipynb
Yisheng-Li/geemap
0594917a4acedfebb85879cfe2bcb6a406a55f39
[ "MIT" ]
398
2020-03-19T14:04:21.000Z
2022-03-31T15:48:04.000Z
examples/notebooks/03_inspector_tool.ipynb
Yisheng-Li/geemap
0594917a4acedfebb85879cfe2bcb6a406a55f39
[ "MIT" ]
759
2020-03-17T21:58:53.000Z
2022-03-29T13:12:39.000Z
23.012422
229
0.54332
[ [ [ "<a href=\"https://githubtocolab.com/giswqs/geemap/blob/master/examples/notebooks/03_inspector_tool.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open in Colab\"/></a>", "_____no_output_____" ], [ "Uncomment the following line to install [geemap](https://geemap.org) if needed.", "_____no_output_____" ] ], [ [ "# !pip install geemap", "_____no_output_____" ], [ "import ee\nimport geemap", "_____no_output_____" ], [ "geemap.show_youtube('k477ksjkaXw')", "_____no_output_____" ] ], [ [ "## Create an interactive map", "_____no_output_____" ] ], [ [ "Map = geemap.Map(center=(40, -100), zoom=4)", "_____no_output_____" ] ], [ [ "## Add Earth Engine Python script", "_____no_output_____" ] ], [ [ "# Add Earth Engine dataset\ndem = ee.Image('USGS/SRTMGL1_003')\nlandcover = ee.Image(\"ESA/GLOBCOVER_L4_200901_200912_V2_3\").select('landcover')\nlandsat7 = ee.Image('LE7_TOA_5YEAR/1999_2003').select(['B1', 'B2', 'B3', 'B4', 'B5', 'B7'])\nstates = ee.FeatureCollection(\"TIGER/2018/States\")\n\n# Set visualization parameters.\nvis_params = {\n 'min': 0,\n 'max': 4000,\n 'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5']}\n\n# Add Earth Eninge layers to Map\nMap.addLayer(dem, vis_params, 'SRTM DEM', True, 0.5)\nMap.addLayer(landcover, {}, 'Land cover')\nMap.addLayer(landsat7, {'bands': ['B4', 'B3', 'B2'], 'min': 20, 'max': 200, 'gamma': 2.0}, 'Landsat 7')\nMap.addLayer(states, {}, \"US States\")\n\nMap", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a89d10189cd6c9c236ba3c7def7fb99f18ceacd
96,476
ipynb
Jupyter Notebook
iterations/KK_scripts/W207_Final_Project_errorAnalysis_updated_08_21_1830.ipynb
samgoodgame/sf_crime
eb1f559f2dd380bfd9862b30324cd7cb4078a010
[ "MIT" ]
1
2017-07-11T01:16:55.000Z
2017-07-11T01:16:55.000Z
iterations/KK_scripts/W207_Final_Project_errorAnalysis_updated_08_21_1830.ipynb
samgoodgame/sf_crime
eb1f559f2dd380bfd9862b30324cd7cb4078a010
[ "MIT" ]
null
null
null
iterations/KK_scripts/W207_Final_Project_errorAnalysis_updated_08_21_1830.ipynb
samgoodgame/sf_crime
eb1f559f2dd380bfd9862b30324cd7cb4078a010
[ "MIT" ]
2
2017-07-11T04:36:34.000Z
2018-04-24T03:16:22.000Z
52.205628
17,426
0.457129
[ [ [ "# Kaggle San Francisco Crime Classification\n## Berkeley MIDS W207 Final Project: Sam Goodgame, Sarah Cha, Kalvin Kao, Bryan Moore\n\n", "_____no_output_____" ], [ "### Environment and Data", "_____no_output_____" ] ], [ [ "# Additional Libraries\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\n# Import relevant libraries:\nimport time\nimport numpy as np\nimport pandas as pd\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn import preprocessing\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.naive_bayes import BernoulliNB\nfrom sklearn.naive_bayes import MultinomialNB\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.grid_search import GridSearchCV\nfrom sklearn.metrics import classification_report\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.metrics import log_loss\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn import svm\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.tree import DecisionTreeClassifier\n# Import Meta-estimators\nfrom sklearn.ensemble import AdaBoostClassifier\nfrom sklearn.ensemble import BaggingClassifier\nfrom sklearn.ensemble import GradientBoostingClassifier\n# Import Calibration tools\nfrom sklearn.calibration import CalibratedClassifierCV\n\n# Set random seed and format print output:\nnp.random.seed(0)\nnp.set_printoptions(precision=3)", "_____no_output_____" ] ], [ [ "### Local, individual load of updated data set (with weather data integrated) into training, development, and test subsets.\n", "_____no_output_____" ] ], [ [ "# Data path to your local copy of Kalvin's \"x_data.csv\", which was produced by the negated cell above\ndata_path = \"./data/x_data_3.csv\"\ndf = pd.read_csv(data_path, header=0)\nx_data = df.drop('category', 1)\ny = df.category.as_matrix()\n\n# Impute missing values with mean values:\n#x_complete = df.fillna(df.mean())\nx_complete = x_data.fillna(x_data.mean())\nX_raw = x_complete.as_matrix()\n\n# Scale the data between 0 and 1:\nX = MinMaxScaler().fit_transform(X_raw)\n\n####\nX = np.around(X, decimals=2)\n####\n\n# Shuffle data to remove any underlying pattern that may exist. 
Must re-run random seed step each time:\nnp.random.seed(0)\nshuffle = np.random.permutation(np.arange(X.shape[0]))\nX, y = X[shuffle], y[shuffle]\n\ntest_data, test_labels = X[800000:], y[800000:]\ndev_data, dev_labels = X[700000:800000], y[700000:800000]\ntrain_data, train_labels = X[:700000], y[:700000]\n\nmini_train_data, mini_train_labels = X[:200000], y[:200000]\nmini_dev_data, mini_dev_labels = X[430000:480000], y[430000:480000]\n\ncrime_labels = list(set(y))\ncrime_labels_mini_train = list(set(mini_train_labels))\ncrime_labels_mini_dev = list(set(mini_dev_labels))\nprint(len(crime_labels), len(crime_labels_mini_train), len(crime_labels_mini_dev))\n\nprint(len(train_data),len(train_labels))\nprint(len(dev_data),len(dev_labels))\nprint(len(mini_train_data),len(mini_train_labels))\nprint(len(mini_dev_data),len(mini_dev_labels))\nprint(len(test_data),len(test_labels))", "39 39 39\n700000 700000\n100000 100000\n200000 200000\n50000 50000\n78049 78049\n" ] ], [ [ "### Logistic Regression\n\n###### Hyperparameter tuning:\n\nFor the Logistic Regression classifier, we can seek to optimize the following classifier parameters: penalty (l1 or l2), C (inverse of regularization strength), solver ('newton-cg', 'lbfgs', 'liblinear', or 'sag')\n\n###### Model calibration:\n\nSee above\n", "_____no_output_____" ], [ "## Fit the Best LR Parameters", "_____no_output_____" ] ], [ [ "bestLR = LogisticRegression(penalty='l2', solver='newton-cg', tol=0.01, C=500)\nbestLR.fit(mini_train_data, mini_train_labels)", "_____no_output_____" ], [ "bestLRPredictions = bestLR.predict(mini_dev_data)\nbestLRPredictionProbabilities = bestLR.predict_proba(mini_dev_data)", "_____no_output_____" ], [ "print(\"L1 Multi-class Log Loss:\", log_loss(y_true = mini_dev_labels, y_pred = bestLRPredictionProbabilities, \\\n labels = crime_labels_mini_dev), \"\\n\\n\")", "L1 Multi-class Log Loss: 2.59276763905 \n\n\n" ], [ "pd.DataFrame(np.amax(bestLRPredictionProbabilities, axis=1)).hist()", "_____no_output_____" ] ], [ [ "## Error Analysis: Calibration", "_____no_output_____" ] ], [ [ "#clf_probabilities, clf_predictions, labels\ndef error_analysis_calibration(buckets, clf_probabilities, clf_predictions, labels):\n \"\"\"inputs:\n clf_probabilities = clf.predict_proba(dev_data)\n clf_predictions = clf.predict(dev_data)\n labels = dev_labels\"\"\"\n \n #buckets = [0.05, 0.15, 0.3, 0.5, 0.8]\n #buckets = [0.15, 0.25, 0.3, 1.0]\n correct = [0 for i in buckets]\n total = [0 for i in buckets]\n\n lLimit = 0\n uLimit = 0\n for i in range(len(buckets)):\n uLimit = buckets[i]\n for j in range(clf_probabilities.shape[0]):\n if (np.amax(clf_probabilities[j]) > lLimit) and (np.amax(clf_probabilities[j]) <= uLimit):\n if clf_predictions[j] == labels[j]:\n correct[i] += 1\n total[i] += 1\n lLimit = uLimit\n \n print(sum(correct))\n print(sum(total))\n print(correct)\n print(total)\n\n #here we report the classifier accuracy for each posterior probability bucket\n accuracies = []\n for k in range(len(buckets)):\n print(1.0*correct[k]/total[k])\n accuracies.append(1.0*correct[k]/total[k])\n print('p(pred) <= %.13f total = %3d correct = %3d accuracy = %.3f' \\\n %(buckets[k], total[k], correct[k], 1.0*correct[k]/total[k]))\n plt.plot(buckets,accuracies)\n plt.title(\"Calibration Analysis\")\n plt.xlabel(\"Posterior Probability\")\n plt.ylabel(\"Classifier Accuracy\")\n \n return buckets, accuracies", "_____no_output_____" ], [ "#i think you'll need to look at how the posteriors are distributed in order to set the best bins in 
'buckets'\npd.DataFrame(np.amax(bestLRPredictionProbabilities, axis=1)).hist()", "_____no_output_____" ], [ "buckets = [0.15, 0.25, 0.3, 1.0]\ncalibration_buckets, calibration_accuracies = error_analysis_calibration(buckets, clf_probabilities=bestLRPredictionProbabilities, \\\n clf_predictions=bestLRPredictions, \\\n labels=mini_dev_labels)", "11246\n50000\n[199, 6513, 2458, 2076]\n[1351, 33941, 8338, 6370]\n0.14729829755736493\np(pred) <= 0.1500000000000 total = 1351 correct = 199 accuracy = 0.147\n0.19189181226245544\np(pred) <= 0.2500000000000 total = 33941 correct = 6513 accuracy = 0.192\n0.2947949148476853\np(pred) <= 0.3000000000000 total = 8338 correct = 2458 accuracy = 0.295\n0.3259026687598116\np(pred) <= 1.0000000000000 total = 6370 correct = 2076 accuracy = 0.326\n" ] ], [ [ "## Error Analysis: Classification Report", "_____no_output_____" ] ], [ [ "def error_analysis_classification_report(clf_predictions, labels):\n \"\"\"inputs:\n clf_predictions = clf.predict(dev_data)\n labels = dev_labels\"\"\"\n print('Classification Report:')\n report = classification_report(labels, clf_predictions)\n print(report)\n return report", "_____no_output_____" ], [ "classification_report = error_analysis_classification_report(clf_predictions=bestLRPredictions, \\\n labels=mini_dev_labels)", "Classification Report:\n precision recall f1-score support\n\n ARSON 0.00 0.00 0.00 94\n ASSAULT 0.23 0.00 0.00 4427\n BAD CHECKS 0.00 0.00 0.00 27\n BRIBERY 0.00 0.00 0.00 14\n BURGLARY 0.00 0.00 0.00 2047\n DISORDERLY CONDUCT 0.00 0.00 0.00 243\nDRIVING UNDER THE INFLUENCE 0.00 0.00 0.00 157\n DRUG/NARCOTIC 0.24 0.31 0.27 3009\n DRUNKENNESS 0.00 0.00 0.00 254\n EMBEZZLEMENT 0.00 0.00 0.00 61\n EXTORTION 0.00 0.00 0.00 14\n FAMILY OFFENSES 0.00 0.00 0.00 25\n FORGERY/COUNTERFEITING 0.00 0.00 0.00 594\n FRAUD 0.00 0.00 0.00 953\n GAMBLING 0.00 0.00 0.00 10\n KIDNAPPING 0.00 0.00 0.00 132\n LARCENY/THEFT 0.24 0.77 0.37 10076\n LIQUOR LAWS 0.00 0.00 0.00 111\n LOITERING 0.00 0.00 0.00 69\n MISSING PERSON 0.00 0.00 0.00 1483\n NON-CRIMINAL 0.30 0.00 0.00 5210\n OTHER OFFENSES 0.18 0.35 0.24 7117\n PORNOGRAPHY/OBSCENE MAT 0.00 0.00 0.00 1\n PROSTITUTION 0.00 0.00 0.00 450\n RECOVERED VEHICLE 0.00 0.00 0.00 181\n ROBBERY 0.00 0.00 0.00 1310\n RUNAWAY 0.00 0.00 0.00 92\n SECONDARY CODES 0.00 0.00 0.00 571\n SEX OFFENSES FORCIBLE 0.00 0.00 0.00 266\n SEX OFFENSES NON FORCIBLE 0.00 0.00 0.00 8\n STOLEN PROPERTY 0.00 0.00 0.00 234\n SUICIDE 0.00 0.00 0.00 31\n SUSPICIOUS OCC 0.00 0.00 0.00 1785\n TREA 0.00 0.00 0.00 2\n TRESPASS 0.00 0.00 0.00 432\n VANDALISM 0.00 0.00 0.00 2502\n VEHICLE THEFT 0.18 0.03 0.05 3109\n WARRANTS 0.00 0.00 0.00 2434\n WEAPON LAWS 0.00 0.00 0.00 465\n\n avg / total 0.15 0.22 0.13 50000\n\n" ] ], [ [ "## Error Analysis: Confusion Matrix", "_____no_output_____" ] ], [ [ "crime_labels_mini_dev", "_____no_output_____" ], [ "def error_analysis_confusion_matrix(label_names, clf_predictions, labels):\n \"\"\"inputs:\n clf_predictions = clf.predict(dev_data)\n labels = dev_labels\"\"\"\n cm = pd.DataFrame(confusion_matrix(labels, clf_predictions, labels=label_names))\n cm.columns=label_names\n cm.index=label_names\n cm.to_csv(path_or_buf=\"./confusion_matrix.csv\")\n #print(cm)\n return cm", "_____no_output_____" ], [ "error_analysis_confusion_matrix(label_names=crime_labels_mini_dev, clf_predictions=bestLRPredictions, \\\n labels=mini_dev_labels)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
4a89d2b6c262d68fb849f5d10c7b48bf803cb9b6
1,491
ipynb
Jupyter Notebook
Tutorial.ipynb
PetrWolf/pydata_nyc_2019
fd286136abe2184ced0878e8f93fedbebd81fa1d
[ "Apache-2.0" ]
2
2019-11-06T18:28:10.000Z
2019-11-06T18:35:12.000Z
Tutorial.ipynb
PetrWolf/pydata_nyc_2019
fd286136abe2184ced0878e8f93fedbebd81fa1d
[ "Apache-2.0" ]
null
null
null
Tutorial.ipynb
PetrWolf/pydata_nyc_2019
fd286136abe2184ced0878e8f93fedbebd81fa1d
[ "Apache-2.0" ]
null
null
null
26.625
317
0.599598
[ [ [ "# From Raw Recruit Scripts to Perfect Python (in 90 minutes)\n\n[PyData NYC 2019 tutorial](https://pydata.org/nyc2019/schedule/presentation/14/from-raw-recruit-scripts-to-perfect-python-in-90-minutes/) by [Stanley van der Merwe](https://pydata.org/nyc2019/speaker/profile/153/stanley-van-der-merwe) and [Petr Wolf](https://pydata.org/nyc2019/speaker/profile/119/petr-wolf)\n\nAudience level: Novice", "_____no_output_____" ], [ "## https://github.com/PetrWolf/pydata_nyc_2019\n* setup instructions: [`README.md`](README.md)\n* slides: [`tutorial.pdf`](tutorial.pdf)\n* sample notebook: [`sample_notebook.ipynb`](sample_notebook.ipynb)\n* practical exercises: [`exercises`](exercises)\n* sample solutions: [`solutions`](solutions)", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown" ] ]
4a89dfd37afefe27982fabb8a3f2808466fe26fb
394,154
ipynb
Jupyter Notebook
notebooks/explo_old/1.0-discriminant.ipynb
jeammimi/rnn_seg
bc11109bec2865d5cf5c617ab559bf1894a260ba
[ "MIT" ]
null
null
null
notebooks/explo_old/1.0-discriminant.ipynb
jeammimi/rnn_seg
bc11109bec2865d5cf5c617ab559bf1894a260ba
[ "MIT" ]
null
null
null
notebooks/explo_old/1.0-discriminant.ipynb
jeammimi/rnn_seg
bc11109bec2865d5cf5c617ab559bf1894a260ba
[ "MIT" ]
null
null
null
412.726702
57,756
0.900798
[ [ [ "from Toolv1 import MotionGenerator, GenerateTraj,random_rot,traj_to_dist\nfrom Toolv1 import diffusive,subdiffusive,directed,accelerated,slowed,still\nfrom Toolv1 import diffusive_confined,subdiffusive_confined, continuous_time_random_walk\nfrom Toolv1 import continuous_time_random_walk_confined\nimport numpy as np\nndim = 2\nnp.random.seed(6)\ndef add_miss_tracking(traj,N,f=10):\n \n step = traj[1:]-traj[:-1]\n \n std = np.average(np.sum(step**2,axis=1)**0.5)\n \n for i in range(N):\n w = np.random.randint(0,len(traj))\n traj[w] = np.random.normal(traj[w],f*std)\n \n return traj\n\n\ndef generate_N_nstep(N,nstep):\n add = 0\n index_zero = 8\n ndim = 2\n if ndim == 3:\n add = 1\n size = nstep\n \n X_train = np.zeros((N,size,5))\n Y_trains_b = np.zeros((N,size,9))\n Y_train_traj = []\n\n #12\n for i in range(N):\n #for i in range(1000):\n\n #if i % 1000 == 0:\n # print i\n sigma = max(np.random.normal(0.5,1),0.05)\n step = max(np.random.normal(1,1),0.2)\n tryagain = True\n while tryagain:\n try:\n\n\n clean=False\n \n time=size\n ndim=2\n list_generator = [ MotionGenerator(time,ndim,\n parameters=np.random.rand(3),\n generate_motion=still),\n MotionGenerator(time,ndim,\n parameters=np.random.rand(3),\n generate_motion=subdiffusive_confined),\n MotionGenerator(time,ndim,\n parameters=np.random.rand(3),\n generate_motion=subdiffusive),\n MotionGenerator(time,ndim,\n parameters=np.random.rand(3),\n generate_motion=diffusive_confined),\n MotionGenerator(time,ndim,\n parameters=np.random.rand(3),\n generate_motion=diffusive),\n MotionGenerator(time,ndim,\n parameters=np.random.rand(3),\n generate_motion=continuous_time_random_walk),\n MotionGenerator(time,ndim,\n parameters=np.random.rand(3),\n generate_motion=continuous_time_random_walk_confined),\n \n MotionGenerator(time,ndim,\n parameters=np.random.rand(3),\n generate_motion=directed) ]\n\n A = GenerateTraj(time,n_max=4,list_max_possible=[3,3,3,3,3,3,3,3],list_generator=list_generator)\n \n\n #Clean small seq\n all_states = set(A.sequence)\n n_states = [A.sequence.count(istate) for istate in all_states]\n \n \n for s,ns in zip(all_states,n_states):\n A.sequence = np.array(A.sequence)\n if size > 25 and ns < 10:\n A.sequence[A.sequence == s] = \"%i_0\"%(index_zero)\n \n \n def map_sequence(sequence):\n ns = []\n for iseque in sequence:\n i0,j0 = map(int,iseque.split(\"_\"))\n ns.append(i0)\n return ns\n \n real_traj = A.traj\n sc = map_sequence(A.sequence)\n\n alpharot = 2*3.14*np.random.random()\n \n\n \n real_traj = random_rot(real_traj,alpharot,ndim=ndim)\n \n\n #Noise\n dt = real_traj[1:]-real_traj[:-1]\n std = np.mean(np.sum(dt**2,axis=1)/3)**0.5 \n \n if std == 0 :\n std = 1\n noise_level = 0.25*np.random.rand()\n real_traj += np.random.normal(0,noise_level*std,real_traj.shape)\n \n \n \n alligned_traj,normed,alpha,_ = traj_to_dist(real_traj,ndim=ndim)\n nzeros = np.random.randint(0,10)\n \n \n Z = []\n for _ in range(nzeros):\n Z.append(np.random.randint(len(sc)-1))\n sc[Z[-1]] = index_zero\n \n \n for i_isc,isc in enumerate(sc):\n if isc == index_zero:\n normed[i_isc,::] = 0\n \n \n #print alligned_traj.shape ,len(sc)\n \n tryagain=False\n \n \n except IndexError:\n tryagain=True\n \n Y_train_traj.append(real_traj)\n #print X_train.shape\n X_train[i] = normed\n \n \n Y_trains_b[i][range(time),np.array(sc,dtype=np.int)] = 1\n \n #print sc\n\n \n return X_train,Y_trains_b,Y_train_traj\n\nprint generate_N_nstep(100,600)[1].shape", "(100, 600, 9)\n" ], [ "# this returns a tensor\nfrom keras.layers import Input, Embedding, LSTM, 
Dense, merge,TimeDistributed\nfrom Bilayer import BiLSTMv1 as BiLSTM\nfrom keras.models import Model\n\nncats = 6\n\ninputs = Input(shape=(None,5),name=\"Input\")\n\nl1 = BiLSTM(output_dim=50,activation='tanh',return_sequences=True)(inputs)\nl2 = BiLSTM(output_dim=50,activation='tanh',return_sequences=True)(merge([inputs,l1],mode=\"concat\"))\nl3 = BiLSTM(output_dim=50,activation='tanh',return_sequences=True)(merge([inputs,l2],mode=\"concat\"))\n\nbrownian = BiLSTM(output_dim=50,activation='tanh',return_sequences=True,name=\"brownian_i\")(merge([inputs,l1,l2,l3],mode=\"concat\"))\nbrownian = TimeDistributed(Dense(9,activation=\"softmax\"),name=\"brownian\")(brownian)\n\nmodel = Model(input=[inputs],output=[brownian])#,sigma])\n\nmodel.compile(optimizer='adadelta',\n loss={'brownian': 'categorical_crossentropy'})\n #loss_weights={'sigma': .1, 'brownian': .9})\n\"\"\"\nmodel.compile(optimizer='adadelta',\n loss={ 'brownian': 'binary_crossentropy'})\"\"\"", "_____no_output_____" ], [ "#w = \"ftest_2_135\"\n\"\"\"\nw = 'ftest_rea_2_40'\ntry:\n model.load_weights(\"/home/jarbona/cluster_theano/\"+w)\nexcept:\n model.load_weights(w)\"\"\"", "_____no_output_____" ], [ "import keras\nimport cPickle\nclass LossHistory(keras.callbacks.Callback):\n #losses = []\n #val_losses = []\n def __init__(self,name):\n super(LossHistory, self).__init__()\n self.name=name\n self.losses = []\n self.val_losses = []\n def on_train_begin(self, logs={}):\n \n pass\n def on_batch_end(self, batch, logs={}):\n self.losses.append(logs.get('loss'))\n #self.val_losses.append(logs.get('val_loss'))\n \n def on_epoch_end(self, epoch, logs={}):\n self.losses.append(logs.get('loss'))\n self.val_losses.append(logs.get('val_loss'))\n cPickle.dump((self.losses,self.val_losses), open(self.name, 'wb')) \n \n", "_____no_output_____" ], [ "#from Specialist_layer import return_three_layer,return_three_bis\n#history = LossHistory(\"losses15.pick\")\n#TRaining of graph 1\n#print lr\n#lr = 0.1\nlr = 1.0\nwgraph = model\nwgraph.optimizer.lr.set_value(lr)\nfor i in range(15):\n for j in range(0,6*6*4,1):\n \n modulo = 6\n size = (1 + j % modulo)*50\n \n if j % modulo == 4:\n size=200\n if j % modulo == 5:\n size=400 \n if j % modulo == 3:\n size=26\n \"\"\"\n if j % modulo == 6:\n size=600 \n if j % modulo == 7:\n size=800 \n if j % modulo == 8:\n size=1000 \n if j % modulo == 9:\n size=1500 \"\"\"\n #size=26\n \n \n print size\n \n try:\n \n X_train,Y_trains_b,scs = generate_N_nstep(4000,size)\n\n inp = {\"Input\":X_train}\n ret={\"brownian\":Y_trains_b}\n \"\"\"#\n ret = {\"input1\":X_train,\n \"output\":Y_trains,\n \"outputtype\":convert_output(Y_trains),\n \"category\":Y_train_cat[::,::,:12]}\"\"\"\n \n except:\n print \"pb\"\n pass\n #next(generator())\n \n #print ret[\"category\"].shape\n if size == 400:\n wgraph.optimizer.lr.set_value(lr)\n print wgraph.optimizer.lr.get_value()\n \n if size == 50:\n wgraph.optimizer.lr.set_value(lr)\n print wgraph.optimizer.lr.get_value()\n \n \"\"\"\n if size != 600:\n batch = 50\n else:\n batch = 20\"\"\"\n batch = 50\n #print ret.keys()\n #print ret[\"category\"].shape, ret[\"output\"].shape, \n wgraph.fit(inp,ret,batch, nb_epoch=1,validation_split=0.05)#, callbacks=[history])\n \n if i == 3:\n lr = 0.1\n if j % modulo == 0 :\n name = \"ftest_rea_continuous\"\n wgraph.save_weights(name + \"_%i_%i\"%(i+2,j),overwrite=True)\n #sub_with_noise (30p)\n \n \n #if np.isnan(graph.evaluate(ret)):\n # graph = return_three_layer()\n # graph.load_weights(\"transition_l8_%i_diff_size_50\"%(i+2))\n #if i % 3 == 
0 and i != 0:\n # lr /= 2.\n # graph.optimizer.lr.set_value(lr)\n # print graph.optimizer.lr.get_value()\n\n#score = model.evaluate(X_test, Y_test, batch_size=16)", "50\npb\n0.10000000149\n" ], [ "w = 'ftest_rea_6_24'\ntry:\n model.load_weights(\"/home/jarbona/cluster_theano/\"+w)\nexcept:\n model.load_weights(w)\nsize=600\nbatch = 10\nX_train,Y_trains_b,scs = generate_N_nstep(1000,size)\ninp = {\"Input\":X_train}\nret={\"brownian\":Y_trains_b}", "_____no_output_____" ], [ "model.optimizer.lr.set_value(0.1)\n\nmodel.fit(inp,ret,batch, nb_epoch=1,validation_split=0.05)#, callbacks=[history])", "_____no_output_____" ], [ "model.save_weights(\"test\")", "[WARNING] test already exists - overwrite? [y/n]y\n[TIP] Next time specify overwrite=True in save_weights!\n" ], [ "size=100\nX_train,Y_trains_b,scs = generate_N_nstep(200,size)\n\ninp = {\"Input\":X_train}\nret={\"brownian\":Y_trains_b}", "_____no_output_____" ], [ "resp = model.predict(inp[\"Input\"][:20])\n", "_____no_output_____" ], [ "from Tools import plot_by_class,plot_label,plot_by_class\nimport copy\nfor w in range(10):\n res = copy.deepcopy(resp[w])\n res = np.argmax(res,axis=1)\n gt = np.argmax(Y_trains_b[w],axis=1)\n #print res\n #print gt\n print set(gt)\n #print resp[1][w][::,0]\n fig = figure()\n ax = fig.add_subplot(131)\n plot_label(scs[w],res,remove6=9)\n ax = fig.add_subplot(132)\n\n plot_label(scs[w],gt,remove6=9)\n ax = fig.add_subplot(133)\n\n \n plot_by_class(scs[w],np.array(gt))\n\"\"\"\nfigure()\nplot(resp[1][w][::,0])\nplot(ret[\"sigma\"][w][::,0])\n\"\"\"", "set([3, 5])\nset([0, 3, 5])\nset([3, 5])\nset([1, 2, 3, 5])\nset([1, 2, 5])\nset([3, 5])\nset([3, 5])\nset([0, 2, 4, 5])\nset([0, 5])\nset([3, 4, 5])\n" ], [ "print ret[\"brownian\"][w][:10]", "[[ 1. 1. 1. 1. 1. 1.]\n [ 1. 1. 1. 1. 1. 1.]\n [ 1. 1. 1. 1. 1. 1.]\n [ 0. 0. 0. 0. 0. 0.]\n [ 0. 0. 0. 0. 0. 0.]\n [ 1. 1. 1. 1. 1. 1.]\n [ 0. 0. 0. 0. 0. 0.]\n [ 0. 0. 0. 0. 0. 0.]\n [ 0. 0. 0. 0. 0. 0.]\n [ 0. 0. 0. 0. 0. 0.]]\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a89ffcf7020b21d9deca68d6b634c6e08b82a1a
4,121
ipynb
Jupyter Notebook
MongoDB/notebooks/querying-for-documents-on-an-array-field.ipynb
kasamoh/NoSQL
24ea4e835e182282acad7d42c72111f8bd45beb2
[ "MIT" ]
null
null
null
MongoDB/notebooks/querying-for-documents-on-an-array-field.ipynb
kasamoh/NoSQL
24ea4e835e182282acad7d42c72111f8bd45beb2
[ "MIT" ]
null
null
null
MongoDB/notebooks/querying-for-documents-on-an-array-field.ipynb
kasamoh/NoSQL
24ea4e835e182282acad7d42c72111f8bd45beb2
[ "MIT" ]
null
null
null
23.022346
266
0.527785
[ [ [ "from pymongo import MongoClient\nimport pprint", "_____no_output_____" ], [ "# We're just reading data, so we can use the course cluster\nclient = MongoClient('mongodb://analytics-student:[email protected]:27017,cluster0-shard-00-01-jxeqq.mongodb.net:27017,cluster0-shard-00-02-jxeqq.mongodb.net:27017/?ssl=true&replicaSet=Cluster0-shard-0&authSource=admin')", "_____no_output_____" ], [ "# We'll be using two different collections this time around\nmovies = client.mflix.movies\nsurveys = client.results.surveys", "_____no_output_____" ], [ "movies.find_one()", "_____no_output_____" ], [ "surveys.find_one()", "_____no_output_____" ], [ "# Replace XXXX with a filter document that will find the movies where \"Harrison Ford\" is a member of the cast\nmovie_filter_doc = {'cast' : \"Harrison Ford\"}", "_____no_output_____" ], [ "# This is the first part of the answer to the lab\nmovies.find(movie_filter_doc).count()", "_____no_output_____" ], [ "# Replace YYYY with a filter document to find the survey results where the \"abc\" product scored greater than 6\nsurvey_filter_doc = {'results' : {\n '$elemMatch':{'product':\"abc\", 'score':{\"$gt\":6}}}}", "_____no_output_____" ], [ "# This is the second part of the answer to the lab\nsurveys.find(survey_filter_doc).count()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a8a23ccc95398af562911dadb86beb580b8599f
862
ipynb
Jupyter Notebook
01_upcoming_features.ipynb
pysan-dev/pysan
60ca77ebac1749dadc91e6331f6da4b4c814e503
[ "Apache-2.0" ]
4
2020-05-18T20:19:25.000Z
2021-09-12T09:47:00.000Z
01_upcoming_features.ipynb
pysan-dev/pysan
60ca77ebac1749dadc91e6331f6da4b4c814e503
[ "Apache-2.0" ]
10
2020-05-19T12:10:03.000Z
2020-11-19T11:17:05.000Z
01_upcoming_features.ipynb
pysan-dev/pysan
60ca77ebac1749dadc91e6331f6da4b4c814e503
[ "Apache-2.0" ]
null
null
null
16.576923
55
0.49768
[ [ [ "#hide\nfrom pysan.core import *\nfrom pysan.multi import *", "_____no_output_____" ] ], [ [ "# Upcoming Features\n\n> Which new features are being worked on?", "_____no_output_____" ] ], [ [ "import pysan as ps\n\n#sequences = ps.generate_sequences(20,20,[1,2,3])", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ] ]
4a8a2b27a2f8c341d8df47cc2210a88ca682328d
935,615
ipynb
Jupyter Notebook
model/notebooks/1-data-cleaning.ipynb
MultiW/nimble-xc
bb9058d0f441b5d382abb97c3bb1753fd9d7c9b7
[ "MIT" ]
4
2021-08-02T07:28:06.000Z
2022-01-26T00:09:39.000Z
model/notebooks/1-data-cleaning.ipynb
MultiW/nimble-xc
bb9058d0f441b5d382abb97c3bb1753fd9d7c9b7
[ "MIT" ]
5
2021-04-23T00:59:29.000Z
2021-05-06T00:42:39.000Z
model/notebooks/1-data-cleaning.ipynb
MultiW/nimble-xc
bb9058d0f441b5d382abb97c3bb1753fd9d7c9b7
[ "MIT" ]
null
null
null
289.843556
281,393
0.89325
[ [ [ "# Data Cleaning\n\nFor each IMU file, clean the IMU data, adjust the labels, and output these as CSV files.", "_____no_output_____" ] ], [ [ "%load_ext autoreload\n%autoreload 2\n%matplotlib notebook\n\nimport numpy as np\n\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.model_selection import RepeatedStratifiedKFold\nfrom sklearn.ensemble import GradientBoostingClassifier\nfrom matplotlib.lines import Line2D\nimport joblib\n\nfrom src.data.labels_util import load_labels, LabelCol, get_labels_file, load_clean_labels, get_workouts\nfrom src.data.imu_util import (\n get_sensor_file, ImuCol, load_imu_data, Sensor, fix_epoch, resample_uniformly, time_to_row_range, get_data_chunk,\n normalize_with_bounds, data_to_features, list_imu_abspaths, clean_imu_data\n)\nfrom src.data.util import find_nearest, find_nearest_index, shift, low_pass_filter, add_col\nfrom src.data.workout import Activity, Workout\nfrom src.data.data import DataState\nfrom src.data.clean_dataset import main as clean_dataset\nfrom src.data.clean_labels import main as clean_labels\nfrom src.visualization.visualize import multiplot\n\n# import data types\nfrom pandas import DataFrame\nfrom numpy import ndarray\nfrom typing import List, Tuple, Optional", "_____no_output_____" ] ], [ [ "## Clean IMU data", "_____no_output_____" ] ], [ [ "# Clean data (UNCOMMENT when needed)\n# clean_dataset()\n\n# Test\ncleaned_files = list_imu_abspaths(sensor_type=Sensor.Accelerometer, data_state=DataState.Clean)\n\ndef plot_helper(idx, plot):\n imu_data = np.load(cleaned_files[idx])\n plot.plot(imu_data[:, ImuCol.TIME], imu_data[:, ImuCol.XACCEL])\n# plot.plot(imu_data[:, ImuCol.TIME], imu_data[:, ImuCol.YACCEL])\n# plot.plot(imu_data[:, ImuCol.TIME], imu_data[:, ImuCol.ZACCEL])\n \nmultiplot(len(cleaned_files), plot_helper)", "_____no_output_____" ] ], [ [ "## Adjust Labels\nA few raw IMU files seems to have corrupted timestamps, causing some labels to not properly map to their data point. We note these labels in the cleaned/adjusted labels. They'll be handled in the model fitting.", "_____no_output_____" ] ], [ [ "# Adjust labels (UNCOMMENT when needed)\n# clean_labels()\n\n# Test\nraw_boot_labels: ndarray = load_labels(get_labels_file(Activity.Boot, DataState.Raw), Activity.Boot)\nraw_pole_labels: ndarray = load_labels(get_labels_file(Activity.Pole, DataState.Raw), Activity.Pole)\nclean_boot_labels: ndarray = load_clean_labels(Activity.Boot)\nclean_pole_labels: ndarray = load_clean_labels(Activity.Pole)\n\n# Check cleaned data content \n# print('Raw Boot')\n# print(raw_boot_labels[:50,])\n# print('Clean Boot')\n# print(clean_boot_labels[:50,])\n# print('Raw Pole')\n# print(raw_pole_labels[:50,])\n# print('Clean Pole')\n# print(clean_pole_labels[:50,])", "_____no_output_____" ] ], [ [ "## Examine Data Integrity\nMake sure that labels for steps are still reasonable after data cleaning.\n\n**Something to consider**: one area of concern are the end steps labels for poles labels. Pole lift-off (end of step) occurs at a min-peak. Re-sampling, interpolation, and the adjustment of labels may cause the end labels to deviate slightly from the min-peak. (The graph seems okay, with some points slightly off the peak, but it's not too common.) We can make the reasonable assumption that data points are sampled approximately uniformly. 
This may affect the accuracy of using a low-pass filter and (for workout detection) FFT.", "_____no_output_____" ] ], [ [ "# CHOOSE a workout and test type (pole or boot) to examine\nworkout_idx = 5\nselected_labels = clean_boot_labels\n\nworkouts: List[Workout] = get_workouts(selected_labels)\nprint('Number of workouts: %d' % len(workouts))\n\nworkout = workouts[workout_idx]\nprint('Sensor %s' % workout.sensor)\n\ndef plot_helper(idx, plot):\n # Plot IMU data\n imu_data: ndarray = np.load(\n get_sensor_file(sensor_name=workout.sensor, sensor_type=Sensor.Accelerometer, data_state=DataState.Clean))\n plot.plot(imu_data[:, ImuCol.TIME], imu_data[:, ImuCol.XACCEL])\n# plot.plot(imu_data[:, ImuCol.TIME], imu_data[:, ImuCol.YACCEL])\n# plot.plot(imu_data[:, ImuCol.TIME], imu_data[:, ImuCol.ZACCEL])\n plot.set_xlabel('Epoch Time')\n\n\n # Plot step labels\n for i in range(workout.labels.shape[0]):\n start_row, end_row = workout.labels[i, LabelCol.START], workout.labels[i, LabelCol.END]\n plot.axvline(x=imu_data[start_row, ImuCol.TIME], color='green', linestyle='dashed')\n plot.axvline(x=imu_data[end_row, ImuCol.TIME], color='red', linestyle='dotted')\n \n legend_items = [Line2D([], [], color='green', linestyle='dashed', label='Step start'), \n Line2D([], [], color='red', linestyle='dotted', label='Step end')]\n plot.legend(handles=legend_items)\n \n # Zoom (REMOVE to see the entire graph)\n# plot.set_xlim([1597340600000, 1597340615000])\n \nmultiplot(1, plot_helper)", "Number of workouts: 44\nSensor 12R\n" ] ], [ [ "Let's compare the cleaned labels to the original labels.", "_____no_output_____" ] ], [ [ "# CHOOSE a workout and test type (pole or boot) to examine\nworkout_idx = 5\nselected_labels = raw_pole_labels\n\nworkouts: List[Workout] = get_workouts(selected_labels)\nprint('Number of workouts: %d' % len(workouts))\n\nworkout = workouts[workout_idx]\nprint('Sensor %s' % workout.sensor)\n\ndef plot_helper(idx, plot):\n # Plot IMU data\n imu_data: ndarray = load_imu_data(\n get_sensor_file(sensor_name=workout.sensor, sensor_type=Sensor.Accelerometer, data_state=DataState.Raw))\n plot.plot(imu_data[:, ImuCol.XACCEL])\n# plot.plot(imu_data[:, ImuCol.TIME], imu_data[:, ImuCol.YACCEL])\n# plot.plot(imu_data[:, ImuCol.TIME], imu_data[:, ImuCol.ZACCEL])\n plot.set_xlabel('Row Index')\n\n # Plot step labels\n for i in range(workout.labels.shape[0]):\n # find labels rows\n start_epoch, end_epoch = workout.labels[i, LabelCol.START], workout.labels[i, LabelCol.END]\n start_row = np.where(imu_data[:, ImuCol.TIME].astype(int) == int(start_epoch))[0]\n end_row = np.where(imu_data[:, ImuCol.TIME].astype(int) == int(end_epoch))[0]\n if len(start_row) != 1 or len(end_row) != 1:\n print('Bad workout')\n return\n start_row, end_row = start_row[0], end_row[0]\n \n plot.axvline(x=start_row, color='green', linestyle='dashed')\n plot.axvline(x=end_row, color='red', linestyle='dotted')\n \n legend_items = [Line2D([], [], color='green', linestyle='dashed', label='Step start'), \n Line2D([], [], color='red', linestyle='dotted', label='Step end')]\n plot.legend(handles=legend_items)\n \n # Zoom (REMOVE to see the entire graph)\n plot.set_xlim([124500, 125000])\n \nmultiplot(1, plot_helper)", "Number of workouts: 39\nSensor 14L\n" ] ], [ [ "Make sure NaN labels were persisted during the label data's save/load process.", "_____no_output_____" ] ], [ [ "def count_errors(labels: ndarray):\n for workout in get_workouts(labels):\n boot: ndarray = workout.labels\n num_errors = np.count_nonzero(\n np.isnan(boot[:, 
LabelCol.START].astype(np.float64)) | np.isnan(boot[:, LabelCol.END].astype(np.float64)))\n if num_errors != 0:\n print('Number of labels that could not be mapped for sensor %s: %d' % (workout.sensor, num_errors))\n\nclean_boot_labels: ndarray = load_clean_labels(Activity.Boot)\nclean_pole_labels: ndarray = load_clean_labels(Activity.Pole)\nprint('Boot labels')\ncount_errors(clean_boot_labels)\nprint('Pole labels')\ncount_errors(clean_pole_labels)", "Boot labels\nNumber of labels that could not be mapped for sensor 12L: 5\nNumber of labels that could not be mapped for sensor 17R: 19\nPole labels\nNumber of labels that could not be mapped for sensor 12L: 9\nNumber of labels that could not be mapped for sensor 12L: 17\nNumber of labels that could not be mapped for sensor 17R: 26\nNumber of labels that could not be mapped for sensor 9R: 6\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a8a3974ef2c51b9bdb359aa3889e6d804f1a874
4,781
ipynb
Jupyter Notebook
deep-learning/Tensorflow-2.x/Browser-Based-Models/Exercises/Exercise 8 - Multiclass with Signs/Exercise 8 - Question.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
352dd6d9a785e22fde0ce53a6b0c2e56f4964950
[ "Apache-2.0" ]
3,266
2017-08-06T16:51:46.000Z
2022-03-30T07:34:24.000Z
deep-learning/Tensorflow-2.x/Browser-Based-Models/Exercises/Exercise 8 - Multiclass with Signs/Exercise 8 - Question.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
352dd6d9a785e22fde0ce53a6b0c2e56f4964950
[ "Apache-2.0" ]
150
2017-08-28T14:59:36.000Z
2022-03-11T23:21:35.000Z
deep-learning/Tensorflow-2.x/Browser-Based-Models/Exercises/Exercise 8 - Multiclass with Signs/Exercise 8 - Question.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
352dd6d9a785e22fde0ce53a6b0c2e56f4964950
[ "Apache-2.0" ]
1,449
2017-08-06T17:40:59.000Z
2022-03-31T12:03:24.000Z
4,781
4,781
0.676428
[ [ [ "import csv\nimport numpy as np\nimport tensorflow as tf\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator\nfrom google.colab import files", "_____no_output_____" ] ], [ [ "The data for this exercise is available at: https://www.kaggle.com/datamunge/sign-language-mnist/home\n\nSign up and download to find 2 CSV files: sign_mnist_test.csv and sign_mnist_train.csv -- You will upload both of them using this button before you can continue.\n", "_____no_output_____" ] ], [ [ "uploaded=files.upload()", "_____no_output_____" ], [ "def get_data(filename):\n # You will need to write code that will read the file passed\n # into this function. The first line contains the column headers\n # so you should ignore it\n # Each successive line contians 785 comma separated values between 0 and 255\n # The first value is the label\n # The rest are the pixel values for that picture\n # The function will return 2 np.array types. One with all the labels\n # One with all the images\n #\n # Tips: \n # If you read a full line (as 'row') then row[0] has the label\n # and row[1:785] has the 784 pixel values\n # Take a look at np.array_split to turn the 784 pixels into 28x28\n # You are reading in strings, but need the values to be floats\n # Check out np.array().astype for a conversion\n with open(filename) as training_file:\n # Your code starts here\n # Your code ends here\n return images, labels\n\n\ntraining_images, training_labels = get_data('sign_mnist_train.csv')\ntesting_images, testing_labels = get_data('sign_mnist_test.csv')\n\n# Keep these\nprint(training_images.shape)\nprint(training_labels.shape)\nprint(testing_images.shape)\nprint(testing_labels.shape)\n\n# Their output should be:\n# (27455, 28, 28)\n# (27455,)\n# (7172, 28, 28)\n# (7172,)", "_____no_output_____" ], [ "# In this section you will have to add another dimension to the data\n# So, for example, if your array is (10000, 28, 28)\n# You will need to make it (10000, 28, 28, 1)\n# Hint: np.expand_dims\n\ntraining_images = # Your Code Here\ntesting_images = # Your Code Here\n\n# Create an ImageDataGenerator and do Image Augmentation\ntrain_datagen = ImageDataGenerator(\n # Your Code Here\n )\n\nvalidation_datagen = ImageDataGenerator(\n # Your Code Here)\n \n# Keep These\nprint(training_images.shape)\nprint(testing_images.shape)\n \n# Their output should be:\n# (27455, 28, 28, 1)\n# (7172, 28, 28, 1)", "_____no_output_____" ], [ "# Define the model\n# Use no more than 2 Conv2D and 2 MaxPooling2D\nmodel = tf.keras.models.Sequential([\n # Your Code Here\n )\n\n# Compile Model. \nmodel.compile(# Your Code Here)\n\n# Train the Model\nhistory = model.fit_generator(# Your Code Here)\n\nmodel.evaluate(testing_images, testing_labels)\n \n# The output from model.evaluate should be close to:\n[6.92426086682151, 0.56609035]\n", "_____no_output_____" ], [ "# Plot the chart for accuracy and loss on both training and validation\n\nimport matplotlib.pyplot as plt\nacc = # Your Code Here\nval_acc = # Your Code Here\nloss = # Your Code Here\nval_loss = # Your Code Here\n\nepochs = range(len(acc))\n\nplt.plot(epochs, acc, 'r', label='Training accuracy')\nplt.plot(epochs, val_acc, 'b', label='Validation accuracy')\nplt.title('Training and validation accuracy')\nplt.legend()\nplt.figure()\n\nplt.plot(epochs, loss, 'r', label='Training Loss')\nplt.plot(epochs, val_loss, 'b', label='Validation Loss')\nplt.title('Training and validation loss')\nplt.legend()\n\nplt.show()", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
4a8a4a1d79f4978d55263404d1efda2e9bfd88bb
150,054
ipynb
Jupyter Notebook
docs/examples/Manual.ipynb
underworldcode/pyFTracks
6050a4327616ebca7ab932b609b25c7c4e6a62f8
[ "MIT" ]
4
2020-11-02T03:54:52.000Z
2022-03-04T11:48:26.000Z
docs/examples/Manual.ipynb
rbeucher/pyFTracks
6050a4327616ebca7ab932b609b25c7c4e6a62f8
[ "MIT" ]
12
2020-03-05T04:04:46.000Z
2020-03-05T23:27:57.000Z
docs/examples/Manual.ipynb
ryanstoner1/pyFTracks
6050a4327616ebca7ab932b609b25c7c4e6a62f8
[ "MIT" ]
2
2020-12-29T01:59:07.000Z
2021-10-15T11:22:57.000Z
115.248848
60,280
0.820211
[ [ [ "# Getting Started with *pyFTracks* v 1.0\n\n**Romain Beucher, Roderick Brown, Louis Moresi and Fabian Kohlmann**\n\nThe Australian National University\nThe University of Glasgow\nLithodat\n\n*pyFTracks* is a Python package that can be used to predict Fission Track ages and Track lengths distributions for some given thermal-histories and kinetic parameters.\n\n*pyFTracks* is an open-source project licensed under the MiT license. See LICENSE.md for details.\n\nThe functionalities provided are similar to Richard Ketcham HeFTy sofware. \nThe main advantage comes from its Python interface which allows users to easily integrate *pyFTracks* with other Python libraries and existing scientific applications.\n*pyFTracks* is available on all major operating systems.\n\nFor now, *pyFTracks* only provide forward modelling functionalities. Integration with inverse problem schemes is planned for version 2.0.\n\n\n# Installation\n\n*pyFTracks* is availabe on pypi. The code should work on all major operating systems (Linux, MaxOSx and Windows)\n\n`pip install pyFTracks`", "_____no_output_____" ], [ "# Importing *pyFTracks*\n\nThe recommended way to import pyFTracks is to run:", "_____no_output_____" ] ], [ [ "import pyFTracks as FT", "_____no_output_____" ] ], [ [ "# Input", "_____no_output_____" ], [ "## Specifying a Thermal history", "_____no_output_____" ] ], [ [ "thermal_history = FT.ThermalHistory(name=\"My Thermal History\",\n time=[0., 43., 44., 100.],\n temperature=[283., 283., 403., 403.])", "_____no_output_____" ], [ "import matplotlib.pyplot as plt", "_____no_output_____" ], [ "plt.figure(figsize=(15, 5))\nplt.plot(thermal_history.input_time, thermal_history.input_temperature, label=thermal_history.name, marker=\"o\")\nplt.xlim(100., 0.)\nplt.ylim(150. + 273.15, 0.+273.15)\nplt.ylabel(\"Temperature in degC\")\nplt.xlabel(\"Time in (Ma)\")\nplt.legend()", "_____no_output_____" ] ], [ [ "## Predefined thermal histories\n\nWe provide predefined thermal histories for convenience.", "_____no_output_____" ] ], [ [ "from pyFTracks.thermal_history import WOLF1, WOLF2, WOLF3, WOLF4, WOLF5, FLAXMANS1, VROLIJ", "_____no_output_____" ], [ "thermal_histories = [WOLF1, WOLF2, WOLF3, WOLF4, WOLF5, FLAXMANS1, VROLIJ]\n\nplt.figure(figsize=(15, 5))\nfor thermal_history in thermal_histories:\n plt.plot(thermal_history.input_time, thermal_history.input_temperature, label=thermal_history.name, marker=\"o\")\nplt.xlim(100., 0.)\nplt.ylim(150. + 273.15, 0.+273.15)\nplt.ylabel(\"Temperature in degC\")\nplt.xlabel(\"Time in (Ma)\")\nplt.legend() ", "_____no_output_____" ] ], [ [ "## Annealing Models", "_____no_output_____" ] ], [ [ "annealing_model = FT.Ketcham1999(kinetic_parameters={\"ETCH_PIT_LENGTH\": 1.65})\nannealing_model.history = WOLF1", "_____no_output_____" ], [ "annealing_model.calculate_age()", "_____no_output_____" ], [ "annealing_model = FT.Ketcham2007(kinetic_parameters={\"ETCH_PIT_LENGTH\": 1.65})\nannealing_model.history = WOLF1", "_____no_output_____" ], [ "annealing_model.calculate_age()", "_____no_output_____" ], [ "FT.Viewer(history=WOLF1, annealing_model=annealing_model)", "_____no_output_____" ] ], [ [ "# Simple Fission-Track data Predictions", "_____no_output_____" ] ], [ [ "Ns = [31, 19, 56, 67, 88, 6, 18, 40, 36, 54, 35, 52, 51, 47, 27, 36, 64, 68, 61, 30]\nNi = [41, 22, 63, 71, 90, 7, 14, 41, 49, 79, 52, 76, 74, 66, 39, 44, 86, 90, 91, 41]\n\nzeta = 350.\nzeta_err = 10. 
/ 350.\nrhod = 1.304\nrhod_err = 0.\nNd = 2936", "_____no_output_____" ], [ "FT.central_age(Ns, Ni, zeta, zeta_err, rhod, Nd)", "_____no_output_____" ], [ "FT.pooled_age(Ns, Ni, zeta, zeta_err, rhod, Nd)", "_____no_output_____" ], [ "FT.single_grain_ages(Ns, Ni, zeta, zeta_err, rhod, Nd)", "_____no_output_____" ], [ "FT.chi2_test(Ns, Ni)", "_____no_output_____" ] ], [ [ "# Included datasets\n\n*pyFTracks* comes with some sample datasets that can be used for testing and designing general code.", "_____no_output_____" ] ], [ [ "from pyFTracks.ressources import Gleadow\nfrom pyFTracks.ressources import Miller", "_____no_output_____" ], [ "Gleadow", "_____no_output_____" ], [ "FT.central_age(Gleadow.Ns,\n Gleadow.Ni,\n Gleadow.zeta,\n Gleadow.zeta_error,\n Gleadow.rhod,\n Gleadow.nd)", "_____no_output_____" ], [ "Miller", "_____no_output_____" ], [ "FT.central_age(Miller.Ns,\n Miller.Ni,\n Miller.zeta,\n Miller.zeta_error,\n Miller.rhod,\n Miller.nd)", "_____no_output_____" ], [ "Miller.calculate_central_age()", "_____no_output_____" ], [ "Miller.calculate_pooled_age()", "_____no_output_____" ], [ "Miller.calculate_ages()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a8a61a7dbd4d897972600395ba3c44cca8b9e18
132,512
ipynb
Jupyter Notebook
.ipynb_checkpoints/Cycluster_1-checkpoint.ipynb
big0tim1/Cycluster
4e5893f0010ce3719c3e62e64e139b3b8e627e5b
[ "MIT" ]
null
null
null
.ipynb_checkpoints/Cycluster_1-checkpoint.ipynb
big0tim1/Cycluster
4e5893f0010ce3719c3e62e64e139b3b8e627e5b
[ "MIT" ]
null
null
null
.ipynb_checkpoints/Cycluster_1-checkpoint.ipynb
big0tim1/Cycluster
4e5893f0010ce3719c3e62e64e139b3b8e627e5b
[ "MIT" ]
null
null
null
60.123412
44,504
0.536457
[ [ [ "%matplotlib inline\nimport pandas as pd\nimport cycluster as cy\nimport os.path as op\nimport numpy as np\nimport palettable\nfrom custom_legends import colorLegend\nimport seaborn as sns\nfrom hclusterplot import *\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport pprint\nimport openpyxl", "_____no_output_____" ], [ "sns.set_context('paper')\npath = \"./\"\ninf = \"NIC FPID Cytokines JC.csv\"\ndataFilename = op.join(path,inf)\n\n\"\"\"A long df has one analyte measurement per row\"\"\"\nlongDf = pd.read_csv(dataFilename)\nprint(longDf)", " ID Influenza.Status Strain Age Sex CMV.Status EBV.Status \\\n0 1 Positive B 14 M 0 1 \n1 2 Positive B 3 M 0 1 \n2 3 Positive B 4 M 1 1 \n3 4 Positive B 22 F 1 1 \n4 5 Positive B 4 F 1 1 \n5 6 Positive B 5 M 1 1 \n6 7 Positive H3N2 13 M 1 1 \n7 8 Positive B 5 M 1 1 \n8 9 Positive B 26 F 0 1 \n9 10 Positive B 2 F 1 1 \n10 11 Positive H3N2 24 M 1 1 \n11 12 Positive B 12 F 1 1 \n12 13 Positive B 6 M 1 1 \n13 14 Positive B 12 F 1 1 \n14 15 Positive B 13 F 1 1 \n15 16 Positive B 14 M 1 1 \n16 17 Positive B 17 M 1 1 \n17 18 Positive B 37 F 1 1 \n18 19 Positive H3N2 51 F 1 1 \n19 20 Positive H3N2 7 M 1 1 \n20 21 Positive H3N2 11 M 1 1 \n21 22 Positive H1N1 5 M 1 1 \n22 23 Positive H1N1 14 F 1 1 \n23 24 Positive H1N1 9 F 1 1 \n24 25 Positive H1N1 22 F 1 1 \n25 26 Positive H3N2 2 M 1 1 \n26 27 Positive H3N2 11 F 1 1 \n27 28 Positive H1N1 4 M 0 1 \n28 29 Positive H1N1 6 F 0 1 \n29 30 Positive Mixed 64 F 1 1 \n.. ... ... ... ... .. ... ... \n170 171 Positive H3N2 3 M 0 0 \n171 172 Positive H3N2 24 F 0 0 \n172 173 Positive B 12 M 1 0 \n173 174 Positive H3N2 1 M 0 0 \n174 175 Positive H3N2 1 F 1 1 \n175 176 Positive Mixed 1 M 1 0 \n176 177 Positive H3N2 1 M 1 0 \n177 178 Positive H3N2 3 M 1 1 \n178 179 Positive H3N2 1 F 1 0 \n179 180 Positive H1N1 1 F 0 0 \n180 181 Positive H3N2 1 F 1 1 \n181 182 Positive H3N2 2 M 0 0 \n182 183 Positive H1N1 1 F 1 1 \n183 184 Positive H3N2 1 F 1 0 \n184 185 Positive B 1 M 1 0 \n185 186 Positive B 1 F 1 0 \n186 187 Positive B 1 M 0 0 \n187 188 Positive B 1 M 0 0 \n188 189 Positive B 3 M 0 1 \n189 190 Positive H3N2 0 F 1 0 \n190 191 Positive H3N2 0 M 0 0 \n191 192 Positive H3N2 1 F 1 1 \n192 193 Positive H3N2 1 F 1 0 \n193 194 Positive H3N2 2 F 1 0 \n194 195 Positive H3N2 1 F 1 1 \n195 196 Positive H3N2 1 F 1 0 \n196 197 Positive H3N2 1 F 1 0 \n197 198 Positive H3N2 0 M 1 0 \n198 199 Positive H3N2 1 M 1 0 \n199 200 Positive H3N2 1 M 1 0 \n\n HSV1_2.Status HHV6.Status VZV.Status ... IL-6 IL-7 IL-8 \\\n0 1 1 1 ... 2.37 7.52 12.36 \n1 0 1 0 ... 9.10 21.12 39.43 \n2 0 1 1 ... 17.50 2.77 35.84 \n3 1 0 1 ... 2.37 40.19 30.92 \n4 1 1 0 ... 2.37 2.77 7.16 \n5 0 1 0 ... 5.13 2.77 25.56 \n6 1 1 1 ... 8.91 4.60 8.26 \n7 1 1 0 ... 2.37 2.77 8.06 \n8 1 1 1 ... 2.99 2.77 6.53 \n9 0 1 0 ... 2.37 10.09 38.36 \n10 1 1 1 ... 19.91 9.73 24.37 \n11 1 1 0 ... 27.65 4.78 33.33 \n12 1 1 1 ... 5.13 11.18 12.56 \n13 1 1 1 ... 4.46 2.77 30.88 \n14 1 1 0 ... 2.66 2.87 31.23 \n15 0 0 1 ... 2.37 9.54 10.06 \n16 1 0 1 ... 2.37 10.46 36.11 \n17 1 1 1 ... 2.37 4.96 23.25 \n18 1 1 1 ... 8.84 2.77 2.45 \n19 1 1 1 ... 2.37 7.15 8.66 \n20 1 1 1 ... 7.29 28.03 4.63 \n21 1 0 0 ... 3.21 2.77 7.23 \n22 1 1 0 ... 7.29 2.77 2.44 \n23 1 1 0 ... 2.37 2.77 21.61 \n24 1 1 1 ... 2.37 3.72 8.98 \n25 0 1 0 ... 21.85 2.77 4.82 \n26 1 1 1 ... 6.39 11.90 12.68 \n27 0 0 0 ... 2.37 2.77 2.44 \n28 1 0 0 ... 2.37 4.96 9.37 \n29 1 0 1 ... 2.37 2.77 2.44 \n.. ... ... ... ... ... ... ... \n170 0 0 0 ... 5.20 15.20 18.34 \n171 1 1 1 ... 29.82 41.46 27.22 \n172 0 1 0 ... 
2.32 5.71 15.09 \n173 0 1 0 ... 4.98 24.76 19.38 \n174 0 1 0 ... 18.93 12.95 16.53 \n175 0 0 0 ... 2.32 16.21 6.09 \n176 0 1 0 ... 61.53 20.73 92.42 \n177 1 1 1 ... 29.82 12.95 207.85 \n178 0 1 0 ... 28.23 28.65 81.31 \n179 0 1 0 ... 13.94 5.71 53.53 \n180 0 1 0 ... 37.09 28.65 80.14 \n181 0 1 0 ... 22.96 14.40 33.50 \n182 0 1 0 ... 2.32 5.71 15.31 \n183 0 0 0 ... 2.54 5.71 23.55 \n184 0 1 0 ... 9.92 12.95 193.01 \n185 0 1 0 ... 3.62 11.99 113.64 \n186 0 1 0 ... 5.96 5.71 69.24 \n187 0 1 0 ... 2.32 5.71 22.59 \n188 0 1 0 ... 2.32 5.71 28.27 \n189 0 1 0 ... 59.26 20.54 268.10 \n190 0 0 0 ... 2.32 5.71 11.43 \n191 0 1 0 ... 3.12 5.71 9.72 \n192 0 1 0 ... 11.56 5.71 71.37 \n193 0 1 0 ... 14.26 7.02 45.39 \n194 0 1 0 ... 4.98 5.71 75.98 \n195 1 1 0 ... 3.32 22.04 20.80 \n196 0 1 0 ... 3.72 14.67 59.31 \n197 0 0 0 ... 2.32 5.71 75.64 \n198 0 0 0 ... 2.32 5.71 5.36 \n199 0 1 0 ... 3.12 5.71 18.29 \n\n IP-10 MCP-1 MIP-1a MIP-1b TNFa TNFb VEGF \n0 86.27 475.39 1382.00 78.48 8.98 3.36 169.11 \n1 2213.00 2048.00 23.82 171.99 52.17 18.65 452.98 \n2 193.51 859.72 34.14 142.83 23.04 137.32 667.07 \n3 210.75 478.20 13.16 26.20 16.81 278.80 169.11 \n4 127.61 630.68 0.56 5.47 18.05 3.36 169.11 \n5 1510.00 964.35 16.76 65.37 29.68 17.11 434.81 \n6 1375.00 477.11 0.56 52.08 4.55 3.36 106.13 \n7 142.30 513.43 0.56 80.30 28.32 14.74 80.85 \n8 235.06 498.08 14.92 36.18 14.97 3.36 338.45 \n9 2447.00 2548.00 8.81 69.57 42.10 3.36 327.71 \n10 208.53 508.28 42.92 184.95 26.91 95.68 903.13 \n11 2909.00 694.97 25.12 149.51 25.58 3.36 760.56 \n12 269.85 1186.00 0.56 63.62 38.65 3.36 169.11 \n13 1769.00 693.26 12.52 45.95 22.49 3.36 582.14 \n14 1184.00 1069.00 0.56 91.20 12.06 3.36 80.85 \n15 5196.00 2864.00 6.94 67.08 23.51 3.36 130.58 \n16 176.79 832.22 0.56 101.07 9.43 3.36 151.13 \n17 119.95 361.09 7.91 33.16 15.85 3.36 237.65 \n18 199.27 284.06 3.97 43.54 7.02 3.36 331.35 \n19 174.43 186.18 17.25 65.37 17.13 26.05 422.77 \n20 136.81 242.02 37.66 118.41 27.46 60.31 614.88 \n21 2285.00 252.51 1.17 32.36 9.39 7.12 80.85 \n22 6650.00 521.30 0.56 54.15 5.02 3.36 237.65 \n23 3361.00 387.22 13.16 44.15 8.90 68.24 308.57 \n24 133.82 147.36 12.19 60.46 7.56 3.36 278.52 \n25 723.45 46.35 12.19 53.64 13.32 3.36 192.69 \n26 128.31 111.88 36.67 119.94 15.57 324.50 427.65 \n27 3307.00 286.58 0.56 58.58 18.37 3.36 141.24 \n28 309.21 192.15 0.56 102.25 17.69 3.36 382.83 \n29 247.13 184.98 0.56 5.47 5.20 3.36 80.85 \n.. ... ... ... ... ... ... ... 
\n170 1460.00 1518.00 11.05 115.59 26.90 5.88 180.32 \n171 75.82 649.57 31.28 149.14 28.93 228.87 595.68 \n172 1361.00 667.63 13.85 31.56 20.27 38.85 133.45 \n173 151.78 273.14 23.10 68.76 30.43 73.52 350.11 \n174 591.24 425.70 17.87 60.34 14.62 22.06 209.24 \n175 338.98 308.79 17.74 45.46 30.66 5.88 215.70 \n176 316.42 659.75 71.21 218.46 61.02 204.13 878.48 \n177 2183.00 1482.00 39.67 109.61 40.19 1002.00 289.30 \n178 629.60 480.94 32.44 121.14 41.89 87.04 321.93 \n179 913.71 737.90 5.26 89.10 55.62 5.88 54.20 \n180 5621.00 3153.00 37.01 199.87 65.41 606.54 311.01 \n181 580.64 753.58 11.05 42.34 19.49 5.88 522.01 \n182 246.69 506.14 5.26 42.34 31.99 5.88 260.00 \n183 660.54 1051.00 16.28 47.58 34.16 5.88 184.25 \n184 803.31 349.23 54.20 113.35 34.45 1062.00 406.46 \n185 433.31 1046.00 30.16 101.81 43.63 431.34 282.93 \n186 582.95 635.27 28.45 84.78 50.12 365.82 357.75 \n187 491.66 452.37 5.26 42.34 38.49 5.88 215.70 \n188 345.06 445.99 5.26 22.20 29.22 5.88 54.20 \n189 754.40 998.64 52.75 174.53 14.68 1812.00 438.77 \n190 93.38 496.44 5.26 97.03 24.69 23.68 318.35 \n191 2054.00 383.29 8.36 19.73 32.56 26.07 93.63 \n192 425.33 665.07 32.50 115.79 30.61 129.12 163.44 \n193 269.75 744.54 23.10 99.33 17.34 5.88 433.28 \n194 132.59 368.35 28.45 42.34 20.98 5.88 291.38 \n195 627.91 270.46 25.88 47.16 44.87 37.40 264.82 \n196 416.23 545.27 14.81 51.60 30.66 48.00 307.24 \n197 207.48 560.40 27.28 84.78 45.93 355.63 369.47 \n198 248.56 457.03 9.68 33.83 34.74 5.88 360.74 \n199 163.54 250.99 13.16 27.87 17.22 5.88 227.91 \n\n[200 rows x 48 columns]\n" ], [ "# longDf['Groups']=longDf['ID'].astype(str)+'_'+longDf['Strain']# longDf = longDf.drop(columns= ['ID', 'Influenza.Status', 'Strain', 'Age', 'Sex', 'CMV.Status', 'EBV.Status', 'HSV1_2.Status', 'HHV6.Status', 'VZV.Status'])\nlongDf = longDf.drop(columns= ['ID', 'Influenza.Status', 'Strain', 'Age', 'Sex', 'CMV.Status', 'EBV.Status', 'HSV1_2.Status', 'HHV6.Status', 'VZV.Status', 'Strains'])\nlongDf", "_____no_output_____" ], [ "Df = longDf.pivot_table(index='Groups')\nDf.to_excel('Example_1.xlsx')", "_____no_output_____" ], [ "\n\"\"\"A wide df has one sample per row (analyte measurements across the columns)\"\"\"\n\n\ndef _prepCyDf(tmp, K=3, normed=False, cluster=\"Cluster\", percent= 0, rtol= None, atol= None):\n# dayDf = longDf\n# tmp = tmp.pivot_table(index='ptid', columns='cytokine', values='log10_conc')\n if rtol or atol == None:\n noVar = tmp.columns[np.isclose(tmp.std(), 0)].tolist()\n else:\n noVar = tmp.columns[np.isclose(tmp.std(), 0), rtol, atol].tolist()\n naCols = tmp.columns[(tmp.isnull().sum()) / (((tmp.isnull()).sum()) + (tmp.notnull().sum())) > (percent / 100)].tolist() + [\"il21\"]\n keepCols = [c for c in tmp.columns if not c in (noVar + naCols)]\n# dayDf = dayDf.pivot_table(index='ptid', columns='cytokine', values='log10_conc')[keepCols]\n \"\"\"By setting normed=True the data our normalized based on correlation with mean analyte concentration\"\"\"\n rcyc = cy.cyclusterClass(studyStr='ADAMTS', sampleStr=cluster, normed=normed, rCyDf=tmp)\n rcyc.clusterCytokines(K=K, metric='spearman-signed', minN=0)\n rcyc.printModules()\n return rcyc\n\ntest = _prepCyDf(Df, K=3, normed=False, cluster=\"All\", percent= 10)\n\n\n\n", "All1\nEGF\nEotaxin\nGRO\nIL-8\nMCP-1\nMDC\nTGF-a\nTNFa\nsCD40L\n\nAll2\nFGF-2\nFlt3 Ligand\nG-CSF\nGM-CSF\nIFNa2\nIFNg\nIL-12p40\nIL-13\nIL-15\nIL-17A\nIL-1RA\nIL-1a\nIL-1b\nIL-6\nIL-7\nIL12-p70\nMCP-3\nMIP-1a\nMIP-1b\nTNFb\nVEGF\n\nAll3\nFractalkine\nIL-10\nIL-2\nIL-3\nIL-4\nIL-5\nIL-9\nIP-10\n\n" ], [ "\"\"\"Now 
you can use attributes in test for plots and testing: cyDf, modDf, dmatDf, etc.\"\"\"\nplt.figure(41, figsize=(15.5, 9.5))\ncolInds = plotHColCluster(test.cyDf,\n                          method='complete',\n                          metric='pearson-signed',\n                          col_labels=test.labels,\n                          col_dmat=test.dmatDf,\n                          tickSz='large',\n                          vRange=(0,1))\n\nplt.figure(43, figsize = (15.5, 9.5))\ncolInds = cy.plotting.plotHierClust(1 - test.pwrel,\n                                    test.Z,\n                                    labels=test.labels,\n                                    titleStr='Pairwise reliability (%s)' % test.name,\n                                    vRange=(0, 1),\n                                    tickSz='large')\n\nplt.figure(901, figsize=(13, 9.7))\ncy.plotting.plotModuleEmbedding(test.dmatDf, test.labels, method='kpca', txtSize='large')\ncolors = palettable.colorbrewer.get_map('Set1', 'qualitative', len(np.unique(test.labels))).mpl_colors\ncolorLegend(colors, ['%s%1.0f' % (test.sampleStr, i) for i in np.unique(test.labels)], loc='lower left')\n", "_____no_output_____" ], [ "import scipy.stats\n\n\"\"\"df here should have one column per module and the genotype column\"\"\"\nptidDf = longDf[['ptid', 'sample', 'genotype', 'dpi']].drop_duplicates().set_index('ptid')\ndf = test.modDf.join(ptidDf)\n\nind = df.genotype == 'WT'\ncol = 'LUNG1'\n# stats.ranksums(df[col].loc[ind], df[col].loc[~ind])\n\n\n", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
4a8a647970cdeda9c97865deff9b701de4d9cbc3
129
ipynb
Jupyter Notebook
01-Lesson-Plans/14-Deep-Learning/1/Activities/01-Evr_Keras_Intro/Unsolved/artificial-neuron.ipynb
tatianegercina/FinTech
b40687aa362d78674e223eb15ecf14bc59f90b62
[ "ADSL" ]
1
2021-04-13T07:14:34.000Z
2021-04-13T07:14:34.000Z
01-Lesson-Plans/14-Deep-Learning/1/Activities/01-Evr_Keras_Intro/Unsolved/artificial-neuron.ipynb
tatianegercina/FinTech
b40687aa362d78674e223eb15ecf14bc59f90b62
[ "ADSL" ]
2
2021-06-02T03:14:19.000Z
2022-02-11T23:21:24.000Z
01-Lesson-Plans/14-Deep-Learning/1/Activities/01-Evr_Keras_Intro/Unsolved/artificial-neuron.ipynb
tatianegercina/FinTech
b40687aa362d78674e223eb15ecf14bc59f90b62
[ "ADSL" ]
1
2021-05-07T13:26:50.000Z
2021-05-07T13:26:50.000Z
32.25
75
0.883721
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
4a8a667e9acfd94b12d2648f2e3a3ebaedcf68e8
5,435
ipynb
Jupyter Notebook
3CNN.ipynb
LZY2006/keras-learning
7ddb3bd27aeb8ccc88d5f648d9e1cb50699784be
[ "MIT" ]
null
null
null
3CNN.ipynb
LZY2006/keras-learning
7ddb3bd27aeb8ccc88d5f648d9e1cb50699784be
[ "MIT" ]
null
null
null
3CNN.ipynb
LZY2006/keras-learning
7ddb3bd27aeb8ccc88d5f648d9e1cb50699784be
[ "MIT" ]
null
null
null
29.064171
106
0.475805
[ [ [ "import numpy as np\nnp.random.seed(1337) # 伪随机以保持一致\nfrom keras.datasets import mnist\nfrom keras.utils import np_utils\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Activation, Conv2D, MaxPooling2D, Flatten\nfrom keras.optimizers import Adam", "Using TensorFlow backend.\n" ], [ "# 下载mnist数据集保存到\"c:\\users\\用户名\\.keras\\datasets\"(如果没有数据集)\n# X的形状(60,000 28x28), Y的形状(10,000,)\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\n\n# 数据预处理\nx_train = x_train.reshape(-1, 1, 28, 28)# / 255\nx_test = x_test.reshape(-1, 1, 28, 28)# / 255\n\ny_train = np_utils.to_categorical(y_train, num_classes=10)\ny_test = np_utils.to_categorical(y_test, num_classes=10)\n\n# 构建卷积神经网络\nmodel = Sequential()\n\n# 卷积层1 输出:(32, 28, 28)\nmodel.add(Conv2D(\n filters=32,\n kernel_size = (5,5),\n padding=\"same\", # padding模式\n input_shape=(1,28,28) # 通道个数 长 宽\n ))\nmodel.add(Activation(\"relu\"))\n\n# 池化层1 输出:(32, 14, 14)\nmodel.add(MaxPooling2D(\n pool_size=(2,2),\n strides=(2, 2), # 步长\n padding=\"same\"\n ))\n\n# 卷积层2 输出:(64, 14, 14)\nmodel.add(Conv2D(64, (5, 5), padding=\"same\"))\nmodel.add(Activation(\"relu\"))\n\n# 池化层2 输出:(64, 7, 7)\nmodel.add(MaxPooling2D(pool_size=(2, 2), padding=\"same\"))\n\n# 全连接层1 输入:(64 * 7 * 7) = 3136, 输出: 1024\nmodel.add(Flatten())\nmodel.add(Dense(1024))\nmodel.add(Activation(\"relu\"))\n# 全连接层 2 输入:1024 输出: 10(softmax)\nmodel.add(Dense(10))\nmodel.add(Activation(\"softmax\"))\n# 定义优化器\nadam = Adam(lr=1e-4)\n\n# 加入metrics来获得更多结果\nmodel.compile(optimizer=adam, loss=\"categorical_crossentropy\",)\n #metrics=[\"accuracy\"])\n\n# 训练模型\nprint(\"正在训练. . .\")\nmodel.fit(x_train, y_train, epochs=10, batch_size=32)\n\n# 测试模型和先前定义的metrics\n\n", "正在训练. . .\nEpoch 1/10\n60000/60000 [==============================] - 11s 178us/step - loss: 0.3150 - accuracy: 0.9146\nEpoch 2/10\n60000/60000 [==============================] - 8s 141us/step - loss: 0.1003 - accuracy: 0.9691\nEpoch 3/10\n60000/60000 [==============================] - 8s 140us/step - loss: 0.0645 - accuracy: 0.9798\nEpoch 4/10\n60000/60000 [==============================] - 8s 140us/step - loss: 0.0445 - accuracy: 0.9862\nEpoch 5/10\n60000/60000 [==============================] - 8s 138us/step - loss: 0.0303 - accuracy: 0.9902\nEpoch 6/10\n60000/60000 [==============================] - 8s 141us/step - loss: 0.0242 - accuracy: 0.9918\nEpoch 7/10\n60000/60000 [==============================] - 9s 146us/step - loss: 0.0194 - accuracy: 0.9937\nEpoch 8/10\n60000/60000 [==============================] - 9s 151us/step - loss: 0.0146 - accuracy: 0.9948\nEpoch 9/10\n60000/60000 [==============================] - 9s 152us/step - loss: 0.0140 - accuracy: 0.9954\nEpoch 10/10\n60000/60000 [==============================] - 9s 153us/step - loss: 0.0098 - accuracy: 0.9967\n正在测试. . .\n10000/10000 [==============================] - 1s 73us/step\ntest loss: 0.0674502815180299\ntest accuracy: 0.982699990272522\n" ], [ "print(\"正在测试. . .\")\nloss, accuracy = model.evaluate(x_test, y_test)\n\nprint(\"test loss:\", loss)\nprint(\"test accuracy:\", accuracy)", "正在测试. . .\n10000/10000 [==============================] - 1s 59us/step\ntest loss: 0.0674502815180299\ntest accuracy: 0.982699990272522\n" ], [ "model?", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
4a8a7295108e99eae4777ec74de439324c8a4a7c
17,914
ipynb
Jupyter Notebook
torch_basic/CNN_minist.ipynb
quoniammm/mine-pytorch-examples
2c1715a8e387e04417f163fba9e923e6e7031234
[ "MIT" ]
2
2017-09-14T09:42:31.000Z
2019-05-03T11:55:00.000Z
torch_basic/CNN_minist.ipynb
quoniammm/mine-pytorch-examples
2c1715a8e387e04417f163fba9e923e6e7031234
[ "MIT" ]
null
null
null
torch_basic/CNN_minist.ipynb
quoniammm/mine-pytorch-examples
2c1715a8e387e04417f163fba9e923e6e7031234
[ "MIT" ]
null
null
null
40.713636
1,714
0.590265
[ [ [ "import torch\nfrom torch.autograd import Variable\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.utils.data as Data\nimport torchvision\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "path = 'data/mnist/'", "_____no_output_____" ], [ "raw_train = pd.read_csv(path + 'train.csv')\nraw_test = pd.read_csv(path + 'train.csv')", "_____no_output_____" ], [ "raw_train_array = raw_train.values\nraw_test_array = raw_test.values", "_____no_output_____" ], [ "raw_train_array = np.random.permutation(raw_train_array)\nlen(raw_train_array)", "_____no_output_____" ], [ "raw_train = raw_train_array[:40000, :]\nraw_valid = raw_train_array[40000:, :]", "_____no_output_____" ], [ "# train_label = np.eye(10)[raw_train[:,0]]\ntrain_label = raw_train[:,0]\ntrain_data = raw_train[:,1:]\n# valid_label = np.eye(10)[raw_valid[:,0]]\nvalid_label = raw_valid[:,0]\nvalid_data = raw_valid[:,1:]\n\ntrain_data.shape", "_____no_output_____" ], [ "def reshape(data, target_size): return np.reshape(data, target_size)", "_____no_output_____" ], [ "train_data = reshape(train_data, [40000, 1, 28, 28])\nvalid_data = reshape(valid_data, [2000, 1, 28, 28])\ntrain_data.shape, train_label.shape, valid_label.shape, valid_data.shape", "_____no_output_____" ], [ "BATCH_SIZE = 64\nLEARNING_RATE = 0.1\nEPOCH = 2", "_____no_output_____" ], [ "#convert to pytorch tensor\ntrain_data = torch.from_numpy(train_data)..type(torch.FloatTensor)\ntrain_label = torch.from_numpy(train_label).type(torch.LongTensor)\nval_data = torch.from_numpy(valid_data).type(torch.FloatTensor)\nval_label = torch.from_numpy(valid_label).type(torch.LongTensor)\n\ntrain_data.size(),train_label.size(),val_data.size(),val_label.size()", "_____no_output_____" ], [ "train_dataset = Data.TensorDataset(data_tensor=train_data, target_tensor=train_label)\nval_dataset = Data.TensorDataset(data_tensor=val_data, target_tensor=val_label)\ntrain_loader = Data.DataLoader(dataset=train_dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=2)\nval_loader = Data.DataLoader(dataset=val_dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=2)", "_____no_output_____" ], [ "#pyton opp\nclass CNN(nn.Module):\n def __init__(self):\n super(CNN, self).__init__()\n #in_chanel out_chanel kernel stride padding\n self.conv1 = nn.Conv2d(1, 32, 3)\n self.conv2 = nn.Conv2d(32, 32, 3)\n self.conv3 = nn.Conv2d(32, 64, 3)\n self.conv4 = nn.Conv2d(64, 64, 3)\n \n self.fc1 = nn.Linear(64*4*4, 512)\n self.fc2 = nn.Linear(512, 10)\n \n \n def forward(self, x):\n x = F.relu(self.conv1(x))\n x = F.max_pool2d(F.relu(self.conv2(x)), 2)\n \n x = F.relu(self.conv3(x)) \n x = F.max_pool2d(F.relu(self.conv4(x)), 2) \n \n x = x.view(x.size(0), -1)\n x = F.relu(self.fc1(x))\n x = self.fc2(x)\n \n return x\n \n def num_flat_features(self, x):\n size = x.size()[1:]\n num_features = 1\n for s in size:\n num_features *= s\n return num_features\n \ncnn = CNN()\nprint(cnn)", "CNN (\n (conv1): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1))\n (conv2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1))\n (conv3): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1))\n (conv4): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1))\n (fc1): Linear (1024 -> 512)\n (fc2): Linear (512 -> 10)\n)\n" ], [ "list(cnn.parameters())[2].size() #conv2 weights", "_____no_output_____" ], [ "#Loss and Optimizer", "_____no_output_____" ], [ "criterion = nn.CrossEntropyLoss()\noptimizer = torch.optim.Adam(cnn.parameters(), lr=LEARNING_RATE)", 
"_____no_output_____" ], [ "#train the model\nfor epoch in range(2):\n for i, (images, labels) in enumerate(train_loader):\n# print(type(images))\n# print(type(labels))\n \n images = Variable(images)\n labels = Variable(labels)\n #print(type(images))\n # Forward + Backward + Optimize\n optimizer.zero_grad()\n outputs = cnn(images)\n loss = criterion(outputs, labels)\n loss.backward()\n optimizer.step()\n \n if (i+1) % 100 == 0:\n print(loss.data)\n print ('Epoch [%d/%d], Iter [%d/%d] Loss: %.4f' \n %(epoch+1, 2, i+1, len(train_dataset)//BATCH_SIZE, loss.data[0]))\n ", "_____no_output_____" ], [ "# save and load(保存和提取)", "_____no_output_____" ], [ "# save\ndef save():\n pass\n #torch.save(net_name, 'net.pkl')\n #torch.save(net_name.state_dict(), 'net_params.pkl')\n# load\ndef restore_net():\n pass\n #net_new = torch.load('net.pkl')\n\ndef restore_params():\n pass\n #net_new_old_params = NET()\n #net_new_old_params = net_new_old_params.load_state_dict(torch.load()'net_params.pkl'))\n", "_____no_output_____" ], [ "#批处理", "_____no_output_____" ], [ "#optimizer 优化器\n# optimizer = torch.optim.SGD()\n# torch.optim.Adam\n# momentum (m)\n# alpha (RMSprop)\n# Adam (betas)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a8a7c7760d596d2a66ae6b4196c8a06e48df34f
307,984
ipynb
Jupyter Notebook
Lect 2.ipynb
singhvisha/Natural-Language-Processing
b29adbb0637c5caecbf544f24df92419bd8e1dd3
[ "MIT" ]
null
null
null
Lect 2.ipynb
singhvisha/Natural-Language-Processing
b29adbb0637c5caecbf544f24df92419bd8e1dd3
[ "MIT" ]
null
null
null
Lect 2.ipynb
singhvisha/Natural-Language-Processing
b29adbb0637c5caecbf544f24df92419bd8e1dd3
[ "MIT" ]
null
null
null
505.720854
256,855
0.497172
[ [ [ "# Building and Visualizing word frequencies\n\n\nIn this lab, we will focus on the `build_freqs()` helper function and visualizing a dataset fed into it. In our goal of tweet sentiment analysis, this function will build a dictionary where we can lookup how many times a word appears in the lists of positive or negative tweets. This will be very helpful when extracting the features of the dataset in the week's programming assignment. Let's see how this function is implemented under the hood in this notebook.", "_____no_output_____" ], [ "## Setup\n\nLet's import the required libraries for this lab: ", "_____no_output_____" ] ], [ [ "import nltk # Python library for NLP\nfrom nltk.corpus import twitter_samples # sample Twitter dataset from NLTK\nimport matplotlib.pyplot as plt # visualization library\nimport numpy as np # library for scientific computing and matrix operations", "_____no_output_____" ] ], [ [ "#### Import some helper functions that we provided in the utils.py file:\n* `process_tweet()`: Cleans the text, tokenizes it into separate words, removes stopwords, and converts words to stems.\n* `build_freqs()`: This counts how often a word in the 'corpus' (the entire set of tweets) was associated with a positive label `1` or a negative label `0`. It then builds the `freqs` dictionary, where each key is a `(word,label)` tuple, and the value is the count of its frequency within the corpus of tweets.", "_____no_output_____" ] ], [ [ "# download the stopwords for the process_tweet function\nnltk.download('stopwords')\n\n# import our convenience functions\nfrom utils import process_tweet, build_freqs", "[nltk_data] Downloading package stopwords to /home/jovyan/nltk_data...\n[nltk_data] Unzipping corpora/stopwords.zip.\n" ] ], [ [ "## Load the NLTK sample dataset\n\nAs in the previous lab, we will be using the [Twitter dataset from NLTK](http://www.nltk.org/howto/twitter.html#Using-a-Tweet-Corpus).", "_____no_output_____" ] ], [ [ "# select the lists of positive and negative tweets\nall_positive_tweets = twitter_samples.strings('positive_tweets.json')\nall_negative_tweets = twitter_samples.strings('negative_tweets.json')\n\n# concatenate the lists, 1st part is the positive tweets followed by the negative\ntweets = all_positive_tweets + all_negative_tweets\n\n# let's see how many tweets we have\nprint(\"Number of tweets: \", len(tweets))", "Number of tweets: 10000\n" ] ], [ [ "Next, we will build a labels array that matches the sentiments of our tweets. This data type works pretty much like a regular list but is optimized for computations and manipulation. The `labels` array will be composed of 10000 elements. The first 5000 will be filled with `1` labels denoting positive sentiments, and the next 5000 will be `0` labels denoting the opposite. We can do this easily with a series of operations provided by the `numpy` library:\n\n* `np.ones()` - create an array of 1's\n* `np.zeros()` - create an array of 0's\n* `np.append()` - concatenate arrays", "_____no_output_____" ] ], [ [ "# make a numpy array representing labels of the tweets\nlabels = np.append(np.ones((len(all_positive_tweets))), np.zeros((len(all_negative_tweets))))", "_____no_output_____" ] ], [ [ "## Dictionaries\n\nIn Python, a dictionary is a mutable and indexed collection. It stores items as key-value pairs and uses [hash tables](https://en.wikipedia.org/wiki/Hash_table) underneath to allow practically constant time lookups. 
In NLP, dictionaries are essential because they enable fast retrieval of items or containment checks even with thousands of entries in the collection.", "_____no_output_____" ], [ "### Definition\n\nA dictionary in Python is declared using curly brackets. Look at the next example:", "_____no_output_____" ] ], [ [ "dictionary = {'key1': 1, 'key2': 2}", "_____no_output_____" ] ], [ [ "The former line defines a dictionary with two entries. Keys and values can be almost any type ([with a few restrictions on keys](https://docs.python.org/3/tutorial/datastructures.html#dictionaries)), and in this case, we used strings. We can also use floats, integers, tuples, etc.\n\n### Adding or editing entries\n\nNew entries can be inserted into dictionaries using square brackets. If the dictionary already contains the specified key, its value is overwritten. ", "_____no_output_____" ] ], [ [ "# Add a new entry\ndictionary['key3'] = -5\n\n# Overwrite the value of key1\ndictionary['key1'] = 0\n\nprint(dictionary)", "{'key1': 0, 'key2': 2, 'key3': -5}\n" ] ], [ [ "### Accessing values and lookup keys\n\nPerforming dictionary lookups and retrieval are common tasks in NLP. There are two ways to do this: \n\n* Using square bracket notation: This form is allowed if the lookup key is in the dictionary. It produces an error otherwise.\n* Using the [get()](https://docs.python.org/3/library/stdtypes.html#dict.get) method: This allows us to set a default value if the dictionary key does not exist. \n\nLet us see these in action:", "_____no_output_____" ] ], [ [ "# Square bracket lookup when the key exists\nprint(dictionary['key2'])", "2\n" ] ], [ [ "However, if the key is missing, the operation produces an error", "_____no_output_____" ] ], [ [ "# The output of this line is intended to produce a KeyError\nprint(dictionary['key8'])", "_____no_output_____" ] ], [ [ "When using a square bracket lookup, it is common to use an if-else block to check for containment first (with the keyword `in`) before getting the item. On the other hand, you can use the `.get()` method if you want to set a default value when the key is not found. Let's compare these in the cells below:", "_____no_output_____" ] ], [ [ "# This prints a value\nif 'key1' in dictionary:\n    print(\"item found: \", dictionary['key1'])\nelse:\n    print('key1 is not defined')\n\n# Same as what you get with get\nprint(\"item found: \", dictionary.get('key1', -1))", "item found:  0\nitem found:  0\n" ], [ "# This prints a message because the key is not found\nif 'key7' in dictionary:\n    print(dictionary['key7'])\nelse:\n    print('key does not exist!')\n\n# This prints -1 because the key is not found and we set the default to -1\nprint(dictionary.get('key7', -1))", "key does not exist!\n-1\n" ] ], [ [ "## Word frequency dictionary", "_____no_output_____" ], [ "Now that we know the building blocks, let's finally take a look at the **build_freqs()** function in **utils.py**. 
This is the function that creates the dictionary containing the word counts from each corpus.", "_____no_output_____" ], [ "```python\ndef build_freqs(tweets, ys):\n \"\"\"Build frequencies.\n Input:\n tweets: a list of tweets\n ys: an m x 1 array with the sentiment label of each tweet\n (either 0 or 1)\n Output:\n freqs: a dictionary mapping each (word, sentiment) pair to its\n frequency\n \"\"\"\n # Convert np array to list since zip needs an iterable.\n # The squeeze is necessary or the list ends up with one element.\n # Also note that this is just a NOP if ys is already a list.\n yslist = np.squeeze(ys).tolist()\n\n # Start with an empty dictionary and populate it by looping over all tweets\n # and over all processed words in each tweet.\n freqs = {}\n for y, tweet in zip(yslist, tweets):\n for word in process_tweet(tweet):\n pair = (word, y)\n if pair in freqs:\n freqs[pair] += 1\n else:\n freqs[pair] = 1 \n return freqs\n```", "_____no_output_____" ], [ "You can also do the for loop like this to make it a bit more compact:\n\n```python\n for y, tweet in zip(yslist, tweets):\n for word in process_tweet(tweet):\n pair = (word, y)\n freqs[pair] = freqs.get(pair, 0) + 1\n```", "_____no_output_____" ], [ "As shown above, each key is a 2-element tuple containing a `(word, y)` pair. The `word` is an element in a processed tweet while `y` is an integer representing the corpus: `1` for the positive tweets and `0` for the negative tweets. The value associated with this key is the number of times that word appears in the specified corpus. For example: \n\n``` \n# \"folowfriday\" appears 25 times in the positive tweets\n('followfriday', 1.0): 25\n\n# \"shame\" appears 19 times in the negative tweets\n('shame', 0.0): 19 \n```", "_____no_output_____" ], [ "Now, it is time to use the dictionary returned by the `build_freqs()` function. 
First, let us feed our `tweets` and `labels` lists then print a basic report:", "_____no_output_____" ] ], [ [ "# create frequency dictionary\nfreqs = build_freqs(tweets, labels)\n\n# check data type\nprint(f'type(freqs) = {type(freqs)}')\n\n# check length of the dictionary\nprint(f'len(freqs) = {len(freqs)}')", "type(freqs) = <class 'dict'>\nlen(freqs) = 13076\n" ] ], [ [ "Now print the frequency of each word depending on its class.", "_____no_output_____" ] ], [ [ "print(freqs)", "{('followfriday', 1.0): 25, ('top', 1.0): 32, ('engag', 1.0): 7, ('member', 1.0): 16, ('commun', 1.0): 33, ('week', 1.0): 83, (':)', 1.0): 3568, ('hey', 1.0): 76, ('jame', 1.0): 7, ('odd', 1.0): 2, (':/', 1.0): 5, ('pleas', 1.0): 97, ('call', 1.0): 37, ('contact', 1.0): 7, ('centr', 1.0): 2, ('02392441234', 1.0): 1, ('abl', 1.0): 8, ('assist', 1.0): 1, ('mani', 1.0): 33, ('thank', 1.0): 620, ('listen', 1.0): 16, ('last', 1.0): 47, ('night', 1.0): 68, ('bleed', 1.0): 2, ('amaz', 1.0): 51, ('track', 1.0): 5, ('scotland', 1.0): 2, ('congrat', 1.0): 21, ('yeaaah', 1.0): 1, ('yipppi', 1.0): 1, ('accnt', 1.0): 2, ('verifi', 1.0): 2, ('rqst', 1.0): 1, ('succeed', 1.0): 1, ('got', 1.0): 69, ('blue', 1.0): 9, ('tick', 1.0): 1, ('mark', 1.0): 1, ('fb', 1.0): 6, ('profil', 1.0): 2, ('15', 1.0): 5, ('day', 1.0): 246, ('one', 1.0): 129, ('irresist', 1.0): 2, ('flipkartfashionfriday', 1.0): 17, ('like', 1.0): 233, ('keep', 1.0): 68, ('love', 1.0): 400, ('custom', 1.0): 4, ('wait', 1.0): 70, ('long', 1.0): 36, ('hope', 1.0): 141, ('enjoy', 1.0): 75, ('happi', 1.0): 211, ('friday', 1.0): 116, ('lwwf', 1.0): 1, ('second', 1.0): 10, ('thought', 1.0): 29, ('’', 1.0): 21, ('enough', 1.0): 18, ('time', 1.0): 127, ('dd', 1.0): 1, ('new', 1.0): 143, ('short', 1.0): 7, ('enter', 1.0): 9, ('system', 1.0): 2, ('sheep', 1.0): 1, ('must', 1.0): 18, ('buy', 1.0): 11, ('jgh', 1.0): 4, ('go', 1.0): 148, ('bayan', 1.0): 1, (':D', 1.0): 629, ('bye', 1.0): 7, ('act', 1.0): 8, ('mischiev', 1.0): 1, ('etl', 1.0): 1, ('layer', 1.0): 1, ('in-hous', 1.0): 1, ('wareh', 1.0): 1, ('app', 1.0): 16, ('katamari', 1.0): 1, ('well', 1.0): 81, ('…', 1.0): 38, ('name', 1.0): 18, ('impli', 1.0): 1, (':p', 1.0): 137, ('influenc', 1.0): 18, ('big', 1.0): 33, ('...', 1.0): 289, ('juici', 1.0): 3, ('selfi', 1.0): 12, ('follow', 1.0): 381, ('perfect', 1.0): 24, ('alreadi', 1.0): 28, ('know', 1.0): 145, (\"what'\", 1.0): 17, ('great', 1.0): 171, ('opportun', 1.0): 23, ('junior', 1.0): 2, ('triathlet', 1.0): 1, ('age', 1.0): 2, ('12', 1.0): 5, ('13', 1.0): 6, ('gatorad', 1.0): 1, ('seri', 1.0): 5, ('get', 1.0): 206, ('entri', 1.0): 4, ('lay', 1.0): 4, ('greet', 1.0): 5, ('card', 1.0): 8, ('rang', 1.0): 3, ('print', 1.0): 3, ('today', 1.0): 108, ('job', 1.0): 41, (':-)', 1.0): 692, (\"friend'\", 1.0): 3, ('lunch', 1.0): 5, ('yummm', 1.0): 1, ('nostalgia', 1.0): 1, ('tb', 1.0): 2, ('ku', 1.0): 1, ('id', 1.0): 8, ('conflict', 1.0): 1, ('help', 1.0): 41, (\"here'\", 1.0): 25, ('screenshot', 1.0): 3, ('work', 1.0): 110, ('hi', 1.0): 173, ('liv', 1.0): 2, ('hello', 1.0): 59, ('need', 1.0): 78, ('someth', 1.0): 28, ('u', 1.0): 175, ('fm', 1.0): 2, ('twitter', 1.0): 29, ('—', 1.0): 27, ('sure', 1.0): 58, ('thing', 1.0): 69, ('dm', 1.0): 39, ('x', 1.0): 72, (\"i'v\", 1.0): 35, ('heard', 1.0): 9, ('four', 1.0): 5, ('season', 1.0): 9, ('pretti', 1.0): 20, ('dope', 1.0): 2, ('penthous', 1.0): 1, ('obv', 1.0): 1, ('gobigorgohom', 1.0): 1, ('fun', 1.0): 58, (\"y'all\", 1.0): 3, ('yeah', 1.0): 47, ('suppos', 1.0): 7, ('lol', 1.0): 64, ('chat', 1.0): 13, ('bit', 1.0): 20, ('youth', 
1.0): 19, ('💅', 1.0): 1, ('🏽', 1.0): 2, ('💋', 1.0): 2, ('seen', 1.0): 10, ('year', 1.0): 43, ('rest', 1.0): 12, ('goe', 1.0): 7, ('quickli', 1.0): 3, ('bed', 1.0): 16, ('music', 1.0): 21, ('fix', 1.0): 10, ('dream', 1.0): 20, ('spiritu', 1.0): 1, ('ritual', 1.0): 1, ('festiv', 1.0): 8, ('népal', 1.0): 1, ('begin', 1.0): 4, ('line-up', 1.0): 4, ('left', 1.0): 13, ('see', 1.0): 184, ('sarah', 1.0): 4, ('send', 1.0): 22, ('us', 1.0): 109, ('email', 1.0): 26, ('[email protected]', 1.0): 1, (\"we'll\", 1.0): 20, ('asap', 1.0): 5, ('kik', 1.0): 22, ('hatessuc', 1.0): 1, ('32429', 1.0): 1, ('kikm', 1.0): 1, ('lgbt', 1.0): 2, ('tinder', 1.0): 1, ('nsfw', 1.0): 1, ('akua', 1.0): 1, ('cumshot', 1.0): 1, ('come', 1.0): 70, ('hous', 1.0): 7, ('nsn_supplement', 1.0): 1, ('effect', 1.0): 4, ('press', 1.0): 1, ('releas', 1.0): 11, ('distribut', 1.0): 1, ('result', 1.0): 2, ('link', 1.0): 18, ('remov', 1.0): 3, ('pressreleas', 1.0): 1, ('newsdistribut', 1.0): 1, ('bam', 1.0): 44, ('bestfriend', 1.0): 50, ('lot', 1.0): 87, ('warsaw', 1.0): 44, ('<3', 1.0): 134, ('x46', 1.0): 1, ('everyon', 1.0): 58, ('watch', 1.0): 46, ('documentari', 1.0): 1, ('earthl', 1.0): 2, ('youtub', 1.0): 13, ('support', 1.0): 27, ('buuut', 1.0): 1, ('oh', 1.0): 53, ('look', 1.0): 137, ('forward', 1.0): 29, ('visit', 1.0): 30, ('next', 1.0): 48, ('letsgetmessi', 1.0): 1, ('jo', 1.0): 1, ('make', 1.0): 99, ('feel', 1.0): 46, ('better', 1.0): 52, ('never', 1.0): 36, ('anyon', 1.0): 11, ('kpop', 1.0): 1, ('flesh', 1.0): 1, ('good', 1.0): 238, ('girl', 1.0): 44, ('best', 1.0): 65, ('wish', 1.0): 37, ('reason', 1.0): 13, ('epic', 1.0): 2, ('soundtrack', 1.0): 1, ('shout', 1.0): 12, ('ad', 1.0): 14, ('video', 1.0): 34, ('playlist', 1.0): 5, ('would', 1.0): 84, ('dear', 1.0): 17, ('jordan', 1.0): 1, ('okay', 1.0): 39, ('fake', 1.0): 2, ('gameplay', 1.0): 2, (';)', 1.0): 27, ('haha', 1.0): 53, ('im', 1.0): 51, ('kid', 1.0): 18, ('stuff', 1.0): 13, ('exactli', 1.0): 6, ('product', 1.0): 12, ('line', 1.0): 6, ('etsi', 1.0): 1, ('shop', 1.0): 16, ('check', 1.0): 52, ('vacat', 1.0): 6, ('recharg', 1.0): 1, ('normal', 1.0): 6, ('charger', 1.0): 2, ('asleep', 1.0): 9, ('talk', 1.0): 45, ('sooo', 1.0): 6, ('someon', 1.0): 34, ('text', 1.0): 18, ('ye', 1.0): 77, ('bet', 1.0): 6, (\"he'll\", 1.0): 4, ('fit', 1.0): 3, ('hear', 1.0): 33, ('speech', 1.0): 1, ('piti', 1.0): 3, ('green', 1.0): 3, ('garden', 1.0): 7, ('midnight', 1.0): 1, ('sun', 1.0): 6, ('beauti', 1.0): 50, ('canal', 1.0): 1, ('dasvidaniya', 1.0): 1, ('till', 1.0): 18, ('scout', 1.0): 1, ('sg', 1.0): 1, ('futur', 1.0): 13, ('wlan', 1.0): 1, ('pro', 1.0): 5, ('confer', 1.0): 1, ('asia', 1.0): 1, ('chang', 1.0): 24, ('lollipop', 1.0): 1, ('🍭', 1.0): 1, ('nez', 1.0): 1, ('agnezmo', 1.0): 1, ('oley', 1.0): 1, ('mama', 1.0): 1, ('stand', 1.0): 8, ('stronger', 1.0): 1, ('god', 1.0): 20, ('misti', 1.0): 1, ('babi', 1.0): 20, ('cute', 1.0): 26, ('woohoo', 1.0): 3, (\"can't\", 1.0): 43, ('sign', 1.0): 11, ('yet', 1.0): 13, ('still', 1.0): 48, ('think', 1.0): 63, ('mka', 1.0): 5, ('liam', 1.0): 8, ('access', 1.0): 3, ('welcom', 1.0): 73, ('stat', 1.0): 60, ('arriv', 1.0): 67, ('1', 1.0): 75, ('unfollow', 1.0): 63, ('via', 1.0): 69, ('surpris', 1.0): 10, ('figur', 1.0): 5, ('happybirthdayemilybett', 1.0): 1, ('sweet', 1.0): 19, ('talent', 1.0): 5, ('2', 1.0): 58, ('plan', 1.0): 27, ('drain', 1.0): 1, ('gotta', 1.0): 5, ('timezon', 1.0): 1, ('parent', 1.0): 5, ('proud', 1.0): 12, ('least', 1.0): 16, ('mayb', 1.0): 18, ('sometim', 1.0): 13, ('grade', 1.0): 4, ('al', 1.0): 4, ('grand', 1.0): 4, 
('manila_bro', 1.0): 2, ('chosen', 1.0): 1, ('let', 1.0): 68, ('around', 1.0): 17, ('..', 1.0): 128, ('side', 1.0): 15, ('world', 1.0): 27, ('eh', 1.0): 2, ('take', 1.0): 43, ('care', 1.0): 18, ('final', 1.0): 30, ('fuck', 1.0): 26, ('weekend', 1.0): 75, ('real', 1.0): 21, ('x45', 1.0): 1, ('join', 1.0): 23, ('hushedcallwithfraydo', 1.0): 1, ('gift', 1.0): 8, ('yeahhh', 1.0): 1, ('hushedpinwithsammi', 1.0): 2, ('event', 1.0): 8, ('might', 1.0): 27, ('luv', 1.0): 6, ('realli', 1.0): 79, ('appreci', 1.0): 31, ('share', 1.0): 46, ('wow', 1.0): 22, ('tom', 1.0): 5, ('gym', 1.0): 4, ('monday', 1.0): 9, ('invit', 1.0): 17, ('scope', 1.0): 5, ('friend', 1.0): 61, ('nude', 1.0): 2, ('sleep', 1.0): 45, ('birthday', 1.0): 74, ('want', 1.0): 96, ('t-shirt', 1.0): 3, ('cool', 1.0): 38, ('haw', 1.0): 1, ('phela', 1.0): 1, ('mom', 1.0): 10, ('obvious', 1.0): 2, ('princ', 1.0): 1, ('charm', 1.0): 1, ('stage', 1.0): 2, ('luck', 1.0): 30, ('tyler', 1.0): 2, ('hipster', 1.0): 1, ('glass', 1.0): 5, ('marti', 1.0): 2, ('glad', 1.0): 43, ('done', 1.0): 54, ('afternoon', 1.0): 10, ('read', 1.0): 34, ('kahfi', 1.0): 1, ('finish', 1.0): 17, ('ohmyg', 1.0): 1, ('yaya', 1.0): 3, ('dub', 1.0): 2, ('stalk', 1.0): 2, ('ig', 1.0): 3, ('gondooo', 1.0): 1, ('moo', 1.0): 2, ('tologooo', 1.0): 1, ('becom', 1.0): 10, ('detail', 1.0): 10, ('zzz', 1.0): 1, ('xx', 1.0): 42, ('physiotherapi', 1.0): 1, ('hashtag', 1.0): 5, ('💪', 1.0): 1, ('monica', 1.0): 1, ('miss', 1.0): 27, ('sound', 1.0): 23, ('morn', 1.0): 101, (\"that'\", 1.0): 67, ('x43', 1.0): 1, ('definit', 1.0): 23, ('tri', 1.0): 44, ('tonight', 1.0): 20, ('took', 1.0): 8, ('advic', 1.0): 6, ('treviso', 1.0): 1, ('concert', 1.0): 24, ('citi', 1.0): 27, ('countri', 1.0): 23, (\"i'll\", 1.0): 90, ('start', 1.0): 61, ('fine', 1.0): 10, ('gorgeou', 1.0): 12, ('xo', 1.0): 2, ('oven', 1.0): 3, ('roast', 1.0): 2, ('garlic', 1.0): 1, ('oliv', 1.0): 1, ('oil', 1.0): 4, ('dri', 1.0): 5, ('tomato', 1.0): 1, ('basil', 1.0): 1, ('centuri', 1.0): 1, ('tuna', 1.0): 1, ('right', 1.0): 47, ('back', 1.0): 98, ('atchya', 1.0): 1, ('even', 1.0): 35, ('almost', 1.0): 10, ('chanc', 1.0): 6, ('cheer', 1.0): 20, ('po', 1.0): 4, ('ice', 1.0): 6, ('cream', 1.0): 6, ('agre', 1.0): 16, ('100', 1.0): 8, ('heheheh', 1.0): 2, ('that', 1.0): 13, ('point', 1.0): 13, ('stay', 1.0): 25, ('home', 1.0): 31, ('soon', 1.0): 47, ('promis', 1.0): 6, ('web', 1.0): 4, ('whatsapp', 1.0): 5, ('volta', 1.0): 1, ('funcionar', 1.0): 1, ('com', 1.0): 2, ('iphon', 1.0): 7, ('jailbroken', 1.0): 1, ('later', 1.0): 16, ('34', 1.0): 3, ('min', 1.0): 9, ('leia', 1.0): 1, ('appear', 1.0): 3, ('hologram', 1.0): 1, ('r2d2', 1.0): 1, ('w', 1.0): 18, ('messag', 1.0): 10, ('obi', 1.0): 1, ('wan', 1.0): 3, ('sit', 1.0): 8, ('luke', 1.0): 6, ('inter', 1.0): 1, ('3', 1.0): 32, ('ucl', 1.0): 1, ('arsen', 1.0): 2, ('small', 1.0): 4, ('team', 1.0): 29, ('pass', 1.0): 12, ('🚂', 1.0): 1, ('dewsburi', 1.0): 2, ('railway', 1.0): 1, ('station', 1.0): 4, ('dew', 1.0): 1, ('west', 1.0): 3, ('yorkshir', 1.0): 2, ('430', 1.0): 1, ('smh', 1.0): 2, ('9:25', 1.0): 1, ('live', 1.0): 26, ('strang', 1.0): 4, ('imagin', 1.0): 5, ('megan', 1.0): 1, ('masaantoday', 1.0): 6, ('a4', 1.0): 3, ('shweta', 1.0): 1, ('tripathi', 1.0): 1, ('5', 1.0): 17, ('20', 1.0): 6, ('kurta', 1.0): 3, ('half', 1.0): 7, ('number', 1.0): 13, ('wsalelov', 1.0): 16, ('ah', 1.0): 13, ('larri', 1.0): 3, ('anyway', 1.0): 16, ('kinda', 1.0): 13, ('goood', 1.0): 4, ('life', 1.0): 49, ('enn', 1.0): 1, ('could', 1.0): 32, ('warmup', 1.0): 1, ('15th', 1.0): 2, ('bath', 1.0): 7, ('dum', 
1.0): 2, ('andar', 1.0): 1, ('ram', 1.0): 1, ('sampath', 1.0): 1, ('sona', 1.0): 1, ('mohapatra', 1.0): 1, ('samantha', 1.0): 1, ('edward', 1.0): 1, ('mein', 1.0): 1, ('tulan', 1.0): 1, ('razi', 1.0): 2, ('wah', 1.0): 2, ('josh', 1.0): 1, ('alway', 1.0): 67, ('smile', 1.0): 62, ('pictur', 1.0): 12, ('16.20', 1.0): 1, ('giveitup', 1.0): 1, ('given', 1.0): 3, ('ga', 1.0): 3, ('subsidi', 1.0): 1, ('initi', 1.0): 4, ('propos', 1.0): 3, ('delight', 1.0): 7, ('yesterday', 1.0): 7, ('x42', 1.0): 1, ('lmaoo', 1.0): 2, ('song', 1.0): 22, ('ever', 1.0): 23, ('shall', 1.0): 6, ('littl', 1.0): 31, ('throwback', 1.0): 3, ('outli', 1.0): 1, ('island', 1.0): 5, ('cheung', 1.0): 1, ('chau', 1.0): 1, ('mui', 1.0): 1, ('wo', 1.0): 1, ('total', 1.0): 9, ('differ', 1.0): 11, ('kfckitchentour', 1.0): 2, ('kitchen', 1.0): 4, ('clean', 1.0): 1, (\"i'm\", 1.0): 183, ('cusp', 1.0): 1, ('test', 1.0): 7, ('water', 1.0): 8, ('reward', 1.0): 1, ('arummzz', 1.0): 2, (\"let'\", 1.0): 23, ('drive', 1.0): 11, ('travel', 1.0): 20, ('yogyakarta', 1.0): 3, ('jeep', 1.0): 3, ('indonesia', 1.0): 4, ('instamood', 1.0): 3, ('wanna', 1.0): 30, ('skype', 1.0): 3, ('may', 1.0): 22, ('nice', 1.0): 98, ('friendli', 1.0): 2, ('pretend', 1.0): 2, ('film', 1.0): 9, ('congratul', 1.0): 15, ('winner', 1.0): 4, ('cheesydelight', 1.0): 1, ('contest', 1.0): 6, ('address', 1.0): 10, ('guy', 1.0): 60, ('market', 1.0): 5, ('24/7', 1.0): 1, ('14', 1.0): 1, ('hour', 1.0): 27, ('leav', 1.0): 12, ('without', 1.0): 12, ('delay', 1.0): 2, ('actual', 1.0): 19, ('easi', 1.0): 9, ('guess', 1.0): 14, ('train', 1.0): 10, ('wd', 1.0): 1, ('shift', 1.0): 5, ('engin', 1.0): 2, ('etc', 1.0): 2, ('sunburn', 1.0): 1, ('peel', 1.0): 2, ('blog', 1.0): 31, ('huge', 1.0): 11, ('warm', 1.0): 6, ('☆', 1.0): 3, ('complet', 1.0): 11, ('triangl', 1.0): 2, ('northern', 1.0): 1, ('ireland', 1.0): 2, ('sight', 1.0): 1, ('smthng', 1.0): 2, ('fr', 1.0): 3, ('hug', 1.0): 13, ('xoxo', 1.0): 3, ('uu', 1.0): 1, ('jaann', 1.0): 1, ('topnewfollow', 1.0): 2, ('connect', 1.0): 14, ('wonder', 1.0): 35, ('made', 1.0): 53, ('fluffi', 1.0): 1, ('insid', 1.0): 8, ('pirouett', 1.0): 1, ('moos', 1.0): 1, ('trip', 1.0): 14, ('philli', 1.0): 1, ('decemb', 1.0): 3, (\"i'd\", 1.0): 20, ('dude', 1.0): 6, ('x41', 1.0): 1, ('question', 1.0): 17, ('flaw', 1.0): 1, ('pain', 1.0): 9, ('negat', 1.0): 1, ('strength', 1.0): 3, ('went', 1.0): 12, ('solo', 1.0): 4, ('move', 1.0): 12, ('fav', 1.0): 13, ('nirvana', 1.0): 1, ('smell', 1.0): 2, ('teen', 1.0): 3, ('spirit', 1.0): 3, ('rip', 1.0): 3, ('ami', 1.0): 4, ('winehous', 1.0): 1, ('coupl', 1.0): 9, ('tomhiddleston', 1.0): 1, ('elizabetholsen', 1.0): 1, ('yaytheylookgreat', 1.0): 1, ('goodnight', 1.0): 24, ('vid', 1.0): 11, ('wake', 1.0): 12, ('gonna', 1.0): 21, ('shoot', 1.0): 6, ('itti', 1.0): 2, ('bitti', 1.0): 2, ('teeni', 1.0): 2, ('bikini', 1.0): 3, ('much', 1.0): 89, ('4th', 1.0): 4, ('togeth', 1.0): 7, ('end', 1.0): 20, ('xfile', 1.0): 1, ('content', 1.0): 4, ('rain', 1.0): 21, ('fabul', 1.0): 5, ('fantast', 1.0): 13, ('♡', 1.0): 20, ('jb', 1.0): 1, ('forev', 1.0): 5, ('belieb', 1.0): 3, ('nighti', 1.0): 1, ('bug', 1.0): 3, ('bite', 1.0): 1, ('bracelet', 1.0): 2, ('idea', 1.0): 26, ('foundri', 1.0): 1, ('game', 1.0): 27, ('sens', 1.0): 7, ('pic', 1.0): 27, ('ef', 1.0): 1, ('phone', 1.0): 19, ('woot', 1.0): 2, ('derek', 1.0): 1, ('use', 1.0): 44, ('parkshar', 1.0): 1, ('gloucestershir', 1.0): 1, ('aaaahhh', 1.0): 1, ('man', 1.0): 23, ('traffic', 1.0): 2, ('stress', 1.0): 8, ('reliev', 1.0): 1, (\"how'r\", 1.0): 1, ('arbeloa', 1.0): 1, ('turn', 
1.0): 15, ('17', 1.0): 4, ('omg', 1.0): 15, ('say', 1.0): 61, ('europ', 1.0): 1, ('rise', 1.0): 2, ('find', 1.0): 23, ('hard', 1.0): 12, ('believ', 1.0): 9, ('uncount', 1.0): 1, ('coz', 1.0): 3, ('unlimit', 1.0): 1, ('cours', 1.0): 18, ('teamposit', 1.0): 1, ('aldub', 1.0): 2, ('☕', 1.0): 3, ('rita', 1.0): 2, ('info', 1.0): 13, (\"we'd\", 1.0): 4, ('way', 1.0): 46, ('boy', 1.0): 21, ('x40', 1.0): 1, ('true', 1.0): 22, ('sethi', 1.0): 2, ('high', 1.0): 7, ('exe', 1.0): 1, ('skeem', 1.0): 1, ('saam', 1.0): 1, ('peopl', 1.0): 48, ('polit', 1.0): 2, ('izzat', 1.0): 1, ('wese', 1.0): 1, ('trust', 1.0): 9, ('khawateen', 1.0): 1, ('k', 1.0): 9, ('sath', 1.0): 2, ('mana', 1.0): 1, ('kar', 1.0): 1, ('deya', 1.0): 1, ('sort', 1.0): 9, ('smart', 1.0): 5, ('hair', 1.0): 12, ('tbh', 1.0): 5, ('jacob', 1.0): 2, ('g', 1.0): 10, ('upgrad', 1.0): 6, ('tee', 1.0): 2, ('famili', 1.0): 19, ('person', 1.0): 19, ('two', 1.0): 22, ('convers', 1.0): 6, ('onlin', 1.0): 7, ('mclaren', 1.0): 1, ('fridayfeel', 1.0): 5, ('tgif', 1.0): 10, ('squar', 1.0): 1, ('enix', 1.0): 1, ('bissmillah', 1.0): 1, ('ya', 1.0): 23, ('allah', 1.0): 3, (\"we'r\", 1.0): 29, ('socent', 1.0): 1, ('startup', 1.0): 2, ('drop', 1.0): 9, ('your', 1.0): 3, ('arnd', 1.0): 1, ('town', 1.0): 5, ('basic', 1.0): 4, ('piss', 1.0): 3, ('cup', 1.0): 4, ('also', 1.0): 35, ('terribl', 1.0): 2, ('complic', 1.0): 1, ('discuss', 1.0): 3, ('snapchat', 1.0): 36, ('lynettelow', 1.0): 1, ('kikmenow', 1.0): 3, ('snapm', 1.0): 2, ('hot', 1.0): 24, ('amazon', 1.0): 1, ('kikmeguy', 1.0): 3, ('defin', 1.0): 2, ('grow', 1.0): 7, ('sport', 1.0): 4, ('rt', 1.0): 12, ('rakyat', 1.0): 1, ('write', 1.0): 13, ('sinc', 1.0): 15, ('mention', 1.0): 24, ('fli', 1.0): 5, ('fish', 1.0): 3, ('promot', 1.0): 5, ('post', 1.0): 21, ('cyber', 1.0): 1, ('ourdaughtersourprid', 1.0): 5, ('mypapamyprid', 1.0): 2, ('papa', 1.0): 2, ('coach', 1.0): 2, ('posit', 1.0): 8, ('kha', 1.0): 1, ('atleast', 1.0): 2, ('x39', 1.0): 1, ('mango', 1.0): 1, (\"lassi'\", 1.0): 1, (\"monty'\", 1.0): 1, ('marvel', 1.0): 2, ('though', 1.0): 19, ('suspect', 1.0): 3, ('meant', 1.0): 3, ('24', 1.0): 4, ('hr', 1.0): 2, ('touch', 1.0): 15, ('kepler', 1.0): 4, ('452b', 1.0): 5, ('chalna', 1.0): 1, ('hai', 1.0): 11, ('thankyou', 1.0): 14, ('hazel', 1.0): 1, ('food', 1.0): 6, ('brooklyn', 1.0): 1, ('pta', 1.0): 2, ('awak', 1.0): 10, ('okayi', 1.0): 2, ('awww', 1.0): 15, ('ha', 1.0): 23, ('doc', 1.0): 1, ('splendid', 1.0): 1, ('spam', 1.0): 1, ('folder', 1.0): 1, ('amount', 1.0): 1, ('nigeria', 1.0): 1, ('claim', 1.0): 1, ('rted', 1.0): 1, ('leg', 1.0): 5, ('hurt', 1.0): 8, ('bad', 1.0): 18, ('mine', 1.0): 14, ('saturday', 1.0): 8, ('thaaank', 1.0): 1, ('puhon', 1.0): 1, ('happinesss', 1.0): 1, ('tnc', 1.0): 1, ('prior', 1.0): 1, ('notif', 1.0): 2, ('fat', 1.0): 1, ('co', 1.0): 1, ('probabl', 1.0): 9, ('ate', 1.0): 4, ('yuna', 1.0): 2, ('tamesid', 1.0): 1, ('´', 1.0): 3, ('googl', 1.0): 6, ('account', 1.0): 19, ('scouser', 1.0): 1, ('everyth', 1.0): 13, ('zoe', 1.0): 2, ('mate', 1.0): 7, ('liter', 1.0): 6, (\"they'r\", 1.0): 12, ('samee', 1.0): 1, ('edgar', 1.0): 1, ('updat', 1.0): 13, ('log', 1.0): 4, ('bring', 1.0): 17, ('abe', 1.0): 1, ('meet', 1.0): 34, ('x38', 1.0): 1, ('sigh', 1.0): 3, ('dreamili', 1.0): 1, ('pout', 1.0): 1, ('eye', 1.0): 14, ('quacketyquack', 1.0): 7, ('funni', 1.0): 19, ('happen', 1.0): 16, ('phil', 1.0): 1, ('em', 1.0): 3, ('del', 1.0): 1, ('rodder', 1.0): 1, ('els', 1.0): 10, ('play', 1.0): 46, ('newest', 1.0): 1, ('gamejam', 1.0): 1, ('irish', 1.0): 2, ('literatur', 1.0): 2, ('inaccess', 
1.0): 2, (\"kareena'\", 1.0): 2, ('fan', 1.0): 30, ('brain', 1.0): 13, ('dot', 1.0): 11, ('braindot', 1.0): 11, ('fair', 1.0): 5, ('rush', 1.0): 1, ('either', 1.0): 11, ('brandi', 1.0): 1, ('18', 1.0): 5, ('carniv', 1.0): 1, ('men', 1.0): 10, ('put', 1.0): 17, ('mask', 1.0): 3, ('xavier', 1.0): 1, ('forneret', 1.0): 1, ('jennif', 1.0): 1, ('site', 1.0): 9, ('free', 1.0): 37, ('50.000', 1.0): 3, ('8', 1.0): 10, ('ball', 1.0): 7, ('pool', 1.0): 5, ('coin', 1.0): 5, ('edit', 1.0): 7, ('trish', 1.0): 1, ('♥', 1.0): 19, ('grate', 1.0): 5, ('three', 1.0): 10, ('comment', 1.0): 8, ('wakeup', 1.0): 1, ('besid', 1.0): 2, ('dirti', 1.0): 2, ('sex', 1.0): 6, ('lmaooo', 1.0): 1, ('😤', 1.0): 2, ('loui', 1.0): 4, (\"he'\", 1.0): 11, ('throw', 1.0): 3, ('caus', 1.0): 15, ('inspir', 1.0): 7, ('ff', 1.0): 48, ('twoof', 1.0): 3, ('gr8', 1.0): 1, ('wkend', 1.0): 3, ('kind', 1.0): 24, ('exhaust', 1.0): 2, ('word', 1.0): 20, ('cheltenham', 1.0): 1, ('area', 1.0): 4, ('kale', 1.0): 1, ('crisp', 1.0): 1, ('ruin', 1.0): 5, ('x37', 1.0): 1, ('open', 1.0): 12, ('worldwid', 1.0): 2, ('outta', 1.0): 1, ('sfvbeta', 1.0): 1, ('vantast', 1.0): 1, ('xcylin', 1.0): 1, ('bundl', 1.0): 1, ('show', 1.0): 28, ('internet', 1.0): 2, ('price', 1.0): 4, ('realisticli', 1.0): 1, ('pay', 1.0): 8, ('net', 1.0): 1, ('educ', 1.0): 1, ('power', 1.0): 7, ('weapon', 1.0): 1, ('nelson', 1.0): 1, ('mandela', 1.0): 1, ('recent', 1.0): 9, ('j', 1.0): 3, ('chenab', 1.0): 1, ('flow', 1.0): 5, ('pakistan', 1.0): 2, ('incredibleindia', 1.0): 1, ('teenchoic', 1.0): 10, ('choiceinternationalartist', 1.0): 9, ('superjunior', 1.0): 9, ('caught', 1.0): 4, ('first', 1.0): 50, ('salmon', 1.0): 3, ('super-blend', 1.0): 1, ('project', 1.0): 6, ('[email protected]', 1.0): 1, ('awesom', 1.0): 42, ('stream', 1.0): 14, ('alma', 1.0): 1, ('mater', 1.0): 1, ('highschoolday', 1.0): 1, ('clientvisit', 1.0): 1, ('faith', 1.0): 3, ('christian', 1.0): 1, ('school', 1.0): 9, ('lizaminnelli', 1.0): 1, ('upcom', 1.0): 2, ('uk', 1.0): 4, ('😄', 1.0): 5, ('singl', 1.0): 6, ('hill', 1.0): 4, ('everi', 1.0): 26, ('beat', 1.0): 10, ('wrong', 1.0): 10, ('readi', 1.0): 25, ('natur', 1.0): 1, ('pefumeri', 1.0): 1, ('workshop', 1.0): 3, ('neal', 1.0): 1, ('yard', 1.0): 1, ('covent', 1.0): 1, ('tomorrow', 1.0): 40, ('fback', 1.0): 27, ('indo', 1.0): 1, ('harmo', 1.0): 1, ('americano', 1.0): 1, ('rememb', 1.0): 16, ('aww', 1.0): 10, ('head', 1.0): 14, ('saw', 1.0): 19, ('dark', 1.0): 6, ('handshom', 1.0): 1, ('juga', 1.0): 1, ('hurray', 1.0): 1, ('hate', 1.0): 13, ('cant', 1.0): 15, ('decid', 1.0): 4, ('save', 1.0): 12, ('list', 1.0): 15, ('hiya', 1.0): 4, ('exec', 1.0): 1, ('[email protected]', 1.0): 1, ('photo', 1.0): 19, ('thx', 1.0): 15, ('4', 1.0): 24, ('china', 1.0): 2, ('homosexu', 1.0): 1, ('hyungbot', 1.0): 1, ('give', 1.0): 48, ('fam', 1.0): 5, ('mind', 1.0): 23, ('timetunnel', 1.0): 1, ('1982', 1.0): 1, ('quit', 1.0): 13, ('radio', 1.0): 5, ('set', 1.0): 11, ('heart', 1.0): 11, ('hiii', 1.0): 2, ('jack', 1.0): 3, ('ili', 1.0): 5, ('✨', 1.0): 4, ('domino', 1.0): 1, ('pub', 1.0): 1, ('heat', 1.0): 1, ('prob', 1.0): 5, ('sorri', 1.0): 22, ('hastili', 1.0): 1, ('type', 1.0): 6, ('came', 1.0): 7, ('pakistani', 1.0): 1, ('x36', 1.0): 1, ('3point', 1.0): 1, ('dreamteam', 1.0): 1, ('gooo', 1.0): 2, ('bailey', 1.0): 2, ('pbb', 1.0): 4, ('737gold', 1.0): 3, ('drank', 1.0): 2, ('old', 1.0): 13, ('gotten', 1.0): 2, ('1/2', 1.0): 1, ('welsh', 1.0): 1, ('wale', 1.0): 3, ('yippe', 1.0): 1, ('💟', 1.0): 4, ('bro', 1.0): 24, ('lord', 1.0): 4, ('michael', 1.0): 4, (\"u'r\", 1.0): 1, 
('ure', 1.0): 1, ('bigot', 1.0): 1, ('usual', 1.0): 6, ('front', 1.0): 4, ('squat', 1.0): 1, ('dobar', 1.0): 1, ('dan', 1.0): 5, ('brand', 1.0): 8, ('heavi', 1.0): 2, ('musicolog', 1.0): 1, ('2015', 1.0): 16, ('spend', 1.0): 2, ('marathon', 1.0): 1, ('iflix', 1.0): 2, ('offici', 1.0): 10, ('graduat', 1.0): 3, ('cri', 1.0): 9, ('__', 1.0): 1, ('yep', 1.0): 9, ('expert', 1.0): 4, ('bisexu', 1.0): 1, ('minal', 1.0): 1, ('aidzin', 1.0): 1, ('yo', 1.0): 7, ('pi', 1.0): 1, ('cook', 1.0): 2, ('book', 1.0): 21, ('dinner', 1.0): 7, ('tough', 1.0): 2, ('choic', 1.0): 8, ('other', 1.0): 12, ('chill', 1.0): 6, ('smu', 1.0): 1, ('oval', 1.0): 1, ('basketbal', 1.0): 1, ('player', 1.0): 4, ('whahahaha', 1.0): 1, ('soamaz', 1.0): 1, ('moment', 1.0): 12, ('onto', 1.0): 3, ('a5', 1.0): 1, ('wardrob', 1.0): 2, ('user', 1.0): 3, ('teamr', 1.0): 1, ('appar', 1.0): 6, ('depend', 1.0): 2, ('greatli', 1.0): 1, ('design', 1.0): 21, ('ahhh', 1.0): 1, ('7th', 1.0): 1, ('cinepambata', 1.0): 1, ('mechan', 1.0): 1, ('form', 1.0): 4, ('download', 1.0): 10, ('ur', 1.0): 38, ('swisher', 1.0): 1, ('cop', 1.0): 1, ('ducktail', 1.0): 1, ('surreal', 1.0): 3, ('exposur', 1.0): 1, ('sotw', 1.0): 1, ('halesowen', 1.0): 1, ('blackcountryfair', 1.0): 1, ('street', 1.0): 1, ('assess', 1.0): 1, ('mental', 1.0): 4, ('bodi', 1.0): 15, ('ooz', 1.0): 1, ('appeal', 1.0): 1, ('amassiveoverdoseofship', 1.0): 1, ('latest', 1.0): 5, ('isi', 1.0): 1, ('chan', 1.0): 1, ('c', 1.0): 9, ('note', 1.0): 6, ('pkwalasawa', 1.0): 1, ('gemma', 1.0): 1, ('orlean', 1.0): 1, ('fever', 1.0): 2, ('geskenya', 1.0): 1, ('obamainkenya', 1.0): 1, ('magicalkenya', 1.0): 1, ('greatkenya', 1.0): 1, ('allgoodthingsk', 1.0): 1, ('anim', 1.0): 6, ('umaru', 1.0): 1, ('singer', 1.0): 4, ('ship', 1.0): 8, ('order', 1.0): 17, ('room', 1.0): 5, ('car', 1.0): 6, ('gone', 1.0): 5, ('hahaha', 1.0): 14, ('stori', 1.0): 11, ('relat', 1.0): 4, ('label', 1.0): 1, ('worst', 1.0): 3, ('batch', 1.0): 1, ('princip', 1.0): 1, ('due', 1.0): 3, ('march', 1.0): 1, ('wooftast', 1.0): 2, ('receiv', 1.0): 8, ('necessari', 1.0): 1, ('regret', 1.0): 4, ('rn', 1.0): 4, ('whatev', 1.0): 5, ('hat', 1.0): 1, ('success', 1.0): 6, ('abstin', 1.0): 1, ('wtf', 1.0): 3, (\"there'\", 1.0): 11, ('thrown', 1.0): 1, ('middl', 1.0): 2, ('repeat', 1.0): 3, ('relentlessli', 1.0): 1, ('approxim', 1.0): 1, ('oldschool', 1.0): 1, ('runescap', 1.0): 1, ('daaay', 1.0): 1, ('jumma_mubarik', 1.0): 1, ('frnd', 1.0): 1, ('stay_bless', 1.0): 1, ('bless', 1.0): 12, ('pussycat', 1.0): 1, ('main', 1.0): 7, ('launch', 1.0): 4, ('pretoria', 1.0): 1, ('fahrinahmad', 1.0): 1, ('tengkuaaronshah', 1.0): 1, ('eksperimencinta', 1.0): 1, ('tykkäsin', 1.0): 1, ('videosta', 1.0): 1, ('month', 1.0): 13, ('hoodi', 1.0): 2, ('eeep', 1.0): 1, ('yay', 1.0): 16, ('sohappyrightnow', 1.0): 1, ('mmm', 1.0): 1, ('azz-set', 1.0): 1, ('babe', 1.0): 9, ('feedback', 1.0): 11, ('gain', 1.0): 6, ('valu', 1.0): 2, ('peac', 1.0): 8, ('refresh', 1.0): 5, ('manthan', 1.0): 1, ('tune', 1.0): 5, ('fresh', 1.0): 6, ('mother', 1.0): 5, ('determin', 1.0): 2, ('maxfreshmov', 1.0): 2, ('loneliest', 1.0): 1, ('tattoo', 1.0): 3, ('friday.and', 1.0): 1, ('magnific', 1.0): 2, ('e', 1.0): 5, ('achiev', 1.0): 2, ('rashmi', 1.0): 1, ('dedic', 1.0): 2, ('happyfriday', 1.0): 6, ('nearli', 1.0): 4, ('retweet', 1.0): 35, ('alert', 1.0): 1, ('da', 1.0): 5, ('dang', 1.0): 2, ('rad', 1.0): 2, ('fanart', 1.0): 1, ('massiv', 1.0): 1, ('niamh', 1.0): 1, ('fennel', 1.0): 1, ('journal', 1.0): 1, ('land', 1.0): 2, ('copi', 1.0): 5, ('past', 1.0): 7, ('tweet', 1.0): 61, 
('yesss', 1.0): 5, ('ariana', 1.0): 2, ('selena', 1.0): 2, ('gomez', 1.0): 1, ('tomlinson', 1.0): 1, ('payn', 1.0): 1, ('caradelevingn', 1.0): 1, ('🌷', 1.0): 1, ('trade', 1.0): 3, ('tire', 1.0): 5, ('nope', 1.0): 7, ('appli', 1.0): 6, ('iamca', 1.0): 1, ('found', 1.0): 15, ('afti', 1.0): 1, ('goodmorn', 1.0): 8, ('prokabaddi', 1.0): 1, ('koel', 1.0): 1, ('mallick', 1.0): 1, ('recit', 1.0): 4, ('nation', 1.0): 3, ('anthem', 1.0): 1, ('6', 1.0): 23, ('yournaturallead', 1.0): 1, ('youngnaturallead', 1.0): 1, ('mon', 1.0): 3, ('27juli', 1.0): 1, ('cumbria', 1.0): 1, ('flockstar', 1.0): 1, ('thur', 1.0): 2, ('30juli', 1.0): 1, ('itv', 1.0): 1, ('sleeptight', 1.0): 1, ('haveagoodday', 1.0): 1, ('septemb', 1.0): 5, ('perhap', 1.0): 3, ('bb', 1.0): 4, ('full', 1.0): 19, ('album', 1.0): 6, ('fulli', 1.0): 2, ('intend', 1.0): 1, ('possibl', 1.0): 7, ('attack', 1.0): 3, ('>:d', 1.0): 4, ('bird', 1.0): 4, ('teamadmicro', 1.0): 1, ('fridaydownpour', 1.0): 1, ('clear', 1.0): 4, ('rohit', 1.0): 1, ('queen', 1.0): 8, ('otwolgrandtrail', 1.0): 3, ('sheer', 1.0): 1, ('fact', 1.0): 8, ('obama', 1.0): 1, ('innumer', 1.0): 1, ('presid', 1.0): 2, ('ni', 1.0): 3, ('shauri', 1.0): 1, ('yako', 1.0): 1, ('memotohat', 1.0): 1, ('sunday', 1.0): 9, ('pamper', 1.0): 2, (\"t'wa\", 1.0): 1, ('cabincrew', 1.0): 1, ('interview', 1.0): 5, ('langkawi', 1.0): 1, ('1st', 1.0): 1, ('august', 1.0): 7, ('fulfil', 1.0): 5, ('fantasi', 1.0): 6, ('👉', 1.0): 6, ('ex-tweleb', 1.0): 1, ('apart', 1.0): 2, ('makeov', 1.0): 1, ('brilliantli', 1.0): 1, ('happyyi', 1.0): 1, ('birthdaaayyy', 1.0): 2, ('kill', 1.0): 3, ('interest', 1.0): 20, ('internship', 1.0): 3, ('program', 1.0): 5, ('sadli', 1.0): 1, ('career', 1.0): 3, ('page', 1.0): 9, ('issu', 1.0): 10, ('sad', 1.0): 5, ('overwhelmingli', 1.0): 1, ('aha', 1.0): 2, ('beaut', 1.0): 2, ('♬', 1.0): 2, ('win', 1.0): 16, ('deo', 1.0): 1, ('faaabul', 1.0): 1, ('freebiefriday', 1.0): 4, ('aluminiumfre', 1.0): 1, ('stayfresh', 1.0): 1, ('john', 1.0): 6, ('worri', 1.0): 18, ('navig', 1.0): 1, ('thnk', 1.0): 1, ('progrmr', 1.0): 1, ('9pm', 1.0): 1, ('9am', 1.0): 2, ('hardli', 1.0): 1, ('rose', 1.0): 4, ('emot', 1.0): 3, ('poetri', 1.0): 1, ('frequentfly', 1.0): 1, ('break', 1.0): 10, ('apolog', 1.0): 4, ('kb', 1.0): 1, ('londondairi', 1.0): 1, ('icecream', 1.0): 2, ('experi', 1.0): 7, ('cover', 1.0): 9, ('sin', 1.0): 1, ('excit', 1.0): 33, (\":')\", 1.0): 2, ('xxx', 1.0): 15, ('jim', 1.0): 1, ('chuckl', 1.0): 1, ('cake', 1.0): 10, ('doh', 1.0): 1, ('500', 1.0): 2, ('subscrib', 1.0): 2, ('reach', 1.0): 1, ('scorch', 1.0): 1, ('summer', 1.0): 17, ('younger', 1.0): 4, ('woman', 1.0): 4, ('stamina', 1.0): 1, ('expect', 1.0): 6, ('anyth', 1.0): 22, ('less', 1.0): 8, ('tweeti', 1.0): 1, ('fab', 1.0): 12, ('dont', 1.0): 13, ('-->', 1.0): 2, ('10', 1.0): 16, ('loner', 1.0): 3, ('introduc', 1.0): 3, ('vs', 1.0): 4, ('alter', 1.0): 1, ('understand', 1.0): 6, ('spread', 1.0): 8, ('problem', 1.0): 19, ('supa', 1.0): 1, ('dupa', 1.0): 1, ('near', 1.0): 6, ('dartmoor', 1.0): 1, ('gold', 1.0): 7, ('colour', 1.0): 4, ('ok', 1.0): 38, ('someday', 1.0): 4, ('r', 1.0): 14, ('dii', 1.0): 1, ('n', 1.0): 17, ('forget', 1.0): 17, ('si', 1.0): 4, ('smf', 1.0): 1, ('ft', 1.0): 4, ('japanes', 1.0): 3, ('import', 1.0): 5, ('kitti', 1.0): 1, ('match', 1.0): 6, ('stationari', 1.0): 1, ('draw', 1.0): 6, ('close', 1.0): 14, ('broken', 1.0): 3, ('specialis', 1.0): 4, ('thermal', 1.0): 4, ('imag', 1.0): 6, ('survey', 1.0): 4, ('–', 1.0): 14, ('south', 1.0): 2, ('korea', 1.0): 3, ('scamper', 1.0): 1, ('slept', 1.0): 4, ('alarm', 
1.0): 1, (\"ain't\", 1.0): 5, ('mad', 1.0): 4, ('chweina', 1.0): 1, ('xd', 1.0): 4, ('jotzh', 1.0): 1, ('wast', 1.0): 7, ('place', 1.0): 21, ('worth', 1.0): 11, ('coat', 1.0): 3, ('beforehand', 1.0): 1, ('tho', 1.0): 12, ('foh', 1.0): 2, ('outsid', 1.0): 5, ('holiday', 1.0): 11, ('menac', 1.0): 1, ('jojo', 1.0): 2, ('ta', 1.0): 2, ('accept', 1.0): 1, ('admin', 1.0): 2, ('lukri', 1.0): 1, ('😘', 1.0): 10, ('momma', 1.0): 2, ('bear', 1.0): 2, ('❤', 1.0): 29, ('️', 1.0): 21, ('redid', 1.0): 1, ('8th', 1.0): 1, ('v.ball', 1.0): 1, ('atm', 1.0): 4, ('build', 1.0): 8, ('pack', 1.0): 8, ('suitcas', 1.0): 2, ('hang-copi', 1.0): 1, ('translat', 1.0): 1, (\"dostoevsky'\", 1.0): 1, ('voucher', 1.0): 2, ('bugatti', 1.0): 1, ('bra', 1.0): 3, ('مطعم_هاشم', 1.0): 1, ('yummi', 1.0): 3, ('a7la', 1.0): 1, ('bdayt', 1.0): 1, ('mnwreeen', 1.0): 1, ('jazz', 1.0): 2, ('truck', 1.0): 1, ('x34', 1.0): 1, ('speak', 1.0): 8, ('pbevent', 1.0): 1, ('hq', 1.0): 1, ('add', 1.0): 22, ('yoona', 1.0): 1, ('hairpin', 1.0): 1, ('otp', 1.0): 1, ('collect', 1.0): 7, ('mastership', 1.0): 1, ('honey', 1.0): 4, ('paindo', 1.0): 1, ('await', 1.0): 1, ('report', 1.0): 3, ('manni', 1.0): 1, ('asshol', 1.0): 3, ('brijresid', 1.0): 1, ('structur', 1.0): 1, ('156', 1.0): 1, ('unit', 1.0): 3, ('encompass', 1.0): 1, ('bhk', 1.0): 1, ('flat', 1.0): 2, ('91', 1.0): 2, ('975-580-', 1.0): 1, ('444', 1.0): 1, ('honor', 1.0): 2, ('curri', 1.0): 2, ('clash', 1.0): 1, ('milano', 1.0): 1, ('👌', 1.0): 1, ('followback', 1.0): 6, (':-d', 1.0): 5, ('legit', 1.0): 1, ('loser', 1.0): 5, ('gass', 1.0): 1, ('dead', 1.0): 4, ('starsquad', 1.0): 4, ('⭐', 1.0): 3, ('news', 1.0): 25, ('utc', 1.0): 1, ('flume', 1.0): 1, ('kaytranada', 1.0): 1, ('alunageorg', 1.0): 1, ('ticket', 1.0): 12, ('km', 1.0): 1, ('certainti', 1.0): 1, ('solv', 1.0): 2, ('faster', 1.0): 3, ('👊', 1.0): 2, ('hurri', 1.0): 5, ('totem', 1.0): 1, ('somewher', 1.0): 5, ('alic', 1.0): 4, ('dog', 1.0): 6, ('cat', 1.0): 5, ('goodwynsgoodi', 1.0): 1, ('ugh', 1.0): 1, ('fade', 1.0): 2, ('moan', 1.0): 1, ('leed', 1.0): 1, ('jozi', 1.0): 1, ('wasnt', 1.0): 2, ('fifth', 1.0): 2, ('avail', 1.0): 10, ('tix', 1.0): 2, ('pa', 1.0): 2, ('ba', 1.0): 2, ('ng', 1.0): 2, ('atl', 1.0): 1, ('coldplay', 1.0): 1, ('favorit', 1.0): 14, ('scientist', 1.0): 1, ('yellow', 1.0): 2, ('atla', 1.0): 1, ('yein', 1.0): 1, ('selo', 1.0): 1, ('jabongatpumaurbanstamped', 1.0): 4, ('an', 1.0): 3, ('7', 1.0): 8, ('waiter', 1.0): 1, ('bill', 1.0): 5, ('sir', 1.0): 12, ('titl', 1.0): 2, ('pocket', 1.0): 1, ('wrip', 1.0): 1, ('jean', 1.0): 1, ('conni', 1.0): 2, ('crew', 1.0): 3, ('staff', 1.0): 2, ('sweetan', 1.0): 1, ('ask', 1.0): 37, ('mum', 1.0): 2, ('beg', 1.0): 2, ('soprano', 1.0): 1, ('ukrain', 1.0): 2, ('x33', 1.0): 1, ('olli', 1.0): 2, ('disney.art', 1.0): 1, ('elmoprinssi', 1.0): 1, ('salsa', 1.0): 1, ('danc', 1.0): 2, ('tell', 1.0): 25, ('truth', 1.0): 4, ('pl', 1.0): 8, ('4-6', 1.0): 1, ('2nd', 1.0): 5, ('blogiversari', 1.0): 1, ('review', 1.0): 9, ('cuti', 1.0): 6, ('bohol', 1.0): 1, ('briliant', 1.0): 1, ('v', 1.0): 9, ('key', 1.0): 3, ('annual', 1.0): 1, ('far', 1.0): 19, ('spin', 1.0): 2, ('voic', 1.0): 3, ('\\U000fe334', 1.0): 1, ('yeheyi', 1.0): 1, ('pinya', 1.0): 1, ('whoooah', 1.0): 1, ('tranc', 1.0): 1, ('lover', 1.0): 4, ('subject', 1.0): 7, ('physic', 1.0): 1, ('stop', 1.0): 15, ('ब', 1.0): 1, ('matter', 1.0): 6, ('jungl', 1.0): 1, ('accommod', 1.0): 1, ('secret', 1.0): 9, ('behind', 1.0): 3, ('sandroforceo', 1.0): 2, ('ceo', 1.0): 11, ('1month', 1.0): 11, ('swag', 1.0): 1, ('mia', 1.0): 1, 
('workinprogress', 1.0): 1, ('choos', 1.0): 2, ('finnigan', 1.0): 1, ('loyal', 1.0): 2, ('royal', 1.0): 2, ('fotoset', 1.0): 1, ('reus', 1.0): 1, ('seem', 1.0): 10, ('somebodi', 1.0): 1, ('sell', 1.0): 1, ('young', 1.0): 3, ('muntu', 1.0): 1, ('anoth', 1.0): 23, ('gem', 1.0): 2, ('falco', 1.0): 1, ('supersmash', 1.0): 1, ('hotnsexi', 1.0): 1, ('friskyfriday', 1.0): 1, ('beach', 1.0): 4, ('movi', 1.0): 24, ('crop', 1.0): 2, ('nash', 1.0): 1, ('tissu', 1.0): 1, ('chocol', 1.0): 7, ('tea', 1.0): 6, ('hannib', 1.0): 3, ('episod', 1.0): 5, ('hotb', 1.0): 1, ('bush', 1.0): 2, ('classicassur', 1.0): 1, ('thrill', 1.0): 2, ('intern', 1.0): 2, ('assign', 1.0): 1, ('aerial', 1.0): 1, ('camera', 1.0): 6, ('oper', 1.0): 1, ('boom', 1.0): 3, ('hong', 1.0): 1, ('kong', 1.0): 1, ('ferri', 1.0): 1, ('central', 1.0): 2, ('girlfriend', 1.0): 4, ('after-work', 1.0): 1, ('drink', 1.0): 8, ('dj', 1.0): 3, ('resto', 1.0): 1, ('drinkt', 1.0): 1, ('koffi', 1.0): 1, ('a6', 1.0): 1, ('stargat', 1.0): 1, ('atlanti', 1.0): 1, ('muaahhh', 1.0): 1, ('ohh', 1.0): 3, ('hii', 1.0): 2, ('🙈', 1.0): 1, ('di', 1.0): 5, ('nagsend', 1.0): 1, ('yung', 1.0): 1, ('ko', 1.0): 4, ('ulit', 1.0): 3, ('🎉', 1.0): 5, ('🎈', 1.0): 1, ('ugli', 1.0): 4, ('legget', 1.0): 1, ('qui', 1.0): 1, ('per', 1.0): 1, ('la', 1.0): 8, ('mar', 1.0): 1, ('encourag', 1.0): 3, ('employ', 1.0): 5, ('board', 1.0): 5, ('sticker', 1.0): 1, ('sponsor', 1.0): 4, ('prize', 1.0): 3, ('(:', 1.0): 1, ('milo', 1.0): 1, ('aurini', 1.0): 1, ('juicebro', 1.0): 1, ('pillar', 1.0): 2, ('respect', 1.0): 2, ('boii', 1.0): 1, ('smashingbook', 1.0): 1, ('bibl', 1.0): 2, ('ill', 1.0): 6, ('sick', 1.0): 4, ('lamo', 1.0): 1, ('fangirl', 1.0): 3, ('platon', 1.0): 1, ('scienc', 1.0): 5, ('resid', 1.0): 2, ('servicewithasmil', 1.0): 1, ('bloodlin', 1.0): 1, ('huski', 1.0): 1, ('obituari', 1.0): 1, ('advert', 1.0): 1, ('goofingaround', 1.0): 1, ('bollywood', 1.0): 1, ('giveaway', 1.0): 6, ('dah', 1.0): 2, ('noth', 1.0): 15, ('bitter', 1.0): 2, ('anger', 1.0): 1, ('hatr', 1.0): 2, ('toward', 1.0): 2, ('pure', 1.0): 2, ('indiffer', 1.0): 1, ('suit', 1.0): 5, ('zach', 1.0): 1, ('codi', 1.0): 2, ('deliv', 1.0): 3, ('ac', 1.0): 1, ('excel', 1.0): 6, ('produc', 1.0): 1, ('boggl', 1.0): 1, ('fatigu', 1.0): 1, ('baareeq', 1.0): 1, ('gamedev', 1.0): 2, ('hobbi', 1.0): 1, ('tweenie_fox', 1.0): 1, ('click', 1.0): 3, ('accessori', 1.0): 1, ('tamang', 1.0): 1, ('hinala', 1.0): 1, ('niam', 1.0): 1, ('selfiee', 1.0): 1, ('especi', 1.0): 4, ('lass', 1.0): 1, ('ale', 1.0): 1, ('swim', 1.0): 3, ('bout', 1.0): 3, ('goodby', 1.0): 5, ('feminist', 1.0): 1, ('fought', 1.0): 1, ('snobbi', 1.0): 1, ('bitch', 1.0): 3, ('carolin', 1.0): 2, ('mighti', 1.0): 1, ('🔥', 1.0): 1, ('threw', 1.0): 2, ('hbd', 1.0): 1, ('follback', 1.0): 19, ('jog', 1.0): 1, ('remot', 1.0): 2, ('newli', 1.0): 1, ('ebay', 1.0): 2, ('store', 1.0): 15, ('disneyinfin', 1.0): 1, ('starwar', 1.0): 1, ('charact', 1.0): 3, ('preorder', 1.0): 1, ('starter', 1.0): 1, ('hit', 1.0): 13, ('snap', 1.0): 4, ('homi', 1.0): 3, ('bought', 1.0): 4, ('skin', 1.0): 8, ('bday', 1.0): 11, ('chant', 1.0): 2, ('jai', 1.0): 1, ('itali', 1.0): 2, ('fast', 1.0): 4, ('heeeyyy', 1.0): 1, ('woah', 1.0): 3, ('★', 1.0): 5, ('😊', 1.0): 11, ('whenev', 1.0): 4, ('ang', 1.0): 2, ('kiss', 1.0): 4, ('philippin', 1.0): 2, ('packag', 1.0): 3, ('bruis', 1.0): 1, ('rib', 1.0): 2, ('😀', 1.0): 2, ('😁', 1.0): 6, ('😂', 1.0): 17, ('😃', 1.0): 1, ('😅', 1.0): 1, ('😉', 1.0): 2, ('tombraid', 1.0): 1, ('hype', 1.0): 1, ('thejuiceinthemix', 1.0): 1, ('rela', 1.0): 1, ('low', 1.0): 6, 
('prioriti', 1.0): 1, ('harri', 1.0): 5, ('bc', 1.0): 9, ('collaps', 1.0): 2, ('chaotic', 1.0): 1, ('cosa', 1.0): 1, ('<---', 1.0): 2, ('alliter', 1.0): 1, ('oppayaa', 1.0): 1, (\"how'\", 1.0): 4, ('natgeo', 1.0): 1, ('lick', 1.0): 1, ('elbow', 1.0): 2, ('. .', 1.0): 2, ('“', 1.0): 7, ('emu', 1.0): 1, ('stoke', 1.0): 1, ('woke', 1.0): 5, (\"people'\", 1.0): 3, ('approv', 1.0): 6, (\"god'\", 1.0): 2, ('jisung', 1.0): 1, ('sunshin', 1.0): 7, ('mm', 1.0): 6, ('nicola', 1.0): 1, ('brighten', 1.0): 2, ('helen', 1.0): 3, ('brian', 1.0): 3, ('2-3', 1.0): 1, ('australia', 1.0): 5, ('ol', 1.0): 2, ('bone', 1.0): 1, ('creak', 1.0): 1, ('abuti', 1.0): 1, ('tweetland', 1.0): 1, ('android', 1.0): 3, ('xma', 1.0): 2, ('skyblock', 1.0): 1, ('bcaus', 1.0): 1, ('2009', 1.0): 1, ('die', 1.0): 10, ('twitch', 1.0): 5, ('sympathi', 1.0): 1, ('laugh', 1.0): 5, ('unniee', 1.0): 1, ('nuka', 1.0): 1, ('penacova', 1.0): 1, ('djset', 1.0): 1, ('edm', 1.0): 1, ('kizomba', 1.0): 1, ('latinhous', 1.0): 1, ('housemus', 1.0): 3, ('portug', 1.0): 1, ('wild', 1.0): 2, ('ride', 1.0): 6, ('anytim', 1.0): 6, ('tast', 1.0): 5, ('yer', 1.0): 2, ('mtn', 1.0): 2, ('maganda', 1.0): 1, ('mistress', 1.0): 2, ('saphir', 1.0): 1, ('busi', 1.0): 19, ('4000', 1.0): 1, ('instagram', 1.0): 7, ('among', 1.0): 5, ('coconut', 1.0): 1, ('sambal', 1.0): 1, ('mussel', 1.0): 1, ('recip', 1.0): 5, ('kalin', 1.0): 1, ('mixcloud', 1.0): 1, ('sarcasm', 1.0): 2, ('chelsea', 1.0): 3, ('he', 1.0): 2, ('useless', 1.0): 2, ('thursday', 1.0): 2, ('hang', 1.0): 3, ('hehe', 1.0): 10, ('said', 1.0): 16, ('benson', 1.0): 1, ('facebook', 1.0): 5, ('solid', 1.0): 1, ('16/17', 1.0): 1, ('30', 1.0): 3, ('°', 1.0): 1, ('😜', 1.0): 2, ('maryhick', 1.0): 1, ('kikmeboy', 1.0): 7, ('photooftheday', 1.0): 4, ('musicbiz', 1.0): 2, ('sheskindahot', 1.0): 1, ('fleekil', 1.0): 1, ('mbalula', 1.0): 1, ('africa', 1.0): 1, ('mexican', 1.0): 1, ('scar', 1.0): 1, ('offic', 1.0): 8, ('donut', 1.0): 2, ('foiegra', 1.0): 2, ('despit', 1.0): 2, ('weather', 1.0): 9, ('wed', 1.0): 5, ('toni', 1.0): 2, ('stark', 1.0): 1, ('incred', 1.0): 7, ('poem', 1.0): 2, ('bubbl', 1.0): 3, ('dale', 1.0): 1, ('billion', 1.0): 1, ('magic', 1.0): 5, ('op', 1.0): 3, ('cast', 1.0): 1, ('vote', 1.0): 9, ('elect', 1.0): 1, ('jcreport', 1.0): 1, ('piggin', 1.0): 1, ('botan', 1.0): 2, ('soap', 1.0): 4, ('late', 1.0): 13, ('upload', 1.0): 5, ('freshli', 1.0): 1, ('3week', 1.0): 1, ('heal', 1.0): 1, ('tobi-bro', 1.0): 1, ('isp', 1.0): 1, ('steel', 1.0): 1, ('wednesday', 1.0): 1, ('swear', 1.0): 3, ('met', 1.0): 4, ('earlier', 1.0): 4, ('cam', 1.0): 3, ('😭', 1.0): 2, ('except', 1.0): 2, (\"masha'allah\", 1.0): 1, ('french', 1.0): 5, ('wwat', 1.0): 2, ('franc', 1.0): 5, ('yaaay', 1.0): 3, ('beirut', 1.0): 2, ('coffe', 1.0): 11, ('panda', 1.0): 6, ('eonni', 1.0): 2, ('favourit', 1.0): 13, ('soda', 1.0): 1, ('fuller', 1.0): 1, ('shit', 1.0): 13, ('healthi', 1.0): 2, ('💓', 1.0): 2, ('rettweet', 1.0): 3, ('mvg', 1.0): 1, ('valuabl', 1.0): 1, ('madrid', 1.0): 3, ('sore', 1.0): 6, ('bergerac', 1.0): 1, ('u21', 1.0): 1, ('individu', 1.0): 2, ('adam', 1.0): 1, (\"beach'\", 1.0): 1, ('suicid', 1.0): 1, ('squad', 1.0): 1, ('fond', 1.0): 1, ('christoph', 1.0): 2, ('cocki', 1.0): 1, ('prove', 1.0): 3, (\"attitude'\", 1.0): 1, ('improv', 1.0): 3, ('suggest', 1.0): 6, ('date', 1.0): 12, ('inde', 1.0): 10, ('intellig', 1.0): 3, ('strong', 1.0): 7, ('cs', 1.0): 2, ('certain', 1.0): 2, ('exam', 1.0): 5, ('forgot', 1.0): 3, ('home-bas', 1.0): 1, ('knee', 1.0): 4, ('sale', 1.0): 3, ('fleur', 1.0): 1, ('dress', 1.0): 10, 
('readystock_hijabmart', 1.0): 1, ('idr', 1.0): 2, ('325.000', 1.0): 1, ('200.000', 1.0): 1, ('tompolo', 1.0): 1, ('aim', 1.0): 1, ('cannot', 1.0): 4, ('buyer', 1.0): 3, ('disappoint', 1.0): 1, ('paper', 1.0): 4, ('slack', 1.0): 1, ('crack', 1.0): 1, ('particularli', 1.0): 2, ('strike', 1.0): 1, ('31', 1.0): 1, ('mam', 1.0): 2, ('feytyaz', 1.0): 1, ('instant', 1.0): 1, ('stiffen', 1.0): 1, ('ricky_feb', 1.0): 1, ('grindea', 1.0): 1, ('courier', 1.0): 1, ('crypt', 1.0): 1, ('arma', 1.0): 1, ('record', 1.0): 5, ('gosh', 1.0): 2, ('limbo', 1.0): 1, ('orchard', 1.0): 1, ('art', 1.0): 10, ('super', 1.0): 15, ('karachi', 1.0): 2, ('ka', 1.0): 4, ('venic', 1.0): 1, ('sever', 1.0): 3, ('part', 1.0): 15, ('wit', 1.0): 2, ('accumul', 1.0): 1, ('maroon', 1.0): 1, ('cocktail', 1.0): 4, ('0-100', 1.0): 1, ('quick', 1.0): 7, ('1100d', 1.0): 1, ('auto-focu', 1.0): 1, ('manual', 1.0): 2, ('vein', 1.0): 1, ('crackl', 1.0): 1, ('glaze', 1.0): 1, ('layout', 1.0): 3, ('bomb', 1.0): 4, ('social', 1.0): 4, ('websit', 1.0): 8, ('pake', 1.0): 1, ('joim', 1.0): 1, ('feed', 1.0): 4, ('troop', 1.0): 1, ('mail', 1.0): 3, ('[email protected]', 1.0): 1, ('prrequest', 1.0): 1, ('journorequest', 1.0): 1, ('the_madstork', 1.0): 1, ('shaun', 1.0): 1, ('bot', 1.0): 4, ('chloe', 1.0): 2, ('actress', 1.0): 3, ('away', 1.0): 13, ('wick', 1.0): 9, ('hola', 1.0): 1, ('juan', 1.0): 1, ('houston', 1.0): 1, ('tx', 1.0): 2, ('jenni', 1.0): 1, (\"year'\", 1.0): 2, ('stumbl', 1.0): 1, ('upon', 1.0): 1, ('prob.nic', 1.0): 1, ('choker', 1.0): 1, ('btw', 1.0): 12, ('seouljin', 1.0): 1, ('photoset', 1.0): 3, ('sadomasochistsparadis', 1.0): 1, ('wynter', 1.0): 1, ('bottom', 1.0): 3, ('outtak', 1.0): 1, ('sadomasochist', 1.0): 1, ('paradis', 1.0): 1, ('ty', 1.0): 8, ('bbi', 1.0): 3, ('clip', 1.0): 1, ('lose', 1.0): 6, ('cypher', 1.0): 1, ('amen', 1.0): 2, ('x32', 1.0): 1, ('plant', 1.0): 4, ('allow', 1.0): 4, ('corner', 1.0): 3, ('addict', 1.0): 4, ('gurl', 1.0): 1, ('suck', 1.0): 9, ('special', 1.0): 8, ('owe', 1.0): 1, ('daniel', 1.0): 2, ('ape', 1.0): 1, ('saar', 1.0): 1, ('ahead', 1.0): 4, ('vers', 1.0): 1, ('butterfli', 1.0): 1, ('bonu', 1.0): 2, ('fill', 1.0): 5, ('tear', 1.0): 1, ('laughter', 1.0): 2, ('5so', 1.0): 6, ('yummmyyi', 1.0): 1, ('eat', 1.0): 6, ('dosa', 1.0): 1, ('easier', 1.0): 2, ('unless', 1.0): 3, ('achi', 1.0): 2, ('youuu', 1.0): 2, ('bawi', 1.0): 1, ('ako', 1.0): 1, ('queenesth', 1.0): 1, ('sharp', 1.0): 2, ('yess', 1.0): 1, ('poldi', 1.0): 1, ('cimbom', 1.0): 1, ('buddi', 1.0): 7, ('bruhhh', 1.0): 1, ('daddi', 1.0): 2, ('”', 1.0): 5, ('knowledg', 1.0): 2, ('attent', 1.0): 4, ('1tb', 1.0): 1, ('bank', 1.0): 1, ('credit', 1.0): 4, ('depart', 1.0): 2, ('anz', 1.0): 1, ('extrem', 1.0): 3, ('offshor', 1.0): 1, ('absolut', 1.0): 9, ('classic', 1.0): 3, ('gottolovebank', 1.0): 1, ('yup', 1.0): 6, ('in-shaa-allah', 1.0): 1, ('dua', 1.0): 1, ('thru', 1.0): 2, ('aameen', 1.0): 2, ('4/5', 1.0): 1, ('coca', 1.0): 1, ('cola', 1.0): 1, ('fanta', 1.0): 1, ('pepsi', 1.0): 1, ('sprite', 1.0): 1, ('all', 1.0): 1, ('sweeeti', 1.0): 1, (';-)', 1.0): 3, ('welcometweet', 1.0): 2, ('psygustokita', 1.0): 4, ('setup', 1.0): 1, ('wet', 1.0): 3, ('feet', 1.0): 3, ('carpet', 1.0): 1, ('judgment', 1.0): 1, ('hypocrit', 1.0): 1, ('narcissist', 1.0): 1, ('jumpsuit', 1.0): 1, ('bt', 1.0): 2, ('denim', 1.0): 1, ('verg', 1.0): 1, ('owl', 1.0): 1, ('constant', 1.0): 1, ('run', 1.0): 12, ('sia', 1.0): 1, ('count', 1.0): 7, ('brilliant', 1.0): 9, ('teacher', 1.0): 1, ('compar', 1.0): 2, ('religion', 1.0): 1, ('rant', 1.0): 1, ('student', 1.0): 6, 
('bencher', 1.0): 1, ('1/5', 1.0): 1, ('porsch', 1.0): 1, ('paddock', 1.0): 1, ('budapestgp', 1.0): 1, ('johnyherbert', 1.0): 1, ('roll', 1.0): 5, ('porschesupercup', 1.0): 1, ('koyal', 1.0): 1, ('melodi', 1.0): 1, ('unexpect', 1.0): 4, ('creat', 1.0): 8, ('memori', 1.0): 3, ('35', 1.0): 1, ('ep', 1.0): 3, ('catch', 1.0): 10, ('wirh', 1.0): 1, ('arc', 1.0): 1, ('x31', 1.0): 1, ('wolv', 1.0): 2, ('desir', 1.0): 1, ('ameen', 1.0): 1, ('kca', 1.0): 1, ('votejkt', 1.0): 1, ('48id', 1.0): 1, ('helpinggroupdm', 1.0): 1, ('quot', 1.0): 6, ('weird', 1.0): 5, ('dp', 1.0): 1, ('wife', 1.0): 5, ('poor', 1.0): 4, ('chick', 1.0): 1, ('guid', 1.0): 3, ('zonzofox', 1.0): 3, ('bhaiya', 1.0): 1, ('brother', 1.0): 4, ('lucki', 1.0): 10, ('patti', 1.0): 1, ('elabor', 1.0): 1, ('kuch', 1.0): 1, ('rate', 1.0): 1, ('merdeka', 1.0): 1, ('palac', 1.0): 2, ('hotel', 1.0): 5, ('plusmil', 1.0): 1, ('servic', 1.0): 7, ('hahahaa', 1.0): 1, ('mean', 1.0): 25, ('nex', 1.0): 2, ('safe', 1.0): 5, ('gwd', 1.0): 1, ('she', 1.0): 2, ('okok', 1.0): 1, ('33', 1.0): 4, ('idiot', 1.0): 1, ('chaerin', 1.0): 1, ('unni', 1.0): 1, ('viabl', 1.0): 1, ('altern', 1.0): 3, ('nowaday', 1.0): 2, ('ip', 1.0): 1, ('tombow', 1.0): 1, ('abt', 1.0): 2, ('friyay', 1.0): 2, ('smug', 1.0): 1, ('marrickvil', 1.0): 1, ('public', 1.0): 3, ('ten', 1.0): 1, ('ago', 1.0): 8, ('eighteen', 1.0): 1, ('auvssscr', 1.0): 1, ('ncaaseason', 1.0): 1, ('slow', 1.0): 2, ('popsicl', 1.0): 1, ('soft', 1.0): 2, ('melt', 1.0): 1, ('mouth', 1.0): 2, ('thankyouuu', 1.0): 1, ('dianna', 1.0): 1, ('ngga', 1.0): 1, ('usah', 1.0): 1, ('dipikirin', 1.0): 1, ('elah', 1.0): 1, ('easili', 1.0): 1, (\"who'\", 1.0): 9, ('entp', 1.0): 1, ('killin', 1.0): 1, ('meme', 1.0): 1, ('worthi', 1.0): 1, ('shot', 1.0): 6, ('emon', 1.0): 1, ('decent', 1.0): 2, ('outdoor', 1.0): 1, ('rave', 1.0): 1, ('dv', 1.0): 1, ('aku', 1.0): 1, ('bakal', 1.0): 1, ('liat', 1.0): 1, ('kak', 1.0): 2, ('merri', 1.0): 1, ('tv', 1.0): 5, ('outfit', 1.0): 3, ('--->', 1.0): 1, ('fashionfriday', 1.0): 1, ('angle.nelson', 1.0): 1, ('cheap', 1.0): 1, ('mymonsoonstori', 1.0): 2, ('tree', 1.0): 2, ('lotion', 1.0): 1, ('moistur', 1.0): 1, ('monsoon', 1.0): 1, ('whoop', 1.0): 6, ('romant', 1.0): 2, ('valencia', 1.0): 1, ('daaru', 1.0): 1, ('parti', 1.0): 12, ('chaddi', 1.0): 1, ('wonderful.great', 1.0): 1, ('trim', 1.0): 1, ('pube', 1.0): 1, ('es', 1.0): 2, ('mi', 1.0): 5, ('tio', 1.0): 1, ('sinaloa', 1.0): 1, ('arr', 1.0): 1, ('stylish', 1.0): 1, ('trendi', 1.0): 1, ('kim', 1.0): 5, ('fabfriday', 1.0): 2, ('facetim', 1.0): 4, ('calum', 1.0): 3, ('constantli', 1.0): 1, ('announc', 1.0): 1, ('filbarbarian', 1.0): 1, ('beer', 1.0): 3, ('arm', 1.0): 3, ('testicl', 1.0): 1, ('light', 1.0): 13, ('katerina', 1.0): 1, ('maniataki', 1.0): 1, ('ahh', 1.0): 5, ('alright', 1.0): 6, ('worthwhil', 1.0): 3, ('judg', 1.0): 2, ('tech', 1.0): 2, ('window', 1.0): 7, ('stupid', 1.0): 8, ('plugin', 1.0): 1, ('bass', 1.0): 1, ('slap', 1.0): 1, ('6pm', 1.0): 1, ('door', 1.0): 3, ('vip', 1.0): 1, ('gener', 1.0): 4, ('seat', 1.0): 2, ('earli', 1.0): 9, ('london', 1.0): 9, ('toptravelcentar', 1.0): 1, ('ttctop', 1.0): 1, ('lux', 1.0): 1, ('luxurytravel', 1.0): 1, ('beograd', 1.0): 1, ('srbija', 1.0): 1, ('putovanja', 1.0): 1, ('wendi', 1.0): 2, ('provid', 1.0): 4, ('drainag', 1.0): 1, ('homebound', 1.0): 1, ('hahahay', 1.0): 1, ('yeeeah', 1.0): 1, ('moar', 1.0): 2, ('kitteh', 1.0): 1, ('incom', 1.0): 1, ('tower', 1.0): 2, ('yippee', 1.0): 1, ('scrummi', 1.0): 1, ('bio', 1.0): 5, ('mcpe', 1.0): 1, ('->', 1.0): 1, ('vainglori', 1.0): 1, 
('driver', 1.0): 1, ('6:01', 1.0): 1, ('lilydal', 1.0): 1, ('fss', 1.0): 1, ('rais', 1.0): 3, ('magicalmysterytour', 1.0): 1, ('chek', 1.0): 2, ('rule', 1.0): 2, ('weebli', 1.0): 1, ('donetsk', 1.0): 1, ('earth', 1.0): 7, ('personalis', 1.0): 1, ('wrap', 1.0): 2, ('stationeri', 1.0): 1, ('adrian', 1.0): 1, ('parcel', 1.0): 2, ('tuesday', 1.0): 7, ('pri', 1.0): 3, ('80', 1.0): 3, ('wz', 1.0): 1, ('pattern', 1.0): 1, ('cut', 1.0): 3, ('buttonhol', 1.0): 1, ('4mi', 1.0): 1, ('famou', 1.0): 1, ('client', 1.0): 1, ('p', 1.0): 3, ('aliv', 1.0): 2, ('trial', 1.0): 1, ('spm', 1.0): 1, ('dinooo', 1.0): 1, ('cardio', 1.0): 1, ('steak', 1.0): 1, ('cue', 1.0): 1, ('laptop', 1.0): 1, ('guinea', 1.0): 1, ('pig', 1.0): 1, ('salamat', 1.0): 1, ('sa', 1.0): 6, ('mga', 1.0): 1, ('nag.greet', 1.0): 1, ('guis', 1.0): 1, ('godbless', 1.0): 2, ('crush', 1.0): 3, ('appl', 1.0): 4, ('deserv', 1.0): 11, ('charl', 1.0): 1, ('workhard', 1.0): 1, ('model', 1.0): 7, ('forrit', 1.0): 1, ('bread', 1.0): 2, ('bacon', 1.0): 2, ('butter', 1.0): 2, ('afang', 1.0): 2, ('soup', 1.0): 2, ('semo', 1.0): 2, ('brb', 1.0): 1, ('forc', 1.0): 2, ('doesnt', 1.0): 5, ('tato', 1.0): 1, ('bulat', 1.0): 1, ('concern', 1.0): 1, ('snake', 1.0): 1, ('perform', 1.0): 3, ('con', 1.0): 1, ('todayyy', 1.0): 1, ('max', 1.0): 2, ('gaza', 1.0): 1, ('bbb', 1.0): 1, ('pc', 1.0): 3, ('22', 1.0): 2, ('legal', 1.0): 1, ('ditch', 1.0): 2, ('tori', 1.0): 1, ('bajrangibhaijaanhighestweek', 1.0): 6, (\"s'okay\", 1.0): 1, ('andi', 1.0): 2, ('you-and', 1.0): 1, ('return', 1.0): 3, ('tuitutil', 1.0): 1, ('bud', 1.0): 2, ('learn', 1.0): 8, ('takeaway', 1.0): 1, ('instead', 1.0): 7, ('1hr', 1.0): 1, ('genial', 1.0): 1, ('competit', 1.0): 1, ('yosh', 1.0): 1, ('procrastin', 1.0): 1, ('plu', 1.0): 4, ('kfc', 1.0): 2, ('itun', 1.0): 1, ('dedicatedfan', 1.0): 1, ('💜', 1.0): 7, ('daft', 1.0): 1, ('teeth', 1.0): 1, ('troubl', 1.0): 1, ('huxley', 1.0): 1, ('basket', 1.0): 2, ('ben', 1.0): 2, ('sent', 1.0): 8, ('gamer', 1.0): 3, ('activ', 1.0): 5, ('120', 1.0): 2, ('distanc', 1.0): 2, ('suitabl', 1.0): 1, ('stockholm', 1.0): 1, ('zack', 1.0): 1, ('destroy', 1.0): 1, ('heel', 1.0): 2, ('claw', 1.0): 1, ('q', 1.0): 2, ('blond', 1.0): 2, ('box', 1.0): 3, ('cheerio', 1.0): 1, ('seed', 1.0): 4, ('cutest', 1.0): 2, ('ffback', 1.0): 2, ('spotifi', 1.0): 3, (\"we'v\", 1.0): 7, ('vc', 1.0): 1, ('tgp', 1.0): 1, ('race', 1.0): 5, ('averag', 1.0): 2, (\"joe'\", 1.0): 1, ('bluejay', 1.0): 1, ('vinylbear', 1.0): 1, ('pal', 1.0): 1, ('furbabi', 1.0): 1, ('luff', 1.0): 1, ('mega', 1.0): 4, ('retail', 1.0): 4, ('boot', 1.0): 2, ('whsmith', 1.0): 1, ('ps3', 1.0): 1, ('shannon', 1.0): 1, ('na', 1.0): 9, ('redecor', 1.0): 1, ('bob', 1.0): 3, ('elli', 1.0): 4, ('mairi', 1.0): 1, ('workout', 1.0): 6, ('impair', 1.0): 1, ('uggghhh', 1.0): 1, ('dam', 1.0): 2, ('dun', 1.0): 2, ('eczema', 1.0): 1, ('suffer', 1.0): 4, ('ndee', 1.0): 1, ('pleasur', 1.0): 14, ('publiliu', 1.0): 1, ('syru', 1.0): 1, ('fear', 1.0): 1, ('death', 1.0): 3, ('dread', 1.0): 1, ('fell', 1.0): 3, ('fuk', 1.0): 1, ('unblock', 1.0): 1, ('tweak', 1.0): 2, ('php', 1.0): 1, ('fall', 1.0): 10, ('oomf', 1.0): 1, ('pippa', 1.0): 1, ('hschool', 1.0): 1, ('bu', 1.0): 3, ('cardi', 1.0): 1, ('everyday', 1.0): 3, ('everytim', 1.0): 3, ('hk', 1.0): 1, (\"why'd\", 1.0): 1, ('acorn', 1.0): 1, ('origin', 1.0): 7, ('c64', 1.0): 1, ('cpu', 1.0): 1, ('consider', 1.0): 1, ('advanc', 1.0): 1, ('onair', 1.0): 1, ('bay', 1.0): 1, ('hold', 1.0): 6, ('river', 1.0): 3, ('0878 0388', 1.0): 1, ('1033', 1.0): 1, ('0272 3306', 1.0): 1, ('70', 1.0): 5, 
('rescu', 1.0): 1, ('mutt', 1.0): 1, ('confirm', 1.0): 3, ('deliveri', 1.0): 3, ('switch', 1.0): 2, ('lap', 1.0): 1, ('optim', 1.0): 1, ('lu', 1.0): 1, (':|', 1.0): 1, ('tweetofthedecad', 1.0): 1, (':P', 1.0): 1, ('class', 1.0): 5, ('happiest', 1.0): 2, ('bbmme', 1.0): 3, ('pin', 1.0): 4, ('7df9e60a', 1.0): 1, ('bbm', 1.0): 2, ('bbmpin', 1.0): 2, ('addmeonbbm', 1.0): 1, ('addm', 1.0): 1, (\"today'\", 1.0): 3, ('menu', 1.0): 1, ('marri', 1.0): 3, ('glenn', 1.0): 1, ('what', 1.0): 4, ('height', 1.0): 1, (\"sculptor'\", 1.0): 1, ('ti5', 1.0): 1, ('dota', 1.0): 3, ('nudg', 1.0): 1, ('spot', 1.0): 5, ('tasti', 1.0): 1, ('hilli', 1.0): 1, ('cycl', 1.0): 6, ('england', 1.0): 4, ('scotlandismass', 1.0): 1, ('gen', 1.0): 2, ('vikk', 1.0): 1, ('fna', 1.0): 1, ('mombasa', 1.0): 1, ('tukutanemombasa', 1.0): 1, ('100reasonstovisitmombasa', 1.0): 1, ('karibumombasa', 1.0): 1, ('hanbin', 1.0): 1, ('certainli', 1.0): 4, ('goosnight', 1.0): 1, ('kindli', 1.0): 4, ('familiar', 1.0): 2, ('jealou', 1.0): 4, ('tent', 1.0): 2, ('yea', 1.0): 2, ('cozi', 1.0): 1, ('phenomen', 1.0): 2, ('collab', 1.0): 2, ('gave', 1.0): 4, ('birth', 1.0): 1, ('behav', 1.0): 2, ('monster', 1.0): 1, ('spree', 1.0): 4, ('000', 1.0): 1, ('tank', 1.0): 6, ('outstand', 1.0): 1, ('donat', 1.0): 3, ('h', 1.0): 4, ('contestkiduniya', 1.0): 2, ('mfundo', 1.0): 1, ('och', 1.0): 1, ('hun', 1.0): 4, ('inner', 1.0): 2, ('nerd', 1.0): 2, ('tame', 1.0): 2, ('insidi', 1.0): 1, ('logic', 1.0): 1, ('math', 1.0): 1, ('channel', 1.0): 5, ('continu', 1.0): 4, ('doubt', 1.0): 3, ('300', 1.0): 2, ('sub', 1.0): 2, ('200', 1.0): 3, ('forgiven', 1.0): 1, ('manner', 1.0): 1, ('yhooo', 1.0): 1, ('ngi', 1.0): 1, ('mood', 1.0): 7, ('push', 1.0): 1, ('limit', 1.0): 6, ('obakeng', 1.0): 1, ('goat', 1.0): 1, ('alhamdullilah', 1.0): 1, ('pebbl', 1.0): 1, ('engross', 1.0): 1, ('bing', 1.0): 2, ('scream', 1.0): 2, ('whole', 1.0): 7, ('wide', 1.0): 2, ('🌎', 1.0): 2, ('😧', 1.0): 1, ('wat', 1.0): 2, ('muahhh', 1.0): 1, ('pausetim', 1.0): 1, ('drift', 1.0): 1, ('loos', 1.0): 3, ('campaign', 1.0): 4, ('kickstart', 1.0): 1, ('articl', 1.0): 9, ('jenna', 1.0): 1, ('bellybutton', 1.0): 5, ('inni', 1.0): 4, ('outi', 1.0): 4, ('havent', 1.0): 4, ('delish', 1.0): 1, ('joselito', 1.0): 1, ('freya', 1.0): 1, ('nth', 1.0): 1, ('latepost', 1.0): 1, ('lupet', 1.0): 1, ('mo', 1.0): 2, ('eric', 1.0): 3, ('askaman', 1.0): 1, ('150', 1.0): 1, ('0345', 1.0): 2, ('454', 1.0): 1, ('111', 1.0): 1, ('webz', 1.0): 1, ('oop', 1.0): 5, (\"they'll\", 1.0): 6, ('realis', 1.0): 2, ('anymor', 1.0): 3, ('carmel', 1.0): 1, ('decis', 1.0): 5, ('matt', 1.0): 6, ('@commoncultur', 1.0): 1, ('@connorfranta', 1.0): 1, ('honestli', 1.0): 3, ('explain', 1.0): 3, ('relationship', 1.0): 4, ('pick', 1.0): 15, ('tessnzach', 1.0): 1, ('paperboy', 1.0): 1, ('honest', 1.0): 3, ('reassur', 1.0): 1, ('guysss', 1.0): 3, ('mubank', 1.0): 2, (\"dongwoo'\", 1.0): 1, ('bright', 1.0): 2, ('tommorow', 1.0): 3, ('newyork', 1.0): 1, ('lolll', 1.0): 1, ('twinx', 1.0): 1, ('16', 1.0): 2, ('path', 1.0): 1, ('firmansyahbl', 1.0): 1, ('procedur', 1.0): 1, ('grim', 1.0): 1, ('fandango', 1.0): 1, ('ordinari', 1.0): 1, ('extraordinari', 1.0): 1, ('bo', 1.0): 2, ('birmingham', 1.0): 1, ('oracl', 1.0): 1, ('samosa', 1.0): 1, ('firebal', 1.0): 1, ('shoe', 1.0): 4, ('serv', 1.0): 1, ('sushi', 1.0): 2, ('shoeshi', 1.0): 1, ('�', 1.0): 2, ('lymond', 1.0): 1, ('philippa', 1.0): 2, ('novel', 1.0): 1, ('tara', 1.0): 3, ('. . 
.', 1.0): 2, ('aur', 1.0): 2, ('han', 1.0): 1, ('imran', 1.0): 3, ('khan', 1.0): 7, ('63', 1.0): 1, ('agaaain', 1.0): 1, ('doli', 1.0): 1, ('siregar', 1.0): 1, ('ninh', 1.0): 1, ('size', 1.0): 5, ('geekiest', 1.0): 1, ('geek', 1.0): 2, ('wallet', 1.0): 3, ('request', 1.0): 4, ('media', 1.0): 4, ('ralli', 1.0): 1, ('rotat', 1.0): 3, ('direct', 1.0): 3, ('eek', 1.0): 1, ('red', 1.0): 6, ('beij', 1.0): 1, ('meni', 1.0): 1, ('tebrik', 1.0): 1, ('etdi', 1.0): 1, ('700', 1.0): 1, ('💗', 1.0): 2, ('rod', 1.0): 1, ('embrac', 1.0): 1, ('actor', 1.0): 1, ('aplomb', 1.0): 1, ('foreveralon', 1.0): 2, ('mysumm', 1.0): 1, ('01482', 1.0): 1, ('333505', 1.0): 1, ('hahahaha', 1.0): 2, ('wear', 1.0): 6, ('uniform', 1.0): 1, ('evil', 1.0): 1, ('owww', 1.0): 1, ('choo', 1.0): 1, ('chweet', 1.0): 1, ('shorthair', 1.0): 1, ('oscar', 1.0): 1, ('realiz', 1.0): 7, ('harmoni', 1.0): 1, ('deneriveri', 1.0): 1, ('506', 1.0): 1, ('kiksext', 1.0): 5, ('kikkomansabor', 1.0): 2, ('killer', 1.0): 1, ('henessydiari', 1.0): 1, ('journey', 1.0): 4, ('band', 1.0): 4, ('plz', 1.0): 5, ('convo', 1.0): 3, ('11', 1.0): 5, ('vault', 1.0): 1, ('expand', 1.0): 2, ('vinni', 1.0): 1, ('money', 1.0): 9, ('hahahahaha', 1.0): 2, ('50cent', 1.0): 1, ('repay', 1.0): 1, ('debt', 1.0): 2, ('evet', 1.0): 1, ('wifi', 1.0): 3, ('lifestyl', 1.0): 1, ('qatarday', 1.0): 1, ('. ..', 1.0): 3, ('🌞', 1.0): 3, ('girli', 1.0): 1, ('india', 1.0): 4, ('innov', 1.0): 1, ('volunt', 1.0): 2, ('saran', 1.0): 1, ('drama', 1.0): 3, ('genr', 1.0): 1, ('romanc', 1.0): 1, ('comedi', 1.0): 1, ('leannerin', 1.0): 1, ('19', 1.0): 7, ('porno', 1.0): 1, ('l4l', 1.0): 3, ('weloveyounamjoon', 1.0): 1, ('homey', 1.0): 1, ('kenya', 1.0): 1, ('roller', 1.0): 2, ('coaster', 1.0): 1, ('aspect', 1.0): 1, ('najam', 1.0): 1, ('confess', 1.0): 2, ('pricelessantiqu', 1.0): 1, ('takesonetoknowon', 1.0): 1, ('extra', 1.0): 5, ('ucount', 1.0): 1, ('ji', 1.0): 3, ('turkish', 1.0): 1, ('knew', 1.0): 8, ('crap', 1.0): 1, ('burn', 1.0): 3, ('80x', 1.0): 1, ('airlin', 1.0): 1, ('sexi', 1.0): 10, ('yello', 1.0): 1, ('gail', 1.0): 1, ('yael', 1.0): 1, ('lesson', 1.0): 4, ('en', 1.0): 1, ('mano', 1.0): 1, ('hand', 1.0): 4, ('manag', 1.0): 6, ('prettiest', 1.0): 1, ('reader', 1.0): 4, ('dnt', 1.0): 1, ('ideal', 1.0): 2, ('weekli', 1.0): 2, ('idol', 1.0): 3, ('pose', 1.0): 2, ('shortlist', 1.0): 1, ('dominion', 1.0): 2, ('picnic', 1.0): 2, ('tmrw', 1.0): 3, ('nobodi', 1.0): 2, ('jummamubarak', 1.0): 1, ('shower', 1.0): 3, ('shalwarkameez', 1.0): 1, ('itter', 1.0): 1, ('offer', 1.0): 8, ('jummapray', 1.0): 1, ('af', 1.0): 8, ('display', 1.0): 1, ('enabl', 1.0): 1, ('compani', 1.0): 4, ('peep', 1.0): 4, ('tweep', 1.0): 2, ('folow', 1.0): 1, ('2k', 1.0): 1, ('ohhh', 1.0): 4, ('teaser', 1.0): 2, ('airec', 1.0): 1, ('009', 1.0): 1, ('acid', 1.0): 1, ('mous', 1.0): 2, ('31st', 1.0): 2, ('includ', 1.0): 5, ('robin', 1.0): 1, ('rough', 1.0): 4, ('control', 1.0): 1, ('remix', 1.0): 5, ('fave', 1.0): 3, ('toss', 1.0): 1, ('ladi', 1.0): 8, ('🐑', 1.0): 1, ('librari', 1.0): 3, ('mr2', 1.0): 1, ('climb', 1.0): 1, ('cuddl', 1.0): 1, ('jilla', 1.0): 1, ('headlin', 1.0): 1, ('2017', 1.0): 1, ('jumma', 1.0): 5, ('mubarik', 1.0): 2, ('spent', 1.0): 2, ('congratz', 1.0): 1, ('contribut', 1.0): 3, ('2.0', 1.0): 2, ('yuppiiee', 1.0): 1, ('alienthought', 1.0): 1, ('happyalien', 1.0): 1, ('crowd', 1.0): 2, ('loudest', 1.0): 2, ('gari', 1.0): 1, ('particular', 1.0): 1, ('attract', 1.0): 1, ('supprt', 1.0): 1, ('savag', 1.0): 1, ('cleans', 1.0): 1, ('scam', 1.0): 1, ('ridden', 1.0): 1, ('vyapam', 1.0): 2, ('renam', 
1.0): 1, ('wave', 1.0): 2, ('couch', 1.0): 1, ('dodg', 1.0): 1, ('explan', 1.0): 2, ('bag', 1.0): 4, ('sanza', 1.0): 1, ('yaa', 1.0): 3, ('slr', 1.0): 1, ('som', 1.0): 1, ('honour', 1.0): 1, ('heheh', 1.0): 1, ('view', 1.0): 16, ('explor', 1.0): 2, ('wayanadan', 1.0): 1, ('forest', 1.0): 1, ('wayanad', 1.0): 1, ('srijith', 1.0): 1, ('whisper', 1.0): 1, ('lie', 1.0): 4, ('pokemon', 1.0): 1, ('dazzl', 1.0): 1, ('urself', 1.0): 2, ('doubl', 1.0): 2, ('flare', 1.0): 1, ('black', 1.0): 4, ('9', 1.0): 3, ('51', 1.0): 1, ('brows', 1.0): 1, ('bore', 1.0): 9, ('femal', 1.0): 2, ('tour', 1.0): 8, ('delv', 1.0): 2, ('muchhh', 1.0): 1, ('tmr', 1.0): 1, ('breakfast', 1.0): 4, ('gl', 1.0): 1, (\"tonight'\", 1.0): 2, ('):', 1.0): 7, ('litey', 1.0): 1, ('manuella', 1.0): 1, ('abhi', 1.0): 2, ('tak', 1.0): 2, ('nhi', 1.0): 2, ('dekhi', 1.0): 1, ('promo', 1.0): 3, ('se', 1.0): 4, ('xpax', 1.0): 1, ('lisa', 1.0): 2, ('aboard', 1.0): 3, ('institut', 1.0): 1, ('nc', 1.0): 2, ('chees', 1.0): 4, ('overload', 1.0): 1, ('pizza', 1.0): 1, ('•', 1.0): 3, ('mcfloat', 1.0): 1, ('fudg', 1.0): 3, ('sanda', 1.0): 1, ('munchkin', 1.0): 1, (\"d'd\", 1.0): 1, ('granni', 1.0): 1, ('baller', 1.0): 1, ('lil', 1.0): 4, ('chain', 1.0): 1, ('everybodi', 1.0): 1, ('ought', 1.0): 1, ('jay', 1.0): 3, ('[email protected]', 1.0): 1, ('79x', 1.0): 1, ('champion', 1.0): 1, ('letter', 1.0): 2, ('uniqu', 1.0): 2, ('affaraid', 1.0): 1, ('dearslim', 1.0): 2, ('role', 1.0): 2, ('billi', 1.0): 2, ('lab', 1.0): 1, ('ovh', 1.0): 2, ('maxi', 1.0): 2, ('bunch', 1.0): 1, ('acc', 1.0): 2, ('sprit', 1.0): 1, ('you', 1.0): 1, ('til', 1.0): 2, ('hammi', 1.0): 1, ('freedom', 1.0): 2, ('pistol', 1.0): 1, ('unlock', 1.0): 1, ('bemeapp', 1.0): 1, ('thumb', 1.0): 1, ('beme', 1.0): 1, ('bemecod', 1.0): 1, ('proudtobem', 1.0): 1, ('round', 1.0): 2, ('calm', 1.0): 5, ('kepo', 1.0): 1, ('luckili', 1.0): 1, ('clearli', 1.0): 2, ('دعمم', 1.0): 1, ('للعودة', 1.0): 1, ('للحياة', 1.0): 1, ('heiyo', 1.0): 2, ('dudafti', 1.0): 1, ('breaktym', 1.0): 1, ('fatal', 1.0): 1, ('danger', 1.0): 1, ('term', 1.0): 2, ('health', 1.0): 2, ('outrag', 1.0): 1, ('645k', 1.0): 1, ('muna', 1.0): 1, ('magstart', 1.0): 1, ('salut', 1.0): 3, ('→', 1.0): 1, ('thq', 1.0): 1, ('contin', 1.0): 1, ('thalaivar', 1.0): 1, ('£', 1.0): 7, ('heiya', 1.0): 2, ('grab', 1.0): 3, ('30.000', 1.0): 2, ('av', 1.0): 1, ('gd', 1.0): 3, ('wknd', 1.0): 1, ('ear', 1.0): 12, (\"y'day\", 1.0): 1, ('hxh', 1.0): 1, ('badass', 1.0): 2, ('killua', 1.0): 1, ('scene', 1.0): 2, ('78x', 1.0): 1, ('unappreci', 1.0): 1, ('graciou', 1.0): 1, ('nailedit', 1.0): 1, ('ourdisneyinfin', 1.0): 1, ('mari', 1.0): 3, ('jillmil', 1.0): 1, ('webcam', 1.0): 2, ('elfindelmundo', 1.0): 1, ('mainli', 1.0): 1, ('favour', 1.0): 1, ('dancetast', 1.0): 1, ('satyajit', 1.0): 1, (\"ray'\", 1.0): 1, ('porosh', 1.0): 1, ('pathor', 1.0): 1, ('situat', 1.0): 3, ('goldbug', 1.0): 1, ('wine', 1.0): 3, ('bottl', 1.0): 2, ('spill', 1.0): 2, ('jazmin', 1.0): 3, ('bonilla', 1.0): 3, ('15000', 1.0): 1, ('star', 1.0): 9, ('hollywood', 1.0): 3, ('rofl', 1.0): 3, ('shade', 1.0): 1, ('grey', 1.0): 1, ('netsec', 1.0): 1, ('kev', 1.0): 1, ('sister', 1.0): 6, ('told', 1.0): 6, ('unlist', 1.0): 1, ('hickey', 1.0): 1, ('dad', 1.0): 5, ('hock', 1.0): 1, ('mamma', 1.0): 1, ('human', 1.0): 5, ('be', 1.0): 1, ('mere', 1.0): 1, ('holist', 1.0): 1, ('cosmovis', 1.0): 1, ('narrow-mind', 1.0): 1, ('charg', 1.0): 3, ('cess', 1.0): 1, ('alix', 1.0): 1, ('quan', 1.0): 1, ('tip', 1.0): 5, ('naaahhh', 1.0): 1, ('duh', 1.0): 2, ('emesh', 1.0): 1, ('hilari', 1.0): 4, ('kath', 
1.0): 3, ('kia', 1.0): 1, ('@vauk', 1.0): 1, ('tango', 1.0): 1, ('tracerequest', 1.0): 2, ('dassi', 1.0): 1, ('fwm', 1.0): 1, ('selamat', 1.0): 1, ('nichola', 1.0): 2, ('malta', 1.0): 1, ('gto', 1.0): 1, ('tomorrowland', 1.0): 1, ('incal', 1.0): 1, ('shob', 1.0): 1, ('incomplet', 1.0): 1, ('barkada', 1.0): 1, ('silverston', 1.0): 1, ('pull', 1.0): 1, ('bookstor', 1.0): 1, ('ganna', 1.0): 1, ('hillari', 1.0): 1, ('clinton', 1.0): 1, ('court', 1.0): 2, ('notic', 1.0): 11, ('slice', 1.0): 2, ('life-so', 1.0): 1, ('hidden', 1.0): 1, ('untap', 1.0): 1, ('mca', 1.0): 2, ('gettin', 1.0): 1, ('hella', 1.0): 1, ('wana', 1.0): 1, ('bandz', 1.0): 1, ('hell', 1.0): 4, ('donington', 1.0): 1, ('park', 1.0): 8, ('24/25', 1.0): 1, ('x30', 1.0): 1, ('merci', 1.0): 1, ('bien', 1.0): 1, ('pitbul', 1.0): 1, ('777x', 1.0): 1, ('fri', 1.0): 3, ('annyeong', 1.0): 1, ('oppa', 1.0): 7, ('indonesian', 1.0): 1, ('elf', 1.0): 3, ('flight', 1.0): 2, ('bf', 1.0): 2, ('jennyjean', 1.0): 1, ('kikchat', 1.0): 1, ('sabadodeganarseguidor', 1.0): 1, ('sexysasunday', 1.0): 2, ('marseil', 1.0): 1, ('ganda', 1.0): 1, ('fnaf', 1.0): 5, ('steam', 1.0): 1, ('assur', 1.0): 2, ('current', 1.0): 7, ('goin', 1.0): 1, ('sweeti', 1.0): 4, ('strongest', 1.0): 1, (\"spot'\", 1.0): 1, ('barnstapl', 1.0): 1, ('bideford', 1.0): 1, ('abit', 1.0): 1, ('road', 1.0): 5, ('rocro', 1.0): 1, ('13glodyysbro', 1.0): 1, ('hire', 1.0): 1, ('2ne1', 1.0): 1, ('aspetti', 1.0): 1, ('chicken', 1.0): 4, ('chip', 1.0): 3, ('cupboard', 1.0): 1, ('empti', 1.0): 2, ('jami', 1.0): 2, ('ian', 1.0): 2, ('latin', 1.0): 5, ('asian', 1.0): 5, ('version', 1.0): 8, ('va', 1.0): 1, ('642', 1.0): 1, ('kikgirl', 1.0): 5, ('orgasm', 1.0): 1, ('phonesex', 1.0): 1, ('spacer', 1.0): 1, ('felic', 1.0): 1, ('smoak', 1.0): 1, ('👓', 1.0): 1, ('💘', 1.0): 3, ('children', 1.0): 3, ('psychopath', 1.0): 1, ('spoil', 1.0): 1, ('dimpl', 1.0): 1, ('contempl', 1.0): 1, ('indi', 1.0): 2, ('rout', 1.0): 4, ('jsl', 1.0): 1, ('76x', 1.0): 1, ('gotcha', 1.0): 1, ('kina', 1.0): 1, ('donna', 1.0): 3, ('reachabl', 1.0): 1, ('jk', 1.0): 1, ('s02e04', 1.0): 1, ('air', 1.0): 7, ('naggi', 1.0): 1, ('anal', 1.0): 1, ('child', 1.0): 3, ('vidcon', 1.0): 2, ('anxiou', 1.0): 1, ('shake', 1.0): 2, ('10:30', 1.0): 1, ('smoke', 1.0): 3, ('white', 1.0): 4, ('grandpa', 1.0): 4, ('prolli', 1.0): 1, ('stash', 1.0): 2, ('closer-chas', 1.0): 1, ('spec', 1.0): 1, ('leagu', 1.0): 3, ('chase', 1.0): 1, ('wall', 1.0): 3, ('angel', 1.0): 4, ('mochamichel', 1.0): 1, ('iph', 1.0): 4, ('0ne', 1.0): 4, ('simpli', 1.0): 3, ('bi0', 1.0): 8, ('x29', 1.0): 1, ('there', 1.0): 2, ('background', 1.0): 2, ('maggi', 1.0): 1, ('afraid', 1.0): 3, ('mull', 1.0): 1, ('nil', 1.0): 1, ('glasgow', 1.0): 2, ('netbal', 1.0): 1, ('thistl', 1.0): 1, ('thistlelov', 1.0): 1, ('minecraft', 1.0): 7, ('drew', 1.0): 3, ('delici', 1.0): 3, ('muddl', 1.0): 1, ('racket', 1.0): 2, ('isol', 1.0): 1, ('fa', 1.0): 1, ('particip', 1.0): 2, ('icecreammast', 1.0): 1, ('group', 1.0): 10, ('huhu', 1.0): 3, ('shet', 1.0): 1, ('desk', 1.0): 1, ('o_o', 1.0): 1, ('orz', 1.0): 1, ('problemmm', 1.0): 1, ('75x', 1.0): 1, ('english', 1.0): 4, ('yeeaayi', 1.0): 1, ('alhamdulillah', 1.0): 1, ('amin', 1.0): 1, ('weed', 1.0): 1, ('crowdfund', 1.0): 1, ('goal', 1.0): 2, ('walk', 1.0): 12, ('hellooo', 1.0): 2, ('select', 1.0): 1, ('lynn', 1.0): 1, ('buffer', 1.0): 2, ('button', 1.0): 2, ('compos', 1.0): 1, ('fridayfun', 1.0): 1, ('non-filipina', 1.0): 1, ('ejayst', 1.0): 1, ('state', 1.0): 2, ('le', 1.0): 2, ('stan', 1.0): 1, ('lee', 1.0): 2, ('discoveri', 1.0): 1, 
('cousin', 1.0): 5, ('1400', 1.0): 1, ('yr', 1.0): 2, ('teleport', 1.0): 1, ('shahid', 1.0): 1, ('afridi', 1.0): 1, ('tou', 1.0): 1, ('mahnor', 1.0): 1, ('baloch', 1.0): 1, ('nikki', 1.0): 2, ('flower', 1.0): 4, ('blackfli', 1.0): 1, ('courgett', 1.0): 1, ('wont', 1.0): 5, ('affect', 1.0): 2, ('fruit', 1.0): 5, ('italian', 1.0): 1, ('netfilx', 1.0): 1, ('unmarri', 1.0): 1, ('finger', 1.0): 6, ('rock', 1.0): 10, ('wielli', 1.0): 1, ('paul', 1.0): 2, ('barcod', 1.0): 1, ('charlott', 1.0): 1, ('thta', 1.0): 1, ('trailblazerhonor', 1.0): 1, ('labour', 1.0): 3, ('leader', 1.0): 3, ('alot', 1.0): 2, ('agayhippiehippi', 1.0): 1, ('exercis', 1.0): 2, ('ginger', 1.0): 1, ('x28', 1.0): 1, ('teach', 1.0): 2, ('awar', 1.0): 1, ('::', 1.0): 4, ('portsmouth', 1.0): 1, ('sonal', 1.0): 1, ('hungri', 1.0): 2, ('hmmm', 1.0): 4, ('pedant', 1.0): 1, ('98', 1.0): 1, ('kit', 1.0): 2, ('ack', 1.0): 1, ('hih', 1.0): 1, ('choir', 1.0): 1, ('rosidbinr', 1.0): 1, ('duke', 1.0): 2, ('earl', 1.0): 1, ('tau', 1.0): 1, ('orayt', 1.0): 1, ('knw', 1.0): 1, ('block', 1.0): 3, ('dikha', 1.0): 1, ('reh', 1.0): 1, ('adolf', 1.0): 1, ('hitler', 1.0): 1, ('obstacl', 1.0): 1, ('exist', 1.0): 2, ('surrend', 1.0): 2, ('terrif', 1.0): 1, ('advaddict', 1.0): 1, ('_15', 1.0): 1, ('jimin', 1.0): 1, ('notanapolog', 1.0): 3, ('map', 1.0): 2, ('inform', 1.0): 5, ('0.7', 1.0): 1, ('motherfuck', 1.0): 1, (\"david'\", 1.0): 1, ('damn', 1.0): 3, ('colleg', 1.0): 2, ('24th', 1.0): 3, ('steroid', 1.0): 1, ('alansmithpart', 1.0): 1, ('servu', 1.0): 1, ('bonasio', 1.0): 1, (\"doido'\", 1.0): 1, ('task', 1.0): 2, ('deleg', 1.0): 1, ('aaahhh', 1.0): 1, ('jen', 1.0): 2, ('virgin', 1.0): 5, ('non-mapbox', 1.0): 1, ('restrict', 1.0): 1, ('mapbox', 1.0): 1, ('basemap', 1.0): 1, ('contractu', 1.0): 1, ('research', 1.0): 1, ('seafood', 1.0): 1, ('weltum', 1.0): 1, ('teh', 1.0): 1, ('deti', 1.0): 1, ('huh', 1.0): 2, ('=D', 1.0): 2, ('annoy', 1.0): 2, ('katmtan', 1.0): 1, ('swan', 1.0): 1, ('fandom', 1.0): 3, ('blurri', 1.0): 1, ('besok', 1.0): 1, ('b', 1.0): 8, ('urgent', 1.0): 3, ('within', 1.0): 4, ('dorset', 1.0): 1, ('goddess', 1.0): 1, ('blast', 1.0): 1, ('shitfac', 1.0): 1, ('soul', 1.0): 4, ('sing', 1.0): 5, ('disney', 1.0): 1, ('doug', 1.0): 3, ('28', 1.0): 2, ('bnte', 1.0): 1, ('hain', 1.0): 2, (';p', 1.0): 1, ('shiiitt', 1.0): 1, ('case', 1.0): 9, ('rm35', 1.0): 1, ('negooo', 1.0): 1, ('male', 1.0): 1, ('madelin', 1.0): 1, ('nun', 1.0): 1, ('mornin', 1.0): 2, ('yapster', 1.0): 1, ('pli', 1.0): 1, ('icon', 1.0): 2, ('alchemist', 1.0): 1, ('x27', 1.0): 1, ('dayz', 1.0): 1, ('preview', 1.0): 1, ('thug', 1.0): 1, ('lmao', 1.0): 3, ('sharethelov', 1.0): 2, ('highvalu', 1.0): 2, ('halsey', 1.0): 1, ('30th', 1.0): 1, ('anniversari', 1.0): 5, ('folk', 1.0): 10, ('bae', 1.0): 6, ('repli', 1.0): 5, ('complain', 1.0): 3, ('rude', 1.0): 3, ('bond', 1.0): 4, ('nigg', 1.0): 1, ('readingr', 1.0): 1, ('wordoftheweek', 1.0): 1, ('wotw', 1.0): 1, ('4:18', 1.0): 1, ('est', 1.0): 1, ('earn', 1.0): 1, ('jess', 1.0): 2, ('surri', 1.0): 1, ('botani', 1.0): 1, ('gel', 1.0): 1, ('alison', 1.0): 1, ('lsa', 1.0): 1, ('respons', 1.0): 7, ('fron', 1.0): 1, ('debbi', 1.0): 1, ('carol', 1.0): 2, ('patient', 1.0): 4, ('discharg', 1.0): 1, ('loung', 1.0): 1, ('walmart', 1.0): 1, ('balanc', 1.0): 2, ('studi', 1.0): 6, ('hayley', 1.0): 2, ('shoulder', 1.0): 1, ('pad', 1.0): 2, ('mount', 1.0): 1, ('inquisitor', 1.0): 1, ('cosplay', 1.0): 4, ('cosplayprogress', 1.0): 1, ('mike', 1.0): 3, ('dunno', 1.0): 2, ('insecur', 1.0): 2, ('nh', 1.0): 1, ('devolut', 1.0): 1, ('patriot', 1.0): 
1, ('halla', 1.0): 1, ('ark', 1.0): 1, (\"jiyeon'\", 1.0): 1, ('buzz', 1.0): 2, ('burnt', 1.0): 1, ('mist', 1.0): 4, ('opi', 1.0): 1, ('avoplex', 1.0): 1, ('nail', 1.0): 3, ('cuticl', 1.0): 1, ('replenish', 1.0): 1, ('15ml', 1.0): 1, ('seriou', 1.0): 2, ('submiss', 1.0): 1, ('lb', 1.0): 2, ('cherish', 1.0): 2, ('flip', 1.0): 1, ('learnt', 1.0): 2, ('backflip', 1.0): 2, ('jumpgiant', 1.0): 1, ('foampit', 1.0): 1, ('usa', 1.0): 3, ('pamer', 1.0): 1, ('thk', 1.0): 1, ('actuallythough', 1.0): 1, ('craft', 1.0): 2, ('session', 1.0): 3, ('mehtab', 1.0): 1, ('aunti', 1.0): 1, ('gc', 1.0): 1, ('yeeew', 1.0): 1, ('pre', 1.0): 3, ('lan', 1.0): 1, ('yeey', 1.0): 1, ('arrang', 1.0): 1, ('doodl', 1.0): 2, ('comic', 1.0): 1, ('summon', 1.0): 1, ('none', 1.0): 1, ('🙅', 1.0): 1, ('lycra', 1.0): 1, ('vincent', 1.0): 1, ('couldnt', 1.0): 1, ('roy', 1.0): 1, ('bg', 1.0): 1, ('img', 1.0): 1, ('circl', 1.0): 1, ('font', 1.0): 1, ('deathofgrass', 1.0): 1, ('loan', 1.0): 2, ('lawnmow', 1.0): 1, ('popular', 1.0): 2, ('charismat', 1.0): 1, ('man.h', 1.0): 1, ('thrive', 1.0): 1, ('economi', 1.0): 1, ('burst', 1.0): 2, ('georgi', 1.0): 1, ('x26', 1.0): 1, ('million', 1.0): 4, ('fl', 1.0): 1, ('kindest', 1.0): 2, ('iceland', 1.0): 1, ('crazi', 1.0): 4, ('landscap', 1.0): 2, ('yok', 1.0): 1, ('lah', 1.0): 1, ('concordia', 1.0): 1, ('reunit', 1.0): 1, ('xxxibmchll', 1.0): 1, ('sea', 1.0): 4, ('prettier', 1.0): 2, ('imitatia', 1.0): 1, ('oe', 1.0): 1, ('michel', 1.0): 1, ('comeback', 1.0): 1, ('gross', 1.0): 1, ('treat', 1.0): 5, ('equal', 1.0): 2, ('injustic', 1.0): 1, ('femin', 1.0): 1, ('ineedfeminismbecaus', 1.0): 1, ('forgotten', 1.0): 3, ('stuck', 1.0): 4, ('recommend', 1.0): 4, ('redhead', 1.0): 1, ('wacki', 1.0): 1, ('rather', 1.0): 5, ('waytoliveahappylif', 1.0): 1, ('hoxton', 1.0): 1, ('holborn', 1.0): 1, ('karen', 1.0): 2, ('wag', 1.0): 2, ('bum', 1.0): 1, ('wwooo', 1.0): 1, ('nite', 1.0): 3, ('laiten', 1.0): 1, ('arond', 1.0): 1, ('1:30', 1.0): 1, ('consid', 1.0): 3, ('matur', 1.0): 3, ('journeyp', 1.0): 2, ('foam', 1.0): 1, (\"lady'\", 1.0): 1, ('mob', 1.0): 1, ('fals', 1.0): 1, ('bulletin', 1.0): 1, ('spring', 1.0): 1, ('fiesta', 1.0): 1, ('nois', 1.0): 2, ('awuuu', 1.0): 1, ('aich', 1.0): 1, ('sept', 1.0): 2, ('rudramadevi', 1.0): 1, ('anushka', 1.0): 1, ('gunashekar', 1.0): 1, ('harryxhood', 1.0): 1, ('upset', 1.0): 1, ('ooh', 1.0): 1, ('humanist', 1.0): 1, ('magazin', 1.0): 2, ('usernam', 1.0): 1, ('rape', 1.0): 1, ('csrrace', 1.0): 1, ('lack', 1.0): 6, ('hygien', 1.0): 1, ('tose', 1.0): 1, ('cloth', 1.0): 1, ('temperatur', 1.0): 1, ('planet', 1.0): 2, ('brave', 1.0): 2, ('ge', 1.0): 1, ('2015kenya', 1.0): 1, ('ryan', 1.0): 4, ('tidi', 1.0): 2, ('hagergang', 1.0): 1, ('chanhun', 1.0): 1, ('photoshoot', 1.0): 1, ('afteral', 1.0): 1, ('sadkaay', 1.0): 1, ('thark', 1.0): 1, ('peak', 1.0): 1, ('heatwav', 1.0): 1, ('lower', 1.0): 1, ('standard', 1.0): 2, ('x25', 1.0): 1, ('recruit', 1.0): 2, ('doom', 1.0): 1, ('nasti', 1.0): 1, ('affili', 1.0): 1, ('>:)', 1.0): 2, ('64', 1.0): 2, ('74', 1.0): 1, ('40', 1.0): 4, ('00', 1.0): 1, ('hall', 1.0): 2, ('ted', 1.0): 3, ('pixgram', 1.0): 2, ('creativ', 1.0): 2, ('slideshow', 1.0): 1, ('nibbl', 1.0): 2, ('ivi', 1.0): 1, ('sho', 1.0): 1, ('superpow', 1.0): 2, ('obsess', 1.0): 2, ('oth', 1.0): 1, ('third', 1.0): 2, ('ngarepfollbackdarinabilahjkt', 1.0): 1, ('48', 1.0): 1, ('sunglass', 1.0): 1, ('jacki', 1.0): 2, ('sunni', 1.0): 6, ('style', 1.0): 5, ('jlo', 1.0): 1, ('jlover', 1.0): 1, ('turkey', 1.0): 1, ('goodafternoon', 1.0): 2, ('collag', 1.0): 2, ('furri', 1.0): 2, 
('bruce', 1.0): 2, ('kunoriforceo', 1.0): 8, ('aayegi', 1.0): 1, ('tim', 1.0): 2, ('wiw', 1.0): 1, ('bip', 1.0): 1, ('zareen', 1.0): 1, ('daisi', 1.0): 1, (\"b'coz\", 1.0): 1, ('kart', 1.0): 1, ('mak', 1.0): 1, ('∗', 1.0): 2, ('lega', 1.0): 1, ('spag', 1.0): 1, ('boat', 1.0): 2, ('outboard', 1.0): 1, ('spell', 1.0): 4, ('reboard', 1.0): 1, ('fire', 1.0): 2, ('offboard', 1.0): 1, ('sn16', 1.0): 1, ('9dg', 1.0): 1, ('bnf', 1.0): 1, ('50', 1.0): 1, ('jason', 1.0): 1, ('rob', 1.0): 2, ('feb', 1.0): 1, ('victoriasecret', 1.0): 1, ('finland', 1.0): 1, ('helsinki', 1.0): 1, ('airport', 1.0): 3, ('plane', 1.0): 2, ('beyond', 1.0): 4, ('ont', 1.0): 1, ('tii', 1.0): 1, ('lng', 1.0): 2, ('yan', 1.0): 2, (\"u'll\", 1.0): 2, ('steve', 1.0): 2, ('bell', 1.0): 1, ('prescott', 1.0): 1, ('leadership', 1.0): 2, ('cartoon', 1.0): 1, ('upsid', 1.0): 2, ('statement', 1.0): 1, ('selamathariraya', 1.0): 1, ('lovesummertim', 1.0): 1, ('dumont', 1.0): 1, ('jax', 1.0): 1, ('jone', 1.0): 1, ('awesomee', 1.0): 1, ('x24', 1.0): 1, ('geoff', 1.0): 1, ('amazingli', 1.0): 1, ('talant', 1.0): 1, ('vsco', 1.0): 2, ('thanki', 1.0): 2, ('hash', 1.0): 1, ('tag', 1.0): 5, ('ifimeetanalien', 1.0): 1, ('bff', 1.0): 4, ('section', 1.0): 3, ('follbaaack', 1.0): 1, ('az', 1.0): 1, ('cauliflow', 1.0): 1, ('attempt', 1.0): 1, ('prinsesa', 1.0): 1, ('yaaah', 1.0): 2, ('law', 1.0): 3, ('toy', 1.0): 2, ('sonaaa', 1.0): 1, ('beautiful', 1.0): 2, (\"josephine'\", 1.0): 1, ('mirror', 1.0): 3, ('cretaperfect', 1.0): 2, ('4me', 1.0): 2, ('cretaperfectsuv', 1.0): 2, ('creta', 1.0): 1, ('load', 1.0): 1, ('telecom', 1.0): 2, ('judi', 1.0): 1, ('superb', 1.0): 1, ('slightli', 1.0): 1, ('rakna', 1.0): 1, ('ew', 1.0): 1, ('whose', 1.0): 1, ('fifa', 1.0): 1, ('lineup', 1.0): 1, ('surviv', 1.0): 2, ('p90x', 1.0): 1, ('p90', 1.0): 1, ('dishoom', 1.0): 2, ('rajnigandha', 1.0): 1, ('minju', 1.0): 1, ('rapper', 1.0): 1, ('lead', 1.0): 2, ('vocal', 1.0): 1, ('yujin', 1.0): 1, ('visual', 1.0): 2, ('makna', 1.0): 1, ('jane', 1.0): 2, ('hah', 1.0): 4, ('hawk', 1.0): 2, ('greatest', 1.0): 2, ('histori', 1.0): 2, ('along', 1.0): 6, ('talkback', 1.0): 1, ('process', 1.0): 4, ('featur', 1.0): 4, ('mostli', 1.0): 1, (\"cinema'\", 1.0): 1, ('defend', 1.0): 2, ('fashion', 1.0): 2, ('atroc', 1.0): 1, ('pandimension', 1.0): 1, ('manifest', 1.0): 1, ('argo', 1.0): 1, ('ring', 1.0): 4, ('640', 1.0): 1, ('nad', 1.0): 1, ('plezzz', 1.0): 1, ('asthma', 1.0): 1, ('inhal', 1.0): 1, ('breath', 1.0): 3, ('goodluck', 1.0): 1, ('hunger', 1.0): 1, ('mockingjay', 1.0): 1, ('thehungergam', 1.0): 1, ('ador', 1.0): 4, ('x23', 1.0): 1, ('reina', 1.0): 1, ('felt', 1.0): 3, ('excus', 1.0): 2, ('attend', 1.0): 2, ('whn', 1.0): 1, ('andr', 1.0): 1, ('mamayang', 1.0): 1, ('11pm', 1.0): 1, ('1d', 1.0): 2, ('89.9', 1.0): 1, ('powi', 1.0): 1, ('shropshir', 1.0): 1, ('border', 1.0): 1, (\"school'\", 1.0): 1, ('san', 1.0): 2, ('diego', 1.0): 1, ('jump', 1.0): 2, ('sourc', 1.0): 3, ('appeas', 1.0): 1, ('¦', 1.0): 1, ('aj', 1.0): 1, ('action', 1.0): 1, ('grunt', 1.0): 1, ('sc', 1.0): 1, ('anti-christ', 1.0): 1, ('m8', 1.0): 1, ('ju', 1.0): 1, ('halfway', 1.0): 1, ('ex', 1.0): 2, ('postiv', 1.0): 2, ('opinion', 1.0): 3, ('avi', 1.0): 1, ('dare', 1.0): 4, ('corridor', 1.0): 1, ('👯', 1.0): 2, ('neither', 1.0): 2, ('rundown', 1.0): 1, ('yah', 1.0): 4, ('leviboard', 1.0): 1, ('kleper', 1.0): 1, (':(', 1.0): 1, ('impecc', 1.0): 2, ('setokido', 1.0): 1, ('shoulda', 1.0): 3, ('hippo', 1.0): 1, ('materialist', 1.0): 1, ('showpo', 1.0): 1, ('cough', 1.0): 6, ('@artofsleepingin', 1.0): 1, ('x22', 1.0): 1, 
('☺', 1.0): 5, ('makesm', 1.0): 1, ('santorini', 1.0): 1, ('escap', 1.0): 2, ('beatport', 1.0): 1, ('🏻', 1.0): 3, ('trmdhesit', 1.0): 2, ('manuel', 1.0): 1, ('vall', 1.0): 1, ('king', 1.0): 3, ('seven', 1.0): 2, ('kingdom', 1.0): 2, ('andal', 1.0): 1, ('taught', 1.0): 1, ('hide', 1.0): 3, ('privaci', 1.0): 1, ('wise', 1.0): 1, ('natsuki', 1.0): 1, ('often', 1.0): 2, ('catchi', 1.0): 1, ('neil', 1.0): 2, ('emir', 1.0): 2, ('brill', 1.0): 1, ('urquhart', 1.0): 1, ('castl', 1.0): 1, ('simpl', 1.0): 2, ('shatter', 1.0): 2, ('contrast', 1.0): 1, ('educampakl', 1.0): 1, ('rotorua', 1.0): 1, ('pehli', 1.0): 1, ('phir', 1.0): 1, ('somi', 1.0): 1, ('burfday', 1.0): 1, ('univers', 1.0): 3, ('santo', 1.0): 1, ('toma', 1.0): 1, ('norh', 1.0): 1, ('dialogu', 1.0): 2, ('chainsaw', 1.0): 2, ('amus', 1.0): 1, ('awe', 1.0): 1, ('protect', 1.0): 2, ('pop', 1.0): 5, ('2ish', 1.0): 1, ('fahad', 1.0): 1, ('bhai', 1.0): 3, ('iqrar', 1.0): 1, ('waseem', 1.0): 1, ('abroad', 1.0): 2, ('movie', 1.0): 1, ('chef', 1.0): 1, ('grogol', 1.0): 1, ('long-dist', 1.0): 1, ('rhi', 1.0): 1, ('pwrfl', 1.0): 1, ('benefit', 1.0): 2, ('b2b', 1.0): 1, ('b2c', 1.0): 1, (\"else'\", 1.0): 2, ('soo', 1.0): 2, ('enterprison', 1.0): 1, ('schoolsoutforsumm', 1.0): 1, ('fellow', 1.0): 4, ('juggl', 1.0): 1, ('purrtho', 1.0): 1, ('catho', 1.0): 1, ('catami', 1.0): 1, ('fourfivesecond', 1.0): 4, ('deaf', 1.0): 4, ('drug', 1.0): 1, ('alcohol', 1.0): 1, ('apexi', 1.0): 3, ('crystal', 1.0): 3, ('meth', 1.0): 1, ('champagn', 1.0): 1, ('fc', 1.0): 1, ('streamer', 1.0): 1, ('juic', 1.0): 1, ('correct', 1.0): 1, ('portrait', 1.0): 1, ('izumi', 1.0): 1, ('fugiwara', 1.0): 1, ('clonmel', 1.0): 1, ('vibrant', 1.0): 1, ('estim', 1.0): 1, ('server', 1.0): 2, ('quiet', 1.0): 1, ('yey', 1.0): 1, (\"insha'allah\", 1.0): 1, ('wil', 1.0): 1, ('x21', 1.0): 1, ('trend', 1.0): 3, ('akshaymostlovedsuperstarev', 1.0): 1, ('indirect', 1.0): 1, ('askurban', 1.0): 1, ('lyka', 1.0): 2, ('nap', 1.0): 4, ('aff', 1.0): 1, ('unam', 1.0): 1, ('jonginuh', 1.0): 1, ('forecast', 1.0): 2, ('10am', 1.0): 2, ('5am', 1.0): 1, ('sooth', 1.0): 1, ('vii', 1.0): 1, ('sweetheart', 1.0): 1, ('freak', 1.0): 3, ('zayn', 1.0): 3, ('fucker', 1.0): 1, ('pet', 1.0): 2, ('illustr', 1.0): 1, ('wohoo', 1.0): 1, ('gleam', 1.0): 1, ('paint', 1.0): 4, ('deal', 1.0): 2, ('prime', 1.0): 2, ('minist', 1.0): 2, ('sunjam', 1.0): 1, ('industri', 1.0): 1, ('present', 1.0): 7, ('practic', 1.0): 3, ('proactiv', 1.0): 1, ('environ', 1.0): 1, ('unreal', 1.0): 1, ('zain', 1.0): 1, ('zac', 1.0): 1, ('isaac', 1.0): 1, ('oss', 1.0): 1, ('frank', 1.0): 1, ('iero', 1.0): 1, ('phase', 1.0): 2, ('david', 1.0): 1, ('beginn', 1.0): 1, ('shine', 1.0): 3, ('sunflow', 1.0): 2, ('tommarow', 1.0): 1, ('yall', 1.0): 2, ('rank', 1.0): 2, ('birthdaymonth', 1.0): 1, ('vianey', 1.0): 1, ('juli', 1.0): 11, ('birthdaygirl', 1.0): 1, (\"town'\", 1.0): 1, ('andrew', 1.0): 2, ('checkout', 1.0): 2, ('otwol', 1.0): 1, ('awhil', 1.0): 1, ('x20', 1.0): 1, ('all-tim', 1.0): 1, ('julia', 1.0): 1, ('robert', 1.0): 1, ('awwhh', 1.0): 1, ('bulldog', 1.0): 1, ('unfortun', 1.0): 2, ('02079', 1.0): 1, ('490', 1.0): 1, ('132', 1.0): 1, ('born', 1.0): 2, ('fightstickfriday', 1.0): 1, ('extravag', 1.0): 2, ('tearout', 1.0): 1, ('selekt', 1.0): 1, ('yoot', 1.0): 1, ('cross', 1.0): 3, ('gudday', 1.0): 1, ('dave', 1.0): 5, ('haileyhelp', 1.0): 1, ('eid', 1.0): 2, ('mubarak', 1.0): 5, ('brotheeerrr', 1.0): 1, ('adventur', 1.0): 5, ('tokyo', 1.0): 2, ('kansai', 1.0): 1, ('l', 1.0): 4, ('upp', 1.0): 2, ('om', 1.0): 1, ('60', 1.0): 1, ('minut', 1.0): 7, 
('data', 1.0): 1, ('jesu', 1.0): 5, ('amsterdam', 1.0): 2, ('3rd', 1.0): 3, ('nextweek', 1.0): 1, ('booti', 1.0): 2, ('bcuz', 1.0): 1, ('step', 1.0): 3, ('option', 1.0): 3, ('stabl', 1.0): 1, ('sturdi', 1.0): 1, ('lukkke', 1.0): 1, ('again.ensoi', 1.0): 1, ('tc', 1.0): 1, ('madam', 1.0): 1, ('siddi', 1.0): 1, ('unknown', 1.0): 2, ('roomi', 1.0): 1, ('gn', 1.0): 2, ('gf', 1.0): 2, ('consent', 1.0): 1, ('mister', 1.0): 2, ('vine', 1.0): 2, ('peyton', 1.0): 1, ('nagato', 1.0): 1, ('yuki-chan', 1.0): 1, ('shoushitsu', 1.0): 1, ('archdbanterburi', 1.0): 3, ('experttradesmen', 1.0): 1, ('banter', 1.0): 1, ('quiz', 1.0): 1, ('tradetalk', 1.0): 1, ('floof', 1.0): 1, ('face', 1.0): 13, ('muahah', 1.0): 1, ('x19', 1.0): 1, ('anticip', 1.0): 1, ('jd', 1.0): 1, ('laro', 1.0): 1, ('tayo', 1.0): 1, ('answer', 1.0): 8, ('ht', 1.0): 1, ('angelica', 1.0): 1, ('anghel', 1.0): 1, ('aa', 1.0): 3, ('kkk', 1.0): 1, ('macbook', 1.0): 1, ('rehears', 1.0): 1, ('youthcelebr', 1.0): 1, ('mute', 1.0): 1, ('29th', 1.0): 1, ('gohf', 1.0): 4, ('vegetarian', 1.0): 1, (\"she'll\", 1.0): 1, ('gooday', 1.0): 3, ('101', 1.0): 3, ('12000', 1.0): 1, ('oshieer', 1.0): 1, ('realreview', 1.0): 1, ('happycustom', 1.0): 1, ('realoshi', 1.0): 1, ('dealsuthaonotebachao', 1.0): 1, ('bigger', 1.0): 2, ('dime', 1.0): 1, ('uhuh', 1.0): 1, ('🎵', 1.0): 3, ('code', 1.0): 4, ('pleasant', 1.0): 2, ('on-board', 1.0): 1, ('raheel', 1.0): 1, ('flyhigh', 1.0): 1, ('bother', 1.0): 2, ('everett', 1.0): 1, ('taylor', 1.0): 1, ('ha-ha', 1.0): 1, ('peachyloan', 1.0): 1, ('fridayfreebi', 1.0): 1, ('noe', 1.0): 1, ('yisss', 1.0): 1, ('bindingofissac', 1.0): 1, ('xboxon', 1.0): 1, ('consol', 1.0): 1, ('justin', 1.0): 2, ('gladli', 1.0): 1, ('son', 1.0): 4, ('morocco', 1.0): 1, ('peru', 1.0): 1, ('nxt', 1.0): 1, ('bp', 1.0): 1, ('resort', 1.0): 1, ('x18', 1.0): 1, ('havuuulovey', 1.0): 1, ('uuu', 1.0): 1, ('possitv', 1.0): 1, ('hopey', 1.0): 1, ('throwbackfriday', 1.0): 1, ('christen', 1.0): 1, ('ki', 1.0): 1, ('yaad', 1.0): 1, ('gayi', 1.0): 1, ('opossum', 1.0): 1, ('belat', 1.0): 5, ('yeahh', 1.0): 2, ('kuffar', 1.0): 1, ('comput', 1.0): 5, ('cell', 1.0): 1, ('diarrhea', 1.0): 1, ('immigr', 1.0): 1, ('lice', 1.0): 1, ('goictiv', 1.0): 1, ('70685', 1.0): 1, ('tagsforlik', 1.0): 4, ('trapmus', 1.0): 1, ('hotmusicdeloco', 1.0): 1, ('kinick', 1.0): 1, ('01282', 1.0): 2, ('452096', 1.0): 1, ('shadi', 1.0): 1, ('reserv', 1.0): 3, ('tkt', 1.0): 1, ('likewis', 1.0): 4, ('overgener', 1.0): 1, ('ikr', 1.0): 1, ('😍', 1.0): 2, ('consumer', 1.0): 1, ('fic', 1.0): 2, ('ouch', 1.0): 2, ('slip', 1.0): 1, ('disc', 1.0): 1, ('thw', 1.0): 1, ('chute', 1.0): 1, ('chalut', 1.0): 1, ('replay', 1.0): 1, ('iplay', 1.0): 1, ('11am', 1.0): 3, ('unneed', 1.0): 1, ('megamoh', 1.0): 1, ('7/29', 1.0): 1, ('tool', 1.0): 2, ('zealand', 1.0): 1, ('pile', 1.0): 2, ('dump', 1.0): 1, ('couscou', 1.0): 3, (\"women'\", 1.0): 2, ('fiction', 1.0): 1, ('wahahaah', 1.0): 1, ('x17', 1.0): 1, ('orhan', 1.0): 1, ('pamuk', 1.0): 1, ('hero', 1.0): 3, ('canopi', 1.0): 1, ('mapl', 1.0): 2, ('syrup', 1.0): 1, ('farm', 1.0): 2, ('stephani', 1.0): 2, ('💖', 1.0): 2, ('congrtaualt', 1.0): 1, ('philea', 1.0): 1, ('club', 1.0): 4, ('inc', 1.0): 1, ('photograph', 1.0): 2, ('phonegraph', 1.0): 1, ('srsli', 1.0): 1, ('10:17', 1.0): 1, ('ripaaa', 1.0): 1, ('banat', 1.0): 1, ('ray', 1.0): 1, ('dept', 1.0): 1, ('hospit', 1.0): 3, ('grt', 1.0): 1, ('infograph', 1.0): 1, (\"o'clock\", 1.0): 2, ('habit', 1.0): 1, ('1dfor', 1.0): 1, ('roadtrip', 1.0): 1, ('19:30', 1.0): 1, ('ifc', 1.0): 1, ('whip', 1.0): 1, 
('lilsisbro', 1.0): 1, ('pre-ord', 1.0): 2, (\"pixar'\", 1.0): 2, ('steelbook', 1.0): 1, ('hmm', 1.0): 2, ('pegel', 1.0): 1, ('lemess', 1.0): 1, ('kyle', 1.0): 2, ('paypal', 1.0): 1, ('oct', 1.0): 1, ('tud', 1.0): 1, ('jst', 1.0): 2, ('humphrey', 1.0): 1, ('yell', 1.0): 2, ('erm', 1.0): 1, ('breach', 1.0): 1, ('lemon', 1.0): 2, ('yogurt', 1.0): 2, ('pot', 1.0): 1, ('discov', 1.0): 2, ('liquoric', 1.0): 1, ('pud', 1.0): 1, ('cajun', 1.0): 1, ('spice', 1.0): 1, ('yum', 1.0): 2, ('cajunchicken', 1.0): 1, ('infinit', 1.0): 2, ('fight', 1.0): 4, ('gern', 1.0): 1, ('cikaaa', 1.0): 1, ('maaf', 1.0): 1, ('telat', 1.0): 1, ('ngucapinnya', 1.0): 1, ('maaay', 1.0): 1, ('x16', 1.0): 1, ('viparita', 1.0): 1, ('karani', 1.0): 1, ('legsupthewal', 1.0): 1, ('unwind', 1.0): 1, ('coco', 1.0): 3, ('comfi', 1.0): 1, ('jalulu', 1.0): 1, ('rosh', 1.0): 1, ('gla', 1.0): 1, ('pallavi', 1.0): 1, ('nairobi', 1.0): 1, ('hrdstellobama', 1.0): 1, ('region', 1.0): 2, ('civil', 1.0): 1, ('societi', 1.0): 2, ('globe', 1.0): 1, ('hajur', 1.0): 1, ('yayi', 1.0): 2, (\"must'v\", 1.0): 1, ('nerv', 1.0): 1, ('prelim', 1.0): 1, ('costacc', 1.0): 1, ('nwb', 1.0): 1, ('shud', 1.0): 1, ('cold', 1.0): 2, ('hmu', 1.0): 2, ('cala', 1.0): 1, ('brush', 1.0): 1, ('ego', 1.0): 1, ('wherev', 1.0): 1, ('interact', 1.0): 2, ('dongsaeng', 1.0): 1, ('chorong', 1.0): 1, ('friendship', 1.0): 1, ('impress', 1.0): 3, ('dragon', 1.0): 2, ('duck', 1.0): 5, ('mix', 1.0): 5, ('cheetah', 1.0): 1, ('wagga', 1.0): 2, ('coursework', 1.0): 1, ('lorna', 1.0): 1, ('scan', 1.0): 1, ('x12', 1.0): 2, ('canva', 1.0): 2, ('iqbal', 1.0): 1, ('ima', 1.0): 1, ('hon', 1.0): 1, ('aja', 1.0): 1, ('besi', 1.0): 1, ('chati', 1.0): 1, ('phulani', 1.0): 1, ('swasa', 1.0): 1, ('bahari', 1.0): 1, ('jiba', 1.0): 1, ('mumbai', 1.0): 1, ('gujarat', 1.0): 1, ('distrub', 1.0): 1, ('otherwis', 1.0): 5, ('190cr', 1.0): 1, ('inspit', 1.0): 1, ('highest', 1.0): 1, ('holder', 1.0): 1, ('threaten', 1.0): 1, ('daili', 1.0): 2, ('basi', 1.0): 1, ('vr', 1.0): 1, ('angelo', 1.0): 1, ('quezon', 1.0): 1, ('sweatpant', 1.0): 1, ('farbridg', 1.0): 1, ('segalakatakata', 1.0): 1, ('nixu', 1.0): 1, ('begun', 1.0): 1, ('flint', 1.0): 1, ('🍰', 1.0): 5, ('separ', 1.0): 1, ('criticis', 1.0): 1, ('gestur', 1.0): 1, ('pedal', 1.0): 1, ('stroke', 1.0): 1, ('caro', 1.0): 1, ('deposit', 1.0): 1, ('secur', 1.0): 2, ('shock', 1.0): 1, ('coff', 1.0): 2, ('tenerina', 1.0): 1, ('auguri', 1.0): 1, ('iso', 1.0): 1, ('certif', 1.0): 1, ('paralyz', 1.0): 1, ('anxieti', 1.0): 1, (\"it'd\", 1.0): 1, ('develop', 1.0): 3, ('spain', 1.0): 2, ('def', 1.0): 1, ('bantim', 1.0): 1, ('fail', 1.0): 5, ('2ban', 1.0): 1, ('x15', 1.0): 1, ('awkward', 1.0): 2, ('ab', 1.0): 1, ('gale', 1.0): 1, ('founder', 1.0): 1, ('loveyaaah', 1.0): 1, ('⅛', 1.0): 1, ('⅞', 1.0): 1, ('∞', 1.0): 1, ('specialist', 1.0): 1, ('aw', 1.0): 3, ('babyyi', 1.0): 1, ('djstruthmat', 1.0): 1, ('re-cap', 1.0): 1, ('flickr', 1.0): 1, ('tack', 1.0): 2, ('zephbot', 1.0): 1, ('hhahahahaha', 1.0): 1, ('blew', 1.0): 2, ('entir', 1.0): 2, ('vega', 1.0): 3, ('strip', 1.0): 1, ('hahahahahhaha', 1.0): 1, (\"callie'\", 1.0): 1, ('puppi', 1.0): 1, ('owner', 1.0): 2, ('callinganimalabusehotlineasap', 1.0): 1, ('gorefiend', 1.0): 1, ('mythic', 1.0): 1, ('remind', 1.0): 6, ('9:00', 1.0): 1, ('▪', 1.0): 2, ('bea', 1.0): 1, ('miller', 1.0): 2, ('lockscreen', 1.0): 1, ('mbf', 1.0): 1, ('keesh', 1.0): 1, (\"yesterday'\", 1.0): 1, ('groupi', 1.0): 1, ('bebe', 1.0): 1, ('sizam', 1.0): 1, ('color', 1.0): 5, ('invoic', 1.0): 1, ('kanina', 1.0): 1, ('pong', 1.0): 1, ('umaga', 
1.0): 1, ('browser', 1.0): 1, ('typic', 1.0): 2, ('pleass', 1.0): 5, ('leeteuk', 1.0): 1, ('pearl', 1.0): 1, ('thusi', 1.0): 1, ('pour', 1.0): 1, ('milk', 1.0): 2, ('tgv', 1.0): 1, ('pari', 1.0): 5, ('austerlitz', 1.0): 1, ('bloi', 1.0): 1, ('mile', 1.0): 3, ('chateau', 1.0): 1, ('de', 1.0): 1, ('marai', 1.0): 1, ('taxi', 1.0): 1, ('x14', 1.0): 1, ('nom', 1.0): 1, ('enji', 1.0): 1, ('hater', 1.0): 3, ('purchas', 1.0): 2, ('specially-mark', 1.0): 1, ('custard', 1.0): 1, ('sm', 1.0): 1, ('on-pack', 1.0): 1, ('instruct', 1.0): 1, ('tile', 1.0): 1, ('downstair', 1.0): 1, ('kelli', 1.0): 1, ('greek', 1.0): 2, ('petra', 1.0): 1, ('shadowplayloui', 1.0): 1, ('mutual', 1.0): 2, ('cuz', 1.0): 4, ('liveonstream', 1.0): 1, ('lani', 1.0): 1, ('graze', 1.0): 1, ('pride', 1.0): 1, ('bristolart', 1.0): 1, ('in-app', 1.0): 1, ('ensur', 1.0): 1, ('item', 1.0): 2, ('screw', 1.0): 1, ('amber', 1.0): 2, ('43', 1.0): 1, ('hpc', 1.0): 1, ('wip', 1.0): 2, ('sw', 1.0): 1, ('newsround', 1.0): 1, ('hound', 1.0): 1, ('7:40', 1.0): 1, ('ada', 1.0): 1, ('racist', 1.0): 1, ('hulk', 1.0): 1, ('tight', 1.0): 2, ('prayer', 1.0): 3, ('pardon', 1.0): 1, ('phl', 1.0): 1, ('abu', 1.0): 2, ('dhabi', 1.0): 1, ('hihihi', 1.0): 1, ('teamjanuaryclaim', 1.0): 1, ('godonna', 1.0): 1, ('msg', 1.0): 2, ('bowwowchicawowwow', 1.0): 1, ('settl', 1.0): 1, ('dkt', 1.0): 1, ('porch', 1.0): 1, ('uber', 1.0): 2, ('mobil', 1.0): 4, ('applic', 1.0): 3, ('giggl', 1.0): 2, ('bare', 1.0): 3, ('wind', 1.0): 2, ('kahlil', 1.0): 1, ('gibran', 1.0): 1, ('flash', 1.0): 1, ('stiff', 1.0): 1, ('upper', 1.0): 1, ('lip', 1.0): 1, ('britain', 1.0): 1, ('latmon', 1.0): 1, ('endeavour', 1.0): 1, ('ann', 1.0): 2, ('joy', 1.0): 4, ('os', 1.0): 1, ('exploit', 1.0): 1, ('ign', 1.0): 2, ('au', 1.0): 1, ('pubcast', 1.0): 1, ('tengaman', 1.0): 1, ('21', 1.0): 2, ('celebratio', 1.0): 1, ('women', 1.0): 1, ('instal', 1.0): 2, ('glorifi', 1.0): 1, ('infirm', 1.0): 1, ('silli', 1.0): 1, ('suav', 1.0): 1, ('gentlemen', 1.0): 1, ('monthli', 1.0): 1, ('mileag', 1.0): 1, ('target', 1.0): 2, ('samsung', 1.0): 1, ('qualiti', 1.0): 3, ('ey', 1.0): 1, ('beth', 1.0): 2, ('gangster', 1.0): 1, (\"athena'\", 1.0): 1, ('fanci', 1.0): 1, ('wellington', 1.0): 1, ('rich', 1.0): 2, ('christina', 1.0): 1, ('newslett', 1.0): 1, ('zy', 1.0): 1, ('olur', 1.0): 1, ('x13', 1.0): 1, ('flawless', 1.0): 1, ('reaction', 1.0): 2, ('hayli', 1.0): 1, ('edwin', 1.0): 1, ('elvena', 1.0): 1, ('emc', 1.0): 1, ('rubber', 1.0): 3, ('swearword', 1.0): 1, ('infect', 1.0): 1, ('10:16', 1.0): 1, ('wrote', 1.0): 3, ('gan', 1.0): 1, ('brotherhood', 1.0): 1, ('wolf', 1.0): 5, ('pill', 1.0): 1, ('nocturn', 1.0): 1, ('rrp', 1.0): 1, ('18.99', 1.0): 1, ('13.99', 1.0): 1, ('jah', 1.0): 1, ('wobbl', 1.0): 1, ('retard', 1.0): 1, ('50notif', 1.0): 1, ('check-up', 1.0): 1, ('pun', 1.0): 1, ('elit', 1.0): 1, ('camillu', 1.0): 1, ('pleasee', 1.0): 1, ('spare', 1.0): 1, ('tyre', 1.0): 2, ('joke', 1.0): 3, ('ahahah', 1.0): 1, ('shame', 1.0): 1, ('abandon', 1.0): 1, ('disagre', 1.0): 2, ('nowher', 1.0): 2, ('contradict', 1.0): 1, ('chao', 1.0): 1, ('contain', 1.0): 1, ('cranium', 1.0): 1, ('sneaker', 1.0): 1, ('nike', 1.0): 1, ('nikeorigin', 1.0): 1, ('nikeindonesia', 1.0): 1, ('pierojogg', 1.0): 1, ('skoy', 1.0): 1, ('winter', 1.0): 2, ('falkland', 1.0): 1, ('jamie-le', 1.0): 1, ('congraaat', 1.0): 1, ('hooh', 1.0): 1, ('chrome', 1.0): 1, ('storm', 1.0): 1, ('thunderstorm', 1.0): 1, ('circuscircu', 1.0): 1, ('omgg', 1.0): 1, ('tdi', 1.0): 1, ('(-:', 1.0): 2, ('peter', 1.0): 1, ('expel', 1.0): 2, ('boughi', 1.0): 1, 
('kernel', 1.0): 1, ('paralysi', 1.0): 1, ('liza', 1.0): 1, ('lol.hook', 1.0): 1, ('vampir', 1.0): 2, ('diari', 1.0): 3, ('twice', 1.0): 1, ('thanq', 1.0): 2, ('goodwil', 1.0): 1, ('vandr', 1.0): 1, ('ash', 1.0): 1, ('debat', 1.0): 3, ('solar', 1.0): 1, ('6-5', 1.0): 1, ('shown', 1.0): 1, ('ek', 1.0): 1, ('taco', 1.0): 2, ('mexico', 1.0): 2, ('viva', 1.0): 1, ('méxico', 1.0): 1, ('burger', 1.0): 3, ('thebestangkapuso', 1.0): 1, ('lighter', 1.0): 1, ('tooth', 1.0): 2, ('korean', 1.0): 2, ('netizen', 1.0): 1, ('crueler', 1.0): 1, ('eleph', 1.0): 1, ('marula', 1.0): 1, ('tdif', 1.0): 1, ('shoutout', 1.0): 1, ('shortli', 1.0): 1, ('itsamarvelth', 1.0): 1, (\"japan'\", 1.0): 1, ('artist', 1.0): 1, ('homework', 1.0): 1, ('marco', 1.0): 1, ('herb', 1.0): 1, ('pm', 1.0): 3, ('self', 1.0): 1, ('esteem', 1.0): 1, ('patienc', 1.0): 1, ('sobtian', 1.0): 1, ('cowork', 1.0): 1, ('deathli', 1.0): 1, ('hallow', 1.0): 1, ('supernatur', 1.0): 1, ('consult', 1.0): 1, ('himach', 1.0): 1, ('2.25', 1.0): 1, ('asham', 1.0): 1, ('where.do.i.start', 1.0): 1, ('moviemarathon', 1.0): 1, ('skill', 1.0): 4, ('shadow', 1.0): 1, ('own', 1.0): 1, ('pair', 1.0): 3, (\"it'll\", 1.0): 6, ('cortez', 1.0): 1, ('superstar', 1.0): 1, ('tthank', 1.0): 1, ('colin', 1.0): 1, ('luxuou', 1.0): 1, ('tarryn', 1.0): 1, ('hbdme', 1.0): 1, ('yeeeyyy', 1.0): 1, ('barsostay', 1.0): 1, ('males', 1.0): 1, ('independ', 1.0): 1, ('sum', 1.0): 1, ('debacl', 1.0): 1, ('perfectli', 1.0): 1, ('longer', 1.0): 2, ('amyjackson', 1.0): 1, ('omegl', 1.0): 2, ('countrymus', 1.0): 1, ('five', 1.0): 2, (\"night'\", 1.0): 2, (\"freddy'\", 1.0): 2, ('demo', 1.0): 2, ('pump', 1.0): 2, ('fanboy', 1.0): 1, ('thegrandad', 1.0): 1, ('sidni', 1.0): 1, ('remarriag', 1.0): 1, ('occas', 1.0): 1, ('languag', 1.0): 1, ('java', 1.0): 1, (\"php'\", 1.0): 1, ('notion', 1.0): 1, ('refer', 1.0): 1, ('confus', 1.0): 3, ('ohioan', 1.0): 1, ('stick', 1.0): 2, ('doctor', 1.0): 3, ('offlin', 1.0): 1, ('thesim', 1.0): 1, ('mb', 1.0): 1, ('meaningless', 1.0): 1, ('common', 1.0): 1, ('celebr', 1.0): 9, ('muertosatfring', 1.0): 1, ('emul', 1.0): 1, ('brought', 1.0): 1, ('enemi', 1.0): 2, ('relax', 1.0): 3, ('ou', 1.0): 1, ('pink', 1.0): 2, ('cc', 1.0): 2, ('meooowww', 1.0): 1, ('barkkkiiidee', 1.0): 1, ('bark', 1.0): 1, ('x11', 1.0): 1, ('routin', 1.0): 4, ('alek', 1.0): 1, ('awh', 1.0): 2, ('kumpul', 1.0): 1, ('cantik', 1.0): 1, ('ganteng', 1.0): 1, ('kresna', 1.0): 1, ('jelli', 1.0): 1, ('simon', 1.0): 1, ('lesley', 1.0): 3, ('blood', 1.0): 2, ('panti', 1.0): 1, ('lion', 1.0): 1, ('artworkbyli', 1.0): 1, ('judo', 1.0): 1, ('daredevil', 1.0): 2, ('despond', 1.0): 1, ('re-watch', 1.0): 1, ('welcoma.hav', 1.0): 1, ('favor', 1.0): 5, ('tridon', 1.0): 1, ('21pic', 1.0): 1, ('master', 1.0): 3, ('nim', 1.0): 1, (\"there'r\", 1.0): 1, ('22pic', 1.0): 1, ('kebun', 1.0): 1, ('ubud', 1.0): 1, ('ladyposs', 1.0): 1, ('xoxoxo', 1.0): 1, ('sneak', 1.0): 3, ('peek', 1.0): 2, ('inbox', 1.0): 1, ('happyweekend', 1.0): 1, ('therealgolden', 1.0): 1, ('47', 1.0): 1, ('girlfriendsmya', 1.0): 1, ('ppl', 1.0): 2, ('closest', 1.0): 1, ('njoy', 1.0): 1, ('followingg', 1.0): 1, ('privat', 1.0): 1, ('pusher', 1.0): 1, ('stun', 1.0): 4, ('wooohooo', 1.0): 1, ('cuss', 1.0): 1, ('teenag', 1.0): 1, ('ace', 1.0): 1, ('sauc', 1.0): 3, ('livi', 1.0): 1, ('fowl', 1.0): 1, ('oliviafowl', 1.0): 1, ('891', 1.0): 1, ('burnout', 1.0): 1, ('johnforceo', 1.0): 1, ('matthew', 1.0): 1, ('provok', 1.0): 1, ('indiankultur', 1.0): 1, ('oppos', 1.0): 1, ('biker', 1.0): 1, ('lyk', 1.0): 1, ('gud', 1.0): 4, ('weight', 1.0): 6, 
('bcu', 1.0): 1, ('rubbish', 1.0): 1, ('veggi', 1.0): 2, ('steph', 1.0): 1, ('nj', 1.0): 1, ('x10', 1.0): 1, ('cohes', 1.0): 1, ('gossip', 1.0): 2, ('alex', 1.0): 3, ('heswifi', 1.0): 1, ('7am', 1.0): 1, ('wub', 1.0): 1, ('cerbchan', 1.0): 1, ('jarraaa', 1.0): 1, ('morrrn', 1.0): 1, ('snooz', 1.0): 1, ('clicksco', 1.0): 1, ('gay', 1.0): 4, ('lesbian', 1.0): 2, ('rigid', 1.0): 1, ('theocrat', 1.0): 1, ('wing', 1.0): 1, ('fundamentalist', 1.0): 1, ('islamist', 1.0): 1, ('brianaaa', 1.0): 1, ('brianazabrocki', 1.0): 1, ('sky', 1.0): 2, ('batb', 1.0): 1, ('clap', 1.0): 3, ('whilst', 1.0): 1, ('aki', 1.0): 1, ('thencerest', 1.0): 2, ('547', 1.0): 2, ('indiemus', 1.0): 5, ('sexyjudi', 1.0): 3, ('pussi', 1.0): 4, ('sexo', 1.0): 3, ('humid', 1.0): 1, ('87', 1.0): 1, ('sloppi', 1.0): 1, (\"second'\", 1.0): 1, ('stock', 1.0): 3, ('marmit', 1.0): 2, ('x9', 1.0): 1, ('nic', 1.0): 3, ('taft', 1.0): 1, ('finalist', 1.0): 1, ('lotteri', 1.0): 1, ('award', 1.0): 3, ('usagi', 1.0): 1, ('looov', 1.0): 1, ('wowww', 1.0): 2, ('💙', 1.0): 8, ('💚', 1.0): 8, ('💕', 1.0): 12, ('lepa', 1.0): 1, ('sembuh', 1.0): 1, ('sibuk', 1.0): 1, ('balik', 1.0): 1, ('kin', 1.0): 1, ('gotham', 1.0): 1, ('sunnyday', 1.0): 1, ('dudett', 1.0): 1, ('cost', 1.0): 1, ('flippin', 1.0): 1, ('fortun', 1.0): 1, ('divinediscont', 1.0): 1, (';}', 1.0): 1, ('amnot', 1.0): 1, ('autofollow', 1.0): 3, ('teamfollowback', 1.0): 4, ('geer', 1.0): 1, ('bat', 1.0): 2, ('mz', 1.0): 1, ('yang', 1.0): 2, ('deennya', 1.0): 1, ('jehwan', 1.0): 1, ('11:00', 1.0): 1, ('ashton', 1.0): 1, ('✧', 1.0): 12, ('。', 1.0): 4, ('chelni', 1.0): 2, ('datz', 1.0): 1, ('jeremi', 1.0): 1, ('fmt', 1.0): 1, ('dat', 1.0): 3, ('heartbeat', 1.0): 1, ('clutch', 1.0): 1, ('🐢', 1.0): 2, ('besteverdoctorwhoepisod', 1.0): 1, ('relev', 1.0): 1, ('puke', 1.0): 1, ('proper', 1.0): 1, ('x8', 1.0): 1, ('sublimin', 1.0): 1, ('eatmeat', 1.0): 1, ('brewproject', 1.0): 1, ('lovenafianna', 1.0): 1, ('mr', 1.0): 7, ('lewi', 1.0): 1, ('clock', 1.0): 1, ('3:02', 1.0): 2, ('muslim', 1.0): 1, ('prophet', 1.0): 1, ('غردلي', 1.0): 4, ('is.h', 1.0): 1, ('mistak', 1.0): 4, ('understood', 1.0): 1, ('politician', 1.0): 1, ('argu', 1.0): 1, ('intellect', 1.0): 1, ('shiva', 1.0): 1, ('mp3', 1.0): 1, ('standrew', 1.0): 1, ('sandcastl', 1.0): 1, ('ewok', 1.0): 1, ('nate', 1.0): 2, ('brawl', 1.0): 1, ('rear', 1.0): 1, ('nake', 1.0): 1, ('choke', 1.0): 1, ('heck', 1.0): 1, ('gun', 1.0): 2, ('associ', 1.0): 1, ('um', 1.0): 1, ('endow', 1.0): 1, ('ai', 1.0): 1, ('sikandar', 1.0): 1, ('pti', 1.0): 1, ('standwdik', 1.0): 1, ('westandwithik', 1.0): 1, ('starbuck', 1.0): 2, ('logo', 1.0): 2, ('renew', 1.0): 1, ('chariti', 1.0): 1, ('جمعة_مباركة', 1.0): 1, ('hoki', 1.0): 1, ('biz', 1.0): 1, ('non', 1.0): 1, ('america', 1.0): 1, ('california', 1.0): 1, ('01:16', 1.0): 1, ('45gameplay', 1.0): 2, ('ilovey', 1.0): 2, ('vex', 1.0): 1, ('iger', 1.0): 1, ('leicaq', 1.0): 1, ('leica', 1.0): 1, ('dudee', 1.0): 1, ('persona', 1.0): 1, ('yepp', 1.0): 1, ('5878e503', 1.0): 1, ('x7', 1.0): 1, ('greg', 1.0): 1, ('posey', 1.0): 1, ('miami', 1.0): 1, ('james_yammouni', 1.0): 1, ('breakdown', 1.0): 1, ('materi', 1.0): 2, ('thorin', 1.0): 1, ('hunt', 1.0): 1, ('choroo', 1.0): 1, ('nahi', 1.0): 2, ('aztec', 1.0): 1, ('princess', 1.0): 2, ('raini', 1.0): 1, ('kingfish', 1.0): 1, ('chinua', 1.0): 1, ('acheb', 1.0): 1, ('intellectu', 1.0): 2, ('liquid', 1.0): 1, ('melbournetrip', 1.0): 1, ('taxikitchen', 1.0): 1, ('nooow', 1.0): 2, ('mcdo', 1.0): 1, ('everywher', 1.0): 2, ('dreamer', 1.0): 1, ('tanisha', 1.0): 1, ('1nonli', 1.0): 1, 
('attitud', 1.0): 1, ('kindl', 1.0): 2, ('flame', 1.0): 1, ('convict', 1.0): 1, ('bar', 1.0): 1, ('repath', 1.0): 2, ('adi', 1.0): 1, ('stefani', 1.0): 1, ('sg1', 1.0): 1, ('lightbox', 1.0): 1, ('ran', 1.0): 2, ('incorrect', 1.0): 1, ('apologist', 1.0): 1, ('x6', 1.0): 1, ('vuli', 1.0): 1, ('01:15', 1.0): 1, ('batman', 1.0): 1, ('pearson', 1.0): 1, ('reput', 1.0): 2, ('nikkei', 1.0): 1, ('woodford', 1.0): 1, ('vscocam', 1.0): 1, ('vscoph', 1.0): 1, ('vscogood', 1.0): 1, ('vscophil', 1.0): 1, ('vscocousin', 1.0): 1, ('yaap', 1.0): 1, ('urwelc', 1.0): 1, ('neon', 1.0): 1, ('pant', 1.0): 1, ('haaa', 1.0): 1, ('will', 1.0): 2, ('auspost', 1.0): 1, ('openfollow', 1.0): 1, ('rp', 1.0): 2, ('eng', 1.0): 1, ('yūjō-cosplay', 1.0): 1, ('luxembourg', 1.0): 1, ('bunni', 1.0): 1, ('broadcast', 1.0): 1, ('needa', 1.0): 1, ('gal', 1.0): 3, ('bend', 1.0): 3, ('heaven', 1.0): 2, ('score', 1.0): 2, ('januari', 1.0): 1, ('hanabutl', 1.0): 1, ('kikhorni', 1.0): 1, ('interraci', 1.0): 1, ('makeup', 1.0): 1, ('chu', 1.0): 1, (\"weekend'\", 1.0): 1, ('punt', 1.0): 1, ('horserac', 1.0): 1, ('hors', 1.0): 2, ('horseracingtip', 1.0): 1, ('guitar', 1.0): 1, ('cocoar', 1.0): 1, ('brief', 1.0): 1, ('introduct', 1.0): 1, ('earliest', 1.0): 1, ('indian', 1.0): 1, ('subcontin', 1.0): 1, ('bfr', 1.0): 1, ('maurya', 1.0): 1, ('jordanian', 1.0): 1, ('00962778381', 1.0): 1, ('838', 1.0): 1, ('tenyai', 1.0): 1, ('hee', 1.0): 2, ('ss', 1.0): 1, ('semi', 1.0): 1, ('atp', 1.0): 2, ('wimbledon', 1.0): 2, ('feder', 1.0): 1, ('nadal', 1.0): 1, ('monfil', 1.0): 1, ('handsom', 1.0): 2, ('cilic', 1.0): 3, ('firm', 1.0): 1, ('potenti', 1.0): 3, ('nyc', 1.0): 1, ('chillin', 1.0): 2, ('tail', 1.0): 2, ('kitten', 1.0): 1, ('garret', 1.0): 1, ('baz', 1.0): 1, ('leo', 1.0): 2, ('xst', 1.0): 1, ('centrifug', 1.0): 1, ('etern', 1.0): 3, ('forgiv', 1.0): 2, ('kangin', 1.0): 1, ('بندر', 1.0): 1, ('العنزي', 1.0): 1, ('kristin', 1.0): 1, ('cass', 1.0): 1, ('surajettan', 1.0): 1, ('kashi', 1.0): 1, ('ashwathi', 1.0): 1, ('mommi', 1.0): 2, ('tirth', 1.0): 1, ('brambhatt', 1.0): 1, ('snooker', 1.0): 1, ('compens', 1.0): 1, ('theoper', 1.0): 1, ('479', 1.0): 1, ('premiostumundo', 1.0): 2, ('philosoph', 1.0): 1, ('x5', 1.0): 1, ('graphic', 1.0): 2, ('level', 1.0): 1, ('aug', 1.0): 3, ('excl', 1.0): 1, ('raw', 1.0): 1, ('weeni', 1.0): 1, ('annoyingbabi', 1.0): 1, ('lazi', 1.0): 2, ('cosi', 1.0): 1, ('client_amends_edit', 1.0): 1, ('_5_final_final_fin', 1.0): 1, ('pdf', 1.0): 1, ('mauliat', 1.0): 1, ('ito', 1.0): 2, ('okkay', 1.0): 1, ('knock', 1.0): 3, (\"soloist'\", 1.0): 1, ('ryu', 1.0): 1, ('saera', 1.0): 1, ('pinkeu', 1.0): 1, ('angri', 1.0): 3, ('screencap', 1.0): 1, ('jonghyun', 1.0): 1, ('seungyeon', 1.0): 1, ('cnblue', 1.0): 1, ('mbc', 1.0): 1, ('wgm', 1.0): 1, ('masa', 1.0): 2, ('entrepreneurship', 1.0): 1, ('empow', 1.0): 1, ('limpopo', 1.0): 1, ('pict', 1.0): 1, ('norapowel', 1.0): 1, ('hornykik', 1.0): 2, ('livesex', 1.0): 1, ('pumpkin', 1.0): 1, ('thrice', 1.0): 1, ('patron', 1.0): 1, ('ventur', 1.0): 1, ('deathcur', 1.0): 1, ('boob', 1.0): 1, ('blame', 1.0): 1, ('dine', 1.0): 1, ('modern', 1.0): 1, ('grill', 1.0): 1, ('disk', 1.0): 1, ('nt4', 1.0): 1, ('iirc', 1.0): 1, ('ux', 1.0): 1, ('refin', 1.0): 1, ('zdp', 1.0): 1, ('didnt', 1.0): 2, ('justic', 1.0): 1, ('daw', 1.0): 1, ('tine', 1.0): 1, ('gensan', 1.0): 1, ('frightl', 1.0): 1, ('undead', 1.0): 1, ('plush', 1.0): 1, ('cushion', 1.0): 1, ('nba', 1.0): 3, ('2k15', 1.0): 3, ('mypark', 1.0): 3, ('chronicl', 1.0): 4, ('gryph', 1.0): 3, ('volum', 1.0): 3, ('ellen', 1.0): 1, ('degener', 
1.0): 1, ('shirt', 1.0): 1, ('mint', 1.0): 1, ('superdri', 1.0): 1, ('berangkaat', 1.0): 1, ('lagiii', 1.0): 1, ('siguro', 1.0): 1, ('un', 1.0): 1, ('kesa', 1.0): 1, ('lotsa', 1.0): 2, ('organis', 1.0): 2, ('4am', 1.0): 1, ('fingers-cross', 1.0): 1, ('deep', 1.0): 1, ('htaccess', 1.0): 1, ('file', 1.0): 2, ('adf', 1.0): 1, ('womad', 1.0): 1, ('gran', 1.0): 1, ('canaria', 1.0): 1, ('gig', 1.0): 1, ('twist', 1.0): 1, ('youv', 1.0): 1, ('teamnatur', 1.0): 1, ('huni', 1.0): 1, ('yayayayay', 1.0): 1, ('yt', 1.0): 2, ('convent', 1.0): 1, ('brighton', 1.0): 1, ('slay', 1.0): 1, ('nicknam', 1.0): 1, ('babygirl', 1.0): 1, ('regard', 1.0): 2, ('himmat', 1.0): 1, ('karain', 1.0): 2, ('baat', 1.0): 1, ('meri', 1.0): 1, ('hotee-mi', 1.0): 1, ('uncl', 1.0): 1, ('tongu', 1.0): 1, ('pronounc', 1.0): 1, ('nativ', 1.0): 1, ('american', 1.0): 2, ('proverb', 1.0): 1, ('lovabl', 1.0): 1, ('yesha', 1.0): 1, ('montoya', 1.0): 1, ('eagerli', 1.0): 1, ('payment', 1.0): 1, ('suprem', 1.0): 1, ('leon', 1.0): 1, ('ks', 1.0): 2, ('randi', 1.0): 1, ('9bi', 1.0): 1, ('physiqu', 1.0): 1, ('shave', 1.0): 1, ('uncut', 1.0): 1, ('boi', 1.0): 1, ('cheapest', 1.0): 1, ('regular', 1.0): 3, ('printer', 1.0): 3, ('nz', 1.0): 1, ('larg', 1.0): 4, ('format', 1.0): 1, ('10/10', 1.0): 1, ('senior', 1.0): 1, ('raid', 1.0): 2, ('conserv', 1.0): 1, ('batteri', 1.0): 1, ('comfort', 1.0): 2, ('swt', 1.0): 1, ('[email protected]', 1.0): 1, ('localgaragederbi', 1.0): 1, ('campu', 1.0): 1, ('subgam', 1.0): 1, ('faceit', 1.0): 1, ('snpcaht', 1.0): 1, ('hakhakhak', 1.0): 1, ('t___t', 1.0): 1, (\"kyungsoo'\", 1.0): 1, ('3d', 1.0): 2, ('properti', 1.0): 2, ('agent', 1.0): 1, ('accur', 1.0): 1, ('descript', 1.0): 1, ('theori', 1.0): 1, ('x4', 1.0): 1, ('15.90', 1.0): 1, ('yvett', 1.0): 1, ('author', 1.0): 2, ('mwf', 1.0): 1, ('programm', 1.0): 1, ('taal', 1.0): 1, ('lake', 1.0): 1, ('2emt', 1.0): 1, ('«', 1.0): 2, ('scurri', 1.0): 1, ('agil', 1.0): 1, ('solut', 1.0): 1, ('sme', 1.0): 1, ('omar', 1.0): 1, ('biggest', 1.0): 5, ('kamaal', 1.0): 1, ('amm', 1.0): 1, ('3am', 1.0): 1, ('hopehousekid', 1.0): 1, ('pitmantrain', 1.0): 1, ('walkersmithway', 1.0): 1, ('keepitloc', 1.0): 2, ('sehun', 1.0): 1, ('se100lead', 1.0): 1, ('unev', 1.0): 1, ('sofa', 1.0): 1, ('surf', 1.0): 1, ('cunt', 1.0): 1, ('rescoop', 1.0): 1, ('multiraci', 1.0): 1, ('fk', 1.0): 1, ('narrow', 1.0): 1, ('warlock', 1.0): 1, ('balloon', 1.0): 3, ('mj', 1.0): 1, ('madison', 1.0): 1, ('beonknockknock', 1.0): 1, ('con-gradu', 1.0): 1, ('gent', 1.0): 1, ('bitchfac', 1.0): 1, ('😒', 1.0): 1, ('organ', 1.0): 1, ('12pm', 1.0): 2, ('york', 1.0): 2, ('nearest', 1.0): 1, ('lendal', 1.0): 1, ('pikami', 1.0): 1, ('captur', 1.0): 1, ('fulton', 1.0): 1, ('sheen', 1.0): 1, ('baloney', 1.0): 1, ('unvarnish', 1.0): 1, ('laid', 1.0): 2, ('thick', 1.0): 1, ('blarney', 1.0): 1, ('flatteri', 1.0): 1, ('thin', 1.0): 1, ('sachin', 1.0): 1, ('unimport', 1.0): 1, ('context', 1.0): 1, ('dampen', 1.0): 1, ('yu', 1.0): 1, ('rocket', 1.0): 1, ('narendra', 1.0): 1, ('modi', 1.0): 1, ('aaaand', 1.0): 1, (\"team'\", 1.0): 1, ('macauley', 1.0): 1, ('howev', 1.0): 3, ('x3', 1.0): 1, ('wheeen', 1.0): 1, ('heechul', 1.0): 1, ('toast', 1.0): 2, ('coffee-weekday', 1.0): 1, ('9-11', 1.0): 1, ('sail', 1.0): 1, (\"friday'\", 1.0): 1, ('commerci', 1.0): 1, ('insur', 1.0): 1, ('requir', 1.0): 2, ('lookfortheo', 1.0): 1, ('cl', 1.0): 1, ('thou', 1.0): 1, ('april', 1.0): 2, ('airforc', 1.0): 1, ('clark', 1.0): 1, ('field', 1.0): 1, ('pampanga', 1.0): 1, ('troll', 1.0): 1, ('⚡', 1.0): 1, ('brow', 1.0): 1, ('oili', 1.0): 1, 
('maricarljanah', 1.0): 1, ('6:15', 1.0): 1, ('degre', 1.0): 3, ('fahrenheit', 1.0): 1, ('🍸', 1.0): 7, ('╲', 1.0): 4, ('─', 1.0): 8, ('╱', 1.0): 5, ('🍤', 1.0): 4, ('╭', 1.0): 4, ('╮', 1.0): 4, ('┓', 1.0): 2, ('┳', 1.0): 1, ('┣', 1.0): 1, ('╰', 1.0): 3, ('╯', 1.0): 3, ('┗', 1.0): 2, ('┻', 1.0): 1, ('stool', 1.0): 1, ('toppl', 1.0): 1, ('findyourfit', 1.0): 1, ('prefer', 1.0): 2, ('whomosexu', 1.0): 1, ('stack', 1.0): 1, ('pandora', 1.0): 3, ('digitalexet', 1.0): 1, ('digitalmarket', 1.0): 1, ('sociamedia', 1.0): 1, ('nb', 1.0): 1, ('bom', 1.0): 1, ('dia', 1.0): 1, ('todo', 1.0): 1, ('forklift', 1.0): 1, ('warehous', 1.0): 1, ('worker', 1.0): 1, ('lsceen', 1.0): 1, ('immatur', 1.0): 1, ('gandhi', 1.0): 1, ('grassi', 1.0): 1, ('feetblog', 1.0): 2, ('daughter', 1.0): 3, ('4yr', 1.0): 1, ('old-porridg', 1.0): 1, ('fiend', 1.0): 1, ('2nite', 1.0): 1, ('comp', 1.0): 1, ('vike', 1.0): 1, ('t20blast', 1.0): 1, ('np', 1.0): 1, ('tax', 1.0): 1, ('ooohh', 1.0): 1, ('petjam', 1.0): 1, ('virtual', 1.0): 2, ('pounc', 1.0): 1, ('bentek', 1.0): 1, ('agn', 1.0): 1, ('[email protected]', 1.0): 1, ('sam', 1.0): 3, ('fruiti', 1.0): 1, ('vodka', 1.0): 2, ('sellyourcarin', 1.0): 2, ('5word', 1.0): 2, ('chaloniklo', 1.0): 2, ('pic.twitter.com/jxz2lbv6o', 1.0): 1, (\"paperwhite'\", 1.0): 1, ('laser-lik', 1.0): 1, ('focu', 1.0): 1, ('ghost', 1.0): 3, ('tagsforlikesapp', 1.0): 2, ('instagood', 1.0): 2, ('tbt', 1.0): 1, ('socket', 1.0): 1, ('spanner', 1.0): 1, ('😴', 1.0): 1, ('pglcsgo', 1.0): 1, ('x2', 1.0): 1, ('tend', 1.0): 1, ('crave', 1.0): 1, ('slower', 1.0): 1, ('sjw', 1.0): 1, ('cakehamp', 1.0): 1, ('glow', 1.0): 2, ('yayyy', 1.0): 1, ('merced', 1.0): 1, ('hood', 1.0): 1, ('badg', 1.0): 1, ('host', 1.0): 1, ('drone', 1.0): 1, ('blow', 1.0): 1, ('ignor', 1.0): 1, ('retali', 1.0): 1, ('bolling', 1.0): 1, (\"where'\", 1.0): 1, ('denmark', 1.0): 1, ('whitey', 1.0): 1, ('cultur', 1.0): 2, ('course', 1.0): 1, ('intro', 1.0): 2, ('graphicdesign', 1.0): 1, ('videograph', 1.0): 1, ('space', 1.0): 2, (\"ted'\", 1.0): 1, ('bogu', 1.0): 1, ('1000', 1.0): 1, ('hahahaaah', 1.0): 1, ('owli', 1.0): 1, ('afternon', 1.0): 1, ('whangarei', 1.0): 1, ('kati', 1.0): 2, ('paulin', 1.0): 1, ('traffick', 1.0): 1, ('wors', 1.0): 3, ('henc', 1.0): 1, ('express', 1.0): 1, ('wot', 1.0): 1, ('hand-lett', 1.0): 1, ('roof', 1.0): 1, ('eas', 1.0): 1, ('2/2', 1.0): 1, ('sour', 1.0): 1, ('dough', 1.0): 1, ('egypt', 1.0): 1, ('hubbi', 1.0): 2, ('sakin', 1.0): 1, ('six', 1.0): 1, ('christma', 1.0): 2, ('avril', 1.0): 1, ('n04j', 1.0): 1, ('25', 1.0): 1, ('prosecco', 1.0): 1, ('pech', 1.0): 1, ('micro', 1.0): 1, ('catspj', 1.0): 1, ('4:15', 1.0): 1, ('lazyweekend', 1.0): 1, ('overdu', 1.0): 1, ('mice', 1.0): 1, ('💃', 1.0): 3, ('jurass', 1.0): 1, ('ding', 1.0): 1, ('nila', 1.0): 1, ('8)', 1.0): 1, ('cooki', 1.0): 1, ('shir', 1.0): 1, ('0', 1.0): 3, ('hale', 1.0): 1, ('cheshir', 1.0): 1, ('decor', 1.0): 1, ('lemm', 1.0): 2, ('rec', 1.0): 1, ('ingat', 1.0): 1, ('din', 1.0): 2, ('mono', 1.0): 1, ('kathryn', 1.0): 1, ('jr', 1.0): 1, ('hsr', 1.0): 1, ('base', 1.0): 3, ('major', 1.0): 1, ('sugarrush', 1.0): 1, ('knit', 1.0): 1, ('partli', 1.0): 1, ('homegirl', 1.0): 1, ('nanci', 1.0): 1, ('fenja', 1.0): 1, ('aapk', 1.0): 1, ('benchmark', 1.0): 1, ('ke', 1.0): 1, ('hisaab', 1.0): 1, ('ho', 1.0): 1, ('gaya', 1.0): 1, ('ofc', 1.0): 1, ('rtss', 1.0): 1, ('hwait', 1.0): 1, ('titanfal', 1.0): 1, ('xbox', 1.0): 2, ('ultim', 1.0): 2, ('gastronomi', 1.0): 1, ('newblogpost', 1.0): 1, ('foodiefriday', 1.0): 1, ('foodi', 1.0): 1, ('yoghurt', 1.0): 1, ('pancak', 
1.0): 2, ('sabah', 1.0): 3, ('kapima', 1.0): 1, ('gelen', 1.0): 1, ('guzel', 1.0): 1, ('bir', 1.0): 1, ('hediy', 1.0): 1, ('thanx', 1.0): 1, ('💞', 1.0): 2, ('visa', 1.0): 1, ('parisa', 1.0): 1, ('epiphani', 1.0): 1, ('lit', 1.0): 1, ('em-con', 1.0): 1, ('swore', 1.0): 1, ('0330 333 7234', 1.0): 1, ('kianweareproud', 1.0): 1, ('distract', 1.0): 1, ('dayofarch', 1.0): 1, ('10-20', 1.0): 1, ('bapu', 1.0): 1, ('ivypowel', 1.0): 1, ('newmus', 1.0): 1, ('sexchat', 1.0): 1, ('🍅', 1.0): 1, ('pathway', 1.0): 1, ('balkan', 1.0): 1, ('gypsi', 1.0): 1, ('mayhem', 1.0): 1, ('burek', 1.0): 1, ('meat', 1.0): 1, ('gibanica', 1.0): 1, ('pie', 1.0): 1, ('surrey', 1.0): 1, ('afterward', 1.0): 1, ('10.30', 1.0): 1, ('tempor', 1.0): 1, ('void', 1.0): 1, ('stem', 1.0): 1, ('sf', 1.0): 1, ('ykr', 1.0): 1, ('sparki', 1.0): 1, ('40mm', 1.0): 1, ('3.5', 1.0): 1, ('gr', 1.0): 1, ('rockfish', 1.0): 1, ('topwat', 1.0): 1, ('twitlong', 1.0): 1, ('me.so', 1.0): 1, ('jummah', 1.0): 3, ('durood', 1.0): 1, ('pak', 1.0): 1, ('cjradacomateada', 1.0): 2, ('supris', 1.0): 1, ('debut', 1.0): 1, ('shipper', 1.0): 1, ('asid', 1.0): 1, ('housem', 1.0): 1, ('737bigatingconcert', 1.0): 1, ('jedzjabłka', 1.0): 1, ('pijjabłka', 1.0): 1, ('polish', 1.0): 1, ('cider', 1.0): 1, ('mustread', 1.0): 1, ('cricket', 1.0): 1, ('5pm', 1.0): 1, ('queri', 1.0): 2, ('abbi', 1.0): 1, ('sumedh', 1.0): 1, ('sunnah', 1.0): 2, ('عن', 1.0): 2, ('quad', 1.0): 1, ('bike', 1.0): 1, ('carri', 1.0): 2, ('proprieti', 1.0): 1, ('chronic', 1.0): 1, ('superday', 1.0): 1, ('chocolatey', 1.0): 1, ('yasu', 1.0): 1, ('ooooh', 1.0): 1, ('hallo', 1.0): 2, ('dylan', 1.0): 2, ('laura', 1.0): 1, ('patric', 1.0): 2, ('keepin', 1.0): 1, ('mohr', 1.0): 1, ('guest', 1.0): 1, (\"o'neal\", 1.0): 1, ('tk', 1.0): 1, ('lua', 1.0): 1, ('stone', 1.0): 2, ('quicker', 1.0): 1, ('diet', 1.0): 1, ('sosweet', 1.0): 1, ('nominier', 1.0): 1, ('und', 1.0): 1, ('hardcor', 1.0): 1, ('😌', 1.0): 1, ('ff__special', 1.0): 1, ('acha', 1.0): 2, ('banda', 1.0): 1, ('✌', 1.0): 2, ('bhi', 1.0): 2, ('krta', 1.0): 1, ('beautifully-craft', 1.0): 1, ('mockingbird', 1.0): 1, ('diploma', 1.0): 1, ('blend', 1.0): 3, ('numbero', 1.0): 1, ('lolz', 1.0): 1, ('ambros', 1.0): 1, ('gwinett', 1.0): 1, ('bierc', 1.0): 1, ('ravag', 1.0): 1, ('illadvis', 1.0): 1, ('marriag', 1.0): 1, ('stare', 1.0): 1, ('cynic', 1.0): 2, ('yahuda', 1.0): 1, ('nosmet', 1.0): 1, ('poni', 1.0): 1, ('cuuut', 1.0): 1, (\"f'ing\", 1.0): 1, ('vacant', 1.0): 1, ('hauc', 1.0): 1, ('lovesss', 1.0): 1, ('hiss', 1.0): 1, ('overnight', 1.0): 1, ('cornish', 1.0): 1, ('all-clear', 1.0): 1, ('raincoat', 1.0): 1, ('measur', 1.0): 1, ('wealth', 1.0): 1, ('invest', 1.0): 2, ('garbi', 1.0): 1, ('wash', 1.0): 2, ('refuel', 1.0): 1, ('dunedin', 1.0): 1, ('kall', 1.0): 1, ('rakhi', 1.0): 1, ('12th', 1.0): 2, ('repres', 1.0): 3, ('slovenia', 1.0): 1, ('fridg', 1.0): 2, ('ludlow', 1.0): 1, ('28th', 1.0): 1, ('selway', 1.0): 1, ('submit', 1.0): 1, ('spanish', 1.0): 2, ('90210', 1.0): 1, ('oitnb', 1.0): 1, ('prepar', 1.0): 3, ('condit', 1.0): 1, ('msged', 1.0): 1, ('chiquito', 1.0): 1, ('ohaha', 1.0): 1, ('delhi', 1.0): 1, ('95', 1.0): 1, ('webtogsaward', 1.0): 1, ('grace', 1.0): 2, ('sheffield', 1.0): 1, ('tramlin', 1.0): 1, ('tl', 1.0): 2, ('hack', 1.0): 1, ('lad', 1.0): 1, ('beeepin', 1.0): 1, ('duper', 1.0): 1, ('handl', 1.0): 1, ('critiqu', 1.0): 1, ('contectu', 1.0): 1, ('ultor', 1.0): 2, ('mamaya', 1.0): 1, ('loiyal', 1.0): 1, ('para', 1.0): 1, ('truthfulwordsof', 1.0): 1, ('beanatividad', 1.0): 1, ('nknkkpagpapakumbaba', 1.0): 1, ('birthdaypres', 1.0): 
1, ('compliment', 1.0): 1, ('swerv', 1.0): 1, ('goodtim', 1.0): 1, ('sinist', 1.0): 1, ('scare', 1.0): 1, ('tryna', 1.0): 1, ('anonym', 1.0): 1, ('dipsatch', 1.0): 1, ('aunt', 1.0): 1, ('dagga', 1.0): 1, ('burket', 1.0): 1, ('2am', 1.0): 1, ('twine', 1.0): 1, (\"diane'\", 1.0): 1, ('happybirthday', 1.0): 1, ('thanksss', 1.0): 1, ('randomli', 1.0): 1, ('buckinghampalac', 1.0): 1, ('chibi', 1.0): 1, ('maker', 1.0): 1, ('timog', 1.0): 1, ('18th', 1.0): 1, ('otw', 1.0): 1, ('kami', 1.0): 1, ('feelinggood', 1.0): 1, ('demand', 1.0): 2, ('naman', 1.0): 1, ('barkin', 1.0): 1, ('yeap', 1.0): 2, ('onkey', 1.0): 1, ('umma', 1.0): 1, ('pervert', 1.0): 1, ('onyu', 1.0): 1, ('appa', 1.0): 1, ('luci', 1.0): 1, ('horribl', 1.0): 1, ('quantum', 1.0): 1, ('greater', 1.0): 1, ('blockchain', 1.0): 1, ('nowplay', 1.0): 1, ('loftey', 1.0): 1, ('routt', 1.0): 1, ('assia', 1.0): 1, ('.\\n.\\n.', 1.0): 1, ('joint', 1.0): 1, ('futurereleas', 1.0): 1, (\"look'\", 1.0): 1, ('scari', 1.0): 1, ('murder', 1.0): 1, ('mysteri', 1.0): 1, ('comma', 1.0): 1, (\"j'\", 1.0): 1, ('hunni', 1.0): 2, ('diva', 1.0): 1, ('emili', 1.0): 3, ('nathan', 1.0): 1, ('medit', 1.0): 1, ('alumni', 1.0): 1, ('mba', 1.0): 1, ('foto', 1.0): 1, ('what-is-your-fashion', 1.0): 1, ('lorenangel', 1.0): 1, ('kw', 1.0): 2, ('tellanoldjokeday', 1.0): 1, ('reqd', 1.0): 1, ('specul', 1.0): 1, ('consist', 1.0): 4, ('tropic', 1.0): 1, ('startupph', 1.0): 1, ('zodiac', 1.0): 1, ('rapunzel', 1.0): 1, ('therver', 1.0): 1, ('85552', 1.0): 1, ('bestoftheday', 1.0): 1, ('oralsex', 1.0): 1, ('carli', 1.0): 1, ('happili', 1.0): 1, ('contract', 1.0): 1, ('matsu_bouzu', 1.0): 1, ('sonic', 1.0): 2, ('videogam', 1.0): 1, ('harana', 1.0): 1, ('belfast', 1.0): 1, ('danni', 1.0): 1, ('rare', 1.0): 1, ('sponsorship', 1.0): 1, ('aswel', 1.0): 1, ('gigi', 1.0): 1, ('nick', 1.0): 1, ('austin', 1.0): 1, ('youll', 1.0): 1, ('weak', 1.0): 4, ('10,000', 1.0): 1, ('bravo', 1.0): 1, ('iamamonst', 1.0): 1, ('rxthedailysurveyvot', 1.0): 1, ('broke', 1.0): 1, ('ass', 1.0): 1, ('roux', 1.0): 1, ('walkin', 1.0): 1, ('audienc', 1.0): 2, ('pfb', 1.0): 1, ('jute', 1.0): 1, ('walangmakakapigilsakin', 1.0): 1, ('lori', 1.0): 1, ('ehm', 1.0): 1, ('trick', 1.0): 1, ('baekhyun', 1.0): 1, ('eyesmil', 1.0): 1, ('borrow', 1.0): 1, ('knive', 1.0): 1, ('thek', 1.0): 1, ('eventu', 1.0): 1, ('reaapear', 1.0): 1, ('kno', 1.0): 1, ('whet', 1.0): 1, ('gratti', 1.0): 1, ('shorter', 1.0): 1, ('tweetin', 1.0): 1, ('inshallah', 1.0): 1, ('banana', 1.0): 1, ('raspberri', 1.0): 2, ('healthylifestyl', 1.0): 1, ('aint', 1.0): 2, ('skate', 1.0): 1, ('analyz', 1.0): 1, ('varieti', 1.0): 1, ('4:13', 1.0): 1, ('insomnia', 1.0): 1, ('medic', 1.0): 1, ('opposit', 1.0): 1, ('everlast', 1.0): 1, ('yoga', 1.0): 1, ('massag', 1.0): 2, ('osteopath', 1.0): 1, ('trainer', 1.0): 1, ('sharm', 1.0): 1, ('al_master_band', 1.0): 1, ('tbc', 1.0): 1, ('unives', 1.0): 1, ('architectur', 1.0): 1, ('random', 1.0): 1, ('isnt', 1.0): 1, ('typo', 1.0): 1, ('snark', 1.0): 1, ('lession', 1.0): 1, ('drunk', 1.0): 1, ('bruuh', 1.0): 1, ('2week', 1.0): 1, ('50europ', 1.0): 1, ('🇫', 1.0): 4, ('🇷', 1.0): 4, ('iov', 1.0): 1, ('accord', 1.0): 1, ('mne', 1.0): 1, ('pchelok', 1.0): 1, ('ja', 1.0): 1, ('=:', 1.0): 2, ('sweetest', 1.0): 1, ('comet', 1.0): 1, ('ahah', 1.0): 1, ('candi', 1.0): 2, ('axio', 1.0): 1, ('rabbit', 1.0): 2, ('nutshel', 1.0): 1, ('taken', 1.0): 1, ('letshavecocktailsafternuclai', 1.0): 1, ('malik', 1.0): 1, ('umair', 1.0): 1, ('canon', 1.0): 1, ('gang', 1.0): 1, ('grind', 1.0): 1, ('thoracicbridg', 1.0): 1, ('5minut', 1.0): 
1, ('nonscript', 1.0): 1, ('password', 1.0): 1, ('shoshannavassil', 1.0): 1, ('addmeonsnapchat', 1.0): 1, ('dmme', 1.0): 1, ('mpoint', 1.0): 2, ('soph', 1.0): 1, ('anot', 1.0): 1, ('liao', 1.0): 2, ('ord', 1.0): 1, ('lor', 1.0): 1, ('sibei', 1.0): 1, ('xialan', 1.0): 1, ('thnx', 1.0): 1, ('malfunct', 1.0): 1, ('clown', 1.0): 1, ('joker', 1.0): 1, ('\\U000fec00', 1.0): 1, ('nigth', 1.0): 1, ('estoy', 1.0): 1, ('escuchando', 1.0): 1, ('elsewher', 1.0): 1, ('bipolar', 1.0): 1, ('hahahahahahahahahahahahahaha', 1.0): 1, ('yoohoo', 1.0): 1, ('bajrangibhaijaanstorm', 1.0): 1, ('superhappi', 1.0): 1, ('doll', 1.0): 1, ('energi', 1.0): 1, ('f', 1.0): 3, (\"m'dear\", 1.0): 1, ('emma', 1.0): 2, ('alrd', 1.0): 1, ('dhan', 1.0): 2, ('satguru', 1.0): 1, ('tera', 1.0): 1, ('aasra', 1.0): 1, ('pita', 1.0): 1, ('keeo', 1.0): 1, ('darl', 1.0): 2, ('akarshan', 1.0): 1, ('sweetpea', 1.0): 1, ('gluten', 1.0): 1, ('pastri', 1.0): 2, ('highfiv', 1.0): 1, ('artsi', 1.0): 1, ('verbal', 1.0): 1, ('kaaa', 1.0): 1, ('oxford', 1.0): 2, ('wahoo', 1.0): 1, ('anchor', 1.0): 1, ('partnership', 1.0): 1, ('robbenisland', 1.0): 1, ('whale', 1.0): 1, ('aquat', 1.0): 1, ('safari', 1.0): 1, ('garru', 1.0): 1, ('liara', 1.0): 1, ('appoint', 1.0): 1, ('burnley', 1.0): 1, ('453', 1.0): 1, ('110', 1.0): 2, ('49', 1.0): 1, ('footbal', 1.0): 1, ('fm15', 1.0): 1, ('fmfamili', 1.0): 1, ('aamir', 1.0): 1, ('difficult', 1.0): 1, ('medium', 1.0): 1, ('nva', 1.0): 1, ('minuet', 1.0): 1, ('gamec', 1.0): 1, ('headrest', 1.0): 1, ('pit', 1.0): 1, ('spoken', 1.0): 1, ('advis', 1.0): 1, ('paypoint', 1.0): 1, ('deepthroat', 1.0): 1, ('truli', 1.0): 3, ('bee', 1.0): 2, ('upward', 1.0): 1, ('bound', 1.0): 1, ('movingonup', 1.0): 1, ('aitor', 1.0): 1, ('sn', 1.0): 1, ('ps4', 1.0): 2, ('jawad', 1.0): 1, ('presal', 1.0): 1, ('betcha', 1.0): 1, ('dumb', 1.0): 2, ('butt', 1.0): 1, ('qualki', 1.0): 1, ('808', 1.0): 1, ('milf', 1.0): 1, ('4like', 1.0): 1, ('sexysaturday', 1.0): 1, ('vw', 1.0): 1, ('umpfff', 1.0): 1, ('ca', 1.0): 1, ('domg', 1.0): 1, ('nanti', 1.0): 1, ('difollow', 1.0): 1, ('stubborn', 1.0): 1, ('nothavingit', 1.0): 1, ('klee', 1.0): 1, ('hem', 1.0): 1, ('congrad', 1.0): 1, ('accomplish', 1.0): 1, ('kfcroleplay', 1.0): 3, ('tregaron', 1.0): 1, ('boar', 1.0): 1, ('sweati', 1.0): 1, ('glyon', 1.0): 1, ('🚮', 1.0): 1, (\"tee'\", 1.0): 1, ('johnni', 1.0): 1, ('utub', 1.0): 1, (\"video'\", 1.0): 1, ('loss', 1.0): 1, ('combin', 1.0): 2, ('pigeon', 1.0): 1, ('fingerscross', 1.0): 1, ('photobomb', 1.0): 1, ('90', 1.0): 1, ('23', 1.0): 1, ('gimm', 1.0): 1, ('definetli', 1.0): 1, ('exit', 1.0): 1, ('bom-dia', 1.0): 1, ('apod', 1.0): 1, ('ultraviolet', 1.0): 1, ('m31', 1.0): 1, ('jul', 1.0): 1, ('oooh', 1.0): 1, ('yawn', 1.0): 1, ('ftw', 1.0): 1, ('maman', 1.0): 1, ('afterznoon', 1.0): 1, ('tweeep', 1.0): 1, ('abp', 1.0): 2, ('kiya', 1.0): 1, ('van', 1.0): 1, ('olymp', 1.0): 1, ('😷', 1.0): 1, ('classi', 1.0): 1, ('attach', 1.0): 1, ('equip', 1.0): 1, ('bobbl', 1.0): 1, ('anu', 1.0): 1, ('mh3', 1.0): 1, ('patch', 1.0): 1, ('psp', 1.0): 1, ('huffpost', 1.0): 1, ('tribut', 1.0): 1, ('h_eartshapedbox', 1.0): 1, ('magictrikband', 1.0): 1, ('magictrik', 1.0): 2, ('roommat', 1.0): 1, ('tami', 1.0): 1, ('b3dk', 1.0): 1, ('7an', 1.0): 1, ('ank', 1.0): 1, ('purpos', 1.0): 1, ('struggl', 1.0): 1, ('eagl', 1.0): 1, ('oceana', 1.0): 1, ('idk', 1.0): 3, ('med', 1.0): 1, ('fridayfauxpa', 1.0): 1, ('subtl', 1.0): 1, ('hint', 1.0): 1, ('prim', 1.0): 1, ('algorithm', 1.0): 1, ('iii', 1.0): 1, ('rosa', 1.0): 1, ('yvw', 1.0): 1, ('here', 1.0): 1, ('boost', 1.0): 1, 
('unforgett', 1.0): 1, ('humor', 1.0): 1, (\"mum'\", 1.0): 1, ('hahahhaah', 1.0): 1, ('sombrero', 1.0): 1, ('lost', 1.0): 2, ('spammer', 1.0): 1, ('proceed', 1.0): 1, ('entertain', 1.0): 1, ('100k', 1.0): 1, ('mileston', 1.0): 1, ('judith', 1.0): 1, ('district', 1.0): 1, ('council', 1.0): 1, ('midar', 1.0): 1, ('gender', 1.0): 1, ('ilysm', 1.0): 1, ('zen', 1.0): 1, ('neat', 1.0): 1, ('rider', 1.0): 1, ('fyi', 1.0): 1, ('dig', 1.0): 2, ('👱', 1.0): 1, ('👽', 1.0): 1, ('🌳', 1.0): 1, ('suspici', 1.0): 1, ('calori', 1.0): 1, ('harder', 1.0): 1, ('jessica', 1.0): 1, ('carina', 1.0): 1, ('francisco', 1.0): 1, ('teret', 1.0): 1, ('potassium', 1.0): 1, ('rehydr', 1.0): 1, ('drinkitallup', 1.0): 1, ('thirstquench', 1.0): 1, ('tapir', 1.0): 1, ('calf', 1.0): 1, ('mealtim', 1.0): 1, ('uhc', 1.0): 1, ('scale', 1.0): 1, ('network', 1.0): 1, ('areal', 1.0): 1, ('extremesport', 1.0): 1, ('quadbik', 1.0): 1, ('bloggersrequir', 1.0): 1, ('bloggersw', 1.0): 1, ('brainer', 1.0): 1, ('mse', 1.0): 1, ('fund', 1.0): 1, ('nooowww', 1.0): 1, ('lile', 1.0): 1, ('tid', 1.0): 1, ('tmi', 1.0): 1, ('deploy', 1.0): 1, ('jule', 1.0): 1, ('betti', 1.0): 1, ('hddc', 1.0): 1, ('salman', 1.0): 1, ('pthht', 1.0): 1, ('lfc', 1.0): 3, ('tope', 1.0): 1, ('xxoo', 1.0): 2, ('russia', 1.0): 2, ('silver-wash', 1.0): 1, ('fritillari', 1.0): 1, ('moon', 1.0): 1, ('ap', 1.0): 2, ('trash', 1.0): 2, ('clever', 1.0): 1, (\"thank'\", 1.0): 1, ('keven', 1.0): 1, ('pastim', 1.0): 1, ('ashramcal', 1.0): 1, ('ontrack', 1.0): 1, ('german', 1.0): 1, ('subtitl', 1.0): 1, ('pinter', 1.0): 1, ('morninggg', 1.0): 1, ('🐶', 1.0): 1, ('pete', 1.0): 1, ('awesome-o', 1.0): 1, ('multipl', 1.0): 1, ('cya', 1.0): 1, ('harrog', 1.0): 1, ('jet', 1.0): 1, ('supplier', 1.0): 1, ('req', 1.0): 1, ('fridayloug', 1.0): 1, ('4thstreetmus', 1.0): 1, ('hawaii', 1.0): 1, ('kick', 1.0): 1, ('deepli', 1.0): 1, ('[email protected]', 1.0): 1, ('thousand', 1.0): 2, ('newspap', 1.0): 1, ('lew', 1.0): 1, ('nah', 1.0): 1, ('fallout', 1.0): 2, ('technic', 1.0): 1, ('gunderson', 1.0): 1, ('europa', 1.0): 1, ('thoroughli', 1.0): 1, ('script', 1.0): 1, ('overtak', 1.0): 1, ('motorway', 1.0): 1, ('thu', 1.0): 1, ('niteflirt', 1.0): 1, ('hbu', 1.0): 2, ('bowl', 1.0): 1, ('chri', 1.0): 2, ('niall', 1.0): 2, ('94', 1.0): 1, ('ik', 1.0): 1, ('stydia', 1.0): 1, ('nawazuddin', 1.0): 1, ('siddiqu', 1.0): 1, ('nomnomnom', 1.0): 1, ('dukefreebiefriday', 1.0): 1, ('z', 1.0): 1, ('insyaallah', 1.0): 1, ('ham', 1.0): 1, ('villa', 1.0): 1, ('brum', 1.0): 1, ('deni', 1.0): 1, ('vagina', 1.0): 1, ('rli', 1.0): 1, ('izzi', 1.0): 1, ('mitch', 1.0): 1, ('minn', 1.0): 1, ('recently.websit', 1.0): 1, ('coolingtow', 1.0): 1, ('soon.thank', 1.0): 1, ('showinginterest', 1.0): 1, ('multicolor', 1.0): 1, ('wid', 1.0): 1, ('wedg', 1.0): 1, ('motiv', 1.0): 1, ('nnnnot', 1.0): 1, (\"gf'\", 1.0): 1, ('bluesidemenxix', 1.0): 1, ('ardent', 1.0): 1, ('mooorn', 1.0): 1, ('wuppert', 1.0): 1, ('fridayfunday', 1.0): 1, ('re-sign', 1.0): 1, ('chalkhil', 1.0): 1, ('midday', 1.0): 1, ('carter', 1.0): 1, ('remedi', 1.0): 1, ('atrack', 1.0): 1, ('christ', 1.0): 1, ('badminton', 1.0): 1, (\"littl'un\", 1.0): 1, ('ikprideofpak', 1.0): 1, ('janjua', 1.0): 1, ('pimpl', 1.0): 1, ('forehead', 1.0): 1, ('volcano', 1.0): 1, ('mag', 1.0): 1, ('miryenda', 1.0): 1, (\"technology'\", 1.0): 1, ('touchétoday', 1.0): 1, ('idownload', 1.0): 1, ('25ish', 1.0): 1, ('snowbal', 1.0): 1, ('nd', 1.0): 1, ('expir', 1.0): 1, ('6gb', 1.0): 1, ('loveu', 1.0): 1, ('morefuninthephilippin', 1.0): 1, ('laho', 1.0): 1, ('caramoan', 1.0): 1, ('kareem', 
1.0): 1, ('surah', 1.0): 1, ('kahaf', 1.0): 1, ('melani', 1.0): 1, ('bosch', 1.0): 1, ('machin', 1.0): 1, (\"week'\", 1.0): 1, ('refollow', 1.0): 1, ('😎', 1.0): 1, ('💁', 1.0): 1, ('relaps', 1.0): 1, ('prada', 1.0): 2, ('punjabiswillgetit', 1.0): 1, ('hitter', 1.0): 1, ('mass', 1.0): 2, ('shoud', 1.0): 1, ('1:12', 1.0): 1, ('ughtm', 1.0): 1, ('545', 1.0): 1, ('kissm', 1.0): 1, ('likeforfollow', 1.0): 1, ('overwhelm', 1.0): 1, ('groupmat', 1.0): 1, ('75', 1.0): 2, ('kyunk', 1.0): 1, ('aitchison', 1.0): 1, ('curvi', 1.0): 1, ('mont', 1.0): 1, ('doa', 1.0): 1, ('header', 1.0): 1, ('speaker', 1.0): 3, ('avoid', 1.0): 1, ('laboratori', 1.0): 1, ('idc', 1.0): 1, ('fuckin', 1.0): 2, ('wooo', 1.0): 2, ('neobyt', 1.0): 1, ('pirat', 1.0): 1, ('takedown', 1.0): 1, ('indirag', 1.0): 1, ('judiciari', 1.0): 1, ('commit', 1.0): 4, ('govt', 1.0): 1, ('polici', 1.0): 1, ('rbi', 1.0): 1, ('similar', 1.0): 1, (\"thought'\", 1.0): 1, ('progress', 1.0): 1, ('transfer', 1.0): 1, ('gg', 1.0): 1, ('defenit', 1.0): 1, ('nofx', 1.0): 1, ('friskyfiday', 1.0): 1, ('yipee', 1.0): 1, ('shed', 1.0): 1, ('incent', 1.0): 1, ('vege', 1.0): 1, ('marin', 1.0): 1, ('gz', 1.0): 1, ('rajeev', 1.0): 1, ('hvng', 1.0): 1, ('funfil', 1.0): 1, ('friday.it', 1.0): 1, ('ws', 1.0): 1, ('reali', 1.0): 1, ('diff', 1.0): 1, ('kabir.fel', 1.0): 1, ('dresden', 1.0): 1, ('germani', 1.0): 1, ('plot', 1.0): 1, ('tdf', 1.0): 1, ('🍷', 1.0): 2, ('☀', 1.0): 2, ('🚲', 1.0): 2, ('minion', 1.0): 2, ('slot', 1.0): 1, (\"b'day\", 1.0): 1, ('isabella', 1.0): 1, ('okeyyy', 1.0): 1, ('vddd', 1.0): 1, (');', 1.0): 1, ('selfee', 1.0): 1, ('insta', 1.0): 1, ('🙆', 1.0): 1, ('🙌', 1.0): 1, ('😛', 1.0): 1, ('🐒', 1.0): 1, ('😝', 1.0): 1, ('hhahhaaa', 1.0): 1, ('jeez', 1.0): 1, ('teamcannib', 1.0): 1, ('teamspacewhalingisthebest', 1.0): 1, ('fitfa', 1.0): 1, ('identifi', 1.0): 1, ('pharmaci', 1.0): 1, ('verylaterealis', 1.0): 1, ('iwishiknewbett', 1.0): 1, ('satisfi', 1.0): 1, ('ess-aych-eye-te', 1.0): 1, ('supposedli', 1.0): 1, ('👍', 1.0): 1, ('immedi', 1.0): 1, (\"foxy'\", 1.0): 1, ('instrument', 1.0): 1, ('alon', 1.0): 2, ('goldcoast', 1.0): 1, ('lelomustfal', 1.0): 1, ('meal', 1.0): 1, ('5g', 1.0): 1, ('liker', 1.0): 1, ('newdress', 1.0): 1, ('resist', 1.0): 1, ('fot', 1.0): 1, ('troy', 1.0): 1, ('twitterfollowerswhatsup', 1.0): 1, ('happyfriedday', 1.0): 1, ('keepsafealway', 1.0): 1, ('loveyeah', 1.0): 1, ('emojasp_her', 1.0): 1, ('vanilla', 1.0): 1, ('sidemen', 1.0): 1, ('yaaayyy', 1.0): 1, ('friendaaa', 1.0): 1, ('bulb', 1.0): 5, ('corn', 1.0): 6, ('1tbps4', 1.0): 1, ('divin', 1.0): 1, ('wheeli', 1.0): 1, ('bin', 1.0): 1, ('ubericecream', 1.0): 1, ('messengerforaday', 1.0): 1, ('kyli', 1.0): 1, ('toilet', 1.0): 1, ('ikaw', 1.0): 1, ('musta', 1.0): 1, ('cheatmat', 1.0): 1, ('kyuhyun', 1.0): 1, ('ghanton', 1.0): 1, ('easy.get', 1.0): 1, ('5:30', 1.0): 1, ('therein', 1.0): 1, ('majalah', 1.0): 1, ('dominiqu', 1.0): 1, ('lamp', 1.0): 1, ('a-foot', 1.0): 1, ('revamp', 1.0): 1, ('brainchild', 1.0): 1, ('confid', 1.0): 1, ('confin', 1.0): 1, ('colorado', 1.0): 1, ('goodyear', 1.0): 1, ('upto', 1.0): 1, ('cashback', 1.0): 1, ('yourewelcom', 1.0): 1, ('nightli', 1.0): 1, ('simpin', 1.0): 1, ('sketchbook', 1.0): 1, ('4wild', 1.0): 1, ('colorpencil', 1.0): 1, ('cray', 1.0): 1, ('6:30', 1.0): 1, ('imma', 1.0): 3, ('ob', 1.0): 1, ('11h', 1.0): 1, ('kino', 1.0): 1, ('adult', 1.0): 1, ('kardamena', 1.0): 1, ('samo', 1.0): 1, ('greec', 1.0): 1, ('caesar', 1.0): 1, ('salad', 1.0): 1, ('tad', 1.0): 1, ('bland', 1.0): 1, ('respond', 1.0): 1, ('okk', 1.0): 1, ('den', 1.0): 1, 
('allov', 1.0): 1, ('hangout', 1.0): 1, ('whoever', 1.0): 1, ('tourist', 1.0): 1, ('♌', 1.0): 1, ('kutiyapanti', 1.0): 1, ('profession', 1.0): 1, ('boomshot', 1.0): 1, ('fuh', 1.0): 1, ('yeeey', 1.0): 1, ('donot', 1.0): 1, ('expos', 1.0): 1, ('lipstick', 1.0): 1, ('cran', 1.0): 1, ('prayr', 1.0): 1, ('හ', 1.0): 1, ('ෙ', 1.0): 1, ('ල', 1.0): 2, ('හව', 1.0): 1, ('ු', 1.0): 1, ('onemochaonelov', 1.0): 1, ('southpaw', 1.0): 1, ('geniu', 1.0): 1, ('stroma', 1.0): 1, ('🔴', 1.0): 1, ('younow', 1.0): 1, ('jonah', 1.0): 1, ('jareddd', 1.0): 1, ('postcod', 1.0): 1, ('talkmobil', 1.0): 1, ('huha', 1.0): 1, ('transform', 1.0): 1, ('sword', 1.0): 3, ('misread', 1.0): 1, ('richard', 1.0): 1, ('ibiza', 1.0): 1, ('birthdaymoneyforjesusjuic', 1.0): 1, ('ytb', 1.0): 1, ('tutori', 1.0): 1, ('construct', 1.0): 2, ('critic', 1.0): 1, ('ganesha', 1.0): 1, ('textur', 1.0): 1, ('photographi', 1.0): 1, ('hinduism', 1.0): 1, ('hindugod', 1.0): 1, ('elephantgod', 1.0): 1, ('selfish', 1.0): 1, ('bboy', 1.0): 1, ('cardgam', 1.0): 1, ('pixelart', 1.0): 1, ('gamedesign', 1.0): 1, ('indiedev', 1.0): 1, ('pixel_daili', 1.0): 1, ('plateau', 1.0): 1, ('laguna', 1.0): 1, ('tha', 1.0): 4, ('bahot', 1.0): 1, ('baje', 1.0): 1, ('raat', 1.0): 1, ('liya', 1.0): 1, ('hath', 1.0): 1, ('ghant', 1.0): 1, ('itna', 1.0): 2, ('bana', 1.0): 1, ('paya', 1.0): 1, ('uta', 1.0): 1, ('manga', 1.0): 1, ('jamuna', 1.0): 1, ('\\\\:', 1.0): 1, ('swiftma', 1.0): 1, ('trion', 1.0): 1, ('forum', 1.0): 1, ('b-day', 1.0): 1, ('disgust', 1.0): 1, ('commodor', 1.0): 1, ('annabel', 1.0): 1, ('bridg', 1.0): 1, ('quest', 1.0): 1, ('borderland', 1.0): 1, ('wanderrook', 1.0): 1, ('gm', 1.0): 1, ('preciou', 1.0): 2, ('mizz', 1.0): 1, ('bleedgreen', 1.0): 1, ('sophia', 1.0): 1, ('chicago', 1.0): 1, ('honeymoon', 1.0): 1, (\"da'esh\", 1.0): 1, ('co-ord', 1.0): 1, ('fsa', 1.0): 1, ('estat', 1.0): 1, (\"when'\", 1.0): 1, ('dusti', 1.0): 1, ('tunisia', 1.0): 2, (\"class'\", 1.0): 1, ('irrit', 1.0): 1, ('fiverr', 1.0): 1, ('gina', 1.0): 1, ('soproud', 1.0): 1, ('enought', 1.0): 1, ('hole', 1.0): 1, ('melbourneburg', 1.0): 1, ('arianna', 1.0): 1, ('esai', 1.0): 1, ('rotterdam', 1.0): 1, ('jordi', 1.0): 1, ('clasi', 1.0): 1, ('horni', 1.0): 1, ('salon', 1.0): 1, ('bleach', 1.0): 1, ('olaplex', 1.0): 1, ('damag', 1.0): 1, ('teamwork', 1.0): 1, ('zitecofficestori', 1.0): 1, ('다쇼', 1.0): 1, ('colleagu', 1.0): 1, ('eb', 1.0): 1, (\"t'would\", 1.0): 1, ('tweetup', 1.0): 1, ('detect', 1.0): 1, ('jonathancreek', 1.0): 1, ('dvr', 1.0): 1, ('kat', 1.0): 1, ('rarer', 1.0): 1, ('okkk', 1.0): 1, ('frend', 1.0): 1, ('milt', 1.0): 1, ('mario', 1.0): 1, ('rewatch', 1.0): 1, ('1600', 1.0): 1, ('sige', 1.0): 1, ('punta', 1.0): 1, ('kayo', 1.0): 1, ('nooo', 1.0): 1, ('prompt', 1.0): 1, ('t-mobil', 1.0): 1, ('orang', 1.0): 1, ('ee', 1.0): 1, ('teapot', 1.0): 1, ('hotter', 1.0): 1, ('»', 1.0): 1, ('londoutrad', 1.0): 1, ('kal', 1.0): 1, ('wayward', 1.0): 1, ('pine', 1.0): 1, ('muscl', 1.0): 1, ('ilikeit', 1.0): 1, ('belong', 1.0): 1, ('watford', 1.0): 1, ('enterpris', 1.0): 1, ('cube', 1.0): 1, ('particp', 1.0): 1, ('saudi', 1.0): 1, ('arabia', 1.0): 1, ('recogn', 1.0): 1, ('fanbas', 1.0): 3, ('bailona', 1.0): 3, ('responsibilti', 1.0): 1, ('sunlight', 1.0): 1, ('tiger', 1.0): 1, ('elev', 1.0): 1, ('horror', 1.0): 1, ('bitchesss', 1.0): 1, ('shitti', 1.0): 1, ('squash', 1.0): 1, ('becca', 1.0): 1, ('delta', 1.0): 1, ('nut', 1.0): 1, ('yun', 1.0): 1, ('joe', 1.0): 1, ('dirt', 1.0): 1, ('sharon', 1.0): 1, ('medicin', 1.0): 1, ('ttyl', 1.0): 1, ('gav', 1.0): 1, ('linda', 1.0): 1, ('3hr', 
1.0): 1, ('tym', 1.0): 2, ('dieback', 1.0): 1, ('endit', 1.0): 1, ('minecon', 1.0): 1, ('sere', 1.0): 1, ('joerin', 1.0): 1, ('joshan', 1.0): 1, ('tandem', 1.0): 1, ('ligao', 1.0): 1, ('albay', 1.0): 1, ('bcyc', 1.0): 1, ('lnh', 1.0): 1, ('sat', 1.0): 1, ('honorari', 1.0): 1, ('alac', 1.0): 1, ('skelo_ghost', 1.0): 1, ('madadagdagan', 1.0): 1, ('bmc', 1.0): 1, ('11:11', 1.0): 2, ('embarrass', 1.0): 1, ('entropi', 1.0): 1, ('evolut', 1.0): 2, ('loop', 1.0): 1, ('eva', 1.0): 1, ('camden', 1.0): 1, ('uhh', 1.0): 1, ('scoup', 1.0): 1, ('jren', 1.0): 1, ('nuest', 1.0): 1, ('lovelayyy', 1.0): 1, ('kidney', 1.0): 1, ('neuer', 1.0): 1, ('spray', 1.0): 1, ('[email protected]', 1.0): 1, ('uni', 1.0): 1, ('uff', 1.0): 1, ('karhi', 1.0): 1, ('thi', 1.0): 1, ('juaquin', 1.0): 1, ('v3nzor99', 1.0): 1, ('shell', 1.0): 1, ('heyi', 1.0): 1, ('flavor', 1.0): 1, ('thakyou', 1.0): 1, ('beatriz', 1.0): 1, ('cancel', 1.0): 1, ('puff', 1.0): 1, ('egg', 1.0): 2, ('tart', 1.0): 1, ('chai', 1.0): 1, ('mtr', 1.0): 1, ('alyssa', 1.0): 1, ('rub', 1.0): 1, ('tummi', 1.0): 1, ('zelda', 1.0): 1, ('ive', 1.0): 1, ('🎂', 1.0): 1, ('jiva', 1.0): 1, ('🍹', 1.0): 1, ('🍻', 1.0): 1, ('mubbarak', 1.0): 1, ('deborah', 1.0): 1, ('coupon', 1.0): 1, ('colourdeb', 1.0): 1, ('purpl', 1.0): 1, (\"chippy'\", 1.0): 1, ('vessel', 1.0): 1, ('ps', 1.0): 2, ('vintag', 1.0): 1, ('✫', 1.0): 4, ('˚', 1.0): 4, ('·', 1.0): 4, ('✵', 1.0): 4, ('⊹', 1.0): 4, ('1710', 1.0): 1, ('gooffeanotter', 1.0): 1, ('kiksex', 1.0): 1, ('mugshot', 1.0): 1, ('token', 1.0): 1, ('maritimen', 1.0): 1, ('rh', 1.0): 1, ('tatton', 1.0): 1, ('jump_julia', 1.0): 1, ('malema', 1.0): 1, ('fren', 1.0): 1, ('nuf', 1.0): 1, ('teas', 1.0): 1, ('alien', 1.0): 2, ('closer', 1.0): 1, ('monitor', 1.0): 1, ('kimmi', 1.0): 1, (\"channel'\", 1.0): 1, ('planetbollywoodnew', 1.0): 1, ('epi', 1.0): 1, ('tricki', 1.0): 1, ('be-shak', 1.0): 1, ('chenoweth', 1.0): 1, ('oodl', 1.0): 1, ('hailey', 1.0): 1, ('craźi', 1.0): 1, ('sęxxxÿ', 1.0): 1, ('cøôl', 1.0): 1, ('runway', 1.0): 1, ('gooodnight', 1.0): 1, ('iv', 1.0): 1, ('ri', 1.0): 1, ('jayci', 1.0): 1, ('karaok', 1.0): 1, ('ltsw', 1.0): 1, ('giant', 1.0): 1, ('1709', 1.0): 1, ('refus', 1.0): 1, ('collagen', 1.0): 1, ('2win', 1.0): 1, ('hopetowin', 1.0): 1, ('inventori', 1.0): 1, ('loveforfood', 1.0): 1, ('foodforthought', 1.0): 1, ('thoughtfortheday', 1.0): 1, ('carp', 1.0): 1, ('diem', 1.0): 1, ('nath', 1.0): 1, ('ning', 1.0): 1, ('although', 1.0): 1, ('harm', 1.0): 1, ('stormi', 1.0): 1, ('sync', 1.0): 1, ('devic', 1.0): 1, ('mess', 1.0): 1, ('nylon', 1.0): 1, ('gvb', 1.0): 1, ('cd', 1.0): 1, ('mountain.titl', 1.0): 1, ('unto', 1.0): 1, ('theworldwouldchang', 1.0): 1, ('categori', 1.0): 1, ('mah', 1.0): 1, ('panel', 1.0): 1, (\"i'am\", 1.0): 1, ('80-1', 1.0): 1, ('1708', 1.0): 1, ('neenkin', 1.0): 1, ('masterpiec', 1.0): 1, ('debit', 1.0): 1, ('beagl', 1.0): 1, ('♫', 1.0): 1, ('feat', 1.0): 1, ('charli', 1.0): 1, ('puth', 1.0): 1, ('wiz', 1.0): 1, ('khalifa', 1.0): 1, ('svu', 1.0): 1, ('darker', 1.0): 1, ('berni', 1.0): 1, ('henri', 1.0): 1, ('trap', 1.0): 1, ('tommi', 1.0): 1, (\"vivian'\", 1.0): 1, ('transpar', 1.0): 1, ('bitcoin', 1.0): 1, ('insight', 1.0): 1, ('ping', 1.0): 1, ('masquerad', 1.0): 1, ('zorroreturm', 1.0): 1, ('1707', 1.0): 1, ('pk', 1.0): 1, ('hay', 1.0): 1, ('jacquelin', 1.0): 1, ('passion', 1.0): 1, ('full-fledg', 1.0): 1, ('workplac', 1.0): 1, ('venu', 1.0): 1, ('lago', 1.0): 1, ('luxord', 1.0): 1, ('potato', 1.0): 1, ('hundr', 1.0): 1, ('cite', 1.0): 1, ('academ', 1.0): 1, ('pokiri', 1.0): 1, ('1nenokkadin', 1.0): 
1, ('heritag', 1.0): 1, ('wood', 1.0): 1, ('beleaf', 1.0): 1, ('spnfamili', 1.0): 1, ('spn', 1.0): 1, ('alwayskeepfight', 1.0): 1, ('jaredpadalecki', 1.0): 1, ('jensenackl', 1.0): 1, ('peasant', 1.0): 2, ('ahahha', 1.0): 1, ('distant', 1.0): 1, ('shout-out', 1.0): 1, ('adulthood', 1.0): 1, ('hopeless', 0.0): 2, ('tmr', 0.0): 3, (':(', 0.0): 4571, ('everyth', 0.0): 17, ('kid', 0.0): 20, ('section', 0.0): 3, ('ikea', 0.0): 1, ('cute', 0.0): 43, ('shame', 0.0): 19, (\"i'm\", 0.0): 343, ('nearli', 0.0): 3, ('19', 0.0): 8, ('2', 0.0): 42, ('month', 0.0): 23, ('heart', 0.0): 27, ('slide', 0.0): 1, ('wast', 0.0): 5, ('basket', 0.0): 1, ('“', 0.0): 15, ('hate', 0.0): 57, ('japanes', 0.0): 4, ('call', 0.0): 29, ('bani', 0.0): 2, ('”', 0.0): 11, ('dang', 0.0): 2, ('start', 0.0): 44, ('next', 0.0): 40, ('week', 0.0): 56, ('work', 0.0): 133, ('oh', 0.0): 92, ('god', 0.0): 15, ('babi', 0.0): 47, ('face', 0.0): 20, ('make', 0.0): 102, ('smile', 0.0): 10, ('neighbour', 0.0): 1, ('motor', 0.0): 1, ('ask', 0.0): 29, ('said', 0.0): 33, ('updat', 0.0): 11, ('search', 0.0): 3, ('sialan', 0.0): 1, ('athabasca', 0.0): 2, ('glacier', 0.0): 2, ('1948', 0.0): 1, (':-(', 0.0): 493, ('jasper', 0.0): 1, ('jaspernationalpark', 0.0): 1, ('alberta', 0.0): 1, ('explorealberta', 0.0): 1, ('…', 0.0): 16, ('realli', 0.0): 131, ('good', 0.0): 101, ('g', 0.0): 8, ('idea', 0.0): 10, ('never', 0.0): 57, ('go', 0.0): 224, ('meet', 0.0): 31, ('mare', 0.0): 1, ('ivan', 0.0): 1, ('happi', 0.0): 25, ('trip', 0.0): 11, ('keep', 0.0): 34, ('safe', 0.0): 5, ('see', 0.0): 124, ('soon', 0.0): 45, ('tire', 0.0): 50, ('hahahah', 0.0): 3, ('knee', 0.0): 2, ('replac', 0.0): 4, ('get', 0.0): 232, ('day', 0.0): 149, ('ouch', 0.0): 3, ('relat', 0.0): 2, ('sweet', 0.0): 7, ('n', 0.0): 21, ('sour', 0.0): 2, ('kind', 0.0): 11, ('bi-polar', 0.0): 1, ('peopl', 0.0): 75, ('life', 0.0): 33, ('...', 0.0): 331, ('cuz', 0.0): 4, ('full', 0.0): 16, ('pleass', 0.0): 2, ('im', 0.0): 129, ('sure', 0.0): 31, ('tho', 0.0): 28, ('feel', 0.0): 158, ('stupid', 0.0): 8, (\"can't\", 0.0): 180, ('seem', 0.0): 15, ('grasp', 0.0): 1, ('basic', 0.0): 2, ('digit', 0.0): 8, ('paint', 0.0): 3, ('noth', 0.0): 26, (\"i'v\", 0.0): 77, ('research', 0.0): 1, ('help', 0.0): 54, ('lord', 0.0): 2, ('lone', 0.0): 9, ('someon', 0.0): 57, ('talk', 0.0): 45, ('guy', 0.0): 62, ('girl', 0.0): 28, ('assign', 0.0): 5, ('project', 0.0): 3, ('😩', 0.0): 14, ('want', 0.0): 246, ('play', 0.0): 48, ('video', 0.0): 23, ('game', 0.0): 28, ('watch', 0.0): 77, ('movi', 0.0): 24, ('choreograph', 0.0): 1, ('hard', 0.0): 35, ('email', 0.0): 10, ('link', 0.0): 12, ('still', 0.0): 124, ('say', 0.0): 63, ('longer', 0.0): 12, ('avail', 0.0): 13, ('cri', 0.0): 46, ('bc', 0.0): 50, ('miss', 0.0): 301, ('mingm', 0.0): 1, ('much', 0.0): 139, ('sorri', 0.0): 148, ('mom', 0.0): 13, ('far', 0.0): 18, ('away', 0.0): 28, (\"we'r\", 0.0): 30, ('truli', 0.0): 5, ('flight', 0.0): 6, ('friend', 0.0): 39, ('happen', 0.0): 51, ('sad', 0.0): 123, ('dog', 0.0): 17, ('pee', 0.0): 2, ('’', 0.0): 27, ('bag', 0.0): 8, ('take', 0.0): 49, ('newwin', 0.0): 1, ('15', 0.0): 10, ('doushit', 0.0): 1, ('late', 0.0): 27, ('suck', 0.0): 23, ('sick', 0.0): 43, ('plan', 0.0): 17, ('first', 0.0): 27, ('gundam', 0.0): 1, ('night', 0.0): 46, ('nope', 0.0): 6, ('dollar', 0.0): 1, ('😭', 0.0): 29, ('listen', 0.0): 18, ('back', 0.0): 122, ('old', 0.0): 16, ('show', 0.0): 26, ('know', 0.0): 131, ('weird', 0.0): 10, ('got', 0.0): 104, ('u', 0.0): 193, ('leav', 0.0): 42, ('might', 0.0): 11, ('give', 0.0): 36, ('pale', 0.0): 2, ('imit', 0.0): 1, 
('went', 0.0): 32, ('sea', 0.0): 1, ('massiv', 0.0): 4, ('fuck', 0.0): 58, ('rash', 0.0): 1, ('bodi', 0.0): 12, ('pain', 0.0): 21, ('thing', 0.0): 52, ('ever', 0.0): 30, ('home', 0.0): 63, ('hi', 0.0): 34, ('absent', 0.0): 1, ('gran', 0.0): 2, ('knew', 0.0): 6, ('care', 0.0): 20, ('tell', 0.0): 26, ('love', 0.0): 152, ('wish', 0.0): 91, ('would', 0.0): 70, ('sequel', 0.0): 1, ('busi', 0.0): 28, ('sa', 0.0): 15, ('school', 0.0): 32, ('time', 0.0): 166, ('yah', 0.0): 3, ('xx', 0.0): 18, ('ouucchhh', 0.0): 1, ('one', 0.0): 148, ('wisdom', 0.0): 2, ('teeth', 0.0): 6, ('come', 0.0): 91, ('frighten', 0.0): 1, ('case', 0.0): 6, ('pret', 0.0): 1, ('wkwkw', 0.0): 1, ('verfi', 0.0): 1, ('activ', 0.0): 6, ('forget', 0.0): 8, ('follow', 0.0): 262, ('member', 0.0): 6, ('thank', 0.0): 107, ('join', 0.0): 8, ('goodby', 0.0): 14, ('´', 0.0): 4, ('chain', 0.0): 1, ('—', 0.0): 26, ('sentir-s', 0.0): 1, ('incompleta', 0.0): 1, ('okay', 0.0): 38, ('..', 0.0): 108, ('wednesday', 0.0): 5, ('marvel', 0.0): 1, ('thwart', 0.0): 1, ('awh', 0.0): 3, (\"what'\", 0.0): 15, ('chanc', 0.0): 16, ('zant', 0.0): 1, ('need', 0.0): 106, ('someth', 0.0): 28, ('x', 0.0): 39, (\"when'\", 0.0): 1, ('birthday', 0.0): 23, ('worst', 0.0): 14, ('part', 0.0): 11, ('bad', 0.0): 73, ('audraesar', 0.0): 1, ('sushi', 0.0): 3, ('pic', 0.0): 15, ('tl', 0.0): 8, ('drive', 0.0): 16, ('craaazzyy', 0.0): 2, ('pop', 0.0): 3, ('like', 0.0): 228, ('helium', 0.0): 1, ('balloon', 0.0): 1, ('climatechang', 0.0): 5, ('cc', 0.0): 6, (\"california'\", 0.0): 1, ('power', 0.0): 6, ('influenti', 0.0): 1, ('air', 0.0): 3, ('pollut', 0.0): 1, ('watchdog', 0.0): 1, ('califor', 0.0): 1, ('elhaida', 0.0): 1, ('rob', 0.0): 2, ('juri', 0.0): 1, ('came', 0.0): 16, ('10th', 0.0): 1, ('televot', 0.0): 1, ('idaho', 0.0): 2, ('restrict', 0.0): 2, ('fish', 0.0): 2, ('despit', 0.0): 2, ('region', 0.0): 2, ('drought-link', 0.0): 1, ('die-of', 0.0): 1, ('abrupt', 0.0): 1, ('climat', 0.0): 1, ('chang', 0.0): 27, ('may', 0.0): 16, ('doom', 0.0): 2, ('mammoth', 0.0): 1, ('megafauna', 0.0): 1, ('sc', 0.0): 3, (\"australia'\", 0.0): 1, ('dirtiest', 0.0): 2, ('station', 0.0): 3, ('consid', 0.0): 5, ('clean', 0.0): 6, ('energi', 0.0): 3, ('biomass', 0.0): 1, (\"ain't\", 0.0): 5, ('easi', 0.0): 6, ('green', 0.0): 7, ('golf', 0.0): 1, ('cours', 0.0): 7, ('california', 0.0): 1, ('ulti', 0.0): 1, ('well', 0.0): 56, ('mine', 0.0): 12, ('gonna', 0.0): 51, ('sexi', 0.0): 14, ('prexi', 0.0): 1, ('kindergarten', 0.0): 1, ('hungri', 0.0): 19, ('cant', 0.0): 47, ('find', 0.0): 53, ('book', 0.0): 20, ('sane', 0.0): 1, ('liter', 0.0): 15, ('three', 0.0): 7, ('loung', 0.0): 1, ('event', 0.0): 4, ('turn', 0.0): 17, ('boss', 0.0): 5, ('hozier', 0.0): 1, (\"that'\", 0.0): 61, ('true', 0.0): 22, ('soooner', 0.0): 1, ('ahh', 0.0): 7, ('fam', 0.0): 3, ('respectlost', 0.0): 1, ('hypercholesteloremia', 0.0): 1, ('ok', 0.0): 33, ('look', 0.0): 100, ('gift', 0.0): 11, ('calibraska', 0.0): 1, ('actual', 0.0): 24, ('genuin', 0.0): 2, ('contend', 0.0): 1, ('head', 0.0): 23, ('alway', 0.0): 56, ('hurt', 0.0): 41, ('stay', 0.0): 24, ('lmao', 0.0): 13, ('older', 0.0): 5, ('sound', 0.0): 19, ('upset', 0.0): 11, ('infinit', 0.0): 10, ('ao', 0.0): 1, ('stick', 0.0): 1, ('8th', 0.0): 1, ('either', 0.0): 13, ('seriou', 0.0): 8, ('yun', 0.0): 1, ('eh', 0.0): 4, ('room', 0.0): 11, ('way', 0.0): 42, ('hot', 0.0): 15, ('havent', 0.0): 11, ('found', 0.0): 11, ('handsom', 0.0): 2, ('jack', 0.0): 3, ('draw', 0.0): 2, ('shit', 0.0): 36, ('cut', 0.0): 14, ('encor', 0.0): 4, ('4thwin', 0.0): 4, ('baymax', 0.0): 1, 
('french', 0.0): 4, ('mixer', 0.0): 1, ('💜', 0.0): 6, ('wft', 0.0): 1, ('awesom', 0.0): 5, ('replay', 0.0): 1, ('parti', 0.0): 15, ('promot', 0.0): 3, ('music', 0.0): 16, ('bank', 0.0): 9, ('short', 0.0): 11, ('boy', 0.0): 18, ('order', 0.0): 16, ('receiv', 0.0): 7, ('hub', 0.0): 1, ('nearest', 0.0): 1, ('deliv', 0.0): 3, ('today', 0.0): 108, ('1/2', 0.0): 3, ('mum', 0.0): 14, ('loud', 0.0): 2, ('final', 0.0): 35, ('parasyt', 0.0): 1, ('alll', 0.0): 1, ('zayniscomingbackonjuli', 0.0): 23, ('26', 0.0): 24, ('bye', 0.0): 8, ('era', 0.0): 1, ('。', 0.0): 3, ('ω', 0.0): 1, ('」', 0.0): 2, ('∠', 0.0): 2, ('):', 0.0): 6, ('nathann', 0.0): 1, ('💕', 0.0): 7, ('hug', 0.0): 29, ('😊', 0.0): 9, ('beauti', 0.0): 11, ('dieididieieiei', 0.0): 1, ('stage', 0.0): 15, ('mean', 0.0): 43, ('hello', 0.0): 13, ('lion', 0.0): 3, ('think', 0.0): 75, ('screw', 0.0): 4, ('netflix', 0.0): 5, ('chill', 0.0): 7, ('di', 0.0): 7, ('ervin', 0.0): 1, ('ohh', 0.0): 8, ('yeah', 0.0): 41, ('hope', 0.0): 102, ('accept', 0.0): 2, ('offer', 0.0): 10, ('desper', 0.0): 2, ('year', 0.0): 46, ('snapchat', 0.0): 79, ('amargolonnard', 0.0): 2, ('kikhorni', 0.0): 13, ('snapm', 0.0): 4, ('tagsforlik', 0.0): 5, ('batalladelosgallo', 0.0): 2, ('webcamsex', 0.0): 4, ('ugh', 0.0): 26, ('stream', 0.0): 24, ('duti', 0.0): 3, (\"u'v\", 0.0): 1, ('gone', 0.0): 24, ('alien', 0.0): 1, ('aww', 0.0): 21, ('wanna', 0.0): 94, ('sorka', 0.0): 1, ('funer', 0.0): 4, ('text', 0.0): 15, ('phone', 0.0): 34, ('sunni', 0.0): 1, ('nonexist', 0.0): 1, ('wowza', 0.0): 1, ('fah', 0.0): 1, ('taylor', 0.0): 3, ('crop', 0.0): 1, ('boo', 0.0): 5, ('count', 0.0): 7, ('new', 0.0): 51, ('guitar', 0.0): 1, ('jonghyun', 0.0): 1, ('hyung', 0.0): 1, ('pleas', 0.0): 275, ('predict', 0.0): 2, ('sj', 0.0): 3, ('nomin', 0.0): 1, ('vs', 0.0): 4, ('pl', 0.0): 45, ('dude', 0.0): 12, ('calm', 0.0): 3, ('brace', 0.0): 5, ('sir', 0.0): 5, ('plu', 0.0): 4, ('4', 0.0): 18, ('shock', 0.0): 3, ('omggg', 0.0): 2, ('yall', 0.0): 4, ('deserv', 0.0): 8, ('whenev', 0.0): 3, ('spend', 0.0): 8, ('smoke', 0.0): 3, ('end', 0.0): 40, ('fall', 0.0): 16, ('asleep', 0.0): 25, ('1', 0.0): 26, ('point', 0.0): 14, ('close', 0.0): 20, ('grand', 0.0): 1, ('whyyi', 0.0): 7, ('long', 0.0): 38, ('must', 0.0): 15, ('annoy', 0.0): 11, ('evan', 0.0): 1, ('option', 0.0): 3, ('opt', 0.0): 1, (\"who'\", 0.0): 7, ('giveaway', 0.0): 3, ('muster', 0.0): 1, ('merch', 0.0): 4, ('ah', 0.0): 18, ('funni', 0.0): 6, ('drink', 0.0): 7, ('savanna', 0.0): 1, ('straw', 0.0): 1, ('ignor', 0.0): 16, ('yester', 0.0): 1, ('afternoon', 0.0): 3, ('sleep', 0.0): 90, ('ye', 0.0): 48, ('sadli', 0.0): 11, ('when', 0.0): 2, ('album', 0.0): 16, ('last', 0.0): 72, ('chocol', 0.0): 8, ('consum', 0.0): 1, ('werk', 0.0): 1, ('morn', 0.0): 31, ('foreal', 0.0): 1, ('wesen', 0.0): 1, ('uwes', 0.0): 1, ('mj', 0.0): 1, ('😂', 0.0): 24, ('catch', 0.0): 9, ('onlin', 0.0): 20, ('enough', 0.0): 24, ('haha', 0.0): 30, (\"he'\", 0.0): 23, ('bosen', 0.0): 1, ('die', 0.0): 21, ('egg', 0.0): 4, ('benni', 0.0): 1, ('sometim', 0.0): 16, ('followback', 0.0): 6, ('huhu', 0.0): 17, ('understand', 0.0): 15, ('badli', 0.0): 12, ('scare', 0.0): 16, ('>:(', 0.0): 47, ('al', 0.0): 4, ('kati', 0.0): 3, ('zaz', 0.0): 1, ('ami', 0.0): 2, ('lot', 0.0): 27, ('diari', 0.0): 1, ('read', 0.0): 20, ('rehash', 0.0): 1, ('websit', 0.0): 7, ('mushroom', 0.0): 1, ('piec', 0.0): 4, ('except', 0.0): 5, ('reach', 0.0): 3, ('anyway', 0.0): 12, ('vicki', 0.0): 1, ('omg', 0.0): 63, ('wtf', 0.0): 13, ('lip', 0.0): 3, ('virgin', 0.0): 2, ('your', 0.0): 8, ('45', 0.0): 1, ('hahah', 0.0): 
6, ('ninasti', 0.0): 1, ('tsktsk', 0.0): 1, ('oppa', 0.0): 4, ('wont', 0.0): 9, ('dick', 0.0): 5, ('kawaii', 0.0): 1, ('manli', 0.0): 1, ('xbox', 0.0): 3, ('alreadi', 0.0): 52, ('comfi', 0.0): 1, ('bed', 0.0): 12, ('youu', 0.0): 2, ('sigh', 0.0): 13, ('lol', 0.0): 43, ('potato', 0.0): 1, ('fri', 0.0): 7, ('guess', 0.0): 14, (\"y'all\", 0.0): 2, ('ugli', 0.0): 9, ('asf', 0.0): 1, ('huh', 0.0): 7, ('eish', 0.0): 1, ('ive', 0.0): 11, ('quit', 0.0): 9, ('lost', 0.0): 25, ('twitter', 0.0): 30, ('mojo', 0.0): 1, ('dont', 0.0): 53, ('mara', 0.0): 1, ('neh', 0.0): 2, ('fever', 0.0): 7, ('<3', 0.0): 25, ('poor', 0.0): 35, ('bb', 0.0): 7, ('abl', 0.0): 22, ('associ', 0.0): 1, ('councillor', 0.0): 1, ('confer', 0.0): 2, ('weekend', 0.0): 25, ('skype', 0.0): 6, ('account', 0.0): 20, ('hack', 0.0): 8, ('contact', 0.0): 7, ('creat', 0.0): 2, ('tweet', 0.0): 35, ('spree', 0.0): 4, ('na', 0.0): 29, ('sholong', 0.0): 1, ('reject', 0.0): 7, ('propos', 0.0): 2, ('gee', 0.0): 1, ('fli', 0.0): 10, ('gidi', 0.0): 1, ('pamper', 0.0): 1, ('lago', 0.0): 1, ('ehn', 0.0): 1, ('arrest', 0.0): 1, ('girlfriend', 0.0): 2, ('he', 0.0): 3, ('nice', 0.0): 19, ('person', 0.0): 15, ('idk', 0.0): 26, ('anybodi', 0.0): 7, ('song', 0.0): 27, ('disappear', 0.0): 1, ('itun', 0.0): 3, ('daze', 0.0): 1, ('confus', 0.0): 8, ('surviv', 0.0): 5, ('fragment', 0.0): 1, (\"would'v\", 0.0): 2, ('forc', 0.0): 2, ('horribl', 0.0): 9, ('weather', 0.0): 29, ('us', 0.0): 43, ('could', 0.0): 69, ('walao', 0.0): 1, ('kb', 0.0): 1, ('send', 0.0): 12, ('ill', 0.0): 16, ('djderek', 0.0): 1, ('mani', 0.0): 29, ('fun', 0.0): 32, ('gig', 0.0): 3, ('absolut', 0.0): 6, ('legend', 0.0): 3, ('wait', 0.0): 43, ('till', 0.0): 8, ('saturday', 0.0): 10, ('homework', 0.0): 2, ('pa', 0.0): 8, ('made', 0.0): 23, ('da', 0.0): 5, ('greek', 0.0): 2, ('tragedi', 0.0): 1, ('rain', 0.0): 43, ('gym', 0.0): 6, ('💪', 0.0): 2, ('🏻', 0.0): 4, ('🐒', 0.0): 1, ('what', 0.0): 8, ('wrong', 0.0): 33, ('struck', 0.0): 1, ('anymor', 0.0): 20, ('belgium', 0.0): 4, ('fabian', 0.0): 2, ('delph', 0.0): 6, ('fallen', 0.0): 3, ('hide', 0.0): 4, ('drake', 0.0): 1, ('silent', 0.0): 1, ('hear', 0.0): 33, ('rest', 0.0): 21, ('peac', 0.0): 5, ('mo', 0.0): 4, ('tonight', 0.0): 24, ('t20blast', 0.0): 1, ('ahhh', 0.0): 5, ('wake', 0.0): 21, ('mumma', 0.0): 2, ('7', 0.0): 16, ('dead', 0.0): 10, ('tomorrow', 0.0): 34, (\"i'll\", 0.0): 41, ('high', 0.0): 8, ('low', 0.0): 8, ('pray', 0.0): 13, ('appropri', 0.0): 1, ('. . 
.', 0.0): 2, ('awak', 0.0): 10, ('woke', 0.0): 14, ('upp', 0.0): 1, ('dm', 0.0): 23, ('luke', 0.0): 6, ('hey', 0.0): 26, ('babe', 0.0): 19, ('across', 0.0): 4, ('hindi', 0.0): 1, ('reaction', 0.0): 1, ('5s', 0.0): 1, ('run', 0.0): 15, ('space', 0.0): 5, ('tbh', 0.0): 14, ('disabl', 0.0): 2, ('pension', 0.0): 1, ('ptsd', 0.0): 1, ('imposs', 0.0): 4, ('physic', 0.0): 7, ('financi', 0.0): 2, ('nooo', 0.0): 16, ('broke', 0.0): 9, ('soo', 0.0): 3, ('amaz', 0.0): 16, ('toghet', 0.0): 1, ('around', 0.0): 20, ('p', 0.0): 5, ('hold', 0.0): 9, ('anoth', 0.0): 27, ('septemb', 0.0): 2, ('21st', 0.0): 2, ('snsd', 0.0): 2, ('interact', 0.0): 2, ('anna', 0.0): 5, ('akana', 0.0): 1, ('askip', 0.0): 1, (\"t'exist\", 0.0): 1, ('channel', 0.0): 6, ('owner', 0.0): 1, ('decid', 0.0): 10, ('broadcast', 0.0): 6, ('kei', 0.0): 2, ('rate', 0.0): 4, ('se', 0.0): 2, ('notic', 0.0): 26, ('exist', 0.0): 2, ('traffic', 0.0): 5, ('terribl', 0.0): 12, ('eye', 0.0): 12, ('small', 0.0): 9, ('kate', 0.0): 2, ('spade', 0.0): 1, ('pero', 0.0): 3, ('walang', 0.0): 1, ('maganda', 0.0): 1, ('aw', 0.0): 42, ('seen', 0.0): 23, ('agesss', 0.0): 1, ('add', 0.0): 26, ('corinehurleigh', 0.0): 1, ('snapchatm', 0.0): 6, ('instagram', 0.0): 4, ('addmeonsnapchat', 0.0): 2, ('sf', 0.0): 3, ('quot', 0.0): 6, ('kiksext', 0.0): 6, ('bum', 0.0): 2, ('zara', 0.0): 1, ('trouser', 0.0): 1, ('effect', 0.0): 4, ('spanish', 0.0): 1, (\"it'okay\", 0.0): 1, ('health', 0.0): 2, ('luck', 0.0): 6, ('freed', 0.0): 1, ('rock', 0.0): 3, ('orcalov', 0.0): 1, ('tri', 0.0): 65, ('big', 0.0): 21, ('cuddl', 0.0): 8, ('lew', 0.0): 1, ('kiss', 0.0): 4, ('em', 0.0): 1, ('crave', 0.0): 8, ('banana', 0.0): 4, ('crumbl', 0.0): 1, ('mcflurri', 0.0): 1, ('cabl', 0.0): 1, ('car', 0.0): 17, ('brother', 0.0): 10, (\"venus'\", 0.0): 1, ('concept', 0.0): 4, ('rli', 0.0): 5, ('tea', 0.0): 7, ('tagal', 0.0): 2, (\"we'v\", 0.0): 3, ('appoint', 0.0): 1, (\"i'd\", 0.0): 11, ('sinc', 0.0): 35, (\"there'\", 0.0): 18, ('milk', 0.0): 3, ('left', 0.0): 26, ('cereal', 0.0): 2, ('film', 0.0): 6, ('date', 0.0): 7, ('previou', 0.0): 2, ('73', 0.0): 2, ('user', 0.0): 1, ('everywher', 0.0): 6, ('fansign', 0.0): 1, ('photo', 0.0): 15, ('expens', 0.0): 7, ('zzzz', 0.0): 1, ('let', 0.0): 37, ('sun', 0.0): 10, ('yet', 0.0): 33, (\"bff'\", 0.0): 1, ('extrem', 0.0): 3, ('stress', 0.0): 10, ('anyth', 0.0): 19, ('win', 0.0): 27, (\"deosn't\", 0.0): 1, ('liverpool', 0.0): 2, ('pool', 0.0): 3, ('though', 0.0): 57, ('bro', 0.0): 3, ('great', 0.0): 22, ('news', 0.0): 21, ('self', 0.0): 1, ('esteem', 0.0): 1, ('lowest', 0.0): 1, ('better', 0.0): 36, ('tacki', 0.0): 1, ('taken', 0.0): 9, ('man', 0.0): 32, ('lucki', 0.0): 16, ('charm', 0.0): 1, ('haaretz', 0.0): 1, ('israel', 0.0): 1, ('syria', 0.0): 2, ('continu', 0.0): 1, ('develop', 0.0): 5, ('chemic', 0.0): 1, ('weapon', 0.0): 2, ('offici', 0.0): 3, ('wsj', 0.0): 2, ('rep', 0.0): 1, ('bt', 0.0): 4, ('mr', 0.0): 9, ('wong', 0.0): 1, ('confisc', 0.0): 1, ('art', 0.0): 4, ('thought', 0.0): 31, ('icepack', 0.0): 1, ('dose', 0.0): 2, ('killer', 0.0): 2, ('board', 0.0): 1, ('whimper', 0.0): 1, ('fan', 0.0): 17, ('senpai', 0.0): 1, ('buttsex', 0.0): 1, ('joke', 0.0): 8, ('headlin', 0.0): 1, (\"dn't\", 0.0): 1, ('brk', 0.0): 1, (\":'(\", 0.0): 13, ('hit', 0.0): 7, ('voic', 0.0): 9, ('falsetto', 0.0): 1, ('zone', 0.0): 2, ('leannerin', 0.0): 1, ('hornykik', 0.0): 17, ('loveofmylif', 0.0): 2, ('dmme', 0.0): 2, ('pussi', 0.0): 2, ('newmus', 0.0): 3, ('sexo', 0.0): 2, ('s2', 0.0): 1, ('spain', 0.0): 4, ('delay', 0.0): 5, ('kill', 0.0): 22, ('singl', 0.0): 10, 
('untruth', 0.0): 1, ('cross', 0.0): 4, ('countri', 0.0): 6, ('ij', 0.0): 1, ('💥', 0.0): 1, ('✨', 0.0): 1, ('💫', 0.0): 1, ('bear', 0.0): 2, ('littl', 0.0): 21, ('apart', 0.0): 7, ('live', 0.0): 37, ('soshi', 0.0): 1, ('didnt', 0.0): 24, ('buttt', 0.0): 2, ('congrat', 0.0): 2, ('sunday', 0.0): 8, ('friday', 0.0): 12, ('shoulda', 0.0): 1, ('move', 0.0): 12, ('w', 0.0): 22, ('caus', 0.0): 16, (\"they'r\", 0.0): 14, ('heyyy', 0.0): 1, ('yeol', 0.0): 2, ('solo', 0.0): 6, ('dancee', 0.0): 1, ('inter', 0.0): 1, ('nemanja', 0.0): 1, ('vidic', 0.0): 1, ('roma', 0.0): 1, (\"mom'\", 0.0): 2, ('linguist', 0.0): 1, (\"dad'\", 0.0): 1, ('comput', 0.0): 6, ('scientist', 0.0): 1, ('dumbest', 0.0): 1, ('famili', 0.0): 9, ('broken', 0.0): 11, ('ice', 0.0): 35, ('cream', 0.0): 32, ('pour', 0.0): 1, ('crash', 0.0): 6, ('scienc', 0.0): 1, ('resourc', 0.0): 1, ('vehicl', 0.0): 5, ('ate', 0.0): 10, ('ayex', 0.0): 1, ('eat', 0.0): 27, ('swear', 0.0): 6, ('lamon', 0.0): 1, ('scroll', 0.0): 1, ('curv', 0.0): 2, ('😉', 0.0): 1, ('cement', 0.0): 1, ('cast', 0.0): 5, ('10.3', 0.0): 1, ('k', 0.0): 9, ('sign', 0.0): 9, ('zayn', 0.0): 8, ('bot', 0.0): 1, ('plz', 0.0): 3, ('mention', 0.0): 9, ('jmu', 0.0): 1, ('camp', 0.0): 7, ('teas', 0.0): 3, ('sweetest', 0.0): 1, ('awuna', 0.0): 1, ('mbulelo', 0.0): 1, ('match', 0.0): 7, ('pig', 0.0): 2, ('although', 0.0): 5, ('crackl', 0.0): 1, ('nois', 0.0): 3, ('plug', 0.0): 2, ('fuse', 0.0): 1, ('dammit', 0.0): 3, ('tip', 0.0): 2, ('carlton', 0.0): 2, ('aflblueshawk', 0.0): 2, (\"alex'\", 0.0): 1, ('hous', 0.0): 16, ('motorsport', 0.0): 1, ('seri', 0.0): 3, ('disc', 0.0): 1, ('right', 0.0): 51, ('cheeki', 0.0): 1, ('j', 0.0): 1, ('instead', 0.0): 4, ('seo', 0.0): 1, ('nl', 0.0): 1, ('bud', 0.0): 1, ('christi', 0.0): 1, ('xo', 0.0): 1, ('niec', 0.0): 1, ('summer', 0.0): 19, ('bloodi', 0.0): 2, ('sandwhich', 0.0): 1, ('buset', 0.0): 1, ('discrimin', 0.0): 4, ('five', 0.0): 5, ('learn', 0.0): 5, ('pregnanc', 0.0): 2, ('foot', 0.0): 5, ('f', 0.0): 4, ('matern', 0.0): 1, ('kick', 0.0): 6, ('domesticviol', 0.0): 1, ('law', 0.0): 4, ('domest', 0.0): 1, ('violenc', 0.0): 2, ('victim', 0.0): 4, ('98fm', 0.0): 1, ('exactli', 0.0): 5, ('unfortun', 0.0): 21, ('yesterday', 0.0): 13, ('uk', 0.0): 9, ('govern', 0.0): 1, ('sapiosexu', 0.0): 1, ('damn', 0.0): 29, ('beta', 0.0): 4, ('12', 0.0): 8, ('hour', 0.0): 35, ('world', 0.0): 17, ('hulk', 0.0): 3, ('hogan', 0.0): 3, ('scrub', 0.0): 1, ('wwe', 0.0): 2, ('histori', 0.0): 2, ('iren', 0.0): 4, ('mistak', 0.0): 6, ('naa', 0.0): 1, ('sold', 0.0): 6, ('h_my_k', 0.0): 1, ('lose', 0.0): 7, ('valentin', 0.0): 2, ('et', 0.0): 3, (\"r'ship\", 0.0): 1, ('btwn', 0.0): 1, ('homo', 0.0): 2, ('biphob', 0.0): 2, ('comment', 0.0): 4, ('certain', 0.0): 6, ('disciplin', 0.0): 2, ('incl', 0.0): 2, ('european', 0.0): 3, ('lang', 0.0): 6, ('lit', 0.0): 2, ('educ', 0.0): 2, ('fresherstofin', 0.0): 1, ('💔', 0.0): 3, ('dream', 0.0): 24, ('gettin', 0.0): 2, ('realist', 0.0): 4, ('thx', 0.0): 1, ('real', 0.0): 21, ('isnt', 0.0): 7, ('prefer', 0.0): 4, ('benzema', 0.0): 2, ('hahahahahaah', 0.0): 1, ('donno', 0.0): 1, ('korean', 0.0): 2, ('languag', 0.0): 5, ('russian', 0.0): 2, ('waaa', 0.0): 1, ('eidwithgrof', 0.0): 1, ('boreddd', 0.0): 1, ('mug', 0.0): 3, ('piss', 0.0): 3, ('tiddler', 0.0): 1, ('silli', 0.0): 2, ('least', 0.0): 15, ('card', 0.0): 7, ('chorong', 0.0): 1, ('leader', 0.0): 1, ('에이핑크', 0.0): 3, ('더쇼', 0.0): 4, ('clan', 0.0): 1, ('slot', 0.0): 2, ('open', 0.0): 16, ('pfff', 0.0): 1, ('privat', 0.0): 2, ('bugbounti', 0.0): 1, ('self-xss', 0.0): 1, ('host', 
0.0): 2, ('header', 0.0): 3, ('poison', 0.0): 3, ('code', 0.0): 8, ('execut', 0.0): 1, ('ktksbye', 0.0): 1, ('connect', 0.0): 3, ('compani', 0.0): 3, ('alert', 0.0): 2, ('cancel', 0.0): 10, ('uber', 0.0): 3, ('everyon', 0.0): 26, ('els', 0.0): 4, ('offic', 0.0): 7, ('ahahah', 0.0): 1, ('petit', 0.0): 1, ('relationship', 0.0): 4, ('height', 0.0): 2, ('cost', 0.0): 1, ('600', 0.0): 2, ('£', 0.0): 6, ('secur', 0.0): 4, ('odoo', 0.0): 2, ('8', 0.0): 11, ('partner', 0.0): 2, ('commun', 0.0): 2, ('spirit', 0.0): 3, ('jgh', 0.0): 2, ('effin', 0.0): 1, ('facebook', 0.0): 4, ('anyon', 0.0): 17, (\"else'\", 0.0): 1, ('box', 0.0): 8, ('ap', 0.0): 3, ('stori', 0.0): 13, ('london', 0.0): 12, ('imagin', 0.0): 2, ('elsewher', 0.0): 1, ('someday', 0.0): 1, ('ben', 0.0): 3, ('provid', 0.0): 3, ('name', 0.0): 15, ('branch', 0.0): 1, ('visit', 0.0): 12, ('address', 0.0): 3, ('concern', 0.0): 3, ('welsh', 0.0): 1, ('pod', 0.0): 1, ('juli', 0.0): 12, ('laura', 0.0): 4, ('insid', 0.0): 10, ('train', 0.0): 12, ('D;', 0.0): 1, ('talk-kama', 0.0): 1, ('hawako', 0.0): 1, ('waa', 0.0): 1, ('kimaaani', 0.0): 1, ('prisss', 0.0): 1, ('baggag', 0.0): 2, ('claim', 0.0): 3, ('plane', 0.0): 2, ('niamh', 0.0): 1, ('forev', 0.0): 10, ('hmmm', 0.0): 2, ('sugar', 0.0): 3, ('rare', 0.0): 1, ('paper', 0.0): 16, ('town', 0.0): 14, ('score', 0.0): 3, ('stuck', 0.0): 8, ('agh', 0.0): 2, ('middl', 0.0): 7, ('undercoverboss', 0.0): 1, ('تكفى', 0.0): 1, ('10', 0.0): 8, ('job', 0.0): 13, ('cat', 0.0): 17, ('forgotten', 0.0): 3, ('yep', 0.0): 5, ('stop', 0.0): 43, ('ach', 0.0): 2, ('wrist', 0.0): 1, ('nake', 0.0): 3, ('forgot', 0.0): 14, ('bracelet', 0.0): 3, ('ligo', 0.0): 1, ('dozen', 0.0): 1, ('parent', 0.0): 8, ('children', 0.0): 2, ('shark', 0.0): 2, ('selfi', 0.0): 6, ('heartach', 0.0): 1, ('zayniscomingback', 0.0): 3, ('mix', 0.0): 2, ('sweden', 0.0): 1, ('breath', 0.0): 4, ('moment', 0.0): 14, ('word', 0.0): 16, ('elmhurst', 0.0): 1, ('fc', 0.0): 1, ('etid', 0.0): 1, (\"chillin'with\", 0.0): 1, ('father', 0.0): 2, ('istanya', 0.0): 1, ('2suppli', 0.0): 1, ('extra', 0.0): 3, ('infrastructur', 0.0): 2, ('teacher', 0.0): 2, ('doctor', 0.0): 4, ('nurs', 0.0): 2, ('paramed', 0.0): 1, ('countless', 0.0): 1, ('2cope', 0.0): 1, ('bore', 0.0): 23, ('plea', 0.0): 2, ('arian', 0.0): 1, ('hahahaha', 0.0): 6, ('slr', 0.0): 1, ('kendal', 0.0): 1, ('kyli', 0.0): 3, (\"kylie'\", 0.0): 1, ('manila', 0.0): 3, ('jeebu', 0.0): 1, ('reabsorbt', 0.0): 1, ('tooth', 0.0): 2, ('abscess', 0.0): 1, ('threaten', 0.0): 2, ('affect', 0.0): 1, ('front', 0.0): 6, ('crown', 0.0): 1, ('ooouch', 0.0): 1, ('barney', 0.0): 1, (\"be'\", 0.0): 1, ('yo', 0.0): 4, ('later', 0.0): 14, ('realis', 0.0): 6, ('problemat', 0.0): 1, ('expect', 0.0): 5, ('proud', 0.0): 8, ('mess', 0.0): 7, ('maa', 0.0): 2, ('without', 0.0): 25, ('bangalor', 0.0): 1, ('awww', 0.0): 23, ('lui', 0.0): 1, ('manzano', 0.0): 1, ('shaaa', 0.0): 1, ('super', 0.0): 11, ('7th', 0.0): 1, ('conven', 0.0): 1, ('2:30', 0.0): 2, ('pm', 0.0): 8, ('forward', 0.0): 6, ('delet', 0.0): 5, ('turkey', 0.0): 1, ('bomb', 0.0): 3, ('isi', 0.0): 1, ('allow', 0.0): 9, ('usa', 0.0): 2, ('use', 0.0): 43, ('airfield', 0.0): 1, ('jet', 0.0): 1, (\"jack'\", 0.0): 1, ('spam', 0.0): 6, ('sooo', 0.0): 16, ('☺', 0.0): 3, (\"mommy'\", 0.0): 1, ('reason', 0.0): 8, ('overweight', 0.0): 1, ('sigeg', 0.0): 1, ('habhab', 0.0): 1, ('masud', 0.0): 1, ('kaha', 0.0): 1, ('ko', 0.0): 10, ('akong', 0.0): 1, ('un', 0.0): 1, ('hella', 0.0): 4, ('matter', 0.0): 4, ('pala', 0.0): 1, ('hahaha', 0.0): 11, ('lesson', 0.0): 1, ('dolphin', 0.0): 1, 
('xxx', 0.0): 12, ('holi', 0.0): 2, ('anythin', 0.0): 1, ('trend', 0.0): 6, ('radio', 0.0): 4, ('sing', 0.0): 5, ('bewar', 0.0): 1, ('agonis', 0.0): 1, ('experi', 0.0): 2, ('ahead', 0.0): 3, ('modimo', 0.0): 1, ('ho', 0.0): 3, ('tseba', 0.0): 1, ('wena', 0.0): 1, ('fela', 0.0): 1, ('emot', 0.0): 8, ('hubbi', 0.0): 1, ('delight', 0.0): 1, ('return', 0.0): 6, ('bill', 0.0): 6, ('nowt', 0.0): 1, ('wors', 0.0): 8, ('willi', 0.0): 1, ('gon', 0.0): 1, ('vomit', 0.0): 1, ('famou', 0.0): 5, ('bowl', 0.0): 1, ('devast', 0.0): 1, ('titan', 0.0): 1, ('ae', 0.0): 1, ('mark', 0.0): 2, ('hair', 0.0): 21, ('shini', 0.0): 1, ('wavi', 0.0): 1, ('emo', 0.0): 2, ('germani', 0.0): 4, ('load', 0.0): 9, ('shed', 0.0): 2, ('ha', 0.0): 7, ('bheyp', 0.0): 1, ('ayemso', 0.0): 1, ('ear', 0.0): 5, ('swell', 0.0): 2, ('sm', 0.0): 7, ('fb', 0.0): 7, ('remind', 0.0): 3, ('abt', 0.0): 3, ('womad', 0.0): 1, ('wut', 0.0): 1, ('hell', 0.0): 11, ('viciou', 0.0): 1, ('circl', 0.0): 1, ('surpris', 0.0): 5, ('ticket', 0.0): 12, ('codi', 0.0): 1, ('simpson', 0.0): 1, ('concert', 0.0): 11, ('singapor', 0.0): 4, ('august', 0.0): 5, ('pooo', 0.0): 2, ('bh3', 0.0): 1, ('enter', 0.0): 1, ('pitchwar', 0.0): 1, ('chap', 0.0): 1, (\"mine'\", 0.0): 1, ('transcript', 0.0): 1, (\"apma'\", 0.0): 1, ('shoulder', 0.0): 2, ('bitch', 0.0): 11, ('competit', 0.0): 1, (\"it'll\", 0.0): 3, ('fine', 0.0): 6, ('timw', 0.0): 1, ('acc', 0.0): 8, ('rude', 0.0): 11, ('vitamin', 0.0): 1, ('e', 0.0): 9, ('oil', 0.0): 1, ('massag', 0.0): 5, ('everyday', 0.0): 7, ('healthier', 0.0): 1, ('easier', 0.0): 3, ('stretch', 0.0): 1, ('choos', 0.0): 7, ('blockjam', 0.0): 1, (\"schedule'\", 0.0): 1, ('whack', 0.0): 1, ('kik', 0.0): 69, ('thelock', 0.0): 1, ('76', 0.0): 1, ('sex', 0.0): 6, ('omegl', 0.0): 4, ('coupl', 0.0): 2, ('travel', 0.0): 11, ('hotgirl', 0.0): 2, ('2009', 0.0): 1, ('3', 0.0): 37, ('ghantay', 0.0): 1, ('light', 0.0): 8, ('nai', 0.0): 1, ('hay', 0.0): 8, ('deni', 0.0): 1, ('ruin', 0.0): 11, ('laguna', 0.0): 1, ('exit', 0.0): 2, ('gomen', 0.0): 1, ('heck', 0.0): 5, ('fair', 0.0): 12, ('grew', 0.0): 2, ('half', 0.0): 10, ('inch', 0.0): 2, ('two', 0.0): 19, ('problem', 0.0): 7, ('suuuper', 0.0): 1, ('65', 0.0): 1, ('sale', 0.0): 8, ('inact', 0.0): 8, ('orphan', 0.0): 1, ('black', 0.0): 12, ('earlier', 0.0): 9, ('whaaat', 0.0): 5, ('kaya', 0.0): 2, ('naaan', 0.0): 1, ('paus', 0.0): 1, ('randomli', 0.0): 1, ('app', 0.0): 13, ('3:30', 0.0): 1, ('walk', 0.0): 7, ('inglewood', 0.0): 1, ('ummm', 0.0): 4, ('anxieti', 0.0): 3, ('readi', 0.0): 12, ('also', 0.0): 19, ('charcoal', 0.0): 1, ('til', 0.0): 5, ('mid-end', 0.0): 1, ('aug', 0.0): 1, ('noooo', 0.0): 1, ('heard', 0.0): 6, ('rip', 0.0): 12, ('rodfanta', 0.0): 1, ('wasp', 0.0): 2, ('sting', 0.0): 1, ('avert', 0.0): 1, ('bug', 0.0): 3, ('(:', 0.0): 7, ('exo', 0.0): 2, ('seekli', 0.0): 1, ('riptito', 0.0): 1, ('manbearpig', 0.0): 1, ('cannot', 0.0): 7, ('grow', 0.0): 3, ('shorter', 0.0): 1, ('academ', 0.0): 1, ('free', 0.0): 19, ('exclus', 0.0): 2, ('unfair', 0.0): 7, ('esp', 0.0): 4, ('regard', 0.0): 1, ('current', 0.0): 7, ('bleak', 0.0): 1, ('german', 0.0): 1, ('chart', 0.0): 2, ('situat', 0.0): 2, ('entri', 0.0): 4, ('even', 0.0): 70, ('top', 0.0): 6, ('100', 0.0): 8, ('pfft', 0.0): 1, ('place', 0.0): 18, ('white', 0.0): 7, ('wash', 0.0): 1, ('polaroid', 0.0): 1, ('newbethvideo', 0.0): 1, ('greec', 0.0): 2, ('xur', 0.0): 2, ('imi', 0.0): 3, ('fill', 0.0): 1, ('♡', 0.0): 11, ('♥', 0.0): 22, ('xoxoxo', 0.0): 1, ('pictur', 0.0): 17, ('stud', 0.0): 1, ('hund', 0.0): 1, ('6', 0.0): 14, ('kikchat', 0.0): 9, 
('amazon', 0.0): 5, ('3.4', 0.0): 1, ('yach', 0.0): 1, ('telat', 0.0): 1, ('huvvft', 0.0): 1, ('zoo', 0.0): 2, ('fieldtrip', 0.0): 1, ('touch', 0.0): 5, ('yan', 0.0): 1, ('posit', 0.0): 2, ('king', 0.0): 1, ('futur', 0.0): 4, ('sizw', 0.0): 1, ('write', 0.0): 13, ('20', 0.0): 9, ('result', 0.0): 3, ('km', 0.0): 2, ('four', 0.0): 4, ('shift', 0.0): 5, ('aaahhh', 0.0): 2, ('boredom', 0.0): 1, ('en', 0.0): 1, ('aint', 0.0): 7, ('who', 0.0): 1, ('sins', 0.0): 1, ('that', 0.0): 13, ('somehow', 0.0): 2, ('tini', 0.0): 4, ('ball', 0.0): 2, ('barbel', 0.0): 1, ('owww', 0.0): 2, ('amsterdam', 0.0): 1, ('luv', 0.0): 2, ('💖', 0.0): 4, ('ps', 0.0): 3, ('looong', 0.0): 1, ('especi', 0.0): 4, (':/', 0.0): 11, ('lap', 0.0): 1, ('litro', 0.0): 1, ('shepherd', 0.0): 2, ('lami', 0.0): 1, ('mayb', 0.0): 27, ('relax', 0.0): 3, ('lungomar', 0.0): 1, ('pesaro', 0.0): 1, ('giachietittiwed', 0.0): 1, ('igersoftheday', 0.0): 1, ('summertim', 0.0): 1, ('nose', 0.0): 7, ('bruis', 0.0): 1, ('lil', 0.0): 8, ('snake', 0.0): 3, ('journey', 0.0): 2, ('scarf', 0.0): 1, ('au', 0.0): 3, ('afford', 0.0): 7, ('fridayfeel', 0.0): 1, ('earli', 0.0): 12, ('money', 0.0): 24, ('chicken', 0.0): 5, ('woe', 0.0): 4, ('nigga', 0.0): 3, ('motn', 0.0): 1, ('make-up', 0.0): 1, ('justic', 0.0): 1, ('import', 0.0): 4, ('sit', 0.0): 5, ('mind', 0.0): 7, ('buy', 0.0): 17, ('limit', 0.0): 4, ('ver', 0.0): 1, ('normal', 0.0): 5, ('edit', 0.0): 7, ('huhuhu', 0.0): 3, ('stack', 0.0): 1, (\"m'ladi\", 0.0): 1, ('j8', 0.0): 1, ('j11', 0.0): 1, ('m20', 0.0): 1, ('jk', 0.0): 5, ('acad', 0.0): 1, ('schedul', 0.0): 9, ('nowww', 0.0): 1, ('cop', 0.0): 1, ('jame', 0.0): 4, ('window', 0.0): 6, ('hugh', 0.0): 2, ('paw', 0.0): 1, ('muddi', 0.0): 1, ('distract', 0.0): 1, ('heyi', 0.0): 1, ('otherwis', 0.0): 3, ('picnic', 0.0): 1, ('24', 0.0): 11, ('cupcak', 0.0): 2, ('talaga', 0.0): 1, ('best', 0.0): 22, ('femal', 0.0): 3, ('poppin', 0.0): 1, ('joc', 0.0): 1, ('playin', 0.0): 1, ('saw', 0.0): 19, ('fix', 0.0): 10, ('coldplay', 0.0): 1, ('media', 0.0): 1, ('player', 0.0): 3, ('fail', 0.0): 10, ('subj', 0.0): 1, ('sobrang', 0.0): 1, ('bv', 0.0): 1, ('zamn', 0.0): 1, ('line', 0.0): 8, ('afropunk', 0.0): 1, ('fest', 0.0): 1, ('brooklyn', 0.0): 2, ('id', 0.0): 5, ('put', 0.0): 14, ('50', 0.0): 5, ('madrid', 0.0): 7, ('shithous', 0.0): 1, ('cutest', 0.0): 2, ('danc', 0.0): 6, ('ur', 0.0): 26, ('arm', 0.0): 3, ('rais', 0.0): 1, ('hand', 0.0): 12, ('ladder', 0.0): 2, ('told', 0.0): 11, ('climb', 0.0): 3, ('success', 0.0): 4, ('nerv', 0.0): 1, ('wrack', 0.0): 1, ('test', 0.0): 8, ('booset', 0.0): 1, ('restart', 0.0): 1, ('assassin', 0.0): 1, ('creed', 0.0): 1, ('ii', 0.0): 1, ('heap', 0.0): 1, ('fell', 0.0): 10, ('daughter', 0.0): 1, ('begin', 0.0): 4, ('ps3', 0.0): 1, ('ankl', 0.0): 4, ('step', 0.0): 5, ('puddl', 0.0): 2, ('wear', 0.0): 5, ('slipper', 0.0): 1, ('eve', 0.0): 1, ('bbi', 0.0): 6, ('sararoc', 0.0): 1, ('angri', 0.0): 5, ('pretti', 0.0): 15, ('fnaf', 0.0): 1, ('holiday', 0.0): 20, ('cheer', 0.0): 6, ('😘', 0.0): 11, ('anywayhedidanicejob', 0.0): 1, ('😞', 0.0): 3, ('3am', 0.0): 2, ('other', 0.0): 7, ('local', 0.0): 3, ('cruis', 0.0): 1, ('done', 0.0): 24, ('doubl', 0.0): 4, ('wail', 0.0): 1, ('manual', 0.0): 2, ('wheelchair', 0.0): 1, ('check', 0.0): 19, ('fit', 0.0): 3, ('nh', 0.0): 3, ('26week', 0.0): 1, ('sbenu', 0.0): 1, ('sasin', 0.0): 1, ('team', 0.0): 14, ('anarchi', 0.0): 1, ('af', 0.0): 14, ('candl', 0.0): 1, ('forehead', 0.0): 4, ('medicin', 0.0): 3, ('welcom', 0.0): 5, ('oop', 0.0): 4, ('hoya', 0.0): 3, ('mah', 0.0): 2, ('a', 0.0): 1, ('nobodi', 
0.0): 10, ('awhil', 0.0): 2, ('ago', 0.0): 20, ('b', 0.0): 10, ('hush', 0.0): 2, ('gurli', 0.0): 1, ('bring', 0.0): 9, ('purti', 0.0): 1, ('mouth', 0.0): 5, ('closer', 0.0): 2, ('shiver', 0.0): 1, ('solut', 0.0): 1, ('paid', 0.0): 8, ('properli', 0.0): 2, ('gol', 0.0): 1, ('pea', 0.0): 1, ('english', 0.0): 9, ('mental', 0.0): 4, ('tierd', 0.0): 2, ('third', 0.0): 1, (\"eye'\", 0.0): 1, ('thnkyouuu', 0.0): 1, ('carolin', 0.0): 1, ('neither', 0.0): 6, ('figur', 0.0): 6, ('mirror', 0.0): 1, ('highlight', 0.0): 2, ('pure', 0.0): 3, ('courag', 0.0): 1, ('bit', 0.0): 15, ('fishi', 0.0): 1, ('idek', 0.0): 1, ('apink', 0.0): 5, ('perform', 0.0): 8, ('bulet', 0.0): 1, ('gendut', 0.0): 1, ('noo', 0.0): 5, ('race', 0.0): 3, ('hotwheel', 0.0): 1, ('ms', 0.0): 1, ('patch', 0.0): 1, ('typic', 0.0): 2, ('ahaha', 0.0): 1, ('lay', 0.0): 2, ('wine', 0.0): 1, ('glass', 0.0): 3, (\"where'\", 0.0): 4, ('akon', 0.0): 1, ('somewher', 0.0): 5, ('nightmar', 0.0): 7, ('ya', 0.0): 15, ('mino', 0.0): 2, ('crazyyi', 0.0): 1, ('thooo', 0.0): 1, ('zz', 0.0): 1, ('airport', 0.0): 7, ('straight', 0.0): 4, ('soundcheck', 0.0): 1, ('hmm', 0.0): 4, ('antagonist', 0.0): 1, ('ob', 0.0): 1, ('phantasi', 0.0): 1, ('star', 0.0): 4, ('ip', 0.0): 1, ('issu', 0.0): 11, ('bruce', 0.0): 1, ('sleepdepriv', 0.0): 1, ('tiredashel', 0.0): 1, ('4aspot', 0.0): 1, (\"kinara'\", 0.0): 1, ('awami', 0.0): 1, ('question', 0.0): 9, ('niqqa', 0.0): 1, ('answer', 0.0): 14, ('mockingjay', 0.0): 1, ('slow', 0.0): 9, ('pb.contest', 0.0): 1, ('cycl', 0.0): 2, ('aarww', 0.0): 1, ('lmbo', 0.0): 1, ('dangit', 0.0): 1, ('ohmygod', 0.0): 1, ('scenario', 0.0): 1, ('tooo', 0.0): 2, ('duck', 0.0): 1, ('baechyyi', 0.0): 1, ('okayyy', 0.0): 1, ('noon', 0.0): 3, ('drag', 0.0): 5, ('serious', 0.0): 11, ('misundersrand', 0.0): 1, ('chal', 0.0): 1, ('raha', 0.0): 1, ('hai', 0.0): 11, ('yhm', 0.0): 1, ('edsa', 0.0): 2, ('jasmingarrick', 0.0): 2, ('kikmeguy', 0.0): 5, ('webcam', 0.0): 2, ('milf', 0.0): 1, ('nakamaforev', 0.0): 3, ('kiksex', 0.0): 7, (\"unicef'\", 0.0): 1, ('fu', 0.0): 1, ('alon', 0.0): 16, ('manag', 0.0): 13, ('stephen', 0.0): 1, ('street', 0.0): 2, ('35', 0.0): 1, ('min', 0.0): 7, ('appear', 0.0): 2, ('record', 0.0): 6, ('coz', 0.0): 4, ('frustrat', 0.0): 6, ('sent', 0.0): 9, ('interest', 0.0): 9, ('woza', 0.0): 1, ('promis', 0.0): 4, ('senight', 0.0): 1, ('468', 0.0): 1, ('kikmeboy', 0.0): 9, ('gay', 0.0): 6, ('teen', 0.0): 7, ('amateur', 0.0): 5, ('hotscratch', 0.0): 1, ('sell', 0.0): 8, ('sock', 0.0): 6, ('150-160', 0.0): 1, ('peso', 0.0): 1, ('gotta', 0.0): 8, ('pay', 0.0): 8, ('degrassi', 0.0): 1, ('4-6', 0.0): 1, ('bcz', 0.0): 1, ('kat', 0.0): 3, ('chem', 0.0): 2, ('onscreen', 0.0): 1, ('ofscreen', 0.0): 1, ('kinda', 0.0): 10, ('pak', 0.0): 4, ('class', 0.0): 10, ('monthli', 0.0): 1, ('roll', 0.0): 4, ('band', 0.0): 2, ('throw', 0.0): 2, ('ironi', 0.0): 2, ('rhisfor', 0.0): 1, ('500', 0.0): 2, ('bestoftheday', 0.0): 3, ('chat', 0.0): 9, ('camsex', 0.0): 5, ('unfollow', 0.0): 11, ('particular', 0.0): 1, ('support', 0.0): 26, ('bae', 0.0): 11, ('poopi', 0.0): 1, ('pip', 0.0): 1, ('post', 0.0): 12, ('felt', 0.0): 6, ('uff', 0.0): 1, ('1.300', 0.0): 1, ('credit', 0.0): 3, ('glue', 0.0): 1, ('factori', 0.0): 1, ('kuchar', 0.0): 1, ('fast', 0.0): 7, ('graduat', 0.0): 3, ('up', 0.0): 2, ('definit', 0.0): 3, ('uni', 0.0): 2, ('ee', 0.0): 1, ('tommi', 0.0): 1, ('georgia', 0.0): 2, ('bout', 0.0): 2, ('instant', 0.0): 1, ('transmiss', 0.0): 1, ('malik', 0.0): 1, ('orang', 0.0): 2, ('suma', 0.0): 1, ('shouldeeerr', 0.0): 1, ('outfit', 0.0): 5, ('age', 
0.0): 8, ('repack', 0.0): 3, ('group', 0.0): 4, ('charl', 0.0): 1, ('grown', 0.0): 2, ('rememb', 0.0): 17, ('dy', 0.0): 1, ('rihanna', 0.0): 1, ('red', 0.0): 4, ('ging', 0.0): 2, ('boot', 0.0): 4, ('closest', 0.0): 3, ('nike', 0.0): 1, ('adida', 0.0): 1, ('inform', 0.0): 4, ('[email protected]', 0.0): 1, ('set', 0.0): 13, ('ifeely', 0.0): 1, ('harder', 0.0): 2, ('usual', 0.0): 7, ('ratbaglat', 0.0): 1, ('second', 0.0): 5, ('semest', 0.0): 2, ('gin', 0.0): 1, ('gut', 0.0): 12, ('reynold', 0.0): 1, ('dessert', 0.0): 2, ('season', 0.0): 9, ('villag', 0.0): 1, ('differ', 0.0): 10, ('citi', 0.0): 11, ('unit', 0.0): 3, ('oppress', 0.0): 1, ('mass', 0.0): 2, ('wat', 0.0): 5, ('afghanistn', 0.0): 1, ('war', 0.0): 2, ('tore', 0.0): 1, ('sunggyu', 0.0): 5, ('injur', 0.0): 7, ('plaster', 0.0): 2, ('rtd', 0.0): 1, ('loui', 0.0): 4, ('harri', 0.0): 10, ('5so', 0.0): 7, ('crowd', 0.0): 1, ('stadium', 0.0): 4, ('welder', 0.0): 1, ('ghost', 0.0): 1, ('hogo', 0.0): 1, ('vishaya', 0.0): 1, ('adu', 0.0): 1, ('bjp', 0.0): 1, ('madatt', 0.0): 1, ('anta', 0.0): 1, ('vishwa', 0.0): 1, ('ne', 0.0): 3, ('illa', 0.0): 1, ('wua', 0.0): 1, ('picki', 0.0): 1, ('finger', 0.0): 8, ('favourit', 0.0): 9, ('mutual', 0.0): 2, ('gn', 0.0): 1, ('along', 0.0): 3, ('ass', 0.0): 9, ('thent', 0.0): 1, ('423', 0.0): 1, ('sabadodeganarseguidor', 0.0): 2, ('sexual', 0.0): 4, ('sync', 0.0): 2, ('plug.dj', 0.0): 1, ('peel', 0.0): 1, ('suspems', 0.0): 1, ('cope', 0.0): 3, ('offroad', 0.0): 1, ('adventur', 0.0): 1, ('there', 0.0): 5, ('harvest', 0.0): 1, ('machineri', 0.0): 1, ('inapropri', 0.0): 1, ('weav', 0.0): 2, ('nowher', 0.0): 3, ('decent', 0.0): 2, ('invest', 0.0): 2, ('scottish', 0.0): 1, ('footbal', 0.0): 3, ('dire', 0.0): 2, ('nomoney', 0.0): 1, ('nawf', 0.0): 1, ('sum', 0.0): 2, ('becho', 0.0): 1, ('danni', 0.0): 3, ('eng', 0.0): 2, (\"let'\", 0.0): 5, ('overli', 0.0): 2, ('lab', 0.0): 1, ('ty', 0.0): 3, ('zap', 0.0): 1, ('distress', 0.0): 1, ('shot', 0.0): 6, ('cinema', 0.0): 4, ('louisianashoot', 0.0): 1, ('laugh', 0.0): 7, ('har', 0.0): 3, (\"how'\", 0.0): 5, ('chum', 0.0): 1, ('ncc', 0.0): 1, ('ph', 0.0): 2, ('balik', 0.0): 1, ('naman', 0.0): 1, ('kayo', 0.0): 1, ('itong', 0.0): 1, ('shirt', 0.0): 3, ('thaaat', 0.0): 1, ('ctto', 0.0): 1, ('expir', 0.0): 3, ('bi', 0.0): 2, ('tough', 0.0): 2, ('11', 0.0): 4, ('3:33', 0.0): 2, ('jfc', 0.0): 1, ('bio', 0.0): 3, ('bodo', 0.0): 1, ('amat', 0.0): 1, ('quick', 0.0): 5, ('yelaaa', 0.0): 1, ('dublin', 0.0): 2, ('potter', 0.0): 1, ('marathon', 0.0): 3, ('balanc', 0.0): 2, ('warm', 0.0): 5, ('comic', 0.0): 5, ('pine', 0.0): 1, ('keybind', 0.0): 1, ('featur', 0.0): 4, ('wild', 0.0): 2, ('warfar', 0.0): 1, ('control', 0.0): 2, ('diagnos', 0.0): 1, ('wiv', 0.0): 1, (\"scheuermann'\", 0.0): 1, ('diseas', 0.0): 3, ('bone', 0.0): 1, ('rlyhurt', 0.0): 1, ('howdo', 0.0): 1, ('georgesampson', 0.0): 1, ('stand', 0.0): 6, ('signal', 0.0): 3, ('reckon', 0.0): 1, ('t20', 0.0): 1, ('action', 0.0): 2, ('taunton', 0.0): 1, ('vacat', 0.0): 3, ('excit', 0.0): 6, ('justiceforsandrabland', 0.0): 2, ('sandrabland', 0.0): 6, ('disturb', 0.0): 1, ('women', 0.0): 5, ('happpi', 0.0): 1, ('justinbieb', 0.0): 4, ('daianerufato', 0.0): 3, ('ilysm', 0.0): 3, ('2015', 0.0): 12, ('07:34', 0.0): 1, ('delphi', 0.0): 2, ('weak', 0.0): 2, ('dom', 0.0): 2, ('techniqu', 0.0): 1, ('minc', 0.0): 2, ('complet', 0.0): 9, ('symphoni', 0.0): 1, ('joe', 0.0): 3, ('co', 0.0): 6, ('wth', 0.0): 2, ('aisyhhh', 0.0): 1, ('bald', 0.0): 1, ('14', 0.0): 3, ('seungchan', 0.0): 1, ('aigooo', 0.0): 1, ('riri', 0.0): 1, ('origin', 0.0): 
6, ('depend', 0.0): 2, ('vet', 0.0): 1, ('major', 0.0): 2, ('va', 0.0): 1, ('kept', 0.0): 2, ('lumin', 0.0): 1, ('follback', 0.0): 2, ('treat', 0.0): 5, ('v', 0.0): 6, ('product', 0.0): 4, ('letter', 0.0): 1, ('z', 0.0): 5, ('uniqu', 0.0): 2, ('refresh', 0.0): 1, ('popular', 0.0): 1, ('bebee', 0.0): 2, ('lt', 0.0): 1, ('inaccuraci', 0.0): 1, ('inaccur', 0.0): 1, ('worri', 0.0): 8, ('burn', 0.0): 4, ('rn', 0.0): 17, ('tragic', 0.0): 1, ('joy', 0.0): 2, ('sam', 0.0): 4, ('rush', 0.0): 2, ('toronto', 0.0): 1, ('stuart', 0.0): 1, (\"party'\", 0.0): 2, ('iyalaya', 0.0): 1, ('shade', 0.0): 3, ('round', 0.0): 3, ('clock', 0.0): 2, (';(', 0.0): 6, ('happier', 0.0): 1, ('h', 0.0): 8, ('ubusi', 0.0): 1, ('le', 0.0): 3, ('fifa', 0.0): 1, ('gymnast', 0.0): 1, ('aahhh', 0.0): 1, ('noggin', 0.0): 1, ('bump', 0.0): 1, ('feelslikeanidiot', 0.0): 1, ('pregnant', 0.0): 2, ('woman', 0.0): 5, ('dearli', 0.0): 1, ('sunshin', 0.0): 4, ('suk', 0.0): 2, ('pumpkin', 0.0): 1, ('scone', 0.0): 1, ('outnumb', 0.0): 1, ('vidcon', 0.0): 10, ('eri', 0.0): 1, ('geez', 0.0): 1, ('preciou', 0.0): 4, ('hive', 0.0): 1, ('vote', 0.0): 7, ('vietnam', 0.0): 1, ('decemb', 0.0): 2, ('dunt', 0.0): 1, ('ikr', 0.0): 3, ('sob', 0.0): 3, ('buff', 0.0): 1, ('leg', 0.0): 4, ('toni', 0.0): 1, ('deactiv', 0.0): 6, ('bra', 0.0): 2, (\"shady'\", 0.0): 1, ('isibaya', 0.0): 1, ('special', 0.0): 3, ('❤', 0.0): 21, ('️', 0.0): 20, ('😓', 0.0): 2, ('slept', 0.0): 5, ('colder', 0.0): 1, ('took', 0.0): 9, ('med', 0.0): 1, ('sausag', 0.0): 1, ('adio', 0.0): 1, ('cold', 0.0): 15, ('sore', 0.0): 9, ('ew', 0.0): 3, ('h8', 0.0): 1, ('messeng', 0.0): 2, ('shittier', 0.0): 1, ('leno', 0.0): 1, ('ident', 0.0): 1, ('crisi', 0.0): 2, ('roommat', 0.0): 1, ('knock', 0.0): 3, ('nighter', 0.0): 3, ('bird', 0.0): 2, ('flew', 0.0): 2, ('thru', 0.0): 2, ('derek', 0.0): 3, ('tour', 0.0): 7, ('wetherspoon', 0.0): 1, ('pub', 0.0): 1, ('polic', 0.0): 4, ('frank', 0.0): 2, ('ocean', 0.0): 4, ('releas', 0.0): 8, ('ff', 0.0): 4, ('lisah', 0.0): 2, ('kikm', 0.0): 8, ('eboni', 0.0): 2, ('weloveyounamjoon', 0.0): 1, ('gave', 0.0): 8, ('dress', 0.0): 6, ('polka', 0.0): 1, ('dot', 0.0): 2, ('ndi', 0.0): 1, ('yum', 0.0): 1, ('feed', 0.0): 3, ('leftov', 0.0): 2, ('side', 0.0): 6, ('cs', 0.0): 2, ('own', 0.0): 1, ('walnut', 0.0): 1, ('whip', 0.0): 1, ('wife', 0.0): 6, ('boah', 0.0): 1, ('madi', 0.0): 2, ('def', 0.0): 3, ('manga', 0.0): 1, ('giant', 0.0): 3, ('aminormalyet', 0.0): 1, ('cooki', 0.0): 2, ('breakfast', 0.0): 5, ('clutch', 0.0): 1, ('poorli', 0.0): 6, ('tummi', 0.0): 6, ('pj', 0.0): 1, ('groan', 0.0): 1, ('nou', 0.0): 1, ('adam', 0.0): 2, ('ken', 0.0): 1, ('sara', 0.0): 2, ('sister', 0.0): 4, ('accid', 0.0): 2, ('sort', 0.0): 7, ('mate', 0.0): 2, ('pick', 0.0): 12, ('rang', 0.0): 4, ('fk', 0.0): 2, ('freak', 0.0): 5, ('describ', 0.0): 1, ('eric', 0.0): 2, ('prydz', 0.0): 1, ('sister-in-law', 0.0): 1, ('instal', 0.0): 2, ('seat', 0.0): 4, ('bought', 0.0): 6, ('rear-end', 0.0): 1, (\"everyone'\", 0.0): 4, ('trash', 0.0): 2, ('boob', 0.0): 3, ('whilst', 0.0): 3, ('stair', 0.0): 1, ('childhood', 0.0): 1, ('toothsensit', 0.0): 4, ('size', 0.0): 9, ('ke', 0.0): 3, ('shem', 0.0): 2, ('trust', 0.0): 2, ('awel', 0.0): 1, ('drunk', 0.0): 2, ('weekendofmad', 0.0): 1, ('🍹', 0.0): 3, ('🍸', 0.0): 1, ('cb', 0.0): 1, ('dancer', 0.0): 1, ('choregraph', 0.0): 1, ('626-430-8715', 0.0): 1, ('messag', 0.0): 8, ('repli', 0.0): 14, ('hoe', 0.0): 1, ('xd', 0.0): 7, ('xiu', 0.0): 1, ('nk', 0.0): 1, ('gi', 0.0): 2, ('uss', 0.0): 1, ('eliss', 0.0): 1, ('ksoo', 0.0): 2, ('session', 0.0): 5, 
('tat', 0.0): 1, ('bcoz', 0.0): 1, ('bet', 0.0): 10, ('rancho', 0.0): 1, ('imperi', 0.0): 1, ('de', 0.0): 1, ('silang', 0.0): 1, ('subdivis', 0.0): 1, ('center', 0.0): 1, ('39', 0.0): 1, ('cornwal', 0.0): 1, ('verit', 0.0): 1, ('prize', 0.0): 2, ('regular', 0.0): 3, ('workout', 0.0): 1, ('spin', 0.0): 1, ('base', 0.0): 1, ('upon', 0.0): 1, ('penni', 0.0): 1, ('ebook', 0.0): 1, ('фотосет', 0.0): 1, ('addicted-to-analsex', 0.0): 1, ('sweetbj', 0.0): 2, ('blowjob', 0.0): 1, ('mhhh', 0.0): 1, ('sed', 0.0): 1, ('sg', 0.0): 1, ('dinner', 0.0): 4, ('bless', 0.0): 2, ('mee', 0.0): 2, ('enviou', 0.0): 1, ('eonni', 0.0): 1, ('lovey', 0.0): 1, ('dovey', 0.0): 1, ('dongsaeng', 0.0): 1, ('workin', 0.0): 1, ('tuesday', 0.0): 4, ('schade', 0.0): 3, ('belfast', 0.0): 1, ('jealou', 0.0): 9, ('jacob', 0.0): 5, ('isco', 0.0): 4, ('peni', 0.0): 1, ('everi', 0.0): 16, ('convers', 0.0): 6, ('wonder', 0.0): 11, ('soul', 0.0): 5, ('nation', 0.0): 2, ('louisiana', 0.0): 4, ('lafayett', 0.0): 2, ('matteroftheheart', 0.0): 1, ('waduh', 0.0): 1, ('pant', 0.0): 3, ('suspend', 0.0): 2, ('believ', 0.0): 14, ('teenag', 0.0): 2, ('clich', 0.0): 1, ('youuu', 0.0): 5, ('rma', 0.0): 1, ('jersey', 0.0): 2, ('fake', 0.0): 4, ('jaclintil', 0.0): 1, ('model', 0.0): 9, ('likeforlik', 0.0): 7, ('mpoint', 0.0): 4, ('hotfmnoaidilforariana', 0.0): 2, ('ran', 0.0): 5, ('fuckkk', 0.0): 1, ('jump', 0.0): 3, ('justin', 0.0): 3, ('finish', 0.0): 14, ('sanum', 0.0): 1, ('llaollao', 0.0): 1, ('foood', 0.0): 1, ('ubericecream', 0.0): 14, ('glare', 0.0): 1, ('vine', 0.0): 3, ('tweetin', 0.0): 1, ('mood', 0.0): 3, ('elbow', 0.0): 1, ('choreo', 0.0): 1, ('offens', 0.0): 2, ('yeyi', 0.0): 1, ('hd', 0.0): 2, ('brow', 0.0): 1, ('kit', 0.0): 6, ('slightli', 0.0): 2, ('monday', 0.0): 10, ('sux', 0.0): 1, ('enjoy', 0.0): 9, ('nothaveld', 0.0): 1, ('765', 0.0): 1, ('edm', 0.0): 1, ('likeforfollow', 0.0): 3, ('hannib', 0.0): 3, ('mosquito', 0.0): 2, ('bite', 0.0): 5, ('kinki', 0.0): 1, ('hsould', 0.0): 1, ('justget', 0.0): 1, ('marri', 0.0): 2, ('la', 0.0): 11, ('shuffl', 0.0): 4, ('int', 0.0): 1, ('buckl', 0.0): 1, ('spring', 0.0): 1, ('millz', 0.0): 1, ('aski', 0.0): 2, ('awusasho', 0.0): 1, ('unlucki', 0.0): 2, ('driver', 0.0): 7, ('briefli', 0.0): 1, ('spot', 0.0): 4, ('144p', 0.0): 1, ('brook', 0.0): 1, ('crack', 0.0): 2, ('@', 0.0): 5, ('maverickgam', 0.0): 4, ('07:32', 0.0): 1, ('07:25', 0.0): 1, ('max', 0.0): 3, ('file', 0.0): 2, ('extern', 0.0): 2, ('sd', 0.0): 1, ('via', 0.0): 1, ('airdroid', 0.0): 1, ('android', 0.0): 2, ('4.4+', 0.0): 1, ('googl', 0.0): 5, ('alright', 0.0): 3, ('cramp', 0.0): 2, ('unstan', 0.0): 1, ('tay', 0.0): 2, ('ngeze', 0.0): 1, ('cocktaili', 0.0): 1, ('classi', 0.0): 1, ('07:24', 0.0): 1, ('✈', 0.0): 2, ('raini', 0.0): 2, ('☔', 0.0): 2, ('peter', 0.0): 1, ('pen', 0.0): 1, ('spare', 0.0): 1, ('guest', 0.0): 2, ('barcelona', 0.0): 2, ('bilbao', 0.0): 1, ('booti', 0.0): 2, ('sharyl', 0.0): 1, ('shane', 0.0): 2, ('ta', 0.0): 1, ('giddi', 0.0): 1, ('d1', 0.0): 1, ('zipper', 0.0): 1, ('beyond', 0.0): 1, ('repair', 0.0): 4, ('iphon', 0.0): 5, ('upgrad', 0.0): 1, ('april', 0.0): 1, ('2016', 0.0): 1, ('cont', 0.0): 2, ('england', 0.0): 4, ('wore', 0.0): 2, ('greet', 0.0): 5, ('tempt', 0.0): 2, ('whole', 0.0): 16, ('pack', 0.0): 6, ('oreo', 0.0): 2, ('strength', 0.0): 1, ('wifi', 0.0): 5, ('network', 0.0): 4, ('within', 0.0): 3, ('lolipop', 0.0): 1, ('kebab', 0.0): 1, ('klappertart', 0.0): 1, ('cake', 0.0): 10, ('moodbost', 0.0): 2, ('shoot', 0.0): 6, ('unprepar', 0.0): 1, ('sri', 0.0): 1, ('dresscod', 0.0): 1, ('door', 0.0): 
6, ('iam', 0.0): 2, ('dnt', 0.0): 1, ('stab', 0.0): 3, ('meh', 0.0): 3, ('wrocilam', 0.0): 1, ('otp', 0.0): 3, ('5', 0.0): 14, ('looww', 0.0): 1, ('recov', 0.0): 2, ('wayn', 0.0): 2, ('insur', 0.0): 3, ('loss', 0.0): 3, ('stolen', 0.0): 2, ('accident', 0.0): 1, ('damag', 0.0): 5, ('devic', 0.0): 3, ('warranti', 0.0): 1, ('centr', 0.0): 2, ('👌', 0.0): 1, ('lmfaoo', 0.0): 1, ('accur', 0.0): 2, ('fra', 0.0): 4, ('aliv', 0.0): 2, ('steel', 0.0): 2, ('otamendi', 0.0): 1, ('ny', 0.0): 2, ('🚖', 0.0): 1, ('🗽', 0.0): 1, ('🌃', 0.0): 1, ('stealth', 0.0): 2, ('bastard', 0.0): 2, ('inc', 0.0): 3, ('steam', 0.0): 2, ('therapi', 0.0): 1, ('exhaust', 0.0): 3, ('lie', 0.0): 7, ('total', 0.0): 11, ('block', 0.0): 11, ('choic', 0.0): 5, ('switzerland', 0.0): 1, ('kfc', 0.0): 1, ('common', 0.0): 4, ('th', 0.0): 5, ('wolrd', 0.0): 1, ('fyn', 0.0): 1, ('drop', 0.0): 10, ('state', 0.0): 4, ('3g', 0.0): 2, ('christ', 0.0): 1, ('scale', 0.0): 1, ('deck', 0.0): 1, ('chair', 0.0): 4, ('yk', 0.0): 1, ('resi', 0.0): 1, ('memori', 0.0): 5, ('nude', 0.0): 4, ('bruh', 0.0): 3, ('prepar', 0.0): 3, ('lock', 0.0): 2, ('view', 0.0): 7, ('fbc', 0.0): 3, ('mork', 0.0): 1, ('873', 0.0): 1, ('kikgirl', 0.0): 13, ('premiostumundo', 0.0): 2, ('hotspotwithdanri', 0.0): 1, ('hospit', 0.0): 3, ('food', 0.0): 18, ('sone', 0.0): 1, ('produc', 0.0): 1, ('potag', 0.0): 1, ('tomato', 0.0): 1, ('blight', 0.0): 1, ('sheffield', 0.0): 1, ('mych', 0.0): 1, ('shiiit', 0.0): 2, ('screenshot', 0.0): 4, ('prompt', 0.0): 1, ('areadi', 0.0): 1, ('similar', 0.0): 4, ('soulmat', 0.0): 1, ('canon', 0.0): 1, ('zzz', 0.0): 2, ('britain', 0.0): 1, ('😁', 0.0): 3, ('mana', 0.0): 2, ('hw', 0.0): 1, ('jouch', 0.0): 1, ('por', 0.0): 1, ('que', 0.0): 1, ('liceooo', 0.0): 1, ('30', 0.0): 3, ('minut', 0.0): 6, ('pass', 0.0): 13, ('ayala', 0.0): 1, ('tunnel', 0.0): 2, ('thatscold', 0.0): 1, ('80', 0.0): 1, ('snap', 0.0): 3, ('lourd', 0.0): 1, ('bang', 0.0): 3, ('anywher', 0.0): 4, ('water', 0.0): 8, ('road', 0.0): 1, ('showbox', 0.0): 1, ('naruto', 0.0): 1, ('cartoon', 0.0): 1, ('companion', 0.0): 2, ('skinni', 0.0): 3, ('fat', 0.0): 4, ('bare', 0.0): 6, ('dubai', 0.0): 3, ('calum', 0.0): 1, ('ashton', 0.0): 1, ('✧', 0.0): 8, ('。', 0.0): 8, ('chelni', 0.0): 4, ('disappoint', 0.0): 13, ('everybodi', 0.0): 5, ('due', 0.0): 14, ('laribuggi', 0.0): 1, ('medic', 0.0): 1, ('nutella', 0.0): 1, (\"could'v\", 0.0): 3, ('siriu', 0.0): 1, ('goat', 0.0): 4, ('frudg', 0.0): 1, ('mike', 0.0): 1, ('cloth', 0.0): 6, ('stuff', 0.0): 11, ('sat', 0.0): 3, ('number', 0.0): 6, ('ring', 0.0): 1, ('bbz', 0.0): 1, ('angek', 0.0): 1, ('sbali', 0.0): 1, ('euuuwww', 0.0): 2, ('lunch', 0.0): 10, ('construct', 0.0): 3, ('worker', 0.0): 3, ('1k', 0.0): 3, ('style', 0.0): 4, ('nell', 0.0): 1, ('ik', 0.0): 2, ('death', 0.0): 3, ('jaysu', 0.0): 1, ('toast', 0.0): 1, ('insecur', 0.0): 2, ('buti', 0.0): 1, ('ure', 0.0): 2, ('poop', 0.0): 1, ('gorgeou', 0.0): 2, ('angel', 0.0): 2, ('rome', 0.0): 1, ('throat', 0.0): 10, ('llama', 0.0): 1, ('urself', 0.0): 2, ('getwellsoonamb', 0.0): 1, ('heath', 0.0): 2, ('ledger', 0.0): 1, ('appl', 0.0): 3, ('permiss', 0.0): 2, ('2-0', 0.0): 1, ('lead', 0.0): 3, ('supersport', 0.0): 1, ('milkshak', 0.0): 1, ('witcher', 0.0): 1, ('papertown', 0.0): 1, ('bale', 0.0): 1, ('9', 0.0): 5, ('méxico', 0.0): 1, ('bahay', 0.0): 1, ('bahayan', 0.0): 1, ('magisa', 0.0): 1, ('sadlyf', 0.0): 1, ('bunso', 0.0): 1, ('sleeep', 0.0): 4, ('astonvilla', 0.0): 1, ('berigaud', 0.0): 1, ('bakar', 0.0): 1, ('club', 0.0): 4, ('dear', 0.0): 11, ('allerg', 0.0): 4, ('depress', 0.0): 5, 
(\"blaine'\", 0.0): 1, ('acoust', 0.0): 2, ('version', 0.0): 5, ('excus', 0.0): 3, ('hernia', 0.0): 3, ('toxin', 0.0): 1, ('freedom', 0.0): 1, ('organ', 0.0): 2, ('ariel', 0.0): 1, ('slap', 0.0): 1, ('slam', 0.0): 1, ('bee', 0.0): 1, ('unknown', 0.0): 2, ('finddjderek', 0.0): 1, ('smell', 0.0): 3, ('uuughhh', 0.0): 1, ('grabe', 0.0): 5, ('ka', 0.0): 5, ('where', 0.0): 1, ('gf', 0.0): 3, ('james_yammouni', 0.0): 1, ('smi', 0.0): 1, ('nemesi', 0.0): 1, ('rule', 0.0): 1, ('doesnt', 0.0): 2, ('appeal', 0.0): 1, ('neeein', 0.0): 1, ('saaad', 0.0): 3, ('less', 0.0): 3, ('hang', 0.0): 7, ('creas', 0.0): 1, ('tan', 0.0): 3, ('dalla', 0.0): 4, ('suppos', 0.0): 7, ('infront', 0.0): 2, ('beato', 0.0): 1, ('tim', 0.0): 2, ('prob', 0.0): 5, ('minha', 0.0): 1, ('deleici', 0.0): 1, ('hr', 0.0): 2, ('pcb', 0.0): 1, ('ep', 0.0): 5, ('peregrin', 0.0): 1, ('8.40', 0.0): 1, ('pigeon', 0.0): 1, ('feet', 0.0): 3, ('tram', 0.0): 1, ('hav', 0.0): 2, ('spent', 0.0): 5, ('outsid', 0.0): 9, ('apt', 0.0): 1, ('build', 0.0): 3, ('key', 0.0): 3, ('bldg', 0.0): 1, ('wrote', 0.0): 3, ('dark', 0.0): 5, ('swan', 0.0): 1, ('fifth', 0.0): 2, ('mmmm', 0.0): 1, ('avi', 0.0): 4, ('nicki', 0.0): 1, ('fucjikg', 0.0): 1, ('disgust', 0.0): 6, ('buynotanapologyonitun', 0.0): 1, ('aval', 0.0): 1, ('denmark', 0.0): 1, ('nw', 0.0): 2, ('sch', 0.0): 2, ('share', 0.0): 11, ('jeslyn', 0.0): 1, ('72', 0.0): 4, ('root', 0.0): 2, ('kuch', 0.0): 1, ('nahi', 0.0): 1, ('hua', 0.0): 2, ('newbi', 0.0): 1, ('crap', 0.0): 3, ('miracl', 0.0): 1, ('4th', 0.0): 1, ('linda', 0.0): 1, ('click', 0.0): 1, ('pin', 0.0): 2, ('wing', 0.0): 3, ('epic', 0.0): 2, ('page', 0.0): 6, ('ang', 0.0): 8, ('ganda', 0.0): 1, ('💗', 0.0): 4, ('nux', 0.0): 1, ('hinanap', 0.0): 1, ('ako', 0.0): 1, ('uy', 0.0): 1, ('sched', 0.0): 1, ('anyar', 0.0): 1, ('entertain', 0.0): 2, ('typa', 0.0): 3, ('buddi', 0.0): 2, ('transpar', 0.0): 1, ('photoshop', 0.0): 2, ('planner', 0.0): 1, ('helppp', 0.0): 2, ('wearig', 0.0): 1, ('dri', 0.0): 2, ('alot', 0.0): 3, ('bu', 0.0): 5, ('prey', 0.0): 1, ('gross', 0.0): 5, ('drain', 0.0): 3, ('ausfailia', 0.0): 1, ('snow', 0.0): 3, ('footi', 0.0): 3, ('2nd', 0.0): 5, ('row', 0.0): 3, (\"m'\", 0.0): 2, ('kitkat', 0.0): 2, ('bday', 0.0): 7, ('😢', 0.0): 8, ('suger', 0.0): 1, ('olivia', 0.0): 2, ('audit', 0.0): 1, ('american', 0.0): 1, ('idol', 0.0): 2, ('injuri', 0.0): 2, ('appendix', 0.0): 1, ('burst', 0.0): 2, ('append', 0.0): 1, ('yeahh', 0.0): 2, ('fack', 0.0): 2, ('nhl', 0.0): 1, ('khami', 0.0): 2, ('favorit', 0.0): 4, ('rise', 0.0): 3, ('reaali', 0.0): 1, ('ja', 0.0): 2, ('naomi', 0.0): 1, ('modern', 0.0): 1, ('contemporari', 0.0): 1, ('slack', 0.0): 1, ('565', 0.0): 1, ('blond', 0.0): 2, ('jahat', 0.0): 3, ('discount', 0.0): 1, ('thorp', 0.0): 2, ('park', 0.0): 7, ('esnho', 0.0): 1, ('node', 0.0): 1, ('advanc', 0.0): 4, ('directx', 0.0): 1, ('workshop', 0.0): 1, ('p2', 0.0): 1, ('upload', 0.0): 2, ('remov', 0.0): 5, ('blackberri', 0.0): 1, ('shitti', 0.0): 1, ('mobil', 0.0): 2, ('povertyyouareevil', 0.0): 1, ('struggl', 0.0): 4, ('math', 0.0): 1, ('emm', 0.0): 1, ('data', 0.0): 6, ('elgin', 0.0): 1, ('vava', 0.0): 1, ('makati', 0.0): 1, ('💛', 0.0): 4, ('baon', 0.0): 1, ('soup', 0.0): 3, ('soak', 0.0): 1, ('bread', 0.0): 2, ('mush', 0.0): 1, (\"they'd\", 0.0): 2, ('matt', 0.0): 2, ('ouat', 0.0): 1, ('beach', 0.0): 5, ('blinkin', 0.0): 1, ('unblock', 0.0): 1, ('headack', 0.0): 1, ('tension', 0.0): 1, ('erit', 0.0): 1, ('perspect', 0.0): 1, ('wed', 0.0): 4, ('playlist', 0.0): 2, ('endlessli', 0.0): 1, ('blush', 0.0): 1, ('bat', 0.0): 1, ('kiddo', 
0.0): 1, ('rumbel', 0.0): 1, ('overwhelm', 0.0): 1, ('thrown', 0.0): 2, ('irrespons', 0.0): 1, ('pakighinabi', 0.0): 1, ('pinkfinit', 0.0): 1, ('beb', 0.0): 2, ('migrain', 0.0): 2, ('almost', 0.0): 11, ('coyot', 0.0): 1, ('outta', 0.0): 1, ('mad', 0.0): 11, ('😒', 0.0): 3, ('headach', 0.0): 9, ('인피니트', 0.0): 2, ('save', 0.0): 6, ('baechu', 0.0): 1, ('calibraskaep', 0.0): 3, ('r', 0.0): 19, ('fanci', 0.0): 2, ('yt', 0.0): 3, ('purchas', 0.0): 2, ('elgato', 0.0): 1, ('ant', 0.0): 2, ('unexpect', 0.0): 2, ('bestfriend', 0.0): 9, ('faint', 0.0): 1, ('bp', 0.0): 1, ('appar', 0.0): 5, ('shower', 0.0): 3, ('subway', 0.0): 1, ('cool', 0.0): 5, ('prayer', 0.0): 2, ('fragil', 0.0): 1, ('huge', 0.0): 3, ('gap', 0.0): 1, ('plot', 0.0): 2, ('bungi', 0.0): 1, ('folk', 0.0): 1, ('raspberri', 0.0): 1, ('pi', 0.0): 1, ('shoe', 0.0): 2, ('woohyun', 0.0): 2, ('guilti', 0.0): 1, ('monica', 0.0): 2, ('davao', 0.0): 1, ('luckyyi', 0.0): 1, ('confid', 0.0): 1, ('eunha', 0.0): 1, ('misplac', 0.0): 1, ('den', 0.0): 1, ('dae', 0.0): 1, ('bap', 0.0): 1, ('likewis', 0.0): 1, ('liam', 0.0): 1, ('dylan', 0.0): 3, ('huehu', 0.0): 1, ('rice', 0.0): 1, ('krispi', 0.0): 1, ('marshmallow', 0.0): 2, ('srsli', 0.0): 7, ('birmingham', 0.0): 1, ('m5m6junction', 0.0): 1, ('soulsurvivor', 0.0): 1, ('stafford', 0.0): 1, ('progress', 0.0): 1, ('mixtur', 0.0): 1, (\"they'v\", 0.0): 4, ('practic', 0.0): 1, ('lage', 0.0): 1, ('ramd', 0.0): 1, ('lesbian', 0.0): 3, ('oralsex', 0.0): 4, ('munchkin', 0.0): 1, ('juja', 0.0): 1, ('murugan', 0.0): 1, ('handl', 0.0): 3, ('dia', 0.0): 2, ('bgtau', 0.0): 1, ('harap', 0.0): 1, ('bagi', 0.0): 1, ('aminn', 0.0): 1, ('fraand', 0.0): 1, ('😬', 0.0): 2, ('bigbang', 0.0): 2, ('steak', 0.0): 1, ('younger', 0.0): 2, ('sian', 0.0): 2, ('pizza', 0.0): 7, ('5am', 0.0): 5, ('nicoleapag', 0.0): 1, ('makeup', 0.0): 4, ('hellish', 0.0): 1, ('thirstyyi', 0.0): 1, ('chesti', 0.0): 1, ('dad', 0.0): 9, (\"nando'\", 0.0): 1, ('22', 0.0): 3, ('bow', 0.0): 2, ('queen', 0.0): 3, ('brave', 0.0): 1, ('hen', 0.0): 1, ('leed', 0.0): 9, ('rdd', 0.0): 1, ('dissip', 0.0): 1, ('. 
.', 0.0): 1, ('pump', 0.0): 2, ('capee', 0.0): 1, ('japan', 0.0): 2, ('random', 0.0): 1, ('young', 0.0): 5, ('outliv', 0.0): 1, ('x-ray', 0.0): 1, ('dental', 0.0): 1, ('spine', 0.0): 1, ('relief', 0.0): 1, ('popol', 0.0): 1, ('stomach', 0.0): 8, ('frog', 0.0): 2, ('brad', 0.0): 1, ('gen.ad', 0.0): 1, ('price', 0.0): 5, ('negoti', 0.0): 3, ('huhuhuhuhu', 0.0): 1, ('bbmadeinmanila', 0.0): 1, ('findavip', 0.0): 1, ('boyirl', 0.0): 1, ('yasss', 0.0): 1, ('6th', 0.0): 1, ('june', 0.0): 3, ('lain', 0.0): 1, ('diffici', 0.0): 1, ('custom', 0.0): 1, ('internet', 0.0): 9, ('near', 0.0): 9, ('speed', 0.0): 2, ('escap', 0.0): 1, ('rapist', 0.0): 1, ('commit', 0.0): 2, ('crime', 0.0): 1, ('bachpan', 0.0): 1, ('ki', 0.0): 2, ('yaadein', 0.0): 1, ('finnair', 0.0): 1, ('heathrow', 0.0): 1, ('norwegian', 0.0): 1, (':\\\\', 0.0): 1, ('batteri', 0.0): 3, ('upvot', 0.0): 4, ('keeno', 0.0): 1, ('whatthefuck', 0.0): 1, ('grotti', 0.0): 1, ('attent', 0.0): 1, ('seeker', 0.0): 1, ('moral', 0.0): 1, ('fern', 0.0): 1, ('mimi', 0.0): 1, ('bali', 0.0): 1, ('she', 0.0): 4, ('pleasee', 0.0): 3, ('brb', 0.0): 1, ('lowbat', 0.0): 1, ('otwolgrandtrail', 0.0): 4, ('funk', 0.0): 1, ('wewanticecream', 0.0): 1, ('sweat', 0.0): 2, ('eugh', 0.0): 1, ('speak', 0.0): 4, ('occasion', 0.0): 1, (\"izzy'\", 0.0): 1, ('dorm', 0.0): 1, ('choppi', 0.0): 1, ('paul', 0.0): 1, ('switch', 0.0): 4, (\"infinite'\", 0.0): 2, ('5:30', 0.0): 2, ('cayton', 0.0): 1, ('bay', 0.0): 2, ('emma', 0.0): 2, ('jen', 0.0): 1, ('darcey', 0.0): 1, ('connor', 0.0): 1, ('spoke', 0.0): 1, ('nail', 0.0): 2, ('biggest', 0.0): 3, ('blue', 0.0): 5, ('bottl', 0.0): 3, ('roommateexperi', 0.0): 1, ('yup', 0.0): 4, ('avoid', 0.0): 2, ('ic', 0.0): 1, ('te', 0.0): 1, ('auto-followback', 0.0): 1, ('asian', 0.0): 2, ('puppi', 0.0): 3, ('ljp', 0.0): 1, ('1/5', 0.0): 1, ('nowday', 0.0): 1, ('attach', 0.0): 2, ('beat', 0.0): 2, ('numb', 0.0): 1, ('dentist', 0.0): 3, ('misss', 0.0): 2, ('muchhh', 0.0): 1, ('youtub', 0.0): 5, ('rid', 0.0): 3, ('tab', 0.0): 2, ('uca', 0.0): 1, ('onto', 0.0): 2, ('track', 0.0): 3, ('bigtim', 0.0): 1, ('rumor', 0.0): 3, ('warmest', 0.0): 1, ('chin', 0.0): 2, ('tickl', 0.0): 1, ('♫', 0.0): 1, ('zikra', 0.0): 1, ('lusi', 0.0): 1, ('hasya', 0.0): 1, ('nugget', 0.0): 3, ('som', 0.0): 1, ('lu', 0.0): 1, ('olymp', 0.0): 1, (\"millie'\", 0.0): 1, ('guinea', 0.0): 1, ('lewi', 0.0): 1, ('748292', 0.0): 1, (\"we'll\", 0.0): 8, ('ano', 0.0): 2, ('22stan', 0.0): 1, ('24/7', 0.0): 2, ('thankyou', 0.0): 2, ('kanina', 0.0): 2, ('breakdown', 0.0): 2, ('mag', 0.0): 2, ('hatee', 0.0): 1, ('leas', 0.0): 1, ('written', 0.0): 2, ('hurri', 0.0): 4, ('attempt', 0.0): 1, ('6g', 0.0): 1, ('unsuccess', 0.0): 1, ('earlob', 0.0): 1, ('sue', 0.0): 1, ('dreari', 0.0): 1, ('denis', 0.0): 1, ('muriel', 0.0): 1, ('ahouré', 0.0): 1, ('pr', 0.0): 1, ('brand', 0.0): 1, ('imag', 0.0): 4, ('opportun', 0.0): 1, ('po', 0.0): 1, ('beg', 0.0): 2, (\"kath'd\", 0.0): 1, ('respond', 0.0): 2, ('chop', 0.0): 1, ('wbu', 0.0): 1, ('yess', 0.0): 2, ('kme', 0.0): 1, ('tom', 0.0): 4, ('cram', 0.0): 1, ('–', 0.0): 1, ('curiou', 0.0): 1, ('on-board', 0.0): 1, ('announc', 0.0): 3, ('trespass', 0.0): 1, ('fr', 0.0): 3, ('clandestin', 0.0): 1, ('muller', 0.0): 1, ('obviou', 0.0): 1, ('mufc', 0.0): 1, ('colour', 0.0): 4, ('stu', 0.0): 2, ('movie', 0.0): 1, ('buddyyi', 0.0): 1, ('feelgoodfriday', 0.0): 1, ('forest', 0.0): 1, ('6:30', 0.0): 1, ('babysit', 0.0): 1, ('opix', 0.0): 1, ('805', 0.0): 1, ('pilllow', 0.0): 1, ('fool', 0.0): 1, ('brag', 0.0): 1, ('skrillah', 0.0): 1, ('drown', 0.0): 2, ('gue', 
0.0): 1, ('report', 0.0): 4, ('eventu', 0.0): 1, ('north', 0.0): 1, ('west', 0.0): 2, ('kitti', 0.0): 1, ('sjkao', 0.0): 1, ('mm', 0.0): 2, ('srri', 0.0): 1, ('honma', 0.0): 1, ('yeh', 0.0): 1, ('walay', 0.0): 1, ('bhi', 0.0): 2, ('bohat', 0.0): 1, ('wailay', 0.0): 1, ('hain', 0.0): 2, ('pre-season', 0.0): 1, ('friendli', 0.0): 3, ('pe', 0.0): 3, ('itna', 0.0): 2, ('shor', 0.0): 1, ('machaya', 0.0): 1, ('mein', 0.0): 1, ('samjha', 0.0): 1, ('cup', 0.0): 3, ('note', 0.0): 2, ('😄', 0.0): 1, ('👍', 0.0): 1, ('😔', 0.0): 7, ('sirkay', 0.0): 1, ('wali', 0.0): 1, ('pyaaz', 0.0): 1, ('daal', 0.0): 2, ('onion', 0.0): 1, ('vinegar', 0.0): 1, ('cook', 0.0): 3, ('tutori', 0.0): 1, ('soho', 0.0): 1, ('wobbl', 0.0): 1, ('server', 0.0): 4, ('ciao', 0.0): 1, ('masaan', 0.0): 1, ('muv', 0.0): 1, ('beast', 0.0): 2, ('hayst', 0.0): 1, ('cr', 0.0): 1, ('hnnn', 0.0): 1, ('fluffi', 0.0): 2, ('comeback', 0.0): 3, ('korea', 0.0): 1, ('wow', 0.0): 10, ('act', 0.0): 4, ('optimis', 0.0): 1, ('soniii', 0.0): 1, ('kahaaa', 0.0): 1, ('shave', 0.0): 3, ('tryna', 0.0): 3, ('healthi', 0.0): 2, ('freez', 0.0): 3, ('fml', 0.0): 4, ('jacket', 0.0): 1, ('sleepi', 0.0): 4, ('cyber', 0.0): 1, ('bulli', 0.0): 2, ('racial', 0.0): 2, ('scari', 0.0): 6, ('hall', 0.0): 1, ('stockholm', 0.0): 1, ('loool', 0.0): 3, ('bunch', 0.0): 3, ('among', 0.0): 1, ('__', 0.0): 2, ('busier', 0.0): 1, ('onward', 0.0): 1, ('ol', 0.0): 2, ('coincid', 0.0): 1, ('imac', 0.0): 1, ('launch', 0.0): 2, ('gram', 0.0): 1, ('nearer', 0.0): 1, ('blain', 0.0): 2, ('darren', 0.0): 2, ('layout', 0.0): 3, ('fuuuck', 0.0): 2, ('jesu', 0.0): 1, ('gishwh', 0.0): 1, ('exclud', 0.0): 1, ('unless', 0.0): 4, ('c', 0.0): 7, ('angelica', 0.0): 1, ('pull', 0.0): 5, ('colleg', 0.0): 5, ('movement', 0.0): 1, ('frou', 0.0): 1, ('vaccin', 0.0): 1, ('armor', 0.0): 2, ('legendari', 0.0): 1, ('cash', 0.0): 2, ('effort', 0.0): 2, ('nat', 0.0): 2, ('brake', 0.0): 1, ('grumpi', 0.0): 4, ('wreck', 0.0): 1, ('decis', 0.0): 2, ('gahhh', 0.0): 1, ('teribl', 0.0): 1, ('kilig', 0.0): 1, ('togeth', 0.0): 7, ('weaker', 0.0): 1, ('shravan', 0.0): 1, ('tv', 0.0): 4, ('stooop', 0.0): 1, ('gi-guilti', 0.0): 1, ('akooo', 0.0): 1, ('imveryverysorri', 0.0): 1, ('cd', 0.0): 1, ('grey', 0.0): 3, ('basenam', 0.0): 1, ('path', 0.0): 1, ('theme', 0.0): 2, ('cigar', 0.0): 1, ('speaker', 0.0): 1, ('volum', 0.0): 1, ('promethazin', 0.0): 1, ('zopiclon', 0.0): 1, ('addit', 0.0): 1, ('quetiapin', 0.0): 1, ('modifi', 0.0): 1, ('prescript', 0.0): 1, ('greska', 0.0): 1, ('macedonian', 0.0): 1, ('slovak', 0.0): 1, ('hike', 0.0): 1, ('certainli', 0.0): 2, ('browser', 0.0): 2, ('os', 0.0): 1, ('zokay', 0.0): 1, ('accent', 0.0): 1, ('b-but', 0.0): 1, ('gintama', 0.0): 1, ('shinsengumi', 0.0): 1, ('chapter', 0.0): 1, ('andi', 0.0): 1, ('crappl', 0.0): 1, ('agre', 0.0): 5, ('ftw', 0.0): 2, ('phandroid', 0.0): 1, ('tline', 0.0): 1, ('orchestra', 0.0): 1, ('ppl', 0.0): 5, ('rehears', 0.0): 1, ('bittersweet', 0.0): 1, ('eunji', 0.0): 1, ('bakit', 0.0): 4, ('121st', 0.0): 1, (\"yesterday'\", 0.0): 1, ('rt', 0.0): 8, ('ehdar', 0.0): 1, ('pegea', 0.0): 1, ('panga', 0.0): 1, ('dosto', 0.0): 1, ('nd', 0.0): 1, ('real_liam_payn', 0.0): 1, ('retweet', 0.0): 5, ('3/10', 0.0): 1, ('dmed', 0.0): 1, ('ad', 0.0): 1, ('yay', 0.0): 3, ('23', 0.0): 2, ('alreaddyyi', 0.0): 1, ('luceleva', 0.0): 1, ('21', 0.0): 1, ('porno', 0.0): 3, ('countrymus', 0.0): 4, ('sexysasunday', 0.0): 2, ('naeun', 0.0): 1, ('goal', 0.0): 5, (\"son'\", 0.0): 1, ('kidney', 0.0): 2, ('printer', 0.0): 1, ('ink', 0.0): 2, ('asham', 0.0): 3, ('ihatesomepeopl', 0.0): 
1, ('tabl', 0.0): 2, ('0-2', 0.0): 1, ('brain', 0.0): 2, ('hard-wir', 0.0): 1, ('canadian', 0.0): 1, ('acn', 0.0): 2, ('gulo', 0.0): 1, ('kandekj', 0.0): 1, ('rize', 0.0): 1, ('meydan', 0.0): 1, ('experienc', 0.0): 2, ('fcking', 0.0): 1, ('crei', 0.0): 1, ('stabl', 0.0): 1, ('dormmat', 0.0): 1, ('pre', 0.0): 3, ('bo3', 0.0): 1, ('cod', 0.0): 2, ('redeem', 0.0): 1, ('invalid', 0.0): 1, ('wag', 0.0): 1, ('hopia', 0.0): 1, ('campaign', 0.0): 2, ('editor', 0.0): 1, ('reveal', 0.0): 2, ('booo', 0.0): 2, ('extens', 0.0): 1, ('rightnow', 0.0): 1, ('btu', 0.0): 1, ('karaok', 0.0): 1, ('licenc', 0.0): 1, ('apb', 0.0): 2, ('mbf', 0.0): 1, ('kpop', 0.0): 2, ('hahahaokay', 0.0): 1, ('basara', 0.0): 1, ('capcom', 0.0): 3, ('pc', 0.0): 2, ('url', 0.0): 2, ('web', 0.0): 2, ('site', 0.0): 6, ('design', 0.0): 3, ('grumbl', 0.0): 2, ('migrant', 0.0): 1, ('daddi', 0.0): 4, ('legit', 0.0): 1, ('australia', 0.0): 3, ('awsm', 0.0): 1, ('entir', 0.0): 5, ('tmw', 0.0): 1, ('uwu', 0.0): 1, ('jinki', 0.0): 1, ('taem', 0.0): 1, ('gif', 0.0): 2, ('cambridg', 0.0): 1, ('viath', 0.0): 1, ('brilliant', 0.0): 1, ('cypru', 0.0): 1, ('wet', 0.0): 10, ('30th', 0.0): 1, ('zayncomebackto', 0.0): 2, ('1d', 0.0): 6, ('senior', 0.0): 2, ('spazz', 0.0): 1, ('soobin', 0.0): 1, ('27', 0.0): 1, ('unmarri', 0.0): 1, ('float', 0.0): 3, ('pressur', 0.0): 3, ('winter', 0.0): 4, ('lifetim', 0.0): 2, ('hiondsh', 0.0): 1, ('58543', 0.0): 1, ('kikmenow', 0.0): 9, ('sexdat', 0.0): 2, (\"demi'\", 0.0): 1, ('junjou', 0.0): 2, ('romantica', 0.0): 1, ('cruel', 0.0): 1, ('privileg', 0.0): 2, ('mixtap', 0.0): 2, ('convinc', 0.0): 3, ('friex', 0.0): 1, ('taco', 0.0): 2, ('europ', 0.0): 2, ('shaylan', 0.0): 1, ('4:20', 0.0): 1, ('ylona', 0.0): 1, ('nah', 0.0): 4, ('notanapolog', 0.0): 3, ('ouh', 0.0): 1, ('tax', 0.0): 4, ('ohhh', 0.0): 2, ('nm', 0.0): 1, ('term', 0.0): 1, ('apolog', 0.0): 3, ('encanta', 0.0): 1, ('vale', 0.0): 1, ('osea', 0.0): 1, ('bea', 0.0): 1, ('♛', 0.0): 210, ('》', 0.0): 210, ('beli̇ev', 0.0): 35, ('wi̇ll', 0.0): 35, ('justi̇n', 0.0): 35, ('x15', 0.0): 35, ('350', 0.0): 4, ('see', 0.0): 35, ('me', 0.0): 35, ('40', 0.0): 3, ('dj', 0.0): 2, ('net', 0.0): 2, ('349', 0.0): 1, ('baek', 0.0): 1, ('tight', 0.0): 1, ('dunwan', 0.0): 1, ('suan', 0.0): 1, ('ba', 0.0): 3, ('haiz', 0.0): 1, ('otw', 0.0): 1, ('trade', 0.0): 3, ('venic', 0.0): 1, ('348', 0.0): 1, ('strong', 0.0): 6, ('adult', 0.0): 3, ('347', 0.0): 1, ('tree', 0.0): 3, ('hill', 0.0): 1, ('😕', 0.0): 1, ('com', 0.0): 1, ('insonia', 0.0): 1, ('346', 0.0): 1, ('rick', 0.0): 1, ('ross', 0.0): 1, ('wallet', 0.0): 4, ('empti', 0.0): 3, ('heartbreak', 0.0): 2, ('episod', 0.0): 11, ('345', 0.0): 1, ('milli', 0.0): 1, (':)', 0.0): 2, ('diff', 0.0): 1, ('persona', 0.0): 1, ('golden', 0.0): 1, ('scene', 0.0): 1, ('advert', 0.0): 1, ('determin', 0.0): 2, ('roseburi', 0.0): 1, ('familyhom', 0.0): 1, ('daw', 0.0): 2, ('344', 0.0): 1, ('monkey', 0.0): 1, ('yea', 0.0): 2, ('343', 0.0): 1, ('sweeti', 0.0): 2, ('erica', 0.0): 1, ('istg', 0.0): 1, ('lick', 0.0): 1, ('jackson', 0.0): 4, ('nsbzhdnxndamal', 0.0): 1, ('342', 0.0): 1, ('11:15', 0.0): 1, ('2hour', 0.0): 1, ('11:25', 0.0): 1, ('341', 0.0): 1, ('fandom', 0.0): 2, ('mahilig', 0.0): 1, ('mam-bulli', 0.0): 1, ('mtaani', 0.0): 1, ('tunaita', 0.0): 1, ('viazi', 0.0): 1, ('choma', 0.0): 1, ('laid', 0.0): 1, ('celebr', 0.0): 3, ('7am', 0.0): 1, ('jerk', 0.0): 1, ('lah', 0.0): 2, ('magic', 0.0): 1, ('menil', 0.0): 1, ('340', 0.0): 1, (\"kam'\", 0.0): 1, ('meee', 0.0): 1, ('diz', 0.0): 1, ('biooo', 0.0): 1, ('ay', 0.0): 1, ('taray', 0.0): 1, 
('yumu-youtub', 0.0): 1, ('339', 0.0): 1, ('parijat', 0.0): 1, ('willmissyouparijat', 0.0): 1, ('abroad', 0.0): 2, ('jolli', 0.0): 1, ('scotland', 0.0): 2, ('338', 0.0): 1, ('mcnugget', 0.0): 1, ('sophi', 0.0): 5, ('feedback', 0.0): 4, ('met', 0.0): 7, ('caramello', 0.0): 2, ('koala', 0.0): 1, ('bar', 0.0): 1, ('suckmejimin', 0.0): 1, ('337', 0.0): 1, ('sucki', 0.0): 2, ('laughter', 0.0): 1, ('pou', 0.0): 1, ('goddamn', 0.0): 1, ('bark', 0.0): 1, ('nje', 0.0): 1, ('blast', 0.0): 1, ('hun', 0.0): 4, ('dbn', 0.0): 2, ('🎀', 0.0): 1, ('336', 0.0): 1, ('hardest', 0.0): 1, ('335', 0.0): 1, ('pledg', 0.0): 1, ('realiz', 0.0): 7, ('viber', 0.0): 1, ('mwah', 0.0): 1, ('estat', 0.0): 1, ('crush', 0.0): 1, ('lansi', 0.0): 1, ('334', 0.0): 1, ('hp', 0.0): 4, ('waah', 0.0): 1, ('miami', 0.0): 1, ('vandag', 0.0): 1, ('kgola', 0.0): 1, ('neng', 0.0): 1, ('eintlik', 0.0): 1, ('porn', 0.0): 2, ('4like', 0.0): 5, ('repost', 0.0): 2, ('333', 0.0): 4, ('magpi', 0.0): 1, ('22.05', 0.0): 1, ('15-24', 0.0): 1, ('05.15', 0.0): 1, ('coach', 0.0): 2, ('ador', 0.0): 1, ('chswiyfxcskcalum', 0.0): 1, ('nvm', 0.0): 2, ('lemm', 0.0): 1, ('quiet', 0.0): 3, ('foof', 0.0): 1, ('332', 0.0): 1, ('casilla', 0.0): 1, ('manchest', 0.0): 3, ('xi', 0.0): 1, ('rmtour', 0.0): 1, ('heavi', 0.0): 3, ('irl', 0.0): 2, ('blooper', 0.0): 2, ('huhuhuhu', 0.0): 1, ('na-tak', 0.0): 1, ('sorta', 0.0): 1, ('unfriend', 0.0): 1, ('greysonch', 0.0): 1, ('sandwich', 0.0): 4, ('bell', 0.0): 1, ('sebastian', 0.0): 1, ('rewatch', 0.0): 1, ('s4', 0.0): 1, ('ser', 0.0): 1, ('past', 0.0): 5, ('heart-break', 0.0): 1, ('outdat', 0.0): 1, ('m4', 0.0): 1, ('abandon', 0.0): 1, ('theater', 0.0): 1, ('smh', 0.0): 6, ('7-3', 0.0): 1, ('7.30-', 0.0): 1, ('ekk', 0.0): 1, ('giriboy', 0.0): 1, ('harriet', 0.0): 1, ('gegu', 0.0): 1, ('gray', 0.0): 1, ('truth', 0.0): 4, ('tbt', 0.0): 1, ('331', 0.0): 1, ('roof', 0.0): 2, ('indian', 0.0): 2, ('polit', 0.0): 3, ('blame', 0.0): 3, ('68', 0.0): 1, ('repres', 0.0): 1, ('corbyn', 0.0): 1, (\"labour'\", 0.0): 1, ('fortun', 0.0): 1, ('icecream', 0.0): 3, ('cuti', 0.0): 2, ('ry', 0.0): 1, ('lfccw', 0.0): 1, ('5ever', 0.0): 1, ('america', 0.0): 3, ('ontheroadagain', 0.0): 1, ('halaaang', 0.0): 1, ('reciev', 0.0): 1, ('flip', 0.0): 4, ('flop', 0.0): 1, ('caesarspalac', 0.0): 1, ('socialreward', 0.0): 1, ('requir', 0.0): 2, ('cali', 0.0): 1, ('fuckboy', 0.0): 1, ('330', 0.0): 1, ('deliveri', 0.0): 3, ('chrompet', 0.0): 1, ('easili', 0.0): 2, ('immun', 0.0): 1, ('system', 0.0): 3, ('lush', 0.0): 1, ('bathtub', 0.0): 1, ('php', 0.0): 1, ('mysql', 0.0): 1, ('libmysqlclient-dev', 0.0): 1, ('dev', 0.0): 2, ('pleasanton', 0.0): 1, ('wala', 0.0): 1, ('329', 0.0): 1, ('quickli', 0.0): 2, ('megan', 0.0): 1, ('heed', 0.0): 2, ('328', 0.0): 1, ('gwss', 0.0): 1, ('thankyouu', 0.0): 1, ('charad', 0.0): 1, ('becom', 0.0): 5, ('piano', 0.0): 2, ('327', 0.0): 1, ('complaint', 0.0): 2, ('yell', 0.0): 2, ('whatsoev', 0.0): 2, ('pete', 0.0): 1, ('wentz', 0.0): 1, ('shogi', 0.0): 1, ('blameshoghicp', 0.0): 1, ('classmat', 0.0): 1, ('troubl', 0.0): 1, ('fixedgearfrenzi', 0.0): 1, ('dispatch', 0.0): 1, ('theyr', 0.0): 2, ('hat', 0.0): 2, (\"shamuon'\", 0.0): 1, ('tokyo', 0.0): 1, ('toe', 0.0): 2, ('horrend', 0.0): 2, (\"someone'\", 0.0): 2, ('326', 0.0): 1, ('hasb', 0.0): 1, ('atti', 0.0): 1, ('muji', 0.0): 1, ('sirf', 0.0): 1, ('sensibl', 0.0): 1, ('etc', 0.0): 2, ('brum', 0.0): 1, ('cyclerevolut', 0.0): 1, ('caaannnttt', 0.0): 1, ('payment', 0.0): 3, ('overdrawn', 0.0): 1, ('tbf', 0.0): 1, ('complain', 0.0): 2, ('perfum', 0.0): 1, ('sampl', 0.0): 
1, ('chanel', 0.0): 1, ('burberri', 0.0): 1, ('prada', 0.0): 1, ('325', 0.0): 1, ('noesss', 0.0): 1, ('topgear', 0.0): 1, ('worthi', 0.0): 1, ('bridesmaid', 0.0): 1, (\"tomorrow'\", 0.0): 2, ('gather', 0.0): 1, ('sudden', 0.0): 4, ('324', 0.0): 1, ('randomrestart', 0.0): 1, ('randomreboot', 0.0): 1, ('lumia', 0.0): 1, ('windowsphon', 0.0): 1, (\"microsoft'\", 0.0): 1, ('mañana', 0.0): 1, ('male', 0.0): 1, ('rap', 0.0): 1, ('sponsor', 0.0): 3, ('striker', 0.0): 2, ('lvg', 0.0): 1, ('behind', 0.0): 3, ('refurbish', 0.0): 1, ('cintiq', 0.0): 1, (\"finnick'\", 0.0): 1, ('askfinnick', 0.0): 1, ('contain', 0.0): 1, ('hairi', 0.0): 1, ('323', 0.0): 1, ('buri', 0.0): 1, ('omaygad', 0.0): 1, ('vic', 0.0): 1, ('surgeri', 0.0): 4, ('amber', 0.0): 8, ('tt.tt', 0.0): 1, ('hyper', 0.0): 2, ('vega', 0.0): 2, ('322', 0.0): 1, ('imiss', 0.0): 1, ('321', 0.0): 1, ('320', 0.0): 1, ('know.for', 0.0): 1, ('prepaid', 0.0): 1, ('none', 0.0): 4, ('319', 0.0): 1, ('grandma', 0.0): 1, (\"grandpa'\", 0.0): 1, ('farm', 0.0): 1, ('cow', 0.0): 1, ('sheep', 0.0): 1, ('hors', 0.0): 3, ('fruit', 0.0): 2, ('veget', 0.0): 1, ('puke', 0.0): 2, ('deliri', 0.0): 1, ('motilium', 0.0): 1, ('shite', 0.0): 1, ('318', 0.0): 1, ('schoolwork', 0.0): 1, (\"phoebe'\", 0.0): 1, ('317', 0.0): 1, ('pothol', 0.0): 1, ('316', 0.0): 1, ('notif', 0.0): 3, ('1,300', 0.0): 1, ('robyn', 0.0): 1, ('necklac', 0.0): 1, ('rachel', 0.0): 1, ('bhai', 0.0): 1, ('ramzan', 0.0): 1, ('crosss', 0.0): 1, ('clapham', 0.0): 1, ('investig', 0.0): 2, ('sth', 0.0): 1, ('essenti', 0.0): 1, ('photoshooot', 0.0): 1, ('austin', 0.0): 1, ('mahon', 0.0): 1, ('shut', 0.0): 3, ('andam', 0.0): 1, ('memor', 0.0): 1, ('cotton', 0.0): 1, ('candi', 0.0): 3, ('stock', 0.0): 3, ('swallow', 0.0): 1, ('snot', 0.0): 1, ('choke', 0.0): 1, ('taknottem', 0.0): 1, ('477', 0.0): 1, ('btob', 0.0): 2, ('percentag', 0.0): 1, ('shoshannavassil', 0.0): 1, ('swift', 0.0): 1, ('flat', 0.0): 3, ('a9', 0.0): 2, ('wsalelov', 0.0): 5, ('sexyjan', 0.0): 1, ('horni', 0.0): 2, ('goodmus', 0.0): 4, ('debut', 0.0): 3, ('lart', 0.0): 1, ('sew', 0.0): 1, ('skyfal', 0.0): 1, ('premier', 0.0): 1, ('yummi', 0.0): 2, ('manteca', 0.0): 1, (\"she'd\", 0.0): 2, ('probabl', 0.0): 8, ('shiatsu', 0.0): 1, ('heat', 0.0): 1, ('risk', 0.0): 3, ('edward', 0.0): 1, ('hopper', 0.0): 1, ('eyyah', 0.0): 1, ('utd', 0.0): 2, ('born', 0.0): 1, ('1-0', 0.0): 1, ('cart', 0.0): 1, ('shop', 0.0): 10, ('log', 0.0): 2, ('aaa', 0.0): 2, ('waifu', 0.0): 1, ('break', 0.0): 8, ('breakup', 0.0): 3, ('bother', 0.0): 3, ('bia', 0.0): 1, ('syndrom', 0.0): 1, ('shi', 0.0): 1, ('bias', 0.0): 1, ('pixel', 0.0): 2, ('weh', 0.0): 2, ('area', 0.0): 4, ('maymay', 0.0): 1, ('magpaalam', 0.0): 1, ('tf', 0.0): 3, ('subtitl', 0.0): 1, ('oitnb', 0.0): 1, ('backstori', 0.0): 1, ('jeremi', 0.0): 1, ('kyle', 0.0): 1, ('gimm', 0.0): 2, ('meal', 0.0): 3, ('neat-o', 0.0): 1, ('wru', 0.0): 1, ('scissor', 0.0): 1, ('creation', 0.0): 1, ('public', 0.0): 1, ('amtir', 0.0): 1, ('imysm', 0.0): 2, ('tut', 0.0): 1, ('trop', 0.0): 2, ('tard', 0.0): 1, ('deadlin', 0.0): 1, ('31', 0.0): 2, ('st', 0.0): 3, ('child', 0.0): 4, ('oct', 0.0): 2, ('bush', 0.0): 2, ('premiun', 0.0): 1, ('notcool', 0.0): 1, ('2/3', 0.0): 2, ('lahat', 0.0): 2, ('ng', 0.0): 4, ('araw', 0.0): 1, ('nage', 0.0): 1, ('gyu', 0.0): 4, ('lmfaooo', 0.0): 2, ('download', 0.0): 3, ('leagu', 0.0): 1, ('mashup', 0.0): 1, ('eu', 0.0): 1, ('lc', 0.0): 1, ('typo', 0.0): 2, ('itali', 0.0): 1, ('yass', 0.0): 1, ('christma', 0.0): 2, ('rel', 0.0): 1, ('yr', 0.0): 3, ('sydney', 0.0): 1, ('mb', 0.0): 1, 
('perf', 0.0): 2, ('programm', 0.0): 1, ('bff', 0.0): 2, ('hashtag', 0.0): 1, ('omfg', 0.0): 4, ('exercis', 0.0): 2, ('combat', 0.0): 1, ('dosent', 0.0): 1, (\"sod'\", 0.0): 1, ('20min', 0.0): 1, ('request', 0.0): 2, ('yahoo', 0.0): 2, ('yodel', 0.0): 2, ('jokingli', 0.0): 1, ('regret', 0.0): 5, ('starbuck', 0.0): 3, ('lynettelow', 0.0): 1, ('interraci', 0.0): 3, (\"today'\", 0.0): 3, ('tgif', 0.0): 1, ('gahd', 0.0): 1, ('26th', 0.0): 1, ('discov', 0.0): 1, ('12.00', 0.0): 1, ('obyun', 0.0): 1, ('unni', 0.0): 4, ('wayhh', 0.0): 1, ('preval', 0.0): 1, ('controversi', 0.0): 1, ('🍵', 0.0): 2, ('☕', 0.0): 1, ('tube', 0.0): 1, ('strike', 0.0): 3, ('meck', 0.0): 1, ('mcfc', 0.0): 1, ('fresh', 0.0): 1, ('ucan', 0.0): 1, ('anxiou', 0.0): 1, ('poc', 0.0): 1, ('specif', 0.0): 2, ('sinhala', 0.0): 1, ('billionair', 0.0): 1, ('1645', 0.0): 1, ('island', 0.0): 3, ('1190', 0.0): 1, ('maldiv', 0.0): 1, ('dheena', 0.0): 1, ('fasgadah', 0.0): 1, ('alvadhaau', 0.0): 1, ('countdown', 0.0): 1, ('function', 0.0): 3, ('desktop', 0.0): 1, ('evelineconrad', 0.0): 1, ('facetim', 0.0): 4, ('kikmsn', 0.0): 2, ('selfshot', 0.0): 2, ('panda', 0.0): 1, ('backkk', 0.0): 1, ('transfer', 0.0): 3, ('dan', 0.0): 2, ('dull', 0.0): 1, ('overcast', 0.0): 1, ('folder', 0.0): 1, ('truck', 0.0): 2, ('missin', 0.0): 2, ('hangin', 0.0): 1, ('wiff', 0.0): 1, ('dept', 0.0): 1, ('cherri', 0.0): 1, ('bakewel', 0.0): 1, ('collect', 0.0): 3, ('teal', 0.0): 1, ('sect', 0.0): 1, ('tennunb', 0.0): 1, ('rather', 0.0): 4, ('skip', 0.0): 1, ('doomsday', 0.0): 1, ('neglect', 0.0): 1, ('posti', 0.0): 1, ('goodnight', 0.0): 1, ('donat', 0.0): 3, ('ship', 0.0): 6, ('bellami', 0.0): 1, ('raven', 0.0): 2, ('clark', 0.0): 1, ('helmi', 0.0): 1, ('uh', 0.0): 5, ('cnt', 0.0): 1, ('whereisthesun', 0.0): 1, ('summerismiss', 0.0): 1, ('longgg', 0.0): 1, ('ridicul', 0.0): 4, ('stocko', 0.0): 1, ('lucozad', 0.0): 1, ('explos', 0.0): 1, ('beh', 0.0): 2, ('half-rememb', 0.0): 1, (\"melody'\", 0.0): 1, ('recal', 0.0): 2, ('level', 0.0): 3, ('target', 0.0): 1, ('difficult', 0.0): 4, ('mile', 0.0): 1, ('pfb', 0.0): 1, ('nate', 0.0): 2, ('expo', 0.0): 2, ('jisoo', 0.0): 1, ('chloe', 0.0): 2, ('anon', 0.0): 2, ('mager', 0.0): 1, ('wi', 0.0): 1, ('knw', 0.0): 1, ('wht', 0.0): 1, ('distant', 0.0): 1, ('buffer', 0.0): 2, ('insan', 0.0): 1, ('charli', 0.0): 1, ('finland', 0.0): 3, ('gana', 0.0): 1, ('studio', 0.0): 3, ('arch', 0.0): 1, ('lyin', 0.0): 1, ('kian', 0.0): 3, ('supercar', 0.0): 1, ('gurgaon', 0.0): 1, ('locat', 0.0): 7, ('9:15', 0.0): 1, ('satir', 0.0): 1, ('gener', 0.0): 2, ('peanut', 0.0): 3, ('butter', 0.0): 1, ('garden', 0.0): 2, ('beer', 0.0): 1, ('viner', 0.0): 1, ('palembang', 0.0): 1, ('sorrryyi', 0.0): 1, ('fani', 0.0): 1, ('hahahahaha', 0.0): 2, ('boner', 0.0): 1, ('merci', 0.0): 1, ('yuki', 0.0): 1, ('2500k', 0.0): 1, ('mari', 0.0): 1, ('jake', 0.0): 1, ('gyllenha', 0.0): 1, ('impact', 0.0): 1, (\"ledger'\", 0.0): 1, ('btw', 0.0): 5, ('cough', 0.0): 4, ('hunni', 0.0): 1, ('b4', 0.0): 1, ('deplet', 0.0): 1, ('mbasa', 0.0): 1, ('client', 0.0): 3, ('ray', 0.0): 1, ('aah', 0.0): 1, ('type', 0.0): 2, ('suit', 0.0): 5, ('pa-copi', 0.0): 1, ('proper', 0.0): 2, ('biom', 0.0): 1, ('mosqu', 0.0): 1, ('smelli', 0.0): 1, ('taxi', 0.0): 4, ('emptier', 0.0): 1, (\"ciara'\", 0.0): 1, (\"everything'\", 0.0): 1, ('clip', 0.0): 2, ('tall', 0.0): 2, ('gladli', 0.0): 1, ('intent', 0.0): 1, ('amb', 0.0): 1, (\"harry'\", 0.0): 2, ('jean', 0.0): 2, ('mayday', 0.0): 1, ('parad', 0.0): 2, ('lyf', 0.0): 1, ('13th', 0.0): 1, ('anim', 0.0): 4, ('kingdom', 0.0): 1, ('chri', 
0.0): 7, ('brown', 0.0): 4, ('riski', 0.0): 1, ('cologn', 0.0): 1, ('duo', 0.0): 3, ('ballad', 0.0): 2, ('bish', 0.0): 2, ('intern', 0.0): 2, ('brought', 0.0): 1, ('yumyum', 0.0): 1, (\"cathy'\", 0.0): 1, ('missyou', 0.0): 1, ('rubi', 0.0): 2, ('rose', 0.0): 2, ('tou', 0.0): 1, ('main', 0.0): 1, ('pora', 0.0): 1, ('stalk', 0.0): 3, ('karlia', 0.0): 1, ('khatam', 0.0): 2, ('bandi', 0.0): 1, ('👑', 0.0): 1, ('pyaari', 0.0): 1, ('gawd', 0.0): 1, ('understood', 0.0): 1, ('review', 0.0): 3, ('massi', 0.0): 1, ('thatselfiethough', 0.0): 1, ('loop', 0.0): 1, ('ofc', 0.0): 1, ('pict', 0.0): 1, ('caught', 0.0): 1, ('aishhh', 0.0): 1, ('viewer', 0.0): 1, ('exam', 0.0): 5, ('sighsss', 0.0): 1, ('burnt', 0.0): 2, ('toffe', 0.0): 2, ('honesti', 0.0): 1, ('cheatday', 0.0): 1, ('protein', 0.0): 1, ('sissi', 0.0): 1, ('tote', 0.0): 1, ('slowli', 0.0): 1, ('church', 0.0): 2, ('pll', 0.0): 1, ('sel', 0.0): 1, ('beth', 0.0): 2, ('serbia', 0.0): 1, ('serbian', 0.0): 1, ('selen', 0.0): 1, ('motav', 0.0): 1, ('💋', 0.0): 2, ('zayyyn', 0.0): 1, ('momma', 0.0): 1, ('happend', 0.0): 1, ('imper', 0.0): 1, ('trmdhesit', 0.0): 1, ('pana', 0.0): 1, ('quickest', 0.0): 2, ('blood', 0.0): 5, ('sake', 0.0): 1, ('hamstr', 0.0): 1, ('rodwel', 0.0): 1, ('trace', 0.0): 1, ('artist', 0.0): 4, ('tp', 0.0): 1, ('powder', 0.0): 1, ('wider', 0.0): 1, ('honestli', 0.0): 4, ('comfort', 0.0): 3, ('bruno', 0.0): 1, ('1.8', 0.0): 1, ('ed', 0.0): 7, ('croke', 0.0): 2, ('deal', 0.0): 6, ('toll', 0.0): 1, ('packag', 0.0): 1, ('shape', 0.0): 1, ('unluckiest', 0.0): 1, ('bettor', 0.0): 1, ('nstp', 0.0): 1, ('sem', 0.0): 2, ('chipotl', 0.0): 1, ('chick-fil-a', 0.0): 1, ('stole', 0.0): 3, ('evet', 0.0): 1, ('ramadhan', 0.0): 1, ('eid', 0.0): 4, ('stexpert', 0.0): 1, ('ripstegi', 0.0): 1, ('nickyyi', 0.0): 1, ('¿', 0.0): 1, ('centralis', 0.0): 1, ('discontinu', 0.0): 1, ('sniff', 0.0): 1, (\"i't\", 0.0): 1, ('glad', 0.0): 2, ('fab', 0.0): 2, ('theres', 0.0): 1, ('cred', 0.0): 1, ('t_t', 0.0): 1, ('elimin', 0.0): 1, ('teamzip', 0.0): 1, ('smtm', 0.0): 1, ('assingn', 0.0): 1, ('editi', 0.0): 1, ('nakaka', 0.0): 1, ('beastmod', 0.0): 1, ('gaaawd', 0.0): 1, ('jane', 0.0): 1, ('mango', 0.0): 1, ('colombia', 0.0): 1, ('yot', 0.0): 1, ('labyo', 0.0): 1, ('pano', 0.0): 1, ('nalamannn', 0.0): 1, ('hardhead', 0.0): 1, ('cell', 0.0): 1, (\"zach'\", 0.0): 1, ('burger', 0.0): 2, ('xpress', 0.0): 1, ('hopkin', 0.0): 1, ('melatonin', 0.0): 1, ('2-4', 0.0): 1, ('nap', 0.0): 2, ('wide', 0.0): 2, ('task', 0.0): 1, ('9pm', 0.0): 1, ('hahaah', 0.0): 1, ('frequent', 0.0): 1, ('jail', 0.0): 2, ('weirddd', 0.0): 1, ('donghyuk', 0.0): 1, ('stan', 0.0): 1, ('bek', 0.0): 1, ('13', 0.0): 4, ('reynoldsgrl', 0.0): 1, ('ole', 0.0): 1, ('beardi', 0.0): 1, ('kaussi', 0.0): 1, ('bummer', 0.0): 3, ('fightingmciren', 0.0): 1, (\"michael'\", 0.0): 1, ('�', 0.0): 21, ('miser', 0.0): 2, ('💦', 0.0): 1, ('yoga', 0.0): 2, ('🌞', 0.0): 1, ('💃', 0.0): 1, ('🏽', 0.0): 1, ('shouldv', 0.0): 1, ('saffron', 0.0): 1, ('peasant', 0.0): 1, ('wouldv', 0.0): 1, ('nfinit', 0.0): 1, ('admin_myung', 0.0): 1, ('slp', 0.0): 1, ('saddest', 0.0): 2, ('laomma', 0.0): 2, ('kebaya', 0.0): 1, ('bandung', 0.0): 1, ('indonesia', 0.0): 1, ('7df89150', 0.0): 1, ('whatsapp', 0.0): 2, ('62', 0.0): 1, ('08962464174', 0.0): 1, ('laomma_coutur', 0.0): 1, ('haizzz', 0.0): 1, ('urghhh', 0.0): 1, ('working-on-a-tight-schedul', 0.0): 1, ('ganbarimasu', 0.0): 1, ('livid', 0.0): 1, ('whammi', 0.0): 1, ('quuuee', 0.0): 1, ('friooo', 0.0): 1, ('ladi', 0.0): 4, ('stereo', 0.0): 1, ('chwang', 0.0): 1, ('lorm', 0.0): 1, ('823', 
0.0): 1, ('rp', 0.0): 1, ('indiemus', 0.0): 10, ('unhappi', 0.0): 2, ('jennyjean', 0.0): 1, ('elfindelmundo', 0.0): 2, ('lolzz', 0.0): 1, ('dat', 0.0): 4, ('corey', 0.0): 1, ('appreci', 0.0): 2, ('weekli', 0.0): 2, ('mahirap', 0.0): 1, ('nash', 0.0): 1, ('gosh', 0.0): 6, ('noodl', 0.0): 1, ('veeerri', 0.0): 1, ('rted', 0.0): 2, ('orig', 0.0): 1, ('starholicxx', 0.0): 1, ('07:17', 0.0): 2, ('@the', 0.0): 1, ('notr', 0.0): 1, ('hwi', 0.0): 1, ('niall', 0.0): 5, ('fraud', 0.0): 1, ('diplomaci', 0.0): 1, ('fittest', 0.0): 1, ('zero', 0.0): 1, ('toler', 0.0): 2, ('gurl', 0.0): 1, ('notion', 0.0): 1, ('pier', 0.0): 1, ('approach', 0.0): 1, ('rattl', 0.0): 1, ('robe', 0.0): 1, ('emphasi', 0.0): 1, ('vocal', 0.0): 1, ('chose', 0.0): 1, ('erm', 0.0): 1, ('abby.can', 0.0): 1, ('persuad', 0.0): 1, ('lyric', 0.0): 1, (\"emily'\", 0.0): 1, ('odd', 0.0): 3, ('possibl', 0.0): 8, ('elect', 0.0): 2, ('kamiss', 0.0): 1, ('mwa', 0.0): 1, ('mommi', 0.0): 3, ('scream', 0.0): 1, ('fight', 0.0): 2, ('cafe', 0.0): 2, ('melbourn', 0.0): 1, ('anyonnee', 0.0): 1, ('loner', 0.0): 1, ('fricken', 0.0): 2, ('rito', 0.0): 1, ('friendzon', 0.0): 1, ('panel', 0.0): 1, ('repeat', 0.0): 2, ('audienc', 0.0): 1, ('hsm', 0.0): 1, ('canario', 0.0): 1, ('hotel', 0.0): 8, ('ukiss', 0.0): 1, ('faith', 0.0): 2, ('kurt', 0.0): 1, (\"fatma'm\", 0.0): 1, ('alex', 0.0): 4, ('swag', 0.0): 1, ('lmfao', 0.0): 2, ('flapjack', 0.0): 1, ('countthecost', 0.0): 1, ('ihop', 0.0): 1, ('infra', 0.0): 1, ('lq', 0.0): 1, ('knive', 0.0): 1, ('sotir', 0.0): 1, ('mybrainneedstoshutoff', 0.0): 1, ('macci', 0.0): 1, ('chees', 0.0): 7, ('25', 0.0): 2, ('tend', 0.0): 1, ('510', 0.0): 1, ('silicon', 0.0): 1, ('cover', 0.0): 2, ('kbye', 0.0): 1, ('ini', 0.0): 1, ('anytim', 0.0): 1, ('citizen', 0.0): 1, ('compar', 0.0): 2, ('rank', 0.0): 1, ('mcountdown', 0.0): 2, ('5h', 0.0): 1, ('thapelo', 0.0): 1, ('op', 0.0): 1, ('civ', 0.0): 1, ('wooden', 0.0): 1, ('mic', 0.0): 1, ('embarrass', 0.0): 2, ('translat', 0.0): 3, ('daili', 0.0): 3, ('mecha-totem', 0.0): 1, ('nak', 0.0): 1, ('tgk', 0.0): 1, ('townsss', 0.0): 1, ('jokid', 0.0): 1, ('rent', 0.0): 2, ('degre', 0.0): 1, ('inconsider', 0.0): 2, ('softbal', 0.0): 1, ('appli', 0.0): 1, ('tomcat', 0.0): 1, ('chel', 0.0): 1, ('jemma', 0.0): 1, ('detail', 0.0): 4, ('list', 0.0): 4, ('matchi', 0.0): 2, ('elsa', 0.0): 1, ('postpon', 0.0): 1, ('karin', 0.0): 1, ('honey', 0.0): 2, ('vist', 0.0): 1, ('unhealthi', 0.0): 1, ('propa', 0.0): 1, ('knockin', 0.0): 1, ('bacon', 0.0): 1, ('market', 0.0): 2, ('pre-holiday', 0.0): 1, ('diet', 0.0): 1, ('meani', 0.0): 1, ('deathbybaconsmel', 0.0): 1, ('init', 0.0): 2, ('destin', 0.0): 1, ('victoria', 0.0): 2, ('luna', 0.0): 1, ('krystal', 0.0): 1, ('sarajevo', 0.0): 1, ('haix', 0.0): 2, ('sp', 0.0): 1, ('student', 0.0): 4, ('wii', 0.0): 2, ('bayonetta', 0.0): 1, ('101', 0.0): 1, ('doabl', 0.0): 1, ('drove', 0.0): 1, ('agenc', 0.0): 1, ('story.miss', 0.0): 1, ('everon', 0.0): 1, ('jp', 0.0): 1, ('mamabear', 0.0): 1, ('imintoh', 0.0): 1, ('underr', 0.0): 1, (\"slovakia'\", 0.0): 1, ('D:', 0.0): 6, ('saklap', 0.0): 1, ('grade', 0.0): 2, ('rizal', 0.0): 1, ('lib', 0.0): 1, ('discuss', 0.0): 1, ('advisori', 0.0): 1, ('period', 0.0): 2, ('dit', 0.0): 1, ('du', 0.0): 1, ('harsh', 0.0): 2, ('ohgod', 0.0): 1, ('abligaverin', 0.0): 2, ('photooftheday', 0.0): 2, ('sexygirlbypreciouslemmi', 0.0): 3, ('ripsandrabland', 0.0): 1, ('edel', 0.0): 1, ('salam', 0.0): 1, ('mubark', 0.0): 1, ('dong', 0.0): 3, ('tammirossm', 0.0): 4, ('speck', 0.0): 1, ('abbymil', 0.0): 2, ('18', 0.0): 8, ('ion', 0.0): 1, 
('5min', 0.0): 1, ('hse', 0.0): 1, ('noob', 0.0): 1, ('nxt', 0.0): 1, ('2week', 0.0): 1, ('300', 0.0): 3, ('fck', 0.0): 2, ('nae', 0.0): 2, ('deep', 0.0): 3, ('human', 0.0): 3, ('whit', 0.0): 1, ('van', 0.0): 4, ('bristol', 0.0): 1, ('subserv', 0.0): 1, ('si', 0.0): 4, ('oo', 0.0): 1, ('tub', 0.0): 1, ('penyfan', 0.0): 1, ('forecast', 0.0): 2, ('breconbeacon', 0.0): 1, ('tittheir', 0.0): 1, ('42', 0.0): 1, ('hotti', 0.0): 3, ('uu', 0.0): 2, ('rough', 0.0): 1, ('fuzzi', 0.0): 1, ('san', 0.0): 3, ('antonio', 0.0): 1, ('kang', 0.0): 1, ('junhe', 0.0): 1, ('couldv', 0.0): 1, ('pz', 0.0): 1, ('somerset', 0.0): 1, ('given', 0.0): 2, ('sunburnt', 0.0): 1, ('safer', 0.0): 1, ('k3g', 0.0): 1, ('input', 0.0): 1, ('gamestomp', 0.0): 1, ('desc', 0.0): 1, (\"angelo'\", 0.0): 1, ('yna', 0.0): 1, ('psygustokita', 0.0): 2, ('fiver', 0.0): 1, ('toward', 0.0): 1, ('sakho', 0.0): 1, ('threat', 0.0): 1, ('goalscor', 0.0): 1, ('10:59', 0.0): 1, ('11.00', 0.0): 1, ('sham', 0.0): 1, ('tricki', 0.0): 1, ('baao', 0.0): 1, ('nisrina', 0.0): 1, ('crazi', 0.0): 8, ('ladygaga', 0.0): 1, (\"you'\", 0.0): 2, ('pari', 0.0): 2, ('marrish', 0.0): 1, (\"otp'\", 0.0): 1, ('6:15', 0.0): 1, ('edomnt', 0.0): 1, ('qih', 0.0): 1, ('shxb', 0.0): 1, ('1000', 0.0): 1, ('chilton', 0.0): 1, ('mother', 0.0): 2, ('obsess', 0.0): 1, ('creepi', 0.0): 2, ('josh', 0.0): 1, ('boohoo', 0.0): 1, ('fellow', 0.0): 2, ('tweep', 0.0): 1, ('roar', 0.0): 1, ('victori', 0.0): 1, ('tweepsmatchout', 0.0): 1, ('nein', 0.0): 3, ('404', 0.0): 1, ('midnight', 0.0): 2, ('willlow', 0.0): 1, ('hbd', 0.0): 1, ('sowwi', 0.0): 1, ('3000', 0.0): 1, ('grind', 0.0): 1, ('gear', 0.0): 1, ('0.001', 0.0): 1, ('meant', 0.0): 6, ('portrait', 0.0): 1, ('mode', 0.0): 2, ('fact', 0.0): 4, ('11:11', 0.0): 4, ('shanzay', 0.0): 1, ('salabrati', 0.0): 1, ('journo', 0.0): 1, ('lure', 0.0): 1, ('gang', 0.0): 1, ('twist', 0.0): 1, ('mashaket', 0.0): 1, ('pet', 0.0): 2, ('bapak', 0.0): 1, ('royal', 0.0): 2, ('prima', 0.0): 1, ('mune', 0.0): 1, ('874', 0.0): 1, ('plisss', 0.0): 1, ('elf', 0.0): 1, ('teenchoic', 0.0): 5, ('choiceinternationalartist', 0.0): 5, ('superjunior', 0.0): 5, (\"he'll\", 0.0): 1, ('sunway', 0.0): 1, ('petal', 0.0): 1, ('jaya', 0.0): 1, ('selangor', 0.0): 1, ('glow', 0.0): 1, ('huhuu', 0.0): 1, ('congratul', 0.0): 2, ('margo', 0.0): 1, ('konga', 0.0): 1, ('ni', 0.0): 4, ('wa', 0.0): 2, ('ode', 0.0): 1, ('disvirgin', 0.0): 1, ('bride', 0.0): 3, ('yulin', 0.0): 1, ('meat', 0.0): 1, ('festiv', 0.0): 2, ('imma', 0.0): 2, ('syawal', 0.0): 1, ('lapar', 0.0): 1, ('foundat', 0.0): 1, ('clash', 0.0): 2, ('facil', 0.0): 1, ('dh', 0.0): 2, ('chalet', 0.0): 1, ('suay', 0.0): 1, ('anot', 0.0): 1, ('bugger', 0.0): 1, ('एक', 0.0): 1, ('ब', 0.0): 1, ('ा', 0.0): 2, ('र', 0.0): 2, ('फ', 0.0): 1, ('ि', 0.0): 1, ('स', 0.0): 1, ('े', 0.0): 1, ('ँ', 0.0): 1, ('ध', 0.0): 1, ('ो', 0.0): 1, ('ख', 0.0): 1, ('chandauli', 0.0): 1, ('majhwar', 0.0): 1, ('railway', 0.0): 1, ('tito', 0.0): 2, ('tita', 0.0): 1, ('cousin', 0.0): 3, ('critic', 0.0): 1, ('condit', 0.0): 1, ('steal', 0.0): 1, ('narco', 0.0): 1, ('regen', 0.0): 1, ('unfav', 0.0): 2, ('benadryl', 0.0): 1, ('offlin', 0.0): 1, ('arent', 0.0): 1, ('msg', 0.0): 1, ('yg', 0.0): 1, ('gg', 0.0): 3, ('sxrew', 0.0): 1, ('dissappear', 0.0): 1, ('swap', 0.0): 1, ('bleed', 0.0): 1, ('ishal', 0.0): 1, ('mi', 0.0): 2, ('thaank', 0.0): 1, ('jhezz', 0.0): 1, ('sneak', 0.0): 3, ('soft', 0.0): 1, ('defenc', 0.0): 1, ('defens', 0.0): 1, ('nrltigersroost', 0.0): 1, ('indiana', 0.0): 2, ('hibb', 0.0): 1, ('biblethump', 0.0): 1, ('rlyyi', 0.0): 1, 
('septum', 0.0): 1, ('pierc', 0.0): 2, ('goood', 0.0): 1, ('hiya', 0.0): 1, ('fire', 0.0): 1, ('venom', 0.0): 1, ('carriag', 0.0): 1, ('pink', 0.0): 1, ('fur-trim', 0.0): 1, ('stetson', 0.0): 1, ('error', 0.0): 4, ('59', 0.0): 1, ('xue', 0.0): 1, ('midori', 0.0): 1, ('sakit', 0.0): 2, ('mateo', 0.0): 1, ('hawk', 0.0): 2, ('bartend', 0.0): 1, ('surf', 0.0): 1, ('despair', 0.0): 1, ('insta', 0.0): 1, ('promo', 0.0): 1, ('iwantin', 0.0): 1, ('___', 0.0): 2, ('fault', 0.0): 3, ('goodluck', 0.0): 1, ('pocket', 0.0): 1, ('[email protected]', 0.0): 1, ('benedictervent', 0.0): 1, ('content', 0.0): 1, ('221b', 0.0): 1, ('popcorn', 0.0): 3, ('joyc', 0.0): 1, ('ooop', 0.0): 1, ('spotifi', 0.0): 1, ('paalam', 0.0): 1, ('sazbal', 0.0): 1, ('incid', 0.0): 1, ('aaahh', 0.0): 1, ('gooo', 0.0): 1, (\"stomach'\", 0.0): 1, ('growl', 0.0): 1, ('beard', 0.0): 1, ('nooop', 0.0): 1, ('🎉', 0.0): 3, ('ding', 0.0): 3, ('hundr', 0.0): 1, ('meg', 0.0): 1, (\"verity'\", 0.0): 1, ('rupert', 0.0): 1, ('amin', 0.0): 1, ('studi', 0.0): 2, ('pleaaas', 0.0): 1, ('👆', 0.0): 2, ('woaah', 0.0): 1, ('solvo', 0.0): 1, ('twin', 0.0): 2, (\"friday'\", 0.0): 1, ('lego', 0.0): 1, ('barefoot', 0.0): 1, ('twelvyy', 0.0): 1, ('boaz', 0.0): 1, ('myhil', 0.0): 1, ('takeov', 0.0): 1, ('wba', 0.0): 1, (\"taeyeon'\", 0.0): 1, ('derp', 0.0): 1, ('pd', 0.0): 1, ('zoom', 0.0): 2, (\"sunny'\", 0.0): 1, ('besst', 0.0): 1, ('plagu', 0.0): 1, ('pit', 0.0): 1, ('rich', 0.0): 1, ('sight', 0.0): 1, ('frail', 0.0): 1, ('lotteri', 0.0): 1, ('ride', 0.0): 2, ('twurkin', 0.0): 1, ('razzist', 0.0): 1, ('tumblr', 0.0): 1, ('shek', 0.0): 1, ('609', 0.0): 1, ('mugshot', 0.0): 1, ('attend', 0.0): 3, ('plsss', 0.0): 4, ('taissa', 0.0): 1, ('farmiga', 0.0): 1, ('robert', 0.0): 1, ('qualiti', 0.0): 1, ('daniel', 0.0): 1, ('latest', 0.0): 3, ('softwar', 0.0): 1, ('restor', 0.0): 2, ('momo', 0.0): 2, ('pharma', 0.0): 1, ('immov', 0.0): 1, ('messi', 0.0): 1, ('ansh', 0.0): 1, ('f1', 0.0): 1, ('billion', 0.0): 1, ('rand', 0.0): 1, ('bein', 0.0): 1, ('tla', 0.0): 1, ('tweng', 0.0): 1, ('gene', 0.0): 1, ('up.com', 0.0): 1, ('counti', 0.0): 2, ('cooler', 0.0): 1, ('minhyuk', 0.0): 1, ('gold', 0.0): 2, ('1900', 0.0): 1, ('😪', 0.0): 3, ('yu', 0.0): 1, ('hz', 0.0): 2, ('selena', 0.0): 2, ('emta', 0.0): 1, ('hatigii', 0.0): 1, ('b2aa', 0.0): 1, ('yayyy', 0.0): 1, ('anesthesia', 0.0): 1, ('penrith', 0.0): 1, ('emu', 0.0): 1, ('plain', 0.0): 1, ('staff', 0.0): 3, ('untouch', 0.0): 1, ('brienn', 0.0): 1, ('lsh', 0.0): 1, ('gunna', 0.0): 1, ('former', 0.0): 1, ('darn', 0.0): 1, ('allah', 0.0): 4, ('pakistan', 0.0): 2, ('juudiciari', 0.0): 1, (\"horton'\", 0.0): 1, ('dunkin', 0.0): 1, ('socialis', 0.0): 1, ('cara', 0.0): 1, (\"delevingne'\", 0.0): 1, ('fear', 0.0): 1, ('drug', 0.0): 1, ('lace', 0.0): 1, ('fank', 0.0): 1, ('takfaham', 0.0): 1, ('ufff', 0.0): 1, ('sr', 0.0): 2, ('dard', 0.0): 1, ('katekyn', 0.0): 1, ('ehh', 0.0): 1, ('yeahhh', 0.0): 2, ('hacharatt', 0.0): 1, ('niwll', 0.0): 1, ('defin', 0.0): 1, ('wit', 0.0): 2, ('goa', 0.0): 1, ('lini', 0.0): 1, ('kasi', 0.0): 3, ('rhd', 0.0): 1, ('1st', 0.0): 3, ('wae', 0.0): 1, ('subsid', 0.0): 1, ('20th', 0.0): 1, ('anniversari', 0.0): 1, ('youngja', 0.0): 1, ('harumph', 0.0): 1, ('soggi', 0.0): 1, ('weed', 0.0): 1, ('ireland', 0.0): 3, ('sakura', 0.0): 1, ('flavour', 0.0): 1, ('chokki', 0.0): 1, ('🌸', 0.0): 1, ('unavail', 0.0): 2, ('richard', 0.0): 2, ('laptop', 0.0): 2, ('satya', 0.0): 1, ('aditya', 0.0): 1, ('🍜', 0.0): 3, ('vibrat', 0.0): 1, ('an', 0.0): 2, ('cu', 0.0): 1, ('dhaka', 0.0): 1, ('jam', 0.0): 1, ('shall', 
0.0): 2, ('cornetto', 0.0): 3, ('noseble', 0.0): 1, ('nintendo', 0.0): 3, ('wew', 0.0): 1, ('ramo', 0.0): 1, ('ground', 0.0): 2, ('shawn', 0.0): 1, ('mend', 0.0): 1, ('l', 0.0): 2, ('dinghi', 0.0): 1, ('skye', 0.0): 1, ('store', 0.0): 3, ('descript', 0.0): 2, ('colleagu', 0.0): 2, ('gagal', 0.0): 2, ('txt', 0.0): 1, ('sim', 0.0): 1, ('nooot', 0.0): 1, ('notch', 0.0): 1, ('tht', 0.0): 2, ('starv', 0.0): 4, ('\\U000fe196', 0.0): 1, ('pyjama', 0.0): 1, ('swifti', 0.0): 1, ('sorna', 0.0): 1, ('lurgi', 0.0): 1, ('jim', 0.0): 2, ('6gb', 0.0): 1, ('fenestoscop', 0.0): 1, ('etienn', 0.0): 1, ('bandana', 0.0): 3, ('bigger', 0.0): 2, ('vagina', 0.0): 1, ('suriya', 0.0): 1, ('dangl', 0.0): 1, ('mjhe', 0.0): 2, ('aaj', 0.0): 1, ('tak', 0.0): 3, ('kisi', 0.0): 1, ('kiya', 0.0): 1, ('eyesight', 0.0): 1, ('25x30', 0.0): 1, ('aftenoon', 0.0): 1, ('booor', 0.0): 1, ('uuu', 0.0): 1, ('boyfriend', 0.0): 8, ('freebiefriday', 0.0): 1, ('garag', 0.0): 1, ('michael', 0.0): 1, ('obvious', 0.0): 1, ('denim', 0.0): 1, ('somebodi', 0.0): 1, ('ce', 0.0): 1, ('gw', 0.0): 1, ('anatomi', 0.0): 1, ('no1', 0.0): 1, (\"morisette'\", 0.0): 1, ('flash', 0.0): 1, ('non-trial', 0.0): 1, ('sayhernam', 0.0): 1, ('lootcrat', 0.0): 1, ('item', 0.0): 1, ('inca', 0.0): 1, ('trail', 0.0): 1, ('sandboard', 0.0): 1, ('derbi', 0.0): 1, ('coffe', 0.0): 1, ('unabl', 0.0): 3, ('signatur', 0.0): 1, ('dish', 0.0): 1, ('unfamiliar', 0.0): 1, ('kitchen', 0.0): 3, ('coldest', 0.0): 1, (\"old'\", 0.0): 1, ('14518344', 0.0): 1, ('61', 0.0): 1, ('thirdwheel', 0.0): 1, ('lovebird', 0.0): 1, ('nth', 0.0): 1, ('imo', 0.0): 1, ('familiar', 0.0): 1, ('@juliettemaughan', 0.0): 1, ('copi', 0.0): 1, ('sensiesha', 0.0): 1, ('eldest', 0.0): 1, ('netbal', 0.0): 1, ('😟', 0.0): 1, ('keedz', 0.0): 1, ('taybigail', 0.0): 1, ('jordan', 0.0): 1, ('tournament', 0.0): 1, ('goin', 0.0): 1, ('ps4', 0.0): 3, ('kink', 0.0): 1, ('charger', 0.0): 1, ('streak', 0.0): 1, ('scorch', 0.0): 1, ('srski', 0.0): 1, ('tdc', 0.0): 1, ('egypt', 0.0): 1, ('in-sensit', 0.0): 1, ('cooper', 0.0): 3, ('invit', 0.0): 1, ('donna', 0.0): 1, ('thurston', 0.0): 1, ('collin', 0.0): 1, ('quietli', 0.0): 2, ('kennel', 0.0): 1, ('911', 0.0): 1, ('pluckersss', 0.0): 1, ('gion', 0.0): 1, ('886', 0.0): 1, ('nsfw', 0.0): 1, ('kidschoiceaward', 0.0): 1, ('ming', 0.0): 1, ('pbr', 0.0): 1, ('shoutout', 0.0): 1, ('periscop', 0.0): 1, ('ut', 0.0): 1, ('shawti', 0.0): 1, ('naw', 0.0): 4, (\"sterling'\", 0.0): 1, ('9muse', 0.0): 1, ('hrryok', 0.0): 2, ('asap', 0.0): 2, ('wnt', 0.0): 1, ('9:30', 0.0): 1, ('9:48', 0.0): 1, ('9/11', 0.0): 1, ('bueno', 0.0): 1, ('receptionist', 0.0): 1, ('ella', 0.0): 2, ('goe', 0.0): 4, ('ketchup', 0.0): 1, ('tasteless', 0.0): 1, ('deantd', 0.0): 1, ('justgotkanekifi', 0.0): 1, ('notgonnabeactivefor', 0.0): 1, ('2weeksdontmissittoomuch', 0.0): 1, ('2013', 0.0): 1, ('disney', 0.0): 2, ('vlog', 0.0): 1, ('swim', 0.0): 1, ('turtl', 0.0): 2, ('cnn', 0.0): 2, ('straplin', 0.0): 1, ('theatr', 0.0): 1, ('guncontrol', 0.0): 1, ('stung', 0.0): 2, ('tweak', 0.0): 1, (\"thát'\", 0.0): 1, ('powerpoint', 0.0): 1, ('present', 0.0): 5, ('diner', 0.0): 1, ('no-no', 0.0): 1, ('hind', 0.0): 1, ('circuit', 0.0): 1, ('secondari', 0.0): 1, ('sodder', 0.0): 1, ('perhap', 0.0): 2, ('mobitel', 0.0): 1, ('colin', 0.0): 1, ('playstat', 0.0): 2, ('charg', 0.0): 4, ('exp', 0.0): 1, ('misspelt', 0.0): 1, ('wan', 0.0): 1, ('hyungwon', 0.0): 2, ('alarm', 0.0): 1, ('needicecreamnow', 0.0): 1, ('shake', 0.0): 1, ('repeatedli', 0.0): 1, ('nu-uh', 0.0): 1, ('jace', 0.0): 1, ('mostest', 0.0): 1, ('vip', 0.0): 1, 
('urgh', 0.0): 1, ('consol', 0.0): 1, (\"grigson'\", 0.0): 1, ('carrot', 0.0): 1, ('>:-(', 0.0): 4, ('sunburn', 0.0): 1, ('ughh', 0.0): 2, ('enabl', 0.0): 1, ('otter', 0.0): 1, ('protect', 0.0): 1, ('argh', 0.0): 1, ('pon', 0.0): 1, ('otl', 0.0): 2, ('sleepov', 0.0): 2, ('jess', 0.0): 2, ('bebe', 0.0): 1, ('fabina', 0.0): 1, (\"barrista'\", 0.0): 1, ('plant', 0.0): 3, ('pup', 0.0): 2, ('brolli', 0.0): 1, ('mere', 0.0): 2, ('nhi', 0.0): 1, ('dey', 0.0): 2, ('serv', 0.0): 1, ('kepo', 0.0): 1, ('bitin', 0.0): 1, ('pretzel', 0.0): 1, ('bb17', 0.0): 1, ('bblf', 0.0): 1, ('fuckin', 0.0): 1, ('vanilla', 0.0): 1, ('latt', 0.0): 1, ('skulker', 0.0): 1, ('thread', 0.0): 1, ('hungrrryyi', 0.0): 1, ('icloud', 0.0): 1, ('ipod', 0.0): 3, ('hallyu', 0.0): 1, ('buuut', 0.0): 1, ('über', 0.0): 1, ('oki', 0.0): 2, ('8p', 0.0): 1, ('champagn', 0.0): 1, ('harlo', 0.0): 1, ('torrentialrain', 0.0): 1, ('lloyd', 0.0): 1, ('asshol', 0.0): 1, ('clearli', 0.0): 2, ('knowww', 0.0): 2, ('runni', 0.0): 1, ('sehun', 0.0): 1, ('sweater', 0.0): 1, ('intoler', 0.0): 2, ('xenophob', 0.0): 1, ('wtfff', 0.0): 1, ('tone', 0.0): 1, ('wasnt', 0.0): 1, ('1pm', 0.0): 2, ('fantasi', 0.0): 1, ('newer', 0.0): 1, ('pish', 0.0): 1, ('comparison', 0.0): 1, ('remast', 0.0): 1, ('fe14', 0.0): 1, ('icon', 0.0): 2, ('strawberri', 0.0): 1, ('loos', 0.0): 1, ('kapatidkongpogi', 0.0): 1, ('steph', 0.0): 1, ('mel', 0.0): 1, ('longest', 0.0): 1, ('carmen', 0.0): 1, ('login', 0.0): 1, ('respons', 0.0): 3, ('00128835', 0.0): 1, ('wingstop', 0.0): 1, ('budg', 0.0): 1, ('fuq', 0.0): 1, ('ilhoon', 0.0): 1, ('ganteng', 0.0): 1, ('simpl', 0.0): 1, ('getthescoop', 0.0): 1, ('hearess', 0.0): 1, ('677', 0.0): 1, ('txt_shot', 0.0): 1, ('standbi', 0.0): 1, ('inatal', 0.0): 1, ('zenmat', 0.0): 1, ('namecheck', 0.0): 1, ('whistl', 0.0): 1, ('junmyeon', 0.0): 1, ('ddi', 0.0): 1, ('arini', 0.0): 1, ('je', 0.0): 1, ('bright', 0.0): 2, ('igbo', 0.0): 1, ('blamehoney', 0.0): 1, ('whhr', 0.0): 1, ('juan', 0.0): 1, ('snuggl', 0.0): 1, ('internship', 0.0): 1, ('usag', 0.0): 1, ('warn', 0.0): 1, ('vertigo', 0.0): 1, ('panic', 0.0): 1, ('attack', 0.0): 4, ('dual', 0.0): 1, ('carriageway', 0.0): 1, ('aragalang', 0.0): 1, ('08', 0.0): 1, ('tam', 0.0): 1, ('bose', 0.0): 1, ('theo', 0.0): 1, ('anymoree', 0.0): 1, ('rubbish', 0.0): 1, ('cactu', 0.0): 1, ('sorrri', 0.0): 1, ('bowel', 0.0): 1, ('nasti', 0.0): 2, ('tumour', 0.0): 1, ('faster', 0.0): 1, ('puffi', 0.0): 1, ('eyelid', 0.0): 1, ('musica', 0.0): 1, ('dota', 0.0): 1, ('4am', 0.0): 1, ('campsit', 0.0): 1, ('miah', 0.0): 1, ('hahay', 0.0): 1, ('churro', 0.0): 1, ('montana', 0.0): 2, ('reign', 0.0): 1, ('exampl', 0.0): 1, ('inflat', 0.0): 1, ('sic', 0.0): 1, ('reset', 0.0): 1, ('entlerbountli', 0.0): 1, ('tinder', 0.0): 3, ('dirtykik', 0.0): 2, ('sexcam', 0.0): 3, ('spray', 0.0): 1, ('industri', 0.0): 1, ('swollen', 0.0): 1, ('distanc', 0.0): 2, ('jojo', 0.0): 1, ('postcod', 0.0): 1, ('kafi', 0.0): 1, ('din', 0.0): 1, ('mene', 0.0): 1, ('aj', 0.0): 1, ('koi', 0.0): 1, ('rewert', 0.0): 1, ('bunta', 0.0): 1, ('warnaaa', 0.0): 1, ('tortur', 0.0): 2, ('field', 0.0): 1, ('wall', 0.0): 2, ('iran', 0.0): 1, ('irand', 0.0): 1, ('us-iran', 0.0): 1, ('nuclear', 0.0): 1, (\"mit'\", 0.0): 1, ('expert', 0.0): 1, ('sever', 0.0): 3, ('li', 0.0): 1, ('s2e12', 0.0): 1, ('rumpi', 0.0): 1, ('gallon', 0.0): 1, ('ryan', 0.0): 1, ('secret', 0.0): 2, ('dandia', 0.0): 1, ('rbi', 0.0): 1, ('cage', 0.0): 2, ('parrot', 0.0): 1, ('1li', 0.0): 1, ('commiss', 0.0): 1, ('cag', 0.0): 1, ('stripe', 0.0): 2, ('gujarat', 0.0): 1, ('tear', 0.0): 3, 
('ily.melani', 0.0): 1, ('unlik', 0.0): 2, ('talent', 0.0): 2, ('deepxcap', 0.0): 1, ('doin', 0.0): 3, ('5:08', 0.0): 1, ('thesi', 0.0): 11, ('belieb', 0.0): 2, ('gtg', 0.0): 1, ('compet', 0.0): 1, ('vv', 0.0): 1, ('respect', 0.0): 5, ('opt-out', 0.0): 1, ('vam', 0.0): 1, ('spece', 0.0): 1, ('ell', 0.0): 1, ('articl', 0.0): 1, ('sexyameli', 0.0): 1, ('fineandyu', 0.0): 1, ('gd', 0.0): 1, ('flesh', 0.0): 1, ('daft', 0.0): 1, ('imsorri', 0.0): 1, ('aku', 0.0): 1, ('chelsea', 0.0): 2, ('koe', 0.0): 1, ('emyu', 0.0): 1, ('confetti', 0.0): 1, ('bf', 0.0): 2, ('sini', 0.0): 1, ('dipoppo', 0.0): 1, ('hop', 0.0): 2, ('bestweekend', 0.0): 1, ('okay-ish', 0.0): 1, ('html', 0.0): 1, ('geneva', 0.0): 1, ('patml', 0.0): 1, ('482', 0.0): 1, ('orgasm', 0.0): 3, ('abouti', 0.0): 1, ('797', 0.0): 1, ('reaalli', 0.0): 1, ('aldub', 0.0): 1, ('nila', 0.0): 1, ('smart', 0.0): 1, ('meter', 0.0): 1, ('display', 0.0): 1, ('unansw', 0.0): 1, ('bri', 0.0): 1, ('magcon', 0.0): 1, ('sinuend', 0.0): 1, ('kak', 0.0): 1, ('laper', 0.0): 2, ('rage', 0.0): 1, ('loser', 0.0): 1, ('brendon', 0.0): 1, (\"urie'\", 0.0): 1, ('sumer', 0.0): 1, ('repackag', 0.0): 1, (\":'d\", 0.0): 1, ('matthew', 0.0): 1, ('yongb', 0.0): 1, ('sued', 0.0): 1, ('suprem', 0.0): 1, ('warm-up', 0.0): 1, ('arriv', 0.0): 4, ('brill', 0.0): 1, ('120', 0.0): 1, ('rub', 0.0): 1, ('belli', 0.0): 1, ('jannatul', 0.0): 1, ('ferdou', 0.0): 1, ('ekta', 0.0): 1, ('kharap', 0.0): 1, ('manush', 0.0): 1, ('mart', 0.0): 2, ('gua', 0.0): 1, ('can', 0.0): 1, (\"khloe'\", 0.0): 1, ('nhe', 0.0): 1, ('yar', 0.0): 1, ('minkyuk', 0.0): 1, ('hol', 0.0): 1, ('isol', 0.0): 1, ('hk', 0.0): 1, ('sensor', 0.0): 1, ('broker', 0.0): 1, ('wna', 0.0): 1, ('flaviana', 0.0): 1, ('chickmt', 0.0): 1, ('123', 0.0): 1, ('letsfootbal', 0.0): 2, ('atk', 0.0): 2, ('greymind', 0.0): 2, ('43', 0.0): 2, ('gayl', 0.0): 2, ('cricket', 0.0): 3, ('2-3', 0.0): 2, ('mood-dump', 0.0): 1, ('livestream', 0.0): 1, ('gotten', 0.0): 1, ('felton', 0.0): 1, ('veriti', 0.0): 1, (\"standen'\", 0.0): 1, ('shortli', 0.0): 1, ('😆', 0.0): 2, ('takoyaki', 0.0): 1, ('piti', 0.0): 1, ('aisyah', 0.0): 1, ('ffvi', 0.0): 1, ('youtu.be/2_gpctsojkw', 0.0): 1, ('donutsss', 0.0): 1, ('50p', 0.0): 1, ('grate', 0.0): 1, ('spars', 0.0): 1, ('dd', 0.0): 1, ('lagi', 0.0): 1, ('rider', 0.0): 1, ('pride', 0.0): 1, ('hueee', 0.0): 1, ('password', 0.0): 1, ('thingi', 0.0): 1, ('georg', 0.0): 1, ('afraid', 0.0): 2, ('chew', 0.0): 2, ('toy', 0.0): 1, ('stella', 0.0): 1, ('threw', 0.0): 2, ('theaccidentalcoupl', 0.0): 1, ('smooth', 0.0): 1, ('handov', 0.0): 1, ('spick', 0.0): 1, ('bebii', 0.0): 1, ('happenend', 0.0): 1, ('dr', 0.0): 1, ('balm', 0.0): 1, ('hmph', 0.0): 1, ('bubba', 0.0): 2, ('floor', 0.0): 3, ('georgi', 0.0): 1, ('oi', 0.0): 1, ('bengali', 0.0): 1, ('masterchef', 0.0): 1, ('whatchya', 0.0): 1, ('petrol', 0.0): 1, ('diesel', 0.0): 1, ('wardrob', 0.0): 1, ('awe', 0.0): 1, ('cock', 0.0): 1, ('nyquil', 0.0): 1, ('poootek', 0.0): 1, ('1,500', 0.0): 1, ('bobbl', 0.0): 1, ('leak', 0.0): 1, ('thermo', 0.0): 1, ('classic', 0.0): 1, ('ti5', 0.0): 1, ('12th', 0.0): 1, ('skate', 0.0): 1, ('tae', 0.0): 1, ('kita', 0.0): 4, ('ia', 0.0): 1, ('pkwalasawa', 0.0): 1, ('india', 0.0): 1, ('corrupt', 0.0): 2, ('access', 0.0): 2, ('anything.sur', 0.0): 1, ('info', 0.0): 6, ('octob', 0.0): 1, ('mubank', 0.0): 2, ('ene', 0.0): 2, ('3k', 0.0): 1, ('zehr', 0.0): 1, ('khani', 0.0): 1, ('groceri', 0.0): 1, ('hubba', 0.0): 1, ('bubbl', 0.0): 1, ('gum', 0.0): 2, ('closet', 0.0): 1, ('jhalak', 0.0): 1, ('. ..', 0.0): 2, ('bakwa', 0.0): 1, ('. 
...', 0.0): 1, ('seehiah', 0.0): 1, ('goy', 0.0): 1, ('nacho', 0.0): 1, ('braid', 0.0): 2, ('initi', 0.0): 1, ('ruth', 0.0): 1, ('boong', 0.0): 1, ('recommend', 0.0): 3, ('gta', 0.0): 1, ('cwnt', 0.0): 1, ('trivia', 0.0): 1, ('belat', 0.0): 1, ('rohingya', 0.0): 1, ('muslim', 0.0): 2, ('indict', 0.0): 1, ('traffick', 0.0): 1, ('thailand', 0.0): 1, ('asia', 0.0): 1, ('rumbl', 0.0): 1, ('kumbl', 0.0): 1, ('scold', 0.0): 1, ('phrase', 0.0): 1, ('includ', 0.0): 1, ('tag', 0.0): 2, ('melt', 0.0): 1, ('tfw', 0.0): 1, ('jest', 0.0): 1, ('offend', 0.0): 2, ('sleepingwithsiren', 0.0): 1, ('17th', 0.0): 1, ('bringmethehorizon', 0.0): 1, ('18th', 0.0): 2, ('carva', 0.0): 1, ('regularli', 0.0): 2, ('sympathi', 0.0): 1, ('revamp', 0.0): 1, ('headphon', 0.0): 1, ('cunt', 0.0): 1, ('wacha', 0.0): 1, ('niend', 0.0): 1, ('bravo', 0.0): 1, ('2hr', 0.0): 1, ('13m', 0.0): 1, ('kk', 0.0): 2, ('calibraksaep', 0.0): 2, ('darlin', 0.0): 1, ('stun', 0.0): 1, (\"doedn't\", 0.0): 1, ('meaning', 0.0): 1, ('horrif', 0.0): 2, ('scoup', 0.0): 2, ('paypal', 0.0): 3, ('sweedi', 0.0): 1, ('nam', 0.0): 1, (\"sacconejoly'\", 0.0): 1, ('bethesda', 0.0): 1, ('fallout', 0.0): 1, ('minecon', 0.0): 1, ('perfect', 0.0): 2, ('katee', 0.0): 1, ('iloveyouu', 0.0): 1, ('linux', 0.0): 1, ('nawww', 0.0): 1, ('chikka', 0.0): 1, ('ug', 0.0): 1, ('rata', 0.0): 1, ('soonest', 0.0): 1, ('mwamwa', 0.0): 1, ('faggot', 0.0): 1, ('doubt', 0.0): 2, ('fyi', 0.0): 1, ('profil', 0.0): 1, ('nicest', 0.0): 1, ('mehendi', 0.0): 1, ('dash', 0.0): 1, ('bookmark', 0.0): 1, ('whay', 0.0): 1, ('shaa', 0.0): 1, ('prami', 0.0): 1, ('😚', 0.0): 4, ('ngee', 0.0): 1, ('ann', 0.0): 1, ('crikey', 0.0): 2, ('snit', 0.0): 1, ('nathanielhinanakit', 0.0): 1, ('naya', 0.0): 1, ('spinni', 0.0): 1, ('wheel', 0.0): 2, ('albeit', 0.0): 1, ('athlet', 0.0): 1, ('gfriend', 0.0): 2, ('yung', 0.0): 2, ('fugli', 0.0): 1, ('💞', 0.0): 4, ('jongda', 0.0): 1, ('hardli', 0.0): 2, ('tlist', 0.0): 1, ('budget', 0.0): 1, ('pabebegirl', 0.0): 1, ('pabeb', 0.0): 2, ('alter', 0.0): 1, ('sandra', 0.0): 2, ('bland', 0.0): 2, ('storifi', 0.0): 1, ('abbi', 0.0): 2, ('mtvhottest', 0.0): 1, ('gaga', 0.0): 1, ('rib', 0.0): 1, ('😵', 0.0): 1, ('hulkamania', 0.0): 1, ('unlov', 0.0): 1, ('lazi', 0.0): 3, ('ihhh', 0.0): 1, ('stackar', 0.0): 1, ('basil', 0.0): 1, ('remedi', 0.0): 1, ('ov', 0.0): 2, ('raiz', 0.0): 1, ('nvr', 0.0): 1, ('gv', 0.0): 1, ('up.wt', 0.0): 1, ('wt', 0.0): 1, ('imran', 0.0): 2, ('achiev', 0.0): 1, ('thr', 0.0): 1, ('soln', 0.0): 1, (\"sister'\", 0.0): 1, ('hong', 0.0): 1, ('kong', 0.0): 1, ('31st', 0.0): 1, ('pipe', 0.0): 1, ('sept', 0.0): 2, ('lawn', 0.0): 1, (\"cupid'\", 0.0): 1, ('torn', 0.0): 1, ('retain', 0.0): 1, ('clown', 0.0): 2, ('lipstick', 0.0): 1, ('haiss', 0.0): 1, ('todayi', 0.0): 1, ('thoo', 0.0): 1, ('everday', 0.0): 1, ('hangout', 0.0): 2, ('steven', 0.0): 2, ('william', 0.0): 1, ('umboh', 0.0): 1, ('goodafternoon', 0.0): 1, ('jadin', 0.0): 1, ('thiz', 0.0): 1, ('iz', 0.0): 1, ('emeg', 0.0): 1, ('kennat', 0.0): 1, ('reunit', 0.0): 1, ('abi', 0.0): 1, ('arctic', 0.0): 1, ('chicsirif', 0.0): 1, ('structur', 0.0): 1, ('cumbia', 0.0): 1, ('correct', 0.0): 1, ('badlif', 0.0): 1, ('4-5', 0.0): 2, ('kaslkdja', 0.0): 1, ('3wk', 0.0): 1, ('flower', 0.0): 1, ('feverfew', 0.0): 1, ('weddingflow', 0.0): 1, ('diyflow', 0.0): 1, ('fitn', 0.0): 1, ('worth', 0.0): 4, ('wolverin', 0.0): 1, ('khan', 0.0): 1, ('innoc', 0.0): 1, ('🙏', 0.0): 1, ('🎂', 0.0): 2, ('memem', 0.0): 2, ('krystoria', 0.0): 1, ('snob', 0.0): 1, ('zumba', 0.0): 1, ('greekcrisi', 0.0): 1, ('remain', 0.0): 1, 
('dutch', 0.0): 1, ('legibl', 0.0): 2, ('isra', 0.0): 1, ('passport', 0.0): 1, ('froze', 0.0): 1, ('theori', 0.0): 1, ('23rd', 0.0): 1, ('24th', 0.0): 1, ('stomachach', 0.0): 1, ('slice', 0.0): 1, ('ཀ', 0.0): 1, ('again', 0.0): 1, ('otani', 0.0): 1, ('3-0', 0.0): 1, ('3rd', 0.0): 3, ('bottom', 0.0): 2, ('niaaa', 0.0): 1, ('2/4', 0.0): 1, ('scheme', 0.0): 2, ('fckin', 0.0): 1, ('hii', 0.0): 1, ('vin', 0.0): 1, ('plss', 0.0): 1, ('rpli', 0.0): 1, ('rat', 0.0): 3, ('bollywood', 0.0): 1, ('mac', 0.0): 1, ('backup', 0.0): 2, ('lune', 0.0): 1, ('robinhood', 0.0): 1, ('robinhoodi', 0.0): 1, ('🚙', 0.0): 1, ('💚', 0.0): 1, ('docopenhagen', 0.0): 1, ('setter', 0.0): 1, ('swipe', 0.0): 1, ('bbygurl', 0.0): 1, ('neil', 0.0): 1, ('caribbean', 0.0): 1, ('6yr', 0.0): 1, ('jabongatpumaurbanstamped', 0.0): 2, ('takraw', 0.0): 1, ('fersure', 0.0): 1, ('angi', 0.0): 1, ('sheriff', 0.0): 1, ('aaag', 0.0): 1, (\"i'mo\", 0.0): 1, ('sulk', 0.0): 1, ('selfish', 0.0): 1, ('trick', 0.0): 2, ('nonc', 0.0): 1, ('pad', 0.0): 1, ('bison', 0.0): 1, ('motiv', 0.0): 2, (\"q'don\", 0.0): 1, ('cheat', 0.0): 2, ('stomp', 0.0): 1, ('aaaaaaaaah', 0.0): 1, ('kany', 0.0): 1, ('mama', 0.0): 1, ('jdjdjdjd', 0.0): 1, (\"jimin'\", 0.0): 1, ('fancaf', 0.0): 1, ('waffl', 0.0): 1, ('87.7', 0.0): 1, ('2fm', 0.0): 1, ('himseek', 0.0): 1, ('kissm', 0.0): 1, ('akua', 0.0): 1, ('glo', 0.0): 1, ('cori', 0.0): 1, ('monteith', 0.0): 1, ('often', 0.0): 1, ('hashbrown', 0.0): 1, ('💘', 0.0): 2, ('pg', 0.0): 1, ('msc', 0.0): 1, ('hierro', 0.0): 1, ('shirleycam', 0.0): 1, ('phonesex', 0.0): 2, ('pal', 0.0): 1, ('111', 0.0): 1, ('gilet', 0.0): 1, ('cheek', 0.0): 1, ('squishi', 0.0): 1, ('lahhh', 0.0): 1, ('eon', 0.0): 1, ('sunris', 0.0): 1, ('beeti', 0.0): 1, ('697', 0.0): 1, ('kikkomansabor', 0.0): 1, ('getaway', 0.0): 1, ('crimin', 0.0): 1, ('amiibo', 0.0): 1, ('batman', 0.0): 1, ('habe', 0.0): 1, ('siannn', 0.0): 1, ('march', 0.0): 1, ('2017', 0.0): 1, ('chuckin', 0.0): 1, ('ampsha', 0.0): 1, ('nia', 0.0): 1, ('strap', 0.0): 1, ('dz9055', 0.0): 1, ('entlead', 0.0): 1, ('590', 0.0): 1, ('twice', 0.0): 5, ('07:02', 0.0): 1, ('ifsc', 0.0): 1, ('mayor', 0.0): 1, ('biodivers', 0.0): 1, ('taxonom', 0.0): 1, ('collabor', 0.0): 1, ('speci', 0.0): 1, ('discoveri', 0.0): 1, ('collar', 0.0): 1, ('3:03', 0.0): 1, ('belt', 0.0): 1, ('smith', 0.0): 2, ('eyelin', 0.0): 1, ('therefor', 0.0): 1, ('netherland', 0.0): 1, ('el', 0.0): 1, ('jeb', 0.0): 1, ('blacklivesmatt', 0.0): 1, ('slogan', 0.0): 1, ('msnbc', 0.0): 1, ('jebbush', 0.0): 1, ('famish', 0.0): 1, ('marino', 0.0): 1, ('qualifi', 0.0): 2, ('suzi', 0.0): 1, ('skirt', 0.0): 1, ('tama', 0.0): 1, ('warrior', 0.0): 2, ('wound', 0.0): 1, ('iraq', 0.0): 1, ('be', 0.0): 2, ('camara', 0.0): 1, ('coveral', 0.0): 1, ('happili', 0.0): 1, ('sneezi', 0.0): 1, ('rogerwatch', 0.0): 1, ('stalker', 0.0): 1, ('velvet', 0.0): 1, ('tradit', 0.0): 1, (\"people'\", 0.0): 1, ('beheaviour', 0.0): 1, (\"robert'\", 0.0): 1, ('.\\n.', 0.0): 2, ('aaron', 0.0): 1, ('jelous', 0.0): 1, ('mtg', 0.0): 1, ('thoughtseiz', 0.0): 1, ('playabl', 0.0): 1, ('oldi', 0.0): 1, ('goodi', 0.0): 1, ('mcg', 0.0): 1, ('inspirit', 0.0): 1, ('shine', 0.0): 1, ('ise', 0.0): 1, ('assum', 0.0): 2, ('waist', 0.0): 2, ('guin', 0.0): 1, ('venu', 0.0): 1, ('evil', 0.0): 1, ('pepper', 0.0): 1, ('thessidew', 0.0): 1, ('877', 0.0): 1, ('genesi', 0.0): 1, ('mexico', 0.0): 2, ('novemb', 0.0): 1, ('mash', 0.0): 1, ('whattsap', 0.0): 1, ('inuyasha', 0.0): 2, ('outfwith', 0.0): 1, ('myungsoo', 0.0): 1, ('organis', 0.0): 1, ('satisfi', 0.0): 1, ('wah', 0.0): 1, 
('challo', 0.0): 1, ('pliss', 0.0): 1, ('juliana', 0.0): 1, ('enrol', 0.0): 1, ('darlen', 0.0): 1, ('emoji', 0.0): 2, ('brisban', 0.0): 1, ('merlin', 0.0): 1, ('nawwwe', 0.0): 1, ('hyperbulli', 0.0): 1, ('tong', 0.0): 1, ('nga', 0.0): 1, ('seatmat', 0.0): 1, ('rajud', 0.0): 1, ('barkada', 0.0): 1, ('ore', 0.0): 1, ('kayla', 0.0): 1, ('ericavan', 0.0): 1, ('jong', 0.0): 1, ('dongwoo', 0.0): 1, ('photocard', 0.0): 1, ('wh', 0.0): 1, ('dw', 0.0): 1, ('tumor', 0.0): 1, ('vivian', 0.0): 1, ('mmsmalubhangsakit', 0.0): 1, ('jillcruz', 0.0): 2, ('lgbt', 0.0): 3, ('qt', 0.0): 1, ('19th', 0.0): 1, ('toss', 0.0): 1, ('co-work', 0.0): 1, ('mia', 0.0): 1, ('push', 0.0): 4, ('dare', 0.0): 2, ('unsettl', 0.0): 1, ('gh', 0.0): 1, ('18c', 0.0): 1, ('rlli', 0.0): 2, ('hamster', 0.0): 2, ('sheeran', 0.0): 2, ('preform', 0.0): 2, ('monash', 0.0): 1, ('hitmark', 0.0): 1, ('glitch', 0.0): 1, ('safaa', 0.0): 1, (\"selena'\", 0.0): 1, ('galat', 0.0): 1, ('tum', 0.0): 1, ('ab', 0.0): 5, ('non', 0.0): 1, ('lrka', 0.0): 1, ('bna', 0.0): 1, ('kia', 0.0): 1, ('bhook', 0.0): 1, ('jai', 0.0): 1, ('social', 0.0): 2, ('afterschool', 0.0): 1, ('bilal', 0.0): 1, ('ashraf', 0.0): 1, ('icu', 0.0): 1, ('thanksss', 0.0): 1, ('annnd', 0.0): 1, ('winchest', 0.0): 1, ('{:', 0.0): 1, ('grepe', 0.0): 1, ('grepein', 0.0): 1, ('panem', 0.0): 1, ('lover', 0.0): 1, ('sulli', 0.0): 1, ('cpm', 0.0): 1, ('condemn', 0.0): 1, ('✔', 0.0): 1, ('occur', 0.0): 1, ('unagi', 0.0): 1, ('7elw', 0.0): 1, ('mesh', 0.0): 1, ('beyt', 0.0): 1, ('3a2ad', 0.0): 1, ('fluent', 0.0): 1, ('varsiti', 0.0): 1, ('sengenza', 0.0): 1, ('context', 0.0): 1, ('movnat', 0.0): 1, ('yield', 0.0): 1, ('nbhero', 0.0): 1, (\"it'd\", 0.0): 1, ('background', 0.0): 1, ('agov', 0.0): 1, ('brasileirao', 0.0): 2, ('abus', 0.0): 1, ('unpar', 0.0): 1, ('bianca', 0.0): 1, ('bun', 0.0): 1, ('dislik', 0.0): 1, ('burdensom', 0.0): 1, ('clear', 0.0): 2, ('amelia', 0.0): 1, ('melon', 0.0): 2, ('useless', 0.0): 1, ('soccer', 0.0): 2, ('interview', 0.0): 2, ('thursday', 0.0): 1, ('nevermind', 0.0): 1, ('jeon', 0.0): 1, ('claw', 0.0): 1, ('thigh', 0.0): 2, ('traction', 0.0): 1, ('damnit', 0.0): 1, ('pri', 0.0): 1, ('pv', 0.0): 2, ('reliv', 0.0): 1, ('nyc', 0.0): 2, ('klm', 0.0): 1, ('11am', 0.0): 1, (\"mcd'\", 0.0): 1, ('hung', 0.0): 1, ('bam', 0.0): 1, ('seventh', 0.0): 1, ('splendour', 0.0): 1, ('swedish', 0.0): 1, ('metal', 0.0): 1, ('häirførc', 0.0): 1, ('givecodpieceach', 0.0): 1, ('alic', 0.0): 3, ('stile', 0.0): 1, ('explain', 0.0): 3, ('ili', 0.0): 1, ('pragu', 0.0): 1, ('sadi', 0.0): 1, ('charact', 0.0): 1, ('915', 0.0): 1, ('hayee', 0.0): 2, ('patwari', 0.0): 1, ('mam', 0.0): 1, (\"ik'\", 0.0): 1, ('vision', 0.0): 2, ('ga', 0.0): 1, ('awhhh', 0.0): 1, ('nalang', 0.0): 1, ('hehe', 0.0): 1, ('albanian', 0.0): 1, ('curs', 0.0): 2, ('tava', 0.0): 1, ('chara', 0.0): 1, ('teteh', 0.0): 1, ('verri', 0.0): 1, ('shatter', 0.0): 2, ('sb', 0.0): 1, ('nawe', 0.0): 1, ('bulldog', 0.0): 1, ('macho', 0.0): 1, ('puriti', 0.0): 1, ('kwento', 0.0): 1, ('nakakapikon', 0.0): 1, ('nagbabasa', 0.0): 1, ('blog', 0.0): 2, ('cancer', 0.0): 1, (':-\\\\', 0.0): 1, ('jonatha', 0.0): 4, ('beti', 0.0): 4, ('sogok', 0.0): 1, ('premium', 0.0): 2, ('instrument', 0.0): 1, ('howev', 0.0): 1, ('dastardli', 0.0): 1, ('swine', 0.0): 1, ('envelop', 0.0): 1, ('pipol', 0.0): 1, ('tad', 0.0): 1, ('wiper', 0.0): 2, ('supposedli', 0.0): 1, ('kernel', 0.0): 1, ('intel', 0.0): 1, ('mega', 0.0): 1, ('bent', 0.0): 1, ('socket', 0.0): 1, ('pcgame', 0.0): 1, ('pcupgrad', 0.0): 1, ('brainwash', 0.0): 2, ('smosh', 0.0): 1, 
('plawnew', 0.0): 1, ('837', 0.0): 1, ('aswel', 0.0): 1, ('litter', 0.0): 1, ('mensch', 0.0): 1, ('sepanx', 0.0): 1, ('pci', 0.0): 1, ('caerphilli', 0.0): 1, ('omw', 0.0): 1, ('😍', 0.0): 1, ('hahdhdhshh', 0.0): 1, ('growinguppoor', 0.0): 1, ('🇺', 0.0): 2, ('🇸', 0.0): 2, (\"bangtan'\", 0.0): 1, ('taimoor', 0.0): 1, ('meray', 0.0): 1, ('dost', 0.0): 1, ('tya', 0.0): 1, ('refollow', 0.0): 1, ('dumb', 0.0): 2, ('butt', 0.0): 1, ('pissbabi', 0.0): 1, ('plank', 0.0): 1, ('inconsist', 0.0): 1, ('moor', 0.0): 1, ('bin', 0.0): 1, ('osx', 0.0): 1, ('chrome', 0.0): 1, ('voiceov', 0.0): 1, ('devo', 0.0): 1, ('hulkhogan', 0.0): 1, ('unpleas', 0.0): 1, ('daaamn', 0.0): 1, ('dada', 0.0): 1, ('fulli', 0.0): 1, ('spike', 0.0): 1, (\"panic'\", 0.0): 1, ('22nd', 0.0): 1, ('south', 0.0): 2, ('africa', 0.0): 2, ('190', 0.0): 2, ('lizardz', 0.0): 1, ('deepli', 0.0): 1, ('emerg', 0.0): 1, ('engin', 0.0): 1, ('dormtel', 0.0): 1, ('scho', 0.0): 1, ('siya', 0.0): 1, ('onee', 0.0): 1, ('carri', 0.0): 1, ('7pm', 0.0): 1, ('feta', 0.0): 1, ('blaaaz', 0.0): 1, ('nausea', 0.0): 1, ('awar', 0.0): 1, ('top-up', 0.0): 1, ('sharknado', 0.0): 1, ('erni', 0.0): 1, ('ezoo', 0.0): 1, ('lilybutl', 0.0): 1, ('seduc', 0.0): 2, ('powai', 0.0): 1, ('neighbor', 0.0): 1, ('delhi', 0.0): 1, ('unsaf', 0.0): 1, ('halo', 0.0): 1, ('fred', 0.0): 1, ('gaon', 0.0): 1, ('infnt', 0.0): 1, ('elig', 0.0): 1, ('acub', 0.0): 1, (\"why'd\", 0.0): 1, ('bullshit', 0.0): 2, ('hanaaa', 0.0): 1, ('jn', 0.0): 1, ('tau', 0.0): 1, ('basta', 0.0): 1, ('sext', 0.0): 1, ('addm', 0.0): 1, ('hotmusicdeloco', 0.0): 2, ('dhi', 0.0): 1, ('👉', 0.0): 1, ('8ball', 0.0): 1, ('fakmarey', 0.0): 1, ('doo', 0.0): 2, ('six', 0.0): 3, ('flag', 0.0): 1, ('fulltim', 0.0): 1, ('awkward', 0.0): 1, ('beet', 0.0): 1, ('juic', 0.0): 1, ('dci', 0.0): 1, ('granddad', 0.0): 1, ('minion', 0.0): 3, ('bucket', 0.0): 1, ('kapan', 0.0): 1, ('udah', 0.0): 1, ('dihapu', 0.0): 1, ('hilang', 0.0): 1, ('dari', 0.0): 1, ('muka', 0.0): 1, ('bumi', 0.0): 1, ('narrow', 0.0): 1, ('gona', 0.0): 2, ('chello', 0.0): 1, ('gate', 0.0): 1, ('guard', 0.0): 1, ('crepe', 0.0): 1, ('forsaken', 0.0): 1, ('kanin', 0.0): 1, ('hypixel', 0.0): 1, ('grrr', 0.0): 1, ('thestruggleisr', 0.0): 1, ('geek', 0.0): 1, ('gamer', 0.0): 2, ('afterbirth', 0.0): 1, (\"apink'\", 0.0): 1, ('overperhatian', 0.0): 1, ('son', 0.0): 1, ('pox', 0.0): 1, ('ahm', 0.0): 1, ('karli', 0.0): 1, ('kloss', 0.0): 1, ('goofi', 0.0): 1, ('pcd', 0.0): 1, ('antagonis', 0.0): 1, ('writer', 0.0): 1, ('nudg', 0.0): 1, ('delv', 0.0): 1, ('grandad', 0.0): 1, (\"gray'\", 0.0): 1, ('followk', 0.0): 1, ('suggest', 0.0): 2, ('pace', 0.0): 1, ('maker', 0.0): 1, ('molli', 0.0): 1, ('higher', 0.0): 1, ('ceremoni', 0.0): 1, ('christin', 0.0): 1, ('moodi', 0.0): 1, ('throwback', 0.0): 1, ('fav', 0.0): 3, ('barb', 0.0): 1, ('creasi', 0.0): 1, ('deputi', 0.0): 1, ('tast', 0.0): 1, (\"banana'\", 0.0): 1, ('saludo', 0.0): 1, ('dissapoint', 0.0): 1, ('😫', 0.0): 1, ('<--', 0.0): 1, (\"bae'\", 0.0): 1, ('pimpl', 0.0): 2, ('amount', 0.0): 2, ('tdi', 0.0): 1, ('pamela', 0.0): 1, ('mini', 0.0): 1, ('mast', 0.0): 1, ('intermitt', 0.0): 1, ('servic', 0.0): 3, ('janniecam', 0.0): 1, ('musicbiz', 0.0): 1, ('braxton', 0.0): 1, ('pro', 0.0): 2, ('urban', 0.0): 1, ('unpreced', 0.0): 1, ('tebow', 0.0): 1, ('okaaay', 0.0): 1, ('sayanggg', 0.0): 1, ('housework', 0.0): 1, ('bust', 0.0): 2, ('disneyland', 0.0): 1, ('thoma', 0.0): 1, ('tommyy', 0.0): 1, ('billi', 0.0): 1, ('kevin', 0.0): 1, ('clifton', 0.0): 1, ('strictli', 0.0): 1, ('nsc', 0.0): 1, ('mat', 0.0): 1, ('0', 0.0): 1, 
('awhh', 0.0): 1, ('ram', 0.0): 2, ('voucher', 0.0): 1, ('smadvow', 0.0): 1, ('544', 0.0): 1, ('acdc', 0.0): 1, ('aker', 0.0): 1, ('gmail', 0.0): 1, ('sprevelink', 0.0): 1, ('633', 0.0): 1, ('lana', 0.0): 2, ('loveyoutilltheendcart', 0.0): 1, ('sfv', 0.0): 1, ('6/7', 0.0): 1, ('winner', 0.0): 1, ('20/1', 0.0): 1, ('david', 0.0): 1, ('rosi', 0.0): 1, ('hayoung', 0.0): 1, ('nlb', 0.0): 1, ('@_', 0.0): 1, ('tayo', 0.0): 1, ('forth', 0.0): 1, ('suspect', 0.0): 1, ('mening', 0.0): 1, ('viral', 0.0): 1, ('tonsil', 0.0): 1, ('😷', 0.0): 1, ('😝', 0.0): 1, ('babyy', 0.0): 2, ('cushion', 0.0): 1, ('😿', 0.0): 1, ('💓', 0.0): 2, ('weigh', 0.0): 1, ('keen', 0.0): 1, ('petrofac', 0.0): 1, (';-)', 0.0): 1, ('wig', 0.0): 1, (\"mark'\", 0.0): 1, ('pathet', 0.0): 1, ('burden.say', 0.0): 1, ('itchi', 0.0): 1, ('cheaper', 0.0): 1, ('malaysia', 0.0): 1, ('130', 0.0): 1, ('snapchattimg', 0.0): 1, ('😏', 0.0): 4, ('sin', 0.0): 1, ('lor', 0.0): 1, ('dedic', 0.0): 1, ('worriedli', 0.0): 1, ('stare', 0.0): 1, ('toneadi', 0.0): 1, ('46532', 0.0): 1, ('snapdirti', 0.0): 1, ('sheskindahot', 0.0): 1, ('corps', 0.0): 1, ('taeni', 0.0): 1, ('fyeah', 0.0): 1, ('andromeda', 0.0): 1, ('yunni', 0.0): 1, ('whdjwksja', 0.0): 1, ('ziam', 0.0): 1, ('100k', 0.0): 1, ('spoil', 0.0): 1, ('curtain', 0.0): 1, ('watchabl', 0.0): 1, ('migrin', 0.0): 1, ('gdce', 0.0): 1, ('gamescom', 0.0): 1, (\"do't\", 0.0): 1, ('parcel', 0.0): 1, ('num', 0.0): 1, ('oooouch', 0.0): 1, ('pinki', 0.0): 1, ('👣', 0.0): 1, ('podiatrist', 0.0): 1, ('gusto', 0.0): 1, (\"rodic'\", 0.0): 1, (\"one'\", 0.0): 1, ('adoohh', 0.0): 1, ('b-butt', 0.0): 1, ('tigermilk', 0.0): 1, ('east', 0.0): 1, ('dulwich', 0.0): 1, ('intens', 0.0): 1, ('kagami', 0.0): 1, ('kuroko', 0.0): 1, ('sana', 0.0): 2, ('makita', 0.0): 1, ('spooki', 0.0): 1, ('smol', 0.0): 1, ('bean', 0.0): 1, ('fagan', 0.0): 1, ('meadowhal', 0.0): 1, ('lola', 0.0): 1, ('nadalaw', 0.0): 1, ('labyu', 0.0): 1, ('jot', 0.0): 1, ('ivypowel', 0.0): 1, ('homeslic', 0.0): 1, ('emoticon', 0.0): 2, ('eyebrow', 0.0): 1, ('prettylook', 0.0): 1, ('whitney', 0.0): 1, ('houston', 0.0): 1, ('aur', 0.0): 1, ('shamil', 0.0): 1, ('tonn', 0.0): 1, ('statu', 0.0): 1, ('→', 0.0): 1, ('suddenli', 0.0): 2, ('alli', 0.0): 2, ('wrap', 0.0): 1, ('neck', 0.0): 1, ('heartbroken', 0.0): 1, ('chover', 0.0): 1, ('cebu', 0.0): 1, ('lechon', 0.0): 1, ('kitten', 0.0): 2, ('jannygreen', 0.0): 2, ('suicid', 0.0): 2, ('forgiv', 0.0): 1, ('conno', 0.0): 1, ('brooo', 0.0): 1, ('rout', 0.0): 1, ('lovebox', 0.0): 1, ('prod', 0.0): 1, ('osad', 0.0): 1, ('scam', 0.0): 1, ('itb', 0.0): 1, ('omigod', 0.0): 1, ('ehem', 0.0): 1, ('ala', 0.0): 1, ('yeke', 0.0): 1, ('jumpa', 0.0): 1, ('😋', 0.0): 1, ('ape', 0.0): 1, ('1.2', 0.0): 1, ('map', 0.0): 1, ('namin', 0.0): 1, ('govt', 0.0): 1, ('e-petit', 0.0): 1, ('pretend', 0.0): 1, ('irk', 0.0): 1, ('ruess', 0.0): 1, ('program', 0.0): 1, ('aigoo', 0.0): 1, ('doujin', 0.0): 1, ('killua', 0.0): 1, ('ginggon', 0.0): 1, ('guys.al', 0.0): 1, ('ytd', 0.0): 1, ('pdapaghimok', 0.0): 1, ('flexibl', 0.0): 1, ('sheet', 0.0): 1, ('nanaman', 0.0): 1, ('pinay', 0.0): 1, ('pie', 0.0): 1, ('jadi', 0.0): 1, ('langsung', 0.0): 1, ('flasback', 0.0): 1, ('franc', 0.0): 1, (':|', 0.0): 1, ('lo', 0.0): 1, ('nicknam', 0.0): 1, ('involv', 0.0): 1, ('scrape', 0.0): 1, ('pile', 0.0): 1, ('sare', 0.0): 1, ('bandar', 0.0): 1, ('varg', 0.0): 1, ('hammer', 0.0): 1, ('lolo', 0.0): 1, ('xbsbabnb', 0.0): 1, ('stilll', 0.0): 1, ('apma', 0.0): 2, ('leadership', 0.0): 1, ('wakeupgop', 0.0): 1, ('mv', 0.0): 1, ('bull', 0.0): 1, ('trafficcc', 0.0): 1, 
('oscar', 0.0): 1, ('pornographi', 0.0): 1, ('slutsham', 0.0): 1, ('ect', 0.0): 1, ('poland', 0.0): 1, ('faraway', 0.0): 1, ('700', 0.0): 1, ('800', 0.0): 1, ('cgi', 0.0): 1, ('pun', 0.0): 1, (\"x'\", 0.0): 1, ('osaka', 0.0): 1, ('junior', 0.0): 1, ('aytona', 0.0): 1, ('hala', 0.0): 1, ('mathird', 0.0): 1, ('jkjk', 0.0): 1, ('backtrack', 0.0): 1, ('util', 0.0): 1, ('pat', 0.0): 1, ('jay', 0.0): 2, ('broh', 0.0): 1, ('calll', 0.0): 1, ('icaru', 0.0): 1, ('awn', 0.0): 1, ('bach', 0.0): 1, ('court', 0.0): 1, ('landlord', 0.0): 1, (\"mp'\", 0.0): 1, ('dame', 0.0): 1, ('gossip', 0.0): 1, ('purpl', 0.0): 2, ('tie', 0.0): 1, ('ishii', 0.0): 1, ('clara', 0.0): 1, ('yile', 0.0): 1, ('whatev', 0.0): 1, ('stil', 0.0): 1, ('sidharth', 0.0): 1, ('ndabenhl', 0.0): 1, ('doggi', 0.0): 1, ('antag', 0.0): 1, ('41', 0.0): 1, ('thu', 0.0): 1, ('jenner', 0.0): 1, ('troubleshoot', 0.0): 1, (\"convo'\", 0.0): 1, ('dem', 0.0): 1, ('tix', 0.0): 2, ('automat', 0.0): 1, ('redirect', 0.0): 1, ('gigi', 0.0): 1, ('carter', 0.0): 1, ('corn', 0.0): 2, ('chip', 0.0): 2, ('nnnooo', 0.0): 1, ('cz', 0.0): 1, ('gorilla', 0.0): 1, ('hbm', 0.0): 1, ('humid', 0.0): 1, ('admir', 0.0): 1, ('consist', 0.0): 1, ('jason', 0.0): 1, (\"shackell'\", 0.0): 1, ('podcast', 0.0): 1, ('envi', 0.0): 1, ('twer', 0.0): 1, ('782', 0.0): 1, ('hahaahahahaha', 0.0): 1, ('sm1', 0.0): 1, ('mutil', 0.0): 1, ('robot', 0.0): 1, ('destroy', 0.0): 1, ('freakin', 0.0): 1, ('haestarr', 0.0): 1, ('😀', 0.0): 3, ('audio', 0.0): 1, ('snippet', 0.0): 1, ('brotherhood', 0.0): 1, ('mefd', 0.0): 1, ('diana', 0.0): 1, ('master', 0.0): 1, ('led', 0.0): 1, ('award', 0.0): 1, ('meowkd', 0.0): 1, ('complic', 0.0): 1, (\"c'mon\", 0.0): 1, (\"swimmer'\", 0.0): 1, ('leh', 0.0): 1, ('corner', 0.0): 1, ('didnot', 0.0): 1, ('usanel', 0.0): 2, ('nathan', 0.0): 1, ('micha', 0.0): 1, ('fave', 0.0): 2, ('creep', 0.0): 1, ('throughout', 0.0): 1, ('whose', 0.0): 1, ('ave', 0.0): 1, ('tripl', 0.0): 1, ('lectur', 0.0): 1, ('2-5', 0.0): 1, ('jaw', 0.0): 1, ('quarter', 0.0): 1, ('soni', 0.0): 1, ('followmeaaron', 0.0): 1, ('tzelumxoxo', 0.0): 1, ('drank', 0.0): 1, ('mew', 0.0): 1, ('indic', 0.0): 1, ('ouliv', 0.0): 1, ('70748', 0.0): 1, ('viernesderolenahot', 0.0): 1, ('longmorn', 0.0): 1, ('tobermori', 0.0): 1, ('32', 0.0): 1, ('tail', 0.0): 1, ('recuerda', 0.0): 1, ('tanto', 0.0): 1, ('bath', 0.0): 1, ('muna', 0.0): 1, ('await', 0.0): 1, ('urslef', 0.0): 1, ('lime', 0.0): 1, ('truckload', 0.0): 1, ('favour', 0.0): 2, ('spectat', 0.0): 1, ('sail', 0.0): 1, (\"w'end\", 0.0): 1, ('bbc', 0.0): 1, ('‘', 0.0): 1, ('foil', 0.0): 1, ('ac45', 0.0): 1, ('catamaran', 0.0): 1, ('peli', 0.0): 1, ('829', 0.0): 1, ('sextaatequemfimseguesdvcomvalentino', 0.0): 1, ('befor', 0.0): 1, ('valu', 0.0): 1, ('cinnamon', 0.0): 1, ('mtap', 0.0): 1, ('peng', 0.0): 1, ('frozen', 0.0): 1, ('bagu', 0.0): 1, ('emang', 0.0): 1, ('engg', 0.0): 1, ('cmc', 0.0): 1, ('mage', 0.0): 1, ('statement', 0.0): 1, ('moodsw', 0.0): 1, ('termin', 0.0): 1, ('men', 0.0): 1, ('peep', 0.0): 1, ('multipl', 0.0): 1, ('mef', 0.0): 1, ('rebound', 0.0): 1, ('pooor', 0.0): 1, ('2am', 0.0): 1, ('perpetu', 0.0): 1, ('bitchfac', 0.0): 1, ('clever', 0.0): 1, ('iceland', 0.0): 1, ('zayn_come_back_we_miss_y', 0.0): 1, ('pmsl', 0.0): 1, ('mianh', 0.0): 1, ('milkeu', 0.0): 1, ('lrt', 0.0): 1, ('bambam', 0.0): 1, ('soda', 0.0): 1, ('payback', 0.0): 1, ('87000', 0.0): 1, ('jobe', 0.0): 1, ('muchi', 0.0): 1, ('🎈', 0.0): 1, ('bathroom', 0.0): 1, ('lagg', 0.0): 1, ('banget', 0.0): 1, ('novel', 0.0): 1, (\"there'd\", 0.0): 1, ('invis', 0.0): 1, 
('scuttl', 0.0): 1, ('worm', 0.0): 1, ('bauuukkk', 0.0): 1, ('jessica', 0.0): 1, ('5:15', 0.0): 1, ('argument', 0.0): 1, ('couldnt', 0.0): 2, ('yepp', 0.0): 1, ('😺', 0.0): 1, ('💒', 0.0): 1, ('💎', 0.0): 1, ('feelin', 0.0): 1, ('biscuit', 0.0): 1, ('slather', 0.0): 1, ('jsut', 0.0): 1, ('belov', 0.0): 1, ('grandmoth', 0.0): 1, ('princess', 0.0): 2, ('babee', 0.0): 1, ('demn', 0.0): 1, ('hotaisndonwyvauwjoqhsjsnaihsuswtf', 0.0): 1, ('sia', 0.0): 1, ('niram', 0.0): 1, ('geng', 0.0): 1, ('fikri', 0.0): 1, ('tirtagangga', 0.0): 1, ('char', 0.0): 1, ('font', 0.0): 2, ('riprishikeshwari', 0.0): 1, ('creamist', 0.0): 1, ('challeng', 0.0): 1, ('substitut', 0.0): 1, ('skin', 0.0): 1, ('cplt', 0.0): 1, ('cp', 0.0): 1, ('hannah', 0.0): 1, ('💙', 0.0): 1, ('opu', 0.0): 1, ('inner', 0.0): 1, ('pleasur', 0.0): 1, ('bbq', 0.0): 1, ('33', 0.0): 1, ('lolliv', 0.0): 1, ('split', 0.0): 3, ('collat', 0.0): 2, ('spilt', 0.0): 2, ('quitkarwaoyaaro', 0.0): 1, ('deacti̇v', 0.0): 1, ('2.5', 0.0): 1, ('g2a', 0.0): 1, ('sherep', 0.0): 1, ('nemen', 0.0): 1, ('behey', 0.0): 1, ('motherfuck', 0.0): 1, ('tattoo', 0.0): 1, ('reec', 0.0): 1, ('vm', 0.0): 1, ('deth', 0.0): 2, ('lest', 0.0): 1, ('gp', 0.0): 1, ('departur', 0.0): 1, ('wipe', 0.0): 1, ('yuck', 0.0): 1, ('ystrday', 0.0): 1, ('seolhyun', 0.0): 1, ('drama', 0.0): 1, ('spici', 0.0): 1, ('owl', 0.0): 1, ('mumbai', 0.0): 1, (\"pj'\", 0.0): 1, ('wallpap', 0.0): 1, ('cba', 0.0): 1, ('hotter', 0.0): 1, ('rec', 0.0): 1, ('gotdamn', 0.0): 1, ('baaack', 0.0): 1, ('honest', 0.0): 1, ('srw', 0.0): 1, ('mobag', 0.0): 1, ('dunno', 0.0): 1, ('stroke', 0.0): 1, ('gnr', 0.0): 1, ('backstag', 0.0): 1, ('slash', 0.0): 1, ('prolli', 0.0): 1, ('bunni', 0.0): 1, ('sooner', 0.0): 1, ('analyst', 0.0): 1, ('expedia', 0.0): 1, ('bellevu', 0.0): 1, ('prison', 0.0): 1, ('alcohol', 0.0): 1, ('huhuh', 0.0): 1, ('heartburn', 0.0): 1, ('awalmu', 0.0): 1, ('njareeem', 0.0): 1, ('maggi', 0.0): 1, ('psycho', 0.0): 1, ('wahhh', 0.0): 1, ('abudhabi', 0.0): 1, ('hiby', 0.0): 1, ('shareyoursumm', 0.0): 1, ('b8', 0.0): 1, ('must.b', 0.0): 1, ('dairi', 0.0): 1, ('produxt', 0.0): 1, ('lactos', 0.0): 2, ('midland', 0.0): 1, ('knacker', 0.0): 1, ('footag', 0.0): 1, ('lifeless', 0.0): 1, ('shell', 0.0): 1, ('44', 0.0): 1, ('7782', 0.0): 1, ('pengen', 0.0): 1, ('girlll', 0.0): 1, ('tsunami', 0.0): 1, ('indi', 0.0): 1, ('nick', 0.0): 1, ('tirad', 0.0): 1, ('stoop', 0.0): 1, ('lower', 0.0): 1, ('role', 0.0): 1, ('thunder', 0.0): 1, ('paradis', 0.0): 1, ('habit', 0.0): 1, ('facad', 0.0): 1, ('democraci', 0.0): 1, ('brat', 0.0): 1, ('tb', 0.0): 1, (\"o'\", 0.0): 1, ('bade', 0.0): 1, ('fursat', 0.0): 1, ('usey', 0.0): 2, ('banaya', 0.0): 1, ('uppar', 0.0): 1, ('waal', 0.0): 1, ('ney', 0.0): 1, ('afso', 0.0): 1, ('hums', 0.0): 1, ('dur', 0.0): 1, ('wo', 0.0): 1, (\"who'd\", 0.0): 1, ('naruhina', 0.0): 1, ('namee', 0.0): 1, ('haiqal', 0.0): 1, ('360hr', 0.0): 1, ('picc', 0.0): 1, ('instor', 0.0): 1, ('pre-vot', 0.0): 1, ('5th', 0.0): 1, ('usernam', 0.0): 1, ('minho', 0.0): 1, ('durian', 0.0): 1, ('strudel', 0.0): 1, ('tsk', 0.0): 1, ('marin', 0.0): 1, ('kailan', 0.0): 1, ('separ', 0.0): 1, ('payday', 0.0): 1, ('payhour', 0.0): 1, ('immedi', 0.0): 1, ('natur', 0.0): 1, ('pre-ord', 0.0): 1, ('fwm', 0.0): 1, ('guppi', 0.0): 1, ('poorkid', 0.0): 1, ('lack', 0.0): 1, ('misunderstood', 0.0): 1, ('cuddli', 0.0): 1, ('scratch', 0.0): 1, ('thumb', 0.0): 1, ('compens', 0.0): 1, ('kirkiri', 0.0): 1, ('phase', 0.0): 1, ('wonho', 0.0): 1, ('visual', 0.0): 1, (\"='(\", 0.0): 1, ('mission', 0.0): 1, ('pap', 0.0): 1, ('danzel', 
0.0): 1, ('craft', 0.0): 1, ('devil', 0.0): 1, ('phil', 0.0): 1, ('sheff', 0.0): 1, ('york', 0.0): 1, ('visa', 0.0): 1, ('gim', 0.0): 1, ('bench', 0.0): 1, ('harm', 0.0): 1, ('yolo', 0.0): 1, ('bloat', 0.0): 1, ('olli', 0.0): 1, ('alterni', 0.0): 1, ('earth', 0.0): 1, ('influenc', 0.0): 1, ('overal', 0.0): 1, ('continent', 0.0): 1, ('🔫', 0.0): 1, ('tank', 0.0): 1, ('thirsti', 0.0): 1, ('konami', 0.0): 1, ('polici', 0.0): 1, ('ranti', 0.0): 1, ('atm', 0.0): 1, ('pervers', 0.0): 1, ('bylfnnz', 0.0): 1, ('ban', 0.0): 1, ('failsatlif', 0.0): 1, ('press', 0.0): 1, ('duper', 0.0): 1, ('waaah', 0.0): 1, ('jaebum', 0.0): 1, ('ahmad', 0.0): 1, ('maslan', 0.0): 1, ('hull', 0.0): 1, ('misser', 0.0): 1}\n" ] ], [ [ "Unfortunately, this does not help much to understand the data. It would be better to visualize this output to gain better insights.", "_____no_output_____" ], [ "## Table of word counts", "_____no_output_____" ], [ "We will select a set of words that we would like to visualize. It is better to store this temporary information in a table that is very easy to use later.", "_____no_output_____" ] ], [ [ "# select some words to appear in the report. we will assume that each word is unique (i.e. no duplicates)\nkeys = ['happi', 'merri', 'nice', 'good', 'bad', 'sad', 'mad', 'best', 'pretti',\n '❤', ':)', ':(', '😒', '😬', '😄', '😍', '♛',\n 'song', 'idea', 'power', 'play', 'magnific']\n\n# list representing our table of word counts.\n# each element consist of a sublist with this pattern: [<word>, <positive_count>, <negative_count>]\ndata = []\n\n# loop through our selected words\nfor word in keys:\n \n # initialize positive and negative counts\n pos = 0\n neg = 0\n \n # retrieve number of positive counts\n if (word, 1) in freqs:\n pos = freqs[(word, 1)]\n \n # retrieve number of negative counts\n if (word, 0) in freqs:\n neg = freqs[(word, 0)]\n \n # append the word counts to the table\n data.append([word, pos, neg])\n \ndata", "_____no_output_____" ] ], [ [ "We can then use a scatter plot to inspect this table visually. Instead of plotting the raw counts, we will plot it in the logarithmic scale to take into account the wide discrepancies between the raw counts (e.g. `:)` has 3568 counts in the positive while only 2 in the negative). The red line marks the boundary between positive and negative areas. Words close to the red line can be classified as neutral. ", "_____no_output_____" ] ], [ [ "fig, ax = plt.subplots(figsize = (8, 8))\n\n# convert positive raw counts to logarithmic scale. we add 1 to avoid log(0)\nx = np.log([x[1] + 1 for x in data]) \n\n# do the same for the negative counts\ny = np.log([x[2] + 1 for x in data]) \n\n# Plot a dot for each pair of words\nax.scatter(x, y) \n\n# assign axis labels\nplt.xlabel(\"Log Positive count\")\nplt.ylabel(\"Log Negative count\")\n\n# Add the word as the label at the same position as you added the points just before\nfor i in range(0, len(data)):\n ax.annotate(data[i][0], (x[i], y[i]), fontsize=12)\n\nax.plot([0, 9], [0, 9], color = 'red') # Plot the red line that divides the 2 areas.\nplt.show()", "_____no_output_____" ] ], [ [ "This chart is straightforward to interpret. It shows that emoticons `:)` and `:(` are very important for sentiment analysis. Thus, we should not let preprocessing steps get rid of these symbols!\n\nFurthermore, what is the meaning of the crown symbol? It seems to be very negative!", "_____no_output_____" ], [ "### That's all for this lab! 
We've seen how to build a word frequency dictionary and this will come in handy when extracting the features of a list of tweets. Next up, we will be reviewing Logistic Regression. Keep it up!", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
4a8a87a3bc3e7b9b2a8bce29829bca73978487f3
34,849
ipynb
Jupyter Notebook
demo on simulated dataset.ipynb
wangbeiqi199159/fast-gradient-logit
e84b603d36215d2f14b15f1f48a81a9fb5b1a8f9
[ "MIT" ]
null
null
null
demo on simulated dataset.ipynb
wangbeiqi199159/fast-gradient-logit
e84b603d36215d2f14b15f1f48a81a9fb5b1a8f9
[ "MIT" ]
null
null
null
demo on simulated dataset.ipynb
wangbeiqi199159/fast-gradient-logit
e84b603d36215d2f14b15f1f48a81a9fb5b1a8f9
[ "MIT" ]
null
null
null
159.127854
15,332
0.891245
[ [ [ "# FastGradRidgeLogit Demo on Simulated Dataset\n", "_____no_output_____" ], [ "## About Data\n\nA simulated data set with two classes. Each class has 30 observations and 50 variables. Class 1 is denoted by 1, class 2 is denoted by 0.\n\n## Data Process Before Model Training\n", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport sklearn.preprocessing\n\n# Generate simulated dataset\nnp.random.seed(123)\ncls1 = np.random.normal(0,2,(30,50))\ncls2 = np.random.normal(1,2,(30,50))\nx_train = np.concatenate((cls1,cls2), axis = 0)\n\ny1 = np.ones(30)\ny2 = np.zeros(30)\ny_train = np.concatenate((y1,y2))\ny_train = y_train*2 - 1 # Convert to +/- 1\n\n# Standardize the data. \nscaler = sklearn.preprocessing.StandardScaler()\nscaler.fit(x_train)\nx_train = scaler.transform(x_train)", "_____no_output_____" ] ], [ [ "## Train Model Using FastGradRidgeLogit", "_____no_output_____" ] ], [ [ "from fgrlogit import FastGradRidgeLogit\n\nfg = FastGradRidgeLogit()", "_____no_output_____" ] ], [ [ "**FastGradRidgeLogit.fit(lambduh, x, y, maxiter = 300)**\n\nFit the model with x_train dataset and y_train dataset.", "_____no_output_____" ] ], [ [ "fg.fit(lambduh = 0.1,x = x_train,y = y_train)", "Start fast gradient descent:\nFast gradient iteration 100\nFast gradient iteration 200\nFast gradient iteration 300\n" ] ], [ [ "**FastGradRidgeLogit.plot_objective( )**\n\nAfter fitting the model, we can visualize the objective value change through the iteration by calling plot_objective().", "_____no_output_____" ] ], [ [ "fg.plot_objective()", "_____no_output_____" ] ], [ [ "**FastGradRidgeLogit.plot_misclassification_error( )**\n\nAfter fitting the model, we can also visualize the model performance through the iteration by calling plot_misclassification_error().", "_____no_output_____" ] ], [ [ "fg.plot_misclassification_error()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a8a9ddd63cf6c8fc738bf627be3b10c98da739c
719,473
ipynb
Jupyter Notebook
nbs/60_medical.imaging.ipynb
Paperspace/fastai
beaa630d78c44617857cd685368db0b377655bfe
[ "Apache-2.0" ]
null
null
null
nbs/60_medical.imaging.ipynb
Paperspace/fastai
beaa630d78c44617857cd685368db0b377655bfe
[ "Apache-2.0" ]
null
null
null
nbs/60_medical.imaging.ipynb
Paperspace/fastai
beaa630d78c44617857cd685368db0b377655bfe
[ "Apache-2.0" ]
null
null
null
663.72048
168,568
0.950244
[ [ [ "# all_slow", "_____no_output_____" ], [ "#default_exp medical.imaging", "_____no_output_____" ] ], [ [ "# Medical Imaging\n\n> Helpers for working with DICOM files", "_____no_output_____" ] ], [ [ "#export \nfrom fastai.basics import *\nfrom fastai.vision.all import *\nfrom fastai.data.transforms import *\n\nimport pydicom,kornia,skimage\nfrom pydicom.dataset import Dataset as DcmDataset\nfrom pydicom.tag import BaseTag as DcmTag\nfrom pydicom.multival import MultiValue as DcmMultiValue\nfrom PIL import Image\n\ntry:\n import cv2\n cv2.setNumThreads(0)\nexcept: pass", "_____no_output_____" ], [ "#hide\nfrom nbdev.showdoc import *\nmatplotlib.rcParams['image.cmap'] = 'bone'", "_____no_output_____" ], [ "#export\n_all_ = ['DcmDataset', 'DcmTag', 'DcmMultiValue', 'dcmread', 'get_dicom_files']", "_____no_output_____" ] ], [ [ "## Patching", "_____no_output_____" ] ], [ [ "#export\ndef get_dicom_files(path, recurse=True, folders=None):\n \"Get dicom files in `path` recursively, only in `folders`, if specified.\"\n return get_files(path, extensions=[\".dcm\"], recurse=recurse, folders=folders)", "_____no_output_____" ], [ "#export\n@patch\ndef dcmread(fn:Path, force = False):\n \"Open a `DICOM` file\"\n return pydicom.dcmread(str(fn), force)", "_____no_output_____" ], [ "#export\nclass TensorDicom(TensorImage): _show_args = {'cmap':'gray'}", "_____no_output_____" ], [ "#export\nclass PILDicom(PILBase):\n _open_args,_tensor_cls,_show_args = {},TensorDicom,TensorDicom._show_args\n @classmethod\n def create(cls, fn:(Path,str,bytes), mode=None)->None:\n \"Open a `DICOM file` from path `fn` or bytes `fn` and load it as a `PIL Image`\"\n if isinstance(fn,bytes): im = Image.fromarray(pydicom.dcmread(pydicom.filebase.DicomBytesIO(fn)).pixel_array)\n if isinstance(fn,(Path,str)): im = Image.fromarray(dcmread(fn).pixel_array)\n im.load()\n im = im._new(im.im)\n return cls(im.convert(mode) if mode else im)\n\nPILDicom._tensor_cls = TensorDicom", "_____no_output_____" ], [ "# #export\n# @patch\n# def png16read(self:Path): return array(Image.open(self), dtype=np.uint16)", "_____no_output_____" ], [ "TEST_DCM = Path('images/sample.dcm')\ndcm = TEST_DCM.dcmread()", "_____no_output_____" ], [ "#export\n@patch_property\ndef pixels(self:DcmDataset):\n \"`pixel_array` as a tensor\"\n return tensor(self.pixel_array.astype(np.float32))", "_____no_output_____" ], [ "#export\n@patch_property\ndef scaled_px(self:DcmDataset):\n \"`pixels` scaled by `RescaleSlope` and `RescaleIntercept`\"\n img = self.pixels\n return img if self.Modality == \"CR\" else img * self.RescaleSlope + self.RescaleIntercept", "_____no_output_____" ], [ "#export\ndef array_freqhist_bins(self, n_bins=100):\n \"A numpy based function to split the range of pixel values into groups, such that each group has around the same number of pixels\"\n imsd = np.sort(self.flatten())\n t = np.array([0.001])\n t = np.append(t, np.arange(n_bins)/n_bins+(1/2/n_bins))\n t = np.append(t, 0.999)\n t = (len(imsd)*t+0.5).astype(np.int)\n return np.unique(imsd[t])", "_____no_output_____" ], [ "#export\n@patch\ndef freqhist_bins(self:Tensor, n_bins=100):\n \"A function to split the range of pixel values into groups, such that each group has around the same number of pixels\"\n imsd = self.view(-1).sort()[0]\n t = torch.cat([tensor([0.001]),\n torch.arange(n_bins).float()/n_bins+(1/2/n_bins),\n tensor([0.999])])\n t = (len(imsd)*t).long()\n return imsd[t].unique()", "_____no_output_____" ], [ "#export\n@patch\ndef hist_scaled_pt(self:Tensor, brks=None):\n # 
Pytorch-only version - switch to this if/when interp_1d can be optimized\n if brks is None: brks = self.freqhist_bins()\n brks = brks.to(self.device)\n ys = torch.linspace(0., 1., len(brks)).to(self.device)\n return self.flatten().interp_1d(brks, ys).reshape(self.shape).clamp(0.,1.)", "_____no_output_____" ], [ "#export\n@patch\ndef hist_scaled(self:Tensor, brks=None):\n if self.device.type=='cuda': return self.hist_scaled_pt(brks)\n if brks is None: brks = self.freqhist_bins()\n ys = np.linspace(0., 1., len(brks))\n x = self.numpy().flatten()\n x = np.interp(x, brks.numpy(), ys)\n return tensor(x).reshape(self.shape).clamp(0.,1.)", "_____no_output_____" ], [ "#export\n@patch\ndef hist_scaled(self:DcmDataset, brks=None, min_px=None, max_px=None):\n px = self.scaled_px\n if min_px is not None: px[px<min_px] = min_px\n if max_px is not None: px[px>max_px] = max_px\n return px.hist_scaled(brks=brks)", "_____no_output_____" ], [ "#export\n@patch\ndef windowed(self:Tensor, w, l):\n px = self.clone()\n px_min = l - w//2\n px_max = l + w//2\n px[px<px_min] = px_min\n px[px>px_max] = px_max\n return (px-px_min) / (px_max-px_min)", "_____no_output_____" ], [ "#export\n@patch\ndef windowed(self:DcmDataset, w, l):\n return self.scaled_px.windowed(w,l)", "_____no_output_____" ], [ "#export\n# From https://radiopaedia.org/articles/windowing-ct\ndicom_windows = types.SimpleNamespace(\n brain=(80,40),\n subdural=(254,100),\n stroke=(8,32),\n brain_bone=(2800,600),\n brain_soft=(375,40),\n lungs=(1500,-600),\n mediastinum=(350,50),\n abdomen_soft=(400,50),\n liver=(150,30),\n spine_soft=(250,50),\n spine_bone=(1800,400)\n)", "_____no_output_____" ], [ "#export\nclass TensorCTScan(TensorImageBW): _show_args = {'cmap':'bone'}", "_____no_output_____" ], [ "#export\nclass PILCTScan(PILBase): _open_args,_tensor_cls,_show_args = {},TensorCTScan,TensorCTScan._show_args", "_____no_output_____" ], [ "#export\n@patch\n@delegates(show_image)\ndef show(self:DcmDataset, scale=True, cmap=plt.cm.bone, min_px=-1100, max_px=None, **kwargs):\n px = (self.windowed(*scale) if isinstance(scale,tuple)\n else self.hist_scaled(min_px=min_px,max_px=max_px,brks=scale) if isinstance(scale,(ndarray,Tensor))\n else self.hist_scaled(min_px=min_px,max_px=max_px) if scale\n else self.scaled_px)\n show_image(px, cmap=cmap, **kwargs)", "_____no_output_____" ], [ "scales = False, True, dicom_windows.brain, dicom_windows.subdural\ntitles = 'raw','normalized','brain windowed','subdural windowed'\nfor s,a,t in zip(scales, subplots(2,2,imsize=4)[1].flat, titles):\n dcm.show(scale=s, ax=a, title=t)", "_____no_output_____" ], [ "dcm.show(cmap=plt.cm.gist_ncar, figsize=(6,6))", "_____no_output_____" ], [ "#export\n@patch\ndef pct_in_window(dcm:DcmDataset, w, l):\n \"% of pixels in the window `(w,l)`\"\n px = dcm.scaled_px\n return ((px > l-w//2) & (px < l+w//2)).float().mean().item()", "_____no_output_____" ], [ "dcm.pct_in_window(*dicom_windows.brain)", "_____no_output_____" ], [ "#export\ndef uniform_blur2d(x,s):\n w = x.new_ones(1,1,1,s)/s\n # Factor 2d conv into 2 1d convs\n x = unsqueeze(x, dim=0, n=4-x.dim())\n r = (F.conv2d(x, w, padding=s//2))\n r = (F.conv2d(r, w.transpose(-1,-2), padding=s//2)).cpu()[:,0]\n return r.squeeze()", "_____no_output_____" ], [ "ims = dcm.hist_scaled(), uniform_blur2d(dcm.hist_scaled(),50)\nshow_images(ims, titles=('orig', 'blurred'))", "_____no_output_____" ], [ "#export\ndef gauss_blur2d(x,s):\n s2 = int(s/4)*2+1\n x2 = unsqueeze(x, dim=0, n=4-x.dim())\n res = kornia.filters.gaussian_blur2d(x2, (s2,s2), 
(s,s), 'replicate')\n return res.squeeze()", "_____no_output_____" ], [ "#export\n@patch\ndef mask_from_blur(x:Tensor, window, sigma=0.3, thresh=0.05, remove_max=True):\n p = x.windowed(*window)\n if remove_max: p[p==1] = 0\n return gauss_blur2d(p, s=sigma*x.shape[-1])>thresh", "_____no_output_____" ], [ "#export\n@patch\ndef mask_from_blur(x:DcmDataset, window, sigma=0.3, thresh=0.05, remove_max=True):\n return to_device(x.scaled_px).mask_from_blur(window, sigma, thresh, remove_max=remove_max)", "_____no_output_____" ], [ "mask = dcm.mask_from_blur(dicom_windows.brain)\nwind = dcm.windowed(*dicom_windows.brain)\n\n_,ax = subplots(1,1)\nshow_image(wind, ax=ax[0])\nshow_image(mask, alpha=0.5, cmap=plt.cm.Reds, ax=ax[0]);", "_____no_output_____" ], [ "#export\ndef _px_bounds(x, dim):\n c = x.sum(dim).nonzero().cpu()\n idxs,vals = torch.unique(c[:,0],return_counts=True)\n vs = torch.split_with_sizes(c[:,1],tuple(vals))\n d = {k.item():v for k,v in zip(idxs,vs)}\n default_u = tensor([0,x.shape[-1]-1])\n b = [d.get(o,default_u) for o in range(x.shape[0])]\n b = [tensor([o.min(),o.max()]) for o in b]\n return torch.stack(b)", "_____no_output_____" ], [ "#export\ndef mask2bbox(mask):\n no_batch = mask.dim()==2\n if no_batch: mask = mask[None]\n bb1 = _px_bounds(mask,-1).t()\n bb2 = _px_bounds(mask,-2).t()\n res = torch.stack([bb1,bb2],dim=1).to(mask.device)\n return res[...,0] if no_batch else res", "_____no_output_____" ], [ "bbs = mask2bbox(mask)\nlo,hi = bbs\nshow_image(wind[lo[0]:hi[0],lo[1]:hi[1]]);", "_____no_output_____" ], [ "#export\ndef _bbs2sizes(crops, init_sz, use_square=True):\n bb = crops.flip(1)\n szs = (bb[1]-bb[0])\n if use_square: szs = szs.max(0)[0][None].repeat((2,1))\n overs = (szs+bb[0])>init_sz\n bb[0][overs] = init_sz-szs[overs]\n lows = (bb[0]/float(init_sz))\n return lows,szs/float(init_sz)", "_____no_output_____" ], [ "#export\ndef crop_resize(x, crops, new_sz):\n # NB assumes square inputs. 
Not tested for non-square anythings!\n bs = x.shape[0]\n lows,szs = _bbs2sizes(crops, x.shape[-1])\n if not isinstance(new_sz,(list,tuple)): new_sz = (new_sz,new_sz)\n id_mat = tensor([[1.,0,0],[0,1,0]])[None].repeat((bs,1,1)).to(x.device)\n with warnings.catch_warnings():\n warnings.filterwarnings('ignore', category=UserWarning)\n sp = F.affine_grid(id_mat, (bs,1,*new_sz))+1.\n grid = sp*unsqueeze(szs.t(),1,n=2)+unsqueeze(lows.t()*2.,1,n=2)\n return F.grid_sample(x.unsqueeze(1), grid-1)", "_____no_output_____" ], [ "px256 = crop_resize(to_device(wind[None]), bbs[...,None], 128)[0]\nshow_image(px256)\npx256.shape", "_____no_output_____" ], [ "#export\n@patch\ndef to_nchan(x:Tensor, wins, bins=None):\n res = [x.windowed(*win) for win in wins]\n if not isinstance(bins,int) or bins!=0: res.append(x.hist_scaled(bins).clamp(0,1))\n dim = [0,1][x.dim()==3]\n return TensorCTScan(torch.stack(res, dim=dim))", "_____no_output_____" ], [ "#export\n@patch\ndef to_nchan(x:DcmDataset, wins, bins=None):\n return x.scaled_px.to_nchan(wins, bins)", "_____no_output_____" ], [ "#export\n@patch\ndef to_3chan(x:Tensor, win1, win2, bins=None):\n return x.to_nchan([win1,win2],bins=bins)", "_____no_output_____" ], [ "#export\n@patch\ndef to_3chan(x:DcmDataset, win1, win2, bins=None):\n return x.scaled_px.to_3chan(win1, win2, bins)", "_____no_output_____" ], [ "show_images(dcm.to_nchan([dicom_windows.brain,dicom_windows.subdural,dicom_windows.abdomen_soft]))", "_____no_output_____" ], [ "#export\n@patch\ndef save_jpg(x:(Tensor,DcmDataset), path, wins, bins=None, quality=90):\n fn = Path(path).with_suffix('.jpg')\n x = (x.to_nchan(wins, bins)*255).byte()\n im = Image.fromarray(x.permute(1,2,0).numpy(), mode=['RGB','CMYK'][x.shape[0]==4])\n im.save(fn, quality=quality)", "_____no_output_____" ], [ "#export\n@patch\ndef to_uint16(x:(Tensor,DcmDataset), bins=None):\n d = x.hist_scaled(bins).clamp(0,1) * 2**16\n return d.numpy().astype(np.uint16)", "_____no_output_____" ], [ "#export\n@patch\ndef save_tif16(x:(Tensor,DcmDataset), path, bins=None, compress=True):\n fn = Path(path).with_suffix('.tif')\n Image.fromarray(x.to_uint16(bins)).save(str(fn), compression='tiff_deflate' if compress else None)", "_____no_output_____" ], [ "_,axs=subplots(1,2)\nwith tempfile.TemporaryDirectory() as f:\n f = Path(f)\n dcm.save_jpg(f/'test.jpg', [dicom_windows.brain,dicom_windows.subdural])\n show_image(Image.open(f/'test.jpg'), ax=axs[0])\n dcm.save_tif16(f/'test.tif')\n show_image(Image.open(str(f/'test.tif')), ax=axs[1]);", "_____no_output_____" ], [ "#export\n@patch\ndef set_pixels(self:DcmDataset, px):\n self.PixelData = px.tobytes()\n self.Rows,self.Columns = px.shape\nDcmDataset.pixel_array = property(DcmDataset.pixel_array.fget, set_pixels)", "_____no_output_____" ], [ "#export\n@patch\ndef zoom(self:DcmDataset, ratio):\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\", UserWarning)\n self.pixel_array = ndimage.zoom(self.pixel_array, ratio)", "_____no_output_____" ], [ "#export\n@patch\ndef zoom_to(self:DcmDataset, sz):\n if not isinstance(sz,(list,tuple)): sz=(sz,sz)\n rows,cols = sz\n self.zoom((rows/self.Rows,cols/self.Columns))", "_____no_output_____" ], [ "#export\n@patch_property\ndef shape(self:DcmDataset): return self.Rows,self.Columns", "_____no_output_____" ], [ "dcm2 = TEST_DCM.dcmread()\ndcm2.zoom_to(90)\ntest_eq(dcm2.shape, (90,90))", "_____no_output_____" ], [ "dcm2 = TEST_DCM.dcmread()\ndcm2.zoom(0.25)\ndcm2.show()", "_____no_output_____" ], [ "#export\ndef _cast_dicom_special(x):\n cls = 
type(x)\n if not cls.__module__.startswith('pydicom'): return x\n if cls.__base__ == object: return x\n return cls.__base__(x)\n\ndef _split_elem(res,k,v):\n if not isinstance(v,DcmMultiValue): return\n res[f'Multi{k}'] = 1\n for i,o in enumerate(v): res[f'{k}{\"\" if i==0 else i}']=o", "_____no_output_____" ], [ "#export\n@patch\ndef as_dict(self:DcmDataset, px_summ=True, window=dicom_windows.brain):\n pxdata = (0x7fe0,0x0010)\n vals = [self[o] for o in self.keys() if o != pxdata]\n its = [(v.keyword,v.value) for v in vals]\n res = dict(its)\n res['fname'] = self.filename\n for k,v in its: _split_elem(res,k,v)\n if not px_summ: return res\n stats = 'min','max','mean','std'\n try:\n pxs = self.pixel_array\n for f in stats: res['img_'+f] = getattr(pxs,f)()\n res['img_pct_window'] = self.pct_in_window(*window)\n except Exception as e:\n for f in stats: res['img_'+f] = 0\n print(res,e)\n for k in res: res[k] = _cast_dicom_special(res[k])\n return res", "_____no_output_____" ], [ "#export\ndef _dcm2dict(fn, **kwargs): return fn.dcmread().as_dict(**kwargs)", "_____no_output_____" ], [ "#export\n@delegates(parallel)\ndef _from_dicoms(cls, fns, n_workers=0, **kwargs):\n return pd.DataFrame(parallel(_dcm2dict, fns, n_workers=n_workers, **kwargs))\npd.DataFrame.from_dicoms = classmethod(_from_dicoms)", "_____no_output_____" ] ], [ [ "## Export -", "_____no_output_____" ] ], [ [ "#hide\nfrom nbdev.export import notebook2script\nnotebook2script()", "Converted 00_torch_core.ipynb.\nConverted 01_layers.ipynb.\nConverted 02_data.load.ipynb.\nConverted 03_data.core.ipynb.\nConverted 04_data.external.ipynb.\nConverted 05_data.transforms.ipynb.\nConverted 06_data.block.ipynb.\nConverted 07_vision.core.ipynb.\nConverted 08_vision.data.ipynb.\nConverted 09_vision.augment.ipynb.\nConverted 09b_vision.utils.ipynb.\nConverted 09c_vision.widgets.ipynb.\nConverted 10_tutorial.pets.ipynb.\nConverted 11_vision.models.xresnet.ipynb.\nConverted 12_optimizer.ipynb.\nConverted 13_callback.core.ipynb.\nConverted 13a_learner.ipynb.\nConverted 13b_metrics.ipynb.\nConverted 14_callback.schedule.ipynb.\nConverted 14a_callback.data.ipynb.\nConverted 15_callback.hook.ipynb.\nConverted 15a_vision.models.unet.ipynb.\nConverted 16_callback.progress.ipynb.\nConverted 17_callback.tracker.ipynb.\nConverted 18_callback.fp16.ipynb.\nConverted 18a_callback.training.ipynb.\nConverted 19_callback.mixup.ipynb.\nConverted 20_interpret.ipynb.\nConverted 20a_distributed.ipynb.\nConverted 21_vision.learner.ipynb.\nConverted 22_tutorial.imagenette.ipynb.\nConverted 23_tutorial.vision.ipynb.\nConverted 24_tutorial.siamese.ipynb.\nConverted 24_vision.gan.ipynb.\nConverted 30_text.core.ipynb.\nConverted 31_text.data.ipynb.\nConverted 32_text.models.awdlstm.ipynb.\nConverted 33_text.models.core.ipynb.\nConverted 34_callback.rnn.ipynb.\nConverted 35_tutorial.wikitext.ipynb.\nConverted 36_text.models.qrnn.ipynb.\nConverted 37_text.learner.ipynb.\nConverted 38_tutorial.text.ipynb.\nConverted 39_tutorial.transformers.ipynb.\nConverted 40_tabular.core.ipynb.\nConverted 41_tabular.data.ipynb.\nConverted 42_tabular.model.ipynb.\nConverted 43_tabular.learner.ipynb.\nConverted 44_tutorial.tabular.ipynb.\nConverted 45_collab.ipynb.\nConverted 46_tutorial.collab.ipynb.\nConverted 50_tutorial.datablock.ipynb.\nConverted 60_medical.imaging.ipynb.\nConverted 61_tutorial.medical_imaging.ipynb.\nConverted 65_medical.text.ipynb.\nConverted 70_callback.wandb.ipynb.\nConverted 71_callback.tensorboard.ipynb.\nConverted 72_callback.neptune.ipynb.\nConverted 
73_callback.captum.ipynb.\nConverted 74_callback.cutmix.ipynb.\nConverted 97_test_utils.ipynb.\nConverted 99_pytorch_doc.ipynb.\nConverted fa2_resize.ipynb.\nConverted index.ipynb.\nConverted tutorial.ipynb.\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
4a8aa0b8b9989cde08f709a4ee32b57f9beb76ef
3,986
ipynb
Jupyter Notebook
Showcase.ipynb
ladisk/uff_widget
f9525e61ca266f063a932929eaae68a5ff5a32e8
[ "MIT" ]
null
null
null
Showcase.ipynb
ladisk/uff_widget
f9525e61ca266f063a932929eaae68a5ff5a32e8
[ "MIT" ]
2
2019-09-16T09:05:38.000Z
2019-09-16T12:36:59.000Z
Showcase.ipynb
ladisk/uff_widget
f9525e61ca266f063a932929eaae68a5ff5a32e8
[ "MIT" ]
null
null
null
22.777143
120
0.517311
[ [ [ "import uff_widget", "_____no_output_____" ], [ "path = \"./test_data/tree_structure_mini.uff\"\nuff_1 = uff_widget.widgetuff(path)\nuff_1.read_uff()", "_____no_output_____" ], [ "uff_1.get_info()", "Name: 3D tree structure \nDescription: Dimention: 379x179x474 - CAD model: tree.step\nIn 1. dataset 15 is data for 44 points\nIn datasets 55 are data for:\n normal mode in 43 points\nIn datasets 58 are data for:\n Frequency Response Function in 43 points\nModel has: \n- 43 reference node/-s \n- 1 response node/-s\n" ], [ "uff_1.show_frf()", "_____no_output_____" ], [ "uff_1.show_3D()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
4a8ab0dbcfb77a398d53e176ab49f564dbb6d415
3,750
ipynb
Jupyter Notebook
Labs/Updating Forked Repo.ipynb
Guoqiang-C/Spring-2018
d212cbe3a0c535e1d1248fcb373d7a5e8bf738a4
[ "Apache-2.0" ]
null
null
null
Labs/Updating Forked Repo.ipynb
Guoqiang-C/Spring-2018
d212cbe3a0c535e1d1248fcb373d7a5e8bf738a4
[ "Apache-2.0" ]
null
null
null
Labs/Updating Forked Repo.ipynb
Guoqiang-C/Spring-2018
d212cbe3a0c535e1d1248fcb373d7a5e8bf738a4
[ "Apache-2.0" ]
null
null
null
36.407767
199
0.600533
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
4a8af6c3ed38f5560e6e8f89cdb39ee6d9a1be6a
1,692
ipynb
Jupyter Notebook
docs/python/basics/list-append.ipynb
rajacsp/simpleml
9224de12acf0796922dd43e7a8a391e084c3a46a
[ "MIT" ]
null
null
null
docs/python/basics/list-append.ipynb
rajacsp/simpleml
9224de12acf0796922dd43e7a8a391e084c3a46a
[ "MIT" ]
null
null
null
docs/python/basics/list-append.ipynb
rajacsp/simpleml
9224de12acf0796922dd43e7a8a391e084c3a46a
[ "MIT" ]
null
null
null
16.114286
38
0.452719
[ [ [ "---\ntitle: \"List Append Sample\"\nauthor: \"Raja CSP\"\ndate: 2020-08-08\ndescription: \"-\"\ntype: technical_note\ndraft: false\n---", "_____no_output_____" ] ], [ [ "lone = [2, 3]", "_____no_output_____" ], [ "lone", "_____no_output_____" ], [ "lone.append(67)", "_____no_output_____" ], [ "lone", "_____no_output_____" ] ] ]
[ "raw", "code" ]
[ [ "raw" ], [ "code", "code", "code", "code" ] ]
4a8b14f326c98175293ac9d3dbb5c70f54741533
50,917
ipynb
Jupyter Notebook
notebooks/sky-plot.ipynb
mpetroff/pol-mm-wave-atm-model
fa60882c60ebd4c5d152d20642b11e391072149a
[ "MIT" ]
null
null
null
notebooks/sky-plot.ipynb
mpetroff/pol-mm-wave-atm-model
fa60882c60ebd4c5d152d20642b11e391072149a
[ "MIT" ]
null
null
null
notebooks/sky-plot.ipynb
mpetroff/pol-mm-wave-atm-model
fa60882c60ebd4c5d152d20642b11e391072149a
[ "MIT" ]
null
null
null
366.309353
46,728
0.934423
[ [ [ "import numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport colorcet", "_____no_output_____" ], [ "plt.rc('text', usetex=True)\nplt.rc('text.latex', preamble=r'\\usepackage{siunitx}\\sisetup{detect-all}\\usepackage{sansmath}\\sansmath')", "_____no_output_____" ], [ "def fit(x, a, b, c, d):\n zenith = np.pi / 2 - x[0]\n az = x[1]\n return a * np.tan(b * zenith) * np.cos(az - c) + d\nfit_results = np.load('../result-data/fit-results.npy')", "_____no_output_____" ], [ "class MidpointNormalize(matplotlib.colors.Normalize):\n # From https://stackoverflow.com/a/50003503\n def __init__(self, vmin, vmax, midpoint=0, clip=False):\n self.midpoint = midpoint\n matplotlib.colors.Normalize.__init__(self, vmin, vmax, clip)\n\n def __call__(self, value, clip=None):\n normalized_min = max(0, 1 / 2 * (1 - abs((self.midpoint - self.vmin) / (self.midpoint - self.vmax))))\n normalized_max = min(1, 1 / 2 * (1 + abs((self.vmax - self.midpoint) / (self.midpoint - self.vmin))))\n normalized_mid = 0.5\n x, y = [self.vmin, self.midpoint, self.vmax], [normalized_min, normalized_mid, normalized_max]\n return np.ma.masked_array(np.interp(value, x, y))", "_____no_output_____" ], [ "alts = np.linspace(90, 15, 1501)\naz = np.linspace(0, 360, 7201)\nout = np.array([[fit((alt, a), *fit_results) for a in np.deg2rad(az)] for alt in np.deg2rad(alts)])", "_____no_output_____" ], [ "plt.figure(figsize=(4.4, 3.3))\n\nax = plt.subplot(1, 1, 1, projection='polar')\nnorm = MidpointNormalize(vmin=np.min(out)*1e6, vmax=np.max(out)*1e6, midpoint=0)\nplt.pcolormesh(np.deg2rad(az), 90 - alts, out*1e6, cmap='cet_bjy', norm=norm, rasterized=True)\nax.set_yticks(np.arange(0, 90, 15))\n# Zenith angle labels\nax.set_yticklabels(['\\SI{0}{\\degree}', '\\SI{15}{\\degree}', '\\SI{30}{\\degree}', '\\SI{45}{\\degree}', '\\SI{60}{\\degree}', ''])\nax.set_theta_zero_location('W', offset=-90)\nax.set_theta_direction(-1)\nax.set_rlabel_position(120)\nax.grid(b=True, which='major', color='0.3', linestyle='-', lw=0.5)\nplt.colorbar(pad=0.1).set_label('\\emph{V} (\\si{\\micro\\kelvin})')\n\ncolor = 'xkcd:dark purple'\nplt.annotate('MN', xy=(np.deg2rad(174.1), 75), xytext=(np.deg2rad(354.1), 79.5), arrowprops=dict(arrowstyle='<-', capstyle='butt', joinstyle='miter', color=color), horizontalalignment='center', verticalalignment='center', fontsize=9, color=color, rotation=5.9)\n\nplt.tight_layout()\nplt.savefig('../result-data/sky-plot.pdf', dpi=300)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code" ] ]
4a8b20dc3d5dceb2d30f8fae8f8e10489c465344
1,324
ipynb
Jupyter Notebook
labs/06_seq2seq/data_download.ipynb
JLWoodcock/MKL
2d2cdded15aeb8144e08fdbc2b9ae2abfa88216d
[ "MIT" ]
null
null
null
labs/06_seq2seq/data_download.ipynb
JLWoodcock/MKL
2d2cdded15aeb8144e08fdbc2b9ae2abfa88216d
[ "MIT" ]
null
null
null
labs/06_seq2seq/data_download.ipynb
JLWoodcock/MKL
2d2cdded15aeb8144e08fdbc2b9ae2abfa88216d
[ "MIT" ]
null
null
null
22.827586
79
0.549849
[ [ [ "# Run this cell before the lab !\n\nfrom keras.utils.data_utils import get_file\n\nget_file(\"simple_seq2seq_partially_pretrained.h5\", \n \"https://github.com/m2dsupsdlclass/lectures-labs/releases/\"\n \"download/0.4/simple_seq2seq_partially_pretrained.h5\")\n\n\nget_file(\"simple_seq2seq_pretrained.h5\", \n \"https://github.com/m2dsupsdlclass/lectures-labs/releases/\"\n \"download/0.4/simple_seq2seq_pretrained.h5\")\n\nprint(\"done getting pretrained models\")", "done getting pretrained models\n" ] ] ]
[ "code" ]
[ [ "code" ] ]
4a8b2b539ad3522fdceea971a284d8c6e271f5a0
209,436
ipynb
Jupyter Notebook
Data Visualization Lab.ipynb
okaffo/Data-Visualization-Project
54c3583922e934b7d5dc8792a2771d1ff73757a3
[ "MIT" ]
null
null
null
Data Visualization Lab.ipynb
okaffo/Data-Visualization-Project
54c3583922e934b7d5dc8792a2771d1ff73757a3
[ "MIT" ]
null
null
null
Data Visualization Lab.ipynb
okaffo/Data-Visualization-Project
54c3583922e934b7d5dc8792a2771d1ff73757a3
[ "MIT" ]
null
null
null
113.947769
41,996
0.839798
[ [ [ "<center>\n <img src=\"https://gitlab.com/ibm/skills-network/courses/placeholder101/-/raw/master/labs/module%201/images/IDSNlogo.png\" width=\"300\" alt=\"cognitiveclass.ai logo\" />\n</center>\n", "_____no_output_____" ], [ "# **Data Visualization Lab**\n", "_____no_output_____" ], [ "Estimated time needed: **45 to 60** minutes\n", "_____no_output_____" ], [ "In this assignment you will be focusing on the visualization of data.\n\nThe data set will be presented to you in the form of a RDBMS.\n\nYou will have to use SQL queries to extract the data.\n", "_____no_output_____" ], [ "## Objectives\n", "_____no_output_____" ], [ "In this lab you will perform the following:\n", "_____no_output_____" ], [ "* Visualize the distribution of data.\n\n* Visualize the relationship between two features.\n\n* Visualize composition of data.\n\n* Visualize comparison of data.\n", "_____no_output_____" ], [ "<hr>\n", "_____no_output_____" ], [ "## Demo: How to work with database\n", "_____no_output_____" ], [ "Download database file.\n", "_____no_output_____" ] ], [ [ "!wget https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DA0321EN-SkillsNetwork/LargeData/m4_survey_data.sqlite", "--2022-02-02 18:40:39-- https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DA0321EN-SkillsNetwork/LargeData/m4_survey_data.sqlite\nResolving cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)... 198.23.119.245\nConnecting to cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)|198.23.119.245|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 36679680 (35M) [application/octet-stream]\nSaving to: ‘m4_survey_data.sqlite’\n\nm4_survey_data.sqli 100%[===================>] 34.98M 43.1MB/s in 0.8s \n\n2022-02-02 18:40:40 (43.1 MB/s) - ‘m4_survey_data.sqlite’ saved [36679680/36679680]\n\n" ] ], [ [ "Connect to the database.\n", "_____no_output_____" ] ], [ [ "import sqlite3\nconn = sqlite3.connect(\"m4_survey_data.sqlite\") # open a database connection", "_____no_output_____" ] ], [ [ "Import pandas module.\n", "_____no_output_____" ] ], [ [ "import pandas as pd", "_____no_output_____" ] ], [ [ "## Demo: How to run an sql query\n", "_____no_output_____" ] ], [ [ "# print how many rows are there in the table named 'master'\nQUERY = \"\"\"\nSELECT COUNT(*)\nFROM master\n\"\"\"\n\n# the read_sql_query runs the sql query and returns the data as a dataframe\ndf = pd.read_sql_query(QUERY,conn)\ndf.head()", "_____no_output_____" ] ], [ [ "## Demo: How to list all tables\n", "_____no_output_____" ] ], [ [ "# print all the tables names in the database\nQUERY = \"\"\"\nSELECT name as Table_Name FROM\nsqlite_master WHERE\ntype = 'table'\n\"\"\"\n# the read_sql_query runs the sql query and returns the data as a dataframe\npd.read_sql_query(QUERY,conn)\n", "_____no_output_____" ] ], [ [ "## Demo: How to run a group by query\n", "_____no_output_____" ] ], [ [ "QUERY = \"\"\"\nSELECT Age,COUNT(*) as count\nFROM master\ngroup by age\norder by age\n\"\"\"\npd.read_sql_query(QUERY,conn)", "_____no_output_____" ] ], [ [ "## Demo: How to describe a table\n", "_____no_output_____" ] ], [ [ "table_name = 'master' # the table you wish to describe\n\nQUERY = \"\"\"\nSELECT sql FROM sqlite_master\nWHERE name= '{}'\n\"\"\".format(table_name)\n\ndf = pd.read_sql_query(QUERY,conn)\nprint(df.iat[0,0])", "CREATE TABLE \"master\" (\n\"index\" INTEGER,\n 
\"Respondent\" INTEGER,\n \"MainBranch\" TEXT,\n \"Hobbyist\" TEXT,\n \"OpenSourcer\" TEXT,\n \"OpenSource\" TEXT,\n \"Employment\" TEXT,\n \"Country\" TEXT,\n \"Student\" TEXT,\n \"EdLevel\" TEXT,\n \"UndergradMajor\" TEXT,\n \"OrgSize\" TEXT,\n \"YearsCode\" TEXT,\n \"Age1stCode\" TEXT,\n \"YearsCodePro\" TEXT,\n \"CareerSat\" TEXT,\n \"JobSat\" TEXT,\n \"MgrIdiot\" TEXT,\n \"MgrMoney\" TEXT,\n \"MgrWant\" TEXT,\n \"JobSeek\" TEXT,\n \"LastHireDate\" TEXT,\n \"FizzBuzz\" TEXT,\n \"ResumeUpdate\" TEXT,\n \"CurrencySymbol\" TEXT,\n \"CurrencyDesc\" TEXT,\n \"CompTotal\" REAL,\n \"CompFreq\" TEXT,\n \"ConvertedComp\" REAL,\n \"WorkWeekHrs\" REAL,\n \"WorkRemote\" TEXT,\n \"WorkLoc\" TEXT,\n \"ImpSyn\" TEXT,\n \"CodeRev\" TEXT,\n \"CodeRevHrs\" REAL,\n \"UnitTests\" TEXT,\n \"PurchaseHow\" TEXT,\n \"PurchaseWhat\" TEXT,\n \"OpSys\" TEXT,\n \"BlockchainOrg\" TEXT,\n \"BlockchainIs\" TEXT,\n \"BetterLife\" TEXT,\n \"ITperson\" TEXT,\n \"OffOn\" TEXT,\n \"SocialMedia\" TEXT,\n \"Extraversion\" TEXT,\n \"ScreenName\" TEXT,\n \"SOVisit1st\" TEXT,\n \"SOVisitFreq\" TEXT,\n \"SOFindAnswer\" TEXT,\n \"SOTimeSaved\" TEXT,\n \"SOHowMuchTime\" TEXT,\n \"SOAccount\" TEXT,\n \"SOPartFreq\" TEXT,\n \"SOJobs\" TEXT,\n \"EntTeams\" TEXT,\n \"SOComm\" TEXT,\n \"WelcomeChange\" TEXT,\n \"Age\" REAL,\n \"Trans\" TEXT,\n \"Dependents\" TEXT,\n \"SurveyLength\" TEXT,\n \"SurveyEase\" TEXT\n)\n" ] ], [ [ "# Hands-on Lab\n", "_____no_output_____" ], [ "## Visualizing distribution of data\n", "_____no_output_____" ], [ "### Histograms\n", "_____no_output_____" ], [ "Plot a histogram of `ConvertedComp.`\n", "_____no_output_____" ] ], [ [ "import matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport seaborn as sns", "_____no_output_____" ], [ "# your code goes here\nQUERY= \"\"\"\n\n\"\"\"\ndf=pd.read_sql_query(\"SELECT*FROM master\",conn)\ndf\n\nplt.hist(df[\"ConvertedComp\"])\nplt.title(\"Histogram of ConvertedComp\")\nplt.xlabel(\"ConvertedComp\")\nplt.ylabel(\"Number of Respondents\")\nplt.show()", "_____no_output_____" ] ], [ [ "### Box Plots\n", "_____no_output_____" ], [ "Plot a box plot of `Age.`\n", "_____no_output_____" ] ], [ [ "# your code goes here\nQUERY=\"\"\"\n\n\"\"\"\ndf=pd.read_sql_query(\"SELECT*FROM master\",conn)\n\ndf['Age'].plot(kind='box',figsize=(20,8),vert=False)\nplt.title('Box Plot of Age')\nplt.show()", "_____no_output_____" ] ], [ [ "## Visualizing relationships in data\n", "_____no_output_____" ], [ "### Scatter Plots\n", "_____no_output_____" ], [ "Create a scatter plot of `Age` and `WorkWeekHrs.`\n", "_____no_output_____" ] ], [ [ "# your code goes here\ncolumn = \"Age\"\ncolumn2 = \"WorkWeekHrs\"\ntable_name = \"master\"\n\nQUERY= \"\"\"\nSELECT Age, AVG(WorkWeekHrs) FROM master GROUP BY Age\n\"\"\".format(column,column2,table_name)\n\ndf_age_work=pd.read_sql_query(QUERY,conn)\n\ndf_age_work.dropna(inplace=True)\ndf_age_work.rename(columns={\"AVG(WorkWeekHrs)\":\"WorkWeekHrs\"},inplace=True)\ndf_age_work.plot(kind=\"scatter\",x=\"Age\",y=\"WorkWeekHrs\",figsize=(9,6),color=\"red\")\n\nplt.title(\"Scatter Plot of Work Hours by Age\")\nplt.xlabel(\"Age\")\nplt.ylabel(\"Week Work Hours\")\nplt.show()\nplt.close()", "_____no_output_____" ] ], [ [ "### Bubble Plots\n", "_____no_output_____" ], [ "Create a bubble plot of `WorkWeekHrs` and `CodeRevHrs`, use `Age` column as bubble size.\n", "_____no_output_____" ] ], [ [ "import plotly.express as px", "_____no_output_____" ], [ "# your code goes here\nQUERY= \"\"\"\nSELECT Age, ConvertedComp, WorkWeekHrs, CodeRevHrs\nFROM 
master\n\"\"\"\ndf=pd.read_sql_query(QUERY,conn)\nnorm_age=(df['Age']-df['Age'].min())/(df['Age'].max()-df['Age'].min())\n\ndf.plot(kind='scatter',x='WorkWeekHrs',y='CodeRevHrs',s=norm_age*1000,figsize=(10,6),color='yellow')\n\nplt.title('Bubble Plot of Work Week Hours and Code Rev Hours')\nplt.xlabel('WorkWeekHrs')\nplt.ylabel('CodeRevHrs')\n\nplt.show", "_____no_output_____" ] ], [ [ "## Visualizing composition of data\n", "_____no_output_____" ], [ "### Pie Charts\n", "_____no_output_____" ], [ "Create a pie chart of the top 5 databases that respondents wish to learn next year. Label the pie chart with database names. Display percentages of each database on the pie chart.\n", "_____no_output_____" ] ], [ [ "# your code goes here\nQUERY= \"\"\"\nSELECT DatabaseDesireNextYear, Count(*)as count\nFROM DatabaseDesireNextYear \ngroup by DatabaseDesireNextYear\norder by Count desc limit 5\n\"\"\"\ndf=pd.read_sql_query(QUERY,conn)\ndf.head()", "_____no_output_____" ], [ "#code to create a pie chart\nQUERY= \"\"\"\nSELECT DatabaseDesireNextYear, Count(*)as count\nFROM DatabaseDesireNextYear \ngroup by DatabaseDesireNextYear\norder by Count desc limit 5\n\"\"\"\ndf=pd.read_sql_query(QUERY,conn)\ndf.head()\ndf.set_index('DatabaseDesireNextYear',inplace=True)\n\nsizes=df.iloc[:,0]\ncolor_list=['red','yellowgreen','gold','blue','lightcoral','pink','lightgreen']\nlabels=['PostgreSQL','MongoDB','Redis','MySQL','Elasticsearch']\ndf.plot(kind='pie',\n figsize=(20,6),\n autopct='%1.1f%%',\n startangle=90,\n shadow=True,\n labels=None,\n pctdistance=1.12,\n subplots=True,\n colors=color_list)\n\nplt.title('Pie Chart of Top 5 Database Desire Next Year')\nplt.axis('equal')\nplt.legend(labels, loc='upper left')\nplt.show()", "No handles with labels found to put in legend.\n" ] ], [ [ "### Stacked Charts\n", "_____no_output_____" ], [ "Create a stacked chart of median `WorkWeekHrs` and `CodeRevHrs` for the age group 30 to 35.\n", "_____no_output_____" ] ], [ [ "# your code goes here\nQUERY= \"\"\"\nSELECT Age, WorkWeekHrs, CodeRevHrs\nFROM master\nWHERE Age BETWEEN 30 AND 35\n\"\"\"\ndf=pd.read_sql_query(QUERY,conn)\ndf=df.groupby('Age').median()\nprint(df)", " WorkWeekHrs CodeRevHrs\nAge \n30.0 40.0 4.0\n31.0 40.0 4.0\n32.0 40.0 4.0\n33.0 40.0 4.0\n34.0 40.0 4.0\n35.0 40.0 4.0\n" ], [ "# your code goes here\nQUERY= \"\"\"\nSELECT Age, WorkWeekHrs, CodeRevHrs\nFROM master\nWHERE Age Between 30 AND 35\n\"\"\"\ndf=pd.read_sql_query(QUERY,conn)\ndf.set_index('Age',inplace=True)\norder=['WorkWeekHrs','CodeRevHrs']\ndf.groupby('Age')[order].median().plot.bar(stacked=True)", "_____no_output_____" ] ], [ [ "## Visualizing comparison of data\n", "_____no_output_____" ], [ "### Line Chart\n", "_____no_output_____" ], [ "Plot the median `ConvertedComp` for all ages from 45 to 60.\n", "_____no_output_____" ] ], [ [ "# your code goes here\nQUERY= \"\"\"\nSELECT(ConvertedComp) as ConvertedComp, Age as Age\nFROM master\nWHERE Age BETWEEN 45 AND 60\n\"\"\"\ndf=pd.read_sql_query(QUERY,conn)\ndf.set_index('Age',inplace=True)\ndf.dropna(subset=['ConvertedComp'],inplace=True)\n\norder=['ConvertedComp']\ndf.groupby('Age')[order].median().plot(kind='line',figsize=(14,8))\nplt.title('Line Chart of ConvertedComp by Age')\nplt.show()", "_____no_output_____" ] ], [ [ "### Bar Chart\n", "_____no_output_____" ], [ "Create a horizontal bar chart using column `MainBranch.`\n", "_____no_output_____" ] ], [ [ "# your code goes here\nQUERY= \"\"\"\nSELECT Count(MainBranch) as count, MainBranch\nFROM master\ngroup by 
MainBranch\n\"\"\"\ndf=pd.read_sql_query(QUERY,conn)\ndf['MainBranch'].value_counts().plot(kind='barh',figsize=(10,10),color='skyblue')\n\nplt.title('Bar Chart of Main Branch')\nplt.show()", "_____no_output_____" ], [ "# What is the rank of Python in most popular Language?\n# write code here\nQUERY= \"\"\"\nSELECT LanguageWorkedWith\nFROM LanguageWorkedWith\n\"\"\"\ndf=pd.read_sql_query(QUERY,conn)\ndf['LanguageWorkedWith'].value_counts()", "_____no_output_____" ], [ "# How many respondents indicated they currently work with SQL?\n# write your code here\nQUERY= \"\"\"\nSELECT LanguageWorkedWith, count(Respondent) as count\nFROM LanguageWorkedWith\nWHERE LanguageWorkedWith ='SQL'\n\"\"\"\ndf=pd.read_sql_query(QUERY,conn)\ndf", "_____no_output_____" ], [ "# How many respondents indicated they work on MySQL only?\n# write your code here\nQUERY= \"\"\"\nSELECT DatabaseWorkedWith, Respondent\nFROM DatabaseWorkedWith\n\"\"\"\ndf=pd.read_sql_query(QUERY,conn)\ndf1=df.groupby('Respondent').sum()\ndf1[df1['DatabaseWorkedWith']=='MySQL'].count()", "_____no_output_____" ], [ "# Majority of the Survey Responders are?\n# write your code here\nQUERY= \"\"\"\nSELECT DevType\nFROM DevType\n\"\"\"\ndf=pd.read_sql_query(QUERY,conn)\ndf['DevType'].value_counts()", "_____no_output_____" ], [ "QUERY= \"\"\"\nSELECT DevType, count(Respondent) as count\nFROM DevType\n\"\"\"\ndf=pd.read_sql_query(QUERY,conn)\ndf", "_____no_output_____" ] ], [ [ "Close the database connection.\n", "_____no_output_____" ] ], [ [ "conn.close()", "_____no_output_____" ] ], [ [ "## Authors\n", "_____no_output_____" ], [ "Ramesh Sannareddy\n", "_____no_output_____" ], [ "### Other Contributors\n", "_____no_output_____" ], [ "Rav Ahuja\n", "_____no_output_____" ], [ "## Change Log\n", "_____no_output_____" ], [ "| Date (YYYY-MM-DD) | Version | Changed By | Change Description |\n| ----------------- | ------- | ----------------- | ---------------------------------- |\n| 2020-10-17 | 0.1 | Ramesh Sannareddy | Created initial version of the lab |\n", "_____no_output_____" ], [ "Copyright © 2020 IBM Corporation. This notebook and its source code are released under the terms of the [MIT License](https://cognitiveclass.ai/mit-license?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDA0321ENSkillsNetwork21426264-2021-01-01&cm_mmc=Email_Newsletter-\\_-Developer_Ed%2BTech-\\_-WW_WW-\\_-SkillsNetwork-Courses-IBM-DA0321EN-SkillsNetwork-21426264&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ).\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
4a8b2dbccd058f156540fe6278c922426ef63c71
7,095
ipynb
Jupyter Notebook
runge_kutta_mv.ipynb
nncastil/astr-119-session12
ef1aa4ae8677531af89c94ec372de05ce2e5e8aa
[ "MIT" ]
null
null
null
runge_kutta_mv.ipynb
nncastil/astr-119-session12
ef1aa4ae8677531af89c94ec372de05ce2e5e8aa
[ "MIT" ]
null
null
null
runge_kutta_mv.ipynb
nncastil/astr-119-session12
ef1aa4ae8677531af89c94ec372de05ce2e5e8aa
[ "MIT" ]
null
null
null
28.154762
152
0.454123
[ [ [ "4th order runge kutta with adapted step size\n\n- small time step to improve accuracy\n\n- integration more efficient (partition)", "_____no_output_____" ], [ "## a simple coupled ODE\n\nd^2y/dx^2 = -y\n\nfor all x the second derivative of y is = -y (sin or cos curve)\n\n- specify boundary conditions to determine which\n\n- y(0) = 0 and dy/dx (x = 0) = 1 --> sin(x)\n\nrewrte as coupled ODEs to solve numerically (slide 8)", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np", "_____no_output_____" ], [ "#define coupled derivatives to integrate\ndef dydx(x,y):\n #y is a 2D array\n \n #equation is d^2y/dx^2 = -y\n #so: dydx = z, dz/dx = -y\n \n #set y = y[0], z = y[1]\n \n #declare array\n y_derivs = np.zeros(2)\n \n y_derivs[0] = y[1]\n \n y_derivs[1] = -1*y[0]\n \n return y_derivs\n\n#can't evolve one without evolving the other, dependent variables", "_____no_output_____" ], [ "#define 4th order RK method\ndef rk_mv_core(dydx,xi,yi,nv,h):\n \n #nv = number of variables\n # h = width\n \n #declare k arrays\n k1 = np.zeros(nv)\n k2 = np.zeros(nv)\n k3 = np.zeros(nv)\n k4 = np.zeros(nv)\n \n #define x at half step\n x_ipoh = xi + 0.5*h\n \n #define x at 1 step\n x_ipo = xi + h\n \n #declare a temp y array\n y_temp = np.zeros(nv)\n \n #find k1 values\n y_derivs = dydx(xi,yi) #array of y derivatives\n k1[:] = h*y_derivs[:] #taking diff euler steps for derivs\n \n #get k2 values\n y_temp[:] = yi[:] + 0.5*k1[:]\n y_derivs = dydx(x_ipoh,y_temp)\n k2[:] = h*y_derivs[:]\n \n #get k3 values\n y_temp[:] = yi[:] + 0.5*k1[:]\n y_derivs = dydx(x_ipoh,y_temp)\n k3[:] = h*y_derivs[:]\n \n #get k4 values\n y_temp[:] = yi[:] + k3[:]\n k4[:] = h*y_derivs[:]\n \n #advance y by step h\n yipo = yi + (k1 + 2*k2 + 2*k3 + k4)/6. 
#this is an array \n \n return yipo", "_____no_output_____" ] ], [ [ "before, we took a single step\n\nnow we take two different versions of the same equation for the step\n\ncan be used as a check for the previous technique\n\nthe difference should be within tolerance to be valid (if the steps are too big and outside of tolerance then they need to be smaller bebeh steps)", "_____no_output_____" ] ], [ [ "#define adaptive step size for RK4\ndef rk4_mv_ad(dydx,x_i,y_i,nv,h,tol):\n \n #define safety scale\n SAFETY = 0.9\n H_NEW_FAC = 2.0\n \n #set max number of iterations\n imax = 10000\n \n #set iteration variable, num of iterations taken\n i = 0\n \n #create an error (array)\n Delta = np.full(nv,2*tol) #twice the tol, if it exceeds tol\n #steps need to be smoler\n \n #remember step\n h_step = h\n \n #adjust step\n while(Delta.max()/tol > 1.0): #while loop\n #estimate error by taking one step of size h vs two steps of size h/2\n y_2 = rk4_mv_core(dydx,x_i,y_i,nv,h_step)\n y_1 = rk4_mv_core(dydx,x_i,y_i,nv,0.5*h_step)\n y_11 = rk4_mv_core(dydx,x_i+0.5*h_step,y_1,0.5*h_step)\n \n #compute error\n Delta = np.fabs(y_2 - y_1)\n \n #if the error is too large\n if(Delta.max()/tol > 1.0):\n h_step *= SAFETY * (Delta.max()/tol)**(-0.25) #decreases h step size\n \n #check iteration\n if(i>=imax):\n print(\"Too many iterations in rk4_mv_ad()\")\n raise StopIteration(\"Ending after i = \",i)\n \n #iterate\n i+=1\n \n #leave while loop, to try bigger steps\n h_new = np.fmin(h_step * (Delta.amx()/tol)**(-0.9), h_step*H_NEW_FAC)\n \n #return the answer, the new step, and the step actually taken\n return y_2, h_new, h_step", "_____no_output_____" ], [ "#wrapper function\ndef rk4_mv(dydx,a,b,y_a,tol):\n #dydx = deriv wrt x\n #a = lower bound\n #b = upper bound\n #y_a = boundary conditions (0,1)\n #tol = tolerance for integrating y\n \n #define starting step\n xi = a\n yi = y_a.copy()\n \n #initial step size (smallllll)\n h = 1.0e-4 * (b-a)\n \n #max number of iterations\n imax = 10000\n \n #set iteration variable\n i = 0\n \n #set the number of coupled ODEs to the size of y_a\n nv = len(y-a)\n \n #set initial conditions\n x = np.sull(1,a)\n y = np.full((1,nv),y_a) #2 dimensional array\n \n #set flag\n flag = 1\n \n #loop until we reach the right side\n while(flag):", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
4a8b34c07a7b5772332aeaa535343e948730cff8
1,271
ipynb
Jupyter Notebook
docs/python/numpy/template-Copy1.ipynb
tarunvelagala/py_notes
1d3fadb80a6478b0a41069e88de503c64c691d22
[ "MIT" ]
null
null
null
docs/python/numpy/template-Copy1.ipynb
tarunvelagala/py_notes
1d3fadb80a6478b0a41069e88de503c64c691d22
[ "MIT" ]
null
null
null
docs/python/numpy/template-Copy1.ipynb
tarunvelagala/py_notes
1d3fadb80a6478b0a41069e88de503c64c691d22
[ "MIT" ]
null
null
null
16.723684
36
0.468922
[ [ [ "---\ntitle: \"Numpy array\"\nauthor: \"TACT\"\ndate: 2019-04-20\ndescription: \"-\"\ntype: technical_note\ndraft: false\n---", "_____no_output_____" ] ], [ [ "import numpy as np", "_____no_output_____" ], [ "np.array([1, 23])", "_____no_output_____" ] ] ]
[ "raw", "code" ]
[ [ "raw" ], [ "code", "code" ] ]
4a8b3d50e84226ec7df6c4621901cd8749812b1d
437,465
ipynb
Jupyter Notebook
some_other_notebooks/Kmeans/EDC_DAT210x_Kmeans_Assigment2.ipynb
remi-ang/Udacity_MLND
f555a91eede8c3abfedb82bdbc321631001aebdb
[ "MIT" ]
null
null
null
some_other_notebooks/Kmeans/EDC_DAT210x_Kmeans_Assigment2.ipynb
remi-ang/Udacity_MLND
f555a91eede8c3abfedb82bdbc321631001aebdb
[ "MIT" ]
null
null
null
some_other_notebooks/Kmeans/EDC_DAT210x_Kmeans_Assigment2.ipynb
remi-ang/Udacity_MLND
f555a91eede8c3abfedb82bdbc321631001aebdb
[ "MIT" ]
null
null
null
58.15807
29,191
0.587448
[ [ [ "# Lab Assignment 2\n\nThe spirit of data science includes exploration, traversing the unknown, and applying a deep understanding of the challenge you're facing. In an academic setting, it's hard to duplicate these tasks, but this lab will attempt to take a few steps away from the traditional, textbook, \"plug the equation in\" pattern, so you can get a taste of what analyzing data in the real world is all about.\n\nAfter the September 11 attacks, a series of secret regulations, laws, and processes were enacted, perhaps to better protect the citizens of the United States. These processes continued through president Bush's term and were renewed and and strengthened during the Obama administration. Then, on May 24, 2006, the United States Foreign Intelligence Surveillance Court (FISC) made a fundamental shift in its approach to Section 215 of the Patriot Act, permitting the FBI to compel production of \"business records\" relevant to terrorism investigations, which are shared with the NSA. The court now defined as business records the entirety of a telephone company's call database, also known as Call Detail Records (CDR or metadata).\n\nNews of this came to public light after an ex-NSA contractor leaked the information, and a few more questions were raised when it was further discovered that not just the call records of suspected terrorists were being collected in bulk... but perhaps the entirety of Americans as a whole. After all, if you know someone who knows someone who knows someone, your private records are relevant to a terrorism investigation. The white house quickly reassured the public in a press release that \"Nobody is listening to your telephone calls,\" since, \"that's not what this program is about.\" The public was greatly relieved.\n\nThe questions you'll be exploring in this lab assignment using K-Means are: exactly how useful is telephone metadata? It must have some use, otherwise the government wouldn't have invested however many millions they did into it secretly collecting it from phone carriers. Also what kind of intelligence can you extract from CDR metadata besides its face value?\n\nYou will be using a sample CDR dataset generated for 10 people living in the Dallas, Texas metroplex area. Your task will be to attempt to do what many researchers have already successfully done - partly de-anonymize the CDR data. People generally behave in predictable manners, moving from home to work with a few errands in between. With enough call data, given a few K-locations of interest, K-Means should be able to isolate rather easily the geolocations where a person spends the most of their time.\n\nNote: to safeguard from doxing people, the CDR dataset you'll be using for this assignment was generated using the tools available in the Dive Deeper section. CDRs are at least supposed to be protected by privacy laws, and are the basis for proprietary revenue calculations. In reality, there are quite a few public CDRs out there. Much information can be discerned from them such as social networks, criminal acts, and believe it or not, even the spread of decreases as was demonstrated by Flowminder Foundation paper on Ebola. \n\n1. Open up the starter code in /Module5/assignment2.py and read through it all. It's long, so make sure you understand everything that is being asked for you before proceeding.\n2. Load up the CDR dataset from /Module5/Datasets/CDR.csv. Do your due diligence to make sure it's been loaded correctly and all the features and rows match up.\n3. 
Pick the first unique user in the list to examine. Follow the steps in the assignment file to approximate where the user lives.\n4. Once you have a (Latitude, Longitude) coordinate pair, drop them into Google Maps. Just do a search for the \"{Lat, Lon}\". So if your centroid is located at Longitude = -96.949246 and Latitude = 32.953856, then do a maps search for \"32.953856, -96.949246\".\n5. Answer the questions below.", "_____no_output_____" ] ], [ [ "# import\nimport pandas as pd\nimport numpy as np\n\nfrom sklearn.cluster import KMeans\n#from sklearn.datasets import load_boston\n\nimport matplotlib\nimport matplotlib.pyplot as plt\n\nmatplotlib.style.use('ggplot') # Look Pretty\n\n%matplotlib notebook", "_____no_output_____" ], [ "def showandtell(title=None):\n if title != None: plt.savefig(title + \".png\", bbox_inches='tight', dpi=300)\n plt.show()\n #exit()", "_____no_output_____" ], [ "# INFO: This dataset has call records for 10 users tracked over the course of 3 years.\n# Your job is to find out where the users likely live and work at!\n\n# TODO: Load up the dataset and take a peek at its head\n# Convert the date using pd.to_datetime, and the time using pd.to_timedelta\n\ndataFile = r'C:\\Users\\ng35019\\Documents\\Training\\python_for_ds\\Module5Clustering\\Datasets\\CDR.csv'\ndf = pd.read_csv(dataFile); \ndf.CallDate = pd.to_datetime(df.CallDate)\ndf.CallTime = pd.to_timedelta(df.CallTime)\ndf", "_____no_output_____" ], [ "# TODO: Get a distinct list of \"In\" phone numbers (users) and store the values in a\n# regular python list.\n# Hint: https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.tolist.html\n\nIn = df.In.unique().tolist(); In", "_____no_output_____" ], [ "# TODO: Create a slice called user1 that filters to only include dataset records where the\n# \"In\" feature (user phone number) is equal to the first number on your unique list above\n\nuser1 = df[df.In == In[0]]; user1", "_____no_output_____" ], [ "# INFO: Plot all the call locations\nuser1.plot.scatter(x='TowerLon', y='TowerLat', c='gray', alpha=0.1, title='Call Locations')\nshowandtell() # Comment this line out when you're ready to proceed", "_____no_output_____" ], [ "# On Week End days:\n# 1. People probably don't go into work\nwe = user1[(user1.DOW == 'Sat') | (user1.DOW == 'Sun')]\n# 2. They probably sleep in late on Saturday\nwe = we[(we.CallTime < '06:00:00') | (we.CallTime > '10:00:00')]; we\n\n# 3. They probably run a bunch of random errands, since they couldn't during the week\n# 4. They should be home, at least during the very late hours, e.g. 1-4 AM", "_____no_output_____" ], [ "fig = plt.figure()\nax = fig.add_subplot(111)\nax.scatter(we.TowerLon,we.TowerLat, c='g', marker='o', alpha=0.2)\nax.set_title('Weekend Calls (<6am or >10p)')\nshowandtell() # TODO: Comment this line out when you're ready to proceed", "_____no_output_____" ], [ "# On Weekdays:\n# 1. People probably are at work during normal working hours\n# 2. They probably are at home in the early morning and during the late night\n# 3. 
They probably spend time commuting between work and home everyday\nwd = user1[(user1.DOW != 'Sat') & (user1.DOW != 'Sun')]\nwd = wd[(wd.CallTime < '07:00:00') | (wd.CallTime > '20:00:00')]; wd", "_____no_output_____" ], [ "wd.plot.scatter(x='TowerLon', y='TowerLat', alpha=0.1, title='Call Locations')\nshowandtell()", "_____no_output_____" ], [ "# join both df\n\n#df = we[['TowerLat','TowerLon']].append(wd[['TowerLat','TowerLon']]);\ndf = we[['TowerLat','TowerLon']]; \ndf.plot.scatter(x='TowerLon', y='TowerLat', alpha=0.1, title='Call Locations')\nshowandtell()", "_____no_output_____" ], [ "# run K-Mean with K=1 and plot the centroids\n\n# TODO: Use K-Means to try and find seven cluster centers in this dataframe.\nkmeans_model = KMeans(n_clusters=2)\nkmeans_model.fit(df)\n\n# INFO: Print and plot the centroids...\ncentroids = kmeans_model.cluster_centers_\n\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.set_title('Weekend Calls (<6am or >10p) and Weekdays Calls (<7am or >8pm)')\nax.scatter(df.TowerLon,df.TowerLat, c='g', marker='o', alpha=0.2)\nax.scatter(centroids[:,1], centroids[:,0], marker='x', c='red', alpha=0.5, linewidths=10, s=169)\n\nshowandtell() \nprint(centroids)\n", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]