Dataset schema (column name, dtype, and the reported value statistics):

| Column | Type | Statistics |
|---|---|---|
| hexsha | stringlengths | 40 to 40 |
| size | int64 | 6 to 14.9M |
| ext | stringclasses | 1 value |
| lang | stringclasses | 1 value |
| max_stars_repo_path | stringlengths | 6 to 260 |
| max_stars_repo_name | stringlengths | 6 to 119 |
| max_stars_repo_head_hexsha | stringlengths | 40 to 41 |
| max_stars_repo_licenses | list | |
| max_stars_count | int64 | 1 to 191k |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24 to 24 |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24 to 24 |
| max_issues_repo_path | stringlengths | 6 to 260 |
| max_issues_repo_name | stringlengths | 6 to 119 |
| max_issues_repo_head_hexsha | stringlengths | 40 to 41 |
| max_issues_repo_licenses | list | |
| max_issues_count | int64 | 1 to 67k |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24 to 24 |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24 to 24 |
| max_forks_repo_path | stringlengths | 6 to 260 |
| max_forks_repo_name | stringlengths | 6 to 119 |
| max_forks_repo_head_hexsha | stringlengths | 40 to 41 |
| max_forks_repo_licenses | list | |
| max_forks_count | int64 | 1 to 105k |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24 to 24 |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24 to 24 |
| avg_line_length | float64 | 2 to 1.04M |
| max_line_length | int64 | 2 to 11.2M |
| alphanum_fraction | float64 | 0 to 1 |
| cells | list | |
| cell_types | list | |
| cell_type_groups | list | |
hexsha: cb7dae4f95d26ef216946744e7c368badae9ad28
size: 12,165
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: pages/workshop/NumPy/Intermediate NumPy.ipynb
max_stars_repo_name: rrbuchholz/python-training
max_stars_repo_head_hexsha: 6bade362a17b174f44a63d3474e54e0e6402b954
max_stars_repo_licenses: [ "BSD-3-Clause" ]
max_stars_count: 87
max_stars_repo_stars_event_min_datetime: 2019-08-29T06:54:06.000Z
max_stars_repo_stars_event_max_datetime: 2022-03-14T12:52:59.000Z
max_issues_repo_path: pages/workshop/NumPy/Intermediate NumPy.ipynb
max_issues_repo_name: rrbuchholz/python-training
max_issues_repo_head_hexsha: 6bade362a17b174f44a63d3474e54e0e6402b954
max_issues_repo_licenses: [ "BSD-3-Clause" ]
max_issues_count: 100
max_issues_repo_issues_event_min_datetime: 2019-08-30T16:52:36.000Z
max_issues_repo_issues_event_max_datetime: 2022-02-10T12:12:05.000Z
max_forks_repo_path: pages/workshop/NumPy/Intermediate NumPy.ipynb
max_forks_repo_name: rrbuchholz/python-training
max_forks_repo_head_hexsha: 6bade362a17b174f44a63d3474e54e0e6402b954
max_forks_repo_licenses: [ "BSD-3-Clause" ]
max_forks_count: 58
max_forks_repo_forks_event_min_datetime: 2019-07-19T20:39:18.000Z
max_forks_repo_forks_event_max_datetime: 2022-03-07T13:47:32.000Z
avg_line_length: 24.378758
max_line_length: 311
alphanum_fraction: 0.538512
[ [ [ "<a name=\"top\"></a>\n<div style=\"width:1000 px\">\n\n<div style=\"float:right; width:98 px; height:98px;\">\n<img src=\"https://raw.githubusercontent.com/Unidata/MetPy/master/src/metpy/plots/_static/unidata_150x150.png\" alt=\"Unidata Logo\" style=\"height: 98px;\">\n</div>\n\n<h1>Intermediate NumPy</h1>\n<h3>Unidata Python Workshop</h3>\n\n<div style=\"clear:both\"></div>\n</div>\n\n<hr style=\"height:2px;\">\n\n<div style=\"float:right; width:250 px\"><img src=\"http://www.contribute.geeksforgeeks.org/wp-content/uploads/numpy-logo1.jpg\" alt=\"NumPy Logo\" style=\"height: 250px;\"></div>\n\n### Questions\n1. How do we work with the multiple dimensions in a NumPy Array?\n1. How can we extract irregular subsets of data?\n1. How can we sort an array?\n\n### Objectives\n1. <a href=\"#indexing\">Using axes to slice arrays</a>\n1. <a href=\"#boolean\">Index arrays using true and false</a>\n1. <a href=\"#integers\">Index arrays using arrays of indices</a>", "_____no_output_____" ], [ "<a name=\"indexing\"></a>\n## 1. Using axes to slice arrays\n\nThe solution to the last exercise in the Numpy Basics notebook introduces an important concept when working with NumPy: the axis. This indicates the particular dimension along which a function should operate (provided the function does something taking multiple values and converts to a single value). \n\nLet's look at a concrete example with `sum`:", "_____no_output_____" ] ], [ [ "# Convention for import to get shortened namespace\nimport numpy as np", "_____no_output_____" ], [ "# Create an array for testing\na = np.arange(12).reshape(3, 4)\na", "_____no_output_____" ], [ "# This calculates the total of all values in the array\nnp.sum(a)", "_____no_output_____" ], [ "# Keep this in mind:\na.shape", "_____no_output_____" ], [ "# Instead, take the sum across the rows:\nnp.sum(a, axis=0)", "_____no_output_____" ], [ "# Or do the same and take the some across columns:\nnp.sum(a, axis=1)", "_____no_output_____" ] ], [ [ "<div class=\"alert alert-success\">\n <b>EXERCISE</b>:\n <ul>\n <li>Finish the code below to calculate advection. The trick is to figure out\n how to do the summation.</li>\n </ul>\n</div>", "_____no_output_____" ] ], [ [ "# Synthetic data\ntemp = np.random.randn(100, 50)\nu = np.random.randn(100, 50)\nv = np.random.randn(100, 50)\n\n# Calculate the gradient components\ngradx, grady = np.gradient(temp)\n\n# Turn into an array of vectors:\n# axis 0 is x position\n# axis 1 is y position\n# axis 2 is the vector components\ngrad_vec = np.dstack([gradx, grady])\nprint(grad_vec.shape)\n\n# Turn wind components into vector\nwind_vec = np.dstack([u, v])\n\n# Calculate advection, the dot product of wind and the negative of gradient\n# DON'T USE NUMPY.DOT (doesn't work). Multiply and add.\n", "_____no_output_____" ] ], [ [ "<div class=\"alert alert-info\">\n <b>SOLUTION</b>\n</div>", "_____no_output_____" ] ], [ [ "# %load solutions/advection.py", "_____no_output_____" ] ], [ [ "<a href=\"#top\">Top</a>\n<hr style=\"height:2px;\">", "_____no_output_____" ], [ "<a name=\"boolean\"></a>\n## 2. 
Indexing Arrays with Boolean Values\nNumpy can easily create arrays of boolean values and use those to select certain values to extract from an array", "_____no_output_____" ] ], [ [ "# Create some synthetic data representing temperature and wind speed data\nnp.random.seed(19990503) # Make sure we all have the same data\ntemp = (20 * np.cos(np.linspace(0, 2 * np.pi, 100)) +\n 50 + 2 * np.random.randn(100))\nspd = (np.abs(10 * np.sin(np.linspace(0, 2 * np.pi, 100)) +\n 10 + 5 * np.random.randn(100)))", "_____no_output_____" ], [ "%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.plot(temp, 'tab:red')\nplt.plot(spd, 'tab:blue');", "_____no_output_____" ] ], [ [ "By doing a comparision between a NumPy array and a value, we get an\narray of values representing the results of the comparison between\neach element and the value", "_____no_output_____" ] ], [ [ "temp > 45", "_____no_output_____" ] ], [ [ "We can take the resulting array and use this to index into the\nNumPy array and retrieve the values where the result was true", "_____no_output_____" ] ], [ [ "print(temp[temp > 45])", "_____no_output_____" ] ], [ [ "So long as the size of the boolean array matches the data, the boolean array can come from anywhere", "_____no_output_____" ] ], [ [ "print(temp[spd > 10])", "_____no_output_____" ], [ "# Make a copy so we don't modify the original data\ntemp2 = temp.copy()\n\n# Replace all places where spd is <10 with NaN (not a number) so matplotlib skips it\ntemp2[spd < 10] = np.nan\nplt.plot(temp2, 'tab:red')", "_____no_output_____" ] ], [ [ "Can also combine multiple boolean arrays using the syntax for bitwise operations. **MUST HAVE PARENTHESES** due to operator precedence.", "_____no_output_____" ] ], [ [ "print(temp[(temp < 45) & (spd > 10)])", "_____no_output_____" ] ], [ [ "<div class=\"alert alert-success\">\n <b>EXERCISE</b>:\n <ul>\n <li>Heat index is only defined for temperatures >= 80F and relative humidity values >= 40%. Using the data generated below, use boolean indexing to extract the data where heat index has a valid value.</li>\n </ul>\n</div>", "_____no_output_____" ] ], [ [ "# Here's the \"data\"\nnp.random.seed(19990503) # Make sure we all have the same data\ntemp = (20 * np.cos(np.linspace(0, 2 * np.pi, 100)) +\n 80 + 2 * np.random.randn(100))\nrh = (np.abs(20 * np.cos(np.linspace(0, 4 * np.pi, 100)) +\n 50 + 5 * np.random.randn(100)))\n\n\n# Create a mask for the two conditions described above\n# good_heat_index = \n\n\n\n# Use this mask to grab the temperature and relative humidity values that together\n# will give good heat index values\n# temp[] ?\n\n\n# BONUS POINTS: Plot only the data where heat index is defined by\n# inverting the mask (using `~mask`) and setting invalid values to np.nan", "_____no_output_____" ] ], [ [ "<div class=\"alert alert-info\">\n <b>SOLUTION</b>\n</div>", "_____no_output_____" ] ], [ [ "# %load solutions/heat_index.py", "_____no_output_____" ] ], [ [ "<a href=\"#top\">Top</a>\n<hr style=\"height:2px;\">", "_____no_output_____" ], [ "<a name=\"integers\"></a>\n## 3. Indexing using arrays of indices\n\nYou can also use a list or array of indices to extract particular values--this is a natural extension of the regular indexing. 
For instance, just as we can select the first element:", "_____no_output_____" ] ], [ [ "print(temp[0])", "_____no_output_____" ] ], [ [ "We can also extract the first, fifth, and tenth elements:", "_____no_output_____" ] ], [ [ "print(temp[[0, 4, 9]])", "_____no_output_____" ] ], [ [ "One of the ways this comes into play is trying to sort numpy arrays using `argsort`. This function returns the indices of the array that give the items in sorted order. So for our temp \"data\":", "_____no_output_____" ] ], [ [ "inds = np.argsort(temp)\nprint(inds)", "_____no_output_____" ] ], [ [ "We can use this array of indices to pass into temp to get it in sorted order:", "_____no_output_____" ] ], [ [ "print(temp[inds])", "_____no_output_____" ] ], [ [ "Or we can slice `inds` to only give the 10 highest temperatures:", "_____no_output_____" ] ], [ [ "ten_highest = inds[-10:]\nprint(temp[ten_highest])", "_____no_output_____" ] ], [ [ "There are other numpy arg functions that return indices for operating:", "_____no_output_____" ] ], [ [ "np.*arg*?", "_____no_output_____" ] ], [ [ "<a href=\"#top\">Top</a>\n<hr style=\"height:2px;\">", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
hexsha: cb7db3058c7478a3eee48f68f82e8250cbd6da38
size: 20,610
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: test.ipynb
max_stars_repo_name: rdrockz/c9-python-getting-started
max_stars_repo_head_hexsha: 3b6a1e08851dc795f11a0214fecc8678078433eb
max_stars_repo_licenses: [ "MIT" ]
max_stars_count: null
max_stars_repo_stars_event_min_datetime: null
max_stars_repo_stars_event_max_datetime: null
max_issues_repo_path: test.ipynb
max_issues_repo_name: rdrockz/c9-python-getting-started
max_issues_repo_head_hexsha: 3b6a1e08851dc795f11a0214fecc8678078433eb
max_issues_repo_licenses: [ "MIT" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: test.ipynb
max_forks_repo_name: rdrockz/c9-python-getting-started
max_forks_repo_head_hexsha: 3b6a1e08851dc795f11a0214fecc8678078433eb
max_forks_repo_licenses: [ "MIT" ]
max_forks_count: null
max_forks_repo_forks_event_min_datetime: null
max_forks_repo_forks_event_max_datetime: null
avg_line_length: 26.627907
max_line_length: 278
alphanum_fraction: 0.450364
[ [ [ "def dig_pow(n, p):\n # creating a placholder\n length = len(str(n))\n total=0\n for digits in range(1,length):\n \n a= n % (10**digits)\n print(a)\n total+= (a ** (p+length-digits))\n print(total)\n if total % n==0:\n return total //n\n else:\n return -1 \n", "_____no_output_____" ], [ "dig_pow(46288,3) # SHOULD RETURN 51 as 4³ + 6⁴+ 2⁵ + 8⁶ + 8⁷ = 2360688 = 46288 * 51", "8\n2097152\n88\n464406183936\n288\n2445761839104\n6288\n1565773854474240\n" ], [ "def dig_pow(n, p):\n # by list comprehension\n total=0\n new_p=int(p)\n for digit in str(n):\n #a= n % (10**digits)\n #print(a)\n print(digit)\n total += (int(digit) ** new_p)\n new_p+=1\n print(total)\n if total % n==0:\n return total //n\n else:\n return -1 \n", "_____no_output_____" ], [ "dig_pow(46288,3)", "4\n64\n6\n1360\n2\n1392\n8\n263536\n8\n2360688\n" ], [ "def openOrSenior(data):\n # List of List and make a newList\n newList =[]\n \n for a,b in data:\n print (a,b)\n #if #more than 60 and more than 7\n \n #newList.append(\"Senior\")\n \n #else:\n #newList.append(\"Open\")\n \n \n #return new_list\n ", "_____no_output_____" ], [ "openOrSenior([[45, 12],[55,21],[19, -2],[104, 20]])", "45 12\n55 21\n19 -2\n104 20\n" ], [ "def dirReduc(arr):\n # so n is +1 and s is -1 for vertical, so is the case for e and w for horizontal\n # reduce function to return [] \n # might have to use recursive\n newList =[]\n #newArray =[]\n for i,dir in enumerate(arr):\n #print (dir)\n print (arr[i])\n if arr[i] == \"NORTH\" and arr[i+1]!= \"SOUTH\" or arr[i] == \"EAST\" and arr[i+1]!= \"WEST\" :\n print (arr[i])\n\n \n #if newArray.append(dir) \n \n \n return arr", "_____no_output_____" ], [ "a = [\"NORTH\", \"SOUTH\", \"SOUTH\", \"EAST\", \"WEST\", \"NORTH\", \"WEST\"]\ndirReduc(a) # West", "NORTH\nSOUTH\nSOUTH\nEAST\nWEST\nNORTH\nNORTH\nWEST\n" ], [ "#Check if North and South are adjacent,\n\ndef dirReduc(arr):\n newList=[]\n for i, element in enumerate(arr):\n previous_element = arr[i-1] if i > 0 else None\n current_element = element\n if current_element is \"NORTH\" and previous_element is not \"SOUTH\":\n newList.append(\"NORTH\")\n elif current_element is \"SOUTH\" and previous_element is not \"NORTH\":\n newList.append(\"SOUTH\")\n elif current_element is \"EAST\" and previous_element is not \"WEST\":\n newList.append(\"EAST\") \n elif current_element is \"WEST\" and previous_element is not \"EAST\":\n newList.append(\"WEST\")\n\n #if dir== \"NORTH\" and newList[i-1]\n #newList[i].append(dir) \n #if dir[i] == \"NORTH\" and dir[i+1] == \"SOUTH\":\n # dir.pop(i)\n print (newList)\n", "_____no_output_____" ], [ "def dirReduc(arr):\n dir = [\"NORTH\",\"SOUTH\",\"EAST\",\"WEST\"]\n for i,element in enumerate[arr]:\n if element in dir\n\n # dir = [(\"NORTH\",\"SOUTH\"),(\"EAST\",\"WEST\"),(\"SOUTH\",\"NORTH\"),(\"WEST\",\"EAST\")]\n'for i, element in enumerate(mylist):\n previous_element = mylist[i-1] if i > 0 else None\n current_element = element\n next_element = mylist[i+1] if i < len(mylist)-1 else None\n print(previous_element, current_element, next_element)", "_____no_output_____" ], [ "a = [\"NORTH\", \"SOUTH\", \"SOUTH\", \"EAST\", \"WEST\", \"NORTH\", \"WEST\"]\n#dirReduc(a) # West \narr=a\ndir = [(\"NORTH\",\"SOUTH\"),(\"EAST\",\"WEST\"),(\"SOUTH\",\"NORTH\"),(\"WEST\",\"EAST\")]\ndef tup(a):\n if len(a) % 2 != 0:\n listTup= list(zip(a[::2],a[+1::2]))\n listTup.append(a[-1])\n else:\n listTup= list(zip(arr[::2],arr[+1::2]))\n\n return listTup\n\n\n\n#i=iter(arr)\n#if i in dir and i.next()==", "_____no_output_____" ], [ "def dirReduc(arr):\n 
dir = [(\"NORTH\",\"SOUTH\"),(\"EAST\",\"WEST\"),(\"SOUTH\",\"NORTH\"),(\"WEST\",\"EAST\")]\n \n def tup(a):\n if len(a) % 2 != 0:\n listTup= list(zip(a[::2],a[+1::2]))\n listTup.append(a[-1])\n else:\n listTup= list(zip(arr[::2],arr[+1::2]))\n\n return listTup\n \n new=[]\n tuplist=tup(arr)\n print (tuplist)\n \n if dir in tuplist :\n print(\"y\")\n else:\n print(\"nah\")\n print (dir)\n while dir in tuplist:\n new= (x for x in tuplist if x not in dir)\n tuplist =dirReduc(new)\n print(new)\n #l3 = [x for x in l1 if x not in l2]\n #else:\n # pass\n #print(a)\n ", "_____no_output_____" ], [ "a = [\"NORTH\", \"SOUTH\", \"SOUTH\", \"EAST\", \"WEST\", \"NORTH\", \"WEST\"]\ndirReduc(a) # West", "[('NORTH', 'SOUTH'), ('SOUTH', 'EAST'), ('WEST', 'NORTH'), 'WEST']\nnah\n[('NORTH', 'SOUTH'), ('EAST', 'WEST'), ('SOUTH', 'NORTH'), ('WEST', 'EAST')]\n" ], [ "def dirReduc(arr): \n def tup(a):\n if len(a) % 2 != 0:\n listTup= list(zip(a[::2],a[+1::2]))\n listTup.append(a[-1])\n else:\n listTup= list(zip(arr[::2],arr[+1::2]))\n\n return listTup\n\n new= tup(arr)\n old=[]\n dir = [(\"NORTH\",\"SOUTH\"),(\"EAST\",\"WEST\"),(\"SOUTH\",\"NORTH\"),(\"WEST\",\"EAST\")]\n #recursive\n while len(old)== len(new):\n \n old= tup(new)\n new= [element for element in old if element not in dir ]\n new= [x for t in new for x in t if len(x)>1]\n #and element for x in new fro element in x\n print (list(new))\n #old=new\n #print (old)\n \n #print (new)", "_____no_output_____" ], [ "a = [\"NORTH\", \"SOUTH\", \"SOUTH\", \"EAST\", \"WEST\", \"NORTH\", \"WEST\"]\ndirReduc(a) # West\n", "_____no_output_____" ], [ "def dirReduc(arr): \n def tup(a):\n if len(a) % 2 != 0:\n listTup= list(zip(a[::2],a[+1::2]))\n listTup.append(a[-1])\n else:\n listTup= list(zip(arr[::2],arr[+1::2]))\n\n return listTup\n a=arr\n old= tup(a)\n new=[]\n dir = [(\"NORTH\",\"SOUTH\"),(\"EAST\",\"WEST\"),(\"SOUTH\",\"NORTH\"),(\"WEST\",\"EAST\")]\n #recursive?\n if len(old) != len(new): \n #old= tup(new)\n new= [element for element in old if element not in dir ]\n print(new)\n #new= [element for tup in new for element in new ]\n #new= [element for t in new for element in t if len(element)>1]\n #and element for x in new fro element in x\n print (list(new))\n #old=new\n #print (old)\n else:\n print (new)", "_____no_output_____" ], [ "a = [\"NORTH\", \"SOUTH\", \"SOUTH\", \"EAST\", \"WEST\", \"NORTH\", \"WEST\"]\ndirReduc(a) # West", "[('SOUTH', 'EAST'), ('WEST', 'NORTH'), 'WEST']\n[('SOUTH', 'EAST'), ('WEST', 'NORTH'), 'WEST']\n" ] ], [ [ " if arr[index]==\"NORTH\" and tempList[index+1]!=\"SOUTH\" :#or arr[index]==\"SOUTH\" and tempList[index+1]!=\"NORTH\" :\n newList.append(arr[index])\n #elif arr[index]==\"EAST\" and tempList[index+1]!=\"WEST\" or arr[index]==\"WEST\" and tempList[index+1]!=\"EAST\" :\n newList.append(arr[index])\n else:\n pass \n #while len(newList)==len(tempList):\n #print(newList)\n #dirReduc(newList)\n print(newList)", "_____no_output_____" ] ], [ [ "def dirReduc(arr):\n tempList= arr\n newList=[]\n #print(arr)\n dir = [(\"NORTH\",\"SOUTH\"),(\"EAST\",\"WEST\"),(\"SOUTH\",\"NORTH\"),(\"WEST\",\"EAST\")]\n #print (dir)\n for index in range(len(arr)-1):\n \n #for i in range(len(dir)):\n \n #if [(zip(arr[::],tempList[1::]))] not in dir[:]:\n #newList.append(arr[index+1])\n\n #print(list(zip(arr[::],tempList[+1::])))\n if [list(zip(arr[index:],tempList[index+1:]))] not in [(x for x in dir)]:\n newList.append(arr[index+1])\n #else:\n #pass\n print(newList)\n\n \n\n\n", "_____no_output_____" ], [ 
"dirReduc([\"NORTH\",\"SOUTH\",\"SOUTH\",\"EAST\",\"WEST\"])", "['SOUTH', 'SOUTH', 'EAST', 'WEST']\n" ], [ "dirReduc([\"NORTH\",\"SOUTH\",\"SOUTH\",\"EAST\",\"WEST\"])", "{'N': 0, 'S': 1, 'E': 0, 'W': 0}\n" ], [ "def dirReduc(arr):\n turtle ={'N':0,'S':0,'E':0,'W':0}\n turtleNew=turtle\n a=arr[1:]\n for i in range(0,len(arr)-1):\n if arr[i] == \"NORTH\" and a[i] != \"SOUTH\":\n turtle['N']+=1\n #turtle['S']-=1\n elif arr[i] == \"SOUTH\" and a[i] != \"NORTH\":\n turtle['S']+=1\n #turtle['N']-=1\n\n elif arr[i] == \"EAST\" and a[i] != \"WEST\":\n turtle['E']+=1\n #turtle['W']-=1\n\n elif arr[i] == \"WEST\" and a[i] != \"EAST\":\n turtle['W']+=1\n #turtle['E']-=1\n\n print(turtle)", "_____no_output_____" ], [ "a = [\"NORTH\", \"SOUTH\", \"SOUTH\", \"EAST\", \"WEST\", \"NORTH\", \"WEST\"]\ndirReduc(a)", "{'N': 1, 'S': 2, 'E': 0, 'W': 1}\n" ], [ "def dirReduc(arr):\n new=arr\n for l in arr:\n for dir in new:\n \n\n \"\"\"\n if new[:-2] ==\"NORTH\" and l == \"SOUTH\":\n new.pop()\n new.pop()\n print(new)\n elif new[:-2]==\"SOUTH\" and l == \"NORTH\":\n new.pop()\n new.pop()\n print(new)\n elif new[:-2]==\"EAST\" and l == \"WEST\":\n new.pop()\n new.pop()\n print(new)\n elif new[:-2]==\"WEST\" and l == \"EAST\":\n new.pop()\n new.pop()\n print(new)\n #else:\n #break\n # elif new[:-1]==False:\n # new.append(l)\n \"\"\"\n print(new) \n \n", "_____no_output_____" ], [ "a = [\"NORTH\", \"SOUTH\", \"SOUTH\", \"EAST\", \"WEST\", \"NORTH\", \"WEST\"]\ndirReduc(a)\n", "['NORTH', 'SOUTH', 'SOUTH', 'EAST', 'WEST', 'NORTH', 'WEST']\n" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
hexsha: cb7db3889930e741d325cf2d06df1a7fc4d9bf71
size: 6,252
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: courses/machine_learning/tensorflow/c_batched.ipynb
max_stars_repo_name: ecuriotto/training-data-analyst
max_stars_repo_head_hexsha: 3da6b9a4f9715c6cf9653c2025ac071b15011111
max_stars_repo_licenses: [ "Apache-2.0" ]
max_stars_count: 4
max_stars_repo_stars_event_min_datetime: 2021-02-20T19:23:56.000Z
max_stars_repo_stars_event_max_datetime: 2021-02-21T07:28:49.000Z
max_issues_repo_path: courses/machine_learning/tensorflow/c_batched.ipynb
max_issues_repo_name: jgamblegeorge/training-data-analyst
max_issues_repo_head_hexsha: 9f9006e7b740f59798ac6016d55dd4e38a1e0528
max_issues_repo_licenses: [ "Apache-2.0" ]
max_issues_count: 11
max_issues_repo_issues_event_min_datetime: 2020-01-28T23:13:27.000Z
max_issues_repo_issues_event_max_datetime: 2022-03-12T00:11:30.000Z
max_forks_repo_path: courses/machine_learning/tensorflow/c_batched.ipynb
max_forks_repo_name: jgamblegeorge/training-data-analyst
max_forks_repo_head_hexsha: 9f9006e7b740f59798ac6016d55dd4e38a1e0528
max_forks_repo_licenses: [ "Apache-2.0" ]
max_forks_count: 4
max_forks_repo_forks_event_min_datetime: 2020-05-15T06:23:05.000Z
max_forks_repo_forks_event_max_datetime: 2021-12-20T06:00:15.000Z
avg_line_length: 33.612903
max_line_length: 553
alphanum_fraction: 0.602047
[ [ [ "<h1> 2c. Refactoring to add batching and feature-creation </h1>\n\nIn this notebook, we continue reading the same small dataset, but refactor our ML pipeline in two small, but significant, ways:\n<ol>\n<li> Refactor the input to read data in batches.\n<li> Refactor the feature creation so that it is not one-to-one with inputs.\n</ol>\nThe Pandas function in the previous notebook also batched, only after it had read the whole data into memory -- on a large dataset, this won't be an option.", "_____no_output_____" ] ], [ [ "import tensorflow as tf\nimport numpy as np\nimport shutil\nprint(tf.__version__)", "_____no_output_____" ] ], [ [ "<h2> 1. Refactor the input </h2>\n\nRead data created in Lab1a, but this time make it more general and performant. Instead of using Pandas, we will use TensorFlow's Dataset API.", "_____no_output_____" ] ], [ [ "CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']\nLABEL_COLUMN = 'fare_amount'\nDEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]\n\ndef read_dataset(filename, mode, batch_size = 512):\n def _input_fn():\n def decode_csv(value_column):\n columns = tf.decode_csv(value_column, record_defaults = DEFAULTS)\n features = dict(zip(CSV_COLUMNS, columns))\n label = features.pop(LABEL_COLUMN)\n return features, label\n\n # Create list of files that match pattern\n file_list = tf.gfile.Glob(filename)\n\n # Create dataset from file list\n dataset = tf.data.TextLineDataset(file_list).map(decode_csv)\n if mode == tf.estimator.ModeKeys.TRAIN:\n num_epochs = None # indefinitely\n dataset = dataset.shuffle(buffer_size = 10 * batch_size)\n else:\n num_epochs = 1 # end-of-input after this\n\n dataset = dataset.repeat(num_epochs).batch(batch_size)\n return dataset.make_one_shot_iterator().get_next()\n return _input_fn\n \n\ndef get_train():\n return read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN)\n\ndef get_valid():\n return read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL)\n\ndef get_test():\n return read_dataset('./taxi-test.csv', mode = tf.estimator.ModeKeys.EVAL)", "_____no_output_____" ] ], [ [ "<h2> 2. Refactor the way features are created. </h2>\n\nFor now, pass these through (same as previous lab). However, refactoring this way will enable us to break the one-to-one relationship between inputs and features.", "_____no_output_____" ] ], [ [ "INPUT_COLUMNS = [\n tf.feature_column.numeric_column('pickuplon'),\n tf.feature_column.numeric_column('pickuplat'),\n tf.feature_column.numeric_column('dropofflat'),\n tf.feature_column.numeric_column('dropofflon'),\n tf.feature_column.numeric_column('passengers'),\n]\n\ndef add_more_features(feats):\n # Nothing to add (yet!)\n return feats\n\nfeature_cols = add_more_features(INPUT_COLUMNS)", "_____no_output_____" ] ], [ [ "<h2> Create and train the model </h2>\n\nNote that we train for num_steps * batch_size examples.", "_____no_output_____" ] ], [ [ "tf.logging.set_verbosity(tf.logging.INFO)\nOUTDIR = 'taxi_trained'\nshutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time\nmodel = tf.estimator.LinearRegressor(\n feature_columns = feature_cols, model_dir = OUTDIR)\nmodel.train(input_fn = get_train(), steps = 100); # TODO: change the name of input_fn as needed", "_____no_output_____" ] ], [ [ "<h3> Evaluate model </h3>\n\nAs before, evaluate on the validation data. 
We'll do the third refactoring (to move the evaluation into the training loop) in the next lab.", "_____no_output_____" ] ], [ [ "def print_rmse(model, name, input_fn):\n metrics = model.evaluate(input_fn = input_fn, steps = 1)\n print('RMSE on {} dataset = {}'.format(name, np.sqrt(metrics['average_loss'])))\nprint_rmse(model, 'validation', get_valid())", "_____no_output_____" ] ], [ [ "Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
hexsha: cb7db702fb856e37e6eb6126ed3667aa0522b39b
size: 16,970
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: Baseline/resnet.ipynb
max_stars_repo_name: satish860/foodimageclassifier
max_stars_repo_head_hexsha: 12ce22e56f42de36f2b32d4344a57d20cf2fe089
max_stars_repo_licenses: [ "Apache-2.0" ]
max_stars_count: null
max_stars_repo_stars_event_min_datetime: null
max_stars_repo_stars_event_max_datetime: null
max_issues_repo_path: Baseline/resnet.ipynb
max_issues_repo_name: satish860/foodimageclassifier
max_issues_repo_head_hexsha: 12ce22e56f42de36f2b32d4344a57d20cf2fe089
max_issues_repo_licenses: [ "Apache-2.0" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: Baseline/resnet.ipynb
max_forks_repo_name: satish860/foodimageclassifier
max_forks_repo_head_hexsha: 12ce22e56f42de36f2b32d4344a57d20cf2fe089
max_forks_repo_licenses: [ "Apache-2.0" ]
max_forks_count: null
max_forks_repo_forks_event_min_datetime: null
max_forks_repo_forks_event_max_datetime: null
avg_line_length: 29.309154
max_line_length: 274
alphanum_fraction: 0.470183
[ [ [ "# Food Image Classifier", "_____no_output_____" ], [ "This part of the Manning Live project - https://liveproject.manning.com/project/210 . In synposis, By working on this project, I will be classying the food variety of 101 type. Dataset is already availble in public but we will be starting with subset of the classifier", "_____no_output_____" ], [ "## Dataset", "_____no_output_____" ], [ "As a general best practice to ALWAYS start with a subset of the dataset rather than a full one. There are two reason for the same\n1. As you experiement with the model, You dont want to run over all the dataset that will slow down the process\n2. You will end up wasting lots of GPU resources well before the getting best model for the Job", "_____no_output_____" ], [ "In the Case live Project, The authors already shared the subset of the notebook so we can use the same for the baseline model", "_____no_output_____" ] ], [ [ "#!wget https://lp-prod-resources.s3-us-west-2.amazonaws.com/other/Deploying+a+Deep+Learning+Model+on+Web+and+Mobile+Applications+Using+TensorFlow/Food+101+-+Data+Subset.zip\n#!unzip Food+101+-+Data+Subset.zip", "_____no_output_____" ], [ "import torch\nfrom torchvision import datasets,models\nimport torchvision.transforms as tt\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom torchvision.utils import make_grid\nfrom torch.utils.data import DataLoader,random_split,Dataset\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nfrom fastprogress.fastprogress import master_bar, progress_bar", "_____no_output_____" ], [ "stats = ((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))\ntrain_tfms = tt.Compose([tt.RandomHorizontalFlip(),\n tt.Resize([224,224]),\n tt.ToTensor(), \n tt.Normalize(*stats,inplace=True)])\nvalid_tfms = tt.Compose([tt.Resize([224,224]),tt.ToTensor(), tt.Normalize(*stats)])", "_____no_output_____" ] ], [ [ "Create a Pytorch dataset from the image folder. 
This will allow us to create a Training dataset and validation dataset", "_____no_output_____" ] ], [ [ "ds = datasets.ImageFolder('food-101-subset/images/')", "_____no_output_____" ], [ "class CustomDataset(Dataset):\n def __init__(self,ds,transformer):\n self.ds = ds\n self.transform = transformer\n \n def __getitem__(self,idx):\n image,label = self.ds[idx]\n img = self.transform(image)\n return img,label\n \n def __len__(self):\n return len(ds)", "_____no_output_____" ], [ "train_len=0.8*len(ds)\nval_len = len(ds) - train_len\nint(train_len),int(val_len)", "_____no_output_____" ], [ "train_ds,val_ds = random_split(dataset=ds,lengths=[int(train_len),int(val_len)],generator=torch.Generator().manual_seed(42))", "_____no_output_____" ], [ "t_ds = CustomDataset(train_ds.dataset,train_tfms)\nv_ds = CustomDataset(val_ds.dataset,valid_tfms)", "_____no_output_____" ], [ "batch_size = 32\ntrain_dl = DataLoader(t_ds, batch_size, shuffle=True, pin_memory=True)\nvalid_dl = DataLoader(v_ds, batch_size, pin_memory=True)", "_____no_output_____" ], [ "for x,yb in train_dl:\n print(x.shape)\n break;", "torch.Size([32, 3, 224, 224])\n" ], [ "def show_batch(dl):\n for images, labels in dl:\n fig, ax = plt.subplots(figsize=(12, 12))\n ax.set_xticks([]); ax.set_yticks([])\n ax.imshow(make_grid(images[:64], nrow=8).permute(1, 2, 0))\n break", "_____no_output_____" ] ], [ [ "# Create a ResNet Model with default Parameters", "_____no_output_____" ] ], [ [ "class Flatten(nn.Module):\n def forward(self,x):\n return torch.flatten(x,1)\n\nclass FoodImageClassifer(nn.Module):\n def __init__(self):\n super().__init__()\n resnet = models.resnet34(pretrained=True)\n self.body = nn.Sequential(*list(resnet.children())[:-2])\n self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1),Flatten(),nn.Linear(resnet.fc.in_features,3))\n \n def forward(self,x):\n x = self.body(x)\n return self.head(x)\n \n def freeze(self):\n for name,param in self.body.named_parameters():\n param.requires_grad = True", "_____no_output_____" ], [ "def fit(epochs,model,train_dl,valid_dl,loss_fn,opt):\n mb = master_bar(range(epochs))\n mb.write(['epoch','train_loss','valid_loss','trn_acc','val_acc'],table=True)\n\n for i in mb: \n trn_loss,val_loss = 0.0,0.0\n trn_acc,val_acc = 0,0\n trn_n,val_n = len(train_dl.dataset),len(valid_dl.dataset)\n model.train()\n for xb,yb in progress_bar(train_dl,parent=mb):\n xb,yb = xb.to(device), yb.to(device)\n out = model(xb)\n opt.zero_grad()\n loss = loss_fn(out,yb)\n _,pred = torch.max(out.data, 1)\n trn_acc += (pred == yb).sum().item()\n trn_loss += loss.item()\n loss.backward()\n opt.step()\n trn_loss /= mb.child.total\n trn_acc /= trn_n\n\n model.eval()\n with torch.no_grad():\n for xb,yb in progress_bar(valid_dl,parent=mb):\n xb,yb = xb.to(device), yb.to(device)\n out = model(xb)\n loss = loss_fn(out,yb)\n val_loss += loss.item()\n _,pred = torch.max(out.data, 1)\n val_acc += (pred == yb).sum().item()\n val_loss /= mb.child.total\n val_acc /= val_n\n\n mb.write([i,f'{trn_loss:.6f}',f'{val_loss:.6f}',f'{trn_acc:.6f}',f'{val_acc:.6f}'],table=True)", "_____no_output_____" ] ], [ [ "# Making the Resnet as a Feature Extractor and training model", "_____no_output_____" ] ], [ [ "model = FoodImageClassifer()\ncriterion = nn.CrossEntropyLoss()\noptimizer_ft = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)\n#model.freeze()\ndevice = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\nmodel = model.to(device)\nfit(10,model=model,train_dl=train_dl,valid_dl=valid_dl,loss_fn=criterion,opt=optimizer_ft)", 
"_____no_output_____" ] ], [ [ "# Freeze the layers", "_____no_output_____" ] ], [ [ "model = FoodImageClassifer()\ncriterion = nn.CrossEntropyLoss()\noptimizer_ft = optim.Adam(model.parameters(), lr=1e-4)\nmodel.freeze()\ndevice = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\nmodel = model.to(device)\nfit(10,model=model,train_dl=train_dl,valid_dl=valid_dl,loss_fn=criterion,opt=optimizer_ft)", "_____no_output_____" ], [ "torch.save(model.state_dict,'resnet.pth')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
hexsha: cb7dbf352b347a29e861d64bb94dae0e7e59b66b
size: 27,661
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: day_5.ipynb
max_stars_repo_name: mzignis/dw_matrix_road_signs
max_stars_repo_head_hexsha: 8b648686794a6076a8cfb52ab26fc21aea2c196f
max_stars_repo_licenses: [ "MIT" ]
max_stars_count: null
max_stars_repo_stars_event_min_datetime: null
max_stars_repo_stars_event_max_datetime: null
max_issues_repo_path: day_5.ipynb
max_issues_repo_name: mzignis/dw_matrix_road_signs
max_issues_repo_head_hexsha: 8b648686794a6076a8cfb52ab26fc21aea2c196f
max_issues_repo_licenses: [ "MIT" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: day_5.ipynb
max_forks_repo_name: mzignis/dw_matrix_road_signs
max_forks_repo_head_hexsha: 8b648686794a6076a8cfb52ab26fc21aea2c196f
max_forks_repo_licenses: [ "MIT" ]
max_forks_count: null
max_forks_repo_forks_event_min_datetime: null
max_forks_repo_forks_event_max_datetime: null
avg_line_length: 44.977236
max_line_length: 237
alphanum_fraction: 0.511695
[ [ [ "import datetime\nimport os\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\n\nfrom skimage import color, exposure\n\nfrom sklearn.metrics import accuracy_score\n\nimport tensorflow as tf\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Conv2D, MaxPool2D, Dense, Flatten, Dropout\nfrom tensorflow.keras.utils import to_categorical\n\nfrom hyperopt import hp, STATUS_OK, tpe, Trials, fmin\n\nsns.set()\n%load_ext tensorboard", "/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.\n import pandas.util.testing as tm\n" ], [ "HOME = '/content/drive/My Drive/Colab Notebooks/matrix/dw_matrix_road_signs'\n%cd $HOME", "/content/drive/My Drive/Colab Notebooks/matrix/dw_matrix_road_signs\n" ], [ "train_db = pd.read_pickle('data/train.p')\ntest_db = pd.read_pickle('data/test.p')\n\nX_train, y_train = train_db['features'], train_db['labels']\nX_test, y_test = test_db['features'], test_db['labels']", "_____no_output_____" ], [ "sign_names = pd.read_csv('data/dw_signnames.csv')\nsign_names.head()", "_____no_output_____" ], [ "y_train = to_categorical(y_train)\ny_test = to_categorical(y_test)", "_____no_output_____" ], [ "input_shape = X_train.shape[1:]\ncat_num = y_train.shape[1]", "_____no_output_____" ], [ "def get_cnn_v1(input_shape, cat_num, verbose=False):\n model = Sequential([Conv2D(filters=64, kernel_size=(3, 3), activation='relu', input_shape=input_shape),\n Flatten(),\n Dense(cat_num, activation='softmax')])\n if verbose:\n model.summary()\n\n return model\n\n\ncnn_v1 = get_cnn_v1(input_shape, cat_num, True)", "Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d (Conv2D) (None, 30, 30, 64) 1792 \n_________________________________________________________________\nflatten (Flatten) (None, 57600) 0 \n_________________________________________________________________\ndense (Dense) (None, 43) 2476843 \n=================================================================\nTotal params: 2,478,635\nTrainable params: 2,478,635\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "def train_model(model, X_train, y_train, params_fit=dict()):\n\n logdir = os.path.join('logs', datetime.datetime.now().strftime('%Y%m%d_%H%M%S'))\n tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)\n\n model.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy'])\n model.fit(X_train, \n y_train,\n batch_size=params_fit.get('batch_size', 128),\n epochs=params_fit.get('epochs', 5),\n verbose=params_fit.get('verbose', 1),\n validation_data=params_fit.get('validation_data', (X_train, y_train)),\n callbacks=[tensorboard_callback])\n\n return model\n\nmodel_trained = train_model(cnn_v1, X_train, y_train)", "Epoch 1/5\n272/272 [==============================] - 7s 25ms/step - loss: 21.1410 - accuracy: 0.7725 - val_loss: 0.2157 - val_accuracy: 0.9500\nEpoch 2/5\n272/272 [==============================] - 6s 23ms/step - loss: 0.1960 - accuracy: 0.9538 - val_loss: 0.1232 - val_accuracy: 0.9707\nEpoch 3/5\n272/272 [==============================] - 6s 23ms/step - loss: 0.1243 - accuracy: 0.9693 - val_loss: 0.0857 - val_accuracy: 0.9778\nEpoch 4/5\n272/272 
[==============================] - 6s 23ms/step - loss: 0.0941 - accuracy: 0.9764 - val_loss: 0.0711 - val_accuracy: 0.9825\nEpoch 5/5\n272/272 [==============================] - 6s 23ms/step - loss: 0.0943 - accuracy: 0.9792 - val_loss: 0.1108 - val_accuracy: 0.9747\n" ], [ "def predict(model, X_test, y_test, scoring=accuracy_score):\n y_test_norm = np.argmax(y_test, axis=1)\n y_pred_prob = model.predict(X_test)\n y_pred = np.argmax(y_pred_prob, axis=1)\n\n return scoring(y_test_norm, y_pred)", "_____no_output_____" ], [ "def get_cnn(input_shape, cat_num):\n model = Sequential([Conv2D(filters=32, kernel_size=(3, 3), activation='relu', input_shape=input_shape),\n Conv2D(filters=32, kernel_size=(3, 3), activation='relu', padding='same'),\n MaxPool2D(),\n Dropout(0.3),\n\n Conv2D(filters=64, kernel_size=(3, 3), activation='relu', padding='same'),\n Conv2D(filters=64, kernel_size=(3, 3), activation='relu'),\n MaxPool2D(),\n Dropout(0.3),\n\n Flatten(),\n\n Dense(1024, activation='relu'),\n Dropout(0.3),\n\n Dense(1024, activation='relu'),\n Dropout(0.3),\n\n Dense(cat_num, activation='softmax')])\n return model", "_____no_output_____" ], [ "def train_and_predict(model, X_train, y_train, X_test, y_test):\n model_trained = train_model(model, X_train, y_train)\n return predict(model_trained, X_test, y_test)\n\n# train_and_predict(get_cnn(input_shape, cat_num), X_train, y_train, X_test, y_test)", "Epoch 1/5\n272/272 [==============================] - 12s 45ms/step - loss: 2.3776 - accuracy: 0.4112 - val_loss: 0.4694 - val_accuracy: 0.8598\nEpoch 2/5\n272/272 [==============================] - 12s 43ms/step - loss: 0.5820 - accuracy: 0.8224 - val_loss: 0.1191 - val_accuracy: 0.9720\nEpoch 3/5\n272/272 [==============================] - 12s 43ms/step - loss: 0.3022 - accuracy: 0.9095 - val_loss: 0.0622 - val_accuracy: 0.9845\nEpoch 4/5\n272/272 [==============================] - 12s 43ms/step - loss: 0.1948 - accuracy: 0.9410 - val_loss: 0.0264 - val_accuracy: 0.9929\nEpoch 5/5\n272/272 [==============================] - 12s 43ms/step - loss: 0.1507 - accuracy: 0.9559 - val_loss: 0.0218 - val_accuracy: 0.9943\n" ], [ "def get_model(input_shape, cat_num, params):\n model = Sequential([Conv2D(filters=32, kernel_size=(3, 3), activation='relu', input_shape=input_shape),\n Conv2D(filters=32, kernel_size=(3, 3), activation='relu', padding='same'),\n MaxPool2D(),\n Dropout(params['dropout_cnn_0']),\n\n Conv2D(filters=64, kernel_size=(3, 3), activation='relu', padding='same'),\n Conv2D(filters=64, kernel_size=(3, 3), activation='relu'),\n MaxPool2D(),\n Dropout(params['dropout_cnn_1']),\n\n Conv2D(filters=128, kernel_size=(3, 3), activation='relu', padding='same'),\n Conv2D(filters=128, kernel_size=(3, 3), activation='relu'),\n MaxPool2D(),\n Dropout(params['dropout_cnn_2']),\n\n Flatten(),\n\n Dense(1024, activation='relu'),\n Dropout(params['dropout_dense_0']),\n\n Dense(1024, activation='relu'),\n Dropout(params['dropout_dense_1']),\n\n Dense(cat_num, activation='softmax')])\n return model", "_____no_output_____" ], [ "def func_obj(params):\n model = get_model(input_shape, cat_num, params)\n model.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy'])\n\n model.fit(X_train, \n y_train,\n batch_size=int(params.get('batch_size', 128)),\n epochs=params.get('epochs', 5),\n verbose=params.get('verbose', 0)\n )\n\n score = model.evaluate(X_test, y_test, verbose=0)\n accuracy = score[1]\n print(f'params={params}')\n print(f'accuracy={accuracy}')\n\n return {'loss': -accuracy, 
'status': STATUS_OK, 'model': model}", "_____no_output_____" ], [ "space = {\n 'batch_size': hp.quniform('batch_size', 100, 200, 10),\n 'dropout_cnn_0': hp.uniform('dropout_cnn_0', 0.3, 0.5),\n 'dropout_cnn_1': hp.uniform('dropout_cnn_1', 0.3, 0.5),\n 'dropout_cnn_2': hp.uniform('dropout_cnn_2', 0.3, 0.5),\n 'dropout_dense_0': hp.uniform('dropout_dense_0', 0.3, 0.7),\n 'dropout_dense_1': hp.uniform('dropout_dense_1', 0.3, 0.7),\n}", "_____no_output_____" ], [ "best = fmin(\n func_obj,\n space,\n tpe.suggest,\n 30, \n Trials()\n)", "params={'batch_size': 100.0, 'dropout_cnn_0': 0.4879239947277033, 'dropout_cnn_1': 0.41844465850576257, 'dropout_cnn_2': 0.30965372112329087, 'dropout_dense_0': 0.4021819483430512, 'dropout_dense_1': 0.5484462572265736}\naccuracy=0.9546485543251038\nparams={'batch_size': 110.0, 'dropout_cnn_0': 0.4454706127504861, 'dropout_cnn_1': 0.4624384690395012, 'dropout_cnn_2': 0.33913598936283884, 'dropout_dense_0': 0.5740434204430944, 'dropout_dense_1': 0.6296515937419154}\naccuracy=0.8337868452072144\nparams={'batch_size': 160.0, 'dropout_cnn_0': 0.36067887838440155, 'dropout_cnn_1': 0.46008918518034936, 'dropout_cnn_2': 0.43839249380954826, 'dropout_dense_0': 0.3246246348241844, 'dropout_dense_1': 0.3160098003597073}\naccuracy=0.9435374140739441\nparams={'batch_size': 120.0, 'dropout_cnn_0': 0.32139805632785357, 'dropout_cnn_1': 0.3682460950330647, 'dropout_cnn_2': 0.35017506728862746, 'dropout_dense_0': 0.636572271397232, 'dropout_dense_1': 0.571017519489049}\naccuracy=0.9678004384040833\nparams={'batch_size': 120.0, 'dropout_cnn_0': 0.4913879097246001, 'dropout_cnn_1': 0.47033985564382663, 'dropout_cnn_2': 0.31865265519409725, 'dropout_dense_0': 0.4649288055725062, 'dropout_dense_1': 0.5296904186684789}\naccuracy=0.9235827922821045\nparams={'batch_size': 150.0, 'dropout_cnn_0': 0.4164041967473282, 'dropout_cnn_1': 0.37056626479342125, 'dropout_cnn_2': 0.3436300186166929, 'dropout_dense_0': 0.5087510602359042, 'dropout_dense_1': 0.42453105625736365}\naccuracy=0.9616780281066895\nparams={'batch_size': 180.0, 'dropout_cnn_0': 0.4854568816341579, 'dropout_cnn_1': 0.48877389527091347, 'dropout_cnn_2': 0.32601913003927036, 'dropout_dense_0': 0.5560442462373014, 'dropout_dense_1': 0.3649109619366709}\naccuracy=0.8356009125709534\nparams={'batch_size': 150.0, 'dropout_cnn_0': 0.4700263840446974, 'dropout_cnn_1': 0.322993910912036, 'dropout_cnn_2': 0.3856630233645689, 'dropout_dense_0': 0.6065512537170488, 'dropout_dense_1': 0.5801993674122023}\naccuracy=0.8383219838142395\nparams={'batch_size': 190.0, 'dropout_cnn_0': 0.3297266714814264, 'dropout_cnn_1': 0.4093366714067281, 'dropout_cnn_2': 0.37022163453985824, 'dropout_dense_0': 0.4170767345499065, 'dropout_dense_1': 0.6575939918759518}\naccuracy=0.942630410194397\nparams={'batch_size': 190.0, 'dropout_cnn_0': 0.41931974165794494, 'dropout_cnn_1': 0.3658335614786773, 'dropout_cnn_2': 0.38194136852267363, 'dropout_dense_0': 0.35608031433606646, 'dropout_dense_1': 0.520155315556423}\naccuracy=0.9589568972587585\nparams={'batch_size': 200.0, 'dropout_cnn_0': 0.47952859607728693, 'dropout_cnn_1': 0.45881924368073185, 'dropout_cnn_2': 0.4852393119246963, 'dropout_dense_0': 0.629240760228121, 'dropout_dense_1': 0.6294628440800268}\naccuracy=0.6000000238418579\nparams={'batch_size': 140.0, 'dropout_cnn_0': 0.3150257556317754, 'dropout_cnn_1': 0.31747190509765943, 'dropout_cnn_2': 0.35274788955799385, 'dropout_dense_0': 0.4694085515989886, 'dropout_dense_1': 0.37821243166844476}\naccuracy=0.9501133561134338\nparams={'batch_size': 
190.0, 'dropout_cnn_0': 0.42318543432060074, 'dropout_cnn_1': 0.4679243745883398, 'dropout_cnn_2': 0.33994933689087475, 'dropout_dense_0': 0.5234438903237745, 'dropout_dense_1': 0.6297812010676287}\naccuracy=0.8433106541633606\nparams={'batch_size': 180.0, 'dropout_cnn_0': 0.4111970954079379, 'dropout_cnn_1': 0.3810825698020177, 'dropout_cnn_2': 0.34368611965100887, 'dropout_dense_0': 0.4377954919072118, 'dropout_dense_1': 0.408624015159865}\naccuracy=0.8283446431159973\nparams={'batch_size': 140.0, 'dropout_cnn_0': 0.3439677060908322, 'dropout_cnn_1': 0.35799455118265633, 'dropout_cnn_2': 0.33208044451947605, 'dropout_dense_0': 0.5981815864983848, 'dropout_dense_1': 0.6511843812879597}\naccuracy=0.9365079402923584\nparams={'batch_size': 150.0, 'dropout_cnn_0': 0.46748242209200613, 'dropout_cnn_1': 0.3676523437653032, 'dropout_cnn_2': 0.4312792131485984, 'dropout_dense_0': 0.32625690833721793, 'dropout_dense_1': 0.39700280191442144}\naccuracy=0.9539682269096375\nparams={'batch_size': 180.0, 'dropout_cnn_0': 0.42118581162583, 'dropout_cnn_1': 0.4195010230907402, 'dropout_cnn_2': 0.4484039188546435, 'dropout_dense_0': 0.6910918343546651, 'dropout_dense_1': 0.4624531349632961}\naccuracy=0.8224489688873291\nparams={'batch_size': 160.0, 'dropout_cnn_0': 0.39549131897703976, 'dropout_cnn_1': 0.3057757730736938, 'dropout_cnn_2': 0.3440014812703657, 'dropout_dense_0': 0.38230910863624784, 'dropout_dense_1': 0.5782683789010685}\naccuracy=0.9417233467102051\nparams={'batch_size': 160.0, 'dropout_cnn_0': 0.40973697044396806, 'dropout_cnn_1': 0.44794678886427797, 'dropout_cnn_2': 0.43617063275156226, 'dropout_dense_0': 0.4707042293286713, 'dropout_dense_1': 0.5573245013217827}\naccuracy=0.9058957099914551\nparams={'batch_size': 150.0, 'dropout_cnn_0': 0.484035628014369, 'dropout_cnn_1': 0.3354419819793466, 'dropout_cnn_2': 0.3679687322020443, 'dropout_dense_0': 0.527323731619307, 'dropout_dense_1': 0.5803313861614661}\naccuracy=0.934920608997345\nparams={'batch_size': 130.0, 'dropout_cnn_0': 0.3788726918102912, 'dropout_cnn_1': 0.39487133539431546, 'dropout_cnn_2': 0.4093930677647206, 'dropout_dense_0': 0.6899431290444987, 'dropout_dense_1': 0.4715426582507825}\naccuracy=0.91700679063797\nparams={'batch_size': 120.0, 'dropout_cnn_0': 0.3751135026304699, 'dropout_cnn_1': 0.3450941747375239, 'dropout_cnn_2': 0.4027596876155159, 'dropout_dense_0': 0.653364076129978, 'dropout_dense_1': 0.456372854022631}\naccuracy=0.9428571462631226\nparams={'batch_size': 100.0, 'dropout_cnn_0': 0.4430692000951883, 'dropout_cnn_1': 0.3920331534487922, 'dropout_cnn_2': 0.30809641707342283, 'dropout_dense_0': 0.6457738013910003, 'dropout_dense_1': 0.6931556825310153}\naccuracy=0.9020408391952515\nparams={'batch_size': 130.0, 'dropout_cnn_0': 0.3098449776598965, 'dropout_cnn_1': 0.4330141894396331, 'dropout_cnn_2': 0.36462177532083384, 'dropout_dense_0': 0.5080213947756672, 'dropout_dense_1': 0.436456594672405}\naccuracy=0.961904764175415\nparams={'batch_size': 120.0, 'dropout_cnn_0': 0.3100682995875006, 'dropout_cnn_1': 0.4403059761171209, 'dropout_cnn_2': 0.3647118024249737, 'dropout_dense_0': 0.5499327379699052, 'dropout_dense_1': 0.33047800715612013}\naccuracy=0.9215419292449951\nparams={'batch_size': 110.0, 'dropout_cnn_0': 0.33554440076591774, 'dropout_cnn_1': 0.43474055482913365, 'dropout_cnn_2': 0.41893302140483907, 'dropout_dense_0': 0.6605185879511281, 'dropout_dense_1': 0.4832971499239135}\naccuracy=0.9360544085502625\nparams={'batch_size': 130.0, 'dropout_cnn_0': 0.30379539369144754, 'dropout_cnn_1': 
0.4064463518523729, 'dropout_cnn_2': 0.4656824710935996, 'dropout_dense_0': 0.5881258356630151, 'dropout_dense_1': 0.42907421627055664}\naccuracy=0.9514739513397217\nparams={'batch_size': 110.0, 'dropout_cnn_0': 0.3224440342224547, 'dropout_cnn_1': 0.48668517280337237, 'dropout_cnn_2': 0.3000177265182328, 'dropout_dense_0': 0.4908740626213097, 'dropout_dense_1': 0.5070687093481369}\naccuracy=0.9489796161651611\nparams={'batch_size': 130.0, 'dropout_cnn_0': 0.3486243130709472, 'dropout_cnn_1': 0.42415652169302975, 'dropout_cnn_2': 0.38653332278305086, 'dropout_dense_0': 0.42873294452965865, 'dropout_dense_1': 0.3484720970071098}\naccuracy=0.9428571462631226\nparams={'batch_size': 100.0, 'dropout_cnn_0': 0.3033407328546241, 'dropout_cnn_1': 0.3872273792780576, 'dropout_cnn_2': 0.31589339084649687, 'dropout_dense_0': 0.6212959824180401, 'dropout_dense_1': 0.5453367051678037}\naccuracy=0.9696145057678223\n100%|██████████| 30/30 [22:54<00:00, 45.82s/it, best loss: -0.9696145057678223]\n" ], [ " ", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
hexsha: cb7dc17d12e262d52f55df313be22b7ce178aa8f
size: 889,700
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: Feedforward Learning Rate Search/Adam.ipynb
max_stars_repo_name: Kulbear/AdaBound-Reproduction
max_stars_repo_head_hexsha: eb436e4fa6c7087042e435c2b40b94bde0dcbf2d
max_stars_repo_licenses: [ "MIT" ]
max_stars_count: 1
max_stars_repo_stars_event_min_datetime: 2019-02-27T06:14:45.000Z
max_stars_repo_stars_event_max_datetime: 2019-02-27T06:14:45.000Z
max_issues_repo_path: Feedforward Learning Rate Search/Adam.ipynb
max_issues_repo_name: Kulbear/AdaBound-Reproduction
max_issues_repo_head_hexsha: eb436e4fa6c7087042e435c2b40b94bde0dcbf2d
max_issues_repo_licenses: [ "MIT" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: Feedforward Learning Rate Search/Adam.ipynb
max_forks_repo_name: Kulbear/AdaBound-Reproduction
max_forks_repo_head_hexsha: eb436e4fa6c7087042e435c2b40b94bde0dcbf2d
max_forks_repo_licenses: [ "MIT" ]
max_forks_count: null
max_forks_repo_forks_event_min_datetime: null
max_forks_repo_forks_event_max_datetime: null
avg_line_length: 1,032.134571
max_line_length: 116,648
alphanum_fraction: 0.952295
[ [ [ "import torch\ntorch.backends.cudnn.deterministic = True\ntorch.backends.cudnn.benchmark = False", "_____no_output_____" ], [ "import numpy as np\n\nimport pickle\nfrom collections import namedtuple\nfrom tqdm import tqdm\n\nimport torch\ntorch.backends.cudnn.deterministic = True\ntorch.backends.cudnn.benchmark = False\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nimport torchvision\nimport torchvision.transforms as transforms\n\nfrom adabound import AdaBound\n\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "transform = transforms.Compose(\n [transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])\n\ntrainset = torchvision.datasets.MNIST(root='./data_mnist', train=True,\n download=True, transform=transform)\ntrainloader = torch.utils.data.DataLoader(trainset, batch_size=200,\n shuffle=True, num_workers=4)\n\ntestset = torchvision.datasets.MNIST(root='./data_mnist', train=False,\n download=True, transform=transform)\ntestloader = torch.utils.data.DataLoader(testset, batch_size=200,\n shuffle=False, num_workers=4)", "_____no_output_____" ], [ "device = 'cuda:0'\n\noptim_configs = {\n '1e-4': {\n 'optimizer': optim.Adam, \n 'kwargs': {\n 'lr': 1e-4,\n 'weight_decay': 0,\n 'betas': (0.9, 0.999),\n 'eps': 1e-08,\n 'amsgrad': False\n }\n },\n '5e-3': {\n 'optimizer': optim.Adam, \n 'kwargs': {\n 'lr': 5e-3,\n 'weight_decay': 0,\n 'betas': (0.9, 0.999),\n 'eps': 1e-08,\n 'amsgrad': False\n }\n },\n '1e-2': {\n 'optimizer': optim.Adam, \n 'kwargs': {\n 'lr': 1e-2,\n 'weight_decay': 0,\n 'betas': (0.9, 0.999),\n 'eps': 1e-08,\n 'amsgrad': False\n }\n },\n '1e-3': {\n 'optimizer': optim.Adam, \n 'kwargs': {\n 'lr': 1e-3,\n 'weight_decay': 0,\n 'betas': (0.9, 0.999),\n 'eps': 1e-08,\n 'amsgrad': False\n }\n },\n '5e-4': {\n 'optimizer': optim.Adam, \n 'kwargs': {\n 'lr': 5e-4,\n 'weight_decay': 0,\n 'betas': (0.9, 0.999),\n 'eps': 1e-08,\n 'amsgrad': False\n }\n },\n}", "_____no_output_____" ], [ "class MLP(nn.Module):\n def __init__(self, hidden_size=256):\n super(MLP, self).__init__()\n self.fc1 = nn.Linear(28 * 28, hidden_size)\n self.fc2 = nn.Linear(hidden_size, 10)\n\n def forward(self, x):\n x = x.view(-1, 28 * 28)\n x = F.relu(self.fc1(x))\n x = self.fc2(x)\n return x\n \ncriterion = nn.CrossEntropyLoss()", "_____no_output_____" ], [ "hidden_sizes = [256, 512, 1024, 2048]\n\nfor h_size in hidden_sizes:\n Stat = namedtuple('Stat', ['losses', 'accs'])\n train_results = {}\n test_results = {} \n for optim_name, optim_config in optim_configs.items():\n torch.manual_seed(0)\n np.random.seed(0)\n train_results[optim_name] = Stat(losses=[], accs=[])\n test_results[optim_name] = Stat(losses=[], accs=[])\n net = MLP(hidden_size=h_size).to(device)\n optimizer = optim_config['optimizer'](net.parameters(), **optim_config['kwargs'])\n print(optimizer)\n\n for epoch in tqdm(range(100)): # loop over the dataset multiple times\n train_stat = {\n 'loss': .0,\n 'correct': 0,\n 'total': 0\n }\n\n test_stat = {\n 'loss': .0,\n 'correct': 0,\n 'total': 0\n }\n for i, data in enumerate(trainloader, 0):\n # get the inputs\n inputs, labels = data\n inputs = inputs.to(device)\n labels = labels.to(device)\n optimizer.zero_grad()\n outputs = net(inputs)\n loss = criterion(outputs, labels)\n loss.backward()\n optimizer.step()\n _, predicted = torch.max(outputs, 1)\n c = (predicted == labels).sum()\n\n # calculate\n train_stat['loss'] += loss.item()\n train_stat['correct'] += c.item()\n train_stat['total'] += labels.size()[0]\n 
train_results[optim_name].losses.append(train_stat['loss'] / (i + 1))\n train_results[optim_name].accs.append(train_stat['correct'] / train_stat['total'])\n\n\n with torch.no_grad():\n for i, data in enumerate(testloader, 0):\n inputs, labels = data\n inputs = inputs.to(device)\n labels = labels.to(device)\n outputs = net(inputs)\n loss = criterion(outputs, labels)\n _, predicted = torch.max(outputs, 1)\n c = (predicted == labels).sum()\n\n test_stat['loss'] += loss.item()\n test_stat['correct'] += c.item()\n test_stat['total'] += labels.size()[0]\n test_results[optim_name].losses.append(test_stat['loss'] / (i + 1))\n test_results[optim_name].accs.append(test_stat['correct'] / test_stat['total'])\n \n # Save stat!\n stat = {\n 'train': train_results,\n 'test': test_results\n }\n with open(f'adam_stat_mlp_{h_size}.pkl', 'wb') as f:\n pickle.dump(stat, f)\n \n # Plot loss \n f, (ax1, ax2) = plt.subplots(1, 2, figsize=(13, 5))\n for optim_name in optim_configs:\n if 'Bound' in optim_name:\n ax1.plot(train_results[optim_name].losses, '--', label=optim_name)\n else:\n ax1.plot(train_results[optim_name].losses, label=optim_name)\n ax1.set_ylabel('Training Loss')\n ax1.set_xlabel('# of Epcoh')\n ax1.legend()\n\n for optim_name in optim_configs:\n if 'Bound' in optim_name:\n ax2.plot(test_results[optim_name].losses, '--', label=optim_name)\n else:\n ax2.plot(test_results[optim_name].losses, label=optim_name)\n ax2.set_ylabel('Test Loss')\n ax2.set_xlabel('# of Epcoh')\n ax2.legend()\n\n plt.suptitle(f'Training Loss and Test Loss for MLP({h_size}) on MNIST', y=1.01)\n plt.tight_layout()\n plt.show()\n \n # Plot accuracy \n f, (ax1, ax2) = plt.subplots(1, 2, figsize=(13, 5))\n for optim_name in optim_configs:\n if 'Bound' in optim_name:\n ax1.plot(train_results[optim_name].accs, '--', label=optim_name)\n else:\n ax1.plot(train_results[optim_name].accs, label=optim_name)\n ax1.set_ylabel('Training Accuracy %')\n ax1.set_xlabel('# of Epcoh')\n ax1.legend()\n\n for optim_name in optim_configs:\n if 'Bound' in optim_name:\n ax2.plot(test_results[optim_name].accs, '--', label=optim_name)\n else:\n ax2.plot(test_results[optim_name].accs, label=optim_name)\n ax2.set_ylabel('Test Accuracy %')\n ax2.set_xlabel('# of Epcoh')\n ax2.legend()\n\n plt.suptitle(f'Training Accuracy and Test Accuracy for MLP({h_size}) on MNIST', y=1.01)\n plt.tight_layout()\n plt.show()", " 0%| | 0/100 [00:00<?, ?it/s]" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code" ] ]
hexsha: cb7dc30f282cbb1c813608eb66feb52f2edaff0d
size: 75,312
ext: ipynb
lang: Jupyter Notebook
max_stars_repo_path: assignment_cnn_preenchido.ipynb
max_stars_repo_name: WittmannF/udemy-deep-learning-cnns
max_stars_repo_head_hexsha: 4aefdd611824284b31b40ac3b496d3f8ee86b691
max_stars_repo_licenses: [ "MIT" ]
max_stars_count: null
max_stars_repo_stars_event_min_datetime: null
max_stars_repo_stars_event_max_datetime: null
max_issues_repo_path: assignment_cnn_preenchido.ipynb
max_issues_repo_name: WittmannF/udemy-deep-learning-cnns
max_issues_repo_head_hexsha: 4aefdd611824284b31b40ac3b496d3f8ee86b691
max_issues_repo_licenses: [ "MIT" ]
max_issues_count: null
max_issues_repo_issues_event_min_datetime: null
max_issues_repo_issues_event_max_datetime: null
max_forks_repo_path: assignment_cnn_preenchido.ipynb
max_forks_repo_name: WittmannF/udemy-deep-learning-cnns
max_forks_repo_head_hexsha: 4aefdd611824284b31b40ac3b496d3f8ee86b691
max_forks_repo_licenses: [ "MIT" ]
max_forks_count: null
max_forks_repo_forks_event_min_datetime: null
max_forks_repo_forks_event_max_datetime: null
avg_line_length: 137.681901
max_line_length: 17,278
alphanum_fraction: 0.840504
[ [ [ "<a href=\"https://colab.research.google.com/github/WittmannF/udemy-deep-learning-cnns/blob/main/assignment_cnn_preenchido.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "## Assignment: Fashion MNIST\nNow it is your turn! You are going to use the same methods presented in the previous video in order to classify clothes from a black and white dataset of images (image by Zalando, MIT License):\n![](https://tensorflow.org/images/fashion-mnist-sprite.png)\n\nThe class labels are:\n```\n0. T-shirt/top\n1. Trouser\n2. Pullover\n3. Dress\n4. Coat\n5. Sandal\n6. Shirt\n7. Sneaker\n8. Bag\n9. Ankle boot\n```\n\n### 1. Preparing the input data\nLet's first import the dataset. It is available on [tensorflow.keras.datasets](https://keras.io/datasets/):", "_____no_output_____" ] ], [ [ "import tensorflow\nfashion_mnist = tensorflow.keras.datasets.fashion_mnist\n\n(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()", "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz\n32768/29515 [=================================] - 0s 0us/step\n40960/29515 [=========================================] - 0s 0us/step\nDownloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz\n26427392/26421880 [==============================] - 0s 0us/step\n26435584/26421880 [==============================] - 0s 0us/step\nDownloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz\n16384/5148 [===============================================================================================] - 0s 0us/step\nDownloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz\n4423680/4422102 [==============================] - 0s 0us/step\n4431872/4422102 [==============================] - 0s 0us/step\n" ], [ "print(\"Shape of the training set: {}\".format(X_train.shape))\nprint(\"Shape of the test set: {}\".format(X_test.shape))", "Shape of the training set: (60000, 28, 28)\nShape of the test set: (10000, 28, 28)\n" ], [ "# TODO: Normalize the training and testing set using standardization\ndef normalize(x,m,s):\n return (x-m)/s\n\ntrain_mean = X_train.mean()\ntrain_std = X_train.std()\n\nX_train = normalize(X_train, train_mean, train_std)\nX_test = normalize(X_test, train_mean, train_std)\n", "_____no_output_____" ], [ "print(f'Training Mean after standardization {X_train.mean():.3f}')\nprint(f'Training Std after standardization {X_train.std():.3f}')\nprint(f'Test Mean after standardization {X_test.mean():.3f}')\nprint(f'Test Std after standardization {X_test.std():.3f}')", "Training Mean after standardization -0.000\nTraining Std after standardization 1.000\nTest Mean after standardization 0.002\nTest Std after standardization 0.998\n" ] ], [ [ "### 2. 
Training with fully connected layers", "_____no_output_____" ] ], [ [ "from tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Flatten, Dense\n\nmodel = Sequential([\n Flatten(),\n Dense(512, activation='relu'),\n Dense(10, activation='softmax')\n])\n\nmodel.compile(optimizer='adam',\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\n\nmodel.fit(X_train, y_train, epochs=2, validation_data=(X_test, y_test))", "Epoch 1/2\n1875/1875 [==============================] - 6s 2ms/step - loss: 0.4515 - accuracy: 0.8380 - val_loss: 0.4077 - val_accuracy: 0.8497\nEpoch 2/2\n1875/1875 [==============================] - 4s 2ms/step - loss: 0.3473 - accuracy: 0.8720 - val_loss: 0.3933 - val_accuracy: 0.8556\n" ] ], [ [ "### 3. Extending to CNNs\nNow your goal is to develop an architecture that can reach a test accuracy higher than 0.85.", "_____no_output_____" ] ], [ [ "X_train.shape", "_____no_output_____" ], [ "# TODO: Reshape the dataset in order to add the channel dimension\nX_train = X_train.reshape(-1, 28, 28, 1)\nX_test = X_test.reshape(-1, 28, 28, 1)", "_____no_output_____" ], [ "from tensorflow.keras.layers import Conv2D, MaxPooling2D\n\nmodel = Sequential([\n Conv2D(6, kernel_size=(3,3), activation='relu', input_shape=(28,28,1)),\n MaxPooling2D(),\n Conv2D(16, kernel_size=(3,3), activation='relu'),\n MaxPooling2D(),\n Flatten(),\n Dense(512, activation='relu'),\n Dense(10, activation='softmax')\n])\n\nmodel.summary()", "Model: \"sequential_1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d (Conv2D) (None, 26, 26, 6) 60 \n_________________________________________________________________\nmax_pooling2d (MaxPooling2D) (None, 13, 13, 6) 0 \n_________________________________________________________________\nconv2d_1 (Conv2D) (None, 11, 11, 16) 880 \n_________________________________________________________________\nmax_pooling2d_1 (MaxPooling2 (None, 5, 5, 16) 0 \n_________________________________________________________________\nflatten_1 (Flatten) (None, 400) 0 \n_________________________________________________________________\ndense_2 (Dense) (None, 512) 205312 \n_________________________________________________________________\ndense_3 (Dense) (None, 10) 5130 \n=================================================================\nTotal params: 211,382\nTrainable params: 211,382\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "model.compile(optimizer='adam',\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])\n", "_____no_output_____" ], [ "hist=model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test))", "Epoch 1/10\n1875/1875 [==============================] - 21s 3ms/step - loss: 0.4505 - accuracy: 0.8368 - val_loss: 0.3674 - val_accuracy: 0.8636\nEpoch 2/10\n1875/1875 [==============================] - 5s 3ms/step - loss: 0.3198 - accuracy: 0.8812 - val_loss: 0.3373 - val_accuracy: 0.8787\nEpoch 3/10\n1875/1875 [==============================] - 5s 3ms/step - loss: 0.2808 - accuracy: 0.8971 - val_loss: 0.3061 - val_accuracy: 0.8911\nEpoch 4/10\n1875/1875 [==============================] - 5s 3ms/step - loss: 0.2502 - accuracy: 0.9064 - val_loss: 0.3113 - val_accuracy: 0.8861\nEpoch 5/10\n1875/1875 [==============================] - 5s 3ms/step - loss: 0.2270 - accuracy: 0.9147 - val_loss: 0.2970 - val_accuracy: 0.8898\nEpoch 6/10\n1875/1875 
[==============================] - 5s 3ms/step - loss: 0.2039 - accuracy: 0.9232 - val_loss: 0.2721 - val_accuracy: 0.9022\nEpoch 7/10\n1875/1875 [==============================] - 5s 3ms/step - loss: 0.1843 - accuracy: 0.9305 - val_loss: 0.2746 - val_accuracy: 0.9061\nEpoch 8/10\n1875/1875 [==============================] - 5s 3ms/step - loss: 0.1652 - accuracy: 0.9372 - val_loss: 0.2927 - val_accuracy: 0.9047\nEpoch 9/10\n1875/1875 [==============================] - 5s 3ms/step - loss: 0.1484 - accuracy: 0.9436 - val_loss: 0.2862 - val_accuracy: 0.9040\nEpoch 10/10\n1875/1875 [==============================] - 5s 3ms/step - loss: 0.1318 - accuracy: 0.9504 - val_loss: 0.3239 - val_accuracy: 0.9003\n" ], [ "import pandas as pd\npd.DataFrame(hist.history).plot()", "_____no_output_____" ] ], [ [ "### 4. Visualizing Predictions", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\n\nlabel_names = {0:\"T-shirt/top\",\n 1:\"Trouser\",\n 2:\"Pullover\",\n 3:\"Dress\",\n 4:\"Coat\",\n 5:\"Sandal\",\n 6:\"Shirt\",\n 7:\"Sneaker\",\n 8:\"Bag\",\n 9:\"Ankle boot\"}\n\n# Index to be visualized\nfor idx in range(5):\n plt.imshow(X_test[idx].reshape(28,28), cmap='gray')\n out = model.predict(X_test[idx].reshape(1,28,28,1))\n plt.title(\"True: {}, Pred: {}\".format(label_names[y_test[idx]], label_names[np.argmax(out)]))\n plt.show()", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
cb7dc3c69efe8faa3a301aad4d196418c9fb835d
8,709
ipynb
Jupyter Notebook
Classificationa ,recomention and the word is positive or negative of text data using NLP.ipynb
Vivek-Upadhya/Vivek-Upadhya.github.io
b02d9c7ae91a652e8ac5d5ed42ef196a8a710704
[ "BSD-3-Clause", "MIT" ]
null
null
null
Classificationa ,recomention and the word is positive or negative of text data using NLP.ipynb
Vivek-Upadhya/Vivek-Upadhya.github.io
b02d9c7ae91a652e8ac5d5ed42ef196a8a710704
[ "BSD-3-Clause", "MIT" ]
null
null
null
Classificationa ,recomention and the word is positive or negative of text data using NLP.ipynb
Vivek-Upadhya/Vivek-Upadhya.github.io
b02d9c7ae91a652e8ac5d5ed42ef196a8a710704
[ "BSD-3-Clause", "MIT" ]
null
null
null
26.231928
432
0.546102
[ [ [ "# Load library\nimport nltk\nimport os\nfrom nltk import tokenize \nfrom nltk.tokenize import sent_tokenize,word_tokenize", "_____no_output_____" ], [ "os.getcwd()", "_____no_output_____" ], [ "# Read the Data\n\nraw=open(\"C:\\\\Users\\\\vivek\\\\Desktop\\\\NLP Python Practice\\\\Labeled Dateset.txt\").read()", "_____no_output_____" ] ], [ [ "# Tokenize and make the Data into the Lower Case", "_____no_output_____" ] ], [ [ "# Change the Data in lower\n\nraw=raw.lower()", "_____no_output_____" ], [ "# tokenize the data\n\ndocs=sent_tokenize(raw)", "_____no_output_____" ], [ "docs", "_____no_output_____" ], [ "# Split the Data into the label and review\n\ndocs=docs[0].split(\"\\n\")", "_____no_output_____" ], [ "docs", "_____no_output_____" ] ], [ [ "# Pre-processing punctuation", "_____no_output_____" ] ], [ [ "from string import punctuation as punc", "_____no_output_____" ], [ "for d in docs:\n for ch in d:\n if ch in punc:\n d.replace(ch,\"\")", "_____no_output_____" ] ], [ [ "# removing Stop word and stemming", "_____no_output_____" ] ], [ [ "from sklearn.feature_extraction.stop_words import ENGLISH_STOP_WORDS\nfrom nltk.stem import PorterStemmer", "C:\\Users\\vivek\\anaconda3\\lib\\site-packages\\sklearn\\utils\\deprecation.py:143: FutureWarning: The sklearn.feature_extraction.stop_words module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.feature_extraction.text. Anything that cannot be imported from sklearn.feature_extraction.text is now part of the private API.\n warnings.warn(message, FutureWarning)\n" ], [ "ps=PorterStemmer()\nfor d in docs:\n for token in word_tokenize(d):\n if token in ENGLISH_STOP_WORDS:\n d.replace(token,\"\")\n d.replace(token,ps.stem(token))", "_____no_output_____" ] ], [ [ "# Ask from the user for test Data", "_____no_output_____" ] ], [ [ "for i in range(len(docs)):\n print(\"D\"+str(i)+\":\"+docs[i])\ntest=input(\"Enter your text:\")\ndocs.append(test+\":\")\n\n\n## Seperating the document into the label,striping off the unwanted white space\nx,y=[],[]\nfor d in docs:\n x.append(d[:d.index(\":\")].strip())\n y.append(d[d.index(\":\")+1:].strip())\n \n# vectorizer using Tfidf\n\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nvectorizer=TfidfVectorizer()\nvec=vectorizer.fit_transform(x)\n\n# trainning KNN Classifier\n\nfrom sklearn.neighbors import KNeighborsClassifier\nknn=KNeighborsClassifier(1)\nknn.fit(vec[:6],y[:6])\nprint(\"Label: \",knn.predict(vec[6]))\n\n# Sntiment Analysis\n\nfrom nltk.corpus import wordnet\ntest_tokens=test.split(\" \")\ngood=wordnet.synsets(\"good\")\nbad=wordnet.synsets(\"evil\")\nscore_pos=0\nscore_neg=0\n\n\nfor token in test_tokens:\n t=wordnet.synsets(token)\n if len(t)>0:\n sim_good=wordnet.wup_similarity(good[0],t[0])\n sim_bad=wordnet.wup_similarity(bad[0],t[0])\n if(sim_good is not None):\n score_pos =score_pos + sim_good\n if(sim_bad is not None):\n score_neg =score_neg + sim_bad\n \n \nif((score_pos - score_neg)>0.1):\n print(\"Subjective Statement, Positive openion of strength: %.2f\" %score_pos)\n \nelif((score_neg - score_pos)>0.1):\n print(\"Subjective Statement, Negative openion of strength: %.2f\" %score_neg)\nelse:\n print(\"Objective Statement, No openion Showed\")\n \n \n\n# Nearest Document\n\nfrom sklearn.neighbors import NearestNeighbors\nnb=NearestNeighbors(n_neighbors=2)\nnb.fit(vec[:6])\nclosest_docs=nb.kneighbors(vec[6])\nprint(\"Recomended document with IDs 
\",closest_docs[1])\nprint(\"hiving distance \",closest_docs[0])", "D0:this recipe is very special for cooking snacks : cooking\nD1:i like to cook but it is usually takes longer : cokking \nD2:my priorities is cooking include pastas and soup: cooking\nD3:one need to stay fit while playing profesional sport : sports\nD4:it is very important for sportsman to take care of their diet : sports\nD5:professional sports demand a lot of hardwork: sports\nEnter your text:i like to make specual food\nLabel: ['cokking']\nObjective Statement, No openion Showed\nRecomended document with IDs [[1 3]]\nhiving distance [[1.26626032 1.36748507]]\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
cb7dd0490e2079bdc9038b6f506e0c7f53ab0bd3
7,007
ipynb
Jupyter Notebook
courses/fast-and-lean-data-science/colab_intro.ipynb
KayvanShah1/training-data-analyst
3f778a57b8e6d2446af40ca6063b2fd9c1b4bc88
[ "Apache-2.0" ]
6,140
2016-05-23T16:09:35.000Z
2022-03-30T19:00:46.000Z
courses/fast-and-lean-data-science/colab_intro.ipynb
KayvanShah1/training-data-analyst
3f778a57b8e6d2446af40ca6063b2fd9c1b4bc88
[ "Apache-2.0" ]
1,384
2016-07-08T22:26:41.000Z
2022-03-24T16:39:43.000Z
courses/fast-and-lean-data-science/colab_intro.ipynb
KayvanShah1/training-data-analyst
3f778a57b8e6d2446af40ca6063b2fd9c1b4bc88
[ "Apache-2.0" ]
5,110
2016-05-27T13:45:18.000Z
2022-03-31T18:40:42.000Z
30.072961
395
0.517625
[ [ [ "<img alt=\"Colaboratory logo\" height=\"45px\" src=\"https://colab.research.google.com/img/colab_favicon.ico\" align=\"left\" hspace=\"10px\" vspace=\"0px\">\n\n<h1>Welcome to Colaboratory!</h1>\n\nColaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud.\n\nWith Colaboratory you can write and execute code, save and share your analyses, and access powerful computing resources, all for free from your browser.", "_____no_output_____" ], [ "## Running code\n\nCode cells can be executed in sequence by pressing Shift-ENTER. Try it now.", "_____no_output_____" ] ], [ [ "import math\nimport tensorflow as tf\nfrom matplotlib import pyplot as plt\nprint(\"Tensorflow version \" + tf.__version__)", "_____no_output_____" ], [ "a=1\nb=2", "_____no_output_____" ], [ "a+b", "_____no_output_____" ] ], [ [ "## Hidden cells\nSome cells contain code that is necessary but not interesting for the exercise at hand. These cells will typically be collapsed to let you focus at more interesting pieces of code. If you want to see their contents, double-click the cell. Wether you peek inside or not, **you must run the hidden cells for the code inside to be interpreted**. Try it now, the cell is marked **RUN ME**.", "_____no_output_____" ] ], [ [ "#@title \"Hidden cell with boring code [RUN ME]\"\n\ndef display_sinusoid():\n X = range(180)\n Y = [math.sin(x/10.0) for x in X]\n plt.plot(X, Y)", "_____no_output_____" ], [ "display_sinusoid()", "_____no_output_____" ] ], [ [ "Did it work ? If not, run the collapsed cell marked **RUN ME** and try again!\n", "_____no_output_____" ], [ "## Accelerators\n\nColaboratory offers free GPU and TPU (Tensor Processing Unit) accelerators.\n\nYou can choose your accelerator in *Runtime > Change runtime type*\n\nThe cell below is the standard boilerplate code that enables distributed training on GPUs or TPUs in Keras.\n", "_____no_output_____" ] ], [ [ "# Detect hardware\n \ntry: # detect TPUs\n tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect() # TPU detection\n strategy = tf.distribute.TPUStrategy(tpu)\nexcept ValueError: # detect GPUs\n strategy = tf.distribute.MirroredStrategy() # for GPU or multi-GPU machines (works on CPU too)\n #strategy = tf.distribute.get_strategy() # default strategy that works on CPU and single GPU\n #strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() # for clusters of multi-GPU machines\n\n# How many accelerators do we have ?\nprint(\"Number of accelerators: \", strategy.num_replicas_in_sync)\n\n# To use the selected distribution strategy:\n# with strategy.scope:\n# # --- define your (Keras) model here ---\n#\n# For distributed computing, the batch size and learning rate need to be adjusted:\n# global_batch_size = BATCH_SIZE * strategy.num_replicas_in_sync # num replcas is 8 on a single TPU or N when runing on N GPUs.\n# learning_rate = LEARNING_RATE * strategy.num_replicas_in_sync", "_____no_output_____" ] ], [ [ "## License", "_____no_output_____" ], [ "\n\n---\n\n\nauthor: Martin Gorner<br>\ntwitter: @martin_gorner\n\n\n---\n\n\nCopyright 2020 Google LLC\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either 
express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\n\n\n---\n\n\nThis is not an official Google product but sample code provided for an educational purpose\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
cb7dd3aa7d82f4ce06a37bb99f7db4d8767c674e
7,373
ipynb
Jupyter Notebook
Basico/Python_Intro_Exercises-Solution.ipynb
jorgemauricio/python
4f45cc64c8b8ff9edbe8b7e8741b2b523f031662
[ "MIT" ]
null
null
null
Basico/Python_Intro_Exercises-Solution.ipynb
jorgemauricio/python
4f45cc64c8b8ff9edbe8b7e8741b2b523f031662
[ "MIT" ]
null
null
null
Basico/Python_Intro_Exercises-Solution.ipynb
jorgemauricio/python
4f45cc64c8b8ff9edbe8b7e8741b2b523f031662
[ "MIT" ]
null
null
null
18.386534
117
0.460735
[ [ [ "# Python Intro Exercises - Solution\n\nThis exercise allow you to practices the basics of python", "_____no_output_____" ], [ "## Exercises\n\nSolve each question", "_____no_output_____" ], [ "##### 7 to the 4th power?", "_____no_output_____" ] ], [ [ "7 ** 4", "_____no_output_____" ] ], [ [ "##### Use the split() method to convert the next phrase to a list\n\n s = \"Hi there Sam!\"", "_____no_output_____" ] ], [ [ "s = 'Hi there Sam!'", "_____no_output_____" ], [ "s.split()", "_____no_output_____" ] ], [ [ " planet = \"Earth\"\n diameter = 12742\n\n##### Use the .format() method to print the next text\n\n The diameter of Earth is 12742 kilometers.", "_____no_output_____" ] ], [ [ "planet = \"Earth\"\ndiameter = 12742", "_____no_output_____" ], [ "print(\"The diameter of {} is {} kilometers.\".format(planet,diameter))", "The diameter of Earth is 12742 kilometers.\n" ] ], [ [ "##### From the next nested list print the word \"hello\"", "_____no_output_____" ] ], [ [ "lst = [1,2,[3,4],[5,[100,200,['hello']],23,11],1,7]", "_____no_output_____" ], [ "lst[3][1][2][0]", "_____no_output_____" ] ], [ [ "### Bonus\n##### From the next dictionary print the word \"hello\"", "_____no_output_____" ] ], [ [ "d = {'k1':[1,2,3,{'tricky':['oh','man','inception',{'target':[1,2,3,'hello']}]}]}", "_____no_output_____" ], [ "d['k1'][3]['tricky'][3]['target'][3]", "_____no_output_____" ] ], [ [ "##### Which is the difference between list and tuple", "_____no_output_____" ] ], [ [ "# Tuple is immutable", "_____no_output_____" ] ], [ [ "##### Create a function that takes an email as an input and pritn print the server\n\n [email protected]\n \n**Example, \"[email protected]\" returns: domain.com**", "_____no_output_____" ] ], [ [ "def domainGet(email):\n return email.split('@')[-1]", "_____no_output_____" ], [ "domainGet('[email protected]')", "_____no_output_____" ] ], [ [ "##### Create a function that returns True if the word \"dog\" is in the phrase the user add.", "_____no_output_____" ] ], [ [ "def findDog(st):\n return 'dog' in st.lower().split()", "_____no_output_____" ], [ "findDog('Is there a dog here?')", "_____no_output_____" ] ], [ [ "##### Create a function that counts the number of times the word \"dog\" is in the phrase", "_____no_output_____" ] ], [ [ "def countDog(st):\n count = 0\n for word in st.lower().split():\n if word == 'dog':\n count += 1\n return count", "_____no_output_____" ], [ "countDog('This dog runs faster than the other dog dude!')", "_____no_output_____" ] ], [ [ "### Bonus\n##### Use the lambda() and filter() methods to filtered the words that start with the letter \"s\", example:\n\n seq = ['soup','dog','salad','cat','great']\n\n##### result:\n\n ['soup','salad']", "_____no_output_____" ] ], [ [ "seq = ['soup','dog','salad','cat','great']", "_____no_output_____" ], [ "list(filter(lambda word: word[0]=='s',seq))", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
cb7ddf9635b486c601159f271576411807595b5b
61,634
ipynb
Jupyter Notebook
nb/2.0-Simple-Neural-Network.ipynb
unjymslf/pytorch-geo-intro
eaa3bb94aa3b3744a57ac275fe71dba80b0a2303
[ "Apache-2.0" ]
1
2018-05-12T23:07:32.000Z
2018-05-12T23:07:32.000Z
nb/2.0-Simple-Neural-Network.ipynb
gganssle/pytorch-geo-intro
eaa3bb94aa3b3744a57ac275fe71dba80b0a2303
[ "Apache-2.0" ]
null
null
null
nb/2.0-Simple-Neural-Network.ipynb
gganssle/pytorch-geo-intro
eaa3bb94aa3b3744a57ac275fe71dba80b0a2303
[ "Apache-2.0" ]
null
null
null
136.358407
17,508
0.87687
[ [ [ "# PyTorch\n# Intro to Neural Networks\nLets use some simple models and try to match some simple problems", "_____no_output_____" ] ], [ [ "import numpy as np\n\nimport torch\nimport torch.nn as nn\n\nfrom tensorboardX import SummaryWriter\n\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "### Data Loading\n\nBefore we dive deep into the nerual net, lets take a brief aside to discuss data loading.\n\nPytorch provides a Dataset class which is fairly easy to inherit from. We need only implement two methods for our data load:\n9. __len__(self) -> return the size of our dataset\n9. __getitem__(self, idx) -> return a data at a given index.\n\nThe *real* benefit of implimenting a Dataset class comes from using the DataLoader class.\nFor data sets which are too large to fit into memory (or more likely, GPU memory), the DataLoader class gives us two advantages:\n9. Efficient shuffling and random sampling for batches\n9. Data is loaded in a seperate *processes*.\n\nNumber (2) above is *important*. The Python interpretter is single threaded only, enforced with a GIL (Global Interpreter Lock). Without (2), we waste valuable (and potentially expensive) processing time shuffling and sampling and building tensors. \nSo lets invest a little time to build a Dataset and use the DataLoader.\n\nIn or example below, we are going to mock a dataset with a simple function, this time:\n\ny = sin(x) + 0.01 * x^2", "_____no_output_____" ] ], [ [ "fun = lambda x: np.sin(x) + 0.01 * x * x\n\nX = np.linspace(-3, 3, 100)\nY = fun(X)\n\nplt.figure(figsize=(7,7))\n\nplt.scatter(X,Y)\n\nplt.legend() \nplt.show()", "No handles with labels found to put in legend.\n" ] ], [ [ "### Our First Neural Net\nLets now build our first neural net.\n\nIn this case, we'll take a classic approach with 2 fully connected hidden layers and a fully connected output layer.", "_____no_output_____" ] ], [ [ "class FirstNet(nn.Module):\n def __init__(self, input_size, hidden_size, num_classes):\n super(FirstNet, self).__init__()\n \n self.fc1 = nn.Linear(input_size, hidden_size) \n self.relu = nn.ReLU()\n self.fc2 = nn.Linear(hidden_size, num_classes) \n \n def forward(self, x):\n x = x.view(-1,1)\n \n out = self.fc1(x)\n out = self.relu(out)\n out = self.fc2(out)\n \n return out\n \nnet = FirstNet(input_size=1, hidden_size=64, num_classes=1)\n\nprint(net)", "FirstNet(\n (fc1): Linear(in_features=1, out_features=64, bias=True)\n (relu): ReLU()\n (fc2): Linear(in_features=64, out_features=1, bias=True)\n)\n" ] ], [ [ "Lets look at a few key features of our net:\n\n1) We have 2 fully connected layers, defined in our init function.\n\n2) We define a *forward pass* method which is the prediction of the neural net given an input X\n\n3) Note that we make a *view* of our input array. In our simple model, we expect a 1D X value, and we output a 1D Y value. For efficiency, we may wish to pass in *many* X values, particularly when training. Thus, we need to set up a *view* of our input array: Many 1D X values. -1 in this case indicates that the first dimension (number of X values) is inferred from the tensor's shape.\n\n### Logging and Visualizing to TensorboardX\n\nLets track the progress of our training and visualize in tensorboard (using tensorboardX). 
We'll also add a few other useful functions to help visualize things.\n\nTo view the output, run:\n`tensorboard --logdir nb/run`", "_____no_output_____" ] ], [ [ "tbwriter = SummaryWriter()", "_____no_output_____" ] ], [ [ "### Graph Visualization and Batching\nWe will begin by adding a graph visualization to tensorboard. To do this, we need a valid input to our network.\n\nOur network is simple - floating point in, floating point out. *However*, pytorch expects us to *batch* our inputs - therefore it expects an *array* of inputs instead of a single input. There are many ways to work around this, I like \"unsqueeze\".", "_____no_output_____" ] ], [ [ "X = torch.FloatTensor([0.0])\ntbwriter.add_graph(net, X)", "_____no_output_____" ] ], [ [ "### Cuda\nIF you have a GPU available, your training will run much faster.\nMoving data back and forth between the CPU and the GPU is fairly straightforward - although it can be easy to forget.", "_____no_output_____" ] ], [ [ "use_cuda = torch.cuda.is_available()\nif use_cuda:\n net = net.cuda()", "_____no_output_____" ], [ "def makeFig(iteration):\n X = np.linspace(-3, 3, 100, dtype=np.float32)\n X = torch.FloatTensor(X)\n if use_cuda:\n Y = net.forward(X.cuda()).cpu()\n else:\n Y = net.forward(X)\n \n fig = plt.figure()\n plt.plot(X.data.numpy(), Y.data.numpy())\n plt.title('Prediciton at iter: {}'.format(iteration))\n return fig\n \ndef showFig(iteration):\n fig = makeFig(iteration)\n plt.show()\n plt.close()\n \ndef logFig(iteration):\n fig = makeFig(iteration)\n fig.canvas.draw()\n raw = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='')\n raw = raw.reshape(fig.canvas.get_width_height()[::-1] + (3,))\n tbwriter.add_image('Prediction at iter: {}'.format(iteration), raw)\n plt.close()\n \nshowFig(0)", "_____no_output_____" ] ], [ [ "Ok, we have a ways to go. Lets use our data loader and do some training. Here we will use MSE loss (mean squared error) and SGD optimizer.", "_____no_output_____" ] ], [ [ "%%time\n\nlearning_rate = 0.01\nnum_epochs = 4000\n\nif use_cuda:\n net = net.cuda()\n\ncriterion = nn.MSELoss()\noptimizer = torch.optim.SGD(net.parameters(), lr=learning_rate)\nnet.train()\n\nX = np.linspace(-3, 3, 100)\nY = fun(X)\n\nX = torch.FloatTensor(X)\nY = torch.FloatTensor(Y).view(-1,1)\n\nif use_cuda:\n X = X.cuda()\n Y = Y.cuda()\n\nfor epoch in range(num_epochs):\n pred = net.forward(X)\n loss = criterion(pred, Y)\n\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n\n tbwriter.add_scalar(\"Loss\", loss.data[0])\n\n if (epoch % 100 == 99):\n print(\"Epoch: {:>4} Loss: {}\".format(epoch, loss.data[0]))\n for name, param in net.named_parameters():\n tbwriter.add_histogram(name, param.clone().cpu().data.numpy(), epoch)\n logFig(epoch)\n \nnet.eval()", "Epoch: 99 Loss: 0.07742928713560104\n" ], [ "showFig(0)", "_____no_output_____" ] ], [ [ "## Conclusions\n\nWe've written our first network, take a moment and play with some of our models here.\n\nTry inputting a different function into the functional dataset, such as:\n dataset = FunctionalDataset(lambda x: 1.0 if x > 0 else -1.0\n\nTry experimenting with the network - change the number of neurons in the layer, or add more layers.\n \nTry changing the learning rate (and probably the number of epochs).\n\nAnd lastly, try disabling cuda (if you have a gpu).\n\n#### How well does the prediction match our input function?\n#### How long does it take to train?\n\nOne last note: we are absolutely *over-fitting* our dataset here. In this example, that's ok. 
For real work, we will need to be more careful.\n\nSpeaking of real work, let's do some real work identifying customer cohorts.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
cb7ddfae113a0e0a512b03165e44eb203b3d6fb4
234,516
ipynb
Jupyter Notebook
Analysis/Practices/Graph/tests_grafics.ipynb
Renanrbsc/DataScience
1118d2fdc2326c64228a44841054ccbe0f554075
[ "MIT" ]
null
null
null
Analysis/Practices/Graph/tests_grafics.ipynb
Renanrbsc/DataScience
1118d2fdc2326c64228a44841054ccbe0f554075
[ "MIT" ]
null
null
null
Analysis/Practices/Graph/tests_grafics.ipynb
Renanrbsc/DataScience
1118d2fdc2326c64228a44841054ccbe0f554075
[ "MIT" ]
null
null
null
1,196.510204
179,364
0.95414
[ [ [ "import numpy as np\nimport matplotlib.pyplot as plt\n\nx = np.arange(0.5, 10, 0.001)\n\ny1 = np.log(x)\ny2 = 5 * np.sin(x) / x\n\nplt.style.use('seaborn-darkgrid') # Define o fundo do gráfico\nplt.figure(figsize=(8,5)) # Define o tamanho do gráfico\n\n# Estipula os parametros das letras do titulo, eixo x e eixo y\nplt.title('Dois Gráficos Aleatórios', fontsize=16, fontweight='bold', fontstyle='italic', fontfamily='serif', color='grey')\nplt.xlabel('Valores do eixo x', fontsize=13, fontfamily='serif')\nplt.ylabel('Valores do eixo y', fontsize=13, fontfamily='serif')\n\nplt.tight_layout()\n\n# Estipula como vai ser as linhas (uma linha sobreposta a outra com largura e opacidade diferente)\nplt.plot(x, y1, label='log(x)', color='blue')\nplt.plot(x, y1, color='blue', linewidth=10, alpha=0.1)\n\nplt.plot(x, y2, label='sen(x)/x', color='red')\nplt.plot(x, y2, color='red', linewidth=10, alpha=0.1)\n\n# Estipula a estilização da legenda\nplt.legend(fontsize=13, frameon=True, framealpha=0.2, facecolor='grey')", "_____no_output_____" ], [ "import numpy as np\n\nx = np.arange(0,10, 0.1)\n\nz = np.arange(0.5, 10)\n\ny1 = np.log(z)\ny2 = 5 * np.sin(z) / z\ny3 = np.sin(z)\ny4 = np.tan(z)\n\nplt.style.use('seaborn-darkgrid')\nfig1, f1_axes = plt.subplots(ncols=2, nrows=2, figsize=(15, 10))\nfig1.suptitle(\"Vários Gráficos Em Uma Mesma Figura\", fontsize=30, fontweight='bold', fontstyle='italic', fontfamily='serif')\n\nbox1 = f1_axes[0, 0]\nbox2 = f1_axes[0, 1]\nbox3 = f1_axes[1, 0]\nbox4 = f1_axes[1, 1]\n\n\nbox1.set_title('Caixa 1', fontsize=15, fontweight='bold')\nbox1.set_xlabel('Caixa 1 - Eixo x', fontsize=13, fontfamily='serif')\nbox1.set_ylabel('Caixa 1 - Eixo y', fontsize=13, fontfamily='serif')\n\nbox1.plot(np.sin(x), label='sen(x)', color= 'red')\nbox1.plot(np.sin(x), color='red', linewidth=10, alpha=0.1)\n\nbox1.plot(np.cos(x), label='cos(x)', color= 'darkturquoise')\nbox1.plot(np.cos(x), color='darkturquoise', linewidth=10, alpha=0.1)\n\nbox1.legend(fontsize=13, frameon=True, framealpha=0.2, facecolor='grey')\n\n\nbox2.set_title('Caixa 2', fontsize=15, fontweight='bold')\nbox2.set_xlabel('Caixa 2 - Eixo x', fontsize=13, fontfamily='serif')\nbox2.set_ylabel('Caixa 2 - Eixo y', fontsize=13, fontfamily='serif')\n\nbox2.plot(z, y1, label='log(x)', color='darkslategrey')\nbox2.plot(z, y1, color='blue', linewidth=10, alpha=0.1)\n\nbox2.plot(z, y2, label='sen(x)/x', color='coral')\nbox2.plot(z, y2, color='red', linewidth=10, alpha=0.1)\n\nbox2.legend(fontsize=13, frameon=True, framealpha=0.2, facecolor='grey')\n\n\nbox3.set_title('Caixa 3', fontsize=15, fontweight='bold')\nbox3.set_xlabel('Caixa 3 - Eixo x', fontsize=13, fontfamily='serif')\nbox3.set_ylabel('Caixa 3 - Eixo y', fontsize=13, fontfamily='serif')\n\nbox3.plot(np.sin(x), label='sen(x)', color= 'black')\nbox3.plot(np.sin(x), color='black', linewidth=10, alpha=0.1)\n\nbox3.plot(np.tan(x), label='cos(x)', color= 'blue')\nbox3.plot(np.tan(x), color='blue', linewidth=10, alpha=0.1)\n\nbox3.legend(fontsize=13, frameon=True, framealpha=0.2, facecolor='grey')\n\n\nbox4.set_title('Caixa 4', fontsize=15, fontweight='bold')\nbox4.set_xlabel('Caixa 4 - Eixo x', fontsize=13, fontfamily='serif')\nbox4.set_ylabel('Caixa 4 - Eixo y', fontsize=13, fontfamily='serif')\n\nbox4.plot(z, y3, label='sen(z)', color='purple')\nbox4.plot(z, y3, color='blue', linewidth=10, alpha=0.1)\n\nbox4.plot(z, y4, label='tan(z)', color='lime')\nbox4.plot(z, y4, color='red', linewidth=10, alpha=0.1)\n\nbox4.legend(fontsize=13, frameon=True, framealpha=0.2, 
facecolor='grey')", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code" ] ]
cb7e2d9feca4e90c4245e6b34ec0084f78ec5249
21,250
ipynb
Jupyter Notebook
codigo_para_abrir_y_contar_palabras_de_archivos.ipynb
WuilsonEstacio/Procesamiento-de-lenguaje-natural
1d1084a71077f46a7aa525272be26c0c82ca251c
[ "MIT" ]
2
2021-07-13T18:45:08.000Z
2021-07-13T18:45:12.000Z
codigo_para_abrir_y_contar_palabras_de_archivos.ipynb
WuilsonEstacio/Procesamiento-de-lenguaje-natural
1d1084a71077f46a7aa525272be26c0c82ca251c
[ "MIT" ]
null
null
null
codigo_para_abrir_y_contar_palabras_de_archivos.ipynb
WuilsonEstacio/Procesamiento-de-lenguaje-natural
1d1084a71077f46a7aa525272be26c0c82ca251c
[ "MIT" ]
null
null
null
35.240464
1,005
0.482588
[ [ [ "<a href=\"https://colab.research.google.com/github/WuilsonEstacio/Procesamiento-de-lenguaje-natural/blob/main/codigo_para_abrir_y_contar_palabras_de_archivos.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "# para leer un archivo\narchivo = open('/content/Hash.txt','r')\nfor linea in archivo:\n print(linea)\narchivo.close()\n\narchivo=\"/content/Hash.txt\"\nwith open(archivo) as f:\n text=f.read()\n\nfor char in \"abcdefghijklmnopqrsrtuvwxyz\":\n perc=100*count_char(text, char)/len(text)\n print(\"{0}-{1}%\".format(char, round(perc, 2)))", " Usted puede interponer demanda ante los jueces civiles del \n\ncircuito que conocen en primera instancia de los \n\nprocesos contenciosos de mayor cuantía por responsabilidad médica. \n\nPretendiendo el pago de los perjuicios materiales \na-6.52%\nb-0.43%\nc-5.65%\nd-5.65%\ne-12.17%\nf-0.0%\ng-0.43%\nh-0.0%\ni-6.96%\nj-0.87%\nk-0.0%\nl-3.48%\nm-2.17%\nn-6.52%\no-7.83%\np-3.48%\nq-0.43%\nr-5.22%\ns-6.52%\nr-5.22%\nt-3.91%\nu-2.61%\nv-0.43%\nw-0.0%\nx-0.0%\ny-0.43%\nz-0.0%\n" ], [ "# Which of the following is the correct regular expression to extract all the phone numbers from the following chunk of text:\nimport re\npatter = '[(]\\d{3}[)]\\s\\d{3}[-]\\d{4}'\nprint(patter)\nre.findall(patter,archivo)", "[(]\\d{3}[)]\\s\\d{3}[-]\\d{4}\n" ], [ "#con este codigo se puede contar las palabras que hay en un archivo\nimport numpy as np\nimport pandas as pd\n\ndef count_char(text, char):\n count=0\n for c in text:\n if c == char:\n count +=1\n return count\n\n# con esto cambiamos el contenido de Hash.txt y modificamos el escrito y lo guardamos\nfile =open(\"/content/Hash.txt\",\"w\")\nfile.write(\"\"\" Usted puede interponer demanda ante los jueces civiles del \ncircuito que conocen en primera instancia de los \nprocesos contenciosos de mayor cuantía por responsabilidad médica. \nPretendiendo el pago de los perjuicios materiales \"\"\")\nfile.close()\nfilename=\"20-12-2020.txt\"\nwith open('/content/Hash.txt') as f:\n text=f.read()\n\nfor char in \"abcdefghijklmnopqrsrtuvwxyz\":\n perc=100*count_char(text, char)/len(text)\n print(\"{0}-{1}%\".format(char, round(perc, 2)))\n\n", "a-6.52%\nb-0.43%\nc-5.65%\nd-5.65%\ne-12.17%\nf-0.0%\ng-0.43%\nh-0.0%\ni-6.96%\nj-0.87%\nk-0.0%\nl-3.48%\nm-2.17%\nn-6.52%\no-7.83%\np-3.48%\nq-0.43%\nr-5.22%\ns-6.52%\nr-5.22%\nt-3.91%\nu-2.61%\nv-0.43%\nw-0.0%\nx-0.0%\ny-0.43%\nz-0.0%\n" ], [ "import numpy as np\nimport pandas as pd\n\nfilename=input(\"ingrese el nombre del archivo: \")\nwith open( filename ) as f:\n text=f.read()\n\nfilename = open(\"20-12-2020.txt\",\"r\")\nfor linea in filename.readlines():\n#str=filename.read()\n#print(len(str))\n print(linea)\nfilename.close()\n\n\n", "ingrese el nombre del archivo: mas\n" ], [ "# importamos librerias\nimport nltk \nnltk.download('cess_esp') # para preeentener\nfrom nltk.corpus import cess_esp as cess\nfrom nltk import UnigramTagger as ut # etiquetador por unigramas\nfrom nltk import BigramTagger as bt # etiquetador por bigramas", "[nltk_data] Downloading package cess_esp to /root/nltk_data...\n[nltk_data] Package cess_esp is already up-to-date!\n" ], [ "# https://www.delftstack.com/es/howto/python-pandas/how-to-load-data-from-txt-with-pandas/#read_csv-m%25C3%25A9todo-para-cargar-los-datos-del-archivo-de-texto\n# una forma de leer el archivo con pandas\nimport pandas as pd\ndf = pd.read_csv(\n '/content/Hash.txt', sep=\" \",header=None)\nprint(df)", " 0 1 2 3 ... 
7 8 9 10\n0 NaN Usted puede interponer ... jueces civiles del NaN\n1 circuito que conocen en ... los NaN NaN NaN\n2 procesos contenciosos de mayor ... médica. NaN NaN NaN\n3 Pretendiendo el pago de ... NaN NaN NaN NaN\n\n[4 rows x 11 columns]\n" ], [ "# leemos el archivo\nimport pandas as pd\nimport numpy as np\n\narchivo = open('/content/Hash.txt','r')\nfor linea in archivo:\n print(linea)\narchivo.close()", " Usted puede interponer demanda ante los jueces civiles del \n\ncircuito que conocen en primera instancia de los \n\nprocesos contenciosos de mayor cuantía por responsabilidad médica. \n\nPretendiendo el pago de los perjuicios materiales \n" ], [ "# pip install win_unicode_console", "_____no_output_____" ], [ "# Utilizado para vizualizar caracteres correctamente en consola\nimport codecs\nimport win_unicode_console\nfrom nltk.tokenize import sent_tokenize\nfrom nltk.tokenize import word_tokenize\n \n# Abrimos el archivo\narchivo = codecs.open('/content/Hash.txt', 'r', encoding='utf-8')\ntexto = \"\"\n\n#Almacenamos el texto en una variable\nfor linea in archivo:\n linea = linea.strip()\n texto = texto + \" \" + linea", "_____no_output_____" ], [ "text = word_tokenize(texto)\nnltk.pos_tag(text) # etiquetado aplicado al text", "_____no_output_____" ], [ "#Realizamos el Tokenizing con Sent_Tokenize() a cada una de las sentencias del texto\n# tokens = sent_tokenize(texto)", "_____no_output_____" ] ], [ [ "# **Test** \n\n1.\nSi tenemos un dataset etiquetado donde la categoría adjetivo (ADJ) aparece un total de 500 veces entre todos los tokens, y de esas veces solamente la palabra \"noble\" le corresponde 200 veces, entonces podemos decir que:\n\nLa probabilidad de emisión P(noble|ADJ) = 40%\n\n2.\nEl proceso mediante el cual un Modelo Markoviano Latente determina la secuencia de etiquetas más probable para una secuencia de palabras es:\n\nUsando el algoritmo de Viterbi para obtener la categoría más probable, palabra por palabra.\n\n3.\nDada una cadena de texto text en español, el procedimiento para asignar las etiquetas gramaticales con Stanza es a partir de un objeto nlp(text), donde:\n\nnlp = stanza.Pipeline('es', processors='tokenize,pos')\n\n4.\nLa ingeniería de atributos se usa para:\n\nConstruir atributos particulares de palabras y textos que permitan dar un input más apropiado a un modelo de clasificación.\n\n5.\nEl problema de clasificación de texto pertenece a la categoría de Machine Learning supervisado porque:\n\nDurante el entrenamiento, el modelo tiene conocimiento de las etiquetas correctas que debería predecir.\n\n6.\nEn un modelo de clasificación por categorías gramaticales, el algoritmo de Viterbi se usa para:\n\nEl proceso de decodificación: encontrar la secuencia de etiquetas más probable.\n\n7.\nEn un Modelo Markoviano Latente se necesitan los siguientes ingredientes:\n\nMatrices de transición, emisión y distribución inicial de estados.\n\n8.\nEn un problema de clasificación de emails entre SPAM y HAM, la métrica de recall tiene la siguiente interpretación:\n\nDe todos los correos que realmente son SPAM, la fracción que el modelo logró identificar.\n\n9.\nPara entrenar un clasificador de Naive Bayes en NLTK, se escribe en Python:\n\nnltk.NaiveBayesClassifier.train(data)\n\n10.\nSi tienes un modelo de clasificación binaria que luego de entrenarlo, obtienes que el número de verdaderos positivos es 200 y el número de falsos positivos es 120, entonces la métrica de precisión de dicho modelo tiene un valor de:\n\n200/320\n\n11.\nUn algoritmo general de clasificación de 
texto:\n\nEs un algoritmo de Machine Learning supervisado.\n\n12.\nEl tokenizador por defecto en NLTK para el idioma inglés es:\n\npunkt\n\n13.\nEn una cadena de Markov se necesitan los siguientes elementos:\n\nMatriz de transiciones y distribución inicial de estados.\n\n14.\nEntrenar un Modelo Markoviano Latente significa:\n\nCalcular las matrices de probabilidad de transición y emisión con un corpus de textos.\n\n15.\nUna de las siguientes no es una categoría de ambigüedades del lenguaje:\n\nVectorial\n\n16.\nEl suavizado de Laplace se usa en un algoritmo de clasificación con el objetivo de:\n\nEvitar probabilidades nulas y denominadores iguales a cero.\n\n17.\nEl clasificador de Naive Bayes es:\n\nUn clasificador probabilístico que hace uso de la regla de Bayes.\n\n18.\nEn la frase: \"mi hermano es muy noble\", la palabra noble hace referencia a:\n\nUn adjetivo\n\n19.\nCon Naive Bayes preferimos hacer cálculos en espacio logarítmico para:\n\nEvitar productos de números demasiado pequeños para la precisión de máquina.\n\n20.\nEn un modelo MEMM:\n\nEl proceso de decodificación es similar al de un HMM, y por lo tanto se puede usar un tipo de algoritmo de Viterbi.\n\n21.\nEl accuracy de entrenamiento de un modelo se calcula como:\n\n(número de veces que el modelo predice la categoría correcta) / (total de datos usados para entrenamiento)\n\n22.\nSi tenemos una cadena de Markov para describir las probabilidades de transición en cuanto al clima de un dia para otro, y observamos la siguiente secuencia de estados día tras día: (frío, frío, caliente, frío, tibio, caliente, tibio, frío), entonces la probabilidad de transición P(caliente|frío) es:\n\n50%\n\n23.\n\nEn un Modelo Markoviano Latente, el problema de calcular la secuencia de etiquetas más probable se expresa con la siguiente expresión matemática:\n\n$${\\arg \\max}_{(t^n)}\\prod_i P(w_i \\vert t_i)P(t_i \\vert t_{i-1})$$\n\n24.\nPara un modelo de clasificación de palabras con Naive Bayes en NLTK, debemos entrenar el algoritmo usando:\n\nnltk.NaiveBayesClassifier.train(train_set) donde usamos una funcion que extrae atributos llamada atributos() y:\n\ntrain_set = [(atributos(palabra), categoría de la palabra), ...]\n\n25.\nDada una cadena de texto text en inglés, el procedimiento para asignar las etiquetas gramaticales con NLTK es:\n\nnltk.pos_tag(word_tokenize(text))", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
cb7e376c1bec9305057b5dea614bf8a600f27f5f
174,952
ipynb
Jupyter Notebook
.ipynb_checkpoints/CMA evolution-clean-checkpoint.ipynb
WillButAgain/ENAS
515eb42791090ae023106e434f1f7266dac19e35
[ "MIT" ]
null
null
null
.ipynb_checkpoints/CMA evolution-clean-checkpoint.ipynb
WillButAgain/ENAS
515eb42791090ae023106e434f1f7266dac19e35
[ "MIT" ]
null
null
null
.ipynb_checkpoints/CMA evolution-clean-checkpoint.ipynb
WillButAgain/ENAS
515eb42791090ae023106e434f1f7266dac19e35
[ "MIT" ]
null
null
null
359.983539
17,420
0.924305
[ [ [ "import torch\nfrom NASmodels import NASController, MasterModel, EvolutionController\nfrom utils import ControllerSettings\nfrom tqdm import tqdm\nimport mlflow\n\n# only for plotting\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ncont_settings = ControllerSettings(mask_ops=True, learning_rate=1e-2, device='cuda:0', search_space = MasterModel(keys=[], mask_ops=True).search_space, max_len=20, hidden_size=488, embedding_size=256)\n\nn_models = 800\nk=40\niterations = 200\n\nEA = EvolutionController(cont_settings, n_models).to(cont_settings.device)\nEA.initialize_models()\n\nscores = []\niterable = tqdm(range(iterations))", " 0%| | 0/200 [00:00<?, ?it/s]" ], [ "# https://docs.python.org/2/library/profile.html#module-cProfile\nmlflow.tracking.set_tracking_uri('file:/share/lazy/will/ConstrastiveLoss/Logs')\n\n\nimport cProfile\n# p = cProfile.run('''\n# for _ in tqdm(range(iterations)):\n# score = EA(k)\n# scores.append(score)\n# iterable.set_description('Score: {}'.format(score))\n# ''', 'restats')\nfor _ in tqdm(range(iterations)):\n score = EA(k)\n scores.append(score)\n iterable.set_description('Score: {}'.format(score))\n print(EA.state_dict()['hidden_to_embedding_weight_cov_matrix'])\n plt.close()\n plt.plot(np.array(scores))\n plt.show()\n", "\nScore: 2519.0: 0%| | 0/200 [00:14<?, ?it/s]" ], [ "import pstats\np = pstats.Stats('restats')\np.strip_dirs().sort_stats('cumtime')\np.print_stats()\n# p.strip_dirs().sort_stats(-1).print_stats()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
cb7e3a8e00710e945656568fca1281441d0f3982
9,224
ipynb
Jupyter Notebook
analysis_notebooks/.ipynb_checkpoints/FrontRaiseAnalysis-checkpoint.ipynb
silcheon/UROP_project
3eafdcb21216f910b45c33485ca40a5441a6df2c
[ "Apache-2.0" ]
null
null
null
analysis_notebooks/.ipynb_checkpoints/FrontRaiseAnalysis-checkpoint.ipynb
silcheon/UROP_project
3eafdcb21216f910b45c33485ca40a5441a6df2c
[ "Apache-2.0" ]
null
null
null
analysis_notebooks/.ipynb_checkpoints/FrontRaiseAnalysis-checkpoint.ipynb
silcheon/UROP_project
3eafdcb21216f910b45c33485ca40a5441a6df2c
[ "Apache-2.0" ]
2
2020-10-31T12:53:18.000Z
2022-02-08T08:48:19.000Z
34.939394
116
0.509649
[ [ [ "import os\nimport sys\nimport glob\nimport numpy as np\n\nfrom parse import load_ps\n\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "def split_num(s):\n head = s.rstrip('0123456789')\n tail = s[len(head):]\n return head, tail", "_____no_output_____" ], [ "def files_in_order(folderpath):\n npy_files = os.listdir(folderpath)\n\n no_extensions = [os.path.splitext(npy_file)[0] for npy_file in npy_files]\n\n splitted = [split_num(s) for s in no_extensions]\n\n splitted = np.array(splitted)\n\n indices = np.lexsort((splitted[:, 1].astype(int), splitted[:, 0]))\n\n npy_files = np.array(npy_files)\n return npy_files[indices]", "_____no_output_____" ], [ "files = files_in_order(os.path.join('poses_compressed', 'frontraise'))\nprint(files)", "['frontraise_bad_1.npy' 'frontraise_bad_2.npy' 'frontraise_bad_3.npy'\n 'frontraise_bad_4.npy' 'frontraise_bad_5.npy' 'frontraise_bad_6.npy'\n 'frontraise_bad_7.npy' 'frontraise_bad_8.npy' 'frontraise_bad_9.npy'\n 'frontraise_bad_10.npy' 'frontraise_bad_11.npy' 'frontraise_bad_12.npy'\n 'frontraise_bad_13.npy' 'frontraise_good_1.npy' 'frontraise_good_2.npy'\n 'frontraise_good_3.npy' 'frontraise_good_4.npy' 'frontraise_good_5.npy'\n 'frontraise_good_6.npy' 'frontraise_good_7.npy' 'frontraise_good_8.npy'\n 'frontraise_good_9.npy' 'frontraise_good_10.npy' 'frontraise_good_11.npy'\n 'frontraise_good_12.npy' 'frontraise_good_13.npy'\n 'frontraise_good_14.npy' 'frontraise_good_15.npy']\n" ], [ "for filename in files:\n print(\"=\"*30)\n print(\"Starting:\", filename)\n ps = load_ps(\"poses_compressed/frontraise/\" + filename)\n poses = ps.poses\n \n right_present = [1 for pose in poses \n if pose.rshoulder.exists and pose.relbow.exists and pose.rwrist.exists]\n left_present = [1 for pose in poses\n if pose.lshoulder.exists and pose.lelbow.exists and pose.lwrist.exists]\n right_count = sum(right_present)\n left_count = sum(left_present)\n side = 'right' if right_count > left_count else 'left'\n\n # print('Exercise arm detected as: {}.'.format(side))\n \n if side == 'right':\n joints = [(pose.rshoulder, pose.relbow, pose.rwrist, pose.rhip, pose.neck) for pose in poses]\n else:\n joints = [(pose.lshoulder, pose.lelbow, pose.lwrist, pose.lhip, pose.neck) for pose in poses]\n\n # filter out data points where a part does not exist\n joints = [joint for joint in joints if all(part.exists for part in joint)]\n joints = np.array(joints)\n \n # Neck to hip\n back_vec = np.array([(joint[4].x - joint[3].x, joint[4].y - joint[3].y) for joint in joints])\n # back_vec = np.array([(joint[3].x, joint[3].y) for joint in joints])\n # Check range of motion of the back\n # Straining back\n back_vec_range = np.max(back_vec, axis=0) - np.min(back_vec, axis=0)\n # print(\"Range of motion for back: %s\" % back_vec_range)\n \n # threshold the x difference at 0.3: less is good, more is too much straining and movement of the back.\n \n # Shoulder to hip \n torso_vecs = np.array([(joint[0].x - joint[3].x, joint[0].y - joint[3].y) for joint in joints])\n # Arm\n arm_vecs = np.array([(joint[0].x - joint[2].x, joint[0].y - joint[2].y) for joint in joints])\n \n # normalize vectors\n torso_vecs = torso_vecs / np.expand_dims(np.linalg.norm(torso_vecs, axis=1), axis=1)\n arm_vecs = arm_vecs / np.expand_dims(np.linalg.norm(arm_vecs, axis=1), axis=1)\n \n # Check if raised all the way up\n angles = np.degrees(np.arccos(np.clip(np.sum(np.multiply(torso_vecs, arm_vecs), axis=1), -1.0, 1.0)))\n print(\"Max angle: \", np.max(angles))\n \n", "==============================\nStarting: 
frontraise_bad_1.npy\nMax angle: 76.44579393687194\n==============================\nStarting: frontraise_bad_2.npy\nMax angle: 76.53542334932133\n==============================\nStarting: frontraise_bad_3.npy\nMax angle: 83.44028662338161\n==============================\nStarting: frontraise_bad_4.npy\nMax angle: 91.45787597595873\n==============================\nStarting: frontraise_bad_5.npy\nMax angle: 73.68385164131362\n==============================\nStarting: frontraise_bad_6.npy\nMax angle: 77.0184894992613\n==============================\nStarting: frontraise_bad_7.npy\nMax angle: 93.28387860403919\n==============================\nStarting: frontraise_bad_8.npy\nMax angle: 61.81476558703319\n==============================\nStarting: frontraise_bad_9.npy\nMax angle: 70.16220719382201\n==============================\nStarting: frontraise_bad_10.npy\nMax angle: 67.74141626093673\n==============================\nStarting: frontraise_bad_11.npy\nMax angle: 65.60892049810339\n==============================\nStarting: frontraise_bad_12.npy\nMax angle: 73.49291738986014\n==============================\nStarting: frontraise_bad_13.npy\nMax angle: 65.20889427317154\n==============================\nStarting: frontraise_good_1.npy\nMax angle: 105.97905342601622\n==============================\nStarting: frontraise_good_2.npy\nMax angle: 104.05891172589975\n==============================\nStarting: frontraise_good_3.npy\nMax angle: 101.64971336209359\n==============================\nStarting: frontraise_good_4.npy\nMax angle: 98.71037797092596\n==============================\nStarting: frontraise_good_5.npy\nMax angle: 104.99856244779693\n==============================\nStarting: frontraise_good_6.npy\nMax angle: 107.13825122517582\n==============================\nStarting: frontraise_good_7.npy\nMax angle: 108.414909388629\n==============================\nStarting: frontraise_good_8.npy\nMax angle: 108.5533791991937\n==============================\nStarting: frontraise_good_9.npy\nMax angle: 92.46571165603493\n==============================\nStarting: frontraise_good_10.npy\nMax angle: 99.73228248176136\n==============================\nStarting: frontraise_good_11.npy\nMax angle: 95.22134879096605\n==============================\nStarting: frontraise_good_12.npy\nMax angle: 91.64297523968443\n==============================\nStarting: frontraise_good_13.npy\nMax angle: 95.6042943399203\n==============================\nStarting: frontraise_good_14.npy\nMax angle: 100.78599389212843\n==============================\nStarting: frontraise_good_15.npy\nMax angle: 100.556373519022\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
cb7e4c5129d8690e7d91df45bb32c2061ae52b2c
4,385
ipynb
Jupyter Notebook
assets/all_html/2019_12_03_HW7_kaggle_submitter.ipynb
dskw1/dskw1.github.io
ee85aaa7c99c4320cfac95e26063beaac3ae6fcb
[ "MIT" ]
null
null
null
assets/all_html/2019_12_03_HW7_kaggle_submitter.ipynb
dskw1/dskw1.github.io
ee85aaa7c99c4320cfac95e26063beaac3ae6fcb
[ "MIT" ]
1
2022-03-24T18:28:16.000Z
2022-03-24T18:28:16.000Z
assets/all_html/2019_12_03_HW7_kaggle_submitter.ipynb
dskw1/dskw1.github.io
ee85aaa7c99c4320cfac95e26063beaac3ae6fcb
[ "MIT" ]
1
2021-09-01T16:54:38.000Z
2021-09-01T16:54:38.000Z
37.161017
1,291
0.590422
[ [ [ "import pandas as pd\n\ntrain=pd.read_csv(\"kaggle-sentiment/train.tsv\", delimiter='\\t')\ny=train['Sentiment'].values\nX=train['Phrase'].values\n\npred_vec = bigram_tv_v3 # 60.4\n\n\ntest = pd.read_csv(\"kaggle-sentiment/test.tsv\", delimiter='\\t')\nk_id = test['PhraseId']\nk_text = test['Phrase']\n\n# k_vec = bigram_tv_v3.transform(k_text)\n# k_vec\n\ndef get_kaggle_test_train_vec(X,y,vectorizer):\n X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=None, random_state=0)\n X_train_vec = vectorizer.fit_transform(X_train)\n# X_test_vec = vectorizer.transform(X_test)\n return X_train_vec, y_train,\n\ndef do_the_kaggle(X,y,vec):\n X_train_vec, y_train = get_kaggle_test_train_vec(X,y,vec)\n svm_clf = LinearSVC(C=1)\n k_vec = pred_vec.transform(k_text)\n print(len(X), X_train_vec.shape, k_vec.shape)\n\n prediction = svm_clf.fit(X_train_vec,y_train).predict(k_vec)\n kaggle_submission = zip(k_id, prediction)\n outf=open('kaggle_submission_linearSVC_v8.csv', 'w')\n outf.write('PhraseId,Sentiment\\n')\n for x, value in enumerate(kaggle_submission): outf.write(str(value[0]) + ',' + str(value[1]) + '\\n')\n outf.close()\n print('prediction complete')\n\ndo_the_kaggle(X,y,pred_vec)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
cb7e52a139f7e67cfa84f8d3e397134367e2d74f
3,779
ipynb
Jupyter Notebook
files/coli/week_02/03_dit_coli_rulebasedsentiment.ipynb
albarron/academic-kickstart
a946ea6fd4048abb9e4be488e7e4891e7cc18dbe
[ "MIT" ]
1
2021-02-27T22:10:13.000Z
2021-02-27T22:10:13.000Z
files/coli/week_02/03_dit_coli_rulebasedsentiment.ipynb
albarron/academic-kickstart
a946ea6fd4048abb9e4be488e7e4891e7cc18dbe
[ "MIT" ]
null
null
null
files/coli/week_02/03_dit_coli_rulebasedsentiment.ipynb
albarron/academic-kickstart
a946ea6fd4048abb9e4be488e7e4891e7cc18dbe
[ "MIT" ]
6
2020-02-21T21:52:26.000Z
2021-02-27T22:18:12.000Z
25.362416
152
0.542207
[ [ [ "# Playing with a rule-based sentiment analyser\n", "_____no_output_____" ] ], [ [ "! pip install vaderSentiment", "_____no_output_____" ], [ "from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer\nsa = SentimentIntensityAnalyzer()\n# Let us have a look at the lexicon\n#sa.lexicon\n[(tok, score) for tok, score in sa.lexicon.items() if tok.startswith(\"c\")]", "_____no_output_____" ], [ "# Let us see if there are bigrams\n[(tok, score) for tok, score in sa.lexicon.items() if \" \" in tok]\n", "_____no_output_____" ], [ "# Finally, let's score!!\nsa.polarity_scores(text=\"Python is very readable and it's great for NLP.\")\n", "_____no_output_____" ], [ "sa.polarity_scores(text=\"Python is not a bad choice for many applications.\")\n", "_____no_output_____" ], [ "corpus = [\"Absolutely perfect! Love it! :-) :-) :-)\",\n \"Horrible! Completely useless. :(\",\n \"It was OK. Some good and some bad things.\"]\n\nfor doc in corpus:\n scores = sa.polarity_scores(doc)\n print('{:+}: {}'.format(scores['compound'], doc))\n", "_____no_output_____" ], [ "# Scoring an Amazon review\n\ntext = \"\"\"\"This monitor is definitely a good value. Does it have superb color and \ncontrast? No. Does it boast the best refresh rate on the market? No. \nBut if you're tight on money, this thing looks and preforms great for the money. \nIt has a Matte screen which does a great job at eliminating glare. The chassis it's enclosed \nwithin is absolutely stunning.\")\"\"\"\nlen(text.split())\n\nfor i in [10, 20, 45, 60]:\n t = \" \".join(text.split()[:i])\n print(i,\"\\t\", t)\n print(\"ONE TIME\", sa.polarity_scores(t))\n print(\"THREE TIME\", sa.polarity_scores(\" \".join([t, t, t])))\n print\n", "_____no_output_____" ], [ "print(sa.polarity_scores(\"this is not good\"))\nprint(sa.polarity_scores(\"this is not good at all\"))", "_____no_output_____" ], [ "# Scoring a tweet\nsa.polarity_scores(\"His ass didnt concede until July 12, 2016. Because he was throwing a tantrum. I can't say this enough: Fuck Bernie Sanders\")", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb7e5e4cbec8310685978a613499d9cc6c081e31
104,786
ipynb
Jupyter Notebook
Resizing Dataset for Transfer Learning and Saving.ipynb
kawseribn/Resizing-Dataset-Images-for-Transfer-Learning
b20522c9b1662b4c72b163e29fb7452ac5389638
[ "MIT" ]
1
2021-08-09T16:18:02.000Z
2021-08-09T16:18:02.000Z
Resizing Dataset for Transfer Learning and Saving.ipynb
kawseribn/Resizing-Dataset-Images-for-Transfer-Learning
b20522c9b1662b4c72b163e29fb7452ac5389638
[ "MIT" ]
null
null
null
Resizing Dataset for Transfer Learning and Saving.ipynb
kawseribn/Resizing-Dataset-Images-for-Transfer-Learning
b20522c9b1662b4c72b163e29fb7452ac5389638
[ "MIT" ]
null
null
null
489.654206
99,616
0.943657
[ [ [ "# <h2 align=center> Resizing Dataset for Transfer Learning and Saving </h2>", "_____no_output_____" ], [ " ", "_____no_output_____" ], [ "### Import Libraries", "_____no_output_____" ] ], [ [ "import numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport utils\nimport os\n\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator\n\nfrom IPython.display import SVG, Image\nimport tensorflow as tf\nprint(\"Tensorflow version:\", tf.__version__)", "Tensorflow version: 1.14.0\n" ] ], [ [ "### Plot Some Sample Images", "_____no_output_____" ] ], [ [ "utils.datasets.fer.plot_example_images(plt,\"train/\").show()\n#Here \"train/\" is the folder where you have all the class of your dataset\n#explore \"utils/datasets/fer.py\" for more detials on how its printing samples", "_____no_output_____" ], [ "import numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport utils\nimport os\n\nfrom tensorflow.keras.preprocessing.image import ImageDataGenerator\n\nfrom IPython.display import SVG, Image\nimport tensorflow as tf\nprint(\"Tensorflow version:\", tf.__version__)\n\n#python dataset_resizer.py 224 train/ Resized_images/ lanczos\nclass Resizing:\n\tdef __init__(self, img_size, flow_from_directory1,save_directory,algorithm):\n\t\tself.img_size = img_size\n\t\tself.batch_size = 1\n\t\tself.flow_from_directory1 = str(flow_from_directory1)\n\t\tself.save_to_directory = str(save_directory)\n\t\tself.algorithm=str(algorithm)\n\tdef resizer(self):\n\t\tcounter = 1\n\t\tdatagen_train = ImageDataGenerator(horizontal_flip=False)\n\t\ttrain_generator = datagen_train.flow_from_directory(self.flow_from_directory1,target_size=(self.img_size,self.img_size),color_mode=\"rgb\",batch_size=self.batch_size,shuffle=False,class_mode='categorical',save_to_dir=self.save_to_directory,save_prefix='img',save_format='png',subset=None,interpolation=self.algorithm)\n\t\tfor i in train_generator:\n\t\t\tif counter > int(len(os.listdir(self.flow_from_directory1+ expression )))-1 :\n\t\t\t\tbreak\n\t\t\telse:\n\t\t\t\tcounter+=1\n\t\treturn print(\"successfully resized\")", "Tensorflow version: 1.14.0\n" ], [ "for expression in os.listdir(\"train/\"):\n print(str(len(os.listdir(\"train/\" + expression))) + \" \" + expression + \" images\")", "16 angry images\n" ], [ "#make a directory named \"Resized_images/\" at \"\\Resizing-Dataset-Images-for-Transfer-Learning\\\"", "_____no_output_____" ], [ "p1 = Resizing(224,\"train/\",\"Resized_images/\",\"lanczos\").resizer()", "Found 16 images belonging to 1 classes.\nsuccessfully resized\n" ] ], [ [ "# Now check the folder \"Resized_images/ \"", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ] ]
cb7e7b0ff26bd8a52e7d04b9f496e15ebf5c21d5
4,709
ipynb
Jupyter Notebook
Queue/1_Queue_using_array.ipynb
goutamdadhich/DSA_Python
3e7e7abc3b91ad4880420d266ed884cc175be607
[ "MIT" ]
null
null
null
Queue/1_Queue_using_array.ipynb
goutamdadhich/DSA_Python
3e7e7abc3b91ad4880420d266ed884cc175be607
[ "MIT" ]
null
null
null
Queue/1_Queue_using_array.ipynb
goutamdadhich/DSA_Python
3e7e7abc3b91ad4880420d266ed884cc175be607
[ "MIT" ]
null
null
null
19.95339
72
0.440858
[ [ [ "## Queue ADT\n\nCreating a queue using an array in a **circular fashion** .", "_____no_output_____" ], [ "#### First Approach ", "_____no_output_____" ] ], [ [ "class Queue:\n def __init__(self, limit = 10):\n self.que = limit*[None]\n self.limit = limit\n self.front = 0\n self.rear = 0\n \n def size(self):\n return (self.limit - self.front + self.rear)%self.limit\n \n def isEmpty(self):\n return (self.front == self.rear)\n \n def front(self):\n if self.isEmpty():\n print('Queue is empty.')\n return None\n return self.que[self.front]\n \n def enQueue(self, elem):\n if self.size() == self.limit-1:\n print(\"Queue is full.\")\n \n else:\n self.que[self.rear] = elem\n self.rear = (self.rear+1)%self.limit\n \n print('Queue after enQueue:-', self.que)\n \n def deQueue(self):\n if self.isEmpty():\n print('Queue is empty.')\n \n else:\n self.que[self.front] = None\n self.front = (self.front+1)%self.limit\n ", "_____no_output_____" ], [ "queue = Queue(5)", "_____no_output_____" ], [ "queue.size()", "_____no_output_____" ], [ "queue.isEmpty()", "_____no_output_____" ], [ "queue.enQueue(10)", "Queue after enQueue:- [10, None, None, None, None]\n" ], [ "queue.enQueue(20)\nqueue.enQueue(30)\nqueue.enQueue(40)\nqueue.enQueue(50)", "Queue after enQueue:- [10, 20, None, None, None]\nQueue after enQueue:- [10, 20, 30, None, None]\nQueue after enQueue:- [10, 20, 30, 40, None]\nQueue is full.\nQueue after enQueue:- [10, 20, 30, 40, None]\n" ], [ "queue.deQueue()", "_____no_output_____" ], [ "queue.enQueue(50)", "Queue after enQueue:- [None, 20, 30, 40, 50]\n" ], [ "queue.deQueue()", "_____no_output_____" ], [ "queue.enQueue(60)", "Queue after enQueue:- [60, None, 30, 40, 50]\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb7e7d9237b87fc732b4a78ba8b1d648d0c3375d
70,000
ipynb
Jupyter Notebook
Locations.ipynb
kfinity/capstone-speeches
162f218767feffbef12c92a309b66b043f40db0a
[ "MIT" ]
5
2020-11-13T20:08:23.000Z
2021-11-28T03:01:52.000Z
Locations.ipynb
kfinity/capstone-speeches
162f218767feffbef12c92a309b66b043f40db0a
[ "MIT" ]
4
2021-06-08T22:39:49.000Z
2022-03-12T00:51:07.000Z
Locations.ipynb
kfinity/capstone-speeches
162f218767feffbef12c92a309b66b043f40db0a
[ "MIT" ]
null
null
null
81.395349
39,284
0.6322
[ [ [ "import pandas as pd\nimport numpy as np\nimport json\nimport nltk\nimport seaborn as sns\nfrom IPython.core.display import display, HTML\nsns.set()\n\n# local library\nfrom preproc import *\n\nimport streamlit as st\nfrom statistics import mode", "_____no_output_____" ], [ "with open('speeches.json', encoding='utf8') as f:\n speeches = json.load(f)", "_____no_output_____" ], [ "bow = create_bow(speeches)", "_____no_output_____" ], [ "bow", "_____no_output_____" ], [ "speech = bow.reset_index()\nspeech", "_____no_output_____" ], [ "from geotext import GeoText", "_____no_output_____" ], [ "speech['speech'][4].title()", "_____no_output_____" ], [ "places = GeoText(speech['speech'][4].title())", "_____no_output_____" ], [ "places.cities", "_____no_output_____" ], [ "locationlist = []\ny=0\nfor x in speech['title']:\n places = GeoText(speech['title'][y])\n places2 = GeoText(speech['speech'][y].title())\n try:\n #First, attempt to extract location info from the title\n locationlist.append(str(mode(places.cities)))\n except:\n #Second, attempt to extract location info from the speech starting with excluding out any null observations\n if(len(places2.cities)==0):\n locationlist.append(str('Unknown'))\n else:\n # Third, remove all non-standard location terms\n for z in places2.cities[:]:\n if(z=='Of' or z=='Obama' or z=='Man' or z=='Deal' or z=='Much' or z=='Most' or z=='Bar' or z=='March' or z=='Taylor' or z=='Police' or z=='Date' or z=='George' or z=='Wedding' or z=='Beijing' or z=='Barletta'):\n places2.cities.remove(z)\n # Fourth, if there are no more observations left, end with an unknown\n if(len(places2.cities)==0):\n locationlist.append(str('Unknown'))\n # Finally, return with full conditions\n else:\n locationlist.append(str(places2.cities[0]))\n y=y+1", "_____no_output_____" ], [ "locationlist", "_____no_output_____" ], [ "speech.assign(location=locationlist)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb7e84b6f0220faa99891d787aba51e7b429ac6d
7,255
ipynb
Jupyter Notebook
0_Python/Untitled.ipynb
fbsh/code_for_book
96150510598e45bfdcbe0f3d2ed4e2b9502c11b1
[ "MIT" ]
1
2022-03-10T11:24:07.000Z
2022-03-10T11:24:07.000Z
0_Python/Untitled.ipynb
fbsh/code_for_book
96150510598e45bfdcbe0f3d2ed4e2b9502c11b1
[ "MIT" ]
null
null
null
0_Python/Untitled.ipynb
fbsh/code_for_book
96150510598e45bfdcbe0f3d2ed4e2b9502c11b1
[ "MIT" ]
null
null
null
27.903846
751
0.526396
[ [ [ "fuelNeeded = 42/1000\ntank1 = 36/1000\ntank2 = 6/1000\ntank1 + tank2 >= fuelNeeded", "_____no_output_____" ], [ "from decimal import Decimal", "_____no_output_____" ], [ "fN = Decimal(fuelNeeded)\nt1 = Decimal(tank1)\nt2 = Decimal(tank2)\nt1 + t2 >= fN", "_____no_output_____" ], [ "class Rational(object):\n def __init__ (self, num, denom):\n self.numerator = num \n self.denominator = denom\n \n def add(self, other):\n newNumerator = self.numerator * other.denominator + self.denominator * other.numerator \n newDenominator = self.denominator*other.denominator \n return Rational(newNumerator, newDenominator)", "_____no_output_____" ], [ "r1 = Rational(36, 1000)\nr2 = Rational(6, 1000)", "_____no_output_____" ], [ "import numpy as np\nfrom mayavi import mlab\nmlab.init_notebook()", "Notebook initialized with ipy backend.\n" ], [ "s = mlab.test_plot3d()\ns", "_____no_output_____" ], [ "from numpy import pi, sin, cos, mgrid", "_____no_output_____" ], [ "dphi, dtheta = pi/250.0, pi/250.0\n[phi,theta] = mgrid[0:pi+dphi*1.5:dphi,0:2*pi+dtheta*1.5:dtheta]\nm0 = 4; m1 = 3; m2 = 2; m3 = 3; m4 = 6; m5 = 2; m6 = 6; m7 = 4;\nr = sin(m0*phi)**m1 + cos(m2*phi)**m3 + sin(m4*theta)**m5 + cos(m6*theta)**m7\nx = r*sin(phi)*cos(theta)\ny = r*cos(phi)\nz = r*sin(phi)*sin(theta)\n\n#对该数据进行三维可视化\ns = mlab.mesh(x, y, z)\ns\nmlab.savefig('example.png')\n", "_____no_output_____" ], [ "import numpy as np\nfrom mayavi import mlab\n\[email protected](delay = 100)\ndef updateAnimation():\n t = 0.0\n while True:\n ball.mlab_source.set(x = np.cos(t), y = np.sin(t), z = 0)\n t += 0.1\n yield\n\nball = mlab.points3d(np.array(1.), np.array(0.), np.array(0.))\n\nupdateAnimation()\nmlab.show()", "_____no_output_____" ], [ "import numpy\nfrom mayavi import mlab\n\n\ndef lorenz(x, y, z, s=10., r=28., b=8. / 3.):\n \"\"\"The Lorenz system.\"\"\"\n u = s * (y - x)\n v = r * x - y - x * z\n w = x * y - b * z\n return u, v, w\n\n# Sample the space in an interesting region.\nx, y, z = numpy.mgrid[-50:50:100j, -50:50:100j, -10:60:70j]\nu, v, w = lorenz(x, y, z)\nfig = mlab.figure(size=(400, 300), bgcolor=(0, 0, 0))\n\n# Plot the flow of trajectories with suitable parameters.\nf = mlab.flow(x, y, z, u, v, w, line_width=3, colormap='Paired')\nf.module_manager.scalar_lut_manager.reverse_lut = True\nf.stream_tracer.integration_direction = 'both'\nf.stream_tracer.maximum_propagation = 200\n# Uncomment the following line if you want to hide the seed:\n#f.seed.widget.enabled = False\n\n# Extract the z-velocity from the vectors and plot the 0 level set\n# hence producing the z-nullcline.\nsrc = f.mlab_source.m_data\ne = mlab.pipeline.extract_vector_components(src)\ne.component = 'z-component'\nzc = mlab.pipeline.iso_surface(e, opacity=0.5, contours=[0, ],\n color=(0.6, 1, 0.2))\n# When using transparency, hiding 'backface' triangles often gives better\n# results\nzc.actor.property.backface_culling = True\n\n# A nice view of the plot.\nmlab.view(140, 120, 113, [0.65, 1.5, 27])\nmlab.savefig('example.png')", "_____no_output_____" ], [ "import numpy as np\nimport mayavi.mlab as mlab\nimport moviepy.editor as mpy\n", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb7e9416a2149b8a95eb029f653173492e1b4cef
180,963
ipynb
Jupyter Notebook
laboratories/W07_PDE3_Spherically_symmetric_parabolic_PDE.ipynb
dmingram66/PDE3
e0b1a4b84c0bbcca3a169ed5d557835bf9c5720f
[ "CC0-1.0" ]
2
2022-01-19T15:08:44.000Z
2022-01-27T15:42:34.000Z
laboratories/W07_PDE3_Spherically_symmetric_parabolic_PDE.ipynb
dmingram66/PDE3
e0b1a4b84c0bbcca3a169ed5d557835bf9c5720f
[ "CC0-1.0" ]
null
null
null
laboratories/W07_PDE3_Spherically_symmetric_parabolic_PDE.ipynb
dmingram66/PDE3
e0b1a4b84c0bbcca3a169ed5d557835bf9c5720f
[ "CC0-1.0" ]
7
2022-01-24T07:03:07.000Z
2022-03-23T12:06:03.000Z
50.156042
6,024
0.601659
[ [ [ "# Week 7 worksheet: Spherically symmetric parabolic PDEs\n\nThis worksheet contains a number of exercises covering only the numerical aspects of the course. Some parts, however, still require you to solve the problem by hand, i.e. with pen and paper. The rest needs you to write pythob code. It should usually be obvious which parts require which.\n\n#### Suggested reading\n\nYou will see lists of links to further reading and resources throughout the worksheets, in sections titled **Learn more:**. These will include links to the Python documentation on the topic at hand, or links to relevant book sections or other online resources. Unless explicitly indicated, these are not mandatory reading, although of course we strongly recommend that you consult them!\n\n#### Displaying solutions\n\nSolutions will be released after the workshop, as a new `.txt` file in the same GitHub repository. After pulling the file to Noteable, **run the following cell** to create clickable buttons under each exercise, which will allow you to reveal the solutions.\n\n## Note:\nThis workbook expects to find a diretory called figures in the same folder as well as the scripts folder. Please make sure you download figures (and the files it contains) from the GitHub.", "_____no_output_____" ] ], [ [ "%run scripts/create_widgets.py W07", "_____no_output_____" ] ], [ [ "*How it works: You will see cells located below each exercise, each containing a command starting with `%run scripts/show_solutions.py`. You don't need to run those yourself; the command above runs a script which automatically runs these specific cells for you. The commands in each of these cells each create the button for the corresponding exercise. The Python code to achieve this is contained in `scripts/show_solutions.py`, and relies on [IPython widgets](https://ipywidgets.readthedocs.io/en/latest/examples/Widget%20Basics.html) --- feel free to take a look at the code if you are curious.*", "_____no_output_____" ] ], [ [ "%%javascript\nMathJax.Hub.Config({\n TeX: { equationNumbers: { autoNumber: \"AMS\" } }\n});", "_____no_output_____" ] ], [ [ "## Exercise 1\n\n$$\n\\newcommand{\\vect}[1]{\\bm #1}\n\\newcommand{\\grad}{\\nabla}\n\\newcommand{\\pderiv}[2]{\\frac{\\partial #1}{\\partial #2}}\n\\newcommand{\\pdderiv}[2]{\\frac{\\partial^2 #1}{\\partial #2^2}}\n$$\n\nConsider the spherically symmetric form of the heat conduction equation\n$$\n\\pdderiv{u}{r} + \\frac{2}{r}\\pderiv{u}{r} = \\frac1\\kappa\\pderiv{u}{t}\n$$\n\n### Part a)\n\nDefine\n$$\nv(r,t) = r u(r,t)\n$$\nand show that $v$ satisfies the standard one-dimensional heat conduction equation. \n\nWhat can we expect of a solution as $r\\to\\infty$?", "_____no_output_____" ], [ "**Remarks:**\n\n- The worksheet requires understanding of the material from Analytical methods Part 6: Spherical coordinates\n- The material is applied in Analytical methods Example 5: Radially symmetric heat conduction example 9.24b", "_____no_output_____" ] ], [ [ "%run scripts/show_solutions.py W07_ex1_parta", "_____no_output_____" ] ], [ [ "### Part b)\n\nSolve the equation in the annulus $a\\le r\\le b$ subject to the boundary conditions\n\\begin{align*}\nu(a,t) &= T_0, \\quad & t>0 \\\\\nu(b,t) &= 0, \\quad & t>0 \\\\\nu(r,0) &= 0, & a\\le r\\le b\n\\end{align*}\n\nShow that the solution has the form\n$$\nT(r,t) = \\frac{a T_0}{r} \\left[\\frac{b-r}{b-a} - \\sum_{N=1}^\\infty A_N e^{-\\kappa\\lambda^2 t} \\sin\\left(\\frac{r-a}{b-a}N\\pi\\right) \\right]\n$$\nwhere $\\lambda(b-a)=N\\pi$. 
Evaluate the Fourier coefficients $A_N$.", "_____no_output_____" ] ], [ [ "%run scripts/show_solutions.py W07_ex1_partb", "_____no_output_____" ] ], [ [ "### Part c)\n\nModify the 1D solver from the Explicit-Parabolic Solver workbook so that it is solving the spherically symmetric form of the heat conduction equation,\n$$\n\\pdderiv{u}{r} + \\frac{2}{r}\\pderiv{u}{r} = \\frac1\\kappa\\pderiv{u}{t}.\n$$\n\nRemember that you will need to discretise the first derivative $\\pderiv{u}{r}$ using the central 2nd order finite difference approximation and will then need to find the coefficients for the spherical form of the FTCS scheme.\n\nUse this solver to solve the problem on an annulus where $a=0.1$, $b=1$ and $T_0=100$ Celcius. Compare your solution with the analytical solution from part (b) \n\n", "_____no_output_____" ] ], [ [ "%run scripts/show_solutions.py W07_ex1_partc", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb7e9f21587d2bf4d9eb068c0880691b477cf167
21,824
ipynb
Jupyter Notebook
03_Applied Machine Learning in Python/Week_2/Assignment+2_solved.ipynb
vblacklion/03_Applied-Data-Science-with-Python-Specialization
7880eaa7f4042ff3f0b4a690d09efba9f34a02cd
[ "MIT" ]
null
null
null
03_Applied Machine Learning in Python/Week_2/Assignment+2_solved.ipynb
vblacklion/03_Applied-Data-Science-with-Python-Specialization
7880eaa7f4042ff3f0b4a690d09efba9f34a02cd
[ "MIT" ]
null
null
null
03_Applied Machine Learning in Python/Week_2/Assignment+2_solved.ipynb
vblacklion/03_Applied-Data-Science-with-Python-Specialization
7880eaa7f4042ff3f0b4a690d09efba9f34a02cd
[ "MIT" ]
null
null
null
42.625
565
0.61469
[ [ [ "---\n\n_You are currently looking at **version 1.5** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-machine-learning/resources/bANLa) course resource._\n\n---", "_____no_output_____" ], [ "# Assignment 2\n\nIn this assignment you'll explore the relationship between model complexity and generalization performance, by adjusting key parameters of various supervised learning models. Part 1 of this assignment will look at regression and Part 2 will look at classification.\n\n## Part 1 - Regression", "_____no_output_____" ], [ "First, run the following block to set up the variables needed for later sections.", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nfrom sklearn.model_selection import train_test_split\n\n\nnp.random.seed(0)\nn = 15\nx = np.linspace(0,10,n) + np.random.randn(n)/5\ny = np.sin(x)+x/6 + np.random.randn(n)/10\n\n\nX_train, X_test, y_train, y_test = train_test_split(x, y, random_state=0)\n\n# You can use this function to help you visualize the dataset by\n# plotting a scatterplot of the data points\n# in the training and test sets.\n#def part1_scatter():\n #import matplotlib.pyplot as plt\n #%matplotlib notebook\n #plt.figure()\n #plt.scatter(X_train, y_train, label='training data')\n #plt.scatter(X_test, y_test, label='test data')\n #plt.legend(loc=4);\n \n \n# NOTE: Uncomment the function below to visualize the data, but be sure \n# to **re-comment it before submitting this assignment to the autograder**. \n# part1_scatter()", "_____no_output_____" ] ], [ [ "### Question 1\n\nWrite a function that fits a polynomial LinearRegression model on the *training data* `X_train` for degrees 1, 3, 6, and 9. (Use PolynomialFeatures in sklearn.preprocessing to create the polynomial features and then fit a linear regression model) For each model, find 100 predicted values over the interval x = 0 to 10 (e.g. `np.linspace(0,10,100)`) and store this in a numpy array. 
The first row of this array should correspond to the output from the model trained on degree 1, the second row degree 3, the third row degree 6, and the fourth row degree 9.\n\n<img src=\"readonly/polynomialreg1.png\" style=\"width: 1000px;\"/>\n\nThe figure above shows the fitted models plotted on top of the original data (using `plot_one()`).\n\n<br>\n*This function should return a numpy array with shape `(4, 100)`*", "_____no_output_____" ] ], [ [ "def answer_one():\n from sklearn.linear_model import LinearRegression\n from sklearn.preprocessing import PolynomialFeatures\n \n result = []\n degrees = [1, 3, 6, 9]\n for idx, degree in enumerate(degrees):\n poly = PolynomialFeatures(degree=degree)\n X_poly = poly.fit_transform(X_train.reshape(-1,1))\n linreg = LinearRegression().fit(X_poly, y_train)\n pred = poly.fit_transform(np.linspace(0,10,100).reshape(-1,1))\n result.append(linreg.predict(pred))\n \n result = np.array(result)\n \n return result# Return your answer\nanswer_one()", "_____no_output_____" ], [ "# feel free to use the function plot_one() to replicate the figure \n# from the prompt once you have completed question one\n#def plot_one(degree_predictions):\n# import matplotlib.pyplot as plt\n# %matplotlib notebook\n# plt.figure(figsize=(10,5))\n# plt.plot(X_train, y_train, 'o', label='training data', markersize=10)\n# plt.plot(X_test, y_test, 'o', label='test data', markersize=10)\n# for i,degree in enumerate([1,3,6,9]):\n# plt.plot(np.linspace(0,10,100), degree_predictions[i], alpha=0.8, lw=2, label='degree={}'.format(degree))\n# plt.ylim(-1,2.5)\n# plt.legend(loc=4)\n#\n#plot_one(answer_one())", "_____no_output_____" ] ], [ [ "### Question 2\n\nWrite a function that fits a polynomial LinearRegression model on the training data `X_train` for degrees 0 through 9. For each model compute the $R^2$ (coefficient of determination) regression score on the training data as well as the the test data, and return both of these arrays in a tuple.\n\n*This function should return one tuple of numpy arrays `(r2_train, r2_test)`. Both arrays should have shape `(10,)`*", "_____no_output_____" ] ], [ [ "def answer_two():\n from sklearn.linear_model import LinearRegression\n from sklearn.preprocessing import PolynomialFeatures\n from sklearn.metrics.regression import r2_score\n\n r2_train = np.zeros(10)\n r2_test = np.zeros(10)\n \n for degree in range(10):\n poly = PolynomialFeatures(degree=degree)\n poly_train = poly.fit_transform(X_train.reshape(-1,1))\n linreg = LinearRegression().fit(poly_train, y_train) \n r2_train[degree] = linreg.score(poly_train, y_train);\n \n poly_test = poly.fit_transform(X_test.reshape(-1,1))\n r2_test[degree] = linreg.score(poly_test, y_test)\n\n return (r2_train, r2_test)# Your answer here\nanswer_two()", "_____no_output_____" ] ], [ [ "### Question 3\n\nBased on the $R^2$ scores from question 2 (degree levels 0 through 9), what degree level corresponds to a model that is underfitting? What degree level corresponds to a model that is overfitting? What choice of degree level would provide a model with good generalization performance on this dataset? \n\nHint: Try plotting the $R^2$ scores from question 2 to visualize the relationship between degree level and $R^2$. Remember to comment out the import matplotlib line before submission.\n\n*This function should return one tuple with the degree values in this order: `(Underfitting, Overfitting, Good_Generalization)`. 
There might be multiple correct solutions, however, you only need to return one possible solution, for example, (1,2,3).* ", "_____no_output_____" ] ], [ [ "def answer_three():\n \n r2_scores = answer_two()\n df = pd.DataFrame({'training_score':r2_scores[0], 'test_score':r2_scores[1]})\n df['mean'] = df.mean(axis=1)\n df['diff'] = df['training_score'] - df['test_score']\n \n df = df.sort_values(by=['mean'], ascending=False)\n good = df.index[0]\n \n df = df.sort_values(by=['diff'], ascending=False)\n ofit = df.index[0]\n \n df = df.sort_values(by=['training_score'])\n ufit = df.index[0]\n \n return (ufit, ofit, good)# Return your answer\nanswer_three()", "_____no_output_____" ] ], [ [ "### Question 4\n\nTraining models on high degree polynomial features can result in overly complex models that overfit, so we often use regularized versions of the model to constrain model complexity, as we saw with Ridge and Lasso linear regression.\n\nFor this question, train two models: a non-regularized LinearRegression model (default parameters) and a regularized Lasso Regression model (with parameters `alpha=0.01`, `max_iter=10000`) both on polynomial features of degree 12. Return the $R^2$ score for both the LinearRegression and Lasso model's test sets.\n\n*This function should return one tuple `(LinearRegression_R2_test_score, Lasso_R2_test_score)`*", "_____no_output_____" ] ], [ [ "def answer_four():\n from sklearn.preprocessing import PolynomialFeatures\n from sklearn.linear_model import Lasso, LinearRegression\n from sklearn.metrics.regression import r2_score\n\n poly = PolynomialFeatures(degree=12)\n \n X_train_poly = poly.fit_transform(X_train.reshape(-1,1))\n X_test_poly = poly.fit_transform(X_test.reshape(-1,1))\n linreg = LinearRegression().fit(X_train_poly, y_train)\n LinearRegression_R2_test_score = linreg.score(X_test_poly, y_test)\n lasso = Lasso(alpha=0.01, max_iter = 10000).fit(X_train_poly, y_train)\n Lasso_R2_test_score = lasso.score(X_test_poly, y_test)\n \n return (LinearRegression_R2_test_score, Lasso_R2_test_score) # Your answer here\nanswer_four()", "_____no_output_____" ] ], [ [ "## Part 2 - Classification\n\nHere's an application of machine learning that could save your life! For this section of the assignment we will be working with the [UCI Mushroom Data Set](http://archive.ics.uci.edu/ml/datasets/Mushroom?ref=datanews.io) stored in `readonly/mushrooms.csv`. The data will be used to train a model to predict whether or not a mushroom is poisonous. The following attributes are provided:\n\n*Attribute Information:*\n\n1. cap-shape: bell=b, conical=c, convex=x, flat=f, knobbed=k, sunken=s \n2. cap-surface: fibrous=f, grooves=g, scaly=y, smooth=s \n3. cap-color: brown=n, buff=b, cinnamon=c, gray=g, green=r, pink=p, purple=u, red=e, white=w, yellow=y \n4. bruises?: bruises=t, no=f \n5. odor: almond=a, anise=l, creosote=c, fishy=y, foul=f, musty=m, none=n, pungent=p, spicy=s \n6. gill-attachment: attached=a, descending=d, free=f, notched=n \n7. gill-spacing: close=c, crowded=w, distant=d \n8. gill-size: broad=b, narrow=n \n9. gill-color: black=k, brown=n, buff=b, chocolate=h, gray=g, green=r, orange=o, pink=p, purple=u, red=e, white=w, yellow=y \n10. stalk-shape: enlarging=e, tapering=t \n11. stalk-root: bulbous=b, club=c, cup=u, equal=e, rhizomorphs=z, rooted=r, missing=? \n12. stalk-surface-above-ring: fibrous=f, scaly=y, silky=k, smooth=s \n13. stalk-surface-below-ring: fibrous=f, scaly=y, silky=k, smooth=s \n14. 
stalk-color-above-ring: brown=n, buff=b, cinnamon=c, gray=g, orange=o, pink=p, red=e, white=w, yellow=y \n15. stalk-color-below-ring: brown=n, buff=b, cinnamon=c, gray=g, orange=o, pink=p, red=e, white=w, yellow=y \n16. veil-type: partial=p, universal=u \n17. veil-color: brown=n, orange=o, white=w, yellow=y \n18. ring-number: none=n, one=o, two=t \n19. ring-type: cobwebby=c, evanescent=e, flaring=f, large=l, none=n, pendant=p, sheathing=s, zone=z \n20. spore-print-color: black=k, brown=n, buff=b, chocolate=h, green=r, orange=o, purple=u, white=w, yellow=y \n21. population: abundant=a, clustered=c, numerous=n, scattered=s, several=v, solitary=y \n22. habitat: grasses=g, leaves=l, meadows=m, paths=p, urban=u, waste=w, woods=d\n\n<br>\n\nThe data in the mushrooms dataset is currently encoded with strings. These values will need to be encoded to numeric to work with sklearn. We'll use pd.get_dummies to convert the categorical variables into indicator variables. ", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nfrom sklearn.model_selection import train_test_split\n\n\nmush_df = pd.read_csv('mushrooms.csv')\nmush_df2 = pd.get_dummies(mush_df)\n\nX_mush = mush_df2.iloc[:,2:]\ny_mush = mush_df2.iloc[:,1]\n\n# use the variables X_train2, y_train2 for Question 5\nX_train2, X_test2, y_train2, y_test2 = train_test_split(X_mush, y_mush, random_state=0)\n\n# For performance reasons in Questions 6 and 7, we will create a smaller version of the\n# entire mushroom dataset for use in those questions. For simplicity we'll just re-use\n# the 25% test split created above as the representative subset.\n#\n# Use the variables X_subset, y_subset for Questions 6 and 7.\nX_subset = X_test2\ny_subset = y_test2", "_____no_output_____" ] ], [ [ "### Question 5\n\nUsing `X_train2` and `y_train2` from the preceeding cell, train a DecisionTreeClassifier with default parameters and random_state=0. What are the 5 most important features found by the decision tree?\n\nAs a reminder, the feature names are available in the `X_train2.columns` property, and the order of the features in `X_train2.columns` matches the order of the feature importance values in the classifier's `feature_importances_` property. \n\n*This function should return a list of length 5 containing the feature names in descending order of importance.*\n\n*Note: remember that you also need to set random_state in the DecisionTreeClassifier.*", "_____no_output_____" ] ], [ [ "def answer_five():\n from sklearn.tree import DecisionTreeClassifier\n\n dt_clf = DecisionTreeClassifier().fit(X_train2, y_train2)\n \n feature_names = []\n \n # Get index of importance values since their order is the same with feature columns\n for index, importance in enumerate(dt_clf.feature_importances_):\n # Add importance so we can further order this list, and add feature name with index\n feature_names.append([importance, X_train2.columns[index]])\n \n # Descending sort\n feature_names.sort(reverse=True)\n # Turn in to a numpy array\n feature_names = np.array(feature_names)\n # Select only feature names\n feature_names = feature_names[:5,1]\n # Turn back to python list\n feature_names = feature_names.tolist()\n \n return feature_names # Your answer here\nanswer_five()", "_____no_output_____" ] ], [ [ "### Question 6\n\nFor this question, we're going to use the `validation_curve` function in `sklearn.model_selection` to determine training and test scores for a Support Vector Classifier (`SVC`) with varying parameter values. 
Recall that the validation_curve function, in addition to taking an initialized unfitted classifier object, takes a dataset as input and does its own internal train-test splits to compute results.\n\n**Because creating a validation curve requires fitting multiple models, for performance reasons this question will use just a subset of the original mushroom dataset: please use the variables X_subset and y_subset as input to the validation curve function (instead of X_mush and y_mush) to reduce computation time.**\n\nThe initialized unfitted classifier object we'll be using is a Support Vector Classifier with radial basis kernel. So your first step is to create an `SVC` object with default parameters (i.e. `kernel='rbf', C=1`) and `random_state=0`. Recall that the kernel width of the RBF kernel is controlled using the `gamma` parameter. \n\nWith this classifier, and the dataset in X_subset, y_subset, explore the effect of `gamma` on classifier accuracy by using the `validation_curve` function to find the training and test scores for 6 values of `gamma` from `0.0001` to `10` (i.e. `np.logspace(-4,1,6)`). Recall that you can specify what scoring metric you want validation_curve to use by setting the \"scoring\" parameter. In this case, we want to use \"accuracy\" as the scoring metric.\n\nFor each level of `gamma`, `validation_curve` will fit 3 models on different subsets of the data, returning two 6x3 (6 levels of gamma x 3 fits per level) arrays of the scores for the training and test sets.\n\nFind the mean score across the three models for each level of `gamma` for both arrays, creating two arrays of length 6, and return a tuple with the two arrays.\n\ne.g.\n\nif one of your array of scores is\n\n array([[ 0.5, 0.4, 0.6],\n [ 0.7, 0.8, 0.7],\n [ 0.9, 0.8, 0.8],\n [ 0.8, 0.7, 0.8],\n [ 0.7, 0.6, 0.6],\n [ 0.4, 0.6, 0.5]])\n \nit should then become\n\n array([ 0.5, 0.73333333, 0.83333333, 0.76666667, 0.63333333, 0.5])\n\n*This function should return one tuple of numpy arrays `(training_scores, test_scores)` where each array in the tuple has shape `(6,)`.*", "_____no_output_____" ] ], [ [ "def answer_six():\n from sklearn.svm import SVC\n from sklearn.model_selection import validation_curve\n\n svc = SVC(kernel='rbf', C=1, random_state=0)\n gamma = np.logspace(-4,1,6)\n train_scores, test_scores = validation_curve(svc, X_subset, y_subset,\n param_name='gamma',\n param_range=gamma,\n scoring='accuracy')\n\n scores = (train_scores.mean(axis=1), test_scores.mean(axis=1))\n \n return scores # Your answer here\nanswer_six()", "_____no_output_____" ] ], [ [ "### Question 7\n\nBased on the scores from question 6, what gamma value corresponds to a model that is underfitting (and has the worst test set accuracy)? What gamma value corresponds to a model that is overfitting (and has the worst test set accuracy)? What choice of gamma would be the best choice for a model with good generalization performance on this dataset (high accuracy on both training and test set)? \n\nHint: Try plotting the scores from question 6 to visualize the relationship between gamma and accuracy. 
Remember to comment out the import matplotlib line before submission.\n\n*This function should return one tuple with the degree values in this order: `(Underfitting, Overfitting, Good_Generalization)` Please note there is only one correct solution.*", "_____no_output_____" ] ], [ [ "def answer_seven():\n \n scores = answer_six()\n df = pd.DataFrame({'training_score':scores[0], 'test_score':scores[1]})\n df['mean'] = df.mean(axis=1)\n df['diff'] = df['training_score'] - df['test_score']\n \n df = df.sort_values(by=['mean'], ascending=False)\n good = df.index[0]\n \n df = df.sort_values(by=['diff'], ascending=False)\n ofit = df.index[0]\n \n df = df.sort_values(by=['training_score'])\n ufit = df.index[0]\n \n gamma = np.logspace(-4,1,6)\n ufit = gamma[ufit]\n ofit = gamma[ofit]\n good = round(gamma[good],1)\n result = (ufit, ofit, good)\n \n return result # Return your answer\nanswer_seven()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb7eb3d020553242bfb214c34cda7429b1b98c23
6,304
ipynb
Jupyter Notebook
tushare/KDJ Calculation logic.ipynb
phosphoric/databook
9317bd835892f0198168ed760526f4670e67d03f
[ "Apache-2.0" ]
20
2018-07-27T15:14:44.000Z
2022-03-10T06:44:46.000Z
tushare/KDJ Calculation logic.ipynb
openthings/databook-old
3b728f444f6c46e5c5d7f219cdf4d6041895b910
[ "Apache-2.0" ]
1
2020-11-18T22:15:54.000Z
2020-11-18T22:15:54.000Z
tushare/KDJ Calculation logic.ipynb
openthings/databook-old
3b728f444f6c46e5c5d7f219cdf4d6041895b910
[ "Apache-2.0" ]
19
2018-07-27T07:42:22.000Z
2021-05-12T01:36:10.000Z
30.307692
119
0.461453
[ [ [ "from pymongo import MongoClient\nimport pandas as pd\nimport datetime\n\n# Open Database and find history data collection\nclient = MongoClient()\ndb = client.test_database\nshdaily = db.indexdata\n\n# KDJ calculation formula\ndef KDJCalculation(K1, D1, high, low, close):\n # input last K1, D1, max value, min value and current close value\n #设定KDJ基期值\n #count = 9\n #设定k、d平滑因子a、b,不过目前已经约定俗成,固定为1/3\n a = 1.0/3\n b = 1.0/3\n # 取得过去count天的最低价格\n low_price = low #low.min() #min(list1)\n # 取得过去count天的最高价格\n high_price = high #high.max() #max(list1)\n # 取得当日收盘价格\n current_close = close\n if high_price!=low_price:\n #计算未成熟随机值RSV(n)=(Ct-Ln)/(Hn-Ln)×100\n RSV = (current_close-low_price)/(high_price-low_price)*100\n else:\n RSV = 50\n #当日K值=(1-a)×前一日K值+a×当日RSV\n K2=(1-a)*K1+a*RSV\n #当日D值=(1-a)×前一日D值+a×当日K值\n D2=(1-b)*D1+b*K2\n #计算J值\n J2 = 3*K2-2*D2\n #log.info(\"Daily K1: %s, D1: %s, K2: %s, D2: %s, J2: %s\" % (K1,D1,K2,D2,J2))\n return K1,D1,K2,D2,J2\n\n\n\n# Put the first dataset in\n\n\n\n# List the data \n# initial Values\nK1 = 50\nD1 = 50\n\n# for each day, calculate data and insert into db\nfor d in shdaily.find()[:10]:\n date = d['date']\n datalist = pd.DataFrame(list(shdaily.find({'date':{\"$lte\": date}}).sort('date', -1)))\n data = datalist[:9]\n \n # get previous KDJ data from database\n K1 = data.ix[1]['KDJ_K']\n D1 = data.ix[1]['KDJ_D']\n \n high = data['high'].values\n low = data['low'].values\n close = data[:1]['close'].values\n K1,D1,K2,D2,J2 = KDJCalculation(K1,D1,max(high),min(low),close)\n d['KDJ_K'] = K2[0]\n d['KDJ_D'] = D2[0]\n d['KDJ_J'] = J2[0]\n \n# K1 = K2\n# D1 = D2\n print d\n\n\n#datalist = pd.DataFrame(list(shdaily.find().sort('date', -1)))\n\n#date1 = datetime.strptime(\"01/01/16\", \"%d/%m/%y\")\n\n\n# List out the data before or equal a specific date\n#list(shdaily.find({'date':{\"$lte\":'2016-02-08'}}).sort('date', -1))\n\n\n# Get last day KDJ data from database\ndatalist = pd.DataFrame(list(shdaily.find({'date':{\"$lte\": '2016-02-10'}}).sort('date', -1)))\ndata = datalist.ix[1]\ndata['KDJ_K']\n\n\n# Save data to db \n\n\n# data = datalist[:9]\n\n# data\n\n\n\n# K1 = 50\n# D1 = 50\n# high = data['high'].values\n# low = data['low'].values\n# close = data[:1]['close'].values\n\n# K1,D1,K2,D2,J2 = KDJCalculation(K1,D1,max(high),min(low),close)\n\n", "_____no_output_____" ], [ "# Another KDJ Calculation based on dataframe\ndef CalculateKDJ(stock_data):\n # Initiate KDJ parameters\n endday = pd.datetime.today()\n N1= 9\n N2= 3\n N3= 3\n # Perform calculation\n #stock_data = get_price(stock, end_date=endday)\n low_list = pd.rolling_min(stock_data['LowPx'], N1)\n low_list.fillna(value=pd.expanding_min(stock_data['LowPx']), inplace=True)\n high_list = pd.rolling_max(stock_data['HighPx'], N1)\n high_list.fillna(value=pd.expanding_max(stock_data['HighPx']), inplace=True)\n #rsv = (stock_data['ClosingPx'] - low_list) / (high_list - low_list) * 100\n \n rsv = (stock_data['ClosingPx'] - stock_data['LowPx']) / (stock_data['HighPx'] - stock_data['LowPx']) * 100\n stock_data['KDJ_K'] = pd.ewma(rsv, com = N2)\n stock_data['KDJ_D'] = pd.ewma(stock_data['KDJ_K'], com = N3)\n stock_data['KDJ_J'] = 3 * stock_data['KDJ_K'] - 2 * stock_data['KDJ_D']\n KDJ = stock_data[['KDJ_K','KDJ_D','KDJ_J']]\n return KDJ", "_____no_output_____" ], [ "\n\n", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
cb7ebd21213256a0fee50c0f3ec9bded22a69af6
55,697
ipynb
Jupyter Notebook
code/notebooks/keras/02-Multilayer_Perceptron.ipynb
gavinln/ml-on-aws
82fc4ae29a7a07d413dec5f33ab4082b28c76dfa
[ "Apache-2.0" ]
null
null
null
code/notebooks/keras/02-Multilayer_Perceptron.ipynb
gavinln/ml-on-aws
82fc4ae29a7a07d413dec5f33ab4082b28c76dfa
[ "Apache-2.0" ]
null
null
null
code/notebooks/keras/02-Multilayer_Perceptron.ipynb
gavinln/ml-on-aws
82fc4ae29a7a07d413dec5f33ab4082b28c76dfa
[ "Apache-2.0" ]
null
null
null
103.718808
25,644
0.808966
[ [ [ "from keras.models import Sequential\nfrom keras.layers import Dense, Dropout\nfrom keras.optimizers import RMSprop\nfrom keras.utils import to_categorical\n\nfrom keras.datasets import mnist\n\nimport numpy as np\n\nfrom matplotlib.figure import Figure\nimport matplotlib.pyplot as plt\nimport matplotlib.cm as cm\n%matplotlib inline\n\nfrom sklearn.metrics import confusion_matrix\n\nimport pandas as pd\nimport seaborn as sns", "/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\nUsing TensorFlow backend.\n/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/matplotlib/__init__.py:1067: UserWarning: Duplicate key in file \"/home/ubuntu/.config/matplotlib/matplotlibrc\", line #2\n (fname, cnt))\n/home/ubuntu/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/matplotlib/__init__.py:1067: UserWarning: Duplicate key in file \"/home/ubuntu/.config/matplotlib/matplotlibrc\", line #3\n (fname, cnt))\n" ] ], [ [ "Load the mnist training and test data sets", "_____no_output_____" ] ], [ [ "(X_train, y_train), (X_test, y_test) = mnist.load_data()", "Downloading data from https://s3.amazonaws.com/img-datasets/mnist.npz\n11493376/11490434 [==============================] - 0s 0us/step\n" ] ], [ [ "Display the first five images and the labels", "_____no_output_____" ] ], [ [ "def plot_gray_image(img, title, ax):\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n ax.imshow(img, cmap=cm.gray)\n ax.set_title(title)\n \nfig, ax_list = plt.subplots(nrows=1, ncols=5)\nfor idx, ax in enumerate(ax_list):\n plot_gray_image(X_train[idx], y_train[idx], ax)", "_____no_output_____" ] ], [ [ "Flatten the two dimensional input data and center it around zero", "_____no_output_____" ] ], [ [ "img_size = X_train.shape[1] * X_train.shape[2]\nX_train_flat = X_train.reshape(-1, img_size)\nX_test_flat = X_test.reshape(-1, img_size)\n\nX_train_flat = X_train_flat/255\nX_test_flat = X_test_flat/255", "_____no_output_____" ], [ "num_classes = 10\ny_train_cat = to_categorical(y_train, num_classes)\ny_test_cat = to_categorical(y_test, num_classes)", "_____no_output_____" ], [ "batch_size = 128\nepochs = 10\n\nmodel = Sequential()\nmodel.add(Dense(512, activation='relu', input_shape=(img_size,)))\nmodel.add(Dropout(0.2))\nmodel.add(Dense(512, activation='relu'))\nmodel.add(Dropout(0.2))\nmodel.add(Dense(num_classes, activation='softmax'))\nmodel.compile(loss='categorical_crossentropy',\n optimizer=RMSprop(),\n metrics=['accuracy'])\nmodel.summary()", "_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_10 (Dense) (None, 512) 401920 \n_________________________________________________________________\ndropout_7 (Dropout) (None, 512) 0 \n_________________________________________________________________\ndense_11 (Dense) (None, 512) 262656 \n_________________________________________________________________\ndropout_8 (Dropout) (None, 512) 0 \n_________________________________________________________________\ndense_12 (Dense) (None, 10) 5130 \n=================================================================\nTotal params: 669,706\nTrainable params: 669,706\nNon-trainable params: 
0\n_________________________________________________________________\n" ], [ "history = model.fit(X_train_flat, y_train_cat,\n batch_size=batch_size,\n epochs=epochs,\n verbose=1,\n validation_data=(X_test_flat, y_test_cat))\nscore = model.evaluate(X_test_flat, y_test_cat, verbose=0)\nprint('Test loss:', score[0])\nprint('Test accuracy:', score[1])", "Train on 60000 samples, validate on 10000 samples\nEpoch 1/10\n60000/60000 [==============================] - 3s 56us/step - loss: 0.2498 - acc: 0.9229 - val_loss: 0.1218 - val_acc: 0.9610\nEpoch 2/10\n60000/60000 [==============================] - 2s 40us/step - loss: 0.1029 - acc: 0.9689 - val_loss: 0.0932 - val_acc: 0.9713\nEpoch 3/10\n60000/60000 [==============================] - 2s 40us/step - loss: 0.0759 - acc: 0.9771 - val_loss: 0.0796 - val_acc: 0.9748\nEpoch 4/10\n60000/60000 [==============================] - 2s 40us/step - loss: 0.0608 - acc: 0.9813 - val_loss: 0.0708 - val_acc: 0.9808\nEpoch 5/10\n60000/60000 [==============================] - 2s 40us/step - loss: 0.0492 - acc: 0.9848 - val_loss: 0.0754 - val_acc: 0.9798\nEpoch 6/10\n60000/60000 [==============================] - 2s 41us/step - loss: 0.0434 - acc: 0.9868 - val_loss: 0.0857 - val_acc: 0.9792\nEpoch 7/10\n60000/60000 [==============================] - 3s 42us/step - loss: 0.0379 - acc: 0.9889 - val_loss: 0.0859 - val_acc: 0.9802\nEpoch 8/10\n60000/60000 [==============================] - 3s 43us/step - loss: 0.0349 - acc: 0.9893 - val_loss: 0.0967 - val_acc: 0.9787\nEpoch 9/10\n60000/60000 [==============================] - 2s 40us/step - loss: 0.0305 - acc: 0.9908 - val_loss: 0.0859 - val_acc: 0.9822\nEpoch 10/10\n60000/60000 [==============================] - 2s 40us/step - loss: 0.0296 - acc: 0.9913 - val_loss: 0.0786 - val_acc: 0.9830\nTest loss: 0.07859711428600613\nTest accuracy: 0.983\n" ], [ "y_predict = model.predict_classes(X_test_flat)", "_____no_output_____" ] ], [ [ "Display numbers where the prediction is wrong", "_____no_output_____" ] ], [ [ "err_idx = np.where(y_test != y_predict)[0]", "_____no_output_____" ], [ "err_plot_size = 5\nfig, ax_list = plt.subplots(nrows=1, ncols=err_plot_size)\nfig.set_size_inches(w=6, h=2)\nfig.suptitle('a - actual, p - predicted')\nfor idx, ax in enumerate(ax_list):\n data_idx = err_idx[idx]\n msg = 'a {}, p {}'.format(y_test[data_idx], y_predict[data_idx])\n plot_gray_image(X_test[data_idx], msg, ax)", "_____no_output_____" ], [ "cmatrix = confusion_matrix(y_test, y_predict)\ndf_cm = pd.DataFrame(cmatrix)\ndf_cm", "_____no_output_____" ], [ "fig, ax = plt.subplots(figsize=(8, 6))\nsns.heatmap(df_cm, annot=True, fmt='.0f', ax=ax)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
cb7ecd1a4cefa555fbe75d811c128dbdb5a76e39
384,956
ipynb
Jupyter Notebook
FeatureEngineering/Section-03-Variable-Characteristics/03.3-Rare-Labels.ipynb
wiflore/Deployment-Machine-Learning-Models
52dd12801ba36ae20354a92c0b035cc3c6f8ce53
[ "MIT" ]
null
null
null
FeatureEngineering/Section-03-Variable-Characteristics/03.3-Rare-Labels.ipynb
wiflore/Deployment-Machine-Learning-Models
52dd12801ba36ae20354a92c0b035cc3c6f8ce53
[ "MIT" ]
null
null
null
FeatureEngineering/Section-03-Variable-Characteristics/03.3-Rare-Labels.ipynb
wiflore/Deployment-Machine-Learning-Models
52dd12801ba36ae20354a92c0b035cc3c6f8ce53
[ "MIT" ]
null
null
null
355.781885
52,908
0.927425
[ [ [ "# Rare Labels\n\n## Labels that occur rarely\n\nCategorical variables are those which values are selected from a group of categories, also called labels. Different labels appear in the dataset with different frequencies. Some categories appear a lot in the dataset, whereas some other categories appear only in a few number of observations.\n\nFor example, in a dataset with information about loan applicants where one of the variables is \"city\" where the applicant lives, cities like 'New York' may appear a lot in the data because New York has a huge population, whereas smaller towns like 'Leavenworth' will appear only on a few occasions (population < 2000 people), because the population there is very small. A borrower is more likely to live in New York, because far more people live in New York.\n\nIn fact, categorical variables often contain a few dominant labels that account for the majority of the observations and a large number of labels that appear only seldom.\n\n\n### Are Rare Labels in a categorical variable a problem?\n\nRare values can add a lot of information or none at all. For example, consider a stockholder meeting where each person can vote in proportion to their number of shares. One of the shareholders owns 50% of the stock, and the other 999 shareholders own the remaining 50%. The outcome of the vote is largely influenced by the shareholder who holds the majority of the stock. The remaining shareholders may have an impact collectively, but they have almost no impact individually. \n\nThe same occurs in real life datasets. The label that is over-represented in the dataset tends to dominate the outcome, and those that are under-represented may have no impact individually, but could have an impact if considered collectively.\n\nMore specifically,\n\n- Rare values in categorical variables tend to cause over-fitting, particularly in tree based methods.\n\n- A big number of infrequent labels adds noise, with little information, therefore causing over-fitting.\n\n- Rare labels may be present in training set, but not in test set, therefore causing over-fitting to the train set.\n\n- Rare labels may appear in the test set, and not in the train set. Thus, the machine learning model will not know how to evaluate it. \n\n\n**Note** Sometimes rare values, are indeed important. For example, if we are building a model to predict fraudulent loan applications, which are by nature rare, then a rare value in a certain variable, may be indeed very predictive. 
This rare value could be telling us that the observation is most likely a fraudulent application, and therefore we would choose not to ignore it.", "_____no_output_____" ], [ "## In this Demo:\n\nWe will:\n\n- Learn to identify rare labels in a dataset\n- Understand how difficult it is to derive reliable information from them.\n- Visualise the uneven distribution of rare labels between train and test sets\n\nWe will use the House Prices dataset.\n\n- To download the dataset please visit the lecture **Datasets** in **Section 1** of the course.", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\n\nimport matplotlib.pyplot as plt\n\n# to separate data intro train and test sets\nfrom sklearn.model_selection import train_test_split", "_____no_output_____" ], [ "# let's load the dataset with the variables\n# we need for this demo\n\n# Variable definitions:\n\n# Neighborhood: Physical locations within Ames city limits\n# Exterior1st: Exterior covering on house\n# Exterior2nd: Exterior covering on house (if more than one material)\n\nuse_cols = ['Neighborhood', 'Exterior1st', 'Exterior2nd', 'SalePrice']\n\ndata = pd.read_csv('../houseprice.csv', usecols=use_cols)\n\ndata.head()", "_____no_output_____" ], [ "# let's look at the different number of labels\n# in each variable (cardinality)\n\n# these are the loaded categorical variables\ncat_cols = ['Neighborhood', 'Exterior1st', 'Exterior2nd']\n\nfor col in cat_cols:\n print('variable: ', col, ' number of labels: ', data[col].nunique())\n\nprint('total houses: ', len(data))", "variable: Neighborhood number of labels: 25\nvariable: Exterior1st number of labels: 15\nvariable: Exterior2nd number of labels: 16\ntotal houses: 1460\n" ] ], [ [ "The variable 'Neighborhood' shows 25 different values, 'Exterior1st' shows 15 different categories, and 'Exterior2nd' shows 16 different categories.", "_____no_output_____" ] ], [ [ "# let's plot how frequently each label\n# appears in the dataset\n\n# in other words, the percentage of houses in the data\n# with each label\n\ntotal_houses = len(data)\n\n# for each categorical variable\nfor col in cat_cols:\n\n # count the number of houses per category\n # and divide by total houses\n\n # aka percentage of houses per category\n\n temp_df = pd.Series(data[col].value_counts() / total_houses)\n\n # make plot with the above percentages\n fig = temp_df.sort_values(ascending=False).plot.bar()\n fig.set_xlabel(col)\n\n # add a line at 5 % to flag the threshold for rare categories\n fig.axhline(y=0.05, color='red')\n fig.set_ylabel('Percentage of houses')\n plt.show()", "_____no_output_____" ] ], [ [ "For each of the categorical variables, some labels appear in more than 10% of the houses and many appear in less than 10% or even 5% of the houses. 
These are infrequent labels or **Rare Values** and could cause over-fitting.\n\n### How is the target, \"SalePrice\", related to these categories?\n\nIn the following cells, I want to understand the mean SalePrice per group of houses that display each categories.\n\nKeep on reading, it will become clearer.", "_____no_output_____" ] ], [ [ "# the following function calculates:\n\n# 1) the percentage of houses per category\n# 2) the mean SalePrice per category\n\n\ndef calculate_mean_target_per_category(df, var):\n\n # total number of houses\n total_houses = len(df)\n\n # percentage of houses per category\n temp_df = pd.Series(df[var].value_counts() / total_houses).reset_index()\n temp_df.columns = [var, 'perc_houses']\n\n # add the mean SalePrice\n temp_df = temp_df.merge(df.groupby([var])['SalePrice'].mean().reset_index(),\n on=var,\n how='left')\n\n return temp_df", "_____no_output_____" ], [ "# now we use the function for the variable 'Neighborhood'\ntemp_df = calculate_mean_target_per_category(data, 'Neighborhood')\ntemp_df", "_____no_output_____" ] ], [ [ "The above dataframe contains the percentage of houses that show each one of the labels in Neighborhood, and the mean SalePrice for those group of houses. In other words, ~15% of houses are in NAmes and the mean SalePrice is 145847.", "_____no_output_____" ] ], [ [ "# Now I create a function to plot of the\n# category frequency and mean SalePrice.\n\n# This will help us visualise the relationship between the\n# target and the labels of the categorical variable\n\ndef plot_categories(df, var):\n \n fig, ax = plt.subplots(figsize=(8, 4))\n plt.xticks(df.index, df[var], rotation=90)\n\n ax2 = ax.twinx()\n ax.bar(df.index, df[\"perc_houses\"], color='lightgrey')\n ax2.plot(df.index, df[\"SalePrice\"], color='green', label='Seconds')\n ax.axhline(y=0.05, color='red')\n ax.set_ylabel('percentage of houses per category')\n ax.set_xlabel(var)\n ax2.set_ylabel('Average Sale Price per category')\n plt.show()", "_____no_output_____" ], [ "plot_categories(temp_df, 'Neighborhood')", "_____no_output_____" ] ], [ [ "Houses in the 'Neighborhood' of 'NridgHt' sell at a high price, whereas houses in 'Sawyer' tend to be cheaper.\n\nHouses in the 'Neighborhood' of StoneBr have on average a high SalePrice, above 300k. However, StoneBr is present in less than 5% of the houses. Or in other words, less than 5% of the houses in the dataset are located in StoneBr.\n\nWhy is this important? Because if we do not have a lot of houses to learn from, we could be under or over-estimating the effect of StoneBr on the SalePrice.\n\nIn other words, how confident are we to generalise that most houses in StoneBr will sell for around 300k, when we only have a few houses to learn from?", "_____no_output_____" ] ], [ [ "# let's plot the remaining categorical variables\n\nfor col in cat_cols:\n \n # we plotted this variable already\n if col !='Neighborhood':\n \n # re using the functions I created\n temp_df = calculate_mean_target_per_category(data, col)\n plot_categories(temp_df, col)", "_____no_output_____" ] ], [ [ "Let's look at variable Exterior2nd: Most of the categories in Exterior2nd are present in less than 5% of houses. In addition, the \"SalePrice\" varies a lot across those rare categories. The mean value of SalePrice goes up and down over the infrequent categories. In fact, it looks quite noisy. These rare labels could indeed be very predictive, or they could be introducing noise rather than information. 
And because the labels are under-represented, we can't be sure whether they have a true impact on the house price. We could be under or over-estimating their impact due to the fact that we have information for few houses.\n\n**Note:** This plot would bring more value, if we plotted the errors of the mean SalePrice. It would give us an idea of how much the mean value of the target varies within each label. Why don't you go ahead and add the standard deviation to the plot?", "_____no_output_____" ], [ "### Rare labels: grouping under a new label\n\nOne common way of working with rare or infrequent values, is to group them under an umbrella category called 'Rare' or 'Other'. In this way, we are able to understand the \"collective\" effect of the infrequent labels on the target. See below.", "_____no_output_____" ] ], [ [ "# I will replace all the labels that appear in less than 5%\n# of the houses by the label 'rare'\n\n\ndef group_rare_labels(df, var):\n\n total_houses = len(df)\n\n # first I calculate the % of houses for each category\n temp_df = pd.Series(df[var].value_counts() / total_houses)\n\n # now I create a dictionary to replace the rare labels with the\n # string 'rare' if they are present in less than 5% of houses\n\n grouping_dict = {\n k: ('rare' if k not in temp_df[temp_df >= 0.05].index else k)\n for k in temp_df.index\n }\n\n # now I replace the rare categories\n tmp = df[var].map(grouping_dict)\n\n return tmp", "_____no_output_____" ], [ "# group rare labels in Neighborhood\n\ndata['Neighborhood_grouped'] = group_rare_labels(data, 'Neighborhood')\n\ndata[['Neighborhood', 'Neighborhood_grouped']].head(10)", "_____no_output_____" ], [ "# let's plot Neighborhood with the grouped categories\n# re-using the functions I created above\n\ntemp_df = calculate_mean_target_per_category(data, 'Neighborhood_grouped')\nplot_categories(temp_df, 'Neighborhood_grouped')", "_____no_output_____" ] ], [ [ "\"Rare\" now contains the overall influence of all the infrequent categories on the SalePrice.", "_____no_output_____" ] ], [ [ "# let's plot the original Neighborhood for comparison\ntemp_df = calculate_mean_target_per_category(data, 'Neighborhood')\nplot_categories(temp_df, 'Neighborhood')", "_____no_output_____" ] ], [ [ "Only 9 categories of Neighborhood are relatively common in the dataset. The remaining ones are now grouped into 'rare' which captures the average SalePrice for all the infrequent labels.", "_____no_output_____" ] ], [ [ "# let's group and plot the remaining categorical variables\n\nfor col in cat_cols[1:]:\n \n # re using the functions I created\n data[col+'_grouped'] = group_rare_labels(data, col)\n temp_df = calculate_mean_target_per_category(data, col+'_grouped')\n plot_categories(temp_df, col+'_grouped')", "_____no_output_____" ] ], [ [ "Here is something interesting: In the variable Exterior1st, look at how all the houses with rare values are on average more expensive than the rest, except for those with VinySd.\n\nThe same is true for Exterior2nd. The rare categories seem to have had something in common.\n\n**Note:** Ideally, we would also like to have the standard deviation / inter-quantile range for the SalePrice, to get an idea of how variable the house price is for each category.", "_____no_output_____" ], [ "### Rare labels lead to uneven distribution of categories in train and test sets\n\nSimilarly to highly cardinal variables, rare or infrequent labels often land only on the training set, or only on the testing set. 
If present only in the training set, they may lead to over-fitting. If present only on the testing set, the machine learning algorithm will not know how to handle them, as they have not seen the rare labels during training. Let's explore this further.", "_____no_output_____" ] ], [ [ "# let's separate into training and testing set\nX_train, X_test, y_train, y_test = train_test_split(data[cat_cols],\n data['SalePrice'],\n test_size=0.3,\n random_state=2910)\n\nX_train.shape, X_test.shape", "_____no_output_____" ], [ "# Let's find labels present only in the training set\n# I will use X2 as example\n\nunique_to_train_set = [\n x for x in X_train['Exterior1st'].unique() if x not in X_test['Exterior1st'].unique()\n]\n\nprint(unique_to_train_set)", "['Stone', 'BrkComm', 'ImStucc', 'CBlock']\n" ] ], [ [ "There are 4 categories present in the train set and are not present in the test set.", "_____no_output_____" ] ], [ [ "# Let's find labels present only in the test set\n\nunique_to_test_set = [\n x for x in X_test['Exterior1st'].unique() if x not in X_train['Exterior1st'].unique()\n]\n\nprint(unique_to_test_set)", "['AsphShn']\n" ] ], [ [ "In this case, there is 1 rare value present in the test set only.", "_____no_output_____" ], [ "**That is all for this demonstration. I hope you enjoyed the notebook, and see you in the next one.**", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
cb7ed434eb95c90530bb33919f60c04490c9f240
20,513
ipynb
Jupyter Notebook
v1/Celsius_to_Fahrenheit.ipynb
cccadet/Tensorflow
968f2129a1b27fc61ffc937f30452430ec251275
[ "MIT" ]
null
null
null
v1/Celsius_to_Fahrenheit.ipynb
cccadet/Tensorflow
968f2129a1b27fc61ffc937f30452430ec251275
[ "MIT" ]
null
null
null
v1/Celsius_to_Fahrenheit.ipynb
cccadet/Tensorflow
968f2129a1b27fc61ffc937f30452430ec251275
[ "MIT" ]
null
null
null
39.676983
484
0.573539
[ [ [ "<a href=\"https://colab.research.google.com/github/cccadet/Tensorflow/blob/master/Celsius_to_Fahrenheit.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "##### Copyright 2018 The TensorFlow Authors.", "_____no_output_____" ] ], [ [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "_____no_output_____" ] ], [ [ "# The Basics: Training Your First Model", "_____no_output_____" ], [ "<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/examples/courses/udacity_intro_to_tensorflow_for_deep_learning/l02c01_celsius_to_fahrenheit.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l02c01_celsius_to_fahrenheit.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>", "_____no_output_____" ], [ "Welcome to this Colab where you will train your first Machine Learning model!\n\nWe'll try to keep things simple here, and only introduce basic concepts. Later Colabs will cover more advanced problems.\n\nThe problem we will solve is to convert from Celsius to Fahrenheit, where the approximate formula is:\n\n$$ f = c \\times 1.8 + 32 $$\n\n\nOf course, it would be simple enough to create a conventional Python function that directly performs this calculation, but that wouldn't be machine learning.\n\n\nInstead, we will give TensorFlow some sample Celsius values (0, 8, 15, 22, 38) and their corresponding Fahrenheit values (32, 46, 59, 72, 100).\nThen, we will train a model that figures out the above formula through the training process.", "_____no_output_____" ], [ "## Import dependencies\n\nFirst, import TensorFlow. Here, we're calling it `tf` for ease of use. We also tell it to only display errors.\n\nNext, import [NumPy](http://www.numpy.org/) as `np`. Numpy helps us to represent our data as highly performant lists.", "_____no_output_____" ] ], [ [ "from __future__ import absolute_import, division, print_function\nimport tensorflow as tf\ntf.logging.set_verbosity(tf.logging.ERROR)\n\nimport numpy as np", "_____no_output_____" ] ], [ [ "## Set up training data\n\nAs we saw before, supervised Machine Learning is all about figuring out an algorithm given a set of inputs and outputs. 
Since the task in this Codelab is to create a model that can give the temperature in Fahrenheit when given the degrees in Celsius, we create two lists `celsius_q` and `fahrenheit_a` that we can use to train our model.", "_____no_output_____" ] ], [ [ "celsius_q = np.array([-40, -10, 0, 8, 15, 22, 38], dtype=float)\nfahrenheit_a = np.array([-40, 14, 32, 46, 59, 72, 100], dtype=float)\n\nfor i,c in enumerate(celsius_q):\n print(\"{} degrees Celsius = {} degrees Fahrenheit\".format(c, fahrenheit_a[i]))", "_____no_output_____" ] ], [ [ "### Some Machine Learning terminology\n\n - **Feature** — The input(s) to our model. In this case, a single value — the degrees in Celsius.\n\n - **Labels** — The output our model predicts. In this case, a single value — the degrees in Fahrenheit.\n \n - **Example** — A pair of inputs/outputs used during training. In our case a pair of values from `celsius_q` and `fahrenheit_a` at a specific index, such as `(22,72)`.\n\n", "_____no_output_____" ], [ "## Create the model\n\nNext create the model. We will use simplest possible model we can, a Dense network. Since the problem is straightforward, this network will require only a single layer, with a single neuron. \n\n### Build a layer\n\nWe'll call the layer `l0` and create it by instantiating `tf.keras.layers.Dense` with the following configuration:\n\n* `input_shape=[1]` — This specifies that the input to this layer is a single value. That is, the shape is a one-dimensional array with one member. Since this is the first (and only) layer, that input shape is the input shape of the entire model. The single value is a floating point number, representing degrees Celsius.\n\n* `units=1` — This specifies the number of neurons in the layer. The number of neurons defines how many internal variables the layer has to try to learn how to solve the problem (more later). Since this is the final layer, it is also the size of the model's output — a single float value representing degrees Fahrenheit. (In a multi-layered network, the size and shape of the layer would need to match the `input_shape` of the next layer.)\n", "_____no_output_____" ] ], [ [ "l0 = tf.keras.layers.Dense(units=1, input_shape=[1]) ", "_____no_output_____" ] ], [ [ "### Assemble layers into the model\n\nOnce layers are defined, they need to be assembled into a model. The Sequential model definition takes a list of layers as argument, specifying the calculation order from the input to the output.\n\nThis model has just a single layer, l0.", "_____no_output_____" ] ], [ [ "model = tf.keras.Sequential([l0])", "_____no_output_____" ] ], [ [ "**Note**\n\nYou will often see the layers defined inside the model definition, rather than beforehand:\n\n```python\nmodel = tf.keras.Sequential([\n tf.keras.layers.Dense(units=1, input_shape=[1])\n])\n```", "_____no_output_____" ], [ "## Compile the model, with loss and optimizer functions\n\nBefore training, the model has to be compiled. When compiled for training, the model is given:\n\n- **Loss function** — A way of measuring how far off predictions are from the desired outcome. (The measured difference is called the \"loss\".\n\n- **Optimizer function** — A way of adjusting internal values in order to reduce the loss.\n", "_____no_output_____" ] ], [ [ "model.compile(loss='mean_squared_error',\n optimizer=tf.keras.optimizers.Adam(0.1))", "_____no_output_____" ] ], [ [ "These are used during training (`model.fit()`, below) to first calculate the loss at each point, and then improve it. 
In fact, the act of calculating the current loss of a model and then improving it is precisely what training is.\n\nDuring training, the optimizer function is used to calculate adjustments to the model's internal variables. The goal is to adjust the internal variables until the model (which is really a math function) mirrors the actual equation for converting Celsius to Fahrenheit.\n\nTensorFlow uses numerical analysis to perform this tuning, and all this complexity is hidden from you so we will not go into the details here. What is useful to know about these parameters are:\n\nThe loss function ([mean squared error](https://en.wikipedia.org/wiki/Mean_squared_error)) and the optimizer ([Adam](https://machinelearningmastery.com/adam-optimization-algorithm-for-deep-learning/)) used here are standard for simple models like this one, but many others are available. It is not important to know how these specific functions work at this point.\n\nOne part of the Optimizer you may need to think about when building your own models is the learning rate (`0.1` in the code above). This is the step size taken when adjusting values in the model. If the value is too small, it will take too many iterations to train the model. Too large, and accuracy goes down. Finding a good value often involves some trial and error, but the range is usually within 0.001 (default), and 0.1", "_____no_output_____" ], [ "## Train the model\n\nTrain the model by calling the `fit` method. \n\nDuring training, the model takes in Celsius values, performs a calculation using the current internal variables (called \"weights\") and outputs values which are meant to be the Fahrenheit equivalent. Since the weights are intially set randomly, the output will not be close to the correct value. The difference between the actual output and the desired output is calculated using the loss function, and the optimizer function directs how the weights should be adjusted. \n\nThis cycle of calculate, compare, adjust is controlled by the `fit` method. The first argument is the inputs, the second argument is the desired outputs. The `epochs` argument specifies how many times this cycle should be run, and the `verbose` argument controls how much output the method produces.", "_____no_output_____" ] ], [ [ "history = model.fit(celsius_q, fahrenheit_a, epochs=500, verbose=False)\nprint(\"Finished training the model\")", "_____no_output_____" ] ], [ [ "In later videos, we will go into more details on what actually happens here and how a Dense layer actually works internally.", "_____no_output_____" ], [ "## Display training statistics\n\nThe `fit` method returns a history object. We can use this object to plot how the loss of our model goes down after each training epoch. A high loss means that the Fahrenheit degrees the model predicts is far from the corresponding value in `fahrenheit_a`. \n\nWe'll use [Matplotlib](https://matplotlib.org/) to visualize this (you could use another tool). As you can see, our model improves very quickly at first, and then has a steady, slow improvement until it is very near \"perfect\" towards the end.\n\n", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nplt.xlabel('Epoch Number')\nplt.ylabel(\"Loss Magnitude\")\nplt.plot(history.history['loss'])", "_____no_output_____" ] ], [ [ "## Use the model to predict values\n\nNow you have a model that has been trained to learn the relationshop between `celsius_q` and `fahrenheit_a`. 
You can use the predict method to have it calculate the Fahrenheit degrees for a previously unknown Celsius degrees. \n\nSo, for example, if the Celsius value is 200, what do you think the Fahrenheit result will be? Take a guess before you run this code.", "_____no_output_____" ] ], [ [ "print(model.predict([100.0]))", "_____no_output_____" ] ], [ [ "The correct answer is $100 \\times 1.8 + 32 = 212$, so our model is doing really well.\n\n### To review\n\n\n* We created a model with a Dense layer\n* We trained it with 3500 examples (7 pairs, over 500 epochs).\n\nOur model tuned the variables (weights) in the Dense layer until it was able to return the correct Fahrenheit value for any Celsius value. (Remember, 100 Celsius was not part of our training data.)\n\n\n", "_____no_output_____" ], [ "## Looking at the layer weights\n\nFinally, let's print the internal variables of the Dense layer. ", "_____no_output_____" ] ], [ [ "print(\"These are the layer variables: {}\".format(l0.get_weights()))", "_____no_output_____" ] ], [ [ "The first variable is close to ~1.8 and the second to ~32. These values (1.8 and 32) are the actual variables in the real conversion formula.\n\nThis is really close to the values in the conversion formula. We'll explain this in an upcoming video where we show how a Dense layer works, but for a single neuron with a single input and a single output, the internal math looks the same as [the equation for a line](https://en.wikipedia.org/wiki/Linear_equation#Slope%E2%80%93intercept_form), $y = mx + b$, which has the same form as the conversion equation, $f = 1.8c + 32$.\n\nSince the form is the same, the variables should converge on the standard values of 1.8 and 32, which is exactly what happened.\n\nWith additional neurons, additional inputs, and additional outputs, the formula becomes much more complex, but the idea is the same. \n\n### A little experiment\n\nJust for fun, what if we created more Dense layers with different units, which therefore also has more variables?", "_____no_output_____" ] ], [ [ "l0 = tf.keras.layers.Dense(units=4, input_shape=[1]) \nl1 = tf.keras.layers.Dense(units=4) \nl2 = tf.keras.layers.Dense(units=1) \nmodel = tf.keras.Sequential([l0, l1, l2])\nmodel.compile(loss='mean_squared_error', optimizer=tf.keras.optimizers.Adam(0.1))\nmodel.fit(celsius_q, fahrenheit_a, epochs=500, verbose=False)\nprint(\"Finished training the model\")\nprint(model.predict([100.0]))\nprint(\"Model predicts that 100 degrees Celsius is: {} degrees Fahrenheit\".format(model.predict([100.0])))\nprint(\"These are the l0 variables: {}\".format(l0.get_weights()))\nprint(\"These are the l1 variables: {}\".format(l1.get_weights()))\nprint(\"These are the l2 variables: {}\".format(l2.get_weights()))", "_____no_output_____" ] ], [ [ "As you can see, this model is also able to predict the corresponding Fahrenheit value really well. But when you look at the variables (weights) in the `l0` and `l1` layers, they are nothing even close to ~1.8 and ~32. The added complexity hides the \"simple\" form of the conversion equation.\n\nStay tuned for the upcoming video on how Dense layers work for the explanation.", "_____no_output_____" ] ] ]
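Editor's note (supplementary sketch, not from the original Colab): the cells above stress that the single Dense neuron is just the line y = m*x + b learned by repeated "calculate the loss, adjust the weights" cycles. The plain-NumPy version below makes that cycle explicit for the same seven training pairs; the learning rate and iteration count are illustrative choices, not values taken from the notebook.

```python
import numpy as np

# Same training pairs as the Colab uses
celsius = np.array([-40, -10, 0, 8, 15, 22, 38], dtype=float)
fahrenheit = np.array([-40, 14, 32, 46, 59, 72, 100], dtype=float)

# One "neuron": prediction = w * c + b, trained by gradient descent on the MSE loss
w, b = 0.0, 0.0
lr = 1e-3                                    # small step size keeps updates stable for raw Celsius inputs
for _ in range(200_000):
    pred = w * celsius + b
    err = pred - fahrenheit
    grad_w = 2.0 * np.mean(err * celsius)    # d(MSE)/dw
    grad_b = 2.0 * np.mean(err)              # d(MSE)/db
    w -= lr * grad_w
    b -= lr * grad_b

print(f"w = {w:.3f}, b = {b:.3f}")           # converges towards 1.8 and 32
print("prediction for 100 C:", w * 100 + b)  # close to 212
```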
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
cb7ed542e98f47b3a3ae28ecc39e2168b5244b99
78,942
ipynb
Jupyter Notebook
BasicClassification.ipynb
javedsha/tensorflow-practise
8d1e0ed22cc8506835486ede7f8ab6a3f57169fc
[ "Apache-2.0" ]
null
null
null
BasicClassification.ipynb
javedsha/tensorflow-practise
8d1e0ed22cc8506835486ede7f8ab6a3f57169fc
[ "Apache-2.0" ]
null
null
null
BasicClassification.ipynb
javedsha/tensorflow-practise
8d1e0ed22cc8506835486ede7f8ab6a3f57169fc
[ "Apache-2.0" ]
null
null
null
140.216696
57,488
0.891832
[ [ [ "# First Tutorial on Tensorflow", "_____no_output_____" ] ], [ [ "import tensorflow as tf", "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\h5py\\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\n" ], [ "from tensorflow import keras", "_____no_output_____" ], [ "import numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "print(tf.__version__)", "1.10.0\n" ], [ "fashion_mnist = keras.datasets.fashion_mnist", "_____no_output_____" ], [ "(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()", "Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-labels-idx1-ubyte.gz\n32768/29515 [=================================] - 0s 1us/step\nDownloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/train-images-idx3-ubyte.gz\n26427392/26421880 [==============================] - 1s 0us/step\nDownloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-labels-idx1-ubyte.gz\n8192/5148 [===============================================] - 0s 0us/step\nDownloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/t10k-images-idx3-ubyte.gz\n4423680/4422102 [==============================] - 0s 0us/step\n" ], [ "class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', \n 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']", "_____no_output_____" ], [ "train_images.shape", "_____no_output_____" ], [ "train_labels", "_____no_output_____" ], [ "test_images.shape", "_____no_output_____" ], [ "test_labels", "_____no_output_____" ] ], [ [ "## Data Preprocessing", "_____no_output_____" ] ], [ [ "plt.figure()\nplt.imshow(train_images[100])\nplt.colorbar()\nplt.grid(False)", "_____no_output_____" ], [ "# Scaling\ntrain_images = train_images / 255.0\ntest_images = test_images / 255.0", "_____no_output_____" ], [ "plt.figure(figsize=(10,10))\n\nfor i in range(25):\n plt.subplot(5, 5, i+1)\n plt.xticks([])\n plt.yticks([])\n plt.grid(False)\n plt.imshow(train_images[i], cmap=plt.cm.binary)\n plt.xlabel(class_names[train_labels[i]])", "_____no_output_____" ], [ "model = keras.Sequential([\n keras.layers.Flatten(input_shape=(28,28)),\n keras.layers.Dense(128, activation=tf.nn.relu),\n keras.layers.Dense(10, activation=tf.nn.softmax)\n])", "_____no_output_____" ], [ "model.compile(optimizer=tf.train.AdamOptimizer(),\n loss='sparse_categorical_crossentropy',\n metrics=['accuracy'])", "_____no_output_____" ], [ "model.fit(train_images, train_labels, epochs=5)", "Epoch 1/5\n60000/60000 [==============================] - 3s 58us/step - loss: 0.5007 - acc: 0.8237\nEpoch 2/5\n60000/60000 [==============================] - 3s 52us/step - loss: 0.3757 - acc: 0.8639\nEpoch 3/5\n60000/60000 [==============================] - 3s 58us/step - loss: 0.3403 - acc: 0.8749\nEpoch 4/5\n60000/60000 [==============================] - 4s 61us/step - loss: 0.3154 - acc: 0.8829\nEpoch 5/5\n60000/60000 [==============================] - 4s 67us/step - loss: 0.2970 - acc: 0.8903\n" ], [ "test_loss, test_acc = model.evaluate(test_images, test_labels)", "10000/10000 [==============================] - 0s 27us/step\n" ], [ "print('Test accuracy:', test_acc)", "Test accuracy: 0.87\n" ], [ "predictions = model.predict(test_images)", "_____no_output_____" ], [ "predictions[10]", 
"_____no_output_____" ], [ "np.argmax(predictions[10])", "_____no_output_____" ], [ "test_labels[10]", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb7ee1737f43f3233f568a2082eb376b477c6fb4
33,613
ipynb
Jupyter Notebook
feed_forward_in_numpy.ipynb
raotnameh/deep-learning-in-numpy-kera-and-tensorflow
4dbd8c155291b709c62564251fde11c88f000efb
[ "Apache-2.0" ]
1
2019-10-30T16:53:50.000Z
2019-10-30T16:53:50.000Z
feed_forward_in_numpy.ipynb
raotnameh/deep-learning-in-numpy-kera-and-tensorflow
4dbd8c155291b709c62564251fde11c88f000efb
[ "Apache-2.0" ]
null
null
null
feed_forward_in_numpy.ipynb
raotnameh/deep-learning-in-numpy-kera-and-tensorflow
4dbd8c155291b709c62564251fde11c88f000efb
[ "Apache-2.0" ]
1
2019-09-21T05:31:50.000Z
2019-09-21T05:31:50.000Z
29.152645
124
0.589058
[ [ [ "import numpy as np\n\nX = np.random.rand(40,5)*2 - 10\nY = np.random.rand(40,1)*3 - 10\nprint(Y.shape)\n\n\ndef sig(z):\n return 1/(1+np.exp(-z))\n\ndef derivative_sigmoid(a):\n c=sig(a)*(1-sig(a))\n return c", "(40, 1)\n" ], [ "lO_1 = 10 #no of nodes in layer 1\nlO_2 = 1 #outpput layer\nw_1 = np.random.rand(lO_1,len(X[0])) # layer 1 weight\nb_1 = np.random.rand(lO_1) # layer 1 bias\nw_2 = np.random.rand(lO_2,lO_1) # layer 2 weight\nb_2 = np.random.rand(lO_2) # layer 1 bias", "_____no_output_____" ], [ "def forward_pass(X,lO_1,lO_2,w_1,w_2,b_2,b_1,lr):\n error =np.eye(len(X[0]))\n for i in range(len(X[0])):\n \n a_1 = np.dot(w_1,X[0]) + b_1\n\n out_1 = sig(a_1)\n\n a_2 = np.dot(w_2,out_1) + b_2\n out_2 = sig(a_2)\n\n error = (Y[i] - out_2)**2\n# print(w_1,w_2,b_1,b_2)\n error = np.sum(error)/len(X[0])\n print(error)\n return error,a_1,a_2,out_1,out_2", "_____no_output_____" ], [ "def back_prop(w_1,w_2,b_1,b_2,error,a_1,a_2,out_1,out_2,lr):\n dw_2 = derivative_sigmoid(a_2)*out_1 # change in w_1\n db_2 = derivative_sigmoid(a_2)# change in b_1\n dw_1 = derivative_sigmoid(a_2)*derivative_sigmoid(a_1).reshape(lO_1,1)*X[0].reshape(1,len(X[0]))# change in w_2\n db_1 = derivative_sigmoid(a_2)*derivative_sigmoid(a_1)# change in b_2\n w_1 = w_1 -lr*dw_1 # weight 1 update\n b_1 = b_1 -lr*db_1# bias 1 update\n w_2 = w_2 -lr*dw_2# weight 2 update\n b_2 = b_2 -lr*db_2# bias 2 update\n return w_1,w_2,b_1,b_2", "_____no_output_____" ], [ "np.random.seed(555)\nlr =1000000000000\nfor i in range(1000):\n error,a_1,a_2,out_1,out_2 = forward_pass(X,lO_1,lO_2,w_1,w_2,b_2,b_1,lr)\n w_1 , w_2, b_1 , b_2 = back_prop(w_1,w_2,b_1,b_2,error,a_1,a_2,out_1,out_2,lr)", "11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n
11.318412978619829\n11.318412978619829\n11.3184129
78619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n11.318412978619829\n" ] ] ]
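Editor's note (supplementary sketch, not the author's code): the output above repeats the same loss on every iteration because `forward_pass` only ever evaluates `X[0]`, the weights are updated from a single sample per epoch, and the learning rate of 1e12 would blow up any real gradient step. The loop below is one way the same two-layer network could be trained sample by sample; the seed, layer sizes and learning rate are illustrative, and the sigmoid on the output layer is replaced by a linear output so that the negative targets in `Y` are actually reachable.

```python
import numpy as np

rng = np.random.default_rng(555)
X = rng.random((40, 5)) * 2 - 10          # same shapes as the cells above
Y = rng.random((40, 1)) * 3 - 10

def sig(z):
    return 1.0 / (1.0 + np.exp(-z))

n_hidden = 10
W1 = rng.random((n_hidden, X.shape[1]))
b1 = rng.random(n_hidden)
W2 = rng.random((1, n_hidden))
b2 = rng.random(1)
lr = 0.01                                 # a sane step size instead of 1e12

for epoch in range(2001):
    total = 0.0
    for x, y in zip(X, Y):
        a1 = W1 @ x + b1                  # hidden pre-activation
        h = sig(a1)                       # hidden activation
        out = W2 @ h + b2                 # linear output, so targets around -10..-7 are reachable
        err = out - y
        total += float(err[0] ** 2)
        # backpropagation of the squared error
        d2 = 2.0 * err                    # dL/d(out)
        d1 = (W2.T @ d2) * h * (1.0 - h)  # dL/d(a1)
        W2 -= lr * np.outer(d2, h)
        b2 -= lr * d2
        W1 -= lr * np.outer(d1, x)
        b1 -= lr * d1
    if epoch % 500 == 0:
        print(epoch, total / len(X))      # the mean squared error now actually decreases
```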
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
cb7ee2c100cfa84dc7d0991bba048b0bf3a89baf
391,568
ipynb
Jupyter Notebook
notebooks/exploratory/01_overview_metrics.ipynb
Context-Aware-Monitoring/Efficient-Stream-Monitoring
f08faaa87ac2ffe74014a9a6e864b641e4a160f5
[ "MIT" ]
null
null
null
notebooks/exploratory/01_overview_metrics.ipynb
Context-Aware-Monitoring/Efficient-Stream-Monitoring
f08faaa87ac2ffe74014a9a6e864b641e4a160f5
[ "MIT" ]
null
null
null
notebooks/exploratory/01_overview_metrics.ipynb
Context-Aware-Monitoring/Efficient-Stream-Monitoring
f08faaa87ac2ffe74014a9a6e864b641e4a160f5
[ "MIT" ]
null
null
null
178.635036
70,404
0.688504
[ [ [ "This notebook provides a initial overview over the values and correlation of already preprocessed metrics for sequential and concurrent experiments.", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport pandas as pd\nimport numpy as np\nimport sys\nsys.path.append('../../src')\nimport global_config\nfrom data import reward", "_____no_output_____" ], [ "seq_metrics_df = reward._generate_unified_metrics_dataframe(global_config.HOSTS, True)\ncon_metrics_df = reward._generate_unified_metrics_dataframe(global_config.HOSTS, True)", "_____no_output_____" ], [ "seq_metrics_df.boxplot(column=list(seq_metrics_df.columns.values),figsize=(70,10))", "_____no_output_____" ], [ "con_metrics_df.boxplot(column=list(seq_metrics_df.columns.values),figsize=(70,10))", "_____no_output_____" ], [ "def display_correlation(metrics_df):\n correlation_pearson = metrics_df.corr(method='pearson').fillna(0.0)\n np.fill_diagonal(correlation_pearson.values, 0.0)\n display(correlation_pearson.style.applymap(lambda value: 'color: green' if value > 0.75 else ('color: red' if value < -0.25 else 'color: black')))", "_____no_output_____" ], [ "display('Correlation of sequential metrics for whole experiment')\ndisplay_correlation(seq_metrics_df)", "_____no_output_____" ], [ "display('Correlation of concurrent metrics for whole experiment')\ndisplay_correlation(con_metrics_df)", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
cb7eedda92d9d473cf077f5bb721c5c44edbf4bf
12,634
ipynb
Jupyter Notebook
01_NumPy_para_machine_learning.ipynb
julianox5/Aprendizado-Programacao-Curso-Machine-Learning-do-GoogleDevelopers
eac0ddd53cd6ea16ed87376e4659ff22a70d91d0
[ "Apache-2.0" ]
1
2020-03-19T17:15:51.000Z
2020-03-19T17:15:51.000Z
01_NumPy_para_machine_learning.ipynb
julianox5/Aprendizado-Programacao-Curso-Machine-Learning-do-GoogleDevelopers
eac0ddd53cd6ea16ed87376e4659ff22a70d91d0
[ "Apache-2.0" ]
null
null
null
01_NumPy_para_machine_learning.ipynb
julianox5/Aprendizado-Programacao-Curso-Machine-Learning-do-GoogleDevelopers
eac0ddd53cd6ea16ed87376e4659ff22a70d91d0
[ "Apache-2.0" ]
null
null
null
30.516908
1,112
0.490423
[ [ [ "<a href=\"https://colab.research.google.com/github/julianox5/Desafios-Resolvidos-do-curso-machine-learning-crash-course-google/blob/master/numpy_para_machine_learning.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "Importando o numpy", "_____no_output_____" ] ], [ [ "import numpy as np", "_____no_output_____" ] ], [ [ "## Preencher matrizes com números expecíficos", "_____no_output_____" ], [ "Criando uma matriz com o numpy.array()", "_____no_output_____" ] ], [ [ "myArray = np.array([1,2,3,4,5,6,7,8,9,0])\nprint(myArray)", "[1 2 3 4 5 6 7 8 9 0]\n" ] ], [ [ "Criando uma matriz bidimensional 3 x 2", "_____no_output_____" ] ], [ [ "matriz_bi = np.array([[6 , 5], [11 , 4], [5 , 9] ])\nprint(matriz_bi)", "[[ 6 5]\n [11 4]\n [ 5 9]]\n" ] ], [ [ "Prencher um matriz com uma sequência de numeros, numpy.arange()", "_____no_output_____" ] ], [ [ "metodArange = np.arange(5, 12)\nprint(metodArange)", "[ 5 6 7 8 9 10 11]\n" ] ], [ [ "## Preencher matrizes com sequência de números\nNumpy possui varias funções para preencher matrizes com números aleatórios em determinados intervalos.\n***numpy.random.randint*** gera números inteiros aleatórios entre um valor baixo e alto.", "_____no_output_____" ] ], [ [ "aleatorio_randint = np.random.randint(low = 10, high=100, size=(10))\nprint(aleatorio_randint)", "[39 82 57 89 69 43 96 45 21 56]\n" ] ], [ [ "Criar valores aleatórios de ponto flutuante entre 0,0 e 1,0 use **numpy.random.random()**", "_____no_output_____" ] ], [ [ "float_random = np.random.random([10])\nprint(float_random)", "[0.25235296 0.08061505 0.03279996 0.59767375 0.27453448 0.31184018\n 0.17296004 0.6211853 0.01678728 0.4507196 ]\n" ] ], [ [ "O Numpy possui um truque chamado broadcasting que expande virtualmente o operando menor para dimensões compatíveis com a álgebra linear. ", "_____no_output_____" ] ], [ [ "random_floats_2_e_3 = float_random + 2.0\nprint (random_floats_2_e_3)", "[2.25235296 2.08061505 2.03279996 2.59767375 2.27453448 2.31184018\n 2.17296004 2.6211853 2.01678728 2.4507196 ]\n" ] ], [ [ "## Tarefa 1: Criar um conjunto de dados linear\nSeu objetivo é criar um conjunto de dados simples que consiste em um único recurso e um rótulo da seguinte maneira:\n1. Atribua uma sequência de números inteiros de 6 a 20 (inclusive) a uma matriz NumPy denominada `feature`.\n2.Atribua 15 valores a uma matriz NumPy denominada de labelmodo que:\n\n```\n label = (3)(feature) + 4\n```\nPor exemplo, o primeiro valor para `label`deve ser:\n\n```\n label = (3)(6) + 4 = 22\n ```", "_____no_output_____" ] ], [ [ "feature = np.arange(6, 21)\nprint(feature)\nlabel = (feature * 3) + 4\nprint(label)", "[ 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20]\n[22 25 28 31 34 37 40 43 46 49 52 55 58 61 64]\n" ] ], [ [ "\n## Tarefa 2: adicionar algum ruído ao conjunto de dados\n\nPara tornar seu conjunto de dados um pouco mais realista, insira um pouco de ruído aleatório em cada elemento da labelmatriz que você já criou. Para ser mais preciso, modifique cada valor atribuído rótulo, adicionando um valor de ponto flutuante aleatório diferente entre -2 e +2.\n\não confie na transmissão. 
Em vez disso, crie um ruido na matriz com a mesma dimensão que rótulo.", "_____no_output_____" ] ], [ [ "noise = (np.random.random([15]) * 4) -2\nprint(noise)\nlabel += noise \nprint(label)", "[-0.2195936 1.3326103 1.84731153 -0.2603278 -1.46650376 -0.45077214\n -0.08141228 -0.59877044 -0.39544639 1.06051902 -0.3939568 -0.34886473\n 0.94562907 1.45291256 -0.09328554]\n" ], [ "#@title Example form fields\n#@markdown Forms support many types of fields.\n\nno_type_checking = '' #@param\nstring_type = 'example' #@param {type: \"string\"}\nslider_value = 142 #@param {type: \"slider\", min: 100, max: 200}\nnumber = 102 #@param {type: \"number\"}\ndate = '2010-11-05' #@param {type: \"date\"}\npick_me = \"monday\" #@param ['monday', 'tuesday', 'wednesday', 'thursday']\nselect_or_input = \"apples\" #@param [\"apples\", \"bananas\", \"oranges\"] {allow-input: true}\n#@markdown ---\n", "_____no_output_____" ] ] ]
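Editor's note (supplementary sketch): the tasks above build `label = 3 * feature + 4` and then add noise. Two details are worth making explicit: how broadcasting stretches a scalar or a smaller array to match a larger one, and the fact that `label += noise` raises a casting error when `label` is an integer array and `noise` is float, which is why `label = label + noise` (or creating `label` as float in the first place) is safer. The snippet below is standalone and uses its own small arrays.

```python
import numpy as np

# Broadcasting: the scalar and the 1-D row are virtually expanded to shape (3, 4)
matrix = np.arange(12).reshape(3, 4)
row = np.array([10, 20, 30, 40])
print(matrix + 2.0)    # scalar added to every element
print(matrix + row)    # row added to each of the 3 rows

# The two tasks, with the noise added without the integer-casting pitfall
feature = np.arange(6, 21)
label = feature * 3 + 4                          # integer array
noise = np.random.random(label.shape) * 4 - 2    # floats in [-2, 2), same shape as label
label = label + noise                            # new float array; `label += noise` would raise
print(label)
```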
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
cb7ef92de988b41a7eeff2d7957700ff42f8d02d
227,355
ipynb
Jupyter Notebook
notebooks/Pong_PyTorch_DQN.ipynb
boangri/uai-thesis-notebooks
f287cef36d1533d2526d0d71da0c55e8b633e8c2
[ "MIT" ]
null
null
null
notebooks/Pong_PyTorch_DQN.ipynb
boangri/uai-thesis-notebooks
f287cef36d1533d2526d0d71da0c55e8b633e8c2
[ "MIT" ]
null
null
null
notebooks/Pong_PyTorch_DQN.ipynb
boangri/uai-thesis-notebooks
f287cef36d1533d2526d0d71da0c55e8b633e8c2
[ "MIT" ]
null
null
null
265.601636
152,947
0.883623
[ [ [ "<a href=\"https://colab.research.google.com/github/boangri/uai-thesis-notebooks/blob/main/notebooks/Pong_PyTorch_DQN.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Решение задачи Pong методом DQN в PyTorch", "_____no_output_____" ] ], [ [ "import os\nimport torch as T\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nimport numpy as np\nimport pandas as pd \nimport collections\nimport cv2\nimport matplotlib.pyplot as plt\nimport gym\nimport time", "_____no_output_____" ] ], [ [ "Классы сети и буфера воспроизведения опыта.", "_____no_output_____" ] ], [ [ "class DeepQNetwork(nn.Module):\n def __init__(self, lr, n_actions, name, input_dims, chkpt_dir):\n super(DeepQNetwork, self).__init__()\n self.checkpoint_dir = chkpt_dir\n self.checkpoint_file = os.path.join(self.checkpoint_dir, name)\n\n self.conv1 = nn.Conv2d(input_dims[0], 32, 8, stride=4)\n self.conv2 = nn.Conv2d(32, 64, 4, stride=2)\n self.conv3 = nn.Conv2d(64, 64, 3, stride=1)\n\n fc_input_dims = self.calculate_conv_output_dims(input_dims)\n\n self.fc1 = nn.Linear(fc_input_dims, 512)\n self.fc2 = nn.Linear(512, n_actions)\n\n self.optimizer = optim.RMSprop(self.parameters(), lr=lr)\n\n self.loss = nn.MSELoss()\n self.device = T.device('cuda:0' if T.cuda.is_available() else 'cpu')\n self.to(self.device)\n\n def calculate_conv_output_dims(self, input_dims):\n state = T.zeros(1, *input_dims)\n dims = self.conv1(state)\n dims = self.conv2(dims)\n dims = self.conv3(dims)\n return int(np.prod(dims.size()))\n\n def forward(self, state):\n conv1 = F.relu(self.conv1(state))\n conv2 = F.relu(self.conv2(conv1))\n conv3 = F.relu(self.conv3(conv2))\n # conv3 shape is BS x n_filters x H x W\n conv_state = conv3.view(conv3.size()[0], -1)\n # conv_state shape is BS x (n_filters * H * W)\n flat1 = F.relu(self.fc1(conv_state))\n actions = self.fc2(flat1)\n\n return actions\n\n def save_checkpoint(self):\n print('... saving checkpoint ...')\n T.save(self.state_dict(), self.checkpoint_file)\n\n def load_checkpoint(self):\n print('... 
loading checkpoint ...')\n self.load_state_dict(T.load(self.checkpoint_file))\n\n\nclass ReplayBuffer(object):\n def __init__(self, max_size, input_shape, n_actions):\n self.mem_size = max_size\n self.mem_cntr = 0\n self.state_memory = np.zeros((self.mem_size, *input_shape),\n dtype=np.float32)\n self.new_state_memory = np.zeros((self.mem_size, *input_shape),\n dtype=np.float32)\n\n self.action_memory = np.zeros(self.mem_size, dtype=np.int64)\n self.reward_memory = np.zeros(self.mem_size, dtype=np.float32)\n self.terminal_memory = np.zeros(self.mem_size, dtype=np.bool)\n\n def store_transition(self, state, action, reward, state_, done):\n index = self.mem_cntr % self.mem_size\n self.state_memory[index] = state\n self.new_state_memory[index] = state_\n self.action_memory[index] = action\n self.reward_memory[index] = reward\n self.terminal_memory[index] = done\n self.mem_cntr += 1\n\n def sample_buffer(self, batch_size):\n max_mem = min(self.mem_cntr, self.mem_size)\n batch = np.random.choice(max_mem, batch_size, replace=False)\n\n states = self.state_memory[batch]\n actions = self.action_memory[batch]\n rewards = self.reward_memory[batch]\n states_ = self.new_state_memory[batch]\n terminal = self.terminal_memory[batch]\n\n return states, actions, rewards, states_, terminal\n\n", "_____no_output_____" ] ], [ [ "Классы-обертки", "_____no_output_____" ] ], [ [ "class RepeatActionAndMaxFrame(gym.Wrapper):\n def __init__(self, env=None, repeat=4, clip_reward=False, no_ops=0,\n fire_first=False):\n super(RepeatActionAndMaxFrame, self).__init__(env)\n self.repeat = repeat\n self.shape = env.observation_space.low.shape\n self.frame_buffer = np.zeros_like((2, self.shape))\n self.clip_reward = clip_reward\n self.no_ops = no_ops\n self.fire_first = fire_first\n\n def step(self, action):\n t_reward = 0.0\n done = False\n for i in range(self.repeat):\n obs, reward, done, info = self.env.step(action)\n if self.clip_reward:\n reward = np.clip(np.array([reward]), -1, 1)[0]\n t_reward += reward\n idx = i % 2\n self.frame_buffer[idx] = obs\n if done:\n break\n\n max_frame = np.maximum(self.frame_buffer[0], self.frame_buffer[1])\n return max_frame, t_reward, done, info\n\n def reset(self):\n obs = self.env.reset()\n no_ops = np.random.randint(self.no_ops)+1 if self.no_ops > 0 else 0\n for _ in range(no_ops):\n _, _, done, _ = self.env.step(0)\n if done:\n self.env.reset()\n if self.fire_first:\n assert self.env.unwrapped.get_action_meanings()[1] == 'FIRE'\n obs, _, _, _ = self.env.step(1)\n\n self.frame_buffer = np.zeros_like((2,self.shape))\n self.frame_buffer[0] = obs\n\n return obs\n\nclass PreprocessFrame(gym.ObservationWrapper):\n def __init__(self, shape, env=None):\n super(PreprocessFrame, self).__init__(env)\n self.shape = (shape[2], shape[0], shape[1])\n self.observation_space = gym.spaces.Box(low=0.0, high=1.0,\n shape=self.shape, dtype=np.float32)\n\n def observation(self, obs):\n new_frame = cv2.cvtColor(obs, cv2.COLOR_RGB2GRAY)\n resized_screen = cv2.resize(new_frame, self.shape[1:],\n interpolation=cv2.INTER_AREA)\n new_obs = np.array(resized_screen, dtype=np.uint8).reshape(self.shape)\n new_obs = new_obs / 255.0\n\n return new_obs\n\nclass StackFrames(gym.ObservationWrapper):\n def __init__(self, env, repeat):\n super(StackFrames, self).__init__(env)\n self.observation_space = gym.spaces.Box(\n env.observation_space.low.repeat(repeat, axis=0),\n env.observation_space.high.repeat(repeat, axis=0),\n dtype=np.float32)\n self.stack = collections.deque(maxlen=repeat)\n\n def reset(self):\n 
self.stack.clear()\n observation = self.env.reset()\n for _ in range(self.stack.maxlen):\n self.stack.append(observation)\n\n return np.array(self.stack).reshape(self.observation_space.low.shape)\n\n def observation(self, observation):\n self.stack.append(observation)\n\n return np.array(self.stack).reshape(self.observation_space.low.shape)\n\ndef make_env(env_name, shape=(84,84,1), repeat=4, clip_rewards=False,\n no_ops=0, fire_first=False):\n env = gym.make(env_name)\n env = RepeatActionAndMaxFrame(env, repeat, clip_rewards, no_ops, fire_first)\n env = PreprocessFrame(shape, env)\n env = StackFrames(env, repeat)\n\n return env\n", "_____no_output_____" ] ], [ [ "Универсальный класс DQN-агента", "_____no_output_____" ] ], [ [ "class DQNAgent(object):\n def __init__(self, gamma, epsilon, lr, n_actions, input_dims,\n mem_size, batch_size, eps_min=0.01, eps_dec=5e-7,\n replace=1000, algo=None, env_name=None, chkpt_dir='tmp/dqn'):\n self.gamma = gamma\n self.epsilon = epsilon\n self.lr = lr\n self.n_actions = n_actions\n self.input_dims = input_dims\n self.batch_size = batch_size\n self.eps_min = eps_min\n self.eps_dec = eps_dec\n self.replace_target_cnt = replace\n self.algo = algo\n self.env_name = env_name\n self.chkpt_dir = chkpt_dir\n self.action_space = [i for i in range(n_actions)]\n self.learn_step_counter = 0\n\n self.memory = ReplayBuffer(mem_size, input_dims, n_actions)\n\n self.q_eval = DeepQNetwork(self.lr, self.n_actions,\n input_dims=self.input_dims,\n name=self.env_name+'_'+self.algo+'_q_eval',\n chkpt_dir=self.chkpt_dir)\n\n self.q_next = DeepQNetwork(self.lr, self.n_actions,\n input_dims=self.input_dims,\n name=self.env_name+'_'+self.algo+'_q_next',\n chkpt_dir=self.chkpt_dir)\n\n def choose_action(self, observation):\n if np.random.random() > self.epsilon:\n state = T.tensor([observation],dtype=T.float).to(self.q_eval.device)\n actions = self.q_eval.forward(state)\n action = T.argmax(actions).item()\n else:\n action = np.random.choice(self.action_space)\n\n return action\n\n def store_transition(self, state, action, reward, state_, done):\n self.memory.store_transition(state, action, reward, state_, done)\n\n def sample_memory(self):\n state, action, reward, new_state, done = \\\n self.memory.sample_buffer(self.batch_size)\n\n states = T.tensor(state).to(self.q_eval.device)\n rewards = T.tensor(reward).to(self.q_eval.device)\n dones = T.tensor(done).to(self.q_eval.device)\n actions = T.tensor(action).to(self.q_eval.device)\n states_ = T.tensor(new_state).to(self.q_eval.device)\n\n return states, actions, rewards, states_, dones\n\n def replace_target_network(self):\n if self.learn_step_counter % self.replace_target_cnt == 0:\n self.q_next.load_state_dict(self.q_eval.state_dict())\n\n def decrement_epsilon(self):\n self.epsilon = self.epsilon - self.eps_dec \\\n if self.epsilon > self.eps_min else self.eps_min\n\n def save_models(self):\n self.q_eval.save_checkpoint()\n self.q_next.save_checkpoint()\n\n def load_models(self):\n self.q_eval.load_checkpoint()\n self.q_next.load_checkpoint()\n\n def learn(self):\n if self.memory.mem_cntr < self.batch_size:\n return\n\n self.q_eval.optimizer.zero_grad()\n\n self.replace_target_network()\n\n states, actions, rewards, states_, dones = self.sample_memory()\n indices = np.arange(self.batch_size)\n\n q_pred = self.q_eval.forward(states)[indices, actions]\n q_next = self.q_next.forward(states_).max(dim=1)[0]\n\n q_next[dones] = 0.0\n q_target = rewards + self.gamma*q_next\n\n loss = self.q_eval.loss(q_target, 
q_pred).to(self.q_eval.device)\n loss.backward()\n self.q_eval.optimizer.step()\n self.learn_step_counter += 1\n\n self.decrement_epsilon()\n", "_____no_output_____" ] ], [ [ "Цикл обучения", "_____no_output_____" ] ], [ [ "%%time\npath = '/content/drive/My Drive/weights/Pong/'\n\nenv = make_env('PongNoFrameskip-v4')\nbest_score = -np.inf\nload_checkpoint = False\nn_games = 120\nagent = DQNAgent(gamma=0.99, epsilon=1.0, lr=0.0001,\n input_dims=(env.observation_space.shape),\n n_actions=env.action_space.n, mem_size=50000, eps_min=0.05,\n batch_size=32, replace=1000, eps_dec=1e-5,\n chkpt_dir=path, algo='DQNAgent',\n env_name='PongNoFrameskip-v4')\n\nif load_checkpoint:\n agent.load_models()\n\nn_steps = 0\nscores, eps_history, steps_array = [], [], []\nstartTime = time.time()\nfor i in range(n_games):\n done = False\n observation = env.reset()\n\n score = 0\n while not done:\n action = agent.choose_action(observation)\n observation_, reward, done, info = env.step(action)\n score += reward\n if not load_checkpoint:\n agent.store_transition(observation, action, reward, observation_, int(done))\n agent.learn()\n observation = observation_\n n_steps += 1\n scores.append(score)\n steps_array.append(n_steps)\n\n avg_score = np.mean(scores[-10:])\n print(\"ep:%d score:%.0f avg_score:%.2f best_score:%.2f epsilon:%.4f steps:%d time:%.1f\" % (i+1, score, avg_score, best_score, agent.epsilon, n_steps, time.time() - startTime))\n with open(path + 'torch_dqn_history4.csv', 'a') as h: \n h.write(\"%d,%.0f,%.2f,%.6f,%d,%.1f\\n\" % (i+1, score, avg_score, agent.epsilon, n_steps, time.time() - startTime))\n\n if avg_score > best_score:\n if not load_checkpoint:\n agent.save_models()\n best_score = avg_score\n\n eps_history.append(agent.epsilon)\n", "<string>:6: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. 
If you meant to do this, you must specify 'dtype=object' when creating the ndarray\n" ] ], [ [ "График обучения", "_____no_output_____" ] ], [ [ "path = '/content/drive/My Drive/weights/Pong/'\n\ndf = pd.read_csv(path + 'torch_dqn_history.csv', header=None, names=('episode', 'score', 'avg_score', 'epsilon', 'steps', 'time'))\nx = df.episode\ny = df.score\ny1 = np.zeros_like(x)\nfor i in range(len(y1)):\n imin = i - 10 if i > 10 else 0\n y1[i] = y[imin:i+1].mean()\n# y1 = df.avg_score\nplt.figure(figsize=(12,6))\nplt.scatter(x, y, label='очки')\nplt.plot(x, y1, color='C1', label='среднее за 10')\nplt.ylabel('Очки')\nplt.xlabel('Игры')\nplt.legend()\nplt.grid()\nplt.title('История обучения - Pong-v0 - Deep Q Learning')\nplt.show()", "_____no_output_____" ] ], [ [ "Демонстрация игры", "_____no_output_____" ] ], [ [ "%%time\npath = '/content/drive/My Drive/weights/Pong/'\n\nenv = make_env('PongNoFrameskip-v4')\nenv.seed(3)\nenv = wrap_env(env)\n\nload_checkpoint = True\nn_games = 1\nagent = DQNAgent(gamma=0.99, epsilon=0.0, lr=0.0001,\n input_dims=(env.observation_space.shape),\n n_actions=env.action_space.n, mem_size=1, eps_min=0.0,\n batch_size=32, replace=1000, eps_dec=1e-5,\n chkpt_dir=path, algo='DQNAgent',\n env_name='PongNoFrameskip-v4')\n\nif load_checkpoint:\n agent.load_models()\n\nn_steps = 0\nscores, eps_history, steps_array = [], [], []\nstartTime = time.time()\nfor i in range(n_games):\n done = False\n observation = env.reset()\n\n score = 0\n while not done:\n action = agent.choose_action(observation)\n observation, reward, done, info = env.step(action)\n score += reward\n n_steps += 1\n scores.append(score)\n steps_array.append(n_steps)\n\n avg_score = np.mean(scores[-10:])\n print(i+1, score)\nprint(\"avg_score=%.1f\" % avg_score)\n # print(\"ep:%d score:%.0f avg_score:%.2f best_score:%.2f epsilon:%.4f steps:%d time:%.1f\" % (i+1, score, avg_score, best_score, agent.epsilon, n_steps, time.time - startTime))\nenv.close()\nshow_video()", "<string>:6: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray\n" ] ] ]
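Editor's note (supplementary toy example): inside `DQNAgent.learn()` above, the target is built by bootstrapping from the frozen `q_next` network and zeroing the bootstrap on terminal transitions. The few lines below replay that computation on invented numbers so the masking step is easy to see; they are not part of the original notebook.

```python
import torch

gamma = 0.99
rewards = torch.tensor([1.0, 0.0, -1.0])
dones = torch.tensor([False, False, True])

# Pretend Q-values from the target network for three next-states, two actions each
q_next = torch.tensor([[0.5, 1.5],
                       [2.0, 0.3],
                       [4.0, 4.0]])

best_next = q_next.max(dim=1)[0]   # max over actions: tensor([1.5, 2.0, 4.0])
best_next[dones] = 0.0             # a terminal state has no future value
q_target = rewards + gamma * best_next
print(q_target)                    # tensor([ 2.4850,  1.9800, -1.0000])
```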
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb7f03c38c70be5ff07db8a50f5f15370fd462f1
272,469
ipynb
Jupyter Notebook
torch_1_CycleGAN.ipynb
MNCTTY/Pytorch_bases
9a12f35f9dd1bbead2d1605ae2797633fe2f9c83
[ "Apache-2.0" ]
null
null
null
torch_1_CycleGAN.ipynb
MNCTTY/Pytorch_bases
9a12f35f9dd1bbead2d1605ae2797633fe2f9c83
[ "Apache-2.0" ]
null
null
null
torch_1_CycleGAN.ipynb
MNCTTY/Pytorch_bases
9a12f35f9dd1bbead2d1605ae2797633fe2f9c83
[ "Apache-2.0" ]
null
null
null
583.445396
256,628
0.93556
[ [ [ "import torch\nimport torch.nn as nn\n\nclass ResNetBlock(nn.Module): # <1>\n\n def __init__(self, dim):\n super(ResNetBlock, self).__init__()\n self.conv_block = self.build_conv_block(dim)\n\n def build_conv_block(self, dim):\n conv_block = []\n\n conv_block += [nn.ReflectionPad2d(1)]\n\n conv_block += [nn.Conv2d(dim, dim, kernel_size=3, padding=0, bias=True),\n nn.InstanceNorm2d(dim),\n nn.ReLU(True)]\n\n conv_block += [nn.ReflectionPad2d(1)]\n\n conv_block += [nn.Conv2d(dim, dim, kernel_size=3, padding=0, bias=True),\n nn.InstanceNorm2d(dim)]\n\n return nn.Sequential(*conv_block)\n\n def forward(self, x):\n out = x + self.conv_block(x) # <2>\n return out\n\n\nclass ResNetGenerator(nn.Module):\n\n def __init__(self, input_nc=3, output_nc=3, ngf=64, n_blocks=9): # <3> \n\n assert(n_blocks >= 0)\n super(ResNetGenerator, self).__init__()\n\n self.input_nc = input_nc\n self.output_nc = output_nc\n self.ngf = ngf\n\n model = [nn.ReflectionPad2d(3),\n nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0, bias=True),\n nn.InstanceNorm2d(ngf),\n nn.ReLU(True)]\n\n n_downsampling = 2\n for i in range(n_downsampling):\n mult = 2**i\n model += [nn.Conv2d(ngf * mult, ngf * mult * 2, kernel_size=3,\n stride=2, padding=1, bias=True),\n nn.InstanceNorm2d(ngf * mult * 2),\n nn.ReLU(True)]\n\n mult = 2**n_downsampling\n for i in range(n_blocks):\n model += [ResNetBlock(ngf * mult)]\n\n for i in range(n_downsampling):\n mult = 2**(n_downsampling - i)\n model += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2),\n kernel_size=3, stride=2,\n padding=1, output_padding=1,\n bias=True),\n nn.InstanceNorm2d(int(ngf * mult / 2)),\n nn.ReLU(True)]\n\n model += [nn.ReflectionPad2d(3)]\n model += [nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)]\n model += [nn.Tanh()]\n\n self.model = nn.Sequential(*model)\n\n def forward(self, input): # <3>\n return self.model(input)", "_____no_output_____" ] ], [ [ "model_path = '../data/p1ch2/horse2zebra_0.4.0.pth' \nmodel_data = torch.load(model_path) \nnetG.load_state_dict(model_data)", "_____no_output_____" ] ], [ [ "from torchvision import models\nimport torch", "_____no_output_____" ], [ "netG = ResNetGenerator()", "_____no_output_____" ], [ "weight_path = '/Users/mnctty/Desktop/DL with Pytorch/1_lessons/horse2zebra_0.4.0.pth'", "_____no_output_____" ], [ "model_data = torch.load(weight_path)", "_____no_output_____" ], [ "#model_data.keys()", "_____no_output_____" ], [ "netG.load_state_dict(model_data)\n### а что еще можно загрузить?", "_____no_output_____" ], [ "netG.eval()", "_____no_output_____" ], [ "from PIL import Image\nfrom torchvision import transforms", "_____no_output_____" ], [ "preprocess = transforms.Compose([transforms.Resize(256),\ntransforms.ToTensor()])", "_____no_output_____" ], [ "img_path = '/Users/mnctty/Desktop/DL with Pytorch/1_lessons/horses.jpeg'\nimg = Image.open(img_path)", "_____no_output_____" ], [ "img.show()", "_____no_output_____" ], [ "img_transformed = preprocess(img)", "_____no_output_____" ], [ "batch_tensor = torch.unsqueeze(img_transformed, 0)", "_____no_output_____" ], [ "res = netG(batch_tensor)", "_____no_output_____" ], [ "res.show()", "_____no_output_____" ], [ "out_t = (res.data.squeeze() + 1.0) / 2.0 \nout_img = transforms.ToPILImage()(out_t)", "_____no_output_____" ], [ "out_img", "_____no_output_____" ], [ "out_img.save('/Users/mnctty/Desktop/DL with Pytorch/1_lessons/zebhorses.jpg')", "_____no_output_____" ] ] ]
[ "code", "raw", "code" ]
[ [ "code" ], [ "raw" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb7f0c027275d75169b2fa77f2abfc5c2918c248
12,825
ipynb
Jupyter Notebook
cgames/04_doom_corridor/doom_corridor_dqn.ipynb
deepanshut041/Reinforcement-Learning-Basic
2a4c28008d2fc73441778ebd2f7e7d3db12f17ff
[ "MIT" ]
21
2020-01-25T12:04:24.000Z
2022-03-13T10:14:36.000Z
cgames/04_doom_corridor/doom_corridor_dqn.ipynb
deepanshut041/Reinforcement-Learning-Basic
2a4c28008d2fc73441778ebd2f7e7d3db12f17ff
[ "MIT" ]
null
null
null
cgames/04_doom_corridor/doom_corridor_dqn.ipynb
deepanshut041/Reinforcement-Learning-Basic
2a4c28008d2fc73441778ebd2f7e7d3db12f17ff
[ "MIT" ]
14
2020-05-15T17:14:02.000Z
2022-03-30T12:37:13.000Z
27.699784
363
0.531462
[ [ [ "# Doom Deadly Corridor with Dqn\n\nThe purpose of this scenario is to teach the agent to navigate towards his fundamental goal (the vest) and make sure he survives at the same time.\n\n### Enviroment\nMap is a corridor with shooting monsters on both sides (6 monsters in total). A green vest is placed at the oposite end of the corridor.Reward is proportional (negative or positive) to change of the distance between the player and the vest. If player ignores monsters on the sides and runs straight for the vest he will be killed somewhere along the way.\n\n### Action\n - MOVE_LEFT\n - MOVE_RIGHT\n - ATTACK\n - MOVE_FORWARD\n - MOVE_BACKWARD\n - TURN_LEFT\n - TURN_RIGHT\n\n### Rewards\n - +dX for getting closer to the vest.\n - -dX for getting further from the vest.\n - -100 death penalty\n\n\n## Step 1: Import the libraries", "_____no_output_____" ] ], [ [ "import numpy as np\nimport random # Handling random number generation\nimport time # Handling time calculation\nimport cv2\n\nimport torch\nfrom vizdoom import * # Doom Environment\nimport matplotlib.pyplot as plt\nfrom IPython.display import clear_output\nfrom collections import namedtuple, deque\nimport math\n\n%matplotlib inline", "_____no_output_____" ], [ "import sys\nsys.path.append('../../')\nfrom algos.agents import DQNAgent\nfrom algos.models import DQNCnn\nfrom algos.preprocessing.stack_frame import preprocess_frame, stack_frame", "_____no_output_____" ] ], [ [ "## Step 2: Create our environment\n\nInitialize the environment in the code cell below.\n", "_____no_output_____" ] ], [ [ "def create_environment():\n game = DoomGame()\n \n # Load the correct configuration\n game.load_config(\"doom_files/deadly_corridor.cfg\")\n \n # Load the correct scenario (in our case defend_the_center scenario)\n game.set_doom_scenario_path(\"doom_files/deadly_corridor.wad\")\n \n # Here our possible actions\n possible_actions = np.identity(7, dtype=int).tolist()\n \n return game, possible_actions\ngame, possible_actions = create_environment()", "_____no_output_____" ], [ "# if gpu is to be used\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\nprint(\"Device: \", device)", "_____no_output_____" ] ], [ [ "## Step 3: Viewing our Enviroment", "_____no_output_____" ] ], [ [ "print(\"The size of frame is: (\", game.get_screen_height(), \", \", game.get_screen_width(), \")\")\nprint(\"No. 
of Actions: \", possible_actions)\ngame.init()\nplt.figure()\nplt.imshow(game.get_state().screen_buffer.transpose(1, 2, 0))\nplt.title('Original Frame')\nplt.show()\ngame.close()", "_____no_output_____" ] ], [ [ "### Execute the code cell below to play Pong with a random policy.", "_____no_output_____" ] ], [ [ "def random_play():\n game.init()\n game.new_episode()\n score = 0\n while True:\n reward = game.make_action(possible_actions[np.random.randint(3)])\n done = game.is_episode_finished()\n score += reward\n time.sleep(0.01)\n if done:\n print(\"Your total score is: \", score)\n game.close()\n break\nrandom_play()", "_____no_output_____" ] ], [ [ "## Step 4:Preprocessing Frame", "_____no_output_____" ] ], [ [ "game.init()\nplt.figure()\nplt.imshow(preprocess_frame(game.get_state().screen_buffer.transpose(1, 2, 0), (0, -60, -40, 60), 84), cmap=\"gray\")\ngame.close()\nplt.title('Pre Processed image')\nplt.show()", "_____no_output_____" ] ], [ [ "## Step 5: Stacking Frame", "_____no_output_____" ] ], [ [ "def stack_frames(frames, state, is_new=False):\n frame = preprocess_frame(state, (0, -60, -40, 60), 84)\n frames = stack_frame(frames, frame, is_new)\n\n return frames\n ", "_____no_output_____" ] ], [ [ "## Step 6: Creating our Agent", "_____no_output_____" ] ], [ [ "INPUT_SHAPE = (4, 84, 84)\nACTION_SIZE = len(possible_actions)\nSEED = 0\nGAMMA = 0.99 # discount factor\nBUFFER_SIZE = 100000 # replay buffer size\nBATCH_SIZE = 32 # Update batch size\nLR = 0.0001 # learning rate \nTAU = .1 # for soft update of target parameters\nUPDATE_EVERY = 100 # how often to update the network\nUPDATE_TARGET = 10000 # After which thershold replay to be started \nEPS_START = 0.99 # starting value of epsilon\nEPS_END = 0.01 # Ending value of epsilon\nEPS_DECAY = 100 # Rate by which epsilon to be decayed\n\nagent = DQNAgent(INPUT_SHAPE, ACTION_SIZE, SEED, device, BUFFER_SIZE, BATCH_SIZE, GAMMA, LR, TAU, UPDATE_EVERY, UPDATE_TARGET, DQNCnn)", "_____no_output_____" ] ], [ [ "## Step 7: Watching untrained agent play", "_____no_output_____" ] ], [ [ "\n# watch an untrained agent\ngame.init()\nscore = 0\nstate = stack_frames(None, game.get_state().screen_buffer.transpose(1, 2, 0), True) \nwhile True:\n action = agent.act(state, 0.01)\n score += game.make_action(possible_actions[action])\n done = game.is_episode_finished()\n if done:\n print(\"Your total score is: \", score)\n break\n else:\n state = stack_frames(state, game.get_state().screen_buffer.transpose(1, 2, 0), False)\n \ngame.close()", "_____no_output_____" ] ], [ [ "## Step 8: Loading Agent\nUncomment line to load a pretrained agent", "_____no_output_____" ] ], [ [ "start_epoch = 0\nscores = []\nscores_window = deque(maxlen=20)\n", "_____no_output_____" ] ], [ [ "## Step 9: Train the Agent with DQN", "_____no_output_____" ] ], [ [ "epsilon_by_epsiode = lambda frame_idx: EPS_END + (EPS_START - EPS_END) * math.exp(-1. 
* frame_idx /EPS_DECAY)\n\nplt.plot([epsilon_by_epsiode(i) for i in range(1000)])", "_____no_output_____" ], [ "def train(n_episodes=1000):\n \"\"\"\n Params\n ======\n n_episodes (int): maximum number of training episodes\n \"\"\"\n game.init()\n for i_episode in range(start_epoch + 1, n_episodes+1):\n game.new_episode()\n state = stack_frames(None, game.get_state().screen_buffer.transpose(1, 2, 0), True) \n score = 0\n eps = epsilon_by_epsiode(i_episode)\n while True:\n action = agent.act(state, eps)\n reward = game.make_action(possible_actions[action])\n done = game.is_episode_finished()\n score += reward\n if done:\n agent.step(state, action, reward, state, done)\n break\n else:\n next_state = stack_frames(state, game.get_state().screen_buffer.transpose(1, 2, 0), False)\n agent.step(state, action, reward, next_state, done)\n state = next_state\n scores_window.append(score) # save most recent score\n scores.append(score) # save most recent score\n \n clear_output(True)\n fig = plt.figure()\n ax = fig.add_subplot(111)\n plt.plot(np.arange(len(scores)), scores)\n plt.ylabel('Score')\n plt.xlabel('Episode #')\n plt.show()\n print('\\rEpisode {}\\tAverage Score: {:.2f}\\tEpsilon: {:.2f}'.format(i_episode, np.mean(scores_window), eps), end=\"\")\n game.close()\n return scores", "_____no_output_____" ], [ "scores = train(5000)", "_____no_output_____" ], [ "fig = plt.figure()\nax = fig.add_subplot(111)\nplt.plot(np.arange(len(scores)), scores)\nplt.ylabel('Score')\nplt.xlabel('Episode #')\nplt.show()", "_____no_output_____" ] ], [ [ "## Step 10: Watch a Smart Agent!", "_____no_output_____" ] ], [ [ "game.init()\nscore = 0\nstate = stack_frames(None, game.get_state().screen_buffer.transpose(1, 2, 0), True) \nwhile True:\n action = agent.act(state, 0.01)\n score += game.make_action(possible_actions[action])\n done = game.is_episode_finished()\n if done:\n print(\"Your total score is: \", score)\n break\n else:\n state = stack_frames(state, game.get_state().screen_buffer.transpose(1, 2, 0), False)\n \ngame.close()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
cb7f126828dca93496517dc53ba9f7c37cc3eea5
13,583
ipynb
Jupyter Notebook
nb_11_env_rampup-v3_check.ipynb
sebas-seck/plan-opt
bf95edc2c3609aea7572887097be0f2f75e19216
[ "MIT" ]
null
null
null
nb_11_env_rampup-v3_check.ipynb
sebas-seck/plan-opt
bf95edc2c3609aea7572887097be0f2f75e19216
[ "MIT" ]
null
null
null
nb_11_env_rampup-v3_check.ipynb
sebas-seck/plan-opt
bf95edc2c3609aea7572887097be0f2f75e19216
[ "MIT" ]
null
null
null
47.493007
5,960
0.679894
[ [ [ "# 11 `rampup-v3` Check\nBrief notebook to review variables returned after each step.", "_____no_output_____" ] ], [ [ "import random\nimport gym\n\nfrom plan_opt.demand import Demand\nfrom plan_opt.envs.rampup3 import LEGAL_CHANGES, DEFAULT_CONFIG\nfrom plan_opt.demand_small_samples import four_weeks_uprising\nfrom plan_opt.env_health import print_step_details", "_____no_output_____" ], [ "demand = Demand(period=len(four_weeks_uprising), data=four_weeks_uprising)\ndemand.show(only_data=True)\nenv = gym.make(\"rampup-v3\").create(DEFAULT_CONFIG, demand)", "_____no_output_____" ] ], [ [ "### First step", "_____no_output_____" ] ], [ [ "obs = env._set_initial_state(initial_state_status=3)\nobs, reward, done, info = env.step(2)\nprint_step_details(env, obs, reward, done, info)", "[0.05263157894736842, 0, 0, 0, 1]\n[0.05555555555555555, 0, 0, 1, 0]\nTimestep:\t 1 \nAction:\t\t 2 \nDemand:\t\t 0 \nReward:\t\t -1000 \nDone:\t\t False \nInfo:\t\t \n timestep_change 0 -> 1\n action_change 3 -> 2\n demand_surrounding [0]-NOW(0)-[0 0 0]\n next_profitable_demand 18\n demand_observation 0.05555555555555555 \nShape:\t\t (5,) \nObservation:\n [0.05555556 0. 0. 1. 0. ] \n\n" ] ], [ [ "### Step at random point in time", "_____no_output_____" ] ], [ [ "for i in range(5):\n print(f\"Random step {i}\")\n a = env.reset()\n action = random.sample(LEGAL_CHANGES[env.obs_last_legal_status], 1)[0]\n obs, reward, done, info = env.step(action)\n print_step_details(env, obs, reward, done, info)", "Random step 0\n[0.05263157894736842, 0, 0, 0, 0]\n[1.0, 0, 0, 1, 0]\n[0.5, 0, 1, 0, 0]\nTimestep:\t 19 \nAction:\t\t 1 \nDemand:\t\t 87 \nReward:\t\t -2000 \nDone:\t\t False \nInfo:\t\t \n timestep_change 18 -> 19\n action_change 0 -> 1\n demand_surrounding [32]-NOW(87)-[56 92 83]\n next_profitable_demand 2\n demand_observation 0.5 \nShape:\t\t (5,) \nObservation:\n [0.5 0. 1. 0. 0. ] \n\nRandom step 1\n[0.05263157894736842, 0, 0, 0, 0]\n[0.125, 1, 0, 0, 0]\n[0.14285714285714285, 1, 0, 0, 0]\nTimestep:\t 12 \nAction:\t\t 0 \nDemand:\t\t 3 \nReward:\t\t -3000 \nDone:\t\t False \nInfo:\t\t \n timestep_change 11 -> 12\n action_change 0 -> 0\n demand_surrounding [0]-NOW(3)-[1 0 0]\n next_profitable_demand 7\n demand_observation 0.14285714285714285 \nShape:\t\t (5,) \nObservation:\n [0.14285715 1. 0. 0. 0. ] \n\nRandom step 2\n[0.05263157894736842, 0, 0, 0, 0]\n[0.5, 1, 0, 0, 0]\n[1.0, 1, 0, 0, 0]\nTimestep:\t 24 \nAction:\t\t 0 \nDemand:\t\t 40 \nReward:\t\t -3000 \nDone:\t\t False \nInfo:\t\t \n timestep_change 23 -> 24\n action_change 0 -> 0\n demand_surrounding [29]-NOW(40)-[86 70 45]\n next_profitable_demand 1\n demand_observation 1.0 \nShape:\t\t (5,) \nObservation:\n [1. 1. 0. 0. 0.] \n\nRandom step 3\n[0.05263157894736842, 0, 0, 0, 0]\n[1.0, 0, 0, 0, 1]\n[0.3333333333333333, 0, 0, 0, 1]\nTimestep:\t 22 \nAction:\t\t 3 \nDemand:\t\t 83 \nReward:\t\t -500 \nDone:\t\t False \nInfo:\t\t \n timestep_change 21 -> 22\n action_change 0 -> 3\n demand_surrounding [92]-NOW(83)-[29 40 86]\n next_profitable_demand 3\n demand_observation 0.3333333333333333 \nShape:\t\t (5,) \nObservation:\n [0.33333334 0. 0. 0. 1. ] \n\nRandom step 4\n[0.05263157894736842, 0, 0, 0, 0]\n[0.5, 0, 0, 0, 1]\n[1.0, 0, 0, 0, 1]\nTimestep:\t 18 \nAction:\t\t 3 \nDemand:\t\t 32 \nReward:\t\t -500 \nDone:\t\t False \nInfo:\t\t \n timestep_change 17 -> 18\n action_change 0 -> 3\n demand_surrounding [25]-NOW(32)-[87 56 92]\n next_profitable_demand 1\n demand_observation 1.0 \nShape:\t\t (5,) \nObservation:\n [1. 0. 0. 0. 1.] 
\n\n" ], [ "env.observation_space.__dict__", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
cb7f13b3e44741785e3d3ce0a91f147646ff43e1
303,280
ipynb
Jupyter Notebook
Machine_Learning/Data_Visualization/Seaborn/06-Style and Color.ipynb
ashishpatel26/Machine-Learning-and-Quantitative-analysis
26b8811789459549aa3b4922a328f200ede40a29
[ "MIT" ]
29
2019-01-03T15:19:16.000Z
2022-02-25T03:03:13.000Z
Machine_Learning/Data_Visualization/Seaborn/06-Style and Color.ipynb
rajsingh7/Machine-Learning-and-Quantitative-analysis
26b8811789459549aa3b4922a328f200ede40a29
[ "MIT" ]
null
null
null
Machine_Learning/Data_Visualization/Seaborn/06-Style and Color.ipynb
rajsingh7/Machine-Learning-and-Quantitative-analysis
26b8811789459549aa3b4922a328f200ede40a29
[ "MIT" ]
10
2017-04-25T05:49:31.000Z
2018-11-28T13:37:12.000Z
566.878505
37,434
0.936827
[ [ [ "___\n\n___", "_____no_output_____" ], [ "# Style and Color\n\nWe've shown a few times how to control figure aesthetics in seaborn, but let's now go over it formally:", "_____no_output_____" ] ], [ [ "import seaborn as sns\nimport matplotlib.pyplot as plt\nplt.style.use('bmh')\n%matplotlib inline\ntips = sns.load_dataset('tips')", "_____no_output_____" ] ], [ [ "## Styles\n\nYou can set particular styles:", "_____no_output_____" ] ], [ [ "sns.countplot(x='sex',data=tips)", "_____no_output_____" ], [ "sns.set_style('white')\nsns.countplot(x='sex',data=tips)", "_____no_output_____" ], [ "sns.set_style('ticks')\nsns.countplot(x='sex',data=tips,palette='deep')", "_____no_output_____" ] ], [ [ "## Spine Removal", "_____no_output_____" ] ], [ [ "sns.countplot(x='sex',data=tips)\nsns.despine()", "_____no_output_____" ], [ "sns.countplot(x='sex',data=tips)\nsns.despine(left=True)", "_____no_output_____" ] ], [ [ "## Size and Aspect", "_____no_output_____" ], [ "You can use matplotlib's **plt.figure(figsize=(width,height) ** to change the size of most seaborn plots.\n\nYou can control the size and aspect ratio of most seaborn grid plots by passing in parameters: size, and aspect. For example:", "_____no_output_____" ] ], [ [ "# Non Grid Plot\nplt.figure(figsize=(12,7))\nsns.countplot(x='sex',data=tips)", "_____no_output_____" ], [ "# Grid Type Plot\nsns.lmplot(x='total_bill',y='tip',size=2,aspect=4,data=tips)", "_____no_output_____" ] ], [ [ "## Scale and Context\n\nThe set_context() allows you to override default parameters:", "_____no_output_____" ] ], [ [ "#sns.set_context('poster',font_scale=4)\nsns.countplot(x='sex',data=tips,palette='coolwarm')", "_____no_output_____" ] ], [ [ "Check out the documentation page for more info on these topics:\nhttps://stanford.edu/~mwaskom/software/seaborn/tutorial/aesthetics.html", "_____no_output_____" ], [ "# Matplotlib Colormaps\n\n```\nhttps://matplotlib.org/examples/color/colormaps_reference.html\n```", "_____no_output_____" ] ], [ [ "sns.set_context('notebook', font_scale=1)\nsns.set_style('ticks')\nsns.countplot(x='sex', data=tips)\nsns.despine(top=True, bottom=True)", "_____no_output_____" ], [ "sns.lmplot(x='total_bill', y='tip', data=tips, hue='sex', palette='coolwarm')", "_____no_output_____" ], [ "sns.lmplot(x='total_bill', y='tip', data=tips, hue='sex', palette='viridis')", "_____no_output_____" ], [ "sns.lmplot(x='total_bill', y='tip', data=tips, hue='sex', palette='inferno')", "_____no_output_____" ], [ "sns.lmplot(x='total_bill', y='tip', data=tips, hue='sex', palette='Accent')", "_____no_output_____" ], [ "sns.lmplot(x='total_bill', y='tip', data=tips, hue='sex', palette='Dark2')", "_____no_output_____" ] ], [ [ "sns.lmplot(x='total_bill', y='tip', data=tips, hue='sex', palette='CMRmap')", "_____no_output_____" ], [ "# Good Job!", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ] ]
cb7f15389f6f07414ce8263ee56bd0c56459c186
8,586
ipynb
Jupyter Notebook
StdDevVariance.ipynb
wf539/MLDataSciDeepLearningPython
aed1170b5460b373c37774bbdd12eb32f347be64
[ "MIT" ]
3
2021-02-25T17:33:56.000Z
2021-09-02T07:11:31.000Z
machine-learning/MLCourse/StdDevVariance.ipynb
markmusic2727/learning
cefbaf19c3f9c2ae749a99518c57eef60ec0d859
[ "MIT" ]
null
null
null
machine-learning/MLCourse/StdDevVariance.ipynb
markmusic2727/learning
cefbaf19c3f9c2ae749a99518c57eef60ec0d859
[ "MIT" ]
null
null
null
69.804878
6,368
0.846494
[ [ [ "# Standard Deviation and Variance", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nincomes = np.random.normal(100.0, 50.0, 10000)\n\nplt.hist(incomes, 50)\nplt.show()", "_____no_output_____" ], [ "incomes.std()", "_____no_output_____" ], [ "incomes.var()", "_____no_output_____" ] ], [ [ "## Activity", "_____no_output_____" ], [ "Experiment with different parameters on the normal function, and see what effect it has on the shape of the distribution. How does that new shape relate to the standard deviation and variance?", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ] ]
cb7f1570f1386f2e7dc4f2e7e09b60e5e6aa3273
46,249
ipynb
Jupyter Notebook
notebooks/Dataset D - Contraceptive Method Choice/Synthetic data evaluation/Utility/TRTR Dataset D.ipynb
Vicomtech/STDG-evaluation-metrics
4662c2cc60f7941723a876a6032b411e40f5ec62
[ "MIT" ]
4
2021-08-20T18:21:09.000Z
2022-01-12T09:30:29.000Z
notebooks/Dataset D - Contraceptive Method Choice/Synthetic data evaluation/Utility/TRTR Dataset D.ipynb
Vicomtech/STDG-evaluation-metrics
4662c2cc60f7941723a876a6032b411e40f5ec62
[ "MIT" ]
null
null
null
notebooks/Dataset D - Contraceptive Method Choice/Synthetic data evaluation/Utility/TRTR Dataset D.ipynb
Vicomtech/STDG-evaluation-metrics
4662c2cc60f7941723a876a6032b411e40f5ec62
[ "MIT" ]
null
null
null
31.699109
121
0.389825
[ [ [ "# TRTR Dataset D", "_____no_output_____" ] ], [ [ "#import libraries\nimport warnings\nwarnings.filterwarnings(\"ignore\")\nimport numpy as np\nimport pandas as pd\nimport os\nprint('Libraries imported!!')", "Libraries imported!!\n" ], [ "#define directory of functions and actual directory\nHOME_PATH = '' #home path of the project\nFUNCTIONS_DIR = 'EVALUATION FUNCTIONS/UTILITY'\nACTUAL_DIR = os.getcwd()\n\n#change directory to functions directory\nos.chdir(HOME_PATH + FUNCTIONS_DIR)\n\n#import functions for data labelling analisys\nfrom utility_evaluation import DataPreProcessor\nfrom utility_evaluation import train_evaluate_model\n\n#change directory to actual directory\nos.chdir(ACTUAL_DIR)\nprint('Functions imported!!')", "Functions imported!!\n" ] ], [ [ "## 1. Read data", "_____no_output_____" ] ], [ [ "#read real dataset\ntrain_data = pd.read_csv(HOME_PATH + 'REAL DATASETS/TRAIN DATASETS/D_ContraceptiveMethod_Real_Train.csv')\ncategorical_columns = ['wife_education','husband_education','wife_religion','wife_working','husband_occupation',\n 'standard_of_living_index','media_exposure','contraceptive_method_used']\nfor col in categorical_columns :\n train_data[col] = train_data[col].astype('category')\ntrain_data", "_____no_output_____" ], [ "#read test data\ntest_data = pd.read_csv(HOME_PATH + 'REAL DATASETS/TEST DATASETS/D_ContraceptiveMethod_Real_Test.csv')\nfor col in categorical_columns :\n test_data[col] = test_data[col].astype('category')\ntest_data", "_____no_output_____" ], [ "target = 'contraceptive_method_used'\n#quick look at the breakdown of class values\nprint('Train data')\nprint(train_data.shape)\nprint(train_data.groupby(target).size())\nprint('#####################################')\nprint('Test data')\nprint(test_data.shape)\nprint(test_data.groupby(target).size())", "Train data\n(1178, 10)\ncontraceptive_method_used\n1 499\n2 262\n3 417\ndtype: int64\n#####################################\nTest data\n(295, 10)\ncontraceptive_method_used\n1 130\n2 71\n3 94\ndtype: int64\n" ] ], [ [ "## 2. Pre-process training data", "_____no_output_____" ] ], [ [ "target = 'contraceptive_method_used'\ncategorical_columns = ['wife_education','husband_education','wife_religion','wife_working','husband_occupation',\n 'standard_of_living_index','media_exposure']\nnumerical_columns = train_data.select_dtypes(include=['int64','float64']).columns.tolist()\ncategories = [np.array([0, 1, 2, 3]), np.array([0, 1, 2, 3]), np.array([0, 1]), np.array([0, 1]), \n np.array([0, 1, 2, 3]), np.array([0, 1, 2, 3]), np.array([0, 1])]\n\ndata_preprocessor = DataPreProcessor(categorical_columns, numerical_columns, categories)\nx_train = data_preprocessor.preprocess_train_data(train_data.loc[:, train_data.columns != target])\ny_train = train_data.loc[:, target]\n\nx_train.shape, y_train.shape", "_____no_output_____" ] ], [ [ "## 3. Preprocess test data", "_____no_output_____" ] ], [ [ "x_test = data_preprocessor.preprocess_test_data(test_data.loc[:, test_data.columns != target])\ny_test = test_data.loc[:, target]\nx_test.shape, y_test.shape", "_____no_output_____" ] ], [ [ "## 4. Create a dataset to save the results", "_____no_output_____" ] ], [ [ "results = pd.DataFrame(columns = ['model','accuracy','precision','recall','f1'])\nresults", "_____no_output_____" ] ], [ [ "## 4. 
Train and evaluate Random Forest Classifier", "_____no_output_____" ] ], [ [ "rf_results = train_evaluate_model('RF', x_train, y_train, x_test, y_test)\nresults = results.append(rf_results, ignore_index=True)\nrf_results", "[Parallel(n_jobs=3)]: Using backend ThreadingBackend with 3 concurrent workers.\n[Parallel(n_jobs=3)]: Done 44 tasks | elapsed: 0.1s\n[Parallel(n_jobs=3)]: Done 100 out of 100 | elapsed: 0.2s finished\n[Parallel(n_jobs=3)]: Using backend ThreadingBackend with 3 concurrent workers.\n[Parallel(n_jobs=3)]: Done 44 tasks | elapsed: 0.0s\n[Parallel(n_jobs=3)]: Done 100 out of 100 | elapsed: 0.0s finished\n" ] ], [ [ "## 5. Train and Evaluate KNeighbors Classifier", "_____no_output_____" ] ], [ [ "knn_results = train_evaluate_model('KNN', x_train, y_train, x_test, y_test)\nresults = results.append(knn_results, ignore_index=True)\nknn_results", "_____no_output_____" ] ], [ [ "## 6. Train and evaluate Decision Tree Classifier", "_____no_output_____" ] ], [ [ "dt_results = train_evaluate_model('DT', x_train, y_train, x_test, y_test)\nresults = results.append(dt_results, ignore_index=True)\ndt_results", "_____no_output_____" ] ], [ [ "## 7. Train and evaluate Support Vector Machines Classifier", "_____no_output_____" ] ], [ [ "svm_results = train_evaluate_model('SVM', x_train, y_train, x_test, y_test)\nresults = results.append(svm_results, ignore_index=True)\nsvm_results", "[LibSVM]" ] ], [ [ "## 8. Train and evaluate Multilayer Perceptron Classifier", "_____no_output_____" ] ], [ [ "mlp_results = train_evaluate_model('MLP', x_train, y_train, x_test, y_test)\nresults = results.append(mlp_results, ignore_index=True)\nmlp_results", "Iteration 1, loss = 1.07357797\nIteration 2, loss = 1.03344273\nIteration 3, loss = 1.00372110\nIteration 4, loss = 0.97884543\nIteration 5, loss = 0.95352749\nIteration 6, loss = 0.93481026\nIteration 7, loss = 0.91859182\nIteration 8, loss = 0.90458880\nIteration 9, loss = 0.89249298\nIteration 10, loss = 0.88135028\nIteration 11, loss = 0.87626128\nIteration 12, loss = 0.86359562\nIteration 13, loss = 0.85727672\nIteration 14, loss = 0.85049755\nIteration 15, loss = 0.84176044\nIteration 16, loss = 0.83660931\nIteration 17, loss = 0.82904798\nIteration 18, loss = 0.82105654\nIteration 19, loss = 0.81464651\nIteration 20, loss = 0.80737258\nIteration 21, loss = 0.80366191\nIteration 22, loss = 0.79911730\nIteration 23, loss = 0.79152012\nIteration 24, loss = 0.78192791\nIteration 25, loss = 0.77822910\nIteration 26, loss = 0.77242582\nIteration 27, loss = 0.76419193\nIteration 28, loss = 0.75817664\nIteration 29, loss = 0.75301350\nIteration 30, loss = 0.75140847\nIteration 31, loss = 0.74483144\nIteration 32, loss = 0.73572122\nIteration 33, loss = 0.72964133\nIteration 34, loss = 0.72407626\nIteration 35, loss = 0.71699694\nIteration 36, loss = 0.71451517\nIteration 37, loss = 0.71484474\nIteration 38, loss = 0.70992559\nIteration 39, loss = 0.70439114\nIteration 40, loss = 0.69482290\nIteration 41, loss = 0.68609193\nIteration 42, loss = 0.68211477\nIteration 43, loss = 0.67866483\nIteration 44, loss = 0.67072298\nIteration 45, loss = 0.66453060\nIteration 46, loss = 0.65814936\nIteration 47, loss = 0.65635976\nIteration 48, loss = 0.64849682\nIteration 49, loss = 0.64479782\nIteration 50, loss = 0.64205553\nIteration 51, loss = 0.63360016\nIteration 52, loss = 0.62591040\nIteration 53, loss = 0.62316098\nIteration 54, loss = 0.62718580\nIteration 55, loss = 0.61304171\nIteration 56, loss = 0.61213173\nIteration 57, loss = 0.60302231\nIteration 58, 
loss = 0.59751348\nIteration 59, loss = 0.59285998\nIteration 60, loss = 0.58666778\nIteration 61, loss = 0.58287262\nIteration 62, loss = 0.58240270\nIteration 63, loss = 0.57445888\nIteration 64, loss = 0.56419754\nIteration 65, loss = 0.56047939\nIteration 66, loss = 0.55611326\nIteration 67, loss = 0.55738410\nIteration 68, loss = 0.55651619\nIteration 69, loss = 0.53952197\nIteration 70, loss = 0.53358091\nIteration 71, loss = 0.52966001\nIteration 72, loss = 0.52378438\nIteration 73, loss = 0.52514766\nIteration 74, loss = 0.52631779\nIteration 75, loss = 0.51720250\nIteration 76, loss = 0.52522920\nIteration 77, loss = 0.53131962\nIteration 78, loss = 0.52116416\nIteration 79, loss = 0.50915547\nIteration 80, loss = 0.49920778\nIteration 81, loss = 0.48607966\nIteration 82, loss = 0.48502116\nIteration 83, loss = 0.51744506\nIteration 84, loss = 0.50297622\nIteration 85, loss = 0.50989142\nIteration 86, loss = 0.51097696\nIteration 87, loss = 0.49420184\nIteration 88, loss = 0.47747828\nIteration 89, loss = 0.48108779\nIteration 90, loss = 0.46066943\nIteration 91, loss = 0.45910491\nIteration 92, loss = 0.46049594\nIteration 93, loss = 0.45190215\nIteration 94, loss = 0.45383654\nIteration 95, loss = 0.44130260\nIteration 96, loss = 0.43771214\nIteration 97, loss = 0.43438480\nIteration 98, loss = 0.43450419\nIteration 99, loss = 0.43057644\nIteration 100, loss = 0.42389473\nIteration 101, loss = 0.44171392\nIteration 102, loss = 0.42995730\nIteration 103, loss = 0.41973955\nIteration 104, loss = 0.41590290\nIteration 105, loss = 0.41122888\nIteration 106, loss = 0.40948349\nIteration 107, loss = 0.40576188\nIteration 108, loss = 0.40144115\nIteration 109, loss = 0.40319245\nIteration 110, loss = 0.40880137\nIteration 111, loss = 0.39976430\nIteration 112, loss = 0.39452954\nIteration 113, loss = 0.39301745\nIteration 114, loss = 0.39444832\nIteration 115, loss = 0.38631193\nIteration 116, loss = 0.37720006\nIteration 117, loss = 0.37674778\nIteration 118, loss = 0.37656188\nIteration 119, loss = 0.37500047\nIteration 120, loss = 0.37962704\nIteration 121, loss = 0.37770674\nIteration 122, loss = 0.37371601\nIteration 123, loss = 0.37065036\nIteration 124, loss = 0.36096963\nIteration 125, loss = 0.36834338\nIteration 126, loss = 0.37392902\nIteration 127, loss = 0.35713389\nIteration 128, loss = 0.35239531\nIteration 129, loss = 0.34875058\nIteration 130, loss = 0.34503194\nIteration 131, loss = 0.34735094\nIteration 132, loss = 0.35222827\nIteration 133, loss = 0.34595592\nIteration 134, loss = 0.34495201\nIteration 135, loss = 0.34735483\nIteration 136, loss = 0.34484170\nIteration 137, loss = 0.35064687\nIteration 138, loss = 0.33895251\nIteration 139, loss = 0.33817761\nIteration 140, loss = 0.34642826\nIteration 141, loss = 0.33011003\nIteration 142, loss = 0.33623319\nIteration 143, loss = 0.32458144\nIteration 144, loss = 0.32334164\nIteration 145, loss = 0.32216366\nIteration 146, loss = 0.32240141\nIteration 147, loss = 0.33675347\nIteration 148, loss = 0.31894181\nIteration 149, loss = 0.32709410\nIteration 150, loss = 0.31951480\nIteration 151, loss = 0.31565938\nIteration 152, loss = 0.30938151\nIteration 153, loss = 0.31336452\nIteration 154, loss = 0.31298673\nIteration 155, loss = 0.31069302\nIteration 156, loss = 0.30602695\nIteration 157, loss = 0.30410298\nIteration 158, loss = 0.30847132\nIteration 159, loss = 0.30674677\nIteration 160, loss = 0.30052144\nIteration 161, loss = 0.29881274\nIteration 162, loss = 0.30396880\nIteration 163, loss = 
0.29879033\nIteration 164, loss = 0.29573601\nIteration 165, loss = 0.29417461\nIteration 166, loss = 0.29225224\nIteration 167, loss = 0.28941771\nIteration 168, loss = 0.29804575\nIteration 169, loss = 0.28433365\nIteration 170, loss = 0.28915325\nIteration 171, loss = 0.28558330\nIteration 172, loss = 0.28150397\nIteration 173, loss = 0.29596511\nIteration 174, loss = 0.28472828\nIteration 175, loss = 0.28884563\nIteration 176, loss = 0.29126005\nIteration 177, loss = 0.29589680\nIteration 178, loss = 0.28003604\nIteration 179, loss = 0.27854732\nIteration 180, loss = 0.27433139\nIteration 181, loss = 0.29051499\nIteration 182, loss = 0.28587482\nIteration 183, loss = 0.27135240\nIteration 184, loss = 0.28316840\nIteration 185, loss = 0.29608533\nIteration 186, loss = 0.28576163\nIteration 187, loss = 0.28303094\nIteration 188, loss = 0.27166603\nIteration 189, loss = 0.26849130\nIteration 190, loss = 0.27438011\nIteration 191, loss = 0.26472239\nIteration 192, loss = 0.26394400\nIteration 193, loss = 0.26154887\nIteration 194, loss = 0.25767948\nIteration 195, loss = 0.25839342\nIteration 196, loss = 0.26471495\nIteration 197, loss = 0.25976659\nIteration 198, loss = 0.26083007\nIteration 199, loss = 0.25819715\nIteration 200, loss = 0.25985839\nIteration 201, loss = 0.27096976\nIteration 202, loss = 0.26158078\nIteration 203, loss = 0.25615452\nIteration 204, loss = 0.25540705\nIteration 205, loss = 0.25773812\nIteration 206, loss = 0.27057673\nIteration 207, loss = 0.26727793\nIteration 208, loss = 0.25953379\nIteration 209, loss = 0.25401002\nIteration 210, loss = 0.25541414\nIteration 211, loss = 0.25463707\nIteration 212, loss = 0.25288496\nIteration 213, loss = 0.25149211\nIteration 214, loss = 0.24529223\nIteration 215, loss = 0.25201467\nIteration 216, loss = 0.24494456\nIteration 217, loss = 0.23966178\nIteration 218, loss = 0.24464801\nIteration 219, loss = 0.24634630\nIteration 220, loss = 0.24101912\nIteration 221, loss = 0.24220688\nIteration 222, loss = 0.25608367\nIteration 223, loss = 0.26386924\nIteration 224, loss = 0.26850782\nIteration 225, loss = 0.24828777\nIteration 226, loss = 0.24127540\nIteration 227, loss = 0.24289558\nIteration 228, loss = 0.25187354\nTraining loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.\n" ] ], [ [ "## 9. Save results file", "_____no_output_____" ] ], [ [ "results.to_csv('RESULTS/models_results_real.csv', index=False)\nresults", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb7f2cb9e734144e4e10d3704cf8939b09d65adf
8,479
ipynb
Jupyter Notebook
ex_data_structures.ipynb
vzakhozhyi/ComputationalThinking_Gov_1
2a4ee21a70f422098157a4dc1a3eaa26d6d2bc9a
[ "MIT" ]
null
null
null
ex_data_structures.ipynb
vzakhozhyi/ComputationalThinking_Gov_1
2a4ee21a70f422098157a4dc1a3eaa26d6d2bc9a
[ "MIT" ]
null
null
null
ex_data_structures.ipynb
vzakhozhyi/ComputationalThinking_Gov_1
2a4ee21a70f422098157a4dc1a3eaa26d6d2bc9a
[ "MIT" ]
null
null
null
21.966321
99
0.427055
[ [ [ "## Class exercise #1:", "_____no_output_____" ], [ "To create a data frame, we need to call pandas and rename it into pd", "_____no_output_____" ] ], [ [ "import pandas as pd", "_____no_output_____" ] ], [ [ "Create the collumns by creating the Python lists", "_____no_output_____" ] ], [ [ "names=[\"Tomás\", \"Pauline\", \"Pablo\", \"Bjork\",\"Alan\",\"Juana\"]\nwoman=[False,True,False,False,False,True]\nages=[32,33,28,30,32,27]\ncountry=[\"Chile\", \"Senegal\", \"Spain\", \"Norway\",\"Peru\",\"Peru\"]\neducation=[\"Bach\", \"Bach\", \"Master\", \"PhD\",\"Bach\",\"Master\"]", "_____no_output_____" ] ], [ [ "The next step is to create a dict", "_____no_output_____" ] ], [ [ "data={'Names':names, 'Woman':woman, 'Ages':ages, 'Country':country, 'Education':education}\ndata", "_____no_output_____" ] ], [ [ "From that dict we create data frame (DF) called friends", "_____no_output_____" ] ], [ [ "friends=pd.DataFrame.from_dict(data)\nfriends", "_____no_output_____" ] ], [ [ "### Queries:\n\n1. Who is the oldest person in this group of friends?", "_____no_output_____" ] ], [ [ "friends[friends.Ages==max(friends.Ages)].Names", "_____no_output_____" ] ], [ [ "2. How many people are 32?", "_____no_output_____" ] ], [ [ "len(friends[friends.Ages==32])", "_____no_output_____" ] ], [ [ "3. How many are not Peruvian? (use two different codes)", "_____no_output_____" ] ], [ [ "len(friends[friends.Country!=\"Peru\"])", "_____no_output_____" ], [ "PeruOrigin=['Peru']\nlen(friends[~friends.Country.isin(PeruOrigin)])", "_____no_output_____" ] ], [ [ "4. Who is the person with the highest level of education?", "_____no_output_____" ] ], [ [ "toSort=[\"Education\"]\nOrder=[True]\nfriends.sort_values(by=toSort,ascending=Order).tail(1).Names", "_____no_output_____" ] ], [ [ "5. What is the sex of the oldest person in the group?", "_____no_output_____" ] ], [ [ "toSort=[\"Ages\"]\nOrder=[False]\nfriends.sort_values(by=toSort,ascending=Order).tail(1).Woman", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb7f4881ca32fa66d18a5c598f0ec31c4195ee73
5,598
ipynb
Jupyter Notebook
examples/01_importing_your_own_neural_net.ipynb
FelixaHub/MIPVerify.jl
0fbb87c87c65e2c5de1049808911d4a293b49730
[ "MIT" ]
null
null
null
examples/01_importing_your_own_neural_net.ipynb
FelixaHub/MIPVerify.jl
0fbb87c87c65e2c5de1049808911d4a293b49730
[ "MIT" ]
null
null
null
examples/01_importing_your_own_neural_net.ipynb
FelixaHub/MIPVerify.jl
0fbb87c87c65e2c5de1049808911d4a293b49730
[ "MIT" ]
null
null
null
23.923077
342
0.568239
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
cb7f6dc2c7201ec6b29f610cf2bcfc1386d94612
4,270
ipynb
Jupyter Notebook
notebooks/slim_example.ipynb
Zhenxingzhang/tiny_imagenet
f44512023ce52df30cdffd80d3cb7cc4e1426354
[ "Apache-2.0" ]
null
null
null
notebooks/slim_example.ipynb
Zhenxingzhang/tiny_imagenet
f44512023ce52df30cdffd80d3cb7cc4e1426354
[ "Apache-2.0" ]
null
null
null
notebooks/slim_example.ipynb
Zhenxingzhang/tiny_imagenet
f44512023ce52df30cdffd80d3cb7cc4e1426354
[ "Apache-2.0" ]
null
null
null
30.5
108
0.532553
[ [ [ "import sys\n\nsys.path.append(\"/data/slim/models/research/slim\")\n%matplotlib inline\n\nfrom matplotlib import pyplot as plt\n\nimport numpy as np\nimport os\nimport tensorflow as tf\nimport urllib2\n\nfrom datasets import imagenet\nfrom nets import vgg\nfrom preprocessing import vgg_preprocessing\nfrom datasets import dataset_utils\n\n# Main slim library\nfrom tensorflow.contrib import slim", "_____no_output_____" ], [ "input_images = tf.placeholder(tf.float32, shape=[None, 224, 224, 3])\nlabel = tf.placeholder(tf.int64, name= \"label\")\nsummaries_dir = \"/tmp/slim/tutorial\"", "_____no_output_____" ], [ "def vgg16(inputs):\n with slim.arg_scope([slim.conv2d, slim.fully_connected],\n activation_fn=tf.nn.relu,\n weights_initializer=tf.truncated_normal_initializer(0.0, 0.01),\n weights_regularizer=slim.l2_regularizer(0.0005)):\n net = slim.repeat(inputs, 2, slim.conv2d, 64, [3, 3], scope='conv1')\n net = slim.max_pool2d(net, [2, 2], scope='pool1')\n net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], scope='conv2')\n net = slim.max_pool2d(net, [2, 2], scope='pool2')\n net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')\n net = slim.max_pool2d(net, [2, 2], scope='pool3')\n net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv4')\n net = slim.max_pool2d(net, [2, 2], scope='pool4')\n net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv5')\n net = slim.max_pool2d(net, [2, 2], scope='pool5')\n net = slim.flatten(net)\n net = slim.fully_connected(net, 4096, scope='fc6')\n net = slim.dropout(net, 0.5, scope='dropout6')\n net = slim.fully_connected(net, 4096, scope='fc7')\n net = slim.dropout(net, 0.5, scope='dropout7')\n net = slim.fully_connected(net, 2, activation_fn=None, scope='fc8')\n return net\n\nlogits = vgg16(input_images)\nprob = tf.nn.softmax(logits)", "_____no_output_____" ], [ "print(logits.shape)\n\nloss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=label, logits=logits)\n\nloss_mean = tf.reduce_mean(loss)\ntf.summary.scalar('loss', loss_mean)\n\nsummary_op = tf.summary.merge_all()\n\nwith tf.Session() as sess:\n train_writer = tf.summary.FileWriter(summaries_dir + '/train', sess.graph)\n \n sess.run(tf.global_variables_initializer())\n\n x = np.ones([1, 224, 224, 3])\n summs, loss, logits_ = sess.run([summary_op, loss_mean, logits], {input_images: x, label: [1]})\n print(\"{}, {}\".format(loss, logits_))\n \n# train_writer.add_summary(summs, 0)\n", "(?, 2)\n0.693147182465, [[ 1.83889781e-09 -7.06232406e-09]]\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
cb7f904dedd49c2404699cf320a1930ffdb98711
21,281
ipynb
Jupyter Notebook
site/en/r2/tutorials/distribute/keras.ipynb
shawnkoon/docs
c13cd44cfab572fe5a7111afd60bb0bfd9596039
[ "Apache-2.0" ]
9
2019-04-07T05:14:52.000Z
2020-02-10T15:33:21.000Z
site/en/r2/tutorials/distribute/keras.ipynb
shawnkoon/docs
c13cd44cfab572fe5a7111afd60bb0bfd9596039
[ "Apache-2.0" ]
null
null
null
site/en/r2/tutorials/distribute/keras.ipynb
shawnkoon/docs
c13cd44cfab572fe5a7111afd60bb0bfd9596039
[ "Apache-2.0" ]
3
2019-07-06T07:41:57.000Z
2019-11-13T05:57:20.000Z
29.393646
282
0.50914
[ [ [ "##### Copyright 2019 The TensorFlow Authors.\n\n", "_____no_output_____" ] ], [ [ "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "_____no_output_____" ] ], [ [ "# Distributed training in TensorFlow", "_____no_output_____" ], [ "<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/alpha/tutorials/distribute/keras\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/distribute/keras.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/distribute/keras.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>", "_____no_output_____" ], [ "## Overview\n\nThe `tf.distribute.Strategy` API provides an abstraction for distributing your training\nacross multiple processing units. The goal is to allow users to enable distributed training using existing models and training code, with minimal changes.\n\nThis tutorial uses the `tf.distribute.MirroredStrategy`, which\ndoes in-graph replication with synchronous training on many GPUs on one machine.\nEssentially, it copies all of the model's variables to each processor.\nThen, it uses [all-reduce](http://mpitutorial.com/tutorials/mpi-reduce-and-allreduce/) to combine the gradients from all processors and applies the combined value to all copies of the model.\n\n`MirroredStategy` is one of several distribution strategy available in TensorFlow core. You can read about more strategies at [distribution strategy guide](../../guide/distribute_strategy.ipynb).\n\n", "_____no_output_____" ], [ "### Keras API\n\nThis example uses the `tf.keras` API to build the model and training loop. For custom training loops, see [this tutorial](training_loops.ipynb).", "_____no_output_____" ], [ "## Import Dependencies", "_____no_output_____" ] ], [ [ "from __future__ import absolute_import, division, print_function", "_____no_output_____" ], [ "# Import TensorFlow\n!pip install tensorflow-gpu==2.0.0-alpha0 \nimport tensorflow_datasets as tfds\nimport tensorflow as tf\n\nimport os", "_____no_output_____" ] ], [ [ "## Download the dataset", "_____no_output_____" ], [ "Download the MNIST dataset and load it from [TensorFlow Datasets](https://www.tensorflow.org/datasets). 
This returns a dataset in `tf.data` format.", "_____no_output_____" ], [ "Setting `with_info` to `True` includes the metadata for the entire dataset, which is being saved here to `ds_info`.\nAmong other things, this metadata object includes the number of train and test examples.\n", "_____no_output_____" ] ], [ [ "datasets, ds_info = tfds.load(name='mnist', with_info=True, as_supervised=True)\nmnist_train, mnist_test = datasets['train'], datasets['test']", "_____no_output_____" ] ], [ [ "## Define Distribution Strategy", "_____no_output_____" ], [ "Create a `MirroredStrategy` object. This will handle distribution, and provides a context manager (`tf.distribute.MirroredStrategy.scope`) to build your model inside.", "_____no_output_____" ] ], [ [ "strategy = tf.distribute.MirroredStrategy()", "_____no_output_____" ], [ "print ('Number of devices: {}'.format(strategy.num_replicas_in_sync))", "_____no_output_____" ] ], [ [ "## Setup Input pipeline", "_____no_output_____" ], [ "If a model is trained on multiple GPUs, the batch size should be increased accordingly so as to make effective use of the extra computing power. Moreover, the learning rate should be tuned accordingly.", "_____no_output_____" ] ], [ [ "# You can also do ds_info.splits.total_num_examples to get the total \n# number of examples in the dataset.\n\nnum_train_examples = ds_info.splits['train'].num_examples\nnum_test_examples = ds_info.splits['test'].num_examples\n\nBUFFER_SIZE = 10000\n\nBATCH_SIZE_PER_REPLICA = 64\nBATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync", "_____no_output_____" ] ], [ [ "Pixel values, which are 0-255, [have to be normalized to the 0-1 range](https://en.wikipedia.org/wiki/Feature_scaling). Define this scale in a function.", "_____no_output_____" ] ], [ [ "def scale(image, label):\n image = tf.cast(image, tf.float32)\n image /= 255\n \n return image, label", "_____no_output_____" ] ], [ [ "Apply this function to the training and test data, shuffle the training data, and [batch it for training](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch).\n", "_____no_output_____" ] ], [ [ "train_dataset = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)\neval_dataset = mnist_test.map(scale).batch(BATCH_SIZE)", "_____no_output_____" ] ], [ [ "## Create the model", "_____no_output_____" ], [ "Create and compile the Keras model in the context of `strategy.scope`.", "_____no_output_____" ] ], [ [ "with strategy.scope():\n model = tf.keras.Sequential([\n tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(64, activation='relu'),\n tf.keras.layers.Dense(10, activation='softmax')\n ])\n \n model.compile(loss='sparse_categorical_crossentropy',\n optimizer=tf.keras.optimizers.Adam(),\n metrics=['accuracy'])", "_____no_output_____" ] ], [ [ "## Define the callbacks.\n\n", "_____no_output_____" ], [ "The callbacks used here are:\n\n* *Tensorboard*: This callback writes a log for Tensorboard which allows you to visualize the graphs.\n* *Model Checkpoint*: This callback saves the model after every epoch.\n* *Learning Rate Scheduler*: Using this callback, you can schedule the learning rate to change after every epoch/batch.\n\nFor illustrative purposes, add a print callback to display the *learning rate* in the notebook.", "_____no_output_____" ] ], [ [ "# Define the checkpoint directory to store the checkpoints\n\ncheckpoint_dir = './training_checkpoints'\n# Name of the 
checkpoint files\ncheckpoint_prefix = os.path.join(checkpoint_dir, \"ckpt_{epoch}\")", "_____no_output_____" ], [ "# Function for decaying the learning rate.\n# You can define any decay function you need.\ndef decay(epoch):\n if epoch < 3:\n return 1e-3\n elif epoch >= 3 and epoch < 7:\n return 1e-4\n else:\n return 1e-5", "_____no_output_____" ], [ "# Callback for printing the LR at the end of each epoch.\nclass PrintLR(tf.keras.callbacks.Callback):\n def on_epoch_end(self, epoch, logs=None):\n print ('\\nLearning rate for epoch {} is {}'.format(epoch + 1, \n model.optimizer.lr.numpy()))", "_____no_output_____" ], [ "callbacks = [\n tf.keras.callbacks.TensorBoard(log_dir='./logs'),\n tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_prefix, \n save_weights_only=True),\n tf.keras.callbacks.LearningRateScheduler(decay),\n PrintLR()\n]", "_____no_output_____" ] ], [ [ "## Train and evaluate", "_____no_output_____" ], [ "Now, train the model in the usual way, calling `fit` on the model and passing in the dataset created at the beginning of the tutorial. This step is the same whether you are distributing the training or not.\n", "_____no_output_____" ] ], [ [ "model.fit(train_dataset, epochs=10, callbacks=callbacks)", "_____no_output_____" ] ], [ [ "As you can see below, the checkpoints are getting saved.", "_____no_output_____" ] ], [ [ "# check the checkpoint directory\n!ls {checkpoint_dir}", "_____no_output_____" ] ], [ [ "To see how the model perform, load the latest checkpoint and call `evaluate` on the test data.\n\nCall `evaluate` as before using appropriate datasets.", "_____no_output_____" ] ], [ [ "model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))\n\neval_loss, eval_acc = model.evaluate(eval_dataset)\nprint ('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc))", "_____no_output_____" ] ], [ [ "To see the output, you can download and view the TensorBoard logs at the terminal.\n\n```\n$ tensorboard --logdir=path/to/log-directory\n```", "_____no_output_____" ] ], [ [ "!ls -sh ./logs", "_____no_output_____" ] ], [ [ "## Export to SavedModel", "_____no_output_____" ], [ "If you want to export the graph and the variables, SavedModel is the best way of doing this. The model can be loaded back with or without the scope. 
Moreover, SavedModel is platform agnostic.", "_____no_output_____" ] ], [ [ "path = 'saved_model/'", "_____no_output_____" ], [ "tf.keras.experimental.export_saved_model(model, path)", "_____no_output_____" ] ], [ [ "Load the model without `strategy.scope`.", "_____no_output_____" ] ], [ [ "unreplicated_model = tf.keras.experimental.load_from_saved_model(path)\n\nunreplicated_model.compile(\n loss='sparse_categorical_crossentropy', \n optimizer=tf.keras.optimizers.Adam(), \n metrics=['accuracy'])\n\neval_loss, eval_acc = unreplicated_model.evaluate(eval_dataset)\nprint ('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc))", "_____no_output_____" ] ], [ [ "Load the model with `strategy.scope`.", "_____no_output_____" ] ], [ [ "with strategy.scope():\n replicated_model = tf.keras.experimental.load_from_saved_model(path)\n replicated_model.compile(loss='sparse_categorical_crossentropy',\n optimizer=tf.keras.optimizers.Adam(),\n metrics=['accuracy'])\n\n eval_loss, eval_acc = replicated_model.evaluate(eval_dataset)\n print ('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc))", "_____no_output_____" ] ], [ [ "## What's next?\n\nRead the [distribution strategy guide](../../guide/distribute_strategy.ipynb).\n\nTry the [Distributed Training with Custom Training Loops](training_loops.ipynb) tutorial.\n\nNote: `tf.distribute.Strategy` is actively under development and we will be adding more examples and tutorials in the near future. Please give it a try. We welcome your feedback via [issues on GitHub](https://github.com/tensorflow/tensorflow/issues/new).", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
cb7f914d97e95a0c0135f1f924bab52b73be69fb
361
ipynb
Jupyter Notebook
holoviews/tests/ipython/notebooks/test_opts_image_cell_magic.ipynb
ppwadhwa/holoviews
e8e2ec08c669295479f98bb2f46bbd59782786bf
[ "BSD-3-Clause" ]
864
2019-11-13T08:18:27.000Z
2022-03-31T13:36:13.000Z
holoviews/tests/ipython/notebooks/test_opts_image_cell_magic.ipynb
ppwadhwa/holoviews
e8e2ec08c669295479f98bb2f46bbd59782786bf
[ "BSD-3-Clause" ]
1,117
2019-11-12T16:15:59.000Z
2022-03-30T22:57:59.000Z
holoviews/tests/ipython/notebooks/test_opts_image_cell_magic.ipynb
ppwadhwa/holoviews
e8e2ec08c669295479f98bb2f46bbd59782786bf
[ "BSD-3-Clause" ]
180
2019-11-19T16:44:44.000Z
2022-03-28T22:49:18.000Z
15.695652
51
0.520776
[ [ [ "%%opts Image [xaxis=None] (cmap='viridis')\nhv.Image(np.random.rand(20,20))", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
cb7f9dcbf45ee3f27a51d1626497f48b90175881
3,052
ipynb
Jupyter Notebook
Chapter 12/pipelines/02 - Ingesting in SageMaker Feature Store.ipynb
amikewatson/Learn-Amazon-SageMaker-second-edition
64955fd96a5917d8d4d5e18a6dfc57a5432250be
[ "MIT" ]
15
2021-10-01T02:36:24.000Z
2022-03-02T23:37:04.000Z
Chapter 12/pipelines/02 - Ingesting in SageMaker Feature Store.ipynb
amikewatson/Learn-Amazon-SageMaker-second-edition
64955fd96a5917d8d4d5e18a6dfc57a5432250be
[ "MIT" ]
null
null
null
Chapter 12/pipelines/02 - Ingesting in SageMaker Feature Store.ipynb
amikewatson/Learn-Amazon-SageMaker-second-edition
64955fd96a5917d8d4d5e18a6dfc57a5432250be
[ "MIT" ]
14
2021-10-30T14:21:43.000Z
2022-03-11T02:14:28.000Z
23.121212
104
0.495413
[ [ [ "%%sh\npip install -q sagemaker --upgrade", "_____no_output_____" ], [ "import sagemaker\n\nprint(sagemaker.__version__)\n\nsession = sagemaker.Session()\nrole = sagemaker.get_execution_role()\nbucket = session.default_bucket()", "_____no_output_____" ], [ "import time\nfrom time import gmtime, strftime, sleep\n\nfeature_group_name = 'amazon-reviews-feature-group-' + strftime('%d-%H-%M-%S', gmtime())", "_____no_output_____" ], [ "input_data = 's3://ENTER_YOUR_PATH/fs_data.tsv'", "_____no_output_____" ], [ "from sagemaker.sklearn.processing import SKLearnProcessor\n\nsklearn_processor = SKLearnProcessor(framework_version='0.23-1',\n role=role,\n instance_type='ml.m5.4xlarge',\n instance_count=1)", "_____no_output_____" ], [ "%%time\n\nfrom sagemaker.processing import ProcessingInput\n\nsklearn_processor.run(\n code='ingesting.py',\n \n inputs=[\n ProcessingInput(\n source=input_data,\n destination='/opt/ml/processing/input')\n ],\n \n arguments=[\n '--region', 'eu-west-1',\n '--bucket', bucket,\n '--role', role,\n '--feature-group-name', feature_group_name,\n '--max-workers', '8'\n ]\n)", "_____no_output_____" ], [ "print(feature_group_name)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
cb7fc8863b6a69f3e0db6ba727847d5f72d83ad5
309,368
ipynb
Jupyter Notebook
scripts/Validator_for_DGSLR.ipynb
akanimax/DGSLR-validation
c6f90e19ba5cae2de3ef838d0f4da7eaefc9b3be
[ "MIT" ]
null
null
null
scripts/Validator_for_DGSLR.ipynb
akanimax/DGSLR-validation
c6f90e19ba5cae2de3ef838d0f4da7eaefc9b3be
[ "MIT" ]
null
null
null
scripts/Validator_for_DGSLR.ipynb
akanimax/DGSLR-validation
c6f90e19ba5cae2de3ef838d0f4da7eaefc9b3be
[ "MIT" ]
null
null
null
437.578501
121,066
0.915314
[ [ [ "# Create a cell (function for calculating the DGSLR index from the input datat)", "_____no_output_____" ] ], [ [ "import numpy as np\n\nfrom scipy import interp\nimport matplotlib.pyplot as plt\nfrom itertools import cycle\n\n# the module for the roc_curve value\nfrom sklearn.metrics import *\n\n# Config the matlotlib backend as plotting inline in IPython\n%matplotlib inline", "_____no_output_____" ], [ "# function for DGLSR index calculation\ndef calculateDGLSR(data):\n ''' \n function for calculating the DGLSR index\n @Params\n data = the numpy array for the input data\n \n @Return \n calculated DGLSR index for the input data\n '''\n \n # The following weights array was derived by using the AHP process for determining\n # the influencing parameters\n weights_array = np.array([\n 3.00, # drainage density Very high\n 2.40, # --===---- High\n 1.80, # --===---- Moderate\n 1.20, # --===---- Low\n 0.60, # --===---- Very Low\n 6.60, # Geology Diveghat Formation\n 5.40, # --===---- Purandargarh formation \n 4.75, # Slope Very Steep\n 4.07, # --===---- Mod. Steep\n 3.39, # --===---- Strong\n 2.72, # --===---- Mod. Strong\n 2.03, # --===---- Gentle\n 1.36, # --===---- Very Gentle\n 0.68, # --===---- Nearly level\n 4.40, # Landform classi Plateau surface remnants\n 3.30, # --===---- Plateau fringe surface\n 2.20, # --===---- Buried Pediment\n 1.10, # --===---- Rolling Piedmont Plain\n 4.67, # Landuse/land cov Waste Land\n 3.73, # --===---- Forest/vegetation\n 2.80, # --===---- Agriculture Land\n 1.87, # --===---- Water Bodies\n 0.93, # --===---- Built-up land\n 8.33, # Rainfall < 900mm\n 6.67, # --===---- 900mm - 975mm\n 5.00, # --===---- 975mm - 1050mm\n 3.33, # --===---- 1050mm - 1100mm\n 1.67, # --===---- > 1100mm\n 3.33, # Runoff Very High\n 2.67, # --===---- High\n 2.00, # --===---- Moderate\n 1.33, # --===---- Low\n 0.67 # --===---- Very Low\n ])\n \n print data.shape, weights_array.shape\n \n return np.sum(data * weights_array) / 100 # formula for calculating the DGSLR index", "_____no_output_____" ], [ "# test for the Subwater shed 1\ncalculateDGLSR(np.array([\n 81.05,\n 11.17,\n 6.33,\n 1.44,\n 0.00,\n 48.26,\n 51.74,\n 1.81,\n 9.30,\n 15.31,\n 21.88,\n 11.10,\n 25.74,\n 14.87,\n 16.52,\n 59.98,\n 23.50,\n 0.00,\n 39.44,\n 51.41,\n 6.44,\n 0.06,\n 2.65,\n 0.00,\n 0.00,\n 20.07,\n 56.83,\n 23.10,\n 5.28,\n 31.65,\n 18.57,\n 4.40,\n 40.10\n]))", "(33,) (33,)\n" ] ], [ [ "# The CSV File now contains some historical facts related to the ground water level fluctuation. This can be used for validating the DGSLR model using ROC based validation.", "_____no_output_____" ] ], [ [ "# read the csv file into a numpy array.\nvalidation_data = np.genfromtxt('../data/validation.csv', delimiter='\\t', skip_header=1)\n\nprint validation_data.shape # to print the shape of the numpy array\nvalidation_data[:] # print the first 10 values of the data", "(33, 5)\n" ] ], [ [ "# now we transform this array into one hot encoded values.\n# such that first array has the predicted class and second array has the ground truth/ actual class", "_____no_output_____" ] ], [ [ "# function to produce a class for the water-level fluctuation (aka. 
the actual class)\ndef actual_class(value):\n ''' \n function to give the priority class for the water-level fluctuation\n @Param:\n value = the water level fluctuation value\n \n @Return\n the numerical class for the value.\n '''\n # the implementation is a simply condition ladder of the values given in the excel file.\n if(value <= 3.07):\n return 0 # priority is low\n elif(value > 3.07 and value <= 5.20):\n return 1 # priority is moderate\n elif(value > 5.20 and value <= 7.77):\n return 2 # priority is high\n else:\n return 3 # priority is very high\n", "_____no_output_____" ], [ "# function to produce a class for the DGSLR index value (aka. the predicted class)\ndef predicted_class(index):\n ''' \n function to give the priority class for the DGLSR index value\n @Param:\n value = the DGLSR index so calcuated\n \n @Return\n the numerical class for the value.\n '''\n # the implementation is a simply condition ladder of the values given in the excel file.\n if(index <= 28.02):\n return 0 # priority is low\n elif(index > 28.02 and index <= 28.72):\n return 1 # priority is moderate\n elif(index > 28.72 and index <= 29.42):\n return 2 # priority is high\n else:\n return 3 # priority is very high\n ", "_____no_output_____" ], [ "# number of classes is 4, so:\nn_classes = 4", "_____no_output_____" ], [ "# initialize the two arrays to zero values\npredictions = np.zeros(shape=(validation_data.shape[0], n_classes))\nactual_values = np.zeros(shape=(validation_data.shape[0], n_classes))\n\n(predictions[:3], actual_values[:3])", "_____no_output_____" ], [ "# loop through the validation_data and populate the predictions and the actual_values\nfor i in range(validation_data.shape[0]):\n predictions[i, predicted_class(validation_data[i, 4])] = 1\n actual_values[i, actual_class(validation_data[i, 3])] = 1", "_____no_output_____" ], [ "# print the predictions\npredictions", "_____no_output_____" ], [ "# print the actual classes:\nactual_values", "_____no_output_____" ], [ "# define the reverse label mappings for better visual representation:\nreverse_labels_mappings = {\n 0: \"Low priority\",\n 1: \"Moderate priority\",\n 2: \"High priority\",\n 3: \"Very high priority\"\n}", "_____no_output_____" ], [ "# now time to calculate the ROC_auc and generate the curve plots.\n\n# first generate the curves as follows\n# Compute ROC curve and ROC area for each class\nfpr = dict()\ntpr = dict()\nroc_auc = dict()\nfor i in range(n_classes):\n fpr[i], tpr[i], _ = roc_curve(actual_values[:, i], predictions[:, i])\n roc_auc[i] = auc(fpr[i], tpr[i])", "_____no_output_____" ], [ "# now plot the 4 roc curves using the calculations\n# plot for all the labels\n\nfor i in range(n_classes):\n plt.figure()\n lw = 2\n plt.plot(fpr[i], tpr[i], color='green',\n lw=lw, label='ROC curve (area = %0.2f)' % roc_auc[i])\n plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')\n plt.xlim([0.0, 1.0])\n plt.ylim([0.0, 1.05])\n plt.xlabel('False Positive Rate')\n plt.ylabel('True Positive Rate')\n plt.title('Receiver operating characteristics for label: ' + reverse_labels_mappings[i])\n plt.legend(loc=\"lower right\")\n plt.savefig(\"../ROC_plots/\" + reverse_labels_mappings[i] + \".png\")\n plt.show()", "_____no_output_____" ], [ "# Compute micro-average ROC curve and ROC area\nfpr[\"micro\"], tpr[\"micro\"], _ = roc_curve(actual_values.ravel(), predictions.ravel())\nroc_auc[\"micro\"] = auc(fpr[\"micro\"], tpr[\"micro\"])\n\n\n# Compute macro-average ROC curve and ROC area\n\n# First aggregate all false positive rates\nall_fpr = 
np.unique(np.concatenate([fpr[i] for i in range(n_classes)]))\n\n# Then interpolate all ROC curves at this points\nmean_tpr = np.zeros_like(all_fpr)\nfor i in range(n_classes):\n mean_tpr += interp(all_fpr, fpr[i], tpr[i])\n\n# Finally average it and compute AUC\nmean_tpr /= n_classes\n\nfpr[\"macro\"] = all_fpr\ntpr[\"macro\"] = mean_tpr\nroc_auc[\"macro\"] = auc(fpr[\"macro\"], tpr[\"macro\"])\n\n# Plot all ROC curves\nplt.figure(figsize=(10, 10))\nplt.plot(fpr[\"micro\"], tpr[\"micro\"],\n label='micro-average ROC curve (area = {0:0.2f})'\n ''.format(roc_auc[\"micro\"]),\n color='deeppink', linestyle=':', linewidth=4)\n\nplt.plot(fpr[\"macro\"], tpr[\"macro\"],\n label='macro-average ROC curve (area = {0:0.2f})'\n ''.format(roc_auc[\"macro\"]),\n color='navy', linestyle=':', linewidth=4)\n\ncolors = cycle(['aqua', 'darkorange', 'cornflowerblue', 'green'])\nfor i, color in zip(range(n_classes), colors):\n plt.plot(fpr[i], tpr[i], color=color, lw=lw,\n label='ROC curve of ' + reverse_labels_mappings[i] + ' (area = {1:0.2f})'\n ''.format(i, roc_auc[i]))\n\nplt.plot([0, 1], [0, 1], 'k--', lw=lw)\nplt.xlim([0.0, 1.0])\nplt.ylim([0.0, 1.05])\nplt.xlabel('False Positive Rate')\nplt.ylabel('True Positive Rate')\nplt.title('ROC plot containing all the curves')\nplt.legend(loc=\"lower right\")\nplt.savefig(\"../ROC_plots/all_curves.png\")\nplt.show()", "_____no_output_____" ], [ "# Plot all ROC curves\nplt.figure(figsize=(10, 10))\n\nplt.xlabel('False Positive Rate')\nplt.ylabel('True Positive Rate')\nplt.title('Micro and macro average plots for the earlier plots')\n\nplt.plot(fpr[\"micro\"], tpr[\"micro\"],\n label='micro-average ROC curve (area = {0:0.2f})'\n ''.format(roc_auc[\"micro\"]),\n color='deeppink', linestyle=':', linewidth=4)\n\nplt.plot(fpr[\"macro\"], tpr[\"macro\"],\n label='macro-average ROC curve (area = {0:0.2f})'\n ''.format(roc_auc[\"macro\"]),\n color='navy', linestyle=':', linewidth=4)\n\nplt.plot([0, 1], [0, 1], 'k--', lw=lw, label='Random model line')\n\nplt.legend(loc=\"lower right\")\n\nplt.savefig(\"../ROC_plots/micro_and_macro_average.png\")\n\nplt.show()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb7fd903c52df9e60cef7fb15bc07615dd7a5756
37,014
ipynb
Jupyter Notebook
mpl_base_1.ipynb
darkeggler/data_visualization_notebooks
b54ebb3350ff7f88fc9a9cddee580b49f7dce6d1
[ "MIT" ]
null
null
null
mpl_base_1.ipynb
darkeggler/data_visualization_notebooks
b54ebb3350ff7f88fc9a9cddee580b49f7dce6d1
[ "MIT" ]
null
null
null
mpl_base_1.ipynb
darkeggler/data_visualization_notebooks
b54ebb3350ff7f88fc9a9cddee580b49f7dce6d1
[ "MIT" ]
null
null
null
71.59381
11,732
0.833144
[ [ [ "# matplotlib基础", "_____no_output_____" ], [ "- [API path](https://matplotlib.org/api/path_api.html)\n- [Path Tutorial](https://matplotlib.org/tutorials/advanced/path_tutorial.html#sphx-glr-tutorials-advanced-path-tutorial-py)", "_____no_output_____" ], [ "众所周知,matplotlib的图表是由艺术家使用渲染器在画布上完成的。\n\n其API自然分为3层:\n\n- 画布是绘制图形的区域:matplotlib.backend_bases.FigureCanvas \n- 渲染器是知晓如何在画布上绘制的对象:matplotlib.backend_bases.Renderer \n- 艺术家是知晓如何使用渲染器在画布上绘制的对象:matplotlib.artist.Artist\n\nFigureCanvas和Renderer处理与诸如wxPython之类的用户界面工具包,或PostScript®之类的绘图语言会话的所有细节,而Artist处理所有高级结构,如表示和布置图形,文本和线条。\n\n艺术家有两种类型:图元与容器。图元表示绘制在画布上的标准图形对象,如:Line2D,Rectangle,Text,AxesImage等,容器是放置图元的位置如:Axis,Axes和Figure。标准用法是创建一个Figure实例,使用Figure来创建一个或多个Axes或Subplot实例,并使用Axes实例的辅助方法创建图元。\n\n有很多人将Figure当作画布,其实它是长的像画布的艺术家。", "_____no_output_____" ], [ "<@-<", "_____no_output_____" ], [ "既然是基础,我们就从最简单的地方开始。\n\npath模块处理matplotlib中所有的polyline\n而处理polyline的基础类是Path\nPath与MarkerStyle一样,基类都是object而不是Artist\n\n为什么我会知道MarkerStyle,过程时这样的,我在写[Python可视化实践-手机篇]时,图1想从散点图改成折线图,但有几个问题没想明白,就想认真的学一遍plot折线图,我们都知道plot方法的本质是配置Line2D实例,Line2D是Artist的子类,它包括顶点及连接它们的线段。而顶点的标记是通过MarkerStyle类调用Path实现的。\n\n既然Path不是Artist的子类,自然就不能被渲染器绘制到画布上。所有matplotlib中就需要有Artist的子类来处理Path, PathPatch与PathCollection就是这样的子类。\n\n实际上Path对象是所有matplotlib.patches对象的基础。\n\nPath对象除了包含一组顶点作为路点以外,还包含一组6个标准命令。\n```python\n\ncode_type = np.uint8\n\n# Path codes\nSTOP = code_type(0) # 1 vertex\nMOVETO = code_type(1) # 1 vertex\nLINETO = code_type(2) # 1 vertex\nCURVE3 = code_type(3) # 2 vertices\nCURVE4 = code_type(4) # 3 vertices\nCLOSEPOLY = code_type(79) # 1 vertex\n\n#: A dictionary mapping Path codes to the number of vertices that the\n#: code expects.\nNUM_VERTICES_FOR_CODE = {STOP: 1,\n MOVETO: 1,\n LINETO: 1,\n CURVE3: 2,\n CURVE4: 3,\n CLOSEPOLY: 1}\n```\n\n所以Path实例化时就需要(N, 2)的顶点数组及N-length的路径命令数组。", "_____no_output_____" ], [ "多说无益,以图示例", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nfrom matplotlib.path import Path\nimport matplotlib.patches as patches", "_____no_output_____" ], [ "%matplotlib inline", "_____no_output_____" ], [ "verts = [\n (-0.5, -0.5), # 左, 下\n (-0.5, 0.5), # 左, 上\n ( 0.5, 0.5), # 右, 上\n ( 0.5, -0.5), # 右, 下\n (-0.5, -0.5), # 忽略\n]", "_____no_output_____" ], [ "codes = [\n Path.MOVETO,\n Path.LINETO,\n Path.LINETO,\n Path.LINETO,\n Path.CLOSEPOLY,\n]", "_____no_output_____" ], [ "path = Path(verts, codes)\npatch = patches.PathPatch(path)", "_____no_output_____" ], [ "fig = plt.figure()", "_____no_output_____" ], [ "fig.add_artist(patch)", "_____no_output_____" ] ], [ [ "和你想的一样,只能看到矩形的一角,这是因为Figure的坐标区间是[(0,1),(0,1]\n\n有人说这很像海龟,其实差别还是挺大的,海龟的命令比较多,而且风格是向前爬10步;左转,向前爬10步;左转,向前爬10步;左转,向前爬10步!", "_____no_output_____" ] ], [ [ "言归正传,要看到整个矩形的最简单办法是在Figure中加入坐标空间Axes,之后在Axes空间中制图,坐标系就会自动转换。我们再来一次。", "_____no_output_____" ], [ "fig.clf()", "_____no_output_____" ], [ "ax = fig.add_subplot(111)", "_____no_output_____" ], [ "patch2 = patches.PathPatch(path)\nax.add_patch(patch2)", "_____no_output_____" ], [ "ax.set_xlim(-1, 1)\nax.set_ylim(-1, 1)", "_____no_output_____" ] ], [ [ "如果不新建patch2,而是直接加入patch会是什么样的效果呢?有兴趣可以自己试试,想想为什么?\n\n坑已挖好,有缘再填。", "_____no_output_____" ] ], [ [ "verts = [\n (0., 0.), # P0\n (0.2, 1.), # P1\n (1., 0.8), # P2\n (0.8, 0.), # P3\n]\n\ncodes = [\n Path.MOVETO,\n Path.CURVE4,\n Path.CURVE4,\n Path.CURVE4,\n]", "_____no_output_____" ], [ "path = Path(verts, codes)", "_____no_output_____" ], [ "patch = patches.PathPatch(path)", "_____no_output_____" ], [ "fig, ax = plt.subplots()", "_____no_output_____" ], [ "ax.add_patch(patch)", 
"_____no_output_____" ], [ "fig", "_____no_output_____" ] ], [ [ "如果点数不够呢", "_____no_output_____" ] ], [ [ "verts = [\n (0., 0.), # P0\n (0.2, 1.), # P1\n (1., 0.8), # P2\n]\n\ncodes = [\n Path.MOVETO,\n Path.CURVE3,\n Path.CURVE3,\n]", "_____no_output_____" ], [ "path2 = Path(verts, codes)", "_____no_output_____" ], [ "patch2 = patches.PathPatch(path2, facecolor='none')", "_____no_output_____" ], [ "ax.cla()", "_____no_output_____" ], [ "ax.add_patch(patch2)", "_____no_output_____" ], [ "fig", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
cb7fe1fbdae1bb0e51dd65d5d0a3bd3c295e4e02
16,896
ipynb
Jupyter Notebook
how-to-use-azureml/training/train-hyperparameter-tune-deploy-with-sklearn/train-hyperparameter-tune-deploy-with-sklearn.ipynb
mesameki/MachineLearningNotebooks
4fe8c1702d5d2934beee599e977fd7581c441780
[ "MIT" ]
2
2020-07-12T02:37:49.000Z
2021-09-09T09:55:32.000Z
how-to-use-azureml/training/train-hyperparameter-tune-deploy-with-sklearn/train-hyperparameter-tune-deploy-with-sklearn.ipynb
mesameki/MachineLearningNotebooks
4fe8c1702d5d2934beee599e977fd7581c441780
[ "MIT" ]
null
null
null
how-to-use-azureml/training/train-hyperparameter-tune-deploy-with-sklearn/train-hyperparameter-tune-deploy-with-sklearn.ipynb
mesameki/MachineLearningNotebooks
4fe8c1702d5d2934beee599e977fd7581c441780
[ "MIT" ]
3
2020-07-14T21:33:01.000Z
2021-05-20T17:27:48.000Z
31.231054
406
0.53995
[ [ [ "Copyright (c) Microsoft Corporation. All rights reserved.\n\nLicensed under the MIT License.", "_____no_output_____" ], [ "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/training/train-hyperparameter-tune-deploy-with-sklearn/train-hyperparameter-tune-deploy-with-sklearn.png)", "_____no_output_____" ], [ "# Train and hyperparameter tune on Iris Dataset with Scikit-learn\nIn this tutorial, we demonstrate how to use the Azure ML Python SDK to train a support vector machine (SVM) on a single-node CPU with Scikit-learn to perform classification on the popular [Iris dataset](https://archive.ics.uci.edu/ml/datasets/iris). We will also demonstrate how to perform hyperparameter tuning of the model using Azure ML's HyperDrive service.", "_____no_output_____" ], [ "## Prerequisites", "_____no_output_____" ], [ "* Go through the [Configuration](../../../configuration.ipynb) notebook to install the Azure Machine Learning Python SDK and create an Azure ML Workspace", "_____no_output_____" ] ], [ [ "# Check core SDK version number\nimport azureml.core\n\nprint(\"SDK version:\", azureml.core.VERSION)", "_____no_output_____" ] ], [ [ "## Diagnostics", "_____no_output_____" ], [ "Opt-in diagnostics for better experience, quality, and security of future releases.", "_____no_output_____" ] ], [ [ "from azureml.telemetry import set_diagnostics_collection\n\nset_diagnostics_collection(send_diagnostics=True)", "_____no_output_____" ] ], [ [ "## Initialize workspace", "_____no_output_____" ], [ "Initialize a [Workspace](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#workspace) object from the existing workspace you created in the Prerequisites step. `Workspace.from_config()` creates a workspace object from the details stored in `config.json`.", "_____no_output_____" ] ], [ [ "from azureml.core.workspace import Workspace\n\nws = Workspace.from_config()\nprint('Workspace name: ' + ws.name, \n 'Azure region: ' + ws.location, \n 'Subscription id: ' + ws.subscription_id, \n 'Resource group: ' + ws.resource_group, sep = '\\n')", "_____no_output_____" ] ], [ [ "## Create AmlCompute", "_____no_output_____" ], [ "You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for training your model. In this tutorial, we use Azure ML managed compute ([AmlCompute](https://docs.microsoft.com/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute)) for our remote training compute resource.\n\nAs with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.", "_____no_output_____" ] ], [ [ "from azureml.core.compute import ComputeTarget\n\n# choose a name for your cluster\ncluster_name = \"cpu-cluster\"\n\ncompute_target = ComputeTarget(workspace=ws, name=cluster_name)\nprint('Found existing compute target.')\n\n# use get_status() to get a detailed status for the current cluster. \nprint(compute_target.get_status().serialize())", "_____no_output_____" ] ], [ [ "The above code retrieves an existing CPU compute target. 
Scikit-learn does not support GPU computing.", "_____no_output_____" ], [ "## Train model on the remote compute", "_____no_output_____" ], [ "Now that you have your data and training script prepared, you are ready to train on your remote compute. You can take advantage of Azure compute to leverage a CPU cluster.", "_____no_output_____" ], [ "### Create a project directory", "_____no_output_____" ], [ "Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script and any additional files your training script depends on.", "_____no_output_____" ] ], [ [ "import os\n\nproject_folder = './sklearn-iris'\nos.makedirs(project_folder, exist_ok=True)", "_____no_output_____" ] ], [ [ "### Prepare training script", "_____no_output_____" ], [ "Now you will need to create your training script. In this tutorial, the training script is already provided for you at `train_iris`.py. In practice, you should be able to take any custom training script as is and run it with Azure ML without having to modify your code.\n\nHowever, if you would like to use Azure ML's [tracking and metrics](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#metrics) capabilities, you will have to add a small amount of Azure ML code inside your training script.\n\nIn `train_iris.py`, we will log some metrics to our Azure ML run. To do so, we will access the Azure ML Run object within the script:\n\n```python\nfrom azureml.core.run import Run\nrun = Run.get_context()\n```\n\nFurther within `train_iris.py`, we log the kernel and penalty parameters, and the highest accuracy the model achieves:\n\n```python\nrun.log('Kernel type', np.string(args.kernel))\nrun.log('Penalty', np.float(args.penalty))\n\nrun.log('Accuracy', np.float(accuracy))\n```\n\nThese run metrics will become particularly important when we begin hyperparameter tuning our model in the \"Tune model hyperparameters\" section.\n\nOnce your script is ready, copy the training script `train_iris.py` into your project directory.", "_____no_output_____" ] ], [ [ "import shutil\n\nshutil.copy('train_iris.py', project_folder)", "_____no_output_____" ] ], [ [ "### Create an experiment", "_____no_output_____" ], [ "Create an [Experiment](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#experiment) to track all the runs in your workspace for this Scikit-learn tutorial.", "_____no_output_____" ] ], [ [ "from azureml.core import Experiment\n\nexperiment_name = 'train_iris'\nexperiment = Experiment(ws, name=experiment_name)", "_____no_output_____" ] ], [ [ "### Create a Scikit-learn estimator", "_____no_output_____" ], [ "The Azure ML SDK's Scikit-learn estimator enables you to easily submit Scikit-learn training jobs for single-node runs. 
The following code will define a single-node Scikit-learn job.", "_____no_output_____" ] ], [ [ "from azureml.train.sklearn import SKLearn\n\nscript_params = {\n '--kernel': 'linear',\n '--penalty': 1.0,\n}\n\nestimator = SKLearn(source_directory=project_folder, \n script_params=script_params,\n compute_target=compute_target,\n entry_script='train_iris.py',\n pip_packages=['joblib']\n )", "_____no_output_____" ] ], [ [ "The `script_params` parameter is a dictionary containing the command-line arguments to your training script `entry_script`.", "_____no_output_____" ], [ "### Submit job", "_____no_output_____" ], [ "Run your experiment by submitting your estimator object. Note that this call is asynchronous.", "_____no_output_____" ] ], [ [ "run = experiment.submit(estimator)", "_____no_output_____" ] ], [ [ "## Monitor your run", "_____no_output_____" ], [ "You can monitor the progress of the run with a Jupyter widget. Like the run submission, the widget is asynchronous and provides live updates every 10-15 seconds until the job completes.", "_____no_output_____" ] ], [ [ "from azureml.widgets import RunDetails\n\nRunDetails(run).show()", "_____no_output_____" ], [ "run.cancel()", "_____no_output_____" ] ], [ [ "## Tune model hyperparameters", "_____no_output_____" ], [ "Now that we've seen how to do a simple Scikit-learn training run using the SDK, let's see if we can further improve the accuracy of our model. We can optimize our model's hyperparameters using Azure Machine Learning's hyperparameter tuning capabilities.", "_____no_output_____" ], [ "### Start a hyperparameter sweep", "_____no_output_____" ], [ "First, we will define the hyperparameter space to sweep over. Let's tune the `kernel` and `penalty` parameters. In this example we will use random sampling to try different configuration sets of hyperparameters to maximize our primary metric, `Accuracy`.", "_____no_output_____" ] ], [ [ "from azureml.train.hyperdrive.runconfig import HyperDriveRunConfig\nfrom azureml.train.hyperdrive.sampling import RandomParameterSampling\nfrom azureml.train.hyperdrive.run import PrimaryMetricGoal\nfrom azureml.train.hyperdrive.parameter_expressions import choice\n \n\nparam_sampling = RandomParameterSampling( {\n \"--kernel\": choice('linear', 'rbf', 'poly', 'sigmoid'),\n \"--penalty\": choice(0.5, 1, 1.5)\n }\n)\n\nhyperdrive_run_config = HyperDriveRunConfig(estimator=estimator,\n hyperparameter_sampling=param_sampling, \n primary_metric_name='Accuracy',\n primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,\n max_total_runs=12,\n max_concurrent_runs=4)", "_____no_output_____" ] ], [ [ "Finally, lauch the hyperparameter tuning job.", "_____no_output_____" ] ], [ [ "# start the HyperDrive run\nhyperdrive_run = experiment.submit(hyperdrive_run_config)", "_____no_output_____" ] ], [ [ "## Monitor HyperDrive runs", "_____no_output_____" ], [ "You can monitor the progress of the runs with the following Jupyter widget.", "_____no_output_____" ] ], [ [ "RunDetails(hyperdrive_run).show()", "_____no_output_____" ], [ "hyperdrive_run.wait_for_completion(show_output=True)", "_____no_output_____" ] ], [ [ "### Find and register best model\nWhen all jobs finish, we can find out the one that has the highest accuracy.", "_____no_output_____" ] ], [ [ "best_run = hyperdrive_run.get_best_run_by_primary_metric()\nprint(best_run.get_details()['runDefinition']['arguments'])", "_____no_output_____" ] ], [ [ "Now, let's list the model files uploaded during the run.", "_____no_output_____" ] ], [ [ 
"print(best_run.get_file_names())", "_____no_output_____" ] ], [ [ "We can then register the folder (and all files in it) as a model named `sklearn-iris` under the workspace for deployment", "_____no_output_____" ] ], [ [ "model = best_run.register_model(model_name='sklearn-iris', model_path='model.joblib')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb7fe6ecf77a81d1646edfc2356238277775ad64
300,094
ipynb
Jupyter Notebook
Fire_Classification/fire_clf_main.ipynb
Andy666Fox/TINY_DS_PROJECTS
777edd709062acf673c02575d0a0433fc897584f
[ "MIT" ]
1
2021-09-22T16:37:27.000Z
2021-09-22T16:37:27.000Z
Fire_Classification/fire_clf_main.ipynb
Andy666Fox/TINY_DS_PROJECTS
777edd709062acf673c02575d0a0433fc897584f
[ "MIT" ]
null
null
null
Fire_Classification/fire_clf_main.ipynb
Andy666Fox/TINY_DS_PROJECTS
777edd709062acf673c02575d0a0433fc897584f
[ "MIT" ]
null
null
null
258.701724
41,754
0.910948
[ [ [ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt \nimport seaborn as sns\nimport warnings\n\n\nwarnings.filterwarnings('ignore')", "_____no_output_____" ], [ "df = pd.read_excel('./input_data/Acoustic_Extinguisher_Fire_Dataset.xlsx')\ndf.head()", "_____no_output_____" ], [ "df.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 17442 entries, 0 to 17441\nData columns (total 7 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 SIZE 17442 non-null int64 \n 1 FUEL 17442 non-null object \n 2 DISTANCE 17442 non-null int64 \n 3 DESIBEL 17442 non-null int64 \n 4 AIRFLOW 17442 non-null float64\n 5 FREQUENCY 17442 non-null int64 \n 6 STATUS 17442 non-null int64 \ndtypes: float64(1), int64(5), object(1)\nmemory usage: 954.0+ KB\n" ], [ "df.isna().sum()", "_____no_output_____" ], [ "df.describe()", "_____no_output_____" ], [ "df.corr()", "_____no_output_____" ], [ "df['FUEL'].value_counts().plot(kind='pie', autopct='%.2f%%')\nplt.show()", "_____no_output_____" ], [ "from sklearn.preprocessing import OrdinalEncoder", "_____no_output_____" ], [ "oe = OrdinalEncoder()\ndf['FUEL'] = oe.fit_transform(df[['FUEL']])", "_____no_output_____" ], [ "oe.categories_", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "from scipy.stats import skew", "_____no_output_____" ], [ "for col in df:\n print(f'Col name: {col}')\n print(f'Skewness: {skew(df[col])}')\n \n plt.figure(figsize=(10,8))\n sns.distplot(df[col])\n plt.grid(True)\n plt.show()\n ", "Col name: SIZE\nSkewness: 0.2786998636581806\n" ], [ "df.corr()['STATUS'].sort_values()", "_____no_output_____" ], [ "plt.figure(figsize=(10,5))\nsns.heatmap(df.corr(), annot=True, cmap='viridis')\nplt.show()", "_____no_output_____" ], [ "plt.figure(figsize=(10,5))\nplt.bar(df.columns, df.nunique())\nplt.show()", "_____no_output_____" ], [ "df.columns", "_____no_output_____" ], [ "x = df.iloc[:,:-1]\nx.head()", "_____no_output_____" ], [ "y = df.iloc[:, -1]\ny.head()", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split", "_____no_output_____" ], [ "X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=42)", "_____no_output_____" ], [ "from sklearn.preprocessing import StandardScaler", "_____no_output_____" ], [ "st_sc = StandardScaler()\nX_train = st_sc.fit_transform(X_train)\nX_test = st_sc.fit_transform(X_test)", "_____no_output_____" ], [ "from sklearn.metrics import accuracy_score, confusion_matrix, classification_report", "_____no_output_____" ], [ "from xgboost import XGBClassifier", "_____no_output_____" ], [ "xg = XGBClassifier()\nxg.fit(X_train, y_train)\ny_pred = xg.predict(X_test)\nprint(classification_report(y_test, y_pred))", " precision recall f1-score support\n\n 0 0.97 0.98 0.97 2614\n 1 0.98 0.97 0.97 2619\n\n accuracy 0.97 5233\n macro avg 0.97 0.97 0.97 5233\nweighted avg 0.97 0.97 0.97 5233\n\n" ], [ "accuracy_score(y_test, y_pred)", "_____no_output_____" ], [ "confusion_matrix(y_test, y_pred)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb80120c44a1b61209f8d46892f49f6ea30c0009
415,098
ipynb
Jupyter Notebook
unsupervised_analysis.ipynb
toddherman/jupyter_workflow
3c4668d4deaf6e42494d98ed0e99c7b334f3ac8d
[ "MIT" ]
null
null
null
unsupervised_analysis.ipynb
toddherman/jupyter_workflow
3c4668d4deaf6e42494d98ed0e99c7b334f3ac8d
[ "MIT" ]
null
null
null
unsupervised_analysis.ipynb
toddherman/jupyter_workflow
3c4668d4deaf6e42494d98ed0e99c7b334f3ac8d
[ "MIT" ]
null
null
null
663.095847
90,190
0.939795
[ [ [ "%matplotlib inline\nimport numpy as np\nimport pandas as pd", "_____no_output_____" ], [ "from jflow.data import get_data\ndf = get_data()\npivoted = df.pivot_table('Total', index=df.index.time, columns=df.index.date)\npivoted.plot(legend=False, alpha=0.01)", "_____no_output_____" ] ], [ [ "something has gone wrong. There should be 2 peaks just as before. ", "_____no_output_____" ] ], [ [ "pivoted.index", "_____no_output_____" ] ], [ [ "hey, wait, there are only 12 hours. Not 24.", "_____no_output_____" ] ], [ [ "np.unique(df.index.time)", "_____no_output_____" ], [ "!head -24 fremont.csv", "Date,Fremont Bridge East Sidewalk,Fremont Bridge West Sidewalk\n10/03/2012 12:00:00 AM,9,4\n10/03/2012 01:00:00 AM,6,4\n10/03/2012 02:00:00 AM,1,1\n10/03/2012 03:00:00 AM,3,2\n10/03/2012 04:00:00 AM,1,6\n10/03/2012 05:00:00 AM,10,21\n10/03/2012 06:00:00 AM,50,105\n10/03/2012 07:00:00 AM,95,257\n10/03/2012 08:00:00 AM,146,291\n10/03/2012 09:00:00 AM,104,172\n10/03/2012 10:00:00 AM,46,72\n10/03/2012 11:00:00 AM,32,10\n10/03/2012 12:00:00 PM,41,35\n10/03/2012 01:00:00 PM,48,42\n10/03/2012 02:00:00 PM,51,77\n10/03/2012 03:00:00 PM,92,72\n10/03/2012 04:00:00 PM,182,133\n10/03/2012 05:00:00 PM,391,192\n10/03/2012 06:00:00 PM,258,122\n10/03/2012 07:00:00 PM,69,59\n10/03/2012 08:00:00 PM,51,29\n10/03/2012 09:00:00 PM,38,25\n10/03/2012 10:00:00 PM,25,24\n" ], [ "pivoted.shape", "_____no_output_____" ], [ "X = pivoted.fillna(0).T.values #this converts it to an array\nX.shape", "_____no_output_____" ], [ "pivoted.info()", "<class 'pandas.core.frame.DataFrame'>\nIndex: 24 entries, 00:00:00 to 23:00:00\nColumns: 2250 entries, 2012-10-03 to 2018-11-30\ndtypes: float64(2250)\nmemory usage: 422.1+ KB\n" ], [ "np.info(X)", "class: ndarray\nshape: (2250, 24)\nstrides: (8, 18000)\nitemsize: 8\naligned: True\ncontiguous: False\nfortran: True\ndata pointer: 0x21e42c99f10\nbyteorder: little\nbyteswap: False\ntype: float64\n" ] ], [ [ "Transposing the dataframe... each day is an observation which consists of 24 hours. What are the days in relationship to each other.\n\nDo PCA on it. 
To reduce the dimensionality.", "_____no_output_____" ] ], [ [ "from sklearn.decomposition import PCA\nX2 = PCA(2, svd_solver='full').fit_transform(X)\n\nX2.shape", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\nplt.scatter(X2[:, 0], X2[:, 1])", "_____no_output_____" ] ], [ [ "from the dimensionality reduction, it looks like we have two different kinds of days.", "_____no_output_____" ] ], [ [ "from sklearn.mixture import GaussianMixture\ngmm = GaussianMixture(2)\ngmm.fit(X)\nlabels = gmm.predict(X)\nlabels", "_____no_output_____" ], [ "plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap='rainbow')\nplt.colorbar()", "_____no_output_____" ] ], [ [ "what is going on within each of these clusters?\n\nwhere labels == 0 is the red cluster", "_____no_output_____" ] ], [ [ "pivoted.T[labels == 0].T.plot(legend=False, alpha=0.1)\n#shows non-commute rides", "_____no_output_____" ], [ "pivoted.T[labels == 1].T.plot(legend=False, alpha=0.1)", "_____no_output_____" ] ], [ [ "Are they really weekdays and weekends?", "_____no_output_____" ] ], [ [ "pivoted.columns", "_____no_output_____" ] ], [ [ "we want to convert the dates to days of the week.", "_____no_output_____" ] ], [ [ "pd.DatetimeIndex(pivoted.columns)", "_____no_output_____" ], [ "pd.DatetimeIndex(pivoted.columns).dayofweek", "_____no_output_____" ], [ "dow = pd.DatetimeIndex(pivoted.columns).dayofweek", "_____no_output_____" ], [ "plt.scatter(X2[:, 0], X2[:, 1], c=dow, cmap='rainbow')\nplt.colorbar()", "_____no_output_____" ], [ "dates = pd.DatetimeIndex(pivoted.columns)\ndates[(labels == 0) & (dow < 5)]\n#this ends up showing holidays, xmas, days when people don't work", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
cb801954e927990b9c603337159500aa0e8db0bc
29,833
ipynb
Jupyter Notebook
MA477 - Theory and Applications of Data Science/Lessons/Lesson 12 - Lab/Lesson 12 - Lab.ipynb
jkstarling/MA477-copy
67c0d3da587f167d10f2a72700500408704360ad
[ "MIT" ]
null
null
null
MA477 - Theory and Applications of Data Science/Lessons/Lesson 12 - Lab/Lesson 12 - Lab.ipynb
jkstarling/MA477-copy
67c0d3da587f167d10f2a72700500408704360ad
[ "MIT" ]
null
null
null
MA477 - Theory and Applications of Data Science/Lessons/Lesson 12 - Lab/Lesson 12 - Lab.ipynb
jkstarling/MA477-copy
67c0d3da587f167d10f2a72700500408704360ad
[ "MIT" ]
2
2020-01-13T14:01:56.000Z
2020-11-10T15:16:03.000Z
35.90012
507
0.309657
[ [ [ "<h2> ======================================================</h2>\n <h1>MA477 - Theory and Applications of Data Science</h1> \n <h1>Lesson 12: Lab </h1> \n \n <h4>Dr. Valmir Bucaj</h4>\n United States Military Academy, West Point \nAY20-2\n<h2>======================================================</h2>", "_____no_output_____" ], [ "<h2>House Voting Datset</h2>\n\nIn today's lecture we will be exploring the 1984 United Stated Congressional Voting Records via machine-learning to obtain useful insights.\n\n<h3>Description</h3>\n\nThis data set includes votes for each of the U.S. House of Representatives Congressmen on the 16 key votes identified by the CQA. The CQA lists nine different types of votes: voted for, paired for, and announced for (these three simplified to yea), voted against, paired against, and announced against (these three simplified to nay), voted present, voted present to avoid conflict of interest, and did not vote or otherwise make a position known (these three simplified to an unknown disposition).\nAttribute Information:\n\n Class Name: 2 (democrat, republican)\n handicapped-infants: 2 (y,n)\n water-project-cost-sharing: 2 (y,n)\n adoption-of-the-budget-resolution: 2 (y,n)\n physician-fee-freeze: 2 (y,n)\n el-salvador-aid: 2 (y,n)\n religious-groups-in-schools: 2 (y,n)\n anti-satellite-test-ban: 2 (y,n)\n aid-to-nicaraguan-contras: 2 (y,n)\n mx-missile: 2 (y,n)\n immigration: 2 (y,n)\n synfuels-corporation-cutback: 2 (y,n)\n education-spending: 2 (y,n)\n superfund-right-to-sue: 2 (y,n)\n crime: 2 (y,n)\n duty-free-exports: 2 (y,n)\n export-administration-act-south-africa: 2 (y,n)\n\nSource\n\nOrigin:\n\nCongressional Quarterly Almanac, 98th Congress, 2nd session 1984, Volume XL: Congressional Quarterly Inc. Washington, D.C., 1985.\nhttps://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records\n\nCitation:\nDua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.\n\n\n<h2>Tasks</h2>\n\nUse the following tasks to guide your work, but don't get limited by them. You are strongly encouraged to pursue any additional avenue that you feel is valuable.\n<ul>\n \n <li>Build a machine-learning classification model that predicts whether a member of the Congress is a Democrat or Republican based on how they voted on these 16 issues?\n<ul> \n <li>Describe how are you dealing with missing values?</li>\n <li> Describe the choice of the model and the reasons for that choice.</li>\n <li> What metric are you measuring? Accuracy? Recall? Precision? Why?</li>\n <li> Build the ROC curve and compute AUC</li>\n </ul>\n \n </li>\n \n <li>Explore the voting patterns of Democrats vs. Republicans. Explain what stands out.</li>\n \n \n </ul>", "_____no_output_____" ] ], [ [ "import pandas as pd", "_____no_output_____" ], [ "df=pd.read_csv('house-votes.csv')", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "df.shape", "_____no_output_____" ], [ "df.columns", "_____no_output_____" ] ], [ [ "Variable assignment:\n\n0=n\n\n1=y\n\n0.5=?", "_____no_output_____" ] ], [ [ "for col in df.columns[1:]:\n df[col]=df[col].apply(lambda x: 0 if x=='n' else 1 if x=='y' else 0.5)\n ", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "df[df.columns[0]]=df[df.columns[0]].apply(lambda x: 1 if x=='republican' else 0)", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
cb801ec56b87c95532871fd4a5888724b3802f96
216,134
ipynb
Jupyter Notebook
python/projects/gapminder/gapminderBasic.ipynb
BharathC15/NielitChennai
c817aaf63b741eb7a8e4c1df16b5038a0b4f0df7
[ "MIT" ]
null
null
null
python/projects/gapminder/gapminderBasic.ipynb
BharathC15/NielitChennai
c817aaf63b741eb7a8e4c1df16b5038a0b4f0df7
[ "MIT" ]
null
null
null
python/projects/gapminder/gapminderBasic.ipynb
BharathC15/NielitChennai
c817aaf63b741eb7a8e4c1df16b5038a0b4f0df7
[ "MIT" ]
1
2020-06-11T08:04:43.000Z
2020-06-11T08:04:43.000Z
216,134
216,134
0.753158
[ [ [ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns", "_____no_output_____" ], [ "from gapminder import gapminder\ndf=gapminder.copy()\ndf.tail()", "_____no_output_____" ], [ "df1=df[df.year==2007]\ndf1.tail()", "_____no_output_____" ], [ "plt.figure(figsize=(15,7))\nsns.scatterplot(x=\"gdpPercap\",y=\"lifeExp\",alpha=0.4,data=df1,hue=\"continent\")\nplt.xscale(\"log\")\nplt.title(\"Year = 2007\")\nplt.show()", "_____no_output_____" ], [ "\"\"\"\nX_std = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))\nX_scaled = X_std * (max - min) + min\n\"\"\"\ndef minmax(x):\n mm=[]\n for i in list(x):\n mm.append(((i-np.min(x))/(np.max(x)-np.min(x)))*10000)\n return pd.Series(mm)", "_____no_output_____" ], [ "minmax(df1[\"pop\"])[0:10]", "_____no_output_____" ], [ "plt.figure(figsize=(15,7))\nsns.scatterplot(x=\"gdpPercap\",y=\"lifeExp\",alpha=1,data=df1,hue=\"continent\",size=minmax(df1[\"pop\"]))\nplt.xscale(\"log\")\nplt.title(\"Year = 2007\")\nplt.show()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
cb803554ea0daaefbdab1de89ce90065dc8fcb99
7,038
ipynb
Jupyter Notebook
notebooks/trunk/combine_decamdr9_ccds.ipynb
mehdirezaie/LSSutils
aa0505b4d711e591f8a54121ea103ca3e72bdfc8
[ "MIT" ]
1
2021-12-15T22:38:31.000Z
2021-12-15T22:38:31.000Z
notebooks/trunk/combine_decamdr9_ccds.ipynb
mehdirezaie/LSSutils
aa0505b4d711e591f8a54121ea103ca3e72bdfc8
[ "MIT" ]
3
2019-08-19T21:47:47.000Z
2020-08-25T17:57:19.000Z
notebooks/trunk/combine_decamdr9_ccds.ipynb
mehdirezaie/LSSutils
aa0505b4d711e591f8a54121ea103ca3e72bdfc8
[ "MIT" ]
null
null
null
36.466321
395
0.508809
[ [ [ "# Combine DESI Imaging ccds for DR9", "_____no_output_____" ], [ "The eboss ccd files did not have the same dtype, therefore we could not easily combine them. We have to enfore a dtype to all of them.", "_____no_output_____" ] ], [ [ "# import modules\nimport fitsio as ft\nimport numpy as np\nfrom glob import glob", "_____no_output_____" ], [ "# read files \nccdsn = glob('/home/mehdi/data/templates/ccds/dr9/ccds-annotated-*.fits')\nprint(ccdsn) # ccdfiles names\n\n\nprt_keep = ['camera', 'filter', 'fwhm', 'mjd_obs', 'exptime', \n 'ra', 'dec', 'ra0','ra1','ra2','ra3','dec0','dec1','dec2','dec3',\n 'galdepth', 'ebv', 'airmass', 'ccdskycounts', 'pixscale_mean', 'ccdzpt']\n\n# read one file to check the columns\nd = ft.read(ccdsn[0], columns=prt_keep)\nprint(d.dtype)", "['/home/mehdi/data/templates/ccds/dr9/ccds-annotated-90prime-dr9-cut.fits', '/home/mehdi/data/templates/ccds/dr9/ccds-annotated-decam-dr9-cut.fits', '/home/mehdi/data/templates/ccds/dr9/ccds-annotated-mosaic-dr9-cut.fits']\n[('camera', '<U7'), ('filter', '<U1'), ('exptime', '>f4'), ('mjd_obs', '>f8'), ('airmass', '>f4'), ('fwhm', '>f4'), ('ra', '>f8'), ('dec', '>f8'), ('ccdzpt', '>f4'), ('ccdskycounts', '>f4'), ('ra0', '>f8'), ('dec0', '>f8'), ('ra1', '>f8'), ('dec1', '>f8'), ('ra2', '>f8'), ('dec2', '>f8'), ('ra3', '>f8'), ('dec3', '>f8'), ('pixscale_mean', '>f4'), ('ebv', '>f4'), ('galdepth', '>f4')]\n" ], [ "# attrs for the general quicksip\n# 'crval1', 'crval2', 'crpix1', 'crpix2', 'cd1_1',\n# 'cd1_2', 'cd2_1', 'cd2_2', 'width', 'height'\n# dtype = np.dtype([('filter', 'S1'), ('exptime', '>f4'), ('mjd_obs', '>f8'), ('airmass', '>f4'),\\\n# ('fwhm', '>f4'), ('width', '>i2'), ('height', '>i2'), ('crpix1', '>f4'), ('crpix2', '>f4'),\\\n# ('crval1', '>f8'), ('crval2', '>f8'), ('cd1_1', '>f4'), ('cd1_2', '>f4'), ('cd2_1', '>f4'),\\\n# ('cd2_2', '>f4'), ('ra', '>f8'), ('dec', '>f8'), ('ccdzpt', '>f4'), ('ccdskycounts', '>f4'),\n# ('pixscale_mean', '>f4'), ('ebv', '>f4'), ('galdepth', '>f4')])\n#\n# only read & combine the following columns\n# this is what the pipeline need to make the MJD maps\n\nprt_keep = ['camera', 'filter', 'fwhm', 'mjd_obs', 'exptime', \n 'ra', 'dec', 'ra0','ra1','ra2','ra3','dec0','dec1','dec2','dec3',\n 'galdepth', 'ebv', 'airmass', 'ccdskycounts', 'pixscale_mean', 'ccdzpt']\n\n# camera could be different for 90prime, decam, mosaic -- we pick S7\ndtype = np.dtype([('camera', '<U7'),('filter', '<U1'), ('exptime', '>f4'), ('mjd_obs', '>f8'), \n ('airmass', '>f4'), ('fwhm', '>f4'), ('ra', '>f8'), ('dec', '>f8'), ('ccdzpt', '>f4'),\n ('ccdskycounts', '>f4'), ('ra0', '>f8'), ('dec0', '>f8'), ('ra1', '>f8'),\n ('dec1', '>f8'), ('ra2', '>f8'), ('dec2', '>f8'), ('ra3', '>f8'), ('dec3', '>f8'),\n ('pixscale_mean', '>f4'), ('ebv', '>f4'), ('galdepth', '>f4')])\n\n\ndef fixdtype(data_in, indtype=dtype):\n m = data_in.size\n data_out = np.zeros(m, dtype=dtype)\n for name in dtype.names:\n data_out[name] = data_in[name].astype(dtype[name])\n return data_out", "_____no_output_____" ], [ "#\n# read each ccd file > fix its dtype > move on to the next\nccds_data = []\nfor ccd_i in ccdsn:\n print('working on .... 
%s'%ccd_i)\n data_in = ft.FITS(ccd_i)[1].read(columns=prt_keep)\n #print(data_in.dtype)\n data_out = fixdtype(data_in)\n print('number of ccds in this file : %d'%data_in.size)\n print('number of different dtypes (before) : %d'%len(np.setdiff1d(dtype.descr, data_in.dtype.descr)), np.setdiff1d(dtype.descr, data_in.dtype.descr))\n print('number of different dtypes (after) : %d'%len(np.setdiff1d(dtype.descr, data_out.dtype.descr)), np.setdiff1d(dtype.descr, data_out.dtype.descr))\n ccds_data.append(data_out)", "working on .... /home/mehdi/data/templates/ccds/dr9/ccds-annotated-90prime-dr9-cut.fits\nnumber of ccds in this file : 146268\nnumber of different dtypes (before) : 0 []\nnumber of different dtypes (after) : 0 []\nworking on .... /home/mehdi/data/templates/ccds/dr9/ccds-annotated-decam-dr9-cut.fits\nnumber of ccds in this file : 5824141\nnumber of different dtypes (before) : 1 ['<U7']\nnumber of different dtypes (after) : 0 []\nworking on .... /home/mehdi/data/templates/ccds/dr9/ccds-annotated-mosaic-dr9-cut.fits\nnumber of ccds in this file : 240780\nnumber of different dtypes (before) : 1 ['<U7']\nnumber of different dtypes (after) : 0 []\n" ], [ "ccds_data_c = np.concatenate(ccds_data)\nprint('Total number of combined ccds : %d'%ccds_data_c.size)", "Total number of combined ccds : 6211189\n" ], [ "ft.write('/home/mehdi/data/templates/ccds/dr9/ccds-annotated-dr9-combined.fits',\n ccds_data_c, header=dict(NOTE='dr9 combined'), clobber=True)", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
cb80464f4e5636651f62b872141ae8cb83210336
2,306
ipynb
Jupyter Notebook
src/hunit.ipynb
Eoksni/ihaskell-playground
07b33fd9361fe0a5a25c157c8cc52bc3123034a1
[ "MIT" ]
null
null
null
src/hunit.ipynb
Eoksni/ihaskell-playground
07b33fd9361fe0a5a25c157c8cc52bc3123034a1
[ "MIT" ]
null
null
null
src/hunit.ipynb
Eoksni/ihaskell-playground
07b33fd9361fe0a5a25c157c8cc52bc3123034a1
[ "MIT" ]
null
null
null
20.589286
64
0.408066
[ [ [ "import Test.HUnit\n\n(~~) x y = runTestTT $ x ~=? y\nassert = (~~)", "_____no_output_____" ], [ "(1,2) ~~ (2,3)", "_____no_output_____" ], [ "assert\n (1,2)\n (2,3)", "_____no_output_____" ], [ "2 ~~ 2", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
cb805e5f5261456b6cac0ee116fd770857c5ba91
180,555
ipynb
Jupyter Notebook
pandas basics/Pandas Demo 4.ipynb
ikaushikpal/ML_Prerequisite
cc9bd55d5f2584c5706863911679f384a7fbe763
[ "MIT" ]
2
2021-04-08T05:29:17.000Z
2021-06-07T14:18:38.000Z
pandas basics/Pandas Demo 4.ipynb
Amit-Kumar-Mondal-Tech/ML_Prerequisite
4e2e465c8f3df857fb83e0262aea50795f478819
[ "MIT" ]
null
null
null
pandas basics/Pandas Demo 4.ipynb
Amit-Kumar-Mondal-Tech/ML_Prerequisite
4e2e465c8f3df857fb83e0262aea50795f478819
[ "MIT" ]
1
2021-04-15T13:54:13.000Z
2021-04-15T13:54:13.000Z
46.691234
96
0.364869
[ [ [ "## Filtering - Using Conditionals to Filter Rows and Columns", "_____no_output_____" ] ], [ [ "import pandas as pd", "_____no_output_____" ], [ "people = {\n \"first\": [\"Corey\", 'Jane', 'John'], \n \"last\": [\"Schafer\", 'Doe', 'Doe'], \n \"email\": [\"[email protected]\", '[email protected]', '[email protected]']\n} ", "_____no_output_____" ], [ "df = pd.DataFrame(people)", "_____no_output_____" ], [ "df", "_____no_output_____" ] ], [ [ "**Arithmetic operations on Data Frame**\n\nAND -> & ; OR -> | ; NOT -> ~", "_____no_output_____" ] ], [ [ "df[\"last\"] == \"Doe\"", "_____no_output_____" ], [ "filt = (df[\"last\"] == \"Doe\")", "_____no_output_____" ], [ "filt", "_____no_output_____" ], [ "df[filt]", "_____no_output_____" ], [ "df[df[\"last\"] == \"Doe\"]", "_____no_output_____" ], [ "df.loc[filt]", "_____no_output_____" ], [ "df.loc[filt, 'email']", "_____no_output_____" ], [ " filt = ((df['last'] == \"Doe\") & (df['first']==\"John\"))", "_____no_output_____" ], [ "df.loc[filt, 'email']", "_____no_output_____" ], [ "filt = ((df['last'] == \"Schafer\") | (df['first']==\"John\"))", "_____no_output_____" ], [ "df.loc[filt, ['email']]", "_____no_output_____" ], [ "df.loc[~filt, ['email']]", "_____no_output_____" ] ], [ [ "**Lets Work with Survey Data**", "_____no_output_____" ] ], [ [ "df = pd.read_csv('data/survey_results_public.csv', index_col='Respondent')\nschema_df = pd.read_csv('data/survey_results_schema.csv', index_col='Column')", "_____no_output_____" ], [ "pd.set_option('display.max_columns', 85)\npd.set_option('display.max_rows', 85)", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "high_salary = (df['ConvertedComp'] > 70_000)", "_____no_output_____" ], [ "df.loc[high_salary]", "_____no_output_____" ], [ "df.loc[high_salary, ['Country', 'LanguageWorkedWith', 'ConvertedComp']]", "_____no_output_____" ], [ "countries = (\"United States\", \"India\", \"United Kingdom\", \"Germany\", \"Canada\")\nnew_filt = df[\"Country\"].isin(countries)", "_____no_output_____" ], [ "df.loc[new_filt, ['Country']]", "_____no_output_____" ], [ "df[\"LanguageWorkedWith\"]", "_____no_output_____" ], [ "filt1 = df['LanguageWorkedWith'].str.contains('Python', na=False)", "_____no_output_____" ], [ "df.loc[filt1, 'LanguageWorkedWith']", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb80823690d7101d0254b700cf91c740f7ee5326
114,899
ipynb
Jupyter Notebook
4-assets/BOOKS/Jupyter-Notebooks/Overflow/biking.ipynb
impastasyndrome/Lambda-Resource-Static-Assets
7070672038620d29844991250f2476d0f1a60b0a
[ "MIT" ]
null
null
null
4-assets/BOOKS/Jupyter-Notebooks/Overflow/biking.ipynb
impastasyndrome/Lambda-Resource-Static-Assets
7070672038620d29844991250f2476d0f1a60b0a
[ "MIT" ]
null
null
null
4-assets/BOOKS/Jupyter-Notebooks/Overflow/biking.ipynb
impastasyndrome/Lambda-Resource-Static-Assets
7070672038620d29844991250f2476d0f1a60b0a
[ "MIT" ]
1
2021-11-05T07:48:26.000Z
2021-11-05T07:48:26.000Z
282.307125
30,667
0.90899
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
cb8082c1fdfc090cfe9866abb6cdf607381af9c0
76,243
ipynb
Jupyter Notebook
classifier/DeepLearningClassifiers.ipynb
shivammehta007/NLPinEnglishLearning
ae869d868e39df9b1787134ba6e964acd385dd2e
[ "Apache-2.0" ]
1
2020-05-27T22:21:33.000Z
2020-05-27T22:21:33.000Z
classifier/DeepLearningClassifiers.ipynb
shivammehta007/NLPinEnglishLearning
ae869d868e39df9b1787134ba6e964acd385dd2e
[ "Apache-2.0" ]
null
null
null
classifier/DeepLearningClassifiers.ipynb
shivammehta007/NLPinEnglishLearning
ae869d868e39df9b1787134ba6e964acd385dd2e
[ "Apache-2.0" ]
null
null
null
47.891332
356
0.455399
[ [ [ "<a href=\"https://colab.research.google.com/github/shivammehta007/QuestionGenerator/blob/master/DeepLearningClassifiers.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Testing Classifier Deep Learning Architecture Based", "_____no_output_____" ] ], [ [ "!rm -rf QuestionGenerator\nimport os\nfrom getpass import getpass\nimport urllib\nfrom subprocess import Popen\n\nuser = input('User name: ')\npassword = getpass('Password: ')\npassword = urllib.parse.quote(password) # your password is converted into url format\n# repo_name = input('Repo name: ')\n\ncmd_string = 'git clone --single-branch --branch master https://{0}:{1}@github.com/{0}/{2}.git'.format(user, password, 'QuestionGenerator')\nprint(Popen(cmd_string, shell=True))\ncmd_string, password = \"\", \"\" # removing the password from the variable", "User name: shivammehta007\nPassword: ··········\n<subprocess.Popen object at 0x7f638293ce48>\n" ], [ "%cd QuestionGenerator/classifier/", "/content/QuestionGenerator/classifier\n" ], [ "!python preprocessdata.py", "_____no_output_____" ] ], [ [ "## Download Glove from Kaggle", "_____no_output_____" ] ], [ [ "import os\nimport json\n\nkaggle_info = json.load(open(\"/content/drive/My Drive/kaggle.json\"))\nos.environ['KAGGLE_USERNAME'] = kaggle_info[\"username\"]\nos.environ['KAGGLE_KEY'] = kaggle_info[\"key\"]", "_____no_output_____" ], [ "!kaggle datasets list --user thanakomsn", "Warning: Looks like you're using an outdated API Version, please consider updating (server 1.5.6 / client 1.5.4)\nref title size lastUpdated downloadCount \n------------------------- ----------------- ----- ------------------- ------------- \nthanakomsn/glove6b300dtxt glove.6B.300d.txt 386MB 2017-11-28 07:19:43 2926 \n" ], [ "!kaggle datasets download thanakomsn/glove6b300dtxt ", "Downloading glove6b300dtxt.zip to /content/QuestionGenerator/classifier\n 99% 383M/386M [00:08<00:00, 39.7MB/s]\n100% 386M/386M [00:08<00:00, 48.8MB/s]\n" ], [ "%mkdir .vector_cache\n%mv glove6b300dtxt.zip .vector_cache/", "_____no_output_____" ], [ "!unzip .vector_cache/glove6b300dtxt.zip\n%ls -a .vector_cache/", "Archive: .vector_cache/glove6b300dtxt.zip\n inflating: glove.6B.300d.txt \n\u001b[0m\u001b[01;34m.\u001b[0m/ \u001b[01;34m..\u001b[0m/ glove6b300dtxt.zip\n" ] ], [ [ "## Training", "_____no_output_____" ] ], [ [ "!python train.py -h", "usage: train.py [-h] [-s SEED] [-loc MODEL_LOCATION] [-b BIDIRECTIONAL]\n [-d DROPOUT] [-e EMBEDDING_DIM] [-hd HIDDEN_DIM] [-l N_LAYERS]\n [-lr LEARNING_RATE] [-n EPOCHS] [-batch BATCH_SIZE]\n [-f FREEZE_EMBEDDINGS] [-t {multi,answeronly}]\n [-l2 L2_REGULARIZATION]\n [-m {RNNHiddenClassifier,RNNMaxpoolClassifier,RNNFieldClassifier,CNN2dClassifier,CNN1dClassifier,RNNFieldClassifer,CNN1dExtraLayerClassifier}]\n [-lhd LINEAR_HIDDEN_DIM]\n\nUtility to train the Model\n\noptional arguments:\n -h, --help show this help message and exit\n -s SEED, --seed SEED Set custom seed for reproducibility\n -loc MODEL_LOCATION, --model-location MODEL_LOCATION\n Give an already trained model location to use and\n train more epochs on it\n -b BIDIRECTIONAL, --bidirectional BIDIRECTIONAL\n Makes the model Bidirectional\n -d DROPOUT, --dropout DROPOUT\n Dropout count for the model\n -e EMBEDDING_DIM, --embedding-dim EMBEDDING_DIM\n Embedding Dimensions\n -hd HIDDEN_DIM, --hidden-dim HIDDEN_DIM\n Hidden dimensions of the RNN\n -l N_LAYERS, --n-layers N_LAYERS\n Number of layers in RNN\n -lr LEARNING_RATE, 
--learning-rate LEARNING_RATE\n Learning rate of Adam Optimizer\n -n EPOCHS, --epochs EPOCHS\n Number of Epochs to train model\n -batch BATCH_SIZE, --batch_size BATCH_SIZE\n Number of Epochs to train model\n -f FREEZE_EMBEDDINGS, --freeze-embeddings FREEZE_EMBEDDINGS\n Freeze Embeddings of Model\n -t {multi,answeronly}, --tag {multi,answeronly}\n Use two different dataset type, multi type and single\n type where all are merged into same key\n -l2 L2_REGULARIZATION, --l2-regularization L2_REGULARIZATION\n Value of alpha in l2 regularization 0 means no\n regularization\n -m {RNNHiddenClassifier,RNNMaxpoolClassifier,RNNFieldClassifier,CNN2dClassifier,CNN1dClassifier,RNNFieldClassifer,CNN1dExtraLayerClassifier}, --model {RNNHiddenClassifier,RNNMaxpoolClassifier,RNNFieldClassifier,CNN2dClassifier,CNN1dClassifier,RNNFieldClassifer,CNN1dExtraLayerClassifier}\n select the classifier to train on\n -lhd LINEAR_HIDDEN_DIM, --linear-hidden-dim LINEAR_HIDDEN_DIM\n Freeze Embeddings of Model\n" ] ], [ [ "## RNNClassifiers", "_____no_output_____" ], [ "## RNN Classifiers", "_____no_output_____" ], [ "### RNNHiddenClassifier", "_____no_output_____" ] ], [ [ "!python train.py -n 50 --tag answeronly", "[DEBUG | train.py:307 - <module>() ] Namespace(batch_size=64, bidirectional=True, dropout=0.7, embedding_dim=300, epochs=50, freeze_embeddings=1, hidden_dim=128, l2_regularization=0.001, learning_rate=0.001, linear_hidden_dim=128, model='RNNHiddenClassifier', model_location=None, n_layers=1, seed=1234, tag='answeronly')\n[DEBUG | train.py:308 - <module>() ] Custom seed set with: 1234\n[INFO | train.py:310 - <module>() ] Loading Dataset\n[DEBUG | datasetloader.py:161 - get_iterators() ] Data Loaded Successfully!\n[INFO | vocab.py:386 - cache() ] Loading vectors from .vector_cache/glove.6B.300d.txt.pt\n[DEBUG | datasetloader.py:172 - get_iterators() ] Vocabulary Loaded\n[DEBUG | datasetloader.py:180 - get_iterators() ] Created Iterators\n[INFO | train.py:317 - <module>() ] Dataset Loaded Successfully\n[DEBUG | train.py:73 - initialize_new_model() ] Initializing Model\n/usr/local/lib/python3.6/dist-packages/torch/nn/modules/rnn.py:50: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.7 and num_layers=1\n \"num_layers={}\".format(dropout, num_layers))\n[DEBUG | train.py:162 - initialize_new_model() ] Freeze Embeddings Value 1: False\n[INFO | train.py:168 - initialize_new_model() ] Model Initialized with 443,147 trainiable parameters\n[DEBUG | train.py:180 - initialize_new_model() ] Copied PreTrained Embeddings\n[INFO | train.py:343 - <module>() ] RNNHiddenClassifier(\n (embedding): Embedding(741, 300, padding_idx=1)\n (rnn): LSTM(300, 128, dropout=0.7, bidirectional=True)\n (fc): Linear(in_features=256, out_features=11, bias=True)\n (dropout): Dropout(p=0.7, inplace=False)\n)\n100% 9/9 [00:00<00:00, 142.94it/s]\n100% 2/2 [00:00<00:00, 380.78it/s]\nEpoch: 01 | Epoch Time: 0m 0s\n\tTrain Loss: 2.374 | Train Acc: 12.56%\n\t Val. Loss: 2.265 | Val. Acc: 17.27%\n100% 9/9 [00:00<00:00, 146.50it/s]\n100% 2/2 [00:00<00:00, 273.63it/s]\nEpoch: 02 | Epoch Time: 0m 0s\n\tTrain Loss: 2.294 | Train Acc: 16.75%\n\t Val. Loss: 2.233 | Val. Acc: 17.48%\n100% 9/9 [00:00<00:00, 186.10it/s]\n100% 2/2 [00:00<00:00, 376.56it/s]\nEpoch: 03 | Epoch Time: 0m 0s\n\tTrain Loss: 2.232 | Train Acc: 19.16%\n\t Val. Loss: 2.269 | Val. 
Acc: 7.81%\n100% 9/9 [00:00<00:00, 159.00it/s]\n100% 2/2 [00:00<00:00, 356.29it/s]\nEpoch: 04 | Epoch Time: 0m 0s\n\tTrain Loss: 2.253 | Train Acc: 18.34%\n\t Val. Loss: 2.234 | Val. Acc: 7.81%\n100% 9/9 [00:00<00:00, 175.28it/s]\n100% 2/2 [00:00<00:00, 380.30it/s]\nEpoch: 05 | Epoch Time: 0m 0s\n\tTrain Loss: 2.228 | Train Acc: 19.55%\n\t Val. Loss: 2.227 | Val. Acc: 7.81%\n100% 9/9 [00:00<00:00, 179.69it/s]\n100% 2/2 [00:00<00:00, 377.20it/s]\nEpoch: 06 | Epoch Time: 0m 0s\n\tTrain Loss: 2.194 | Train Acc: 20.36%\n\t Val. Loss: 2.219 | Val. Acc: 7.81%\n100% 9/9 [00:00<00:00, 181.78it/s]\n100% 2/2 [00:00<00:00, 336.93it/s]\nEpoch: 07 | Epoch Time: 0m 0s\n\tTrain Loss: 2.221 | Train Acc: 21.09%\n\t Val. Loss: 2.215 | Val. Acc: 7.81%\n100% 9/9 [00:00<00:00, 177.93it/s]\n100% 2/2 [00:00<00:00, 393.37it/s]\nEpoch: 08 | Epoch Time: 0m 0s\n\tTrain Loss: 2.199 | Train Acc: 21.98%\n\t Val. Loss: 2.138 | Val. Acc: 24.87%\n100% 9/9 [00:00<00:00, 184.52it/s]\n100% 2/2 [00:00<00:00, 373.76it/s]\nEpoch: 09 | Epoch Time: 0m 0s\n\tTrain Loss: 2.162 | Train Acc: 24.95%\n\t Val. Loss: 2.096 | Val. Acc: 24.09%\n100% 9/9 [00:00<00:00, 160.60it/s]\n100% 2/2 [00:00<00:00, 379.78it/s]\nEpoch: 10 | Epoch Time: 0m 0s\n\tTrain Loss: 2.110 | Train Acc: 26.60%\n\t Val. Loss: 2.073 | Val. Acc: 18.26%\n100% 9/9 [00:00<00:00, 177.02it/s]\n100% 2/2 [00:00<00:00, 376.19it/s]\nEpoch: 11 | Epoch Time: 0m 0s\n\tTrain Loss: 2.075 | Train Acc: 27.58%\n\t Val. Loss: 1.963 | Val. Acc: 29.35%\n100% 9/9 [00:00<00:00, 180.02it/s]\n100% 2/2 [00:00<00:00, 258.52it/s]\nEpoch: 12 | Epoch Time: 0m 0s\n\tTrain Loss: 1.986 | Train Acc: 31.48%\n\t Val. Loss: 1.859 | Val. Acc: 32.62%\n100% 9/9 [00:00<00:00, 187.75it/s]\n100% 2/2 [00:00<00:00, 393.11it/s]\nEpoch: 13 | Epoch Time: 0m 0s\n\tTrain Loss: 1.875 | Train Acc: 35.54%\n\t Val. Loss: 1.773 | Val. Acc: 28.36%\n100% 9/9 [00:00<00:00, 187.00it/s]\n100% 2/2 [00:00<00:00, 408.78it/s]\nEpoch: 14 | Epoch Time: 0m 0s\n\tTrain Loss: 1.734 | Train Acc: 37.82%\n\t Val. Loss: 1.613 | Val. Acc: 35.90%\n100% 9/9 [00:00<00:00, 191.22it/s]\n100% 2/2 [00:00<00:00, 355.13it/s]\nEpoch: 15 | Epoch Time: 0m 0s\n\tTrain Loss: 1.584 | Train Acc: 44.85%\n\t Val. Loss: 1.593 | Val. Acc: 34.54%\n100% 9/9 [00:00<00:00, 187.03it/s]\n100% 2/2 [00:00<00:00, 388.70it/s]\nEpoch: 16 | Epoch Time: 0m 0s\n\tTrain Loss: 1.494 | Train Acc: 46.41%\n\t Val. Loss: 1.575 | Val. Acc: 28.57%\n100% 9/9 [00:00<00:00, 177.76it/s]\n100% 2/2 [00:00<00:00, 397.92it/s]\nEpoch: 17 | Epoch Time: 0m 0s\n\tTrain Loss: 1.425 | Train Acc: 47.26%\n\t Val. Loss: 1.226 | Val. Acc: 45.48%\n100% 9/9 [00:00<00:00, 190.68it/s]\n100% 2/2 [00:00<00:00, 358.96it/s]\nEpoch: 18 | Epoch Time: 0m 0s\n\tTrain Loss: 1.268 | Train Acc: 51.83%\n\t Val. Loss: 1.238 | Val. Acc: 44.70%\n100% 9/9 [00:00<00:00, 191.08it/s]\n100% 2/2 [00:00<00:00, 386.82it/s]\nEpoch: 19 | Epoch Time: 0m 0s\n\tTrain Loss: 1.202 | Train Acc: 57.19%\n\t Val. Loss: 1.246 | Val. Acc: 46.41%\n100% 9/9 [00:00<00:00, 189.05it/s]\n100% 2/2 [00:00<00:00, 382.87it/s]\nEpoch: 20 | Epoch Time: 0m 0s\n\tTrain Loss: 1.134 | Train Acc: 58.13%\n\t Val. Loss: 1.041 | Val. Acc: 56.78%\n100% 9/9 [00:00<00:00, 181.70it/s]\n100% 2/2 [00:00<00:00, 351.87it/s]\nEpoch: 21 | Epoch Time: 0m 0s\n\tTrain Loss: 1.189 | Train Acc: 57.67%\n\t Val. Loss: 1.111 | Val. Acc: 50.53%\n100% 9/9 [00:00<00:00, 173.09it/s]\n100% 2/2 [00:00<00:00, 397.92it/s]\nEpoch: 22 | Epoch Time: 0m 0s\n\tTrain Loss: 1.055 | Train Acc: 61.34%\n\t Val. Loss: 0.885 | Val. 
Acc: 59.27%\n100% 9/9 [00:00<00:00, 170.67it/s]\n100% 2/2 [00:00<00:00, 332.91it/s]\nEpoch: 23 | Epoch Time: 0m 0s\n\tTrain Loss: 0.979 | Train Acc: 62.62%\n\t Val. Loss: 0.894 | Val. Acc: 60.05%\n100% 9/9 [00:00<00:00, 182.19it/s]\n100% 2/2 [00:00<00:00, 386.25it/s]\nEpoch: 24 | Epoch Time: 0m 0s\n\tTrain Loss: 0.904 | Train Acc: 66.63%\n\t Val. Loss: 0.902 | Val. Acc: 59.06%\n100% 9/9 [00:00<00:00, 179.40it/s]\n100% 2/2 [00:00<00:00, 347.70it/s]\nEpoch: 25 | Epoch Time: 0m 0s\n\tTrain Loss: 0.840 | Train Acc: 72.71%\n\t Val. Loss: 0.818 | Val. Acc: 61.40%\n100% 9/9 [00:00<00:00, 166.54it/s]\n100% 2/2 [00:00<00:00, 385.95it/s]\nEpoch: 26 | Epoch Time: 0m 0s\n\tTrain Loss: 0.784 | Train Acc: 72.75%\n\t Val. Loss: 0.789 | Val. Acc: 61.04%\n100% 9/9 [00:00<00:00, 170.37it/s]\n100% 2/2 [00:00<00:00, 388.45it/s]\nEpoch: 27 | Epoch Time: 0m 0s\n\tTrain Loss: 0.767 | Train Acc: 72.66%\n\t Val. Loss: 0.739 | Val. Acc: 65.10%\n100% 9/9 [00:00<00:00, 178.10it/s]\n100% 2/2 [00:00<00:00, 394.39it/s]\nEpoch: 28 | Epoch Time: 0m 0s\n\tTrain Loss: 0.716 | Train Acc: 76.05%\n\t Val. Loss: 0.732 | Val. Acc: 62.96%\n100% 9/9 [00:00<00:00, 187.83it/s]\n100% 2/2 [00:00<00:00, 381.20it/s]\nEpoch: 29 | Epoch Time: 0m 0s\n\tTrain Loss: 0.710 | Train Acc: 75.57%\n\t Val. Loss: 0.681 | Val. Acc: 65.88%\n100% 9/9 [00:00<00:00, 184.32it/s]\n100% 2/2 [00:00<00:00, 398.89it/s]\nEpoch: 30 | Epoch Time: 0m 0s\n\tTrain Loss: 0.665 | Train Acc: 75.98%\n\t Val. Loss: 0.682 | Val. Acc: 70.50%\n100% 9/9 [00:00<00:00, 173.39it/s]\n100% 2/2 [00:00<00:00, 388.54it/s]\nEpoch: 31 | Epoch Time: 0m 0s\n\tTrain Loss: 0.608 | Train Acc: 78.15%\n\t Val. Loss: 0.703 | Val. Acc: 69.15%\n100% 9/9 [00:00<00:00, 193.42it/s]\n100% 2/2 [00:00<00:00, 372.23it/s]\nEpoch: 32 | Epoch Time: 0m 0s\n\tTrain Loss: 0.635 | Train Acc: 77.48%\n\t Val. Loss: 0.616 | Val. Acc: 74.20%\n100% 9/9 [00:00<00:00, 187.25it/s]\n100% 2/2 [00:00<00:00, 401.93it/s]\nEpoch: 33 | Epoch Time: 0m 0s\n\tTrain Loss: 0.578 | Train Acc: 79.67%\n\t Val. Loss: 0.599 | Val. Acc: 75.76%\n100% 9/9 [00:00<00:00, 183.43it/s]\n100% 2/2 [00:00<00:00, 365.69it/s]\nEpoch: 34 | Epoch Time: 0m 0s\n\tTrain Loss: 0.581 | Train Acc: 78.85%\n\t Val. Loss: 0.655 | Val. Acc: 70.50%\n100% 9/9 [00:00<00:00, 184.68it/s]\n100% 2/2 [00:00<00:00, 390.62it/s]\nEpoch: 35 | Epoch Time: 0m 0s\n\tTrain Loss: 0.561 | Train Acc: 80.24%\n\t Val. Loss: 0.546 | Val. Acc: 78.82%\n100% 9/9 [00:00<00:00, 165.36it/s]\n100% 2/2 [00:00<00:00, 380.18it/s]\nEpoch: 36 | Epoch Time: 0m 0s\n\tTrain Loss: 0.523 | Train Acc: 81.17%\n\t Val. Loss: 0.555 | Val. Acc: 77.11%\n100% 9/9 [00:00<00:00, 181.96it/s]\n100% 2/2 [00:00<00:00, 310.02it/s]\nEpoch: 37 | Epoch Time: 0m 0s\n\tTrain Loss: 0.575 | Train Acc: 80.00%\n\t Val. Loss: 0.561 | Val. Acc: 77.11%\n100% 9/9 [00:00<00:00, 182.71it/s]\n100% 2/2 [00:00<00:00, 381.23it/s]\nEpoch: 38 | Epoch Time: 0m 0s\n\tTrain Loss: 0.546 | Train Acc: 79.78%\n\t Val. Loss: 0.563 | Val. Acc: 76.12%\n100% 9/9 [00:00<00:00, 157.79it/s]\n100% 2/2 [00:00<00:00, 352.52it/s]\nEpoch: 39 | Epoch Time: 0m 0s\n\tTrain Loss: 0.513 | Train Acc: 81.38%\n\t Val. Loss: 0.573 | Val. Acc: 70.71%\n100% 9/9 [00:00<00:00, 180.74it/s]\n100% 2/2 [00:00<00:00, 365.01it/s]\nEpoch: 40 | Epoch Time: 0m 0s\n\tTrain Loss: 0.483 | Train Acc: 83.16%\n\t Val. Loss: 0.543 | Val. Acc: 78.46%\n100% 9/9 [00:00<00:00, 183.49it/s]\n100% 2/2 [00:00<00:00, 371.11it/s]\nEpoch: 41 | Epoch Time: 0m 0s\n\tTrain Loss: 0.516 | Train Acc: 80.62%\n\t Val. Loss: 0.551 | Val. 
Acc: 76.33%\n100% 9/9 [00:00<00:00, 160.31it/s]\n100% 2/2 [00:00<00:00, 398.32it/s]\nEpoch: 42 | Epoch Time: 0m 0s\n\tTrain Loss: 0.476 | Train Acc: 82.56%\n\t Val. Loss: 0.557 | Val. Acc: 72.07%\n100% 9/9 [00:00<00:00, 187.72it/s]\n100% 2/2 [00:00<00:00, 226.81it/s]\nEpoch: 43 | Epoch Time: 0m 0s\n\tTrain Loss: 0.466 | Train Acc: 83.95%\n\t Val. Loss: 0.556 | Val. Acc: 79.24%\n100% 9/9 [00:00<00:00, 155.01it/s]\n100% 2/2 [00:00<00:00, 346.14it/s]\nEpoch: 44 | Epoch Time: 0m 0s\n\tTrain Loss: 0.452 | Train Acc: 83.60%\n\t Val. Loss: 0.622 | Val. Acc: 70.71%\n100% 9/9 [00:00<00:00, 174.61it/s]\n100% 2/2 [00:00<00:00, 393.20it/s]\nEpoch: 45 | Epoch Time: 0m 0s\n\tTrain Loss: 0.410 | Train Acc: 85.03%\n\t Val. Loss: 0.587 | Val. Acc: 73.63%\n100% 9/9 [00:00<00:00, 185.27it/s]\n100% 2/2 [00:00<00:00, 375.73it/s]\nEpoch: 46 | Epoch Time: 0m 0s\n\tTrain Loss: 0.409 | Train Acc: 85.44%\n\t Val. Loss: 0.577 | Val. Acc: 73.42%\n100% 9/9 [00:00<00:00, 191.89it/s]\n100% 2/2 [00:00<00:00, 390.10it/s]\nEpoch: 47 | Epoch Time: 0m 0s\n\tTrain Loss: 0.384 | Train Acc: 87.24%\n\t Val. Loss: 0.558 | Val. Acc: 74.98%\n100% 9/9 [00:00<00:00, 192.97it/s]\n100% 2/2 [00:00<00:00, 372.31it/s]\nEpoch: 48 | Epoch Time: 0m 0s\n\tTrain Loss: 0.413 | Train Acc: 84.99%\n\t Val. Loss: 0.585 | Val. Acc: 73.63%\n100% 9/9 [00:00<00:00, 187.57it/s]\n100% 2/2 [00:00<00:00, 382.88it/s]\nEpoch: 49 | Epoch Time: 0m 0s\n\tTrain Loss: 0.365 | Train Acc: 87.48%\n\t Val. Loss: 0.538 | Val. Acc: 73.42%\n100% 9/9 [00:00<00:00, 182.74it/s]\n100% 2/2 [00:00<00:00, 345.11it/s]\nEpoch: 50 | Epoch Time: 0m 0s\n\tTrain Loss: 0.344 | Train Acc: 88.37%\n\t Val. Loss: 0.507 | Val. Acc: 77.32%\n" ] ], [ [ "### RNNMaxPoolClassifier", "_____no_output_____" ] ], [ [ "!python train.py -n 50 -m RNNMaxpoolClassifier --tag answeronly", "[DEBUG | train.py:307 - <module>() ] Namespace(batch_size=64, bidirectional=True, dropout=0.7, embedding_dim=300, epochs=50, freeze_embeddings=1, hidden_dim=128, l2_regularization=0.001, learning_rate=0.001, linear_hidden_dim=128, model='RNNMaxpoolClassifier', model_location=None, n_layers=1, seed=1234, tag='answeronly')\n[DEBUG | train.py:308 - <module>() ] Custom seed set with: 1234\n[INFO | train.py:310 - <module>() ] Loading Dataset\n[DEBUG | datasetloader.py:161 - get_iterators() ] Data Loaded Successfully!\n[INFO | vocab.py:386 - cache() ] Loading vectors from .vector_cache/glove.6B.300d.txt.pt\n[DEBUG | datasetloader.py:172 - get_iterators() ] Vocabulary Loaded\n[DEBUG | datasetloader.py:180 - get_iterators() ] Created Iterators\n[INFO | train.py:317 - <module>() ] Dataset Loaded Successfully\n[DEBUG | train.py:73 - initialize_new_model() ] Initializing Model\n/usr/local/lib/python3.6/dist-packages/torch/nn/modules/rnn.py:50: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.7 and num_layers=1\n \"num_layers={}\".format(dropout, num_layers))\n[DEBUG | train.py:162 - initialize_new_model() ] Freeze Embeddings Value 1: False\n[INFO | train.py:168 - initialize_new_model() ] Model Initialized with 441,739 trainiable parameters\n[DEBUG | train.py:180 - initialize_new_model() ] Copied PreTrained Embeddings\n[INFO | train.py:343 - <module>() ] RNNMaxpoolClassifier(\n (embedding): Embedding(741, 300, padding_idx=1)\n (rnn): LSTM(300, 128, dropout=0.7, bidirectional=True)\n (fc): Linear(in_features=128, out_features=11, bias=True)\n (dropout): Dropout(p=0.7, inplace=False)\n)\n100% 9/9 [00:00<00:00, 151.74it/s]\n100% 2/2 
[00:00<00:00, 381.47it/s]\nEpoch: 01 | Epoch Time: 0m 0s\n\tTrain Loss: 2.350 | Train Acc: 11.95%\n\t Val. Loss: 2.282 | Val. Acc: 13.79%\n100% 9/9 [00:00<00:00, 201.18it/s]\n100% 2/2 [00:00<00:00, 421.16it/s]\nEpoch: 02 | Epoch Time: 0m 0s\n\tTrain Loss: 2.247 | Train Acc: 18.53%\n\t Val. Loss: 2.239 | Val. Acc: 7.81%\n100% 9/9 [00:00<00:00, 203.89it/s]\n100% 2/2 [00:00<00:00, 415.67it/s]\nEpoch: 03 | Epoch Time: 0m 0s\n\tTrain Loss: 2.215 | Train Acc: 21.07%\n\t Val. Loss: 2.269 | Val. Acc: 7.81%\n100% 9/9 [00:00<00:00, 178.76it/s]\n100% 2/2 [00:00<00:00, 382.10it/s]\nEpoch: 04 | Epoch Time: 0m 0s\n\tTrain Loss: 2.203 | Train Acc: 17.64%\n\t Val. Loss: 2.240 | Val. Acc: 7.81%\n100% 9/9 [00:00<00:00, 196.92it/s]\n100% 2/2 [00:00<00:00, 380.83it/s]\nEpoch: 05 | Epoch Time: 0m 0s\n\tTrain Loss: 2.190 | Train Acc: 19.14%\n\t Val. Loss: 2.218 | Val. Acc: 7.81%\n100% 9/9 [00:00<00:00, 196.90it/s]\n100% 2/2 [00:00<00:00, 393.31it/s]\nEpoch: 06 | Epoch Time: 0m 0s\n\tTrain Loss: 2.162 | Train Acc: 22.81%\n\t Val. Loss: 2.191 | Val. Acc: 7.81%\n100% 9/9 [00:00<00:00, 197.90it/s]\n100% 2/2 [00:00<00:00, 407.43it/s]\nEpoch: 07 | Epoch Time: 0m 0s\n\tTrain Loss: 2.151 | Train Acc: 20.88%\n\t Val. Loss: 2.160 | Val. Acc: 8.59%\n100% 9/9 [00:00<00:00, 145.70it/s]\n100% 2/2 [00:00<00:00, 334.94it/s]\nEpoch: 08 | Epoch Time: 0m 0s\n\tTrain Loss: 2.105 | Train Acc: 28.58%\n\t Val. Loss: 2.031 | Val. Acc: 30.70%\n100% 9/9 [00:00<00:00, 187.21it/s]\n100% 2/2 [00:00<00:00, 390.55it/s]\nEpoch: 09 | Epoch Time: 0m 0s\n\tTrain Loss: 2.042 | Train Acc: 33.37%\n\t Val. Loss: 1.928 | Val. Acc: 37.88%\n100% 9/9 [00:00<00:00, 187.63it/s]\n100% 2/2 [00:00<00:00, 409.74it/s]\nEpoch: 10 | Epoch Time: 0m 0s\n\tTrain Loss: 1.954 | Train Acc: 35.57%\n\t Val. Loss: 1.836 | Val. Acc: 40.22%\n100% 9/9 [00:00<00:00, 191.85it/s]\n100% 2/2 [00:00<00:00, 423.47it/s]\nEpoch: 11 | Epoch Time: 0m 0s\n\tTrain Loss: 1.791 | Train Acc: 43.31%\n\t Val. Loss: 1.671 | Val. Acc: 40.01%\n100% 9/9 [00:00<00:00, 198.87it/s]\n100% 2/2 [00:00<00:00, 390.29it/s]\nEpoch: 12 | Epoch Time: 0m 0s\n\tTrain Loss: 1.654 | Train Acc: 45.72%\n\t Val. Loss: 1.494 | Val. Acc: 36.95%\n100% 9/9 [00:00<00:00, 169.47it/s]\n100% 2/2 [00:00<00:00, 392.27it/s]\nEpoch: 13 | Epoch Time: 0m 0s\n\tTrain Loss: 1.487 | Train Acc: 49.03%\n\t Val. Loss: 1.420 | Val. Acc: 43.71%\n100% 9/9 [00:00<00:00, 196.27it/s]\n100% 2/2 [00:00<00:00, 322.78it/s]\nEpoch: 14 | Epoch Time: 0m 0s\n\tTrain Loss: 1.314 | Train Acc: 54.26%\n\t Val. Loss: 1.217 | Val. Acc: 46.47%\n100% 9/9 [00:00<00:00, 195.57it/s]\n100% 2/2 [00:00<00:00, 396.91it/s]\nEpoch: 15 | Epoch Time: 0m 0s\n\tTrain Loss: 1.220 | Train Acc: 60.86%\n\t Val. Loss: 1.144 | Val. Acc: 53.23%\n100% 9/9 [00:00<00:00, 191.57it/s]\n100% 2/2 [00:00<00:00, 407.65it/s]\nEpoch: 16 | Epoch Time: 0m 0s\n\tTrain Loss: 1.077 | Train Acc: 66.09%\n\t Val. Loss: 1.086 | Val. Acc: 49.18%\n100% 9/9 [00:00<00:00, 203.47it/s]\n100% 2/2 [00:00<00:00, 378.56it/s]\nEpoch: 17 | Epoch Time: 0m 0s\n\tTrain Loss: 0.996 | Train Acc: 65.31%\n\t Val. Loss: 1.006 | Val. Acc: 52.87%\n100% 9/9 [00:00<00:00, 196.36it/s]\n100% 2/2 [00:00<00:00, 403.07it/s]\nEpoch: 18 | Epoch Time: 0m 0s\n\tTrain Loss: 0.908 | Train Acc: 69.23%\n\t Val. Loss: 0.967 | Val. Acc: 57.71%\n100% 9/9 [00:00<00:00, 198.32it/s]\n100% 2/2 [00:00<00:00, 396.19it/s]\nEpoch: 19 | Epoch Time: 0m 0s\n\tTrain Loss: 0.855 | Train Acc: 70.69%\n\t Val. Loss: 0.908 | Val. 
Acc: 60.83%\n100% 9/9 [00:00<00:00, 197.89it/s]\n100% 2/2 [00:00<00:00, 269.80it/s]\nEpoch: 20 | Epoch Time: 0m 0s\n\tTrain Loss: 0.808 | Train Acc: 71.32%\n\t Val. Loss: 0.850 | Val. Acc: 67.02%\n100% 9/9 [00:00<00:00, 197.99it/s]\n100% 2/2 [00:00<00:00, 403.63it/s]\nEpoch: 21 | Epoch Time: 0m 0s\n\tTrain Loss: 0.740 | Train Acc: 75.01%\n\t Val. Loss: 0.896 | Val. Acc: 63.75%\n100% 9/9 [00:00<00:00, 191.05it/s]\n100% 2/2 [00:00<00:00, 395.26it/s]\nEpoch: 22 | Epoch Time: 0m 0s\n\tTrain Loss: 0.695 | Train Acc: 78.72%\n\t Val. Loss: 0.822 | Val. Acc: 62.75%\n100% 9/9 [00:00<00:00, 194.35it/s]\n100% 2/2 [00:00<00:00, 387.88it/s]\nEpoch: 23 | Epoch Time: 0m 0s\n\tTrain Loss: 0.611 | Train Acc: 79.82%\n\t Val. Loss: 0.754 | Val. Acc: 67.44%\n100% 9/9 [00:00<00:00, 180.35it/s]\n100% 2/2 [00:00<00:00, 362.80it/s]\nEpoch: 24 | Epoch Time: 0m 0s\n\tTrain Loss: 0.606 | Train Acc: 77.59%\n\t Val. Loss: 0.769 | Val. Acc: 68.58%\n100% 9/9 [00:00<00:00, 197.61it/s]\n100% 2/2 [00:00<00:00, 387.41it/s]\nEpoch: 25 | Epoch Time: 0m 0s\n\tTrain Loss: 0.590 | Train Acc: 79.30%\n\t Val. Loss: 0.702 | Val. Acc: 65.67%\n100% 9/9 [00:00<00:00, 194.32it/s]\n100% 2/2 [00:00<00:00, 407.67it/s]\nEpoch: 26 | Epoch Time: 0m 0s\n\tTrain Loss: 0.557 | Train Acc: 80.97%\n\t Val. Loss: 0.725 | Val. Acc: 69.15%\n100% 9/9 [00:00<00:00, 185.74it/s]\n100% 2/2 [00:00<00:00, 340.50it/s]\nEpoch: 27 | Epoch Time: 0m 0s\n\tTrain Loss: 0.542 | Train Acc: 82.64%\n\t Val. Loss: 0.765 | Val. Acc: 67.02%\n100% 9/9 [00:00<00:00, 204.70it/s]\n100% 2/2 [00:00<00:00, 403.69it/s]\nEpoch: 28 | Epoch Time: 0m 0s\n\tTrain Loss: 0.452 | Train Acc: 87.50%\n\t Val. Loss: 0.664 | Val. Acc: 70.14%\n100% 9/9 [00:00<00:00, 177.93it/s]\n100% 2/2 [00:00<00:00, 304.43it/s]\nEpoch: 29 | Epoch Time: 0m 0s\n\tTrain Loss: 0.448 | Train Acc: 85.66%\n\t Val. Loss: 0.717 | Val. Acc: 69.57%\n100% 9/9 [00:00<00:00, 191.46it/s]\n100% 2/2 [00:00<00:00, 368.21it/s]\nEpoch: 30 | Epoch Time: 0m 0s\n\tTrain Loss: 0.442 | Train Acc: 84.75%\n\t Val. Loss: 0.654 | Val. Acc: 69.36%\n100% 9/9 [00:00<00:00, 176.87it/s]\n100% 2/2 [00:00<00:00, 373.21it/s]\nEpoch: 31 | Epoch Time: 0m 0s\n\tTrain Loss: 0.417 | Train Acc: 86.81%\n\t Val. Loss: 0.654 | Val. Acc: 72.49%\n100% 9/9 [00:00<00:00, 181.23it/s]\n100% 2/2 [00:00<00:00, 409.88it/s]\nEpoch: 32 | Epoch Time: 0m 0s\n\tTrain Loss: 0.395 | Train Acc: 87.50%\n\t Val. Loss: 0.643 | Val. Acc: 70.71%\n100% 9/9 [00:00<00:00, 199.50it/s]\n100% 2/2 [00:00<00:00, 418.11it/s]\nEpoch: 33 | Epoch Time: 0m 0s\n\tTrain Loss: 0.389 | Train Acc: 86.40%\n\t Val. Loss: 0.588 | Val. Acc: 74.41%\n100% 9/9 [00:00<00:00, 194.77it/s]\n100% 2/2 [00:00<00:00, 409.58it/s]\nEpoch: 34 | Epoch Time: 0m 0s\n\tTrain Loss: 0.346 | Train Acc: 89.65%\n\t Val. Loss: 0.652 | Val. Acc: 71.71%\n100% 9/9 [00:00<00:00, 209.36it/s]\n100% 2/2 [00:00<00:00, 410.84it/s]\nEpoch: 35 | Epoch Time: 0m 0s\n\tTrain Loss: 0.336 | Train Acc: 90.39%\n\t Val. Loss: 0.583 | Val. Acc: 73.42%\n100% 9/9 [00:00<00:00, 197.28it/s]\n100% 2/2 [00:00<00:00, 402.70it/s]\nEpoch: 36 | Epoch Time: 0m 0s\n\tTrain Loss: 0.340 | Train Acc: 88.07%\n\t Val. Loss: 0.643 | Val. Acc: 73.06%\n100% 9/9 [00:00<00:00, 194.15it/s]\n100% 2/2 [00:00<00:00, 418.51it/s]\nEpoch: 37 | Epoch Time: 0m 0s\n\tTrain Loss: 0.355 | Train Acc: 88.22%\n\t Val. Loss: 0.604 | Val. Acc: 70.14%\n100% 9/9 [00:00<00:00, 206.56it/s]\n100% 2/2 [00:00<00:00, 427.51it/s]\nEpoch: 38 | Epoch Time: 0m 0s\n\tTrain Loss: 0.317 | Train Acc: 90.15%\n\t Val. Loss: 0.617 | Val. 
Acc: 72.49%\n100% 9/9 [00:00<00:00, 206.99it/s]\n100% 2/2 [00:00<00:00, 391.57it/s]\nEpoch: 39 | Epoch Time: 0m 0s\n\tTrain Loss: 0.318 | Train Acc: 91.02%\n\t Val. Loss: 0.566 | Val. Acc: 74.20%\n100% 9/9 [00:00<00:00, 201.02it/s]\n100% 2/2 [00:00<00:00, 422.69it/s]\nEpoch: 40 | Epoch Time: 0m 0s\n\tTrain Loss: 0.282 | Train Acc: 91.95%\n\t Val. Loss: 0.590 | Val. Acc: 76.33%\n100% 9/9 [00:00<00:00, 199.15it/s]\n100% 2/2 [00:00<00:00, 396.44it/s]\nEpoch: 41 | Epoch Time: 0m 0s\n\tTrain Loss: 0.279 | Train Acc: 91.91%\n\t Val. Loss: 0.588 | Val. Acc: 75.76%\n100% 9/9 [00:00<00:00, 200.52it/s]\n100% 2/2 [00:00<00:00, 396.10it/s]\nEpoch: 42 | Epoch Time: 0m 0s\n\tTrain Loss: 0.280 | Train Acc: 90.28%\n\t Val. Loss: 0.571 | Val. Acc: 74.98%\n100% 9/9 [00:00<00:00, 197.83it/s]\n100% 2/2 [00:00<00:00, 403.34it/s]\nEpoch: 43 | Epoch Time: 0m 0s\n\tTrain Loss: 0.263 | Train Acc: 92.64%\n\t Val. Loss: 0.596 | Val. Acc: 76.75%\n100% 9/9 [00:00<00:00, 199.95it/s]\n100% 2/2 [00:00<00:00, 409.04it/s]\nEpoch: 44 | Epoch Time: 0m 0s\n\tTrain Loss: 0.233 | Train Acc: 93.88%\n\t Val. Loss: 0.635 | Val. Acc: 73.06%\n100% 9/9 [00:00<00:00, 195.16it/s]\n100% 2/2 [00:00<00:00, 245.18it/s]\nEpoch: 45 | Epoch Time: 0m 0s\n\tTrain Loss: 0.241 | Train Acc: 92.99%\n\t Val. Loss: 0.591 | Val. Acc: 78.67%\n100% 9/9 [00:00<00:00, 193.40it/s]\n100% 2/2 [00:00<00:00, 395.41it/s]\nEpoch: 46 | Epoch Time: 0m 0s\n\tTrain Loss: 0.253 | Train Acc: 90.91%\n\t Val. Loss: 0.657 | Val. Acc: 74.41%\n100% 9/9 [00:00<00:00, 188.06it/s]\n100% 2/2 [00:00<00:00, 400.35it/s]\nEpoch: 47 | Epoch Time: 0m 0s\n\tTrain Loss: 0.239 | Train Acc: 93.32%\n\t Val. Loss: 0.788 | Val. Acc: 73.63%\n100% 9/9 [00:00<00:00, 175.68it/s]\n100% 2/2 [00:00<00:00, 429.44it/s]\nEpoch: 48 | Epoch Time: 0m 0s\n\tTrain Loss: 0.293 | Train Acc: 89.50%\n\t Val. Loss: 0.784 | Val. Acc: 70.14%\n100% 9/9 [00:00<00:00, 203.15it/s]\n100% 2/2 [00:00<00:00, 382.64it/s]\nEpoch: 49 | Epoch Time: 0m 0s\n\tTrain Loss: 0.300 | Train Acc: 90.87%\n\t Val. Loss: 0.655 | Val. Acc: 72.28%\n100% 9/9 [00:00<00:00, 201.16it/s]\n100% 2/2 [00:00<00:00, 413.35it/s]\nEpoch: 50 | Epoch Time: 0m 0s\n\tTrain Loss: 0.248 | Train Acc: 92.13%\n\t Val. Loss: 0.572 | Val. 
Acc: 78.46%\n" ] ], [ [ "## CNN Classifiers", "_____no_output_____" ], [ "### CNN1DClassifier", "_____no_output_____" ] ], [ [ "!python train.py -n 50 -m CNN1dClassifier --tag answeronly", "[DEBUG | train.py:307 - <module>() ] Namespace(batch_size=64, bidirectional=True, dropout=0.7, embedding_dim=300, epochs=50, freeze_embeddings=1, hidden_dim=128, l2_regularization=0.001, learning_rate=0.001, linear_hidden_dim=128, model='CNN1dClassifier', model_location=None, n_layers=1, seed=1234, tag='answeronly')\n[DEBUG | train.py:308 - <module>() ] Custom seed set with: 1234\n[INFO | train.py:310 - <module>() ] Loading Dataset\n[DEBUG | datasetloader.py:161 - get_iterators() ] Data Loaded Successfully!\n[INFO | vocab.py:386 - cache() ] Loading vectors from .vector_cache/glove.6B.300d.txt.pt\n[DEBUG | datasetloader.py:172 - get_iterators() ] Vocabulary Loaded\n[DEBUG | datasetloader.py:180 - get_iterators() ] Created Iterators\n[INFO | train.py:317 - <module>() ] Dataset Loaded Successfully\n[DEBUG | train.py:73 - initialize_new_model() ] Initializing Model\n[DEBUG | train.py:162 - initialize_new_model() ] Freeze Embeddings Value 1: False\n[INFO | train.py:168 - initialize_new_model() ] Model Initialized with 175,115 trainiable parameters\n[DEBUG | train.py:180 - initialize_new_model() ] Copied PreTrained Embeddings\n[INFO | train.py:343 - <module>() ] CNN1dClassifier(\n (embedding): Embedding(741, 300, padding_idx=1)\n (convs): ModuleList(\n (0): Conv1d(300, 64, kernel_size=(1,), stride=(1,))\n (1): Conv1d(300, 64, kernel_size=(3,), stride=(1,))\n (2): Conv1d(300, 64, kernel_size=(5,), stride=(1,))\n )\n (fc): Linear(in_features=192, out_features=11, bias=True)\n (dropout): Dropout(p=0.7, inplace=False)\n)\n100% 9/9 [00:00<00:00, 207.93it/s]\n100% 2/2 [00:00<00:00, 544.36it/s]\nEpoch: 01 | Epoch Time: 0m 0s\n\tTrain Loss: 2.412 | Train Acc: 14.45%\n\t Val. Loss: 2.157 | Val. Acc: 14.21%\n100% 9/9 [00:00<00:00, 268.68it/s]\n100% 2/2 [00:00<00:00, 562.24it/s]\nEpoch: 02 | Epoch Time: 0m 0s\n\tTrain Loss: 2.118 | Train Acc: 27.36%\n\t Val. Loss: 2.086 | Val. Acc: 21.54%\n100% 9/9 [00:00<00:00, 261.69it/s]\n100% 2/2 [00:00<00:00, 574.29it/s]\nEpoch: 03 | Epoch Time: 0m 0s\n\tTrain Loss: 1.866 | Train Acc: 36.54%\n\t Val. Loss: 1.972 | Val. Acc: 23.10%\n100% 9/9 [00:00<00:00, 271.55it/s]\n100% 2/2 [00:00<00:00, 562.31it/s]\nEpoch: 04 | Epoch Time: 0m 0s\n\tTrain Loss: 1.732 | Train Acc: 42.16%\n\t Val. Loss: 1.859 | Val. Acc: 36.32%\n100% 9/9 [00:00<00:00, 240.99it/s]\n100% 2/2 [00:00<00:00, 480.39it/s]\nEpoch: 05 | Epoch Time: 0m 0s\n\tTrain Loss: 1.575 | Train Acc: 50.19%\n\t Val. Loss: 1.789 | Val. Acc: 36.53%\n100% 9/9 [00:00<00:00, 265.29it/s]\n100% 2/2 [00:00<00:00, 565.73it/s]\nEpoch: 06 | Epoch Time: 0m 0s\n\tTrain Loss: 1.435 | Train Acc: 55.00%\n\t Val. Loss: 1.690 | Val. Acc: 38.30%\n100% 9/9 [00:00<00:00, 269.67it/s]\n100% 2/2 [00:00<00:00, 547.13it/s]\nEpoch: 07 | Epoch Time: 0m 0s\n\tTrain Loss: 1.289 | Train Acc: 60.60%\n\t Val. Loss: 1.555 | Val. Acc: 48.40%\n100% 9/9 [00:00<00:00, 274.36it/s]\n100% 2/2 [00:00<00:00, 573.42it/s]\nEpoch: 08 | Epoch Time: 0m 0s\n\tTrain Loss: 1.183 | Train Acc: 64.40%\n\t Val. Loss: 1.510 | Val. Acc: 46.05%\n100% 9/9 [00:00<00:00, 275.72it/s]\n100% 2/2 [00:00<00:00, 507.29it/s]\nEpoch: 09 | Epoch Time: 0m 0s\n\tTrain Loss: 1.060 | Train Acc: 67.83%\n\t Val. Loss: 1.419 | Val. Acc: 45.90%\n100% 9/9 [00:00<00:00, 276.03it/s]\n100% 2/2 [00:00<00:00, 598.29it/s]\nEpoch: 10 | Epoch Time: 0m 0s\n\tTrain Loss: 0.992 | Train Acc: 68.72%\n\t Val. Loss: 1.297 | Val. 
Acc: 48.61%\n100% 9/9 [00:00<00:00, 259.95it/s]\n100% 2/2 [00:00<00:00, 568.60it/s]\nEpoch: 11 | Epoch Time: 0m 0s\n\tTrain Loss: 0.884 | Train Acc: 72.99%\n\t Val. Loss: 1.233 | Val. Acc: 61.97%\n100% 9/9 [00:00<00:00, 247.51it/s]\n100% 2/2 [00:00<00:00, 550.65it/s]\nEpoch: 12 | Epoch Time: 0m 0s\n\tTrain Loss: 0.819 | Train Acc: 75.57%\n\t Val. Loss: 1.146 | Val. Acc: 61.40%\n100% 9/9 [00:00<00:00, 256.75it/s]\n100% 2/2 [00:00<00:00, 602.41it/s]\nEpoch: 13 | Epoch Time: 0m 0s\n\tTrain Loss: 0.719 | Train Acc: 80.02%\n\t Val. Loss: 1.091 | Val. Acc: 63.75%\n100% 9/9 [00:00<00:00, 280.13it/s]\n100% 2/2 [00:00<00:00, 594.43it/s]\nEpoch: 14 | Epoch Time: 0m 0s\n\tTrain Loss: 0.698 | Train Acc: 80.82%\n\t Val. Loss: 1.034 | Val. Acc: 61.61%\n100% 9/9 [00:00<00:00, 275.21it/s]\n100% 2/2 [00:00<00:00, 637.14it/s]\nEpoch: 15 | Epoch Time: 0m 0s\n\tTrain Loss: 0.638 | Train Acc: 82.58%\n\t Val. Loss: 0.976 | Val. Acc: 65.88%\n100% 9/9 [00:00<00:00, 278.01it/s]\n100% 2/2 [00:00<00:00, 576.70it/s]\nEpoch: 16 | Epoch Time: 0m 0s\n\tTrain Loss: 0.593 | Train Acc: 82.73%\n\t Val. Loss: 1.031 | Val. Acc: 64.32%\n100% 9/9 [00:00<00:00, 247.42it/s]\n100% 2/2 [00:00<00:00, 603.97it/s]\nEpoch: 17 | Epoch Time: 0m 0s\n\tTrain Loss: 0.586 | Train Acc: 80.74%\n\t Val. Loss: 0.941 | Val. Acc: 66.87%\n100% 9/9 [00:00<00:00, 277.35it/s]\n100% 2/2 [00:00<00:00, 584.45it/s]\nEpoch: 18 | Epoch Time: 0m 0s\n\tTrain Loss: 0.509 | Train Acc: 87.13%\n\t Val. Loss: 0.933 | Val. Acc: 70.14%\n100% 9/9 [00:00<00:00, 261.92it/s]\n100% 2/2 [00:00<00:00, 592.75it/s]\nEpoch: 19 | Epoch Time: 0m 0s\n\tTrain Loss: 0.483 | Train Acc: 86.85%\n\t Val. Loss: 0.911 | Val. Acc: 68.01%\n100% 9/9 [00:00<00:00, 250.32it/s]\n100% 2/2 [00:00<00:00, 584.78it/s]\nEpoch: 20 | Epoch Time: 0m 0s\n\tTrain Loss: 0.485 | Train Acc: 84.73%\n\t Val. Loss: 0.851 | Val. Acc: 71.71%\n100% 9/9 [00:00<00:00, 242.57it/s]\n100% 2/2 [00:00<00:00, 590.83it/s]\nEpoch: 21 | Epoch Time: 0m 0s\n\tTrain Loss: 0.423 | Train Acc: 87.48%\n\t Val. Loss: 0.829 | Val. Acc: 68.79%\n100% 9/9 [00:00<00:00, 283.78it/s]\n100% 2/2 [00:00<00:00, 619.50it/s]\nEpoch: 22 | Epoch Time: 0m 0s\n\tTrain Loss: 0.425 | Train Acc: 89.72%\n\t Val. Loss: 0.779 | Val. Acc: 69.57%\n100% 9/9 [00:00<00:00, 252.78it/s]\n100% 2/2 [00:00<00:00, 610.35it/s]\nEpoch: 23 | Epoch Time: 0m 0s\n\tTrain Loss: 0.422 | Train Acc: 87.76%\n\t Val. Loss: 0.778 | Val. Acc: 70.92%\n100% 9/9 [00:00<00:00, 271.03it/s]\n100% 2/2 [00:00<00:00, 572.68it/s]\nEpoch: 24 | Epoch Time: 0m 0s\n\tTrain Loss: 0.363 | Train Acc: 90.04%\n\t Val. Loss: 0.822 | Val. Acc: 70.14%\n100% 9/9 [00:00<00:00, 281.26it/s]\n100% 2/2 [00:00<00:00, 598.84it/s]\nEpoch: 25 | Epoch Time: 0m 0s\n\tTrain Loss: 0.354 | Train Acc: 91.32%\n\t Val. Loss: 0.745 | Val. Acc: 74.41%\n100% 9/9 [00:00<00:00, 280.83it/s]\n100% 2/2 [00:00<00:00, 582.22it/s]\nEpoch: 26 | Epoch Time: 0m 0s\n\tTrain Loss: 0.341 | Train Acc: 90.93%\n\t Val. Loss: 0.756 | Val. Acc: 73.06%\n100% 9/9 [00:00<00:00, 267.38it/s]\n100% 2/2 [00:00<00:00, 565.12it/s]\nEpoch: 27 | Epoch Time: 0m 0s\n\tTrain Loss: 0.310 | Train Acc: 91.74%\n\t Val. Loss: 0.743 | Val. Acc: 72.28%\n100% 9/9 [00:00<00:00, 263.78it/s]\n100% 2/2 [00:00<00:00, 614.46it/s]\nEpoch: 28 | Epoch Time: 0m 0s\n\tTrain Loss: 0.294 | Train Acc: 93.08%\n\t Val. Loss: 0.723 | Val. Acc: 70.35%\n100% 9/9 [00:00<00:00, 277.43it/s]\n100% 2/2 [00:00<00:00, 614.42it/s]\nEpoch: 29 | Epoch Time: 0m 0s\n\tTrain Loss: 0.282 | Train Acc: 92.41%\n\t Val. Loss: 0.701 | Val. 
Acc: 70.35%\n100% 9/9 [00:00<00:00, 273.07it/s]\n100% 2/2 [00:00<00:00, 592.12it/s]\nEpoch: 30 | Epoch Time: 0m 0s\n\tTrain Loss: 0.296 | Train Acc: 92.99%\n\t Val. Loss: 0.682 | Val. Acc: 70.35%\n100% 9/9 [00:00<00:00, 264.37it/s]\n100% 2/2 [00:00<00:00, 542.81it/s]\nEpoch: 31 | Epoch Time: 0m 0s\n\tTrain Loss: 0.259 | Train Acc: 93.41%\n\t Val. Loss: 0.670 | Val. Acc: 71.71%\n100% 9/9 [00:00<00:00, 247.75it/s]\n100% 2/2 [00:00<00:00, 441.65it/s]\nEpoch: 32 | Epoch Time: 0m 0s\n\tTrain Loss: 0.282 | Train Acc: 94.43%\n\t Val. Loss: 0.655 | Val. Acc: 76.54%\n100% 9/9 [00:00<00:00, 256.08it/s]\n100% 2/2 [00:00<00:00, 610.70it/s]\nEpoch: 33 | Epoch Time: 0m 0s\n\tTrain Loss: 0.265 | Train Acc: 93.21%\n\t Val. Loss: 0.680 | Val. Acc: 72.28%\n100% 9/9 [00:00<00:00, 277.60it/s]\n100% 2/2 [00:00<00:00, 633.77it/s]\nEpoch: 34 | Epoch Time: 0m 0s\n\tTrain Loss: 0.234 | Train Acc: 94.90%\n\t Val. Loss: 0.656 | Val. Acc: 71.71%\n100% 9/9 [00:00<00:00, 277.01it/s]\n100% 2/2 [00:00<00:00, 604.80it/s]\nEpoch: 35 | Epoch Time: 0m 0s\n\tTrain Loss: 0.224 | Train Acc: 93.95%\n\t Val. Loss: 0.659 | Val. Acc: 74.62%\n100% 9/9 [00:00<00:00, 282.86it/s]\n100% 2/2 [00:00<00:00, 608.31it/s]\nEpoch: 36 | Epoch Time: 0m 0s\n\tTrain Loss: 0.212 | Train Acc: 94.30%\n\t Val. Loss: 0.659 | Val. Acc: 70.35%\n100% 9/9 [00:00<00:00, 268.42it/s]\n100% 2/2 [00:00<00:00, 612.31it/s]\nEpoch: 37 | Epoch Time: 0m 0s\n\tTrain Loss: 0.230 | Train Acc: 95.90%\n\t Val. Loss: 0.631 | Val. Acc: 72.49%\n100% 9/9 [00:00<00:00, 275.74it/s]\n100% 2/2 [00:00<00:00, 327.31it/s]\nEpoch: 38 | Epoch Time: 0m 0s\n\tTrain Loss: 0.190 | Train Acc: 96.27%\n\t Val. Loss: 0.634 | Val. Acc: 73.84%\n100% 9/9 [00:00<00:00, 280.56it/s]\n100% 2/2 [00:00<00:00, 641.92it/s]\nEpoch: 39 | Epoch Time: 0m 0s\n\tTrain Loss: 0.204 | Train Acc: 95.10%\n\t Val. Loss: 0.635 | Val. Acc: 73.84%\n100% 9/9 [00:00<00:00, 269.96it/s]\n100% 2/2 [00:00<00:00, 430.38it/s]\nEpoch: 40 | Epoch Time: 0m 0s\n\tTrain Loss: 0.204 | Train Acc: 94.69%\n\t Val. Loss: 0.597 | Val. Acc: 74.41%\n100% 9/9 [00:00<00:00, 251.30it/s]\n100% 2/2 [00:00<00:00, 592.54it/s]\nEpoch: 41 | Epoch Time: 0m 0s\n\tTrain Loss: 0.196 | Train Acc: 96.14%\n\t Val. Loss: 0.585 | Val. Acc: 78.67%\n100% 9/9 [00:00<00:00, 235.85it/s]\n100% 2/2 [00:00<00:00, 625.55it/s]\nEpoch: 42 | Epoch Time: 0m 0s\n\tTrain Loss: 0.188 | Train Acc: 95.62%\n\t Val. Loss: 0.580 | Val. Acc: 78.67%\n100% 9/9 [00:00<00:00, 278.30it/s]\n100% 2/2 [00:00<00:00, 638.99it/s]\nEpoch: 43 | Epoch Time: 0m 0s\n\tTrain Loss: 0.139 | Train Acc: 96.44%\n\t Val. Loss: 0.587 | Val. Acc: 75.97%\n100% 9/9 [00:00<00:00, 262.57it/s]\n100% 2/2 [00:00<00:00, 565.69it/s]\nEpoch: 44 | Epoch Time: 0m 0s\n\tTrain Loss: 0.154 | Train Acc: 97.37%\n\t Val. Loss: 0.587 | Val. Acc: 73.84%\n100% 9/9 [00:00<00:00, 288.55it/s]\n100% 2/2 [00:00<00:00, 610.92it/s]\nEpoch: 45 | Epoch Time: 0m 0s\n\tTrain Loss: 0.157 | Train Acc: 96.14%\n\t Val. Loss: 0.558 | Val. Acc: 77.32%\n100% 9/9 [00:00<00:00, 286.29it/s]\n100% 2/2 [00:00<00:00, 661.67it/s]\nEpoch: 46 | Epoch Time: 0m 0s\n\tTrain Loss: 0.189 | Train Acc: 93.93%\n\t Val. Loss: 0.588 | Val. Acc: 74.41%\n100% 9/9 [00:00<00:00, 275.94it/s]\n100% 2/2 [00:00<00:00, 593.00it/s]\nEpoch: 47 | Epoch Time: 0m 0s\n\tTrain Loss: 0.164 | Train Acc: 97.01%\n\t Val. Loss: 0.583 | Val. Acc: 73.06%\n100% 9/9 [00:00<00:00, 282.32it/s]\n100% 2/2 [00:00<00:00, 669.86it/s]\nEpoch: 48 | Epoch Time: 0m 0s\n\tTrain Loss: 0.155 | Train Acc: 97.37%\n\t Val. Loss: 0.590 | Val. 
Acc: 74.41%\n100% 9/9 [00:00<00:00, 270.98it/s]\n100% 2/2 [00:00<00:00, 424.87it/s]\nEpoch: 49 | Epoch Time: 0m 0s\n\tTrain Loss: 0.144 | Train Acc: 96.81%\n\t Val. Loss: 0.594 | Val. Acc: 73.06%\n100% 9/9 [00:00<00:00, 274.08it/s]\n100% 2/2 [00:00<00:00, 652.30it/s]\nEpoch: 50 | Epoch Time: 0m 0s\n\tTrain Loss: 0.125 | Train Acc: 98.24%\n\t Val. Loss: 0.571 | Val. Acc: 73.84%\n" ] ], [ [ "### CNN1dExtraLayerClassifier", "_____no_output_____" ] ], [ [ "!python train.py -n 50 -m CNN1dExtraLayerClassifier --tag answeronly", "[DEBUG | train.py:307 - <module>() ] Namespace(batch_size=64, bidirectional=True, dropout=0.7, embedding_dim=300, epochs=50, freeze_embeddings=1, hidden_dim=128, l2_regularization=0.001, learning_rate=0.001, linear_hidden_dim=128, model='CNN1dExtraLayerClassifier', model_location=None, n_layers=1, seed=1234, tag='answeronly')\n[DEBUG | train.py:308 - <module>() ] Custom seed set with: 1234\n[INFO | train.py:310 - <module>() ] Loading Dataset\n[DEBUG | datasetloader.py:161 - get_iterators() ] Data Loaded Successfully!\n[INFO | vocab.py:386 - cache() ] Loading vectors from .vector_cache/glove.6B.300d.txt.pt\n[DEBUG | datasetloader.py:172 - get_iterators() ] Vocabulary Loaded\n[DEBUG | datasetloader.py:180 - get_iterators() ] Created Iterators\n[INFO | train.py:317 - <module>() ] Dataset Loaded Successfully\n[DEBUG | train.py:73 - initialize_new_model() ] Initializing Model\n[DEBUG | train.py:162 - initialize_new_model() ] Freeze Embeddings Value 1: False\n[INFO | train.py:168 - initialize_new_model() ] Model Initialized with 199,115 trainiable parameters\n[DEBUG | train.py:180 - initialize_new_model() ] Copied PreTrained Embeddings\n[INFO | train.py:343 - <module>() ] CNN1dExtraLayerClassifier(\n (embedding): Embedding(741, 300, padding_idx=1)\n (convs): ModuleList(\n (0): CustomConv1d(\n (convlayer): Conv1d(300, 64, kernel_size=(1,), stride=(1,))\n )\n (1): CustomConv1d(\n (convlayer): Conv1d(300, 64, kernel_size=(3,), stride=(1,), padding=(1,))\n )\n (2): CustomConv1d(\n (convlayer): Conv1d(300, 64, kernel_size=(5,), stride=(1,), padding=(2,))\n )\n )\n (hidden_layer): Linear(in_features=192, out_features=128, bias=True)\n (fc): Linear(in_features=128, out_features=11, bias=True)\n (dropout): Dropout(p=0.7, inplace=False)\n)\n100% 9/9 [00:00<00:00, 158.73it/s]\n100% 2/2 [00:00<00:00, 443.54it/s]\nEpoch: 01 | Epoch Time: 0m 0s\n\tTrain Loss: 2.333 | Train Acc: 16.10%\n\t Val. Loss: 2.252 | Val. Acc: 9.16%\n100% 9/9 [00:00<00:00, 227.95it/s]\n100% 2/2 [00:00<00:00, 465.26it/s]\nEpoch: 02 | Epoch Time: 0m 0s\n\tTrain Loss: 2.186 | Train Acc: 22.68%\n\t Val. Loss: 2.190 | Val. Acc: 9.16%\n100% 9/9 [00:00<00:00, 220.38it/s]\n100% 2/2 [00:00<00:00, 446.39it/s]\nEpoch: 03 | Epoch Time: 0m 0s\n\tTrain Loss: 2.125 | Train Acc: 27.15%\n\t Val. Loss: 2.141 | Val. Acc: 10.16%\n100% 9/9 [00:00<00:00, 226.07it/s]\n100% 2/2 [00:00<00:00, 475.71it/s]\nEpoch: 04 | Epoch Time: 0m 0s\n\tTrain Loss: 2.010 | Train Acc: 30.18%\n\t Val. Loss: 2.047 | Val. Acc: 28.93%\n100% 9/9 [00:00<00:00, 213.65it/s]\n100% 2/2 [00:00<00:00, 470.61it/s]\nEpoch: 05 | Epoch Time: 0m 0s\n\tTrain Loss: 1.878 | Train Acc: 35.91%\n\t Val. Loss: 1.983 | Val. Acc: 27.79%\n100% 9/9 [00:00<00:00, 215.14it/s]\n100% 2/2 [00:00<00:00, 483.55it/s]\nEpoch: 06 | Epoch Time: 0m 0s\n\tTrain Loss: 1.763 | Train Acc: 41.62%\n\t Val. Loss: 1.862 | Val. Acc: 34.04%\n100% 9/9 [00:00<00:00, 230.08it/s]\n100% 2/2 [00:00<00:00, 482.35it/s]\nEpoch: 07 | Epoch Time: 0m 0s\n\tTrain Loss: 1.649 | Train Acc: 43.05%\n\t Val. 
Loss: 1.694 | Val. Acc: 49.96%\n100% 9/9 [00:00<00:00, 237.74it/s]\n100% 2/2 [00:00<00:00, 496.22it/s]\nEpoch: 08 | Epoch Time: 0m 0s\n\tTrain Loss: 1.529 | Train Acc: 49.69%\n\t Val. Loss: 1.567 | Val. Acc: 47.04%\n100% 9/9 [00:00<00:00, 237.37it/s]\n100% 2/2 [00:00<00:00, 475.17it/s]\nEpoch: 09 | Epoch Time: 0m 0s\n\tTrain Loss: 1.321 | Train Acc: 57.46%\n\t Val. Loss: 1.417 | Val. Acc: 52.66%\n100% 9/9 [00:00<00:00, 221.09it/s]\n100% 2/2 [00:00<00:00, 486.18it/s]\nEpoch: 10 | Epoch Time: 0m 0s\n\tTrain Loss: 1.121 | Train Acc: 66.26%\n\t Val. Loss: 1.236 | Val. Acc: 58.13%\n100% 9/9 [00:00<00:00, 213.20it/s]\n100% 2/2 [00:00<00:00, 485.54it/s]\nEpoch: 11 | Epoch Time: 0m 0s\n\tTrain Loss: 0.961 | Train Acc: 72.45%\n\t Val. Loss: 1.114 | Val. Acc: 60.83%\n100% 9/9 [00:00<00:00, 225.12it/s]\n100% 2/2 [00:00<00:00, 486.92it/s]\nEpoch: 12 | Epoch Time: 0m 0s\n\tTrain Loss: 0.824 | Train Acc: 77.35%\n\t Val. Loss: 0.992 | Val. Acc: 60.83%\n100% 9/9 [00:00<00:00, 238.39it/s]\n100% 2/2 [00:00<00:00, 494.38it/s]\nEpoch: 13 | Epoch Time: 0m 0s\n\tTrain Loss: 0.690 | Train Acc: 80.95%\n\t Val. Loss: 0.912 | Val. Acc: 61.61%\n100% 9/9 [00:00<00:00, 226.13it/s]\n100% 2/2 [00:00<00:00, 463.33it/s]\nEpoch: 14 | Epoch Time: 0m 0s\n\tTrain Loss: 0.591 | Train Acc: 83.86%\n\t Val. Loss: 0.803 | Val. Acc: 70.71%\n100% 9/9 [00:00<00:00, 204.43it/s]\n100% 2/2 [00:00<00:00, 350.07it/s]\nEpoch: 15 | Epoch Time: 0m 0s\n\tTrain Loss: 0.495 | Train Acc: 86.77%\n\t Val. Loss: 0.766 | Val. Acc: 58.13%\n100% 9/9 [00:00<00:00, 235.10it/s]\n100% 2/2 [00:00<00:00, 479.95it/s]\nEpoch: 16 | Epoch Time: 0m 0s\n\tTrain Loss: 0.445 | Train Acc: 89.26%\n\t Val. Loss: 0.700 | Val. Acc: 74.41%\n100% 9/9 [00:00<00:00, 235.18it/s]\n100% 2/2 [00:00<00:00, 476.22it/s]\nEpoch: 17 | Epoch Time: 0m 0s\n\tTrain Loss: 0.401 | Train Acc: 87.96%\n\t Val. Loss: 0.671 | Val. Acc: 66.09%\n100% 9/9 [00:00<00:00, 240.09it/s]\n100% 2/2 [00:00<00:00, 487.65it/s]\nEpoch: 18 | Epoch Time: 0m 0s\n\tTrain Loss: 0.342 | Train Acc: 91.08%\n\t Val. Loss: 0.611 | Val. Acc: 74.41%\n100% 9/9 [00:00<00:00, 214.79it/s]\n100% 2/2 [00:00<00:00, 476.84it/s]\nEpoch: 19 | Epoch Time: 0m 0s\n\tTrain Loss: 0.350 | Train Acc: 91.80%\n\t Val. Loss: 0.633 | Val. Acc: 78.67%\n100% 9/9 [00:00<00:00, 230.77it/s]\n100% 2/2 [00:00<00:00, 475.17it/s]\nEpoch: 20 | Epoch Time: 0m 0s\n\tTrain Loss: 0.272 | Train Acc: 93.32%\n\t Val. Loss: 0.576 | Val. Acc: 75.76%\n100% 9/9 [00:00<00:00, 219.29it/s]\n100% 2/2 [00:00<00:00, 484.69it/s]\nEpoch: 21 | Epoch Time: 0m 0s\n\tTrain Loss: 0.249 | Train Acc: 93.69%\n\t Val. Loss: 0.527 | Val. Acc: 80.03%\n100% 9/9 [00:00<00:00, 237.06it/s]\n100% 2/2 [00:00<00:00, 499.44it/s]\nEpoch: 22 | Epoch Time: 0m 0s\n\tTrain Loss: 0.244 | Train Acc: 93.97%\n\t Val. Loss: 0.517 | Val. Acc: 78.10%\n100% 9/9 [00:00<00:00, 237.34it/s]\n100% 2/2 [00:00<00:00, 493.97it/s]\nEpoch: 23 | Epoch Time: 0m 0s\n\tTrain Loss: 0.208 | Train Acc: 95.05%\n\t Val. Loss: 0.518 | Val. Acc: 78.67%\n100% 9/9 [00:00<00:00, 215.30it/s]\n100% 2/2 [00:00<00:00, 499.38it/s]\nEpoch: 24 | Epoch Time: 0m 0s\n\tTrain Loss: 0.191 | Train Acc: 94.88%\n\t Val. Loss: 0.500 | Val. Acc: 76.75%\n100% 9/9 [00:00<00:00, 234.69it/s]\n100% 2/2 [00:00<00:00, 507.08it/s]\nEpoch: 25 | Epoch Time: 0m 0s\n\tTrain Loss: 0.188 | Train Acc: 94.69%\n\t Val. Loss: 0.487 | Val. Acc: 78.67%\n100% 9/9 [00:00<00:00, 236.43it/s]\n100% 2/2 [00:00<00:00, 504.70it/s]\nEpoch: 26 | Epoch Time: 0m 0s\n\tTrain Loss: 0.161 | Train Acc: 96.98%\n\t Val. Loss: 0.501 | Val. 
Acc: 75.76%\n100% 9/9 [00:00<00:00, 239.01it/s]\n100% 2/2 [00:00<00:00, 498.64it/s]\nEpoch: 27 | Epoch Time: 0m 0s\n\tTrain Loss: 0.150 | Train Acc: 95.71%\n\t Val. Loss: 0.456 | Val. Acc: 78.67%\n100% 9/9 [00:00<00:00, 195.58it/s]\n100% 2/2 [00:00<00:00, 462.16it/s]\nEpoch: 28 | Epoch Time: 0m 0s\n\tTrain Loss: 0.156 | Train Acc: 96.25%\n\t Val. Loss: 0.510 | Val. Acc: 75.19%\n100% 9/9 [00:00<00:00, 218.85it/s]\n100% 2/2 [00:00<00:00, 458.82it/s]\nEpoch: 29 | Epoch Time: 0m 0s\n\tTrain Loss: 0.154 | Train Acc: 96.51%\n\t Val. Loss: 0.465 | Val. Acc: 80.60%\n100% 9/9 [00:00<00:00, 213.90it/s]\n100% 2/2 [00:00<00:00, 474.33it/s]\nEpoch: 30 | Epoch Time: 0m 0s\n\tTrain Loss: 0.135 | Train Acc: 96.64%\n\t Val. Loss: 0.460 | Val. Acc: 77.89%\n100% 9/9 [00:00<00:00, 238.23it/s]\n100% 2/2 [00:00<00:00, 512.50it/s]\nEpoch: 31 | Epoch Time: 0m 0s\n\tTrain Loss: 0.126 | Train Acc: 97.51%\n\t Val. Loss: 0.472 | Val. Acc: 75.76%\n100% 9/9 [00:00<00:00, 242.15it/s]\n100% 2/2 [00:00<00:00, 518.33it/s]\nEpoch: 32 | Epoch Time: 0m 0s\n\tTrain Loss: 0.130 | Train Acc: 97.03%\n\t Val. Loss: 0.453 | Val. Acc: 76.54%\n100% 9/9 [00:00<00:00, 236.69it/s]\n100% 2/2 [00:00<00:00, 503.19it/s]\nEpoch: 33 | Epoch Time: 0m 0s\n\tTrain Loss: 0.092 | Train Acc: 98.61%\n\t Val. Loss: 0.476 | Val. Acc: 76.33%\n100% 9/9 [00:00<00:00, 230.54it/s]\n100% 2/2 [00:00<00:00, 485.51it/s]\nEpoch: 34 | Epoch Time: 0m 0s\n\tTrain Loss: 0.091 | Train Acc: 98.57%\n\t Val. Loss: 0.442 | Val. Acc: 79.24%\n100% 9/9 [00:00<00:00, 232.44it/s]\n100% 2/2 [00:00<00:00, 525.67it/s]\nEpoch: 35 | Epoch Time: 0m 0s\n\tTrain Loss: 0.111 | Train Acc: 97.20%\n\t Val. Loss: 0.420 | Val. Acc: 80.60%\n100% 9/9 [00:00<00:00, 240.36it/s]\n100% 2/2 [00:00<00:00, 519.26it/s]\nEpoch: 36 | Epoch Time: 0m 0s\n\tTrain Loss: 0.091 | Train Acc: 98.74%\n\t Val. Loss: 0.402 | Val. Acc: 86.21%\n100% 9/9 [00:00<00:00, 239.73it/s]\n100% 2/2 [00:00<00:00, 531.77it/s]\nEpoch: 37 | Epoch Time: 0m 0s\n\tTrain Loss: 0.077 | Train Acc: 98.44%\n\t Val. Loss: 0.452 | Val. Acc: 76.54%\n100% 9/9 [00:00<00:00, 242.50it/s]\n100% 2/2 [00:00<00:00, 501.35it/s]\nEpoch: 38 | Epoch Time: 0m 0s\n\tTrain Loss: 0.086 | Train Acc: 98.22%\n\t Val. Loss: 0.444 | Val. Acc: 82.73%\n100% 9/9 [00:00<00:00, 237.32it/s]\n100% 2/2 [00:00<00:00, 523.90it/s]\nEpoch: 39 | Epoch Time: 0m 0s\n\tTrain Loss: 0.090 | Train Acc: 97.72%\n\t Val. Loss: 0.421 | Val. Acc: 80.03%\n100% 9/9 [00:00<00:00, 229.45it/s]\n100% 2/2 [00:00<00:00, 495.87it/s]\nEpoch: 40 | Epoch Time: 0m 0s\n\tTrain Loss: 0.076 | Train Acc: 98.61%\n\t Val. Loss: 0.420 | Val. Acc: 84.29%\n100% 9/9 [00:00<00:00, 238.15it/s]\n100% 2/2 [00:00<00:00, 528.72it/s]\nEpoch: 41 | Epoch Time: 0m 0s\n\tTrain Loss: 0.084 | Train Acc: 98.18%\n\t Val. Loss: 0.518 | Val. Acc: 74.98%\n100% 9/9 [00:00<00:00, 244.37it/s]\n100% 2/2 [00:00<00:00, 553.30it/s]\nEpoch: 42 | Epoch Time: 0m 0s\n\tTrain Loss: 0.084 | Train Acc: 98.07%\n\t Val. Loss: 0.395 | Val. Acc: 81.38%\n100% 9/9 [00:00<00:00, 232.85it/s]\n100% 2/2 [00:00<00:00, 556.57it/s]\nEpoch: 43 | Epoch Time: 0m 0s\n\tTrain Loss: 0.073 | Train Acc: 98.96%\n\t Val. Loss: 0.397 | Val. Acc: 81.38%\n100% 9/9 [00:00<00:00, 238.09it/s]\n100% 2/2 [00:00<00:00, 398.96it/s]\nEpoch: 44 | Epoch Time: 0m 0s\n\tTrain Loss: 0.067 | Train Acc: 98.44%\n\t Val. Loss: 0.418 | Val. Acc: 79.24%\n100% 9/9 [00:00<00:00, 222.51it/s]\n100% 2/2 [00:00<00:00, 510.13it/s]\nEpoch: 45 | Epoch Time: 0m 0s\n\tTrain Loss: 0.064 | Train Acc: 98.96%\n\t Val. Loss: 0.422 | Val. 
Acc: 77.89%\n100% 9/9 [00:00<00:00, 218.93it/s]\n100% 2/2 [00:00<00:00, 487.45it/s]\nEpoch: 46 | Epoch Time: 0m 0s\n\tTrain Loss: 0.049 | Train Acc: 99.46%\n\t Val. Loss: 0.402 | Val. Acc: 79.24%\n100% 9/9 [00:00<00:00, 219.91it/s]\n100% 2/2 [00:00<00:00, 511.10it/s]\nEpoch: 47 | Epoch Time: 0m 0s\n\tTrain Loss: 0.048 | Train Acc: 99.13%\n\t Val. Loss: 0.395 | Val. Acc: 85.64%\n100% 9/9 [00:00<00:00, 219.91it/s]\n100% 2/2 [00:00<00:00, 504.43it/s]\nEpoch: 48 | Epoch Time: 0m 0s\n\tTrain Loss: 0.044 | Train Acc: 99.65%\n\t Val. Loss: 0.430 | Val. Acc: 79.24%\n100% 9/9 [00:00<00:00, 219.32it/s]\n100% 2/2 [00:00<00:00, 501.08it/s]\nEpoch: 49 | Epoch Time: 0m 0s\n\tTrain Loss: 0.055 | Train Acc: 99.11%\n\t Val. Loss: 0.377 | Val. Acc: 81.38%\n100% 9/9 [00:00<00:00, 219.62it/s]\n100% 2/2 [00:00<00:00, 499.62it/s]\nEpoch: 50 | Epoch Time: 0m 0s\n\tTrain Loss: 0.050 | Train Acc: 99.26%\n\t Val. Loss: 0.410 | Val. Acc: 81.95%\n" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
cb808de0564a6806faeff078337b061f696f3175
44,209
ipynb
Jupyter Notebook
News Analysis.ipynb
lnlwd/news-analysis
d5afaadbb3bb0e07e36eeb3f3924581b92f92ad1
[ "MIT" ]
null
null
null
News Analysis.ipynb
lnlwd/news-analysis
d5afaadbb3bb0e07e36eeb3f3924581b92f92ad1
[ "MIT" ]
null
null
null
News Analysis.ipynb
lnlwd/news-analysis
d5afaadbb3bb0e07e36eeb3f3924581b92f92ad1
[ "MIT" ]
null
null
null
39.792079
4,353
0.528625
[ [ [ "# Brazilian Newspaper analysis\n\nIn this project, we'll use a dataset from a Brazilian Newspaper called \"Folha de São Paulo\".\n\nWe're going to use word embeddings, tensorboard and rnn's and search for political opinions and positions.\n\nYou can find the dataset at [kaggle](https://www.kaggle.com/marlesson/news-of-the-site-folhauol).\n\nI want to find in this study case:\n\n+ Political opinions\n+ Check if this newspaper is impartial or biased", "_____no_output_____" ], [ "## Skip-gram model\n\nLet's use a word embedding model to find the relationship between words in the articles. Our model will learn how one word is related to another word and we'll see this relationship in tensorboard and a T-SNE chart (to project our model in a 2D chart).\n\nWe have two options to use: CBOW (Continuous Bag-Of-Words) and Skip-gram.\n\nIn our case we'll use Skip-gram because it performs better than CBOW.\n\nThe models works like this:\n\n![](assets/word2vec_architectures.png)", "_____no_output_____" ], [ "In CBOW we get some words around another word and try to predict the \"middle\" word.\n\nIn Skip-gram we do the opposite, we get one word and try to predict the words around it.", "_____no_output_____" ], [ "## Loading the data\n\nAfter downloading the dataset, put it on a directory `data/` and let's load it using pandas.\n\n**Using python 3.6 and tensorflow 1.3**", "_____no_output_____" ] ], [ [ "# Import dependencies\n\nimport pandas as pd\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib\nimport os\nimport pickle\nimport random\nimport time\nimport math\nfrom collections import Counter", "_____no_output_____" ], [ "dataset = pd.read_csv('data/articles.csv')\n\ndataset.head()", "_____no_output_____" ] ], [ [ "## Preprocessing the data", "_____no_output_____" ], [ "### Removing unnecessary articles\n\nWe are trying to find political opinions. So, let's take only the articles in category 'poder' (power).", "_____no_output_____" ] ], [ [ "political_dataset = dataset.loc[dataset.category == 'poder']\n\npolitical_dataset.head()", "_____no_output_____" ] ], [ [ "### Merging title and text\nTo maintain article titles and text related, let's merge then together and use this merged text as our inputs", "_____no_output_____" ] ], [ [ "# Merges the title and text with a separator (---)\nmerged_text = [str(title) + ' ---- ' + str(text) for title, text in zip(political_dataset.title, political_dataset.text)]\n\nprint(merged_text[0])", "Lula diz que está 'lascado', mas que ainda tem força como cabo eleitoral ---- Com a possibilidade de uma condenação impedir sua candidatura em 2018, o ex-presidente Luiz Inácio Lula da Silva fez, nesta segunda (9), um discurso inflamado contra a Lava Jato, no qual disse saber que está \"lascado\", exigiu um pedido de desculpas do juiz Sergio Moro e afirmou que, mesmo fora da disputa pelo Planalto, será um cabo eleitoral expressivo para a sucessão de Michel Temer. Segundo o petista, réu em sete ações penais, o objetivo de Moro é impedir sua candidatura no ano que vem, desidratando-o, inclusive, no apoio a um nome alternativo, como o do ex-prefeito de São Paulo Fernando Haddad (PT), caso ele não possa concorrer à Presidência. \"Eu sei que tô lascado, todo dia tem um processo. Eu não quero nem que Moro me absolva, eu só quero que ele peça desculpas\", disse Lula durante um seminário sobre educação em Brasília. \"Eles [investigadores] chegam a dizer: 'Ah, se o Lula não for candidato, ele não vai ter força como cabo eleitoral'. Testem\", completou o petista. 
Para o ex-presidente, Moro usou \"mentiras contadas pela Polícia Federal e pelo Ministério Público\" para julgá-lo e condená-lo a nove anos e seis meses de prisão pelo caso do tríplex em Guarujá (SP). O ex-presidente disse ainda não ter \"medo\" dos investigadores que, de acordo com ele, estão acostumados a \"mexer com deputados e senadores\" que temem as apurações. \"Eu quero que eles saibam o seguinte: se eles estão acostumados a lidar com deputado que tem medo deles, a mexer com senadores que têm medo deles, quero dizer que tenho respeito profundo por quem me respeita, pelas leis que nós ajudamos a criar, mas não tenho respeito por quem não me respeita e eles não me respeitaram\", afirmou o petista. De acordo com aliados, Lula não gosta de discutir, mesmo que nos bastidores, a chance de não ser candidato ao Planalto e a projeção do nome de Haddad como plano B do PT tem incomodado os mais próximos ao ex-presidente. O ex-prefeito, que estava no evento nesta segunda, fez um discurso rápido, de menos de dez minutos, em que encerrou dizendo esperar que Lula assuma a Presidência em 2019. \"Espero que dia 1º de janeiro de 2019 esse pesadelo chamado Temer acabe e o senhor assuma a Presidência da República\", disse Haddad. 'DEMÔNIO DO MERCADO' Lula voltou a fazer um discurso mais agressivo em relação ao mercado e disse que \"não tem cara de demônio\", mas quer que o respeitem \"como se fosse\". \"Não tenho cara de demônio, mas quero que eles me respeitem como se eu fosse, porque eles sabem que a economia não vai ficar subordinada ao elitismo da sociedade brasileira\", disse o ex-presidente. O petista rivalizou ainda com o deputado Jair Bolsonaro (PSC-RJ), segundo colocado nas últimas pesquisas empatado com Marina Silva, e disse que se ele \"agrada ao mercado\", o PT tem que \"desagradar\". A Folha publicou nesta segunda (9) reportagem em que mostrou que o deputado ensaia movimento ao centro no debate econômico, adotando um discurso simpático aos investidores do mercado financeiro.\n" ] ], [ [ "### Tokenizing punctuation\nWe need to tokenize all text punctuation, otherwise the network will see punctuated words differently (eg: hello != hello!)", "_____no_output_____" ] ], [ [ "def token_lookup():\n tokens = {\n '.' : 'period',\n ',' : 'comma',\n '\"' : 'quote',\n '\\'' : 'single-quote',\n ';' : 'semicolon',\n ':' : 'colon',\n '!' : 'exclamation-mark',\n '?' 
: 'question-mark',\n '(' : 'parentheses-left',\n ')' : 'parentheses-right',\n '[' : 'brackets-left',\n ']' : 'brackets-right',\n '{' : 'braces-left',\n '}' : 'braces-right',\n '_' : 'underscore',\n '--' : 'dash',\n '\\n' : 'return'\n }\n \n return {token: '||{0}||'.format(value) for token, value in tokens.items()}\n\ntoken_dict = token_lookup()\n\ntokenized_text = []\n\nfor text in merged_text:\n for key, token in token_dict.items():\n text = text.replace(key, ' {} '.format(token))\n \n tokenized_text.append(text)\n\nprint(tokenized_text[0])", "Lula diz que está ||single-quote|| lascado ||single-quote|| ||comma|| mas que ainda tem força como cabo eleitoral ||dash|| ||dash|| Com a possibilidade de uma condenação impedir sua candidatura em 2018 ||comma|| o ex-presidente Luiz Inácio Lula da Silva fez ||comma|| nesta segunda ||parentheses-left|| 9 ||parentheses-right|| ||comma|| um discurso inflamado contra a Lava Jato ||comma|| no qual disse saber que está ||quote|| lascado ||quote|| ||comma|| exigiu um pedido de desculpas do juiz Sergio Moro e afirmou que ||comma|| mesmo fora da disputa pelo Planalto ||comma|| será um cabo eleitoral expressivo para a sucessão de Michel Temer ||period|| Segundo o petista ||comma|| réu em sete ações penais ||comma|| o objetivo de Moro é impedir sua candidatura no ano que vem ||comma|| desidratando-o ||comma|| inclusive ||comma|| no apoio a um nome alternativo ||comma|| como o do ex-prefeito de São Paulo Fernando Haddad ||parentheses-left|| PT ||parentheses-right|| ||comma|| caso ele não possa concorrer à Presidência ||period|| ||quote|| Eu sei que tô lascado ||comma|| todo dia tem um processo ||period|| Eu não quero nem que Moro me absolva ||comma|| eu só quero que ele peça desculpas ||quote|| ||comma|| disse Lula durante um seminário sobre educação em Brasília ||period|| ||quote|| Eles ||brackets-left|| investigadores ||brackets-right|| chegam a dizer ||colon|| ||single-quote|| Ah ||comma|| se o Lula não for candidato ||comma|| ele não vai ter força como cabo eleitoral ||single-quote|| ||period|| Testem ||quote|| ||comma|| completou o petista ||period|| Para o ex-presidente ||comma|| Moro usou ||quote|| mentiras contadas pela Polícia Federal e pelo Ministério Público ||quote|| para julgá-lo e condená-lo a nove anos e seis meses de prisão pelo caso do tríplex em Guarujá ||parentheses-left|| SP ||parentheses-right|| ||period|| O ex-presidente disse ainda não ter ||quote|| medo ||quote|| dos investigadores que ||comma|| de acordo com ele ||comma|| estão acostumados a ||quote|| mexer com deputados e senadores ||quote|| que temem as apurações ||period|| ||quote|| Eu quero que eles saibam o seguinte ||colon|| se eles estão acostumados a lidar com deputado que tem medo deles ||comma|| a mexer com senadores que têm medo deles ||comma|| quero dizer que tenho respeito profundo por quem me respeita ||comma|| pelas leis que nós ajudamos a criar ||comma|| mas não tenho respeito por quem não me respeita e eles não me respeitaram ||quote|| ||comma|| afirmou o petista ||period|| De acordo com aliados ||comma|| Lula não gosta de discutir ||comma|| mesmo que nos bastidores ||comma|| a chance de não ser candidato ao Planalto e a projeção do nome de Haddad como plano B do PT tem incomodado os mais próximos ao ex-presidente ||period|| O ex-prefeito ||comma|| que estava no evento nesta segunda ||comma|| fez um discurso rápido ||comma|| de menos de dez minutos ||comma|| em que encerrou dizendo esperar que Lula assuma a Presidência em 2019 ||period|| ||quote|| Espero que dia 1º de 
janeiro de 2019 esse pesadelo chamado Temer acabe e o senhor assuma a Presidência da República ||quote|| ||comma|| disse Haddad ||period|| ||single-quote|| DEMÔNIO DO MERCADO ||single-quote|| Lula voltou a fazer um discurso mais agressivo em relação ao mercado e disse que ||quote|| não tem cara de demônio ||quote|| ||comma|| mas quer que o respeitem ||quote|| como se fosse ||quote|| ||period|| ||quote|| Não tenho cara de demônio ||comma|| mas quero que eles me respeitem como se eu fosse ||comma|| porque eles sabem que a economia não vai ficar subordinada ao elitismo da sociedade brasileira ||quote|| ||comma|| disse o ex-presidente ||period|| O petista rivalizou ainda com o deputado Jair Bolsonaro ||parentheses-left|| PSC-RJ ||parentheses-right|| ||comma|| segundo colocado nas últimas pesquisas empatado com Marina Silva ||comma|| e disse que se ele ||quote|| agrada ao mercado ||quote|| ||comma|| o PT tem que ||quote|| desagradar ||quote|| ||period|| A Folha publicou nesta segunda ||parentheses-left|| 9 ||parentheses-right|| reportagem em que mostrou que o deputado ensaia movimento ao centro no debate econômico ||comma|| adotando um discurso simpático aos investidores do mercado financeiro ||period|| \n" ] ], [ [ "### Lookup tables\n\nWe need to create two dicts: `word_to_int` and `int_to_word`.", "_____no_output_____" ] ], [ [ "def lookup_tables(tokenized_text):\n vocab = set()\n \n for text in tokenized_text:\n text = text.lower()\n vocab = vocab.union(set(text.split()))\n \n vocab_to_int = {word: ii for ii, word in enumerate(vocab)}\n int_to_vocab = {ii: word for ii, word in enumerate(vocab)}\n \n return vocab, vocab_to_int, int_to_vocab\n\nvocab, vocab_to_int, int_to_vocab = lookup_tables(tokenized_text)\n\nprint('First ten vocab words: ')\nprint(list(vocab_to_int.items())[0:10])\nprint('\\nVocab length:')\nprint(len(vocab_to_int))\n\npickle.dump((tokenized_text, vocab, vocab_to_int, int_to_vocab, token_dict), open('preprocess/preprocess.p', 'wb'))", "First ten vocab words: \n[('a-', 0), ('retrocederem', 1), ('airton', 2), ('mab', 3), ('lovanni', 4), ('retratem', 5), ('filippeli', 6), ('roousseff', 7), ('monarquismo', 8), ('pintaram', 9)]\n\nVocab length:\n97648\n" ] ], [ [ "### Convert all text to integers\n\nLet's convert all articles to integer using the `vocab_to_int` variable.", "_____no_output_____" ] ], [ [ "tokenized_text, vocab, vocab_to_int, int_to_vocab, token_dict = pickle.load(open('preprocess/preprocess.p', mode='rb'))", "_____no_output_____" ], [ "def text_to_int(text):\n int_text = []\n for word in text.split():\n if word in vocab_to_int.keys():\n int_text.append(vocab_to_int[word])\n return np.asarray(int_text, dtype=np.int32)", "_____no_output_____" ], [ "def convert_articles_to_int(tokenized_text):\n all_int_text = []\n for text in tokenized_text:\n all_int_text.append(text_to_int(text))\n return np.asarray(all_int_text)", "_____no_output_____" ], [ "converted_text = convert_articles_to_int(tokenized_text)\n\npickle.dump((converted_text, vocab, vocab_to_int, int_to_vocab, token_dict), open('preprocess/preprocess2.p', 'wb'))", "_____no_output_____" ], [ "converted_text, vocab, vocab_to_int, int_to_vocab, token_dict = pickle.load(open('preprocess/preprocess2.p', mode='rb'))", "_____no_output_____" ], [ "converted_text[3]", "_____no_output_____" ] ], [ [ "### Subsampling text\n\nWe need to subsample our text and remove the words that not provides meaningful information, like: 'the', 'of', 'for'.\n\nLet's use Mikolov's subsampling formula, that's give us the probability of 
a word to be discarted:\n\n$$ P(w_i) = 1 - \\sqrt{\\frac{t}{f(w_i)}} $$\n\nWhere $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.", "_____no_output_____" ] ], [ [ "# Converts all articles to one big text\n\nall_converted_text = np.concatenate(converted_text)", "_____no_output_____" ], [ "def subsampling(int_words, threshold=1e-5):\n word_counts = Counter(int_words)\n total_count = len(int_words)\n freqs = {word: count/total_count for word, count in word_counts.items()}\n p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}\n train_words = [word for word in int_words if random.random() < (1 - p_drop[word])]\n \n return np.asarray(train_words)\n\nsubsampled_text = subsampling(all_converted_text)\n\nprint('Lenght before sumsampling: {0}'.format(len(all_converted_text)))\nprint('Lenght after sumsampling: {0}'.format(len(subsampled_text)))", "Lenght before sumsampling: 10536901\nLenght after sumsampling: 2134935\n" ], [ "pickle.dump((subsampled_text, vocab, vocab_to_int, int_to_vocab, token_dict), open('preprocess/preprocess3.p', 'wb'))", "_____no_output_____" ], [ "subsampled_text, vocab, vocab_to_int, int_to_vocab, token_dict = pickle.load(open('preprocess/preprocess3.p', mode='rb'))", "_____no_output_____" ] ], [ [ "### Save vocab to csv\n\nLet's save our vocab to csv file, so that way we can use it as an embedding on tensorboard.", "_____no_output_____" ] ], [ [ "subsampled_ints = set(subsampled_text)\n\nsubsampled_vocab = []\n\nfor word in subsampled_ints:\n subsampled_vocab.append(int_to_vocab[word])", "_____no_output_____" ], [ "vocab_df = pd.DataFrame.from_dict(int_to_vocab, orient='index')\n\nvocab_df.head()", "_____no_output_____" ], [ "vocab_df.to_csv('preprocess/vocab.tsv', header=False, index=False)", "_____no_output_____" ] ], [ [ "### Generate batches\n\nNow, we need to convert all text to numbers with lookup tables and create a batch generator.", "_____no_output_____" ] ], [ [ "def get_target(words, idx, window_size=5):\n ''' Get a list of words in a window around an index. 
'''\n words = words.flat\n words = list(words)\n \n R = np.random.randint(1, window_size+1)\n start = idx - R if (idx - R) > 0 else 0\n stop = idx + R\n target_words = set(words[start:idx] + words[idx+1:stop+1])\n \n return list(target_words)", "_____no_output_____" ], [ "def get_batches(words, batch_size, window_size=5):\n ''' Create a generator of word batches as a tuple (inputs, targets) '''\n \n n_batches = len(words)//batch_size\n \n # only full batches\n words = words[:n_batches*batch_size]\n \n for idx in range(0, len(words), batch_size):\n x, y = [], []\n batch = words[idx:idx+batch_size]\n for ii in range(len(batch)):\n batch_x = batch[ii]\n batch_y = get_target(batch, ii, window_size)\n y.extend(batch_y)\n x.extend([batch_x]*len(batch_y))\n yield x, y", "_____no_output_____" ] ], [ [ "## Building the Embedding Graph\n", "_____no_output_____" ] ], [ [ "def get_embed_placeholders(graph, reuse=False):\n with graph.as_default():\n with tf.variable_scope('placeholder', reuse=reuse):\n inputs = tf.placeholder(tf.int32, [None], name='inputs')\n labels = tf.placeholder(tf.int32, [None, None], name='labels')\n learning_rate = tf.placeholder(tf.float32, [None], name='learning_rate')\n \n return inputs, labels, learning_rate", "_____no_output_____" ], [ "def get_embed_embeddings(graph, vocab_size, embedding_size, inputs, reuse=False):\n with graph.as_default():\n with tf.variable_scope('embedding', reuse=reuse):\n embedding = tf.Variable(tf.random_uniform((vocab_size, embedding_size),\n -0.5 / embedding_size,\n 0.5 / embedding_size))\n embed = tf.nn.embedding_lookup(embedding, inputs)\n \n return embed", "_____no_output_____" ], [ "def get_nce_weights_biases(graph, vocab_size, embedding_size, reuse=False):\n with graph.as_default():\n with tf.variable_scope('nce', reuse=reuse):\n nce_weights = tf.Variable(tf.truncated_normal((vocab_size, embedding_size),\n stddev=1.0/math.sqrt(embedding_size)))\n nce_biases = tf.Variable(tf.zeros(vocab_size))\n \n # Historigram for tensorboard\n tf.summary.histogram('weights', nce_weights)\n tf.summary.histogram('biases', nce_biases)\n\n return nce_weights, nce_biases", "_____no_output_____" ], [ "def get_embed_loss(graph, num_sampled, nce_weights, nce_biases, labels, embed, vocab_size, reuse=False):\n with graph.as_default():\n with tf.variable_scope('nce', reuse=reuse):\n loss = tf.reduce_mean(tf.nn.sampled_softmax_loss(weights=nce_weights,\n biases=nce_biases,\n labels=labels,\n inputs=embed,\n num_sampled=num_sampled,\n num_classes=vocab_size))\n \n # Scalar for tensorboard\n tf.summary.scalar('loss', loss)\n \n return loss", "_____no_output_____" ], [ "def get_embed_opt(graph, learning_rate, loss, reuse=False):\n with graph.as_default():\n with tf.variable_scope('optmizer'):\n optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(loss)\n \n return optimizer", "_____no_output_____" ], [ "def train_embed(graph,\n batch_size,\n learning_rate,\n epochs,\n window_size,\n train_words,\n num_sampled,\n embedding_size,\n vocab_size,\n save_dir,\n print_every):\n \n with tf.Session(graph=graph) as sess:\n\n inputs, labels, lr = get_embed_placeholders(graph)\n\n embed = get_embed_embeddings(graph, vocab_size, embedding_size, inputs)\n\n nce_weights, nce_biases = get_nce_weights_biases(graph, vocab_size, embedding_size, reuse=True)\n\n loss = get_embed_loss(graph, num_sampled, nce_weights, nce_biases, labels, embed, vocab_size, reuse=True)\n\n optimizer = get_embed_opt(graph, learning_rate, loss, reuse=True)\n \n merged_summary = 
tf.summary.merge_all()\n \n train_writer = tf.summary.FileWriter(save_dir)\n\n sess.run(tf.global_variables_initializer())\n saver = tf.train.Saver()\n\n avg_loss = 0\n iteration = 1\n\n for e in range(1, epochs + 1):\n batches = get_batches(train_words, batch_size, window_size)\n\n start = time.time()\n\n for x, y in batches:\n feed = {\n inputs: x,\n labels: np.array(y)[:, None]\n }\n\n summary, _, train_loss = sess.run([merged_summary, optimizer, loss], feed_dict=feed)\n\n avg_loss += train_loss\n \n train_writer.add_summary(summary, epochs + 1)\n\n if iteration % print_every == 0: \n end = time.time()\n print(\"Epoch {}/{}\".format(e, epochs),\n \"Batch: {}\".format(iteration),\n \"Training loss: {:.4f}\".format(avg_loss/print_every),\n \"Speed: {:.4f} sec/batch\".format((end-start)/print_every))\n avg_loss = 0\n start = time.time()\n #break\n iteration += 1\n \n save_path = saver.save(sess, save_dir + '/embed.ckpt')", "_____no_output_____" ], [ "epochs = 10\nlearning_rate = 0.01\nwindow_size = 10\nbatch_size = 1024\nnum_sampled = 100\nembedding_size = 200\nvocab_size = len(vocab_to_int)\nsave_dir = 'checkpoints/embed/train'\nprint_every = 1000\n\ntf.reset_default_graph()\n\nembed_train_graph = tf.Graph()\n\ntrain_embed(embed_train_graph,\n batch_size,\n learning_rate,\n epochs,\n window_size,\n subsampled_text,\n num_sampled,\n embedding_size,\n vocab_size,\n save_dir,\n print_every\n )", "Epoch 1/10 Batch: 1000 Training loss: 3.2113 Speed: 0.6224 sec/batch\nEpoch 1/10 Batch: 2000 Training loss: 4.0933 Speed: 0.6024 sec/batch\nEpoch 2/10 Batch: 3000 Training loss: 4.3585 Speed: 0.5493 sec/batch\nEpoch 2/10 Batch: 4000 Training loss: 4.4642 Speed: 0.6041 sec/batch\nEpoch 3/10 Batch: 5000 Training loss: 4.4193 Speed: 0.5077 sec/batch\nEpoch 3/10 Batch: 6000 Training loss: 4.2437 Speed: 0.6098 sec/batch\nEpoch 4/10 Batch: 7000 Training loss: 4.4818 Speed: 0.4570 sec/batch\nEpoch 4/10 Batch: 8000 Training loss: 4.2949 Speed: 0.6150 sec/batch\nEpoch 5/10 Batch: 9000 Training loss: 4.7017 Speed: 0.4088 sec/batch\nEpoch 5/10 Batch: 10000 Training loss: 4.5748 Speed: 0.6180 sec/batch\nEpoch 6/10 Batch: 11000 Training loss: 4.8532 Speed: 0.3605 sec/batch\nEpoch 6/10 Batch: 12000 Training loss: 5.0591 Speed: 0.6264 sec/batch\nEpoch 7/10 Batch: 13000 Training loss: 4.8968 Speed: 0.3107 sec/batch\nEpoch 7/10 Batch: 14000 Training loss: 5.1024 Speed: 0.6286 sec/batch\nEpoch 8/10 Batch: 15000 Training loss: 5.1757 Speed: 0.2598 sec/batch\nEpoch 8/10 Batch: 16000 Training loss: 5.1417 Speed: 0.6313 sec/batch\nEpoch 9/10 Batch: 17000 Training loss: 5.5590 Speed: 0.2073 sec/batch\nEpoch 9/10 Batch: 18000 Training loss: 5.5394 Speed: 0.6324 sec/batch\nEpoch 10/10 Batch: 19000 Training loss: 5.7862 Speed: 0.1546 sec/batch\nEpoch 10/10 Batch: 20000 Training loss: 5.5163 Speed: 0.6339 sec/batch\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
cb80a337a0c731510f89c4fba0b29a352f369304
9,835
ipynb
Jupyter Notebook
how-to-use-azureml/work-with-data/dataprep/how-to-guides/join.ipynb
diondrapeck/MachineLearningNotebooks
14ecfb0bf34c74f85673371712da8030e7636a98
[ "MIT" ]
2
2020-07-12T02:37:49.000Z
2021-09-09T09:55:32.000Z
how-to-use-azureml/work-with-data/dataprep/how-to-guides/join.ipynb
diondrapeck/MachineLearningNotebooks
14ecfb0bf34c74f85673371712da8030e7636a98
[ "MIT" ]
null
null
null
how-to-use-azureml/work-with-data/dataprep/how-to-guides/join.ipynb
diondrapeck/MachineLearningNotebooks
14ecfb0bf34c74f85673371712da8030e7636a98
[ "MIT" ]
3
2020-07-14T21:33:01.000Z
2021-05-20T17:27:48.000Z
36.973684
278
0.559634
[ [ [ "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/work-with-data/dataprep/how-to-guides/join.png)", "_____no_output_____" ], [ "# Join\nCopyright (c) Microsoft Corporation. All rights reserved.<br>\nLicensed under the MIT License.<br>\n\nIn Data Prep you can easily join two Dataflows.", "_____no_output_____" ] ], [ [ "import azureml.dataprep as dprep", "_____no_output_____" ] ], [ [ "First, get the left side of the data into a shape that is ready for the join.", "_____no_output_____" ] ], [ [ "# get the first Dataflow and derive desired key column\ndflow_left = dprep.read_csv(path='https://dpreptestfiles.blob.core.windows.net/testfiles/BostonWeather.csv')\ndflow_left = dflow_left.derive_column_by_example(source_columns='DATE', new_column_name='date_timerange',\n example_data=[('11/11/2015 0:54', 'Nov 11, 2015 | 12AM-2AM'),\n ('2/1/2015 0:54', 'Feb 1, 2015 | 12AM-2AM'),\n ('1/29/2015 20:54', 'Jan 29, 2015 | 8PM-10PM')])\ndflow_left = dflow_left.drop_columns(['DATE'])\n\n# convert types and summarize data\ndflow_left = dflow_left.set_column_types(type_conversions={'HOURLYDRYBULBTEMPF': dprep.TypeConverter(dprep.FieldType.DECIMAL)})\ndflow_left = dflow_left.filter(expression=~dflow_left['HOURLYDRYBULBTEMPF'].is_error())\ndflow_left = dflow_left.summarize(group_by_columns=['date_timerange'],summary_columns=[dprep.SummaryColumnsValue('HOURLYDRYBULBTEMPF', dprep.api.engineapi.typedefinitions.SummaryFunction.MEAN, 'HOURLYDRYBULBTEMPF_Mean')] )\n\n# cache the result so the steps above are not executed every time we pull on the data\nimport os\nfrom pathlib import Path\ncache_dir = str(Path(os.getcwd(), 'dataflow-cache'))\ndflow_left.cache(directory_path=cache_dir)\ndflow_left.head(5)", "_____no_output_____" ] ], [ [ "Now let's prepare the data for the right side of the join.", "_____no_output_____" ] ], [ [ "# get the second Dataflow and desired key column\ndflow_right = dprep.read_csv(path='https://dpreptestfiles.blob.core.windows.net/bike-share/*-hubway-tripdata.csv')\ndflow_right = dflow_right.keep_columns(['starttime', 'start station id'])\ndflow_right = dflow_right.derive_column_by_example(source_columns='starttime', new_column_name='l_date_timerange',\n example_data=[('2015-01-01 00:21:44', 'Jan 1, 2015 | 12AM-2AM')])\ndflow_right = dflow_right.drop_columns('starttime')\n\n# cache the results\ndflow_right.cache(directory_path=cache_dir)\ndflow_right.head(5)", "_____no_output_____" ] ], [ [ "There are three ways you can join two Dataflows in Data Prep:\n1. Create a `JoinBuilder` object for interactive join configuration.\n2. Call ```join()``` on one of the Dataflows and pass in the other along with all other arguments.\n3. Call ```Dataflow.join()``` method and pass in two Dataflows along with all other arguments.\n\nWe will explore the builder object as it simplifies the determination of correct arguments. 
", "_____no_output_____" ] ], [ [ "# construct a builder for joining dataflow_l with dataflow_r\njoin_builder = dflow_left.builders.join(right_dataflow=dflow_right, left_column_prefix='l', right_column_prefix='r')\n\njoin_builder", "_____no_output_____" ] ], [ [ "So far the builder has no properties set except default values.\nFrom here you can set each of the options and preview its effect on the join result or use Data Prep to determine some of them.\n\nLet's start with determining appropriate column prefixes for left and right side of the join and lists of columns that would not conflict and therefore don't need to be prefixed.", "_____no_output_____" ] ], [ [ "join_builder.detect_column_info()\njoin_builder", "_____no_output_____" ] ], [ [ "You can see that Data Prep has performed a pull on both Dataflows to determine the column names in them. Given that `dataflow_r` already had a column starting with `l_` new prefix got generated which would not collide with any column names that are already present.\nAdditionally columns in each Dataflow that won't conflict during join would remain unprefixed.\nThis apprach to column naming is crucial for join robustness to schema changes in the data. Let's say that at some time in future the data consumed by left Dataflow will also have `l_date_timerange` column in it.\nConfigured as above the join will still run as expected and the new column will be prefixed with `l2_` ensuring that ig column `l_date_timerange` was consumed by some other future transformation it remains unaffected.\n\nNote: `KEY_generated` is appended to both lists and is reserved for Data Prep use in case Autojoin is performed.\n\n### Autojoin\nAutojoin is a Data prep feature that determines suitable join arguments given data on both sides. In some cases Autojoin can even derive a key column from a number of available columns in the data.\nHere is how you can use Autojoin:", "_____no_output_____" ] ], [ [ "# generate join suggestions\njoin_builder.generate_suggested_join()\n\n# list generated suggestions\njoin_builder.list_join_suggestions()", "_____no_output_____" ] ], [ [ "Now let's select the first suggestion and preview the result of the join.", "_____no_output_____" ] ], [ [ "# apply first suggestion\njoin_builder.apply_suggestion(0)\n\njoin_builder.preview(10)", "_____no_output_____" ] ], [ [ "Now, get our new joined Dataflow.", "_____no_output_____" ] ], [ [ "dflow_autojoined = join_builder.to_dataflow().drop_columns(['l_date_timerange'])", "_____no_output_____" ] ], [ [ "### Joining two Dataflows without pulling the data\n\nIf you don't want to pull on data and know what join should look like, you can always use the join method on the Dataflow.", "_____no_output_____" ] ], [ [ "dflow_joined = dprep.Dataflow.join(left_dataflow=dflow_left,\n right_dataflow=dflow_right,\n join_key_pairs=[('date_timerange', 'l_date_timerange')],\n left_column_prefix='l2_',\n right_column_prefix='r_')\n", "_____no_output_____" ], [ "dflow_joined.head(5)", "_____no_output_____" ], [ "dflow_joined = dflow_joined.filter(expression=dflow_joined['r_start station id'] == '67')\ndf = dflow_joined.to_pandas_dataframe()\ndf", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
cb80a4855fba535e00415eefc61a5013878fc489
41,142
ipynb
Jupyter Notebook
Starter_Code/.ipynb_checkpoints/credit_risk_resampling-checkpoint.ipynb
kenlindgren93/Homework-11
faba4975846a5fb6ef331ce46af48013bf6d5a7d
[ "ADSL" ]
2
2021-02-17T16:15:11.000Z
2021-02-17T16:16:25.000Z
Starter_Code/.ipynb_checkpoints/credit_risk_resampling-checkpoint.ipynb
kenlindgren93/Homework-11
faba4975846a5fb6ef331ce46af48013bf6d5a7d
[ "ADSL" ]
null
null
null
Starter_Code/.ipynb_checkpoints/credit_risk_resampling-checkpoint.ipynb
kenlindgren93/Homework-11
faba4975846a5fb6ef331ce46af48013bf6d5a7d
[ "ADSL" ]
null
null
null
32.092044
294
0.427519
[ [ [ "# Credit Risk Resampling Techniques", "_____no_output_____" ] ], [ [ "import warnings\nwarnings.filterwarnings('ignore')", "_____no_output_____" ], [ "import numpy as np\nimport pandas as pd\nfrom pathlib import Path\nfrom collections import Counter", "_____no_output_____" ] ], [ [ "# Read the CSV and Perform Basic Data Cleaning", "_____no_output_____" ] ], [ [ "columns = [\n \"loan_amnt\", \"int_rate\", \"installment\", \"home_ownership\",\n \"annual_inc\", \"verification_status\", \"issue_d\", \"loan_status\",\n \"pymnt_plan\", \"dti\", \"delinq_2yrs\", \"inq_last_6mths\",\n \"open_acc\", \"pub_rec\", \"revol_bal\", \"total_acc\",\n \"initial_list_status\", \"out_prncp\", \"out_prncp_inv\", \"total_pymnt\",\n \"total_pymnt_inv\", \"total_rec_prncp\", \"total_rec_int\", \"total_rec_late_fee\",\n \"recoveries\", \"collection_recovery_fee\", \"last_pymnt_amnt\", \"next_pymnt_d\",\n \"collections_12_mths_ex_med\", \"policy_code\", \"application_type\", \"acc_now_delinq\",\n \"tot_coll_amt\", \"tot_cur_bal\", \"open_acc_6m\", \"open_act_il\",\n \"open_il_12m\", \"open_il_24m\", \"mths_since_rcnt_il\", \"total_bal_il\",\n \"il_util\", \"open_rv_12m\", \"open_rv_24m\", \"max_bal_bc\",\n \"all_util\", \"total_rev_hi_lim\", \"inq_fi\", \"total_cu_tl\",\n \"inq_last_12m\", \"acc_open_past_24mths\", \"avg_cur_bal\", \"bc_open_to_buy\",\n \"bc_util\", \"chargeoff_within_12_mths\", \"delinq_amnt\", \"mo_sin_old_il_acct\",\n \"mo_sin_old_rev_tl_op\", \"mo_sin_rcnt_rev_tl_op\", \"mo_sin_rcnt_tl\", \"mort_acc\",\n \"mths_since_recent_bc\", \"mths_since_recent_inq\", \"num_accts_ever_120_pd\", \"num_actv_bc_tl\",\n \"num_actv_rev_tl\", \"num_bc_sats\", \"num_bc_tl\", \"num_il_tl\",\n \"num_op_rev_tl\", \"num_rev_accts\", \"num_rev_tl_bal_gt_0\",\n \"num_sats\", \"num_tl_120dpd_2m\", \"num_tl_30dpd\", \"num_tl_90g_dpd_24m\",\n \"num_tl_op_past_12m\", \"pct_tl_nvr_dlq\", \"percent_bc_gt_75\", \"pub_rec_bankruptcies\",\n \"tax_liens\", \"tot_hi_cred_lim\", \"total_bal_ex_mort\", \"total_bc_limit\",\n \"total_il_high_credit_limit\", \"hardship_flag\", \"debt_settlement_flag\"\n]\n\ntarget = [\"loan_status\"]", "_____no_output_____" ], [ "# Load the data\nfile_path = Path('../Resources/LoanStats_2019Q1.csv.zip')\ndf = pd.read_csv(file_path, skiprows=1)[:-2]\ndf = df.loc[:, columns].copy()\n\n# Drop the null columns where all values are null\ndf = df.dropna(axis='columns', how='all')\n\n# Drop the null rows\ndf = df.dropna()\n\n# Remove the `Issued` loan status\nissued_mask = df['loan_status'] != 'Issued'\ndf = df.loc[issued_mask]\n\n# convert interest rate to numerical\ndf['int_rate'] = df['int_rate'].str.replace('%', '')\ndf['int_rate'] = df['int_rate'].astype('float') / 100\n\n\n# Convert the target column values to low_risk and high_risk based on their values\nx = {'Current': 'low_risk'} \ndf = df.replace(x)\n\nx = dict.fromkeys(['Late (31-120 days)', 'Late (16-30 days)', 'Default', 'In Grace Period'], 'high_risk') \ndf = df.replace(x)\n\ndf.reset_index(inplace=True, drop=True)\n\ndf.head()", "_____no_output_____" ] ], [ [ "# Split the Data into Training and Testing", "_____no_output_____" ] ], [ [ "# Create our features\nX = # YOUR CODE HERE\n\n# Create our target\ny = # YOUR CODE HERE", "_____no_output_____" ], [ "X.describe()", "_____no_output_____" ], [ "# Check the balance of our target values\ny['loan_status'].value_counts()", "_____no_output_____" ], [ "# Create X_train, X_test, y_train, y_test\n# YOUR CODE HERE", "_____no_output_____" ] ], [ [ "## Data Pre-Processing\n\nScale the training 
and testing data using the `StandardScaler` from `sklearn`. Remember that when scaling the data, you only scale the features data (`X_train` and `X_testing`).", "_____no_output_____" ] ], [ [ "# Create the StandardScaler instance\nfrom sklearn.preprocessing import StandardScaler\nscaler = StandardScaler()", "_____no_output_____" ], [ "# Fit the Standard Scaler with the training data\n# When fitting scaling functions, only train on the training dataset\n# YOUR CODE HERE", "_____no_output_____" ], [ "# Scale the training and testing data\n# YOUR CODE HERE", "_____no_output_____" ] ], [ [ "# Oversampling\n\nIn this section, you will compare two oversampling algorithms to determine which algorithm results in the best performance. You will oversample the data using the naive random oversampling algorithm and the SMOTE algorithm. For each algorithm, be sure to complete the folliowing steps:\n\n1. View the count of the target classes using `Counter` from the collections library. \n3. Use the resampled data to train a logistic regression model.\n3. Calculate the balanced accuracy score from sklearn.metrics.\n4. Print the confusion matrix from sklearn.metrics.\n5. Generate a classication report using the `imbalanced_classification_report` from imbalanced-learn.\n\nNote: Use a random state of 1 for each sampling algorithm to ensure consistency between tests", "_____no_output_____" ], [ "### Naive Random Oversampling", "_____no_output_____" ] ], [ [ "# Resample the training data with the RandomOversampler\n# YOUR CODE HERE", "_____no_output_____" ], [ "# Train the Logistic Regression model using the resampled data\n# YOUR CODE HERE", "_____no_output_____" ], [ "# Calculated the balanced accuracy score\n# YOUR CODE HERE", "_____no_output_____" ], [ "# Display the confusion matrix\n# YOUR CODE HERE", "_____no_output_____" ], [ "# Print the imbalanced classification report\n# YOUR CODE HERE", " pre rec spe f1 geo iba sup\n\n high_risk 0.01 0.72 0.71 0.03 0.72 0.51 101\n low_risk 1.00 0.71 0.72 0.83 0.72 0.51 17104\n\navg / total 0.99 0.71 0.72 0.82 0.72 0.51 17205\n\n" ] ], [ [ "### SMOTE Oversampling", "_____no_output_____" ] ], [ [ "# Resample the training data with SMOTE\n# YOUR CODE HERE", "_____no_output_____" ], [ "# Train the Logistic Regression model using the resampled data\n# YOUR CODE HERE", "_____no_output_____" ], [ "# Calculated the balanced accuracy score\n# YOUR CODE HERE", "_____no_output_____" ], [ "# Display the confusion matrix\n# YOUR CODE HERE", "_____no_output_____" ], [ "# Print the imbalanced classification report\n# YOUR CODE HERE", " pre rec spe f1 geo iba sup\n\n high_risk 0.01 0.70 0.70 0.03 0.70 0.49 101\n low_risk 1.00 0.70 0.70 0.82 0.70 0.49 17104\n\navg / total 0.99 0.70 0.70 0.82 0.70 0.49 17205\n\n" ] ], [ [ "# Undersampling\n\nIn this section, you will test an undersampling algorithms to determine which algorithm results in the best performance compared to the oversampling algorithms above. You will undersample the data using the Cluster Centroids algorithm and complete the folliowing steps:\n\n1. View the count of the target classes using `Counter` from the collections library. \n3. Use the resampled data to train a logistic regression model.\n3. Calculate the balanced accuracy score from sklearn.metrics.\n4. Print the confusion matrix from sklearn.metrics.\n5. 
Generate a classication report using the `imbalanced_classification_report` from imbalanced-learn.\n\nNote: Use a random state of 1 for each sampling algorithm to ensure consistency between tests", "_____no_output_____" ] ], [ [ "# Resample the data using the ClusterCentroids resampler\n# YOUR CODE HERE", "_____no_output_____" ], [ "# Train the Logistic Regression model using the resampled data\n# YOUR CODE HERE", "_____no_output_____" ], [ "# Calculated the balanced accuracy score\n# YOUR CODE HERE", "_____no_output_____" ], [ "# Display the confusion matrix\n# YOUR CODE HERE", "_____no_output_____" ], [ "# Print the imbalanced classification report\n# YOUR CODE HERE", " pre rec spe f1 geo iba sup\n\n high_risk 0.01 0.81 0.47 0.02 0.62 0.40 101\n low_risk 1.00 0.47 0.81 0.64 0.62 0.37 17104\n\navg / total 0.99 0.48 0.81 0.64 0.62 0.37 17205\n\n" ] ], [ [ "# Combination (Over and Under) Sampling\n\nIn this section, you will test a combination over- and under-sampling algorithm to determine if the algorithm results in the best performance compared to the other sampling algorithms above. You will resample the data using the SMOTEENN algorithm and complete the folliowing steps:\n\n1. View the count of the target classes using `Counter` from the collections library. \n3. Use the resampled data to train a logistic regression model.\n3. Calculate the balanced accuracy score from sklearn.metrics.\n4. Print the confusion matrix from sklearn.metrics.\n5. Generate a classication report using the `imbalanced_classification_report` from imbalanced-learn.\n\nNote: Use a random state of 1 for each sampling algorithm to ensure consistency between tests", "_____no_output_____" ] ], [ [ "# Resample the training data with SMOTEENN\n# YOUR CODE HERE", "_____no_output_____" ], [ "# Train the Logistic Regression model using the resampled data\n# YOUR CODE HERE", "_____no_output_____" ], [ "# Calculated the balanced accuracy score\n# YOUR CODE HERE", "_____no_output_____" ], [ "# Display the confusion matrix\n# YOUR CODE HERE", "_____no_output_____" ], [ "# Print the imbalanced classification report\n# YOUR CODE HERE", " pre rec spe f1 geo iba sup\n\n high_risk 0.01 0.71 0.68 0.03 0.70 0.49 101\n low_risk 1.00 0.68 0.71 0.81 0.70 0.48 17104\n\navg / total 0.99 0.68 0.71 0.81 0.70 0.48 17205\n\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
cb80a57d78a94f55e1a640c64f8af3463b0b65f6
5,940
ipynb
Jupyter Notebook
Jupyter Notebooks/Jupyter_Notebooks_Count_code_lines.ipynb
krajai/testt
3aaf5fd7fe85e712c8c1615852b50f9ccb6737e5
[ "BSD-3-Clause" ]
1
2022-03-24T07:46:45.000Z
2022-03-24T07:46:45.000Z
Jupyter Notebooks/Jupyter_Notebooks_Count_code_lines.ipynb
PZawieja/awesome-notebooks
8ae86e5689749716e1315301cecdad6f8843dcf8
[ "BSD-3-Clause" ]
null
null
null
Jupyter Notebooks/Jupyter_Notebooks_Count_code_lines.ipynb
PZawieja/awesome-notebooks
8ae86e5689749716e1315301cecdad6f8843dcf8
[ "BSD-3-Clause" ]
null
null
null
22.585551
308
0.525253
[ [ [ "<img width=\"10%\" alt=\"Naas\" src=\"https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160\"/>", "_____no_output_____" ], [ "# Jupyter Notebooks - Count code lines\n<a href=\"https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/Jupyter%20Notebooks/Jupyter_Notebooks_Count_code_lines.ipynb\" target=\"_parent\"><img src=\"https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg\"/></a>", "_____no_output_____" ], [ "**Tags:** #jupyternotebooks #naas #jupyter-notebooks #read #codelines #snippet #operations #text", "_____no_output_____" ], [ "**Author:** [Florent Ravenel](https://www.linkedin.com/in/florent-ravenel/)", "_____no_output_____" ], [ "## Input", "_____no_output_____" ], [ "### Import libraries", "_____no_output_____" ] ], [ [ "import json", "_____no_output_____" ] ], [ [ "### Variables", "_____no_output_____" ] ], [ [ "# Input\nnotebook_path = \"../template.ipynb\"", "_____no_output_____" ] ], [ [ "## Model", "_____no_output_____" ], [ "### Get module libraries in notebook", "_____no_output_____" ] ], [ [ "def count_codes(notebook_path):\n with open(notebook_path) as f:\n nb = json.load(f)\n data = 0\n \n cells = nb.get(\"cells\")\n # Check each cells\n for cell in cells:\n cell_type = cell.get('cell_type')\n sources = cell.get('source')\n for source in sources:\n if cell_type == \"code\":\n if not source.startswith('\\n') and not source.startswith('#'):\n data += 1\n if data == 0:\n print(\"❎ No line of code wrote in notebook:\", notebook_path)\n else:\n print(f\"✅ {data} line(s) of code wrote in notebook:\", notebook_path)\n return data", "_____no_output_____" ] ], [ [ "## Output", "_____no_output_____" ], [ "### Display result", "_____no_output_____" ] ], [ [ "no_lines = count_codes(notebook_path)\nno_lines", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ] ]
cb80a90d8415cc9e87f762a859c5622aba51534b
3,149
ipynb
Jupyter Notebook
chapter1/homework/computer/3-15/201411680849.ipynb
hpishacker/python_tutorial
9005f0db9dae10bdc1d1c3e9e5cf2268036cd5bd
[ "MIT" ]
76
2017-09-26T01:07:26.000Z
2021-02-23T03:06:25.000Z
chapter1/homework/computer/3-15/201411680849.ipynb
hpishacker/python_tutorial
9005f0db9dae10bdc1d1c3e9e5cf2268036cd5bd
[ "MIT" ]
5
2017-12-10T08:40:11.000Z
2020-01-10T03:39:21.000Z
chapter1/homework/computer/3-15/201411680849.ipynb
hacker-14/python_tutorial
4a110b12aaab1313ded253f5207ff263d85e1b56
[ "MIT" ]
112
2017-09-26T01:07:30.000Z
2021-11-25T19:46:51.000Z
20.185897
94
0.444586
[ [ [ "#练习2:写出由用户指定整数个数,并由用户输入多个整数,并求和的代码\nnum=int(input(\"请输入想要求和的整数个数,并在输入结束后回车:\"))\ni=0\nsum=0\nprint(\"请输入想要求和的整数。每输入一个后请回车:\")\nwhile i<num:\n a=int(input());\n sum=sum+a;\n i=i+1\nprint(\"输入的\",num,\"个整数的和为:\",sum,\"。\\n\")", "请输入想要求和的整数个数,并在输入结束后回车:10\n请输入想要求和的整数。每输入一个后请回车:\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n输入的 10 个整数的和为: 55 。\n\n" ], [ "#练习3:用户可以输入的任意多个数字,直到用户不想输入为止。\nsum=0\ni=0\nprint(\"请输入想要求和的整数,不想再输时请输入'n'。每输入一个后请回车:\")\nwhile True:\n a=str(input());\n if a=='n':\n break;\n else:\n a=int(a)\n sum=sum+a\n i+=1;\nprint(\"输入的\",i,\"个整数的和为\",sum,\"\\n\")", "请输入想要求和的整数,不想再输时请输入'n'。每输入一个后请回车:\n1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n-20\nn\n输入的 11 个整数的和为 35 \n\n" ], [ "#练习4:用户可以输入的任意多个数字,直到输入所有数字的和比当前输入数字小,且输入所有数字的积比当前输入数字的平方小\nhe=0\nji=1\ni=0\nwhile True:\n a=int(input('请输入一个整数,并在输入结束后回车:'));\n if he<a and ji<a**2:\n break;\n else: \n he=he+a;\n ji=ji*a;\n i+=1;\nprint(\"输入的\",i,\"个数字的和为\",he,\",小于当前输入的数字\",a,\"。且积为\",ji,\",小于当前输入的数字的平方\",a**2,\"。\")", "请输入一个整数,并在输入结束后回车:1\n请输入一个整数,并在输入结束后回车:1\n请输入一个整数,并在输入结束后回车:2\n请输入一个整数,并在输入结束后回车:3\n请输入一个整数,并在输入结束后回车:4\n请输入一个整数,并在输入结束后回车:10\n请输入一个整数,并在输入结束后回车:22\n输入的 6 个数字的和为 21 ,小于当前输入的数字 22 。且积为 240 ,小于当前输入的数字的平方 484 。\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
cb80aa0ffbd3cc458c66462bb821ccb14ba1c3b6
8,268
ipynb
Jupyter Notebook
STH_Graph.ipynb
unsuthee/node2hash
d48ef8e0b8699c6a47d135615f2b39bb98fba41b
[ "MIT" ]
2
2019-10-24T15:12:09.000Z
2020-10-10T01:34:58.000Z
STH_Graph.ipynb
unsuthee/node2hash
d48ef8e0b8699c6a47d135615f2b39bb98fba41b
[ "MIT" ]
null
null
null
STH_Graph.ipynb
unsuthee/node2hash
d48ef8e0b8699c6a47d135615f2b39bb98fba41b
[ "MIT" ]
2
2019-09-05T00:44:32.000Z
2019-10-27T10:40:30.000Z
41.34
118
0.515723
[ [ [ "import numpy as np\nimport random\nfrom tqdm import *\nimport os\nimport sklearn.preprocessing\nfrom utils import *\nfrom graph_utils import *\nfrom rank_metrics import *\n\nimport time\n\nparams = get_cmdline_params()\nmodel_name = \"STHgraph_{}_{}_step{}\".format(params.walk_type, params.modelinfo, params.walk_steps)\n\n##################################################################################################\n\nnameManager = createGraphNameManager(params.dataset)\ndata = Load_Graph_Dataset(nameManager.bow_fn)\n\nprint('num train:{}'.format(data.n_trains))\nprint('num test:{}'.format(data.n_tests))\nprint('num vocabs:{}'.format(data.n_feas))\nprint('num labels:{}'.format(data.n_tags))\n\n##################################################################################################\n\ntrain_graph = GraphData(nameManager.train_graph)\ntest_graph = GraphData(nameManager.test_graph)\n \n#################################################################################################\n\nfrom scipy.sparse.linalg import eigsh\nfrom scipy.sparse import coo_matrix\nfrom sklearn.metrics.pairwise import cosine_similarity\nfrom sklearn.svm import LinearSVC\n\nclass STH:\n def __init__(self, num_bits):\n super(STH, self).__init__()\n \n self.num_bits = num_bits\n self.clfs = [LinearSVC() for n in range(num_bits)]\n \n def create_weight_matrix(self, train_mat, num_train, graph):\n columns = []\n rows = []\n weights = []\n for node_id in range(num_train):\n col = graph.graph[node_id]\n #col = DFS_walk(graph, node_id, 20)\n #col = second_order_neighbor_walk(graph, node_id)\n #print(node_id)\n if len(col) <= 0:\n col = [node_id]\n #assert(len(col) > 0)\n \n row = [node_id] * len(col)\n w = cosine_similarity(train_mat[node_id], train_mat[col])\n #w = [[0.9] * len(col)]\n\n columns += col\n rows += row\n weights += list(w[0])\n\n W = coo_matrix((weights, (rows, columns)), shape=(num_train, num_train))\n return W\n \n def fit_transform(self, train_mat, num_train, graph):\n W = self.create_weight_matrix(train_mat, num_train, graph)\n D = np.asarray(W.sum(axis=1)).squeeze() + 0.0001 # adding damping value for a numerical stabability\n D = scipy.sparse.diags(D)\n L = D - W\n \n L = scipy.sparse.csc_matrix(L)\n D = scipy.sparse.csc_matrix(D)\n\n num_attempts = 0\n max_attempts = 3\n success = False\n \n while not success:\n E, Y = eigsh(L, k=self.num_bits+1, M=D, which='SM')\n success = np.all(np.isreal(Y))\n \n if not success:\n print(\"Warning: Some eigenvalues are not real values. 
Retry to solve Eigen-decomposition.\")\n num_attempts += 1\n \n if num_attempts > max_attempts:\n assert(np.all(np.isreal(Y))) # if this fails, re-run fit again\n assert(False) # Check your data \n \n Y = np.real(Y)\n Y = Y[:, 1:]\n \n medHash = MedianHashing()\n cbTrain = medHash.fit_transform(Y) \n for b in range(0, cbTrain.shape[1]):\n self.clfs[b].fit(train_mat, cbTrain[:, b])\n return cbTrain\n \n def transform(self, test_mat, num_test):\n cbTest = np.zeros((num_test, self.num_bits), dtype=np.int64)\n for b in range(0, self.num_bits):\n cbTest[:,b] = self.clfs[b].predict(test_mat)\n return cbTest\n \nos.environ[\"CUDA_VISIBLE_DEVICES\"]=params.gpu_num\n\nsth_model = STH(params.nbits)\n\ncbTrain = sth_model.fit_transform(data.train, data.n_trains, train_graph)\ncbTest = sth_model.transform(data.test, data.n_tests)\n\ngnd_train = data.gnd_train.toarray()\ngnd_test = data.gnd_test.toarray()\n\neval_results = DotMap()\n\ntop_k_indices = retrieveTopKDoc(cbTrain, cbTest, batchSize=params.test_batch_size, TopK=100)\nrelevances = countNumRelevantDoc(gnd_train, gnd_test, top_k_indices)\nrelevances = relevances.cpu().numpy()\n\neval_results.ndcg_at_5 = np.mean([ndcg_at_k(r, 5) for r in relevances[:, :5]])\neval_results.ndcg_at_10 = np.mean([ndcg_at_k(r, 10) for r in relevances[:, :10]])\neval_results.ndcg_at_20 = np.mean([ndcg_at_k(r, 20) for r in relevances[:, :20]])\neval_results.ndcg_at_50 = np.mean([ndcg_at_k(r, 50) for r in relevances[:, :50]])\neval_results.ndcg_at_100 = np.mean([ndcg_at_k(r, 100) for r in relevances[:, :100]])\n\nrelevances = (relevances > 0)\neval_results.prec_at_5 = np.mean(np.sum(relevances[:, :5], axis=1)) / 100\neval_results.prec_at_10 = np.mean(np.sum(relevances[:, :10], axis=1)) / 100\neval_results.prec_at_20 = np.mean(np.sum(relevances[:, :20], axis=1)) / 100\neval_results.prec_at_50 = np.mean(np.sum(relevances[:, :50], axis=1)) / 100\neval_results.prec_at_100 = np.mean(np.sum(relevances[:, :100], axis=1)) / 100\n\nbest_results = EvalResult(eval_results)\n\nprint('*' * 80)\nmodel_name = \"STH_graph\"\nif params.save:\n import scipy.io\n data_path = os.path.join(os.environ['HOME'], 'projects/graph_embedding/save_bincode', params.dataset)\n save_fn = os.path.join(data_path, '{}.bincode.{}.mat'.format(model_name, params.nbits))\n\n print(\"save the binary code to {} ...\".format(save_fn))\n cbTrain = sth_model.fit_transform(data.train, data.n_trains, train_graph)\n cbTest = sth_model.transform(data.test, data.n_tests)\n \n scipy.io.savemat(save_fn, mdict={'train': cbTrain, 'test': cbTest})\n print('save data to {}'.format(save_fn))\n\nif params.save_results:\n fn = \"results/{}/results.{}.csv\".format(params.dataset, params.nbits)\n save_eval_results(fn, model_name, best_results)\n\nprint('*' * 80)\nprint(\"{}\".format(model_name))\n\nmetrics = ['prec_at_{}'.format(n) for n in ['5', '10', '20', '50', '100']]\nprec_results = \",\".join([\"{:.3f}\".format(best_results.best_scores[metric]) for metric in metrics])\nprint(\"prec: {}\".format(prec_results))\n\nmetrics = ['ndcg_at_{}'.format(n) for n in ['5', '10', '20', '50', '100']]\nndcg_results = \",\".join([\"{:.3f}\".format(best_results.best_scores[metric]) for metric in metrics])\nprint(\"ndcg: {}\".format(ndcg_results))\n", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
cb80b81e04357f8f47d8d1b2fabb00e568d81b16
9,399
ipynb
Jupyter Notebook
Codes/RadiatorArray.ipynb
rxa254/GWPhasedArray
2340338fc67d1c9ded4f383a99527516b5c29d1a
[ "MIT" ]
1
2019-04-03T04:17:43.000Z
2019-04-03T04:17:43.000Z
Codes/RadiatorArray.ipynb
rxa254/GWPhasedArray
2340338fc67d1c9ded4f383a99527516b5c29d1a
[ "MIT" ]
null
null
null
Codes/RadiatorArray.ipynb
rxa254/GWPhasedArray
2340338fc67d1c9ded4f383a99527516b5c29d1a
[ "MIT" ]
null
null
null
32.978947
138
0.476646
[ [ [ "# <font color='Purple'>Gravitational Wave Generation Array</font>\n\nA Phase Array of dumbells can make a detectable signal...\n\n#### To do:\n1. Calculate the dumbell parameters for given mass and frequency\n1. How many dumbells?\n1. Far-field radiation pattern from many radiators.\n1. Beamed GW won't be a plane wave. So what?\n1. How much energy is lost to keep it spinning?\n1. How do we levitate while spinning?\n\n##### Related work on GW radiation\n1. https://www.mit.edu/~iancross/8901_2019A/readings/Quadrupole-GWradiation-Ferrari.pdf\n1. Wikipedia article on the GW Quadrupole formula (https://en.wikipedia.org/wiki/Quadrupole_formula)\n1. MIT 8.901 lecture on GW radiation (http://www.mit.edu/~iancross/8901_2019A/lec005.pdf)", "_____no_output_____" ], [ "## <font color='Orange'>Imports, settings, and constants</font>", "_____no_output_____" ] ], [ [ "\nimport numpy as np\n#import matplotlib as mpl\nimport matplotlib.pyplot as plt\n#import multiprocessing as mproc\n#import scipy.signal as sig\nimport scipy.constants as scc\n#import scipy.special as scsp\n#import sys, time\nfrom scipy.io import loadmat\n\n# http://www.astropy.org/astropy-tutorials/Quantities.html\n# http://docs.astropy.org/en/stable/constants/index.html\nfrom astropy import constants as ascon\n\n# Update the matplotlib configuration parameters:\nplt.rcParams.update({'text.usetex': False,\n 'lines.linewidth': 4,\n 'font.family': 'serif',\n 'font.serif': 'Georgia',\n 'font.size': 22,\n 'xtick.direction': 'in',\n 'ytick.direction': 'in',\n 'xtick.labelsize': 'medium',\n 'ytick.labelsize': 'medium',\n 'axes.labelsize': 'medium',\n 'axes.titlesize': 'medium',\n 'axes.grid.axis': 'both',\n 'axes.grid.which': 'both',\n 'axes.grid': True,\n 'grid.color': 'xkcd:beige',\n 'grid.alpha': 0.253,\n 'lines.markersize': 12,\n 'legend.borderpad': 0.2,\n 'legend.fancybox': True,\n 'legend.fontsize': 'small',\n 'legend.framealpha': 0.8,\n 'legend.handletextpad': 0.5,\n 'legend.labelspacing': 0.33,\n 'legend.loc': 'best',\n 'figure.figsize': ((12, 8)),\n 'savefig.dpi': 140,\n 'savefig.bbox': 'tight',\n 'pdf.compression': 9})\n\ndef setGrid(ax):\n ax.grid(which='major', alpha=0.6)\n ax.grid(which='major', linestyle='solid', alpha=0.6)\n \ncList = [(0, 0.1, 0.9),\n (0.9, 0, 0),\n (0, 0.7, 0),\n (0, 0.8, 0.8),\n (1.0, 0, 0.9),\n (0.8, 0.8, 0),\n (1, 0.5, 0),\n (0.5, 0.5, 0.5),\n (0.4, 0, 0.5),\n (0, 0, 0),\n (0.3, 0, 0),\n (0, 0.3, 0)]\n\nG = scc.G # N * m**2 / kg**2; gravitational constant\nc = scc.c", "_____no_output_____" ] ], [ [ "## Terrestrial Dumbell (Current Tech)", "_____no_output_____" ] ], [ [ "sigma_yield = 9000e6 # Yield strength of annealed silicon [Pa]\nm_dumb = 100 # mass of the dumbell end [kg]\nL_dumb = 10 # Length of the dumbell [m]\nr_dumb = 1 # radius of the dumbell rod [m]\n\nrho_pb = 11.34e3 # density of lead [kg/m^3]\nr_ball = ((m_dumb / rho_pb)/(4/3 * np.pi))**(1/3)\n\n\nf_rot = 1e3 / 2\nlamduh = c / f_rot\n\nv_dumb = 2*np.pi*(L_dumb/2) * f_rot\na_dumb = v_dumb**2 / (L_dumb / 2)\nF = a_dumb * m_dumb\n\nstress = F / (np.pi * r_dumb**2)\n\nprint('Ball radius is ' + '{:0.2f}'.format(r_ball) + ' m')\nprint(r'Acceleration of ball = ' + '{:0.2g}'.format(a_dumb) + r' m/s^2')\nprint('Stress = ' + '{:0.2f}'.format(stress/sigma_yield) + 'x Yield Stress')", "_____no_output_____" ] ], [ [ "#### Futuristic Dumbell", "_____no_output_____" ] ], [ [ "sigma_yield = 5000e9 # ultimate tensile strength of ??? 
[Pa]\nm_f = 1000 # mass of the dumbell end [kg]\nL_f = 3000 # Length of the dumbell [m]\nr_f = 40 # radius of the dumbell rod [m]\n\nrho_pb = 11.34e3 # density of lead [kg/m^3]\nr_b = ((m_dumb / rho_pb)/(4/3 * np.pi))**(1/3)\n\n\nf_f = 37e3 / 2\nlamduh_f = c / f_f\n\nv_f = 2*np.pi*(L_f/2) * f_f\na_f = v_f**2 / (L_f / 2)\nF = a_f * m_f\n\nstress = F / (np.pi * r_f**2)\n\nprint('Ball radius = ' + '{:0.2f}'.format(r_f) + ' m')\nprint('Acceleration of ball = ' + '{:0.2g}'.format(a_f) + ' m/s**2')\nprint('Stress = ' + '{:0.2f}'.format(stress/sigma_yield) + 'x Yield Stress')", "_____no_output_____" ] ], [ [ "## <font color='Navy'>Radiation of a dumbell</font>\n\nThe dumbell is levitated from its middle point using a magnet. So we can spin it at any frequency without friction.\n\nThe quadrupole formula for the strain from this rotating dumbell is:\n\n$\\ddot{I} = \\omega^2 \\frac{M R^2}{2}$\n\n$\\ddot{I} = \\frac{1}{2} \\sigma_{yield}~A~(L_{dumb} / 2)$\n\nThe resulting strain is:\n\n$h = \\frac{2 G}{c^4 r} \\ddot{I}$\n\n", "_____no_output_____" ] ], [ [ "def h_of_f(omega_rotor, M_ball, d_earth_alien, L_rotor):\n \n I_ddot = 1/2 * M_ball * (L_rotor/2)**2 * (omega_rotor**2)\n h = (2*G)/(c**4 * d_earth_alien) * I_ddot\n \n return h\n\n\nr = 2 * lamduh # take the distance to be 2 x wavelength\n\n#h_2020 = (2*G)/(c**4 * r) * (1/2 * m_dumb * (L_dumb/2)**2) * (2*np.pi*f_rot)**2\nw_rot = 2 * np.pi * f_rot\nh_2020 = h_of_f(w_rot, m_dumb, r, L_dumb)\n\nd_ref = c * 3600*24*365 * 1000 # 1000 light years [m]\nd = 1 * d_ref\n\nh_2035 = h_of_f(w_rot, m_dumb, d, L_dumb)\nprint('Strain from a single (2018) dumbell is {h:0.3g} at a distance of {r:0.1f} km'.format(h=h_2020, r=r/1000))\nprint('Strain from a single (2018) dumbell is {h:0.3g} at a distance of {r:0.1f} kilo lt-years'.format(h=h_2035, r=d/d_ref))\n\n\nr = 2 * lamduh_f # take the distance to be 2 x wavelength\n\nh_f = (2*G)/(c**4 * r) * (1/2 * m_f * (L_f/2)**2) * (2*np.pi*f_f)**2\nh_2345 = h_of_f(2*np.pi*f_f, m_f, d, L_dumb)\n\nN_rotors = 100e6\n\nprint(\"Strain from a single (alien) dumbell is {h:0.3g} at a distance of {r:0.1f} kilo lt-years\".format(h=h_2345, r=d/d_ref))\nprint(\"Strain from many many (alien) dumbells is \" + '{:0.3g}'.format(N_rotors*h_2345) + ' at ' + str(1) + ' k lt-yr')", "_____no_output_____" ] ], [ [ "## <font color='Navy'>Phased Array</font>\n\nBeam pattern for a 2D grid of rotating dumbells\n\nTreat them like point sources?\n\nMake an array and add up all the spherical waves", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
cb80bad7e31f7010926f36c56cfd6c37eced171c
60,483
ipynb
Jupyter Notebook
pretrained_network/simple_image_classifier_with_MobileNet.ipynb
lenaromanenko/deep_learning
c6c0e6d7fff06c01238fc395d505d6e4c385cdfc
[ "MIT" ]
3
2021-03-23T10:27:04.000Z
2021-12-10T22:32:57.000Z
pretrained_network/simple_image_classifier_with_MobileNet.ipynb
lenaromanenko/deep_learning
c6c0e6d7fff06c01238fc395d505d6e4c385cdfc
[ "MIT" ]
null
null
null
pretrained_network/simple_image_classifier_with_MobileNet.ipynb
lenaromanenko/deep_learning
c6c0e6d7fff06c01238fc395d505d6e4c385cdfc
[ "MIT" ]
null
null
null
535.247788
56,688
0.944927
[ [ [ "from tensorflow.keras.preprocessing import image\nfrom tensorflow.keras.applications.mobilenet import preprocess_input\nfrom google.colab import drive\nfrom tensorflow.keras.applications import MobileNet\nfrom pprint import pprint\nfrom tensorflow.keras.applications.mobilenet import decode_predictions", "_____no_output_____" ], [ "def Image(URL):\n\n im = image.load_img(URL, target_size=(224, 224))\n a = image.img_to_array(im)\n a = a.reshape(1, 224, 224, 3)\n a = preprocess_input(a)\n\n drive.mount('/content/drive')\n\n m = MobileNet(input_shape=(224, 224, 3))\n m.compile(optimizer='rmsprop', loss='categorical_crossentropy',\n metrics=['accuracy'])\n p = m.predict(a)\n pprint(decode_predictions(p, 10))\n print('Image:')\n return im\n\nImage('/content/drive/MyDrive/Colab Notebooks/MobileNet/balloon.jpg')", "Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount(\"/content/drive\", force_remount=True).\nWARNING:tensorflow:11 out of the last 11 calls to <function Model.make_predict_function.<locals>.predict_function at 0x7f26c13aa200> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.\n[[('n02782093', 'balloon', 0.99374264),\n ('n03888257', 'parachute', 0.0031904425),\n ('n03271574', 'electric_fan', 0.00076084875),\n ('n04562935', 'water_tower', 0.0004421579),\n ('n02692877', 'airship', 0.00037853004),\n ('n04041544', 'radio', 0.00031346694),\n ('n03483316', 'hand_blower', 0.00018036395),\n ('n03691459', 'loudspeaker', 9.322758e-05),\n ('n03759954', 'microphone', 8.5214146e-05),\n ('n04265275', 'space_heater', 8.408814e-05)]]\nImage:\n" ] ] ]
[ "code" ]
[ [ "code", "code" ] ]
cb80f5024c443e3c27378890bfd1bd0d0cc135c7
125,140
ipynb
Jupyter Notebook
jupyter_notebooks/statistics/DAT4_linear_regression.ipynb
manual123/Nacho-Jupyter-Notebooks
e75523434b1a90313a6b44e32b056f63de8a7135
[ "MIT" ]
2
2021-02-13T05:52:05.000Z
2022-02-08T09:52:35.000Z
statistics/DAT4_linear_regression.ipynb
manual123/Nacho-Jupyter-Notebooks
e75523434b1a90313a6b44e32b056f63de8a7135
[ "MIT" ]
null
null
null
statistics/DAT4_linear_regression.ipynb
manual123/Nacho-Jupyter-Notebooks
e75523434b1a90313a6b44e32b056f63de8a7135
[ "MIT" ]
null
null
null
68.4948
47,694
0.739076
[ [ [ "# Introduction to Linear Regression\n\n*Adapted from Chapter 3 of [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/)*\n\n||continuous|categorical|\n|---|---|---|\n|**supervised**|**regression**|classification|\n|**unsupervised**|dimension reduction|clustering|\n\n## Motivation\n\nWhy are we learning linear regression?\n- widely used\n- runs fast\n- easy to use (not a lot of tuning required)\n- highly interpretable\n- basis for many other methods\n\n## Libraries\n\nWill be using [Statsmodels](http://statsmodels.sourceforge.net/) for **teaching purposes** since it has some nice characteristics for linear modeling. However, we recommend that you spend most of your energy on [scikit-learn](http://scikit-learn.org/stable/) since it provides significantly more useful functionality for machine learning in general.", "_____no_output_____" ] ], [ [ "# imports\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n# this allows plots to appear directly in the notebook\n%matplotlib inline", "_____no_output_____" ] ], [ [ "## Example: Advertising Data\n\nLet's take a look at some data, ask some questions about that data, and then use linear regression to answer those questions!", "_____no_output_____" ] ], [ [ "# read data into a DataFrame\ndata = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)\ndata.head()", "_____no_output_____" ] ], [ [ "What are the **features**?\n- TV: advertising dollars spent on TV for a single product in a given market (in thousands of dollars)\n- Radio: advertising dollars spent on Radio\n- Newspaper: advertising dollars spent on Newspaper\n\nWhat is the **response**?\n- Sales: sales of a single product in a given market (in thousands of widgets)", "_____no_output_____" ] ], [ [ "# print the shape of the DataFrame\ndata.shape", "_____no_output_____" ] ], [ [ "There are 200 **observations**, and thus 200 markets in the dataset.", "_____no_output_____" ] ], [ [ "# visualize the relationship between the features and the response using scatterplots\nfig, axs = plt.subplots(1, 3, sharey=True)\ndata.plot(kind='scatter', x='TV', y='Sales', ax=axs[0], figsize=(16, 8))\ndata.plot(kind='scatter', x='Radio', y='Sales', ax=axs[1])\ndata.plot(kind='scatter', x='Newspaper', y='Sales', ax=axs[2])", "_____no_output_____" ] ], [ [ "## Questions About the Advertising Data\n\nLet's pretend you work for the company that manufactures and markets this widget. The company might ask you the following: On the basis of this data, how should we spend our advertising money in the future?\n\nThis general question might lead you to more specific questions:\n1. Is there a relationship between ads and sales?\n2. How strong is that relationship?\n3. Which ad types contribute to sales?\n4. What is the effect of each ad type of sales?\n5. Given ad spending in a particular market, can sales be predicted?\n\nWe will explore these questions below!", "_____no_output_____" ], [ "## Simple Linear Regression\n\nSimple linear regression is an approach for predicting a **quantitative response** using a **single feature** (or \"predictor\" or \"input variable\"). It takes the following form:\n\n$y = \\beta_0 + \\beta_1x$\n\nWhat does each term represent?\n- $y$ is the response\n- $x$ is the feature\n- $\\beta_0$ is the intercept\n- $\\beta_1$ is the coefficient for x\n\nTogether, $\\beta_0$ and $\\beta_1$ are called the **model coefficients**. To create your model, you must \"learn\" the values of these coefficients. 
And once we've learned these coefficients, we can use the model to predict Sales!", "_____no_output_____" ], [ "## Estimating (\"Learning\") Model Coefficients\n\nGenerally speaking, coefficients are estimated using the **least squares criterion**, which means we find the line (mathematically) which minimizes the **sum of squared residuals** (or \"sum of squared errors\"):", "_____no_output_____" ], [ "<img src=\"08_estimating_coefficients.png\">", "_____no_output_____" ], [ "What elements are present in the diagram?\n- The black dots are the **observed values** of x and y.\n- The blue line is our **least squares line**.\n- The red lines are the **residuals**, which are the distances between the observed values and the least squares line.\n\nHow do the model coefficients relate to the least squares line?\n- $\\beta_0$ is the **intercept** (the value of $y$ when $x$=0)\n- $\\beta_1$ is the **slope** (the change in $y$ divided by change in $x$)\n\nHere is a graphical depiction of those calculations:", "_____no_output_____" ], [ "<img src=\"08_slope_intercept.png\">", "_____no_output_____" ], [ "Let's use **Statsmodels** to estimate the model coefficients for the advertising data:", "_____no_output_____" ] ], [ [ "# this is the standard import if you're using \"formula notation\" (similar to R)\nimport statsmodels.formula.api as smf\n\n# create a fitted model in one line\nlm = smf.ols(formula='Sales ~ TV', data=data).fit()\n\n# print the coefficients\nlm.params", "_____no_output_____" ] ], [ [ "## Interpreting Model Coefficients\n\nHow do we interpret the TV coefficient ($\\beta_1$)?\n- A \"unit\" increase in TV ad spending is **associated with** a 0.047537 \"unit\" increase in Sales.\n- Or more clearly: An additional $1,000 spent on TV ads is **associated with** an increase in sales of 47.537 widgets.\n\nNote that if an increase in TV ad spending was associated with a **decrease** in sales, $\\beta_1$ would be **negative**.", "_____no_output_____" ], [ "## Using the Model for Prediction\n\nLet's say that there was a new market where the TV advertising spend was **$50,000**. 
What would we predict for the Sales in that market?\n\n$$y = \\beta_0 + \\beta_1x$$\n$$y = 7.032594 + 0.047537 \\times 50$$", "_____no_output_____" ] ], [ [ "# manually calculate the prediction\n7.032594 + 0.047537*50", "_____no_output_____" ] ], [ [ "Thus, we would predict Sales of **9,409 widgets** in that market.\n\nOf course, we can also use Statsmodels to make the prediction:", "_____no_output_____" ] ], [ [ "# you have to create a DataFrame since the Statsmodels formula interface expects it\nX_new = pd.DataFrame({'TV': [50]})\nX_new.head()", "_____no_output_____" ], [ "# use the model to make predictions on a new value\nlm.predict(X_new)", "_____no_output_____" ] ], [ [ "## Plotting the Least Squares Line\n\nLet's make predictions for the **smallest and largest observed values of x**, and then use the predicted values to plot the least squares line:", "_____no_output_____" ] ], [ [ "# create a DataFrame with the minimum and maximum values of TV\nX_new = pd.DataFrame({'TV': [data.TV.min(), data.TV.max()]})\nX_new.head()", "_____no_output_____" ], [ "# make predictions for those x values and store them\npreds = lm.predict(X_new)\npreds", "_____no_output_____" ], [ "# first, plot the observed data\ndata.plot(kind='scatter', x='TV', y='Sales')\n\n# then, plot the least squares line\nplt.plot(X_new, preds, c='red', linewidth=2)", "_____no_output_____" ] ], [ [ "## Confidence in our Model\n\n**Question:** Is linear regression a high bias/low variance model, or a low bias/high variance model?\n\n**Answer:** High bias/low variance. Under repeated sampling, the line will stay roughly in the same place (low variance), but the average of those models won't do a great job capturing the true relationship (high bias). Note that low variance is a useful characteristic when you don't have a lot of training data!\n\nA closely related concept is **confidence intervals**. Statsmodels calculates 95% confidence intervals for our model coefficients, which are interpreted as follows: If the population from which this sample was drawn was **sampled 100 times**, approximately **95 of those confidence intervals** would contain the \"true\" coefficient.", "_____no_output_____" ] ], [ [ "# print the confidence intervals for the model coefficients\nlm.conf_int()", "_____no_output_____" ] ], [ [ "Keep in mind that we only have a **single sample of data**, and not the **entire population of data**. The \"true\" coefficient is either within this interval or it isn't, but there's no way to actually know. We estimate the coefficient with the data we do have, and we show uncertainty about that estimate by giving a range that the coefficient is **probably** within.\n\nNote that using 95% confidence intervals is just a convention. You can create 90% confidence intervals (which will be more narrow), 99% confidence intervals (which will be wider), or whatever intervals you like.", "_____no_output_____" ], [ "## Hypothesis Testing and p-values\n\nClosely related to confidence intervals is **hypothesis testing**. Generally speaking, you start with a **null hypothesis** and an **alternative hypothesis** (that is opposite the null). Then, you check whether the data supports **rejecting the null hypothesis** or **failing to reject the null hypothesis**.\n\n(Note that \"failing to reject\" the null is not the same as \"accepting\" the null hypothesis. 
The alternative hypothesis may indeed be true, except that you just don't have enough data to show that.)\n\nAs it relates to model coefficients, here is the conventional hypothesis test:\n- **null hypothesis:** There is no relationship between TV ads and Sales (and thus $\\beta_1$ equals zero)\n- **alternative hypothesis:** There is a relationship between TV ads and Sales (and thus $\\beta_1$ is not equal to zero)\n\nHow do we test this hypothesis? Intuitively, we reject the null (and thus believe the alternative) if the 95% confidence interval **does not include zero**. Conversely, the **p-value** represents the probability that the coefficient is actually zero:", "_____no_output_____" ] ], [ [ "# print the p-values for the model coefficients\nlm.pvalues", "_____no_output_____" ] ], [ [ "If the 95% confidence interval **includes zero**, the p-value for that coefficient will be **greater than 0.05**. If the 95% confidence interval **does not include zero**, the p-value will be **less than 0.05**. Thus, a p-value less than 0.05 is one way to decide whether there is likely a relationship between the feature and the response. (Again, using 0.05 as the cutoff is just a convention.)\n\nIn this case, the p-value for TV is far less than 0.05, and so we **believe** that there is a relationship between TV ads and Sales.\n\nNote that we generally ignore the p-value for the intercept.", "_____no_output_____" ], [ "## How Well Does the Model Fit the data?\n\nThe most common way to evaluate the overall fit of a linear model is by the **R-squared** value. R-squared is the **proportion of variance explained**, meaning the proportion of variance in the observed data that is explained by the model, or the reduction in error over the **null model**. (The null model just predicts the mean of the observed response, and thus it has an intercept and no slope.)\n\nR-squared is between 0 and 1, and higher is better because it means that more variance is explained by the model. Here's an example of what R-squared \"looks like\":", "_____no_output_____" ], [ "<img src=\"08_r_squared.png\">", "_____no_output_____" ], [ "You can see that the **blue line** explains some of the variance in the data (R-squared=0.54), the **green line** explains more of the variance (R-squared=0.64), and the **red line** fits the training data even further (R-squared=0.66). (Does the red line look like it's overfitting?)\n\nLet's calculate the R-squared value for our simple linear model:", "_____no_output_____" ] ], [ [ "# print the R-squared value for the model\nlm.rsquared", "_____no_output_____" ] ], [ [ "Is that a \"good\" R-squared value? It's hard to say. The threshold for a good R-squared value depends widely on the domain. Therefore, it's most useful as a tool for **comparing different models**.", "_____no_output_____" ], [ "## Multiple Linear Regression\n\nSimple linear regression can easily be extended to include multiple features. This is called **multiple linear regression**:\n\n$y = \\beta_0 + \\beta_1x_1 + ... + \\beta_nx_n$\n\nEach $x$ represents a different feature, and each feature has its own coefficient. 
In this case:\n\n$y = \\beta_0 + \\beta_1 \\times TV + \\beta_2 \\times Radio + \\beta_3 \\times Newspaper$\n\nLet's use Statsmodels to estimate these coefficients:", "_____no_output_____" ] ], [ [ "# create a fitted model with all three features\nlm = smf.ols(formula='Sales ~ TV + Radio + Newspaper', data=data).fit()\n\n# print the coefficients\nlm.params", "_____no_output_____" ] ], [ [ "How do we interpret these coefficients? For a given amount of Radio and Newspaper ad spending, an **increase of $1000 in TV ad spending** is associated with an **increase in Sales of 45.765 widgets**.\n\nA lot of the information we have been reviewing piece-by-piece is available in the model summary output:", "_____no_output_____" ] ], [ [ "# print a summary of the fitted model\nlm.summary()", "_____no_output_____" ] ], [ [ "What are a few key things we learn from this output?\n\n- TV and Radio have significant **p-values**, whereas Newspaper does not. Thus we reject the null hypothesis for TV and Radio (that there is no association between those features and Sales), and fail to reject the null hypothesis for Newspaper.\n- TV and Radio ad spending are both **positively associated** with Sales, whereas Newspaper ad spending is **slightly negatively associated** with Sales. (However, this is irrelevant since we have failed to reject the null hypothesis for Newspaper.)\n- This model has a higher **R-squared** (0.897) than the previous model, which means that this model provides a better fit to the data than a model that only includes TV.", "_____no_output_____" ], [ "## Feature Selection\n\nHow do I decide **which features to include** in a linear model? Here's one idea:\n- Try different models, and only keep predictors in the model if they have small p-values.\n- Check whether the R-squared value goes up when you add new predictors.\n\nWhat are the **drawbacks** to this approach?\n- Linear models rely upon a lot of **assumptions** (such as the features being independent), and if those assumptions are violated (which they usually are), R-squared and p-values are less reliable.\n- Using a p-value cutoff of 0.05 means that if you add 100 predictors to a model that are **pure noise**, 5 of them (on average) will still be counted as significant.\n- R-squared is susceptible to **overfitting**, and thus there is no guarantee that a model with a high R-squared value will generalize. Below is an example:", "_____no_output_____" ] ], [ [ "# only include TV and Radio in the model\nlm = smf.ols(formula='Sales ~ TV + Radio', data=data).fit()\nlm.rsquared", "_____no_output_____" ], [ "# add Newspaper to the model (which we believe has no association with Sales)\nlm = smf.ols(formula='Sales ~ TV + Radio + Newspaper', data=data).fit()\nlm.rsquared", "_____no_output_____" ] ], [ [ "**R-squared will always increase as you add more features to the model**, even if they are unrelated to the response. Thus, selecting the model with the highest R-squared is not a reliable approach for choosing the best linear model.\n\nThere is alternative to R-squared called **adjusted R-squared** that penalizes model complexity (to control for overfitting), but it generally [under-penalizes complexity](http://scott.fortmann-roe.com/docs/MeasuringError.html).\n\nSo is there a better approach to feature selection? **Cross-validation.** It provides a more reliable estimate of out-of-sample error, and thus is a better way to choose which of your models will best **generalize** to out-of-sample data. 
There is extensive functionality for cross-validation in scikit-learn, including automated methods for searching different sets of parameters and different models. Importantly, cross-validation can be applied to any model, whereas the methods described above only apply to linear models.", "_____no_output_____" ], [ "## Linear Regression in scikit-learn\n\nLet's redo some of the Statsmodels code above in scikit-learn:", "_____no_output_____" ] ], [ [ "# create X and y\nfeature_cols = ['TV', 'Radio', 'Newspaper']\nX = data[feature_cols]\ny = data.Sales\n\n# follow the usual sklearn pattern: import, instantiate, fit\nfrom sklearn.linear_model import LinearRegression\nlm = LinearRegression()\nlm.fit(X, y)\n\n# print intercept and coefficients\nprint lm.intercept_\nprint lm.coef_", "2.93888936946\n[ 0.04576465 0.18853002 -0.00103749]\n" ], [ "# pair the feature names with the coefficients\nzip(feature_cols, lm.coef_)", "_____no_output_____" ], [ "# predict for a new observation\nlm.predict([100, 25, 25])", "_____no_output_____" ], [ "# calculate the R-squared\nlm.score(X, y)", "_____no_output_____" ] ], [ [ "Note that **p-values** and **confidence intervals** are not (easily) accessible through scikit-learn.", "_____no_output_____" ], [ "## Handling Categorical Predictors with Two Categories\n\nUp to now, all of our predictors have been numeric. What if one of our predictors was categorical?\n\nLet's create a new feature called **Size**, and randomly assign observations to be **small or large**:", "_____no_output_____" ] ], [ [ "import numpy as np\n\n# set a seed for reproducibility\nnp.random.seed(12345)\n\n# create a Series of booleans in which roughly half are True\nnums = np.random.rand(len(data))\nmask_large = nums > 0.5\n\n# initially set Size to small, then change roughly half to be large\ndata['Size'] = 'small'\ndata.loc[mask_large, 'Size'] = 'large'\ndata.head()", "_____no_output_____" ] ], [ [ "For scikit-learn, we need to represent all data **numerically**. If the feature only has two categories, we can simply create a **dummy variable** that represents the categories as a binary value:", "_____no_output_____" ] ], [ [ "# create a new Series called IsLarge\ndata['IsLarge'] = data.Size.map({'small':0, 'large':1})\ndata.head()", "_____no_output_____" ] ], [ [ "Let's redo the multiple linear regression and include the **IsLarge** predictor:", "_____no_output_____" ] ], [ [ "# create X and y\nfeature_cols = ['TV', 'Radio', 'Newspaper', 'IsLarge']\nX = data[feature_cols]\ny = data.Sales\n\n# instantiate, fit\nlm = LinearRegression()\nlm.fit(X, y)\n\n# print coefficients\nzip(feature_cols, lm.coef_)", "_____no_output_____" ] ], [ [ "How do we interpret the **IsLarge coefficient**? For a given amount of TV/Radio/Newspaper ad spending, being a large market is associated with an average **increase** in Sales of 57.42 widgets (as compared to a Small market, which is called the **baseline level**).\n\nWhat if we had reversed the 0/1 coding and created the feature 'IsSmall' instead? The coefficient would be the same, except it would be **negative instead of positive**. 
As such, your choice of category for the baseline does not matter, all that changes is your **interpretation** of the coefficient.", "_____no_output_____" ], [ "## Handling Categorical Predictors with More than Two Categories\n\nLet's create a new feature called **Area**, and randomly assign observations to be **rural, suburban, or urban**:", "_____no_output_____" ] ], [ [ "# set a seed for reproducibility\nnp.random.seed(123456)\n\n# assign roughly one third of observations to each group\nnums = np.random.rand(len(data))\nmask_suburban = (nums > 0.33) & (nums < 0.66)\nmask_urban = nums > 0.66\ndata['Area'] = 'rural'\ndata.loc[mask_suburban, 'Area'] = 'suburban'\ndata.loc[mask_urban, 'Area'] = 'urban'\ndata.head()", "_____no_output_____" ] ], [ [ "We have to represent Area numerically, but we can't simply code it as 0=rural, 1=suburban, 2=urban because that would imply an **ordered relationship** between suburban and urban (and thus urban is somehow \"twice\" the suburban category).\n\nInstead, we create **another dummy variable**:", "_____no_output_____" ] ], [ [ "# create three dummy variables using get_dummies, then exclude the first dummy column\narea_dummies = pd.get_dummies(data.Area, prefix='Area').iloc[:, 1:]\n\n# concatenate the dummy variable columns onto the original DataFrame (axis=0 means rows, axis=1 means columns)\ndata = pd.concat([data, area_dummies], axis=1)\ndata.head()", "_____no_output_____" ] ], [ [ "Here is how we interpret the coding:\n- **rural** is coded as Area_suburban=0 and Area_urban=0\n- **suburban** is coded as Area_suburban=1 and Area_urban=0\n- **urban** is coded as Area_suburban=0 and Area_urban=1\n\nWhy do we only need **two dummy variables, not three?** Because two dummies captures all of the information about the Area feature, and implicitly defines rural as the baseline level. (In general, if you have a categorical feature with k levels, you create k-1 dummy variables.)\n\nIf this is confusing, think about why we only needed one dummy variable for Size (IsLarge), not two dummy variables (IsSmall and IsLarge).\n\nLet's include the two new dummy variables in the model:", "_____no_output_____" ] ], [ [ "# create X and y\nfeature_cols = ['TV', 'Radio', 'Newspaper', 'IsLarge', 'Area_suburban', 'Area_urban']\nX = data[feature_cols]\ny = data.Sales\n\n# instantiate, fit\nlm = LinearRegression()\nlm.fit(X, y)\n\n# print coefficients\nzip(feature_cols, lm.coef_)", "_____no_output_____" ] ], [ [ "How do we interpret the coefficients?\n- Holding all other variables fixed, being a **suburban** area is associated with an average **decrease** in Sales of 106.56 widgets (as compared to the baseline level, which is rural).\n- Being an **urban** area is associated with an average **increase** in Sales of 268.13 widgets (as compared to rural).\n\n**A final note about dummy encoding:** If you have categories that can be ranked (i.e., strongly disagree, disagree, neutral, agree, strongly agree), you can potentially use a single dummy variable and represent the categories numerically (such as 1, 2, 3, 4, 5).", "_____no_output_____" ], [ "## What Didn't We Cover?\n\n- Detecting collinearity\n- Diagnosing model fit\n- Transforming predictors to fit non-linear relationships\n- Interaction terms\n- Assumptions of linear regression\n- And so much more!\n\nYou could certainly go very deep into linear regression, and learn how to apply it really, really well. It's an excellent way to **start your modeling process** when working a regression problem. 
However, it is limited by the fact that it can only make good predictions if there is a **linear relationship** between the features and the response, which is why more complex methods (with higher variance and lower bias) will often outperform linear regression.\n\nTherefore, we want you to understand linear regression conceptually, understand its strengths and weaknesses, be familiar with the terminology, and know how to apply it. However, we also want to spend time on many other machine learning models, which is why we aren't going deeper here.", "_____no_output_____" ], [ "## Resources\n\n- To go much more in-depth on linear regression, read Chapter 3 of [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/), from which this lesson was adapted. Alternatively, watch the [related videos](http://www.dataschool.io/15-hours-of-expert-machine-learning-videos/) or read my [quick reference guide](http://www.dataschool.io/applying-and-interpreting-linear-regression/) to the key points in that chapter.\n- To learn more about Statsmodels and how to interpret the output, DataRobot has some decent posts on [simple linear regression](http://www.datarobot.com/blog/ordinary-least-squares-in-python/) and [multiple linear regression](http://www.datarobot.com/blog/multiple-regression-using-statsmodels/).\n- This [introduction to linear regression](http://people.duke.edu/~rnau/regintro.htm) is much more detailed and mathematically thorough, and includes lots of good advice.\n- This is a relatively quick post on the [assumptions of linear regression](http://pareonline.net/getvn.asp?n=2&v=8).", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ] ]
cb80fed9b43677c250ce974a7ed5e7087bd8318d
96,433
ipynb
Jupyter Notebook
notebooks/ConfigurationFuzzer.ipynb
R3dFruitRollUp/fuzzingbook
02d4800b6a9aeb21b5a01ee57ddbcb8ec0ae5738
[ "MIT" ]
1
2019-02-02T19:04:36.000Z
2019-02-02T19:04:36.000Z
notebooks/ConfigurationFuzzer.ipynb
FOGSEC/fuzzingbook
02d4800b6a9aeb21b5a01ee57ddbcb8ec0ae5738
[ "MIT" ]
null
null
null
notebooks/ConfigurationFuzzer.ipynb
FOGSEC/fuzzingbook
02d4800b6a9aeb21b5a01ee57ddbcb8ec0ae5738
[ "MIT" ]
1
2018-12-01T16:34:30.000Z
2018-12-01T16:34:30.000Z
25.343758
662
0.554178
[ [ [ "# Testing Configurations\n\nThe behavior of a program is not only governed by its data. The _configuration_ of a program – that is, the settings that govern the execution of a program on its (regular) input data, as set by options or configuration files – just as well influences behavior, and thus can and should be tested. In this chapter, we explore how to systematically _test_ and _cover_ software configurations. By _automatically inferring configuration options_, we can apply these techniques out of the box, with no need for writing a grammar. Finally, we show how to systematically cover _combinations_ of configuration options, quickly detecting unwanted interferences.", "_____no_output_____" ], [ "**Prerequisites**\n\n* You should have read the [chapter on grammars](Grammars.ipynb).\n* You should have read the [chapter on grammar coverage](GrammarCoverageFuzzer.ipynb).", "_____no_output_____" ], [ "## Configuration Options\n\nWhen we talk about the input to a program, we usually think of the _data_ it processes. This is also what we have been fuzzing in the past chapters – be it with [random input](Fuzzer.ipynb), [mutation-based fuzzing](MutationFuzzer.ipynb), or [grammar-based fuzzing](GrammarFuzzer.ipynb). However, programs typically have several input sources, all of which can and should be tested – and included in test generation.", "_____no_output_____" ], [ "One important source of input is the program's _configuration_ – that is, a set of inputs that typically is set once when beginning to process data and then stays constant while processing data, while the program is running, or even while the program is deployed. Such a configuration is frequently set in _configuration files_ (for instance, as key/value pairs); the most ubiquitous method for command-line tools, though, are _configuration options_ on the command line.", "_____no_output_____" ], [ "As an example, consider the `grep` utility to find textual patterns in files. The exact mode by which `grep` works is governed by a multitude of options, which can be listed by providing a `--help` option:", "_____no_output_____" ] ], [ [ "!grep --help", "_____no_output_____" ] ], [ [ "All these options need to be tested for whether they operate correctly. In security testing, any such option may also trigger a yet unknown vulnerability. Hence, such options can become _fuzz targets_ on their own. In this chapter, we analyze how to systematically test such options – and better yet, how to extract possible configurations right out of given program files, such that we do not have to specify anything.", "_____no_output_____" ], [ "## Options in Python\n\nLet us stick to our common programming language here and examine how options are processed in Python. The `argparse` module provides a parser for command-line arguments (and options) with great functionality – and great complexity. You start by defining a parser (`argparse.ArgumentParser()`) to which individual arguments with various features are added, one after another. Additional parameters for each argument can specify the type (`type`) of the argument (say, integers or strings), or the number of arguments (`nargs`).", "_____no_output_____" ], [ "By default, arguments are stored under their name in the `args` object coming from `parse_args()` – thus, `args.integers` holds the `integers` arguments added earlier. Special actions (`actions`) allow to store specific values in given variables; the `store_const` action stores the given `const` in the attribute named by `dest`. 
The following example takes a number of integer arguments (`integers`) as well as an operator (`--sum`, `--min`, or `--max`) to be applied on these integers. The operators all store a function reference in the `accumulate` attribute, which is finally invoked on the integers parsed:", "_____no_output_____" ] ], [ [ "import argparse", "_____no_output_____" ], [ "def process_numbers(args=[]):\n parser = argparse.ArgumentParser(description='Process some integers.')\n parser.add_argument('integers', metavar='N', type=int, nargs='+',\n help='an integer for the accumulator')\n group = parser.add_mutually_exclusive_group(required=True)\n group.add_argument('--sum', dest='accumulate', action='store_const',\n const=sum,\n help='sum the integers')\n group.add_argument('--min', dest='accumulate', action='store_const',\n const=min,\n help='compute the minimum')\n group.add_argument('--max', dest='accumulate', action='store_const',\n const=max,\n help='compute the maximum')\n\n args = parser.parse_args(args)\n print(args.accumulate(args.integers))", "_____no_output_____" ] ], [ [ "Here's how `process_numbers()` works. We can, for instance, invoke the `--min` option on the given arguments to compute the minimum:", "_____no_output_____" ] ], [ [ "process_numbers([\"--min\", \"100\", \"200\", \"300\"])", "_____no_output_____" ] ], [ [ "Or compute the sum of three numbers:", "_____no_output_____" ] ], [ [ "process_numbers([\"--sum\", \"1\", \"2\", \"3\"])", "_____no_output_____" ] ], [ [ "When defined via `add_mutually_exclusive_group()` (as above), options are mutually exclusive. Consequently, we can have only one operator:", "_____no_output_____" ] ], [ [ "import fuzzingbook_utils", "_____no_output_____" ], [ "from ExpectError import ExpectError", "_____no_output_____" ], [ "with ExpectError(print_traceback=False):\n process_numbers([\"--sum\", \"--max\", \"1\", \"2\", \"3\"])", "_____no_output_____" ] ], [ [ "## A Grammar for Configurations\n\nHow can we test a system with several options? The easiest answer is to write a grammar for it. The grammar `PROCESS_NUMBERS_EBNF_GRAMMAR` reflects the possible combinations of options and arguments:", "_____no_output_____" ] ], [ [ "from Grammars import crange, srange, convert_ebnf_grammar, is_valid_grammar, START_SYMBOL, new_symbol", "_____no_output_____" ], [ "PROCESS_NUMBERS_EBNF_GRAMMAR = {\n \"<start>\": [\"<operator> <integers>\"],\n \"<operator>\": [\"--sum\", \"--min\", \"--max\"],\n \"<integers>\": [\"<integer>\", \"<integers> <integer>\"],\n \"<integer>\": [\"<digit>+\"],\n \"<digit>\": crange('0', '9')\n}\n\nassert is_valid_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)", "_____no_output_____" ], [ "PROCESS_NUMBERS_GRAMMAR = convert_ebnf_grammar(PROCESS_NUMBERS_EBNF_GRAMMAR)", "_____no_output_____" ] ], [ [ "We can feed this grammar into our [grammar coverage fuzzer](GrammarCoverageFuzzer.ipynb) and have it cover one option after another:", "_____no_output_____" ] ], [ [ "from GrammarCoverageFuzzer import GrammarCoverageFuzzer", "_____no_output_____" ], [ "f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)\nfor i in range(3):\n print(f.fuzz())", "_____no_output_____" ] ], [ [ "Of course, we can also invoke `process_numbers()` with these very arguments. 
To this end, we need to convert the string produced by the grammar back into a list of individual arguments, using `split()`:", "_____no_output_____" ] ], [ [ "f = GrammarCoverageFuzzer(PROCESS_NUMBERS_GRAMMAR, min_nonterminals=10)\nfor i in range(3):\n args = f.fuzz().split()\n print(args)\n process_numbers(args)", "_____no_output_____" ] ], [ [ "In a similar way, we can define grammars for any program to be tested; as well as define grammars for, say, configuration files. Yet, the grammar has to be updated with every change to the program, which creates a maintenance burden. Given that the information required for the grammar is already all encoded in the program, the question arises: _Can't we go and extract configuration options right out of the program in the first place?_", "_____no_output_____" ], [ "## Mining Configuration Options\n", "_____no_output_____" ], [ "In this section, we try to extract option and argument information right out of a program, such that we do not have to specify a configuration grammar. The aim is to have a configuration fuzzer that works on the options and arguments of an arbitrary program, as long as it follows specific conventions for processing its arguments. In the case of Python programs, this means using the `argparse` module.", "_____no_output_____" ], [ "Our idea is as follows: We execute the given program up to the point where the arguments are actually parsed – that is, `argparse.parse_args()` is invoked. Up to this point, we track all calls into the argument parser, notably those calls that define arguments and options (`add_argument()`). From these, we construct the grammar.", "_____no_output_____" ], [ "### Tracking Arguments", "_____no_output_____" ], [ "Let us illustrate this approach with a simple experiment: We define a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while `process_numbers` is invoked. If we have a call to a method `add_argument`, we access and print out the local variables (which at this point are the arguments to the method).", "_____no_output_____" ] ], [ [ "import sys", "_____no_output_____" ], [ "import string", "_____no_output_____" ], [ "def traceit(frame, event, arg):\n if event != \"call\":\n return\n method_name = frame.f_code.co_name\n if method_name != \"add_argument\":\n return\n locals = frame.f_locals\n print(method_name, locals)", "_____no_output_____" ] ], [ [ "What we get is a list of all calls to `add_argument()`, together with the method arguments passed:", "_____no_output_____" ] ], [ [ "sys.settrace(traceit)\nprocess_numbers([\"--sum\", \"1\", \"2\", \"3\"])\nsys.settrace(None)", "_____no_output_____" ] ], [ [ "From the `args` argument, we can access the individual options and arguments to be defined:", "_____no_output_____" ] ], [ [ "def traceit(frame, event, arg):\n if event != \"call\":\n return\n method_name = frame.f_code.co_name\n if method_name != \"add_argument\":\n return\n locals = frame.f_locals\n print(locals['args'])", "_____no_output_____" ], [ "sys.settrace(traceit)\nprocess_numbers([\"--sum\", \"1\", \"2\", \"3\"])\nsys.settrace(None)", "_____no_output_____" ] ], [ [ "We see that each argument comes as a tuple with one (say, `integers` or `--sum`) or two members (`-h` and `--help`), which denote alternate forms for the same option. 
Our job will be to go through the arguments of `add_arguments()` and detect not only the names of options and arguments, but also whether they accept additional parameters, as well as the type of the parameters.", "_____no_output_____" ], [ "### A Grammar Miner for Options and Arguments", "_____no_output_____" ], [ "Let us now build a class that gathers all this information to create a grammar.", "_____no_output_____" ], [ "We use the `ParseInterrupt` exception to interrupt program execution after gathering all arguments and options:", "_____no_output_____" ] ], [ [ "class ParseInterrupt(Exception):\n pass", "_____no_output_____" ] ], [ [ "The class `OptionGrammarMiner` takes an executable function for which the grammar of options and arguments is to be mined:", "_____no_output_____" ] ], [ [ "class OptionGrammarMiner(object):\n def __init__(self, function, log=False):\n self.function = function\n self.log = log", "_____no_output_____" ] ], [ [ "The method `mine_ebnf_grammar()` is where everything happens. It creates a grammar of the form\n\n```\n<start> ::= <option>* <arguments>\n<option> ::= <empty>\n<arguments> ::= <empty>\n```\n\nin which the options and arguments will be collected. It then sets a trace function (see [our chapter on coverage](Coverage.ipynb) for details) that is active while the previously defined `function` is invoked. Raising `ParseInterrupt` (when `parse_args()` is invoked) ends execution.", "_____no_output_____" ] ], [ [ "class OptionGrammarMiner(OptionGrammarMiner):\n OPTION_SYMBOL = \"<option>\"\n ARGUMENTS_SYMBOL = \"<arguments>\"\n\n def mine_ebnf_grammar(self):\n self.grammar = {\n START_SYMBOL: [\"(\" + self.OPTION_SYMBOL + \")*\" + self.ARGUMENTS_SYMBOL],\n self.OPTION_SYMBOL: [],\n self.ARGUMENTS_SYMBOL: []\n }\n self.current_group = self.OPTION_SYMBOL\n\n old_trace = sys.settrace(self.traceit)\n try:\n self.function()\n except ParseInterrupt:\n pass\n sys.settrace(old_trace)\n\n return self.grammar\n\n def mine_grammar(self):\n return convert_ebnf_grammar(self.mine_ebnf_grammar())", "_____no_output_____" ] ], [ [ "The trace function checks for four methods: `add_argument()` is the most important function, resulting in processing arguments; `frame.f_locals` again is the set of local variables, which at this point is mostly the arguments to `add_argument()`. 
Since mutually exclusive groups also have a method `add_argument()`, we set the flag `in_group` to differentiate.", "_____no_output_____" ], [ "Note that we make no specific efforts to differentiate between multiple parsers or groups; we simply assume that there is one parser, and at any point at most one mutually exclusive group.", "_____no_output_____" ] ], [ [ "class OptionGrammarMiner(OptionGrammarMiner):\n def traceit(self, frame, event, arg):\n if event != \"call\":\n return\n\n if \"self\" not in frame.f_locals:\n return\n self_var = frame.f_locals[\"self\"]\n\n method_name = frame.f_code.co_name\n\n if method_name == \"add_argument\":\n in_group = repr(type(self_var)).find(\"Group\") >= 0\n self.process_argument(frame.f_locals, in_group)\n elif method_name == \"add_mutually_exclusive_group\":\n self.add_group(frame.f_locals, exclusive=True)\n elif method_name == \"add_argument_group\":\n # self.add_group(frame.f_locals, exclusive=False)\n pass\n elif method_name == \"parse_args\":\n raise ParseInterrupt\n\n return None", "_____no_output_____" ] ], [ [ "The `process_arguments()` now analyzes the arguments passed and adds them to the grammar:\n\n* If the argument starts with `-`, it gets added as an optional element to the `<option>` list\n* Otherwise, it gets added to the `<argument>` list.\n\nThe optional `nargs` argument specifies how many arguments can follow. If it is a number, we add the appropriate number of elements to the grammar; if it is an abstract specifier (say, `+` or `*`), we use it directly as EBNF operator.\n\nGiven the large number of parameters and optional behavior, this is a somewhat messy function, but it does the job.", "_____no_output_____" ] ], [ [ "class OptionGrammarMiner(OptionGrammarMiner):\n def process_argument(self, locals, in_group):\n args = locals[\"args\"]\n kwargs = locals[\"kwargs\"]\n\n if self.log:\n print(args)\n print(kwargs)\n print()\n\n for arg in args:\n self.process_arg(arg, in_group, kwargs)", "_____no_output_____" ], [ "class OptionGrammarMiner(OptionGrammarMiner):\n def process_arg(self, arg, in_group, kwargs):\n if arg.startswith('-'):\n if not in_group:\n target = self.OPTION_SYMBOL\n else:\n target = self.current_group\n metavar = None\n arg = \" \" + arg\n else:\n target = self.ARGUMENTS_SYMBOL\n metavar = arg\n arg = \"\"\n\n if \"nargs\" in kwargs:\n nargs = kwargs[\"nargs\"]\n else:\n nargs = 1\n\n param = self.add_parameter(kwargs, metavar)\n if param == \"\":\n nargs = 0\n\n if isinstance(nargs, int):\n for i in range(nargs):\n arg += param\n else:\n assert nargs in \"?+*\"\n arg += '(' + param + ')' + nargs\n\n if target == self.OPTION_SYMBOL:\n self.grammar[target].append(arg)\n else:\n self.grammar[target].append(arg)", "_____no_output_____" ] ], [ [ "The method `add_parameter()` handles possible parameters of options. If the argument has an `action` defined, it takes no parameter. 
Otherwise, we identify the type of the parameter (as `int` or `str`) and augment the grammar with an appropriate rule.", "_____no_output_____" ] ], [ [ "import inspect", "_____no_output_____" ], [ "class OptionGrammarMiner(OptionGrammarMiner):\n def add_parameter(self, kwargs, metavar):\n if \"action\" in kwargs:\n # No parameter\n return \"\"\n\n type_ = \"str\"\n if \"type\" in kwargs:\n given_type = kwargs[\"type\"]\n # int types come as '<class int>'\n if inspect.isclass(given_type) and issubclass(given_type, int):\n type_ = \"int\"\n\n if metavar is None:\n if \"metavar\" in kwargs:\n metavar = kwargs[\"metavar\"]\n else:\n metavar = type_\n\n self.add_type_rule(type_)\n if metavar != type_:\n self.add_metavar_rule(metavar, type_)\n\n param = \" <\" + metavar + \">\"\n\n return param", "_____no_output_____" ] ], [ [ "The method `add_type_rule()` adds a rule for parameter types to the grammar. If the parameter is identified by a meta-variable (say, `N`), we add a rule for this as well to improve legibility.", "_____no_output_____" ] ], [ [ "class OptionGrammarMiner(OptionGrammarMiner):\n def add_type_rule(self, type_):\n if type_ == \"int\":\n self.add_int_rule()\n else:\n self.add_str_rule()\n\n def add_int_rule(self):\n self.grammar[\"<int>\"] = [\"(-)?<digit>+\"]\n self.grammar[\"<digit>\"] = crange('0', '9')\n\n def add_str_rule(self):\n self.grammar[\"<str>\"] = [\"<char>+\"]\n self.grammar[\"<char>\"] = srange(\n string.digits\n + string.ascii_letters\n + string.punctuation)\n\n def add_metavar_rule(self, metavar, type_):\n self.grammar[\"<\" + metavar + \">\"] = [\"<\" + type_ + \">\"]", "_____no_output_____" ] ], [ [ "The method `add_group()` adds a new mutually exclusive group to the grammar. We define a new symbol (say, `<group>`) for the options added to the group, and use the `required` and `exclusive` flags to define an appropriate expansion operator. The group is then prefixed to the grammar, as in\n\n```\n<start> ::= <group><option>* <arguments>\n<group> ::= <empty>\n```\n\nand filled with the next calls to `add_argument()` within the group.", "_____no_output_____" ] ], [ [ "class OptionGrammarMiner(OptionGrammarMiner):\n def add_group(self, locals, exclusive):\n kwargs = locals[\"kwargs\"]\n if self.log:\n print(kwargs)\n\n required = kwargs.get(\"required\", False)\n group = new_symbol(self.grammar, \"<group>\")\n\n if required and exclusive:\n group_expansion = group\n if required and not exclusive:\n group_expansion = group + \"+\"\n if not required and exclusive:\n group_expansion = group + \"?\"\n if not required and not exclusive:\n group_expansion = group + \"*\"\n\n self.grammar[START_SYMBOL][0] = group_expansion + \\\n self.grammar[START_SYMBOL][0]\n self.grammar[group] = []\n self.current_group = group", "_____no_output_____" ] ], [ [ "That's it! With this, we can now extract the grammar from our `process_numbers()` program. 
Turning on logging again reveals the variables we draw upon.", "_____no_output_____" ] ], [ [ "miner = OptionGrammarMiner(process_numbers, log=True)\nprocess_numbers_grammar = miner.mine_ebnf_grammar()", "_____no_output_____" ] ], [ [ "Here is the extracted grammar:", "_____no_output_____" ] ], [ [ "process_numbers_grammar", "_____no_output_____" ] ], [ [ "The grammar properly identifies the group found:", "_____no_output_____" ] ], [ [ "process_numbers_grammar[\"<start>\"]", "_____no_output_____" ], [ "process_numbers_grammar[\"<group>\"]", "_____no_output_____" ] ], [ [ "It also identifies a `--help` option provided not by us, but by the `argparse` module:", "_____no_output_____" ] ], [ [ "process_numbers_grammar[\"<option>\"]", "_____no_output_____" ] ], [ [ "The grammar also correctly identifies the types of the arguments:", "_____no_output_____" ] ], [ [ "process_numbers_grammar[\"<arguments>\"]", "_____no_output_____" ], [ "process_numbers_grammar[\"<integers>\"]", "_____no_output_____" ] ], [ [ "The rules for `int` are set as defined by `add_int_rule()`", "_____no_output_____" ] ], [ [ "process_numbers_grammar[\"<int>\"]", "_____no_output_____" ] ], [ [ "We can take this grammar and convert it to BNF, such that we can fuzz with it right away:", "_____no_output_____" ] ], [ [ "assert is_valid_grammar(process_numbers_grammar)", "_____no_output_____" ], [ "grammar = convert_ebnf_grammar(process_numbers_grammar)\nassert is_valid_grammar(grammar)", "_____no_output_____" ], [ "f = GrammarCoverageFuzzer(grammar)\nfor i in range(10):\n print(f.fuzz())", "_____no_output_____" ] ], [ [ "Each and every invocation adheres to the rules as set forth in the `argparse` calls. By mining options and arguments from existing programs, we can now fuzz these options out of the box – without having to specify a grammar.", "_____no_output_____" ], [ "## Testing Autopep8", "_____no_output_____" ], [ "Let us try out the option grammar miner on real-world Python programs. `autopep8` is a tool that automatically converts Python code to the [PEP 8 Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/). (Actually, all Python code in this book runs through `autopep8` during production.) `autopep8` offers a wide range of options, as can be seen by invoking it with `--help`:", "_____no_output_____" ] ], [ [ "!autopep8 --help", "_____no_output_____" ] ], [ [ "### Autopep8 Setup\n\nWe want to systematically test these options. 
In order to deploy our configuration grammar miner, we need to find the source code of the executable:", "_____no_output_____" ] ], [ [ "import os", "_____no_output_____" ], [ "def find_executable(name):\n for path in os.get_exec_path():\n qualified_name = os.path.join(path, name)\n if os.path.exists(qualified_name):\n return qualified_name\n return None", "_____no_output_____" ], [ "autopep8_executable = find_executable(\"autopep8\")\nassert autopep8_executable is not None\nautopep8_executable", "_____no_output_____" ] ], [ [ "Next, we build a function that reads the contents of the file and executes it.", "_____no_output_____" ] ], [ [ "def autopep8():\n executable = find_executable(\"autopep8\")\n\n # First line has to contain \"/usr/bin/env python\" or like\n first_line = open(executable).readline()\n assert first_line.find(\"python\") >= 0\n\n contents = open(executable).read()\n exec(contents)", "_____no_output_____" ] ], [ [ "### Mining an Autopep8 Grammar\n\nWe can use the `autopep8()` function in our grammar miner:", "_____no_output_____" ] ], [ [ "autopep8_miner = OptionGrammarMiner(autopep8)", "_____no_output_____" ] ], [ [ "and extract a grammar for it:", "_____no_output_____" ] ], [ [ "autopep8_ebnf_grammar = autopep8_miner.mine_ebnf_grammar()", "_____no_output_____" ] ], [ [ "This works because here, `autopep8` is not a separate process (and a separate Python interpreter), but we run the `autopep8()` function (and the `autopep8` code) in our current Python interpreter – up to the call to `parse_args()`, where we interrupt execution again. At this point, the `autopep8` code has done nothing but setting up the argument parser – which is what we are interested in.", "_____no_output_____" ], [ "The grammar options mined reflect precisely the options seen when providing `--help`:", "_____no_output_____" ] ], [ [ "print(autopep8_ebnf_grammar[\"<option>\"])", "_____no_output_____" ] ], [ [ "Metavariables like `<n>` or `<line>` are placeholders for integers. We assume all metavariables of the same name have the same type:", "_____no_output_____" ] ], [ [ "autopep8_ebnf_grammar[\"<line>\"]", "_____no_output_____" ] ], [ [ "The grammar miner has inferred that the argument to `autopep8` is a list of files:", "_____no_output_____" ] ], [ [ "autopep8_ebnf_grammar[\"<arguments>\"]", "_____no_output_____" ] ], [ [ "which in turn all are strings:", "_____no_output_____" ] ], [ [ "autopep8_ebnf_grammar[\"<files>\"]", "_____no_output_____" ] ], [ [ "As we are only interested in testing options, not arguments, we fix the arguments to a single mandatory input. (Otherwise, we'd have plenty of random file names generated.)", "_____no_output_____" ] ], [ [ "autopep8_ebnf_grammar[\"<arguments>\"] = [\" <files>\"]\nautopep8_ebnf_grammar[\"<files>\"] = [\"foo.py\"]\nassert is_valid_grammar(autopep8_ebnf_grammar)", "_____no_output_____" ] ], [ [ "### Creating Autopep8 Options", "_____no_output_____" ], [ "Let us now use the inferred grammar for fuzzing. Again, we convert the EBNF grammar into a regular BNF grammar:", "_____no_output_____" ] ], [ [ "autopep8_grammar = convert_ebnf_grammar(autopep8_ebnf_grammar)\nassert is_valid_grammar(autopep8_grammar)", "_____no_output_____" ] ], [ [ "And we can use the grammar for fuzzing all options:", "_____no_output_____" ] ], [ [ "f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=4)\nfor i in range(20):\n print(f.fuzz())", "_____no_output_____" ] ], [ [ "Let us apply these options on the actual program. 
We need a file `foo.py` that will serve as input:", "_____no_output_____" ] ], [ [ "def create_foo_py():\n open(\"foo.py\", \"w\").write(\"\"\"\ndef twice(x = 2):\n return x + x\n\"\"\")", "_____no_output_____" ], [ "create_foo_py()", "_____no_output_____" ], [ "print(open(\"foo.py\").read(), end=\"\")", "_____no_output_____" ] ], [ [ "We see how `autopep8` fixes the spacing:", "_____no_output_____" ] ], [ [ "!autopep8 foo.py", "_____no_output_____" ] ], [ [ "Let us now put things together. We define a `ProgramRunner` that will run the `autopep8` executable with arguments coming from the mined `autopep8` grammar.", "_____no_output_____" ] ], [ [ "from Fuzzer import ProgramRunner", "_____no_output_____" ] ], [ [ "Running `autopep8` with the mined options reveals a surprisingly high number of passing runs. (We see that some options depend on each other or are mutually exclusive, but this is handled by the program logic, not the argument parser, and hence out of our scope.) The `GrammarCoverageFuzzer` ensures that each option is tested at least once. (Digits and letters, too, by the way.)", "_____no_output_____" ] ], [ [ "f = GrammarCoverageFuzzer(autopep8_grammar, max_nonterminals=5)\nfor i in range(20):\n invocation = \"autopep8\" + f.fuzz()\n print(\"$ \" + invocation)\n args = invocation.split()\n autopep8 = ProgramRunner(args)\n result, outcome = autopep8.run()\n if result.stderr != \"\":\n print(result.stderr, end=\"\")", "_____no_output_____" ] ], [ [ "Our `foo.py` file now has been formatted in place a number of times:", "_____no_output_____" ] ], [ [ "print(open(\"foo.py\").read(), end=\"\")", "_____no_output_____" ] ], [ [ "We don't need it anymore, so we clean up things:", "_____no_output_____" ] ], [ [ "import os", "_____no_output_____" ], [ "os.remove(\"foo.py\")", "_____no_output_____" ] ], [ [ "## Classes for Fuzzing Configuration Options", "_____no_output_____" ], [ "Let us now create reusable classes that we can use for testing arbitrary programs. 
(Okay, make that \"arbitrary programs that are written in Python and use the `argparse` module to process command-line arguments.\")", "_____no_output_____" ], [ "The class `OptionRunner` is a subclass of `ProgramRunner` that takes care of automatically determining the grammar, using the same steps we used for `autopep8`, above.", "_____no_output_____" ] ], [ [ "class OptionRunner(ProgramRunner):\n def __init__(self, program, arguments=None):\n if isinstance(program, str):\n self.base_executable = program\n else:\n self.base_executable = program[0]\n\n self.find_contents()\n self.find_grammar()\n if arguments is not None:\n self.set_arguments(arguments)\n super().__init__(program)", "_____no_output_____" ] ], [ [ "First, we find the contents of the Python executable:", "_____no_output_____" ] ], [ [ "class OptionRunner(OptionRunner):\n def find_contents(self):\n self._executable = find_executable(self.base_executable)\n first_line = open(self._executable).readline()\n assert first_line.find(\"python\") >= 0\n self.contents = open(self._executable).read()\n\n def invoker(self):\n exec(self.contents)\n\n def executable(self):\n return self._executable", "_____no_output_____" ] ], [ [ "Next, we determine the grammar using the `OptionGrammarMiner` class:", "_____no_output_____" ] ], [ [ "class OptionRunner(OptionRunner):\n def find_grammar(self):\n miner = OptionGrammarMiner(self.invoker)\n self._ebnf_grammar = miner.mine_ebnf_grammar()\n\n def ebnf_grammar(self):\n return self._ebnf_grammar\n\n def grammar(self):\n return convert_ebnf_grammar(self._ebnf_grammar)", "_____no_output_____" ] ], [ [ "The two service methods `set_arguments()` and `set_invocation()` help us to change the arguments and program, respectively.", "_____no_output_____" ] ], [ [ "class OptionRunner(OptionRunner):\n def set_arguments(self, args):\n self._ebnf_grammar[\"<arguments>\"] = [\" \" + args]\n\n def set_invocation(self, program):\n self.program = program", "_____no_output_____" ] ], [ [ "We can instantiate the class on `autopep8` and immediately get the grammar:", "_____no_output_____" ] ], [ [ "autopep8_runner = OptionRunner(\"autopep8\", \"foo.py\")", "_____no_output_____" ], [ "print(autopep8_runner.ebnf_grammar()[\"<option>\"])", "_____no_output_____" ] ], [ [ "An `OptionFuzzer` interacts with the given `OptionRunner` to obtain its grammar, which is then passed to its `GrammarCoverageFuzzer` superclass.", "_____no_output_____" ] ], [ [ "class OptionFuzzer(GrammarCoverageFuzzer):\n def __init__(self, runner, *args, **kwargs):\n assert issubclass(type(runner), OptionRunner)\n self.runner = runner\n grammar = runner.grammar()\n super().__init__(grammar, *args, **kwargs)", "_____no_output_____" ] ], [ [ "When invoking `run()`, the `OptionFuzzer` creates a new invocation (using `fuzz()` from its grammar) and runs the now given (or previously set) runner with the arguments from the grammar. 
Note that the runner specified in `run()` can differ from the one set during initialization; this allows for mining options from one program and applying it in another context.", "_____no_output_____" ] ], [ [ "class OptionFuzzer(OptionFuzzer):\n def run(self, runner=None, inp=\"\"):\n if runner is None:\n runner = self.runner\n assert issubclass(type(runner), OptionRunner)\n invocation = runner.executable() + \" \" + self.fuzz()\n runner.set_invocation(invocation.split())\n return runner.run(inp)", "_____no_output_____" ] ], [ [ "### Example: Autopep8", "_____no_output_____" ], [ "Let us apply this on the `autopep8` runner:", "_____no_output_____" ] ], [ [ "autopep8_fuzzer = OptionFuzzer(autopep8_runner, max_nonterminals=5)", "_____no_output_____" ], [ "for i in range(3):\n print(autopep8_fuzzer.fuzz())", "_____no_output_____" ] ], [ [ "We can now systematically test `autopep8` with these classes:", "_____no_output_____" ] ], [ [ "autopep8_fuzzer.run(autopep8_runner)", "_____no_output_____" ] ], [ [ "### Example: MyPy\n\nWe can extract options for the `mypy` static type checker for Python:", "_____no_output_____" ] ], [ [ "assert find_executable(\"mypy\") is not None", "_____no_output_____" ], [ "mypy_runner = OptionRunner(\"mypy\", \"foo.py\")\nprint(mypy_runner.ebnf_grammar()[\"<option>\"])", "_____no_output_____" ], [ "mypy_fuzzer = OptionFuzzer(mypy_runner, max_nonterminals=5)\nfor i in range(10):\n print(mypy_fuzzer.fuzz())", "_____no_output_____" ] ], [ [ "### Example: Notedown\n\nHere's the configuration options for the `notedown` Notebook to Markdown converter:", "_____no_output_____" ] ], [ [ "assert find_executable(\"notedown\") is not None", "_____no_output_____" ], [ "notedown_runner = OptionRunner(\"notedown\")", "_____no_output_____" ], [ "print(notedown_runner.ebnf_grammar()[\"<option>\"])", "_____no_output_____" ], [ "notedown_fuzzer = OptionFuzzer(notedown_runner, max_nonterminals=5)\nfor i in range(10):\n print(notedown_fuzzer.fuzz())", "_____no_output_____" ] ], [ [ "## Combinatorial Testing\n\nOur `CoverageGrammarFuzzer` does a good job in covering each and every option at least once, which is great for systematic testing. However, as we also can see in our examples above, some options require each other, while others interfere with each other. What we should do as good testers is not only to cover every option individually, but also _combinations_ of options.", "_____no_output_____" ], [ "The Python `itertools` module gives us means to create combinations from lists. We can, for instance, take the `notedown` options and create a list of all pairs.", "_____no_output_____" ] ], [ [ "from itertools import combinations", "_____no_output_____" ], [ "option_list = notedown_runner.ebnf_grammar()[\"<option>\"]\npairs = list(combinations(option_list, 2))", "_____no_output_____" ] ], [ [ "There's quite a number of pairs:", "_____no_output_____" ] ], [ [ "len(pairs)", "_____no_output_____" ], [ "print(pairs[:20])", "_____no_output_____" ] ], [ [ "Testing every such pair of options frequently suffices to cover all interferences between options. (Programs rarely have conditions involving three or more configuration settings.) 
To this end, we _change_ the grammar from having a list of options to having a list of _option pairs_, such that covering these will automatically cover all pairs.", "_____no_output_____" ], [ "We create a function `pairwise()` that takes a list of options as occurring in our grammar and returns a list of _pairwise options_ – that is, our original options, but concatenated.", "_____no_output_____" ] ], [ [ "def pairwise(option_list):\n return [option_1 + option_2\n for (option_1, option_2) in combinations(option_list, 2)]", "_____no_output_____" ] ], [ [ "Here's the first 20 pairs:", "_____no_output_____" ] ], [ [ "print(pairwise(option_list)[:20])", "_____no_output_____" ] ], [ [ "The new grammar `pairwise_notedown_grammar` is a copy of the `notedown` grammar, but with the list of options replaced with the above pairwise option list.", "_____no_output_____" ] ], [ [ "from copy import deepcopy", "_____no_output_____" ], [ "notedown_grammar = notedown_runner.grammar()\npairwise_notedown_grammar = deepcopy(notedown_grammar)\npairwise_notedown_grammar[\"<option>\"] = pairwise(notedown_grammar[\"<option>\"])\nassert is_valid_grammar(pairwise_notedown_grammar)", "_____no_output_____" ] ], [ [ "Using the \"pairwise\" grammar to fuzz now covers one pair after another:", "_____no_output_____" ] ], [ [ "notedown_fuzzer = GrammarCoverageFuzzer(\n pairwise_notedown_grammar, max_nonterminals=4)", "_____no_output_____" ], [ "for i in range(10):\n print(notedown_fuzzer.fuzz())", "_____no_output_____" ] ], [ [ "Can we actually test all combinations of options? Not in practice, as the number of combinations quickly grows as the length increases. It decreases again as the number of options reaches the maximum (with 20 options, there is only 1 combination involving _all_ options), but the absolute numbers are still staggering:", "_____no_output_____" ] ], [ [ "for combination_length in range(1, 20):\n tuples = list(combinations(option_list, combination_length))\n print(combination_length, len(tuples))", "_____no_output_____" ] ], [ [ "Formally, the number of combinations of length $k$ in a set of options of length $n$ is the binomial coefficient\n$$\n{n \\choose k} = \\frac{n!}{k!(n - k)!}\n$$", "_____no_output_____" ], [ "which for $k = 2$ (all pairs) gives us\n\n$$\n{n \\choose 2} = \\frac{n!}{2(n - 2)!} = n \\times (n - 1)\n$$", "_____no_output_____" ], [ "For `autopep8` with its 29 options...", "_____no_output_____" ] ], [ [ "len(autopep8_runner.ebnf_grammar()[\"<option>\"])", "_____no_output_____" ] ], [ [ "... 
we thus need 812 tests to cover all pairs:", "_____no_output_____" ] ], [ [ "len(autopep8_runner.ebnf_grammar()[\"<option>\"]) * \\\n (len(autopep8_runner.ebnf_grammar()[\"<option>\"]) - 1)", "_____no_output_____" ] ], [ [ "For `mypy` with its 110 options, though, we already end up with 11,990 tests to be conducted:", "_____no_output_____" ] ], [ [ "len(mypy_runner.ebnf_grammar()[\"<option>\"])", "_____no_output_____" ], [ "len(mypy_runner.ebnf_grammar()[\"<option>\"]) * \\\n (len(mypy_runner.ebnf_grammar()[\"<option>\"]) - 1)", "_____no_output_____" ] ], [ [ "Even if each pair takes a second to run, we'd still be done in three hours of testing, though.", "_____no_output_____" ], [ "If your program has more options that you all want to get covered in combinations, it is advisable that you limit the number of configurations further – for instance by limiting combinatorial testing to those combinations that possibly can interact with each other; and covering all other (presumably orthogonal) options individually.", "_____no_output_____" ], [ "This mechanism of creating configurations by extending grammars can be easily extended to other configuration targets. One may want to explore a greater number of configurations, or expansions in specific contexts. The [exercises](#Exercises), below, have a number of options ready for you.", "_____no_output_____" ], [ "## Lessons Learned\n\n* Besides regular input data, program _configurations_ make an important testing target.\n* For a given program using a standard library to parse command-line options and arguments, one can automatically extract these and convert them into a grammar.\n* To cover not only single options, but combinations of options, one can expand the grammar to cover all pairs, or come up with even more ambitious targets.", "_____no_output_____" ], [ "## Next Steps\n\nIf you liked the idea of mining a grammar from a program, do not miss:\n\n* [how to mine grammars for input data](GrammarMiner.ipynb)", "_____no_output_____" ], [ "Our next steps in the book focus on:\n\n* [how to parse and recombine inputs](Parser.ipynb)\n* [how to assign weights and probabilities to specific productions](ProbabilisticGrammarFuzzer.ipynb)\n* [how to simplify inputs that cause a failure](Reducer.ipynb)", "_____no_output_____" ], [ "## Background\n\nAlthough configuration data is just as likely to cause failures as other input data, it has received relatively little attention in test generation – possibly because, unlike \"regular\" input data, configuration data is not so much under control of external parties, and because, again unlike regular data, there is little variance in configurations. Creating models for software configurations and using these models for testing is commonplace, as is the idea of pairwise testing. For an overview, see \\cite{Pezze2008}; for a discussion and comparison of state-of-the-art techniques, see \\cite{Petke2015}.\n\nMore specifically, \\cite{Sutton2007} also discuss techniques to systematically cover command-line options. Dai et al. \\cite{Dai2010} apply configuration fuzzing by changing variables associated with configuration files. ", "_____no_output_____" ], [ "## Exercises", "_____no_output_____" ], [ "### Exercise 1: #ifdef Configuration Fuzzing\n\nIn C programs, the *C preprocessor* can be used to choose which code parts should be compiled and which ones should not. As an example, in the C code\n\n```C\n#ifdef LONG_FOO\nlong foo() { ... }\n#else\nint foo() { ... 
}\n#endif\n```\n\nthe compiler will compile the function `foo()` with return type`long` if the _preprocessor variable_ `LONG_FOO` is defined, and with return type `int` if not. Such preprocessor variables are either set in the source files (using `#define`, as in `#define LONG_FOO`) or on the C compiler command line (using `-D<variable>` or `-D<variable>=<value>`, as in `-DLONG_FOO`.", "_____no_output_____" ], [ "Such *conditional compilation* is used to configure C programs towards their environment. System-specific code can contain lots of conditional compilation. As an example, consider this excerpt of `xmlparse.c`, the XML parser that is part of the Python runtime library:\n\n```c\n#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)\n# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800\n#endif\n\n#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \\\n && !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \\\n && !defined(XML_DEV_URANDOM) \\\n && !defined(_WIN32) \\\n && !defined(XML_POOR_ENTROPY)\n# error\n#endif\n\n#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)\n#define USE_SYSV_ENVVARS\t/* COLUMNS/LINES vs. TERMCAP */\n#endif\n\n#ifdef XML_UNICODE_WCHAR_T\n#define XML_T(x) (const wchar_t)x\n#define XML_L(x) L ## x\n#else\n#define XML_T(x) (const unsigned short)x\n#define XML_L(x) x\n#endif\n\nint fun(int x) { return XML_T(x); }\n```", "_____no_output_____" ], [ "A typical configuration for the C preprocessor on the above code could be `cc -c -D_WIN32 -DXML_POOR_ENTROPY -DXML_UNICODE_WCHAR_T xmlparse.c`, defining the given preprocessor variables and selecting the appropriate code fragments.", "_____no_output_____" ], [ "Since the compiler can only compile one configuration at a time (implying that we can also only _test_ one resulting executable at a time), your task is to find out which of these configurations actually compile. To this end, proceed in three steps.", "_____no_output_____" ], [ "#### Part 1: Extract Preprocessor Variables\n\nWrite a _function_ `cpp_identifiers()` that, given a set of lines (say, from `open(filename).readlines()`), extracts all preprocessor variables referenced in `#if` or `#ifdef` preprocessor instructions. Apply `ifdef_identifiers()` on the sample C input above, such that\n\n```python\ncpp_identifiers(open(\"xmlparse.c\").readlines()) \n```\n\nreturns the set\n\n```python\n{'_WIN32', 'LOAD_LIBRARY_SEARCH_SYSTEM32', 'HAVE_GETRANDOM', 'HAVE_SYSCALL_GETRANDOM', 'HAVE_ARC4RANDOM_BUF', ...}\n\n```", "_____no_output_____" ], [ "**Solution.** Let us start with creating a sample input file, `xmlparse.c`:", "_____no_output_____" ] ], [ [ "filename = \"xmlparse.c\"\n\nopen(filename, \"w\").write(\n\"\"\"\n#if defined(_WIN32) && !defined(LOAD_LIBRARY_SEARCH_SYSTEM32)\n# define LOAD_LIBRARY_SEARCH_SYSTEM32 0x00000800\n#endif\n\n#if !defined(HAVE_GETRANDOM) && !defined(HAVE_SYSCALL_GETRANDOM) \\\n && !defined(HAVE_ARC4RANDOM_BUF) && !defined(HAVE_ARC4RANDOM) \\\n && !defined(XML_DEV_URANDOM) \\\n && !defined(_WIN32) \\\n && !defined(XML_POOR_ENTROPY)\n# error \n#endif\n\n#if !defined(TIOCSWINSZ) || defined(__SCO__) || defined(__UNIXWARE__)\n#define USE_SYSV_ENVVARS\t/* COLUMNS/LINES vs. 
TERMCAP */\n#endif\n\n#ifdef XML_UNICODE_WCHAR_T\n#define XML_T(x) (const wchar_t)x\n#define XML_L(x) L ## x\n#else\n#define XML_T(x) (const unsigned short)x\n#define XML_L(x) x\n#endif\n\nint fun(int x) { return XML_T(x); }\n\"\"\");", "_____no_output_____" ] ], [ [ "To find C preprocessor `#if` directives and preprocessor variables, we use regular expressions matching them.", "_____no_output_____" ] ], [ [ "import re", "_____no_output_____" ], [ "re_cpp_if_directive = re.compile(r\"\\s*#\\s*(el)?if\")\nre_cpp_identifier = re.compile(r\"[a-zA-Z_$]+\")\n\ndef cpp_identifiers(lines):\n identifiers = set()\n for line in lines:\n if re_cpp_if_directive.match(line):\n identifiers |= set(re_cpp_identifier.findall(line))\n\n # These are preprocessor keywords\n identifiers -= { \"if\", \"ifdef\", \"ifndef\", \"defined\" }\n return identifiers", "_____no_output_____" ], [ "cpp_ids = cpp_identifiers(open(\"xmlparse.c\").readlines())\ncpp_ids", "_____no_output_____" ] ], [ [ "#### Part 2: Derive an Option Grammar\n\nWith the help of `cpp_identifiers()`, create a grammar which has C compiler invocations with a list of options, where each option takes the form `-D<variable>` for a preprocessor variable `<variable>`. Using this grammar `cpp_grammar`, a fuzzer \n\n```python\ng = GrammarCoverageFuzzer(cpp_grammar)\n```\n\nwould create C compiler invocations such as\n\n```python\n[g.fuzz() for i in range(10)]\n['cc -DHAVE_SYSCALL_GETRANDOM xmlparse.c',\n 'cc -D__SCO__ -DRANDOM_BUF -DXML_UNICODE_WCHAR_T -D__UNIXWARE__ xmlparse.c',\n 'cc -DXML_POOR_ENTROPY xmlparse.c',\n 'cc -DRANDOM xmlparse.c',\n 'cc -D_WIN xmlparse.c',\n 'cc -DHAVE_ARC xmlparse.c', ...]\n```", "_____no_output_____" ], [ "**Solution.** This is not very difficult:", "_____no_output_____" ] ], [ [ "from Grammars import new_symbol", "_____no_output_____" ], [ "cpp_grammar = {\n \"<start>\": [\"cc -c<options> \" + filename],\n \"<options>\": [\"<option>\", \"<options><option>\"],\n \"<option>\": []\n}\nfor id in cpp_ids:\n s = new_symbol(cpp_grammar, \"<\" + id + \">\")\n cpp_grammar[\"<option>\"].append(s)\n cpp_grammar[s] = [\" -D\" + id]\n\ncpp_grammar", "_____no_output_____" ], [ "assert is_valid_grammar(cpp_grammar)", "_____no_output_____" ] ], [ [ "#### Part 3: C Preprocessor Configuration Fuzzing\n\nUsing the grammar just produced, use a `GrammarCoverageFuzzer` to\n\n1. Test each processor variable individually\n2. 
Test each pair of processor variables, using `pairwise()`.\n\nWhat happens if you actually run the invocations?", "_____no_output_____" ], [ "**Solution.** We can simply run the coverage fuzzer, as described above.", "_____no_output_____" ] ], [ [ "g = GrammarCoverageFuzzer(cpp_grammar)\ng.fuzz()", "_____no_output_____" ], [ "from Fuzzer import ProgramRunner", "_____no_output_____" ], [ "for i in range(10):\n invocation = g.fuzz()\n print(\"$\", invocation)\n # subprocess.call(invocation, shell=True)\n cc_runner = ProgramRunner(invocation.split(' '))\n (result, outcome) = cc_runner.run()\n print(result.stderr, end=\"\")", "_____no_output_____" ] ], [ [ "To test all pairs, we can use `pairwise()`:", "_____no_output_____" ] ], [ [ "pairwise_cpp_grammar = deepcopy(cpp_grammar)\npairwise_cpp_grammar[\"<option>\"] = pairwise(cpp_grammar[\"<option>\"])\npairwise_cpp_grammar[\"<option>\"][:10]", "_____no_output_____" ], [ "for i in range(10):\n invocation = g.fuzz()\n print(\"$\", invocation)\n # subprocess.call(invocation, shell=True)\n cc_runner = ProgramRunner(invocation.split(' '))\n (result, outcome) = cc_runner.run()\n print(result.stderr, end=\"\")", "_____no_output_____" ] ], [ [ "Some of the compilation errors we get could be expected – for instance, defining `XML_UNICODE_WCHAR_T` when actually, the type is not supported in our environment. Other errors may not be expected – and it is these errors we would find through systematic configuration fuzzing, as described above.", "_____no_output_____" ], [ "At the end, don't forget to clean up:", "_____no_output_____" ] ], [ [ "os.remove(\"xmlparse.c\")", "_____no_output_____" ], [ "if os.path.exists(\"xmlparse.o\"):\n os.remove(\"xmlparse.o\")", "_____no_output_____" ] ], [ [ "### Exercise 2: .ini Configuration Fuzzing\n\nBesides command-line options, another important source of configurations are _configuration files_. 
In this exercise, we will consider the very simple configuration language provided by the Python `ConfigParser` module, which is very similar to what is found in Microsoft Windows _.ini_ files.", "_____no_output_____" ], [ "The following example for a `ConfigParser` input file stems right from [the ConfigParser documentation](https://docs.python.org/3/library/configparser.html):\n```\n[DEFAULT]\nServerAliveInterval = 45\nCompression = yes\nCompressionLevel = 9\nForwardX11 = yes\n\n[bitbucket.org]\nUser = hg\n\n[topsecret.server.com]\nPort = 50022\nForwardX11 = no\n```", "_____no_output_____" ], [ "The above `ConfigParser` file can be created programmatically:", "_____no_output_____" ] ], [ [ "import configparser", "_____no_output_____" ], [ "config = configparser.ConfigParser()\nconfig['DEFAULT'] = {'ServerAliveInterval': '45',\n 'Compression': 'yes',\n 'CompressionLevel': '9'}\nconfig['bitbucket.org'] = {}\nconfig['bitbucket.org']['User'] = 'hg'\nconfig['topsecret.server.com'] = {}\ntopsecret = config['topsecret.server.com']\ntopsecret['Port'] = '50022' # mutates the parser\ntopsecret['ForwardX11'] = 'no' # same here\nconfig['DEFAULT']['ForwardX11'] = 'yes'\nwith open('example.ini', 'w') as configfile:\n config.write(configfile)\n\nwith open('example.ini') as configfile:\n print(configfile.read(), end=\"\")", "_____no_output_____" ] ], [ [ "and be read in again:", "_____no_output_____" ] ], [ [ "config = configparser.ConfigParser()\nconfig.read('example.ini')\ntopsecret = config['topsecret.server.com']\ntopsecret['Port']", "_____no_output_____" ] ], [ [ "#### Part 1: Read Configuration\n\nUsing `configparser`, create a program reading in the above configuration file and accessing the individual elements.", "_____no_output_____" ], [ "#### Part 2: Create a Configuration Grammar\n\nDesign a grammar that will automatically create configuration files suitable for your above program. Fuzz your program with it.", "_____no_output_____" ], [ "#### Part 3: Mine a Configuration Grammar\n\nBy dynamically tracking the individual accesses to configuration elements, you can again extract a basic grammar from the execution. To this end, create a subclass of `ConfigParser` with a special method `__getitem__`:", "_____no_output_____" ] ], [ [ "class TrackingConfigParser(configparser.ConfigParser):\n def __getitem__(self, key):\n print(\"Accessing\", repr(key))\n return super().__getitem__(key)", "_____no_output_____" ] ], [ [ "For a `TrackingConfigParser` object `p`, `p.__getitem__(key)` will be invoked whenever `p[key]` is accessed:", "_____no_output_____" ] ], [ [ "tracking_config_parser = TrackingConfigParser()\ntracking_config_parser.read('example.ini')\nsection = tracking_config_parser['topsecret.server.com']", "_____no_output_____" ] ], [ [ "Using `__getitem__()`, as above, implement a tracking mechanism that, while your program accesses the read configuration, automatically saves options accessed and values read. Create a prototype grammar from these values; use it for fuzzing.", "_____no_output_____" ], [ "At the end, don't forget to clean up:", "_____no_output_____" ] ], [ [ "import os", "_____no_output_____" ], [ "os.remove(\"example.ini\")", "_____no_output_____" ] ], [ [ "**Solution.** Left to the reader. Enjoy!", "_____no_output_____" ], [ "### Exercise 3: Extracting and Fuzzing C Command-Line Options\n\nIn C programs, the `getopt()` function are frequently used to process configuration options. 
A call\n\n```\ngetopt(argc, argv, \"bf:\")\n```\n\nindicates that the program accepts two options `-b` and `-f`, with `-f` taking an argument (as indicated by the following colon).", "_____no_output_____" ], [ "#### Part 1: Getopt Fuzzing\n\nWrite a framework which, for a given C program, automatically extracts the argument to `getopt()` and derives a fuzzing grammar for it. There are multiple ways to achieve this:\n\n1. Scan the program source code for occurrences of `getopt()` and return the string passed. (Crude, but should frequently work.)\n2. Insert your own implementation of `getopt()` into the source code (effectively replacing `getopt()` from the runtime library), which outputs the `getopt()` argument and exits the program. Recompile and run.\n3. (Advanced.) As above, but instead of changing the source code, hook into the _dynamic linker_ which at runtime links the program with the C runtime library. Set the library loading path (on Linux and Unix, this is the `LD_LIBRARY_PATH` environment variable) such that your own version of `getopt()` is linked first, and the regular libraries later. Executing the program (without recompiling) should yield the desired result.\n\nApply this on `grep` and `ls`; report the resulting grammars and results.", "_____no_output_____" ], [ "**Solution.** Left to the reader. Enjoy hacking!", "_____no_output_____" ], [ "#### Part 2: Fuzzing Long Options in C\n\nSame as Part 1, but also hook into the GNU variant `getopt_long()`, which accepts \"long\" arguments with double dashes such as `--help`. Note that method 1, above, will not work here, since the \"long\" options are defined in a separately defined structure.", "_____no_output_____" ], [ "**Solution.** Left to the reader. Enjoy hacking!", "_____no_output_____" ], [ "### Exercise 4: Expansions in Context\n\nIn our above option configurations, we have multiple symbols which all expand to the same integer. For instance, the `--line-range` option of `autopep8` takes two `<line>` parameters which both expand into the same `<int>` symbol:\n\n```\n<option> ::= ... | --line-range <line> <line> | ...\n<line> ::= <int>\n<int> ::= (-)?<digit>+\n<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9\n```", "_____no_output_____" ] ], [ [ "autopep8_runner.ebnf_grammar()[\"<line>\"]", "_____no_output_____" ], [ "autopep8_runner.ebnf_grammar()[\"<int>\"]", "_____no_output_____" ], [ "autopep8_runner.ebnf_grammar()[\"<digit>\"]", "_____no_output_____" ] ], [ [ "Once the `GrammarCoverageFuzzer` has covered all variations of `<int>` (especially by covering all digits) for _one_ option, though, it will no longer strive to achieve such coverage for the next option. Yet, it could be desirable to achieve such coverage for each option separately.", "_____no_output_____" ], [ "One way to achieve this with our existing `GrammarCoverageFuzzer` is again to change the grammar accordingly. The idea is to _duplicate_ expansions – that is, to replace an expansion of a symbol $s$ with a new symbol $s'$ whose definition is duplicated from $s$. This way, $s'$ and $s$ are separate symbols from a coverage point of view and would be independently covered.", "_____no_output_____" ], [ "As an example, consider again the above `--line-range` option. If we want our tests to independently cover all elements of the two `<line>` parameters, we can duplicate the second `<line>` expansion into a new symbol `<line'>` with subsequent duplicated expansions:\n```\n<option> ::= ... 
| --line-range <line> <line'> | ...\n<line> ::= <int>\n<line'> ::= <int'>\n<int> ::= (-)?<digit>+\n<int'> ::= (-)?<digit'>+\n<digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9\n<digit'> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9\n```", "_____no_output_____" ], [ "Design a function `inline(grammar, symbol)` that returns a duplicate of `grammar` in which every occurrence of `<symbol>` and its expansions become separate copies. The above grammar could be a result of `inline(autopep8_runner.ebnf_grammar(), \"<line>\")`.", "_____no_output_____" ], [ "When copying, expansions in the copy should also refer to symbols in the copy. Hence, when expanding `<int>` in\n\n```<int> ::= <int><digit>```\n\nmake that\n\n```<int> ::= <int><digit>\n<int'> ::= <int'><digit'>\n```\n\n(and not `<int'> ::= <int><digit'>` or `<int'> ::= <int><digit>`).", "_____no_output_____" ], [ "Be sure to add precisely one new set of symbols for each occurrence in the original grammar, and not to expand further in the presence of recursion.", "_____no_output_____" ], [ "**Solution.** Again, left to the reader. Enjoy!", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
cb810960e2b5d12b8d8439a2eb52d90a3298fd17
78,919
ipynb
Jupyter Notebook
discrete_fourier_transform/fast_convolution.ipynb
swchao/signalsAndSystemsLecture
7f135d091499e1d3d635bac6ddf22adee15454f8
[ "MIT" ]
3
2019-01-27T12:39:27.000Z
2022-03-15T10:26:12.000Z
discrete_fourier_transform/fast_convolution.ipynb
swchao/signalsAndSystemsLecture
7f135d091499e1d3d635bac6ddf22adee15454f8
[ "MIT" ]
null
null
null
discrete_fourier_transform/fast_convolution.ipynb
swchao/signalsAndSystemsLecture
7f135d091499e1d3d635bac6ddf22adee15454f8
[ "MIT" ]
2
2020-09-18T06:26:48.000Z
2021-12-10T06:11:45.000Z
236.284431
28,436
0.895855
[ [ [ "# The Discrete Fourier Transform\n\n*This Jupyter notebook is part of a [collection of notebooks](../index.ipynb) in the bachelors module Signals and Systems, Comunications Engineering, Universität Rostock. Please direct questions and suggestions to [[email protected]](mailto:[email protected]).*", "_____no_output_____" ], [ "## Fast Convolution\n\nThe linear convolution of signals is a basic building block in many practical applications. The straightforward convolution of two finite-length signals $x[k]$ and $h[k]$ has considerable numerical complexity. This has led to the development of various algorithms that realize the convolution with lower complexity. The basic concept of the *fast convolution* is to exploit the [convolution theorem](theorems.ipynb#Convolution-Theorem) of the discrete Fourier transform (DFT). This theorem states that the periodic convolution of two signals is equal to a scalar multiplication of their spectra. The scalar multiplication has considerably less numerical operations that the convolution. The transformation of the signals can be performed efficiently by the [fast Fourier transform](fast_fourier_transform.ipynb) (FFT). \n\nSince the scalar multiplication of the spectra realizes a periodic convolution, special care has to be taken to realize a linear convolution in the spectral domain. The equivalence between linear and periodic convolution is discussed in the following.", "_____no_output_____" ], [ "### Equivalence of Linear and Periodic Convolution\n\nThe [linear convolution](../discrete_systems/linear_convolution.ipynb#Finite-Length-Signals) of a causal signal $x_L[k]$ of length $L$ with a causal signal $h_N[k]$ of length $N$ reads\n\n\\begin{equation}\ny[k] = x_L[k] * h_N[k] = \\sum_{\\kappa = 0}^{L-1} x_L[\\kappa] \\; h_N[k - \\kappa] = \\sum_{\\kappa = 0}^{N-1} h_N[\\kappa] \\; x_L[k - \\kappa]\n\\end{equation}\n\nThe resulting signal $y[k]$ is of finite length $M = N+L-1$. Without loss of generality it is assumed in the following that $N \\leq L$. The computation of $y[k]$ for $k=0,1, \\dots, M-1$ requires $M \\cdot N$ multiplications and $M \\cdot (N-1)$ additions. The computational complexity of the convolution is consequently [on the order of](https://en.wikipedia.org/wiki/Big_O_notation) $\\mathcal{O}(M \\cdot N)$.\n\nThe periodic convolution of the two signals $x_L[k]$ and $h_N[k]$ is defined as\n\n\\begin{equation}\nx_L[k] \\circledast_P h_N[k] = \\sum_{\\kappa = 0}^{N-1} h_N[\\kappa] \\cdot \\tilde{x}[k-\\kappa]\n\\end{equation}\n\nwhere $\\tilde{x}[k]$ denotes the periodic summation of $x_L[k]$ with period $P$\n\n\\begin{equation}\n\\tilde{x}[k] = \\sum_{\\nu = -\\infty}^{\\infty} x_L[k - \\nu P]\n\\end{equation}\n\nThe result of the circular convolution is periodic with period $P$. To compute the linear convolution by a periodic convolution, one has to take care that the result of the linear convolution fits into one period of the periodic convolution. Hence, the periodicity has to be chosen as $P \\geq M$ where $M = N+L-1$. 
This can be achieved by zero-padding of $x_L[k]$ to the total length $M$ resulting in the signal $x_M[k]$ of length $M$ which is defined as\n\n\\begin{equation}\nx_M[k] = \\begin{cases} \nx_L[k] & \\text{for } 0 \\leq k < L \\\\\n0 & \\text{for } L \\leq k < M\n\\end{cases}\n\\end{equation}\n\nand similar for $h_N[k]$ resulting in the zero-padded signal $h_M[k]$ which is defined as\n\n\\begin{equation}\nh_M[k] = \\begin{cases} \nh_N[k] & \\text{for } 0 \\leq k < N \\\\\n0 & \\text{for } N \\leq k < M\n\\end{cases}\n\\end{equation}\n\nUsing these signals, the linear and periodic convolution are equivalent for the first $M$ samples $k = 0,1,\\dots, M-1$\n\n\\begin{equation}\nx_L[k] * h_N[k] = x_M[k] \\circledast_M h_M[k]\n\\end{equation}", "_____no_output_____" ], [ "#### Example\n\nThe following example computes the linear, periodic and linear by periodic convolution of two signals $x[k] = \\text{rect}_L[k]$ and $h[k] = \\text{rect}_N[k]$.", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom tools import cconv\n\nL = 8 # length of signal x[k]\nN = 10 # length of signal h[k]\nP = 14 # periodicity of periodic convolution\n\n# generate signals\nx = np.ones(L)\nh = np.ones(N)\n\n# linear convolution\ny1 = np.convolve(x, h, 'full')\n# periodic convolution\ny2 = cconv(x, h, P)\n# linear convolution via periodic convolution\nxp = np.append(x, np.zeros(N-1))\nhp = np.append(h, np.zeros(L-1))\ny3 = cconv(xp, hp, L+N-1)\n\n# plot results\ndef plot_signal(x):\n plt.stem(x)\n plt.xlabel('$k$')\n plt.ylabel('$y[k]$')\n plt.xlim([0, N+L])\n plt.gca().margins(y=0.1)\n\nplt.figure(figsize = (10, 8))\nplt.subplot(3,1,1)\nplot_signal(y1)\nplt.title('Linear convolution')\n\nplt.subplot(3,1,2)\nplot_signal(y2)\nplt.title('Periodic convolution with period $P=%d$'%P)\n\nplt.subplot(3,1,3)\nplot_signal(y3)\nplt.title('Linear convolution as periodic convolution')\nplt.tight_layout()", "_____no_output_____" ] ], [ [ "**Exercise**\n\n* Change the lengths `L`, `N` and `P` and check how the results for the different convolutions change.", "_____no_output_____" ], [ "### The Fast Convolution Algorithm\n\nUsing the above derived equality of the linear and periodic convolution one can express the linear convolution $y[k] = x_L[k] * h_N[k]$ by the DFT as\n\n$$ y[k] = \\text{IDFT}_M \\{ \\; \\text{DFT}_M\\{ x_M[k] \\} \\cdot \\text{DFT}_M\\{ h_M[k] \\} \\; \\} $$\n\nThe resulting algorithm is composed of the following steps\n\n1. Zero-padding of the two input signals $x_L[k]$ and $h_N[k]$ to at least a total length of $M \\geq N+L-1$\n\n2. Computation of the DFTs $X[\\mu]$ and $H[\\mu]$ using a FFT of length $M$\n\n3. Multiplication of the spectra $Y[\\mu] = X[\\mu] \\cdot H[\\mu]$\n\n4. Inverse DFT of $Y[\\mu]$ using an inverse FFT of length $M$\n\nThe algorithm requires two DFTs of length $M$, $M$ complex multiplications and one IDFT of length $M$. On first sight this does not seem to be an improvement, since one DFT/IDFT requires $M^2$ complex multiplications and $M \\cdot (M-1)$ complex additions. The overall numerical complexity is hence in the order of $\\mathcal{O}(M^2)$. The DFT can be realized efficiently by the [fast Fourier transformation](fast_fourier_transform.ipynb) (FFT), which lowers the number of numerical operations for each DFT/IDFT significantly. The actual gain depends on the particular implementation of the FFT. Many FFTs are most efficient for lengths which are a power of two. 
It therefore can make sense, in terms of the number of numerical operations, to choose $M$ as a power of two instead of the shortest possible length $N+L-1$. In this case, the numerical complexity of the radix-2 algorithm is on the order of $\\mathcal{O}(M \\log_2 M)$.\n\nThe introduced algorithm is known as *fast convolution* due to its computational efficiency when realized by the FFT. For real valued signals $x[k] \\in \\mathbb{R}$ and $h[k] \\in \\mathbb{R}$ the number of numerical operations can be reduced further by using a real valued FFT.", "_____no_output_____" ], [ "#### Example\n\nThe implementation of the fast convolution algorithm is straightforward. In the following example the fast convolution of two real-valued signals $x[k] = \\text{rect}_L[k]$ and $h[k] = \\text{rect}_N[k]$ is shown. The real valued FFT/IFFT is consequently used. Most implementations of the FFT include the zero-padding to a given length $M$, e.g as in `numpy` by `numpy.fft.rfft(x, M)`.", "_____no_output_____" ] ], [ [ "L = 8 # length of signal x[k]\nN = 10 # length of signal h[k]\n\n# generate signals\nx = np.ones(L)\nh = np.ones(N)\n\n# fast convolution\nM = N+L-1\ny = np.fft.irfft(np.fft.rfft(x, M)*np.fft.rfft(h, M))\n\n# show result\nplt.figure(figsize=(10, 3))\nplt.stem(y)\nplt.xlabel('k')\nplt.ylabel('y[k]');", "_____no_output_____" ] ], [ [ "### Benchmark\n\nIt was already argued that the numerical complexity of the fast convolution is considerably lower due to the usage of the FFT. As measure, the gain in terms of execution time with respect to the linear convolution is evaluated in the following. Both algorithms are executed for the convolution of two real-valued signals $x_L[k]$ and $h_N[k]$ of length $L=N=2^n$ for $n \\in \\mathbb{N}$. The length of the FFTs/IFFT was chosen as $M=2^{n+1}$. The results depend heavily on the implementation of the FFT and the hardware used. Note that the execution of the following script may take some time.", "_____no_output_____" ] ], [ [ "import timeit\n\nn = np.arange(17) # lengths = 2**n to evaluate\nreps = 20 # number of repetitions for timeit\n\ngain = np.zeros(len(n))\nfor N in n:\n length = 2**N\n # setup environment for timeit\n tsetup = 'import numpy as np; from numpy.fft import rfft, irfft; \\\n x=np.random.randn(%d); h=np.random.randn(%d)' % (length, length)\n # direct convolution\n tc = timeit.timeit('np.convolve(x, x, \"full\")', setup=tsetup, number=reps)\n # fast convolution\n tf = timeit.timeit('irfft(rfft(x, %d) * rfft(h, %d))' % (2*length, 2*length), setup=tsetup, number=reps)\n # speedup by using the fast convolution\n gain[N] = tc/tf\n\n# show the results\nplt.figure(figsize = (15, 10))\nplt.barh(n-.5, gain, log=True)\nplt.plot([1, 1], [-1, n[-1]+1], 'r-')\nplt.yticks(n, 2**n)\nplt.xlabel('Gain of fast convolution')\nplt.ylabel('Length of signals')\nplt.title('Comparison of execution times between direct and fast convolution')\nplt.grid()", "_____no_output_____" ] ], [ [ "**Exercise**\n\n* For which lengths is the fast convolution faster than the linear convolution? \n* Why is it slower below a given signal length?\n* Is the trend of the gain as expected from above considerations?", "_____no_output_____" ], [ "**Copyright**\n\nThe notebooks are provided as [Open Educational Resource](https://de.wikipedia.org/wiki/Open_Educational_Resources). Feel free to use the notebooks for your own educational purposes. 
The text is licensed under [Creative Commons Attribution 4.0](https://creativecommons.org/licenses/by/4.0/), the code of the IPython examples under the [MIT license](https://opensource.org/licenses/MIT). Please attribute the work as follows: *Lecture Notes on Signals and Systems* by Sascha Spors.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
cb81228414365bf0f6aabd54f27e84d232182862
15,818
ipynb
Jupyter Notebook
python/d2l-en/pytorch/chapter_linear-networks/linear-regression-concise.ipynb
rtp-aws/devpost_aws_disaster_recovery
2ccfff2d8b85614f3043f09d98c9981dedf43c05
[ "MIT" ]
1
2022-01-13T23:36:05.000Z
2022-01-13T23:36:05.000Z
python/d2l-en/pytorch/chapter_linear-networks/linear-regression-concise.ipynb
rtp-aws/devpost_aws_disaster_recovery
2ccfff2d8b85614f3043f09d98c9981dedf43c05
[ "MIT" ]
9
2022-01-13T19:34:34.000Z
2022-01-14T19:41:18.000Z
python/d2l-en/pytorch/chapter_linear-networks/linear-regression-concise.ipynb
rtp-aws/devpost_aws_disaster_recovery
2ccfff2d8b85614f3043f09d98c9981dedf43c05
[ "MIT" ]
null
null
null
28.046099
290
0.575231
[ [ [ "# Concise Implementation of Linear Regression\n:label:`sec_linear_concise`\n\nBroad and intense interest in deep learning for the past several years\nhas inspired companies, academics, and hobbyists\nto develop a variety of mature open source frameworks\nfor automating the repetitive work of implementing\ngradient-based learning algorithms.\nIn :numref:`sec_linear_scratch`, we relied only on\n(i) tensors for data storage and linear algebra;\nand (ii) auto differentiation for calculating gradients.\nIn practice, because data iterators, loss functions, optimizers,\nand neural network layers\nare so common, modern libraries implement these components for us as well.\n\nIn this section, (**we will show you how to implement\nthe linear regression model**) from :numref:`sec_linear_scratch`\n(**concisely by using high-level APIs**) of deep learning frameworks.\n\n\n## Generating the Dataset\n\nTo start, we will generate the same dataset as in :numref:`sec_linear_scratch`.\n", "_____no_output_____" ] ], [ [ "import numpy as np\nimport torch\nfrom torch.utils import data\nfrom d2l import torch as d2l", "_____no_output_____" ], [ "true_w = torch.tensor([2, -3.4])\ntrue_b = 4.2\nfeatures, labels = d2l.synthetic_data(true_w, true_b, 1000)", "_____no_output_____" ] ], [ [ "## Reading the Dataset\n\nRather than rolling our own iterator,\nwe can [**call upon the existing API in a framework to read data.**]\nWe pass in `features` and `labels` as arguments and specify `batch_size`\nwhen instantiating a data iterator object.\nBesides, the boolean value `is_train`\nindicates whether or not\nwe want the data iterator object to shuffle the data\non each epoch (pass through the dataset).\n", "_____no_output_____" ] ], [ [ "def load_array(data_arrays, batch_size, is_train=True): #@save\n \"\"\"Construct a PyTorch data iterator.\"\"\"\n dataset = data.TensorDataset(*data_arrays)\n return data.DataLoader(dataset, batch_size, shuffle=is_train)", "_____no_output_____" ], [ "batch_size = 10\ndata_iter = load_array((features, labels), batch_size)", "_____no_output_____" ] ], [ [ "Now we can use `data_iter` in much the same way as we called\nthe `data_iter` function in :numref:`sec_linear_scratch`.\nTo verify that it is working, we can read and print\nthe first minibatch of examples.\nComparing with :numref:`sec_linear_scratch`,\nhere we use `iter` to construct a Python iterator and use `next` to obtain the first item from the iterator.\n", "_____no_output_____" ] ], [ [ "next(iter(data_iter))", "_____no_output_____" ] ], [ [ "## Defining the Model\n\nWhen we implemented linear regression from scratch\nin :numref:`sec_linear_scratch`,\nwe defined our model parameters explicitly\nand coded up the calculations to produce output\nusing basic linear algebra operations.\nYou *should* know how to do this.\nBut once your models get more complex,\nand once you have to do this nearly every day,\nyou will be glad for the assistance.\nThe situation is similar to coding up your own blog from scratch.\nDoing it once or twice is rewarding and instructive,\nbut you would be a lousy web developer\nif every time you needed a blog you spent a month\nreinventing the wheel.\n\nFor standard operations, we can [**use a framework's predefined layers,**]\nwhich allow us to focus especially\non the layers used to construct the model\nrather than having to focus on the implementation.\nWe will first define a model variable `net`,\nwhich will refer to an instance of the `Sequential` class.\nThe `Sequential` class defines a container\nfor 
several layers that will be chained together.\nGiven input data, a `Sequential` instance passes it through\nthe first layer, in turn passing the output\nas the second layer's input and so forth.\nIn the following example, our model consists of only one layer,\nso we do not really need `Sequential`.\nBut since nearly all of our future models\nwill involve multiple layers,\nwe will use it anyway just to familiarize you\nwith the most standard workflow.\n\nRecall the architecture of a single-layer network as shown in :numref:`fig_single_neuron`.\nThe layer is said to be *fully-connected*\nbecause each of its inputs is connected to each of its outputs\nby means of a matrix-vector multiplication.\n", "_____no_output_____" ], [ "In PyTorch, the fully-connected layer is defined in the `Linear` class. Note that we passed two arguments into `nn.Linear`. The first one specifies the input feature dimension, which is 2, and the second one is the output feature dimension, which is a single scalar and therefore 1.\n", "_____no_output_____" ] ], [ [ "# `nn` is an abbreviation for neural networks\nfrom torch import nn\n\nnet = nn.Sequential(nn.Linear(2, 1))", "_____no_output_____" ] ], [ [ "## Initializing Model Parameters\n\nBefore using `net`, we need to (**initialize the model parameters,**)\nsuch as the weights and bias in the linear regression model.\nDeep learning frameworks often have a predefined way to initialize the parameters.\nHere we specify that each weight parameter\nshould be randomly sampled from a normal distribution\nwith mean 0 and standard deviation 0.01.\nThe bias parameter will be initialized to zero.\n", "_____no_output_____" ], [ "As we have specified the input and output dimensions when constructing `nn.Linear`,\nnow we can access the parameters directly to specify their initial values.\nWe first locate the layer by `net[0]`, which is the first layer in the network,\nand then use the `weight.data` and `bias.data` methods to access the parameters.\nNext we use the replace methods `normal_` and `fill_` to overwrite parameter values.\n", "_____no_output_____" ] ], [ [ "net[0].weight.data.normal_(0, 0.01)\nnet[0].bias.data.fill_(0)", "_____no_output_____" ] ], [ [ "\n", "_____no_output_____" ], [ "## Defining the Loss Function\n", "_____no_output_____" ], [ "[**The `MSELoss` class computes the mean squared error (without the $1/2$ factor in :eqref:`eq_mse`).**]\nBy default it returns the average loss over examples.\n", "_____no_output_____" ] ], [ [ "loss = nn.MSELoss()", "_____no_output_____" ] ], [ [ "## Defining the Optimization Algorithm\n", "_____no_output_____" ], [ "Minibatch stochastic gradient descent is a standard tool\nfor optimizing neural networks\nand thus PyTorch supports it alongside a number of\nvariations on this algorithm in the `optim` module.\nWhen we (**instantiate an `SGD` instance,**)\nwe will specify the parameters to optimize over\n(obtainable from our net via `net.parameters()`), with a dictionary of hyperparameters\nrequired by our optimization algorithm.\nMinibatch stochastic gradient descent just requires that\nwe set the value `lr`, which is set to 0.03 here.\n", "_____no_output_____" ] ], [ [ "trainer = torch.optim.SGD(net.parameters(), lr=0.03)", "_____no_output_____" ] ], [ [ "## Training\n\nYou might have noticed that expressing our model through\nhigh-level APIs of a deep learning framework\nrequires comparatively few lines of code.\nWe did not have to individually allocate parameters,\ndefine our loss function, or implement minibatch stochastic 
gradient descent.\nOnce we start working with much more complex models,\nadvantages of high-level APIs will grow considerably.\nHowever, once we have all the basic pieces in place,\n[**the training loop itself is strikingly similar\nto what we did when implementing everything from scratch.**]\n\nTo refresh your memory: for some number of epochs,\nwe will make a complete pass over the dataset (`train_data`),\niteratively grabbing one minibatch of inputs\nand the corresponding ground-truth labels.\nFor each minibatch, we go through the following ritual:\n\n* Generate predictions by calling `net(X)` and calculate the loss `l` (the forward propagation).\n* Calculate gradients by running the backpropagation.\n* Update the model parameters by invoking our optimizer.\n\nFor good measure, we compute the loss after each epoch and print it to monitor progress.\n", "_____no_output_____" ] ], [ [ "num_epochs = 3\nfor epoch in range(num_epochs):\n for X, y in data_iter:\n l = loss(net(X) ,y)\n trainer.zero_grad()\n l.backward()\n trainer.step()\n l = loss(net(features), labels)\n print(f'epoch {epoch + 1}, loss {l:f}')", "epoch 1, loss 0.000220\nepoch 2, loss 0.000106\nepoch 3, loss 0.000106\n" ] ], [ [ "Below, we [**compare the model parameters learned by training on finite data\nand the actual parameters**] that generated our dataset.\nTo access parameters,\nwe first access the layer that we need from `net`\nand then access that layer's weights and bias.\nAs in our from-scratch implementation,\nnote that our estimated parameters are\nclose to their ground-truth counterparts.\n", "_____no_output_____" ] ], [ [ "w = net[0].weight.data\nprint('error in estimating w:', true_w - w.reshape(true_w.shape))\nb = net[0].bias.data\nprint('error in estimating b:', true_b - b)", "error in estimating w: tensor([ 0.0002, -0.0010])\nerror in estimating b: tensor([0.0003])\n" ] ], [ [ "## Summary\n", "_____no_output_____" ], [ "* Using PyTorch's high-level APIs, we can implement models much more concisely.\n* In PyTorch, the `data` module provides tools for data processing, the `nn` module defines a large number of neural network layers and common loss functions.\n* We can initialize the parameters by replacing their values with methods ending with `_`.\n", "_____no_output_____" ], [ "## Exercises\n", "_____no_output_____" ], [ "1. If we replace `nn.MSELoss(reduction='sum')` with `nn.MSELoss()`, how can we change the learning rate for the code to behave identically. Why?\n1. Review the PyTorch documentation to see what loss functions and initialization methods are provided. Replace the loss by Huber's loss.\n1. How do you access the gradient of `net[0].weight`?\n\n[Discussions](https://discuss.d2l.ai/t/45)\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ] ]
cb812758c77db0c727912f941d001169d3b92e41
5,445
ipynb
Jupyter Notebook
classes/pt/conditionals.ipynb
vauxgomes/python-course
a7c4eadfd50b1b03f3004a2d25dc9a7e8b82c307
[ "MIT" ]
null
null
null
classes/pt/conditionals.ipynb
vauxgomes/python-course
a7c4eadfd50b1b03f3004a2d25dc9a7e8b82c307
[ "MIT" ]
null
null
null
classes/pt/conditionals.ipynb
vauxgomes/python-course
a7c4eadfd50b1b03f3004a2d25dc9a7e8b82c307
[ "MIT" ]
null
null
null
24.638009
195
0.518274
[ [ [ "---\n\n![Header](img/conditionals-header.png)\n\n---\n\n# Estrutura de controles de fluxo\n\nServem para alterar a ordem dos passos de um algoritmo/programa.\n\n<img src=\"img/conditionals/control-flow.png\" width=\"600px\"/>\n\n## Tipos de estruturas de controle\n\n - Estruturas condicionais\n - Laços de repetição\n - Funções", "_____no_output_____" ], [ "## Estruturas condicionais \n### Estrutura condicional `if`\nA estrutura `if` é utilizada sempre que precisarmos desviar o curso do programa de acordo com um ou mais **condições**.\n\n<img src=\"img/conditionals/if-statement.png\" width=\"250px\"/>\n\n#### IF\n```py\nif <expr>:\n <statement>\n```\n\nNo trecho acima:\n - `<expr>` representa uma expressão Booleana\n - `<statement>` representa um bloco de código qualquer em Python\n - Este bloco deve ser válido; e\n - Deve estar identado\n\n#### Indentação\nIndentação é um recuo dado em uma ou mais linhas do código\n\n#### Exemplo\n*Se chover hoje eu vou lavar o meu carro!*", "_____no_output_____" ], [ "#### Exemplo\n*Se a temperatura estiver boa eu vou ao parque relaxar e levar o cachoro para caminhar*", "_____no_output_____" ], [ "### Estrutura condicional `else`\nEsta estutura é utilizada em conjunto com o `if` e serve para especificar uma alternativa para quando a condição do `if` **não for aceita**.\n\n> O `else` pode ser utilizado com outras estruturas de controle de fluxo\n\n<img src=\"img/conditionals/if-else-statement.png\" width=\"290px\"/>\n\n#### IF-ELSE\n```py\nif <expr>:\n <statement>\nelse:\n <statement>\n```\n\n#### Exemplo\n*Se a nota do aluno for maior ou igual à média o sistema imprime \"aprovado\", caso contrário o sistema escreve \"reprovado\"*", "_____no_output_____" ], [ "### Estrutura condicional `elif`\nEsta estutura é utilizada em conjunto com o `if` e serve para testar uma nova condição para quando a condição do `if` **não for aceita**.\n\n<img src=\"img/conditionals/if-elif-statement.png\" width=\"290px\"/>\n\n#### IF-ELIF\n```py\nif <expr>:\n <statement>\nelif:\n <statement>\n```\n\n#### IF-ELIF-ELSE\n```py\nif <expr>:\n <statement>\nelif:\n <statement>\nelse:\n <statement>\n```\n\n#### IF-ELIF-ELIF-ELIF-...-ELIF-ELSE\n```py\nif <expr>:\n <statement>\nelif:\n <statement>\nelif:\n <statement>\n .\n .\n .\nelse:\n <statement>\n```\n\n#### Exemplo\n*Se a nota do aluno for maior ou igual à média o sistema imprime \"aprovado\", caso contrário se a nota for maior ou igual à nota mínima da recuperação o sistema escreve \"recuperado\"*", "_____no_output_____" ], [ "#### Exemplo\nCrie um programa que pergunta o ano de nascimento de uma pessoa e informa o grupo etário baseado na tabela abaixo:\n\n| Grupo | Idade |\n| ----- | ----- | \n| Jóvem | até 14 anos |\n| PIA* | até 64 anos |\n| Idóso | acima de 65 anos | \n\n> *PIA: População em Idade Ativa", "_____no_output_____" ], [ "#### Desafio\nCrie um programa que realiza até 5 perguntas e determina a qual dos cinco reinos um determado ser vivo pertence (Reino Monera, Reino Protista, Reino Fungi, Reino Animalia ou Reino Plantae).", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
cb8127b0720cd0757b6263bb8f1af61d612ed66d
528,486
ipynb
Jupyter Notebook
LSTM models/LSTM_single_variable.ipynb
lllkx9220/soybean-market-price-forecasting
416a8cc3fb6cdc6a1c4638d910e5464410125d42
[ "MIT" ]
2
2021-02-18T17:07:04.000Z
2021-02-24T14:18:21.000Z
LSTM models/LSTM_single_variable.ipynb
lllkx9220/soybean-market-price-forecasting
416a8cc3fb6cdc6a1c4638d910e5464410125d42
[ "MIT" ]
2
2020-02-01T21:32:55.000Z
2020-02-01T22:22:56.000Z
LSTM models/LSTM_single_variable.ipynb
lllkx9220/soybean-market-price-forecasting
416a8cc3fb6cdc6a1c4638d910e5464410125d42
[ "MIT" ]
2
2020-05-27T05:51:09.000Z
2020-06-19T16:34:08.000Z
296.070588
120,140
0.898815
[ [ [ "import numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\npd.set_option('display.float_format', lambda x: '%.4f' % x)\nimport seaborn as sns\nsns.set_context(\"paper\", font_scale=1.3)\nsns.set_style('white')\nimport warnings\nwarnings.filterwarnings('ignore')\nfrom time import time\nimport matplotlib.ticker as tkr\nfrom scipy import stats\nfrom statsmodels.tsa.stattools import adfuller\nfrom sklearn import preprocessing\nfrom statsmodels.tsa.stattools import pacf\n%matplotlib inline\nimport math\nimport keras\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.layers import LSTM\nfrom keras.layers import Dropout\nfrom keras.layers import *\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.metrics import mean_squared_error\nfrom sklearn.metrics import mean_absolute_error\nfrom keras.callbacks import EarlyStopping\ndf=pd.read_csv('3contrat.csv')\nprint('Number of rows and columns:', df.shape)\ndf.head(5)", "Using TensorFlow backend.\n" ], [ "df.dtypes", "_____no_output_____" ], [ "df.head(5)\n", "_____no_output_____" ], [ "df['Time'] = pd.to_datetime(df['Time'])\ndf['year'] = df['Time'].apply(lambda x: x.year)\ndf['quarter'] = df['Time'].apply(lambda x: x.quarter)\ndf['month'] = df['Time'].apply(lambda x: x.month)\ndf['day'] = df['Time'].apply(lambda x: x.day)\ndf=df.loc[:,['Time','Close', 'year','quarter','month','day']]\ndf.sort_values('Time', inplace=True, ascending=True)\ndf = df.reset_index(drop=True)\ndf[\"weekday\"]=df.apply(lambda row: row[\"Time\"].weekday(),axis=1)\ndf[\"weekday\"] = (df[\"weekday\"] < 5).astype(int)", "_____no_output_____" ], [ "print('The time series starts from: ', df.Time.min())\nprint('The time series ends on: ', df.Time.max())", "The time series starts from: 2017-11-15 00:00:00\nThe time series ends on: 2019-11-01 00:00:00\n" ], [ "stat, p = stats.normaltest(df.Close)\nprint('Statistics=%.3f, p=%.3f' % (stat, p))\nalpha = 0.05\nif p > alpha:\n print('Data looks Gaussian (fail to reject H0)')\nelse:\n print('Data does not look Gaussian (reject H0)')", "Statistics=49.652, p=0.000\nData does not look Gaussian (reject H0)\n" ], [ "sns.distplot(df.Close);\nprint( 'Kurtosis of normal distribution: {}'.format(stats.kurtosis(df.Close)))\nprint( 'Skewness of normal distribution: {}'.format(stats.skew(df.Close)))", "Kurtosis of normal distribution: -0.8378035425782784\nSkewness of normal distribution: -0.21719033388331357\n" ] ], [ [ "Kurtosis: describes heaviness of the tails of a distribution.\n\nIf the kurtosis is less than zero, then the distribution is light tails.\n\n\n\nSkewness: measures asymmetry of the distribution.\n\nIf the skewness is between -0.5 and 0.5, the data are fairly symmetrical.\n\nIf the skewness is between -1 and — 0.5 or between 0.5 and 1, the data are moderately skewed.\n\nIf the skewness is less than -1 or greater than 1, the data are highly skewed. 
", "_____no_output_____" ] ], [ [ "df1=df.loc[:,['Time','Close']]\ndf1.set_index('Time',inplace=True)\ndf1.plot(figsize=(12,5))\nplt.ylabel('Close')\nplt.legend().set_visible(False)\nplt.tight_layout()\nplt.title('Close Price Time Series')\nsns.despine(top=True)\nplt.show();", "_____no_output_____" ], [ "plt.figure(figsize=(14,5))\nplt.subplot(1,2,1)\nplt.subplots_adjust(wspace=0.2)\nsns.boxplot(x=\"year\", y=\"Close\", data=df)\nplt.xlabel('year')\nplt.title('Box plot of Yearly Close Price')\nsns.despine(left=True)\nplt.tight_layout()\nplt.subplot(1,2,2)\nsns.boxplot(x=\"quarter\", y=\"Close\", data=df)\nplt.xlabel('quarter')\nplt.title('Box plot of Quarterly Close Price')\nsns.despine(left=True)\nplt.tight_layout();", "_____no_output_____" ], [ "plt.figure(figsize=(14,6))\nplt.subplot(1,2,1)\ndf['Close'].hist(bins=50)\nplt.title('Close Price Distribution')\nplt.subplot(1,2,2)\nstats.probplot(df['Close'], plot=plt);\ndf1.describe().T", "_____no_output_____" ], [ "df.index = df.Time\nfig = plt.figure(figsize=(18,16))\nfig.subplots_adjust(hspace=.4)\nax1 = fig.add_subplot(5,1,1)\nax1.plot(df['Close'].resample('D').mean(),linewidth=1)\nax1.set_title('Mean Close Price resampled over day')\nax1.tick_params(axis='both', which='major')\n\nax2 = fig.add_subplot(5,1,2, sharex=ax1)\nax2.plot(df['Close'].resample('W').mean(),linewidth=1)\nax2.set_title('Mean Close Price resampled over week')\nax2.tick_params(axis='both', which='major')\n\nax3 = fig.add_subplot(5,1,3, sharex=ax1)\nax3.plot(df['Close'].resample('M').mean(),linewidth=1)\nax3.set_title('Mean Close Price resampled over month')\nax3.tick_params(axis='both', which='major')\n\nax4 = fig.add_subplot(5,1,4, sharex=ax1)\nax4.plot(df['Close'].resample('Q').mean(),linewidth=1)\nax4.set_title('Mean Close Price resampled over quarter')\nax4.tick_params(axis='both', which='major')\n\nax5 = fig.add_subplot(5,1,5, sharex=ax1)\nax5.plot(df['Close'].resample('A').mean(),linewidth=1)\nax5.set_title('Mean Close Price resampled over year')\nax5.tick_params(axis='both', which='major');", "_____no_output_____" ], [ "plt.figure(figsize=(14,8))\nplt.subplot(2,2,1)\ndf.groupby('year').Close.agg('mean').plot()\nplt.xlabel('')\nplt.title('Mean Close Price by Year')\n\nplt.subplot(2,2,2)\ndf.groupby('quarter').Close.agg('mean').plot()\nplt.xlabel('')\nplt.title('Mean Close Price by Quarter')\n\nplt.subplot(2,2,3)\ndf.groupby('month').Close.agg('mean').plot()\nplt.xlabel('')\nplt.title('Mean Close Price by Month')\n\nplt.subplot(2,2,4)\ndf.groupby('day').Close.agg('mean').plot()\nplt.xlabel('')\nplt.title('Mean Close Price by Day');", "_____no_output_____" ], [ "pd.pivot_table(df.loc[df['year'] != 2017], values = \"Close\", \n columns = \"year\", index = \"month\").plot(subplots = True, figsize=(12, 12), layout=(3, 5), sharey=True);", "_____no_output_____" ], [ "dic={0:'Weekend',1:'Weekday'}\ndf['Day'] = df.weekday.map(dic)\na=plt.figure(figsize=(9,4)) \nplt1=sns.boxplot('year','Close',hue='Day',width=0.6,fliersize=3,\n data=df) \na.legend(loc='upper center', bbox_to_anchor=(0.5, 1.00), shadow=True, ncol=2)\nsns.despine(left=True, bottom=True) \nplt.xlabel('')\nplt.tight_layout() \nplt.legend().set_visible(False);", "_____no_output_____" ], [ "plt1=sns.factorplot('year','Close',hue='Day',\n data=df, size=4, aspect=1.5, legend=False) \nplt.title('Factor Plot of Close Price by Weekday') \nplt.tight_layout() \nsns.despine(left=True, bottom=True) \nplt.legend(loc='upper right');", "_____no_output_____" ], [ "df2=df1.resample('D', how=np.mean)\n\ndef 
test_stationarity(timeseries):\n rolmean = timeseries.rolling(window=30).mean()\n rolstd = timeseries.rolling(window=30).std()\n \n plt.figure(figsize=(14,5))\n sns.despine(left=True)\n orig = plt.plot(timeseries, color='blue',label='Original')\n mean = plt.plot(rolmean, color='red', label='Rolling Mean')\n std = plt.plot(rolstd, color='black', label = 'Rolling Std')\n\n plt.legend(loc='best'); plt.title('Rolling Mean & Standard Deviation')\n plt.show()\n \n print ('<Results of Dickey-Fuller Test>')\n dftest = adfuller(timeseries, autolag='AIC')\n dfoutput = pd.Series(dftest[0:4],\n index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])\n for key,value in dftest[4].items():\n dfoutput['Critical Value (%s)'%key] = value\n print(dfoutput)\ntest_stationarity(df2.Close.dropna())", "_____no_output_____" ] ], [ [ "### Dickey-Fuller test\n\nNull Hypothesis (H0): It suggests the time series has a unit root, meaning it is non-stationary. It has some time dependent structure.\n\nAlternate Hypothesis (H1): It suggests the time series does not have a unit root, meaning it is stationary. It does not have time-dependent structure.\n\np-value > 0.05: Accept the null hypothesis (H0), the data has a unit root and is non-stationary.", "_____no_output_____" ] ], [ [ "dataset = df.Close.values #numpy.ndarray\ndataset = dataset.astype('float32') #arrary of close price\ndataset = np.reshape(dataset, (-1, 1)) #make each close price a list [839,],[900,]\nscaler = MinMaxScaler(feature_range=(0, 1)) \ndataset = scaler.fit_transform(dataset) \n# 80% 20% split test set and training set\ntrain_size = int(len(dataset) * 0.80) # 396\ntest_size = len(dataset) - train_size # 99\ntrain, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:]\n\ndef create_dataset(dataset, look_back=1):\n X, Y = [], []\n for i in range(len(dataset)-look_back-1):\n a = dataset[i:(i+look_back), 0]\n X.append(a)\n Y.append(dataset[i + look_back, 0])\n return np.array(X), np.array(Y)\n \nlook_back = 7\nX_train, Y_train = create_dataset(train, look_back) # training \nX_test, Y_test = create_dataset(test, look_back) # testing", "_____no_output_____" ], [ "create_dataset(train, look_back)", "_____no_output_____" ], [ "# reshape input to be [samples, time steps, features]\nX_train = np.reshape(X_train, (X_train.shape[0], 1, X_train.shape[1]))\nX_test = np.reshape(X_test, (X_test.shape[0], 1, X_test.shape[1]))", "_____no_output_____" ], [ "X_train.shape", "_____no_output_____" ], [ "X_train", "_____no_output_____" ], [ "create_dataset(train, look_back)", "_____no_output_____" ], [ "X_train.shape", "_____no_output_____" ], [ "model = Sequential()\nmodel.add(LSTM(100, input_shape=(X_train.shape[1], X_train.shape[2])))\nmodel.add(Dropout(0.2))\nmodel.add(Dense(1))\nmodel.compile(loss='mean_absolute_error', optimizer='adam')\n\nhistory = model.fit(X_train, Y_train, epochs=120, batch_size=15, validation_data=(X_test, Y_test), \n callbacks=[EarlyStopping(monitor='val_loss', patience=10)], verbose=1, shuffle=False)\n\nmodel.summary()\n# train data", "Train on 388 samples, validate on 91 samples\nEpoch 1/120\n388/388 [==============================] - 1s 2ms/step - loss: 0.4916 - val_loss: 0.0886\nEpoch 2/120\n388/388 [==============================] - 0s 131us/step - loss: 0.1197 - val_loss: 0.0616\nEpoch 3/120\n388/388 [==============================] - 0s 119us/step - loss: 0.0740 - val_loss: 0.0615\nEpoch 4/120\n388/388 [==============================] - 0s 115us/step - loss: 0.0669 - val_loss: 0.0599\nEpoch 
5/120\n388/388 [==============================] - 0s 115us/step - loss: 0.0685 - val_loss: 0.0580\nEpoch 6/120\n388/388 [==============================] - 0s 130us/step - loss: 0.0668 - val_loss: 0.0556\nEpoch 7/120\n388/388 [==============================] - 0s 110us/step - loss: 0.0681 - val_loss: 0.0560\nEpoch 8/120\n388/388 [==============================] - 0s 106us/step - loss: 0.0635 - val_loss: 0.0538\nEpoch 9/120\n388/388 [==============================] - 0s 110us/step - loss: 0.0645 - val_loss: 0.0526\nEpoch 10/120\n388/388 [==============================] - 0s 108us/step - loss: 0.0619 - val_loss: 0.0517\nEpoch 11/120\n388/388 [==============================] - 0s 110us/step - loss: 0.0582 - val_loss: 0.0516\nEpoch 12/120\n388/388 [==============================] - 0s 111us/step - loss: 0.0574 - val_loss: 0.0503\nEpoch 13/120\n388/388 [==============================] - 0s 104us/step - loss: 0.0588 - val_loss: 0.0497\nEpoch 14/120\n388/388 [==============================] - 0s 117us/step - loss: 0.0597 - val_loss: 0.0490\nEpoch 15/120\n388/388 [==============================] - 0s 103us/step - loss: 0.0553 - val_loss: 0.0478\nEpoch 16/120\n388/388 [==============================] - 0s 110us/step - loss: 0.0596 - val_loss: 0.0495\nEpoch 17/120\n388/388 [==============================] - 0s 106us/step - loss: 0.0623 - val_loss: 0.0532\nEpoch 18/120\n388/388 [==============================] - 0s 105us/step - loss: 0.0612 - val_loss: 0.0521\nEpoch 19/120\n388/388 [==============================] - 0s 100us/step - loss: 0.0577 - val_loss: 0.0536\nEpoch 20/120\n388/388 [==============================] - 0s 103us/step - loss: 0.0560 - val_loss: 0.0539\nEpoch 21/120\n388/388 [==============================] - 0s 114us/step - loss: 0.0553 - val_loss: 0.0489\nEpoch 22/120\n388/388 [==============================] - 0s 116us/step - loss: 0.0545 - val_loss: 0.0487\nEpoch 23/120\n388/388 [==============================] - 0s 108us/step - loss: 0.0513 - val_loss: 0.0459\nEpoch 24/120\n388/388 [==============================] - 0s 107us/step - loss: 0.0557 - val_loss: 0.0457\nEpoch 25/120\n388/388 [==============================] - 0s 107us/step - loss: 0.0520 - val_loss: 0.0464\nEpoch 26/120\n388/388 [==============================] - 0s 99us/step - loss: 0.0546 - val_loss: 0.0477\nEpoch 27/120\n388/388 [==============================] - 0s 113us/step - loss: 0.0510 - val_loss: 0.0488\nEpoch 28/120\n388/388 [==============================] - 0s 109us/step - loss: 0.0498 - val_loss: 0.0446\nEpoch 29/120\n388/388 [==============================] - 0s 133us/step - loss: 0.0501 - val_loss: 0.0430\nEpoch 30/120\n388/388 [==============================] - 0s 108us/step - loss: 0.0507 - val_loss: 0.0470\nEpoch 31/120\n388/388 [==============================] - 0s 103us/step - loss: 0.0484 - val_loss: 0.0423\nEpoch 32/120\n388/388 [==============================] - 0s 113us/step - loss: 0.0509 - val_loss: 0.0483\nEpoch 33/120\n388/388 [==============================] - 0s 113us/step - loss: 0.0480 - val_loss: 0.0438\nEpoch 34/120\n388/388 [==============================] - 0s 137us/step - loss: 0.0506 - val_loss: 0.0420\nEpoch 35/120\n388/388 [==============================] - 0s 126us/step - loss: 0.0507 - val_loss: 0.0425\nEpoch 36/120\n388/388 [==============================] - 0s 107us/step - loss: 0.0488 - val_loss: 0.0420\nEpoch 37/120\n388/388 [==============================] - 0s 106us/step - loss: 0.0456 - val_loss: 0.0416\nEpoch 38/120\n388/388 [==============================] - 0s 
104us/step - loss: 0.0460 - val_loss: 0.0432\nEpoch 39/120\n388/388 [==============================] - 0s 105us/step - loss: 0.0505 - val_loss: 0.0413\nEpoch 40/120\n388/388 [==============================] - 0s 115us/step - loss: 0.0464 - val_loss: 0.0419\nEpoch 41/120\n388/388 [==============================] - 0s 111us/step - loss: 0.0474 - val_loss: 0.0422\nEpoch 42/120\n388/388 [==============================] - 0s 102us/step - loss: 0.0488 - val_loss: 0.0413\nEpoch 43/120\n388/388 [==============================] - 0s 108us/step - loss: 0.0452 - val_loss: 0.0416\nEpoch 44/120\n388/388 [==============================] - 0s 109us/step - loss: 0.0461 - val_loss: 0.0415\nEpoch 45/120\n388/388 [==============================] - 0s 104us/step - loss: 0.0484 - val_loss: 0.0420\nEpoch 46/120\n388/388 [==============================] - 0s 108us/step - loss: 0.0463 - val_loss: 0.0423\nEpoch 47/120\n388/388 [==============================] - 0s 105us/step - loss: 0.0471 - val_loss: 0.0419\nEpoch 48/120\n388/388 [==============================] - 0s 105us/step - loss: 0.0473 - val_loss: 0.0411\nEpoch 49/120\n388/388 [==============================] - 0s 102us/step - loss: 0.0475 - val_loss: 0.0463\nEpoch 50/120\n388/388 [==============================] - 0s 103us/step - loss: 0.0459 - val_loss: 0.0424\nEpoch 51/120\n388/388 [==============================] - 0s 103us/step - loss: 0.0457 - val_loss: 0.0412\nEpoch 52/120\n388/388 [==============================] - 0s 109us/step - loss: 0.0464 - val_loss: 0.0409\nEpoch 53/120\n388/388 [==============================] - 0s 113us/step - loss: 0.0458 - val_loss: 0.0419\nEpoch 54/120\n388/388 [==============================] - 0s 105us/step - loss: 0.0457 - val_loss: 0.0418\nEpoch 55/120\n388/388 [==============================] - 0s 111us/step - loss: 0.0480 - val_loss: 0.0430\nEpoch 56/120\n388/388 [==============================] - 0s 124us/step - loss: 0.0492 - val_loss: 0.0444\nEpoch 57/120\n388/388 [==============================] - 0s 107us/step - loss: 0.0467 - val_loss: 0.0407\nEpoch 58/120\n388/388 [==============================] - 0s 124us/step - loss: 0.0452 - val_loss: 0.0454\nEpoch 59/120\n388/388 [==============================] - 0s 109us/step - loss: 0.0487 - val_loss: 0.0445\nEpoch 60/120\n388/388 [==============================] - 0s 104us/step - loss: 0.0451 - val_loss: 0.0435\nEpoch 61/120\n388/388 [==============================] - 0s 97us/step - loss: 0.0567 - val_loss: 0.0542\nEpoch 62/120\n388/388 [==============================] - 0s 101us/step - loss: 0.0493 - val_loss: 0.0426\nEpoch 63/120\n388/388 [==============================] - 0s 106us/step - loss: 0.0471 - val_loss: 0.0421\nEpoch 64/120\n388/388 [==============================] - 0s 102us/step - loss: 0.0454 - val_loss: 0.0407\nEpoch 65/120\n388/388 [==============================] - 0s 104us/step - loss: 0.0472 - val_loss: 0.0413\nEpoch 66/120\n388/388 [==============================] - 0s 106us/step - loss: 0.0466 - val_loss: 0.0414\nEpoch 67/120\n388/388 [==============================] - 0s 104us/step - loss: 0.0460 - val_loss: 0.0421\nEpoch 68/120\n388/388 [==============================] - 0s 102us/step - loss: 0.0456 - val_loss: 0.0412\nEpoch 69/120\n388/388 [==============================] - 0s 105us/step - loss: 0.0437 - val_loss: 0.0407\nEpoch 70/120\n388/388 [==============================] - 0s 103us/step - loss: 0.0449 - val_loss: 0.0457\nEpoch 71/120\n388/388 [==============================] - 0s 104us/step - loss: 0.0501 - val_loss: 0.0406\nEpoch 
72/120\n388/388 [==============================] - 0s 106us/step - loss: 0.0454 - val_loss: 0.0405\nEpoch 73/120\n388/388 [==============================] - 0s 104us/step - loss: 0.0459 - val_loss: 0.0404\nEpoch 74/120\n388/388 [==============================] - 0s 96us/step - loss: 0.0461 - val_loss: 0.0421\nEpoch 75/120\n388/388 [==============================] - 0s 102us/step - loss: 0.0449 - val_loss: 0.0405\nEpoch 76/120\n388/388 [==============================] - 0s 98us/step - loss: 0.0446 - val_loss: 0.0424\nEpoch 77/120\n388/388 [==============================] - 0s 101us/step - loss: 0.0473 - val_loss: 0.0404\nEpoch 78/120\n388/388 [==============================] - 0s 104us/step - loss: 0.0446 - val_loss: 0.0406\nEpoch 79/120\n" ], [ "#make prediction\ntrain_predict = model.predict(X_train) \ntest_predict = model.predict(X_test) \n# invert predictions\ntrain_predict = scaler.inverse_transform(train_predict)\nY_train = scaler.inverse_transform([Y_train])\ntest_predict = scaler.inverse_transform(test_predict)\nY_test = scaler.inverse_transform([Y_test])\nprint('Train Mean Absolute Error:', mean_absolute_error(Y_train[0], train_predict[:,0]))\nprint('Train Root Mean Squared Error:',np.sqrt(mean_squared_error(Y_train[0], train_predict[:,0])))\nprint('Test Mean Absolute Error:', mean_absolute_error(Y_test[0], test_predict[:,0]))\nprint('Test Root Mean Squared Error:',np.sqrt(mean_squared_error(Y_test[0], test_predict[:,0])))", "Train Mean Absolute Error: 5.569344626517569\nTrain Root Mean Squared Error: 7.498574295488622\nTest Mean Absolute Error: 6.550505223110931\nTest Root Mean Squared Error: 8.40227413252618\n" ], [ "Y_test", "_____no_output_____" ], [ "plt.figure(figsize=(8,4))\nplt.plot(history.history['loss'], label='Train Loss')\nplt.plot(history.history['val_loss'], label='Test Loss')\nplt.title('model loss')\nplt.ylabel('loss')\nplt.xlabel('epochs')\nplt.legend(loc='upper right')\nplt.show();", "_____no_output_____" ], [ "aa=[x for x in range(48)]\nplt.figure(figsize=(8,4))\nplt.plot(aa, Y_test[0][:48], marker='.', label=\"actual\")\nplt.plot(aa, test_predict[:,0][:48], 'r', label=\"prediction\")\n# plt.tick_params(left=False, labelleft=True) #remove ticks\nplt.tight_layout()\nsns.despine(top=True)\nplt.subplots_adjust(left=0.07)\nplt.ylabel('Close', size=15)\nplt.xlabel('Time step', size=15)\nplt.legend(fontsize=15)\nplt.show();", "_____no_output_____" ], [ "Y_test", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb81454fde39cb8f90eac08e92e399e7ceb2ae65
14,130
ipynb
Jupyter Notebook
examples/tutorials/tutorial-3-basis.ipynb
rscohn2/libCEED
0983a37fb2a235bf55920dd2b1098e5586f89fb8
[ "BSD-2-Clause" ]
null
null
null
examples/tutorials/tutorial-3-basis.ipynb
rscohn2/libCEED
0983a37fb2a235bf55920dd2b1098e5586f89fb8
[ "BSD-2-Clause" ]
null
null
null
examples/tutorials/tutorial-3-basis.ipynb
rscohn2/libCEED
0983a37fb2a235bf55920dd2b1098e5586f89fb8
[ "BSD-2-Clause" ]
1
2021-03-30T23:13:18.000Z
2021-03-30T23:13:18.000Z
31.824324
401
0.542321
[ [ [ "# libCEED for Python examples\n\nThis is a tutorial to illustrate the main feautures of the Python interface for [libCEED](https://github.com/CEED/libCEED/), the low-level API library for efficient high-order discretization methods developed by the co-design [Center for Efficient Exascale Discretizations](https://ceed.exascaleproject.org/) (CEED) of the [Exascale Computing Project](https://www.exascaleproject.org/) (ECP).\n\nWhile libCEED's focus is on high-order finite/spectral element method implementations, the approach is mostly algebraic and thus applicable to other discretizations in factored form, as explained in the [user manual](https://libceed.readthedocs.io/).", "_____no_output_____" ], [ "## Setting up libCEED for Python\n\nInstall libCEED for Python by running", "_____no_output_____" ] ], [ [ "! python -m pip install libceed", "_____no_output_____" ] ], [ [ "## CeedBasis\n\nHere we show some basic examples to illustrate the `libceed.Basis` class. In libCEED, a `libceed.Basis` defines the finite element basis and associated quadrature rule (see [the API documentation](https://libceed.readthedocs.io/en/latest/libCEEDapi.html#finite-element-operator-decomposition)).", "_____no_output_____" ], [ "First we declare some auxiliary functions needed in the following examples", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.style.use('ggplot')\n\ndef eval(dim, x):\n result, center = 1, 0.1\n for d in range(dim):\n result *= np.tanh(x[d] - center)\n center += 0.1\n return result\n\ndef feval(x1, x2):\n return x1*x1 + x2*x2 + x1*x2 + 1\n\ndef dfeval(x1, x2):\n return 2*x1 + x2", "_____no_output_____" ] ], [ [ "## $H^1$ Lagrange bases in 1D\n\nThe Lagrange interpolation nodes are at the Gauss-Lobatto points, so interpolation to Gauss-Lobatto quadrature points is the identity.", "_____no_output_____" ] ], [ [ "import libceed\n\nceed = libceed.Ceed()\n\nb = ceed.BasisTensorH1Lagrange(\n dim=1, # topological dimension\n ncomp=1, # number of components\n P=4, # number of basis functions (nodes) per dimension\n Q=4, # number of quadrature points per dimension\n qmode=libceed.GAUSS_LOBATTO)\nprint(b)", "_____no_output_____" ] ], [ [ "Although a `libceed.Basis` is fully discrete, we can use the Lagrange construction to extend the basis to continuous functions by applying `EVAL_INTERP` to the identity. This is the Vandermonde matrix of the continuous basis.", "_____no_output_____" ] ], [ [ "P = b.get_num_nodes()\nnviz = 50\nbviz = ceed.BasisTensorH1Lagrange(1, 1, P, nviz, libceed.GAUSS_LOBATTO)\n\n# Construct P \"elements\" with one node activated\nI = ceed.Vector(P * P)\nwith I.array(P, P) as x:\n x[...] 
= np.eye(P)\n\nBvander = ceed.Vector(P * nviz)\nbviz.apply(4, libceed.EVAL_INTERP, I, Bvander)\n\nqviz, _weight = b.lobatto_quadrature(nviz)\nwith Bvander.array_read(nviz, P) as B:\n plt.plot(qviz, B)\n\n# Mark tho Lobatto nodes\nqb, _weight = b.lobatto_quadrature(P)\nplt.plot(qb, 0*qb, 'ok');", "_____no_output_____" ] ], [ [ "In contrast, the Gauss quadrature points are not collocated, and thus all basis functions are generally nonzero at every quadrature point.", "_____no_output_____" ] ], [ [ "b = ceed.BasisTensorH1Lagrange(1, 1, 4, 4, libceed.GAUSS)\nprint(b)\n\nwith Bvander.array_read(nviz, P) as B:\n plt.plot(qviz, B)\n# Mark tho Gauss quadrature points\nqb, _weight = b.gauss_quadrature(P)\nplt.plot(qb, 0*qb, 'ok');", "_____no_output_____" ] ], [ [ "Although the underlying functions are not an intrinsic property of a `libceed.Basis` in libCEED, the sizes are.\nHere, we create a 3D tensor product element with more quadrature points than Lagrange interpolation nodes.", "_____no_output_____" ] ], [ [ "b = ceed.BasisTensorH1Lagrange(3, 1, 4, 5, libceed.GAUSS_LOBATTO)\n\np = libceed.Basis.get_num_nodes(b)\nprint('p =', p)\n\nq = libceed.Basis.get_num_quadrature_points(b)\nprint('q =', q)", "_____no_output_____" ] ], [ [ "* In the following example, we demonstrate the application of an interpolatory basis in multiple dimensions", "_____no_output_____" ] ], [ [ "for dim in range(1, 4):\n Q = 4\n Qdim = Q**dim\n Xdim = 2**dim\n x = np.empty(Xdim*dim, dtype=\"float64\")\n uq = np.empty(Qdim, dtype=\"float64\")\n\n for d in range(dim):\n for i in range(Xdim):\n x[d*Xdim + i] = 1 if (i % (2**(dim-d))) // (2**(dim-d-1)) else -1\n\n X = ceed.Vector(Xdim*dim)\n X.set_array(x, cmode=libceed.USE_POINTER)\n Xq = ceed.Vector(Qdim*dim)\n Xq.set_value(0)\n U = ceed.Vector(Qdim)\n U.set_value(0)\n Uq = ceed.Vector(Qdim)\n\n bxl = ceed.BasisTensorH1Lagrange(dim, dim, 2, Q, libceed.GAUSS_LOBATTO)\n bul = ceed.BasisTensorH1Lagrange(dim, 1, Q, Q, libceed.GAUSS_LOBATTO)\n\n bxl.apply(1, libceed.EVAL_INTERP, X, Xq)\n\n with Xq.array_read() as xq:\n for i in range(Qdim):\n xx = np.empty(dim, dtype=\"float64\")\n for d in range(dim):\n xx[d] = xq[d*Qdim + i]\n uq[i] = eval(dim, xx)\n\n Uq.set_array(uq, cmode=libceed.USE_POINTER)\n\n # This operation is the identity because the quadrature is collocated\n bul.T.apply(1, libceed.EVAL_INTERP, Uq, U)\n\n bxg = ceed.BasisTensorH1Lagrange(dim, dim, 2, Q, libceed.GAUSS)\n bug = ceed.BasisTensorH1Lagrange(dim, 1, Q, Q, libceed.GAUSS)\n\n bxg.apply(1, libceed.EVAL_INTERP, X, Xq)\n bug.apply(1, libceed.EVAL_INTERP, U, Uq)\n\n with Xq.array_read() as xq, Uq.array_read() as u:\n #print('xq =', xq)\n #print('u =', u)\n if dim == 2:\n # Default ordering is contiguous in x direction, but\n # pyplot expects meshgrid convention, which is transposed.\n x, y = xq.reshape(2, Q, Q).transpose(0, 2, 1)\n plt.scatter(x, y, c=np.array(u).reshape(Q, Q))\n plt.xlim(-1, 1)\n plt.ylim(-1, 1)\n plt.colorbar(label='u')", "_____no_output_____" ] ], [ [ "* In the following example, we demonstrate the application of the gradient of the shape functions in multiple dimensions", "_____no_output_____" ] ], [ [ "for dim in range (1, 4):\n P, Q = 8, 10\n Pdim = P**dim\n Qdim = Q**dim\n Xdim = 2**dim\n sum1 = sum2 = 0\n x = np.empty(Xdim*dim, dtype=\"float64\")\n u = np.empty(Pdim, dtype=\"float64\")\n\n for d in range(dim):\n for i in range(Xdim):\n x[d*Xdim + i] = 1 if (i % (2**(dim-d))) // (2**(dim-d-1)) else -1\n\n X = ceed.Vector(Xdim*dim)\n X.set_array(x, cmode=libceed.USE_POINTER)\n Xq = 
ceed.Vector(Pdim*dim)\n Xq.set_value(0)\n U = ceed.Vector(Pdim)\n Uq = ceed.Vector(Qdim*dim)\n Uq.set_value(0)\n Ones = ceed.Vector(Qdim*dim)\n Ones.set_value(1)\n Gtposeones = ceed.Vector(Pdim)\n Gtposeones.set_value(0)\n\n # Get function values at quadrature points\n bxl = ceed.BasisTensorH1Lagrange(dim, dim, 2, P, libceed.GAUSS_LOBATTO)\n bxl.apply(1, libceed.EVAL_INTERP, X, Xq)\n\n with Xq.array_read() as xq:\n for i in range(Pdim):\n xx = np.empty(dim, dtype=\"float64\")\n for d in range(dim):\n xx[d] = xq[d*Pdim + i]\n u[i] = eval(dim, xx)\n\n U.set_array(u, cmode=libceed.USE_POINTER)\n\n # Calculate G u at quadrature points, G' * 1 at dofs\n bug = ceed.BasisTensorH1Lagrange(dim, 1, P, Q, libceed.GAUSS)\n bug.apply(1, libceed.EVAL_GRAD, U, Uq)\n bug.T.apply(1, libceed.EVAL_GRAD, Ones, Gtposeones)\n\n # Check if 1' * G * u = u' * (G' * 1)\n with Gtposeones.array_read() as gtposeones, Uq.array_read() as uq:\n for i in range(Pdim):\n sum1 += gtposeones[i]*u[i]\n for i in range(dim*Qdim):\n sum2 += uq[i]\n\n # Check that (1' * G * u - u' * (G' * 1)) is numerically zero\n print('1T * G * u - uT * (GT * 1) =', np.abs(sum1 - sum2))", "_____no_output_____" ] ], [ [ "### Advanced topics", "_____no_output_____" ], [ "* In the following example, we demonstrate the QR factorization of a basis matrix.\nThe representation is similar to LAPACK's [`dgeqrf`](https://www.netlib.org/lapack/explore-html/dd/d9a/group__double_g_ecomputational_ga3766ea903391b5cf9008132f7440ec7b.html#ga3766ea903391b5cf9008132f7440ec7b), with elementary reflectors in the lower triangular block, scaled by `tau`.", "_____no_output_____" ] ], [ [ "qr = np.array([1, -1, 4, 1, 4, -2, 1, 4, 2, 1, -1, 0], dtype=\"float64\")\ntau = np.empty(3, dtype=\"float64\")\n\nqr, tau = libceed.Basis.qr_factorization(ceed, qr, tau, 4, 3)\n\nprint('qr =')\nprint(qr.reshape(4, 3))\n\nprint('tau =')\nprint(tau)", "_____no_output_____" ] ], [ [ "* In the following example, we demonstrate the symmetric Schur decomposition of a basis matrix", "_____no_output_____" ] ], [ [ "A = np.array([0.19996678, 0.0745459, -0.07448852, 0.0332866,\n 0.0745459, 1., 0.16666509, -0.07448852,\n -0.07448852, 0.16666509, 1., 0.0745459,\n 0.0332866, -0.07448852, 0.0745459, 0.19996678], dtype=\"float64\")\n\nlam = libceed.Basis.symmetric_schur_decomposition(ceed, A, 4)\n\nprint(\"Q =\")\nfor i in range(4):\n for j in range(4):\n if A[j+4*i] <= 1E-14 and A[j+4*i] >= -1E-14:\n A[j+4*i] = 0\n print(\"%12.8f\"%A[j+4*i])\n\nprint(\"lambda =\")\nfor i in range(4):\n if lam[i] <= 1E-14 and lam[i] >= -1E-14:\n lam[i] = 0\n print(\"%12.8f\"%lam[i])", "_____no_output_____" ] ], [ [ "* In the following example, we demonstrate the simultaneous diagonalization of a basis matrix", "_____no_output_____" ] ], [ [ "M = np.array([0.19996678, 0.0745459, -0.07448852, 0.0332866,\n 0.0745459, 1., 0.16666509, -0.07448852,\n -0.07448852, 0.16666509, 1., 0.0745459,\n 0.0332866, -0.07448852, 0.0745459, 0.19996678], dtype=\"float64\")\nK = np.array([3.03344425, -3.41501767, 0.49824435, -0.11667092,\n -3.41501767, 5.83354662, -2.9167733, 0.49824435,\n 0.49824435, -2.9167733, 5.83354662, -3.41501767,\n -0.11667092, 0.49824435, -3.41501767, 3.03344425], dtype=\"float64\")\n\nx, lam = libceed.Basis.simultaneous_diagonalization(ceed, K, M, 4)\n\nprint(\"x =\")\nfor i in range(4):\n for j in range(4):\n if x[j+4*i] <= 1E-14 and x[j+4*i] >= -1E-14:\n x[j+4*i] = 0\n print(\"%12.8f\"%x[j+4*i])\n\nprint(\"lambda =\")\nfor i in range(4):\n if lam[i] <= 1E-14 and lam[i] >= -1E-14:\n lam[i] = 0\n 
print(\"%12.8f\"%lam[i])", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb8147dda9ea877f17ff57929a543d38c7522f3a
18,299
ipynb
Jupyter Notebook
tutorial/alink_python/Chap13.ipynb
yangqi199808/Alink
51402e1a7549ad062f5a9fc70e59fa3b65999f81
[ "Apache-2.0" ]
null
null
null
tutorial/alink_python/Chap13.ipynb
yangqi199808/Alink
51402e1a7549ad062f5a9fc70e59fa3b65999f81
[ "Apache-2.0" ]
null
null
null
tutorial/alink_python/Chap13.ipynb
yangqi199808/Alink
51402e1a7549ad062f5a9fc70e59fa3b65999f81
[ "Apache-2.0" ]
null
null
null
34.591682
100
0.49571
[ [ [ "from pyalink.alink import *\nuseLocalEnv(1)\n\nfrom utils import *\nimport os\nimport pandas as pd\n\npd.set_option('display.max_colwidth', 5000)\npd.set_option('display.html.use_mathjax', False)\n\nDATA_DIR = ROOT_DIR + \"mnist\" + os.sep\n\nDENSE_TRAIN_FILE = \"dense_train.ak\";\nDENSE_TEST_FILE = \"dense_test.ak\";\nSPARSE_TRAIN_FILE = \"sparse_train.ak\";\nSPARSE_TEST_FILE = \"sparse_test.ak\";\nTABLE_TRAIN_FILE = \"table_train.ak\";\nTABLE_TEST_FILE = \"table_test.ak\";\n\nVECTOR_COL_NAME = \"vec\";\nLABEL_COL_NAME = \"label\";\nPREDICTION_COL_NAME = \"id_cluster\";\n", "_____no_output_____" ], [ "#c_1\n\nimport numpy as np\nimport gzip, struct\n\ndef get_df(image_path, label_path):\n with gzip.open(label_path) as flbl:\n magic, num = struct.unpack(\">II\", flbl.read(8))\n label = np.frombuffer(flbl.read(), dtype=np.int8)\n label = label.reshape(len(label), 1)\n with gzip.open(image_path, 'rb') as fimg:\n magic, num, rows, cols = struct.unpack(\">IIII\", fimg.read(16))\n image = np.frombuffer(fimg.read(), dtype=np.uint8).reshape(len(label), rows * cols)\n return pd.DataFrame(np.hstack((label, image)))\n\nschema_str = \"label int\"\nfor i in range(0, 784):\n schema_str = schema_str + \", c_\" + str(i) + \" double\"\n\nif not(os.path.exists(DATA_DIR + TABLE_TRAIN_FILE)) :\n BatchOperator\\\n .fromDataframe(\n get_df(DATA_DIR + 'train-images-idx3-ubyte.gz', \n DATA_DIR + 'train-labels-idx1-ubyte.gz'),\n schema_str\n )\\\n .link(\n AkSinkBatchOp().setFilePath(DATA_DIR + TABLE_TRAIN_FILE)\n )\n BatchOperator.execute()\n\nif not(os.path.exists(DATA_DIR + TABLE_TEST_FILE)) :\n BatchOperator\\\n .fromDataframe(\n get_df(DATA_DIR + 't10k-images-idx3-ubyte.gz', \n DATA_DIR + 't10k-labels-idx1-ubyte.gz'),\n schema_str\n )\\\n .link(\n AkSinkBatchOp().setFilePath(DATA_DIR + TABLE_TEST_FILE)\n )\n BatchOperator.execute()\n", "_____no_output_____" ], [ "\nfeature_cols = []\nfor i in range(0, 784) :\n feature_cols.append(\"c_\" + str(i))\n\nif not(os.path.exists(DATA_DIR + DENSE_TRAIN_FILE)) :\n AkSourceBatchOp()\\\n .setFilePath(DATA_DIR + TABLE_TRAIN_FILE)\\\n .lazyPrint(3)\\\n .link(\n ColumnsToVectorBatchOp()\\\n .setSelectedCols(feature_cols)\\\n .setVectorCol(VECTOR_COL_NAME)\\\n .setReservedCols([LABEL_COL_NAME])\n )\\\n .lazyPrint(3)\\\n .link(\n AkSinkBatchOp().setFilePath(DATA_DIR + DENSE_TRAIN_FILE)\n );\n BatchOperator.execute();\n\n\nif not(os.path.exists(DATA_DIR + DENSE_TEST_FILE)) :\n AkSourceBatchOp()\\\n .setFilePath(DATA_DIR + TABLE_TEST_FILE)\\\n .lazyPrint(3)\\\n .link(\n ColumnsToVectorBatchOp()\\\n .setSelectedCols(feature_cols)\\\n .setVectorCol(VECTOR_COL_NAME)\\\n .setReservedCols([LABEL_COL_NAME])\n )\\\n .lazyPrint(3)\\\n .link(\n AkSinkBatchOp().setFilePath(DATA_DIR + DENSE_TEST_FILE)\n );\n BatchOperator.execute();\n\n \nif not(os.path.exists(DATA_DIR + SPARSE_TEST_FILE)) :\n source = AkSourceBatchOp()\\\n .setFilePath(DATA_DIR + TABLE_TEST_FILE)\\\n .link(\n AppendIdBatchOp().setIdCol(\"row_id\")\n );\n\n row_id_label = source\\\n .select(\"row_id AS id, \" + LABEL_COL_NAME)\\\n .lazyPrint(3, \"row_id_label\");\n\n row_id_vec = source\\\n .lazyPrint(3)\\\n .link(\n ColumnsToTripleBatchOp()\\\n .setSelectedCols(feature_cols)\\\n .setTripleColumnValueSchemaStr(\"col string, val double\")\\\n .setReservedCols([\"row_id\"])\n )\\\n .filter(\"val<>0\")\\\n .lazyPrint(3)\\\n .select(\"row_id, val, CAST(SUBSTRING(col FROM 3) AS INT) AS col\")\\\n .lazyPrint(3)\\\n .link(\n TripleToVectorBatchOp()\\\n .setTripleRowCol(\"row_id\")\\\n .setTripleColumnCol(\"col\")\\\n 
.setTripleValueCol(\"val\")\\\n .setVectorCol(VECTOR_COL_NAME)\\\n .setVectorSize(784)\n )\\\n .lazyPrint(3);\n\n JoinBatchOp()\\\n .setJoinPredicate(\"row_id = id\")\\\n .setSelectClause(LABEL_COL_NAME + \", \" + VECTOR_COL_NAME)\\\n .linkFrom(row_id_vec, row_id_label)\\\n .lazyPrint(3)\\\n .link(\n AkSinkBatchOp().setFilePath(DATA_DIR + SPARSE_TEST_FILE)\n );\n BatchOperator.execute();\n\n\nif not(os.path.exists(DATA_DIR + SPARSE_TRAIN_FILE)) :\n source = AkSourceBatchOp()\\\n .setFilePath(DATA_DIR + TABLE_TRAIN_FILE)\\\n .link(\n AppendIdBatchOp().setIdCol(\"row_id\")\n );\n\n row_id_label = source\\\n .select(\"row_id AS id, \" + LABEL_COL_NAME)\\\n .lazyPrint(3, \"row_id_label\");\n\n row_id_vec = source\\\n .lazyPrint(3)\\\n .link(\n ColumnsToTripleBatchOp()\\\n .setSelectedCols(feature_cols)\\\n .setTripleColumnValueSchemaStr(\"col string, val double\")\\\n .setReservedCols([\"row_id\"])\n )\\\n .filter(\"val<>0\")\\\n .lazyPrint(3)\\\n .select(\"row_id, val, CAST(SUBSTRING(col FROM 3) AS INT) AS col\")\\\n .lazyPrint(3)\\\n .link(\n TripleToVectorBatchOp()\\\n .setTripleRowCol(\"row_id\")\\\n .setTripleColumnCol(\"col\")\\\n .setTripleValueCol(\"val\")\\\n .setVectorCol(VECTOR_COL_NAME)\\\n .setVectorSize(784)\n )\\\n .lazyPrint(3);\n\n JoinBatchOp()\\\n .setJoinPredicate(\"row_id = id\")\\\n .setSelectClause(LABEL_COL_NAME + \", \" + VECTOR_COL_NAME)\\\n .linkFrom(row_id_vec, row_id_label)\\\n .lazyPrint(3)\\\n .link(\n AkSinkBatchOp().setFilePath(DATA_DIR + SPARSE_TRAIN_FILE)\n );\n BatchOperator.execute();\n", "_____no_output_____" ], [ "AkSourceBatchOp()\\\n .setFilePath(DATA_DIR + DENSE_TRAIN_FILE)\\\n .lazyPrint(1, \"MNIST data\")\\\n .link(\n VectorSummarizerBatchOp()\\\n .setSelectedCol(VECTOR_COL_NAME)\\\n .lazyPrintVectorSummary()\n );\n\nAkSourceBatchOp()\\\n .setFilePath(DATA_DIR + SPARSE_TRAIN_FILE)\\\n .lazyPrint(1, \"MNIST data\")\\\n .link(\n VectorSummarizerBatchOp()\\\n .setSelectedCol(VECTOR_COL_NAME)\\\n .lazyPrintVectorSummary()\n );\n\nAkSourceBatchOp()\\\n .setFilePath(DATA_DIR + SPARSE_TRAIN_FILE)\\\n .lazyPrintStatistics()\\\n .groupBy(LABEL_COL_NAME, LABEL_COL_NAME + \", COUNT(*) AS cnt\")\\\n .orderBy(\"cnt\", 100)\\\n .lazyPrint(-1);\n\nBatchOperator.execute()", "_____no_output_____" ], [ "#c_2\ntrain_data = AkSourceBatchOp().setFilePath(DATA_DIR + SPARSE_TRAIN_FILE);\ntest_data = AkSourceBatchOp().setFilePath(DATA_DIR + SPARSE_TEST_FILE);\n\nSoftmax()\\\n .setVectorCol(VECTOR_COL_NAME)\\\n .setLabelCol(LABEL_COL_NAME)\\\n .setPredictionCol(PREDICTION_COL_NAME)\\\n .enableLazyPrintTrainInfo()\\\n .enableLazyPrintModelInfo()\\\n .fit(train_data)\\\n .transform(test_data)\\\n .link(\n EvalMultiClassBatchOp()\\\n .setLabelCol(LABEL_COL_NAME)\\\n .setPredictionCol(PREDICTION_COL_NAME)\\\n .lazyPrintMetrics(\"Softmax\")\n );\n\nBatchOperator.execute()", "_____no_output_____" ], [ "#c_3\ntrain_data = AkSourceBatchOp().setFilePath(DATA_DIR + SPARSE_TRAIN_FILE);\ntest_data = AkSourceBatchOp().setFilePath(DATA_DIR + SPARSE_TEST_FILE);\n\nOneVsRest()\\\n .setClassifier(\n LogisticRegression()\\\n .setVectorCol(VECTOR_COL_NAME)\\\n .setLabelCol(LABEL_COL_NAME)\\\n .setPredictionCol(PREDICTION_COL_NAME)\n )\\\n .setNumClass(10)\\\n .fit(train_data)\\\n .transform(test_data)\\\n .link(\n EvalMultiClassBatchOp()\\\n .setLabelCol(LABEL_COL_NAME)\\\n .setPredictionCol(PREDICTION_COL_NAME)\\\n .lazyPrintMetrics(\"OneVsRest - LogisticRegression\")\n );\n\nOneVsRest()\\\n .setClassifier(\n LinearSvm()\\\n .setVectorCol(VECTOR_COL_NAME)\\\n .setLabelCol(LABEL_COL_NAME)\\\n 
.setPredictionCol(PREDICTION_COL_NAME)\n )\\\n .setNumClass(10)\\\n .fit(train_data)\\\n .transform(test_data)\\\n .link(\n EvalMultiClassBatchOp()\\\n .setLabelCol(LABEL_COL_NAME)\\\n .setPredictionCol(PREDICTION_COL_NAME)\\\n .lazyPrintMetrics(\"OneVsRest - LinearSvm\")\n );\n\nBatchOperator.execute();", "_____no_output_____" ], [ "#c_4\n\nuseLocalEnv(4)\n\ntrain_data = AkSourceBatchOp().setFilePath(DATA_DIR + SPARSE_TRAIN_FILE);\ntest_data = AkSourceBatchOp().setFilePath(DATA_DIR + SPARSE_TEST_FILE);\n\nMultilayerPerceptronClassifier()\\\n .setLayers([784, 10])\\\n .setVectorCol(VECTOR_COL_NAME)\\\n .setLabelCol(LABEL_COL_NAME)\\\n .setPredictionCol(PREDICTION_COL_NAME)\\\n .fit(train_data)\\\n .transform(test_data)\\\n .link(\n EvalMultiClassBatchOp()\\\n .setLabelCol(LABEL_COL_NAME)\\\n .setPredictionCol(PREDICTION_COL_NAME)\\\n .lazyPrintMetrics(\"MultilayerPerceptronClassifier {784, 10}\")\n );\nBatchOperator.execute();\n\nMultilayerPerceptronClassifier()\\\n .setLayers([784, 256, 128, 10])\\\n .setVectorCol(VECTOR_COL_NAME)\\\n .setLabelCol(LABEL_COL_NAME)\\\n .setPredictionCol(PREDICTION_COL_NAME)\\\n .fit(train_data)\\\n .transform(test_data)\\\n .link(\n EvalMultiClassBatchOp()\\\n .setLabelCol(LABEL_COL_NAME)\\\n .setPredictionCol(PREDICTION_COL_NAME)\\\n .lazyPrintMetrics(\"MultilayerPerceptronClassifier {784, 256, 128, 10}\")\n );\nBatchOperator.execute();", "_____no_output_____" ], [ "#c_5\n\nuseLocalEnv(4)\n\ntrain_data = AkSourceBatchOp().setFilePath(DATA_DIR + TABLE_TRAIN_FILE)\ntest_data = AkSourceBatchOp().setFilePath(DATA_DIR + TABLE_TEST_FILE)\n\nfeatureColNames = train_data.getColNames()\nfeatureColNames.remove(LABEL_COL_NAME)\n\ntrain_data.lazyPrint(5)\n\nBatchOperator.execute()\n\nsw = Stopwatch()\n\nfor treeType in ['GINI', 'INFOGAIN', 'INFOGAINRATIO'] : \n sw.reset()\n sw.start()\n DecisionTreeClassifier()\\\n .setTreeType(treeType)\\\n .setFeatureCols(featureColNames)\\\n .setLabelCol(LABEL_COL_NAME)\\\n .setPredictionCol(PREDICTION_COL_NAME)\\\n .enableLazyPrintModelInfo()\\\n .fit(train_data)\\\n .transform(test_data)\\\n .link(\n EvalMultiClassBatchOp()\\\n .setLabelCol(LABEL_COL_NAME)\\\n .setPredictionCol(PREDICTION_COL_NAME)\\\n .lazyPrintMetrics(\"DecisionTreeClassifier \" + treeType)\n );\n BatchOperator.execute()\n sw.stop()\n print(sw.getElapsedTimeSpan())\n\n\nfor numTrees in [2, 4, 8, 16, 32, 64, 128] :\n sw.reset();\n sw.start();\n RandomForestClassifier()\\\n .setSubsamplingRatio(0.6)\\\n .setNumTreesOfInfoGain(numTrees)\\\n .setFeatureCols(featureColNames)\\\n .setLabelCol(LABEL_COL_NAME)\\\n .setPredictionCol(PREDICTION_COL_NAME)\\\n .enableLazyPrintModelInfo()\\\n .fit(train_data)\\\n .transform(test_data)\\\n .link(\n EvalMultiClassBatchOp()\\\n .setLabelCol(LABEL_COL_NAME)\\\n .setPredictionCol(PREDICTION_COL_NAME)\\\n .lazyPrintMetrics(\"RandomForestClassifier : \" + str(numTrees))\n );\n BatchOperator.execute();\n sw.stop();\n print(sw.getElapsedTimeSpan());", "_____no_output_____" ], [ "#c_6\n\nuseLocalEnv(4)\n\ntrain_data = AkSourceBatchOp().setFilePath(DATA_DIR + SPARSE_TRAIN_FILE);\ntest_data = AkSourceBatchOp().setFilePath(DATA_DIR + SPARSE_TEST_FILE);\n\nKnnClassifier()\\\n .setK(3)\\\n .setVectorCol(VECTOR_COL_NAME)\\\n .setLabelCol(LABEL_COL_NAME)\\\n .setPredictionCol(PREDICTION_COL_NAME)\\\n .fit(train_data)\\\n .transform(test_data)\\\n .link(\n EvalMultiClassBatchOp()\\\n .setLabelCol(LABEL_COL_NAME)\\\n .setPredictionCol(PREDICTION_COL_NAME)\\\n .lazyPrintMetrics(\"KnnClassifier - 3 - EUCLIDEAN\")\n 
);\n\nBatchOperator.execute();\n\nKnnClassifier()\\\n .setDistanceType('COSINE')\\\n .setK(3)\\\n .setVectorCol(VECTOR_COL_NAME)\\\n .setLabelCol(LABEL_COL_NAME)\\\n .setPredictionCol(PREDICTION_COL_NAME)\\\n .fit(train_data)\\\n .transform(test_data)\\\n .link(\n EvalMultiClassBatchOp()\\\n .setLabelCol(LABEL_COL_NAME)\\\n .setPredictionCol(PREDICTION_COL_NAME)\\\n .lazyPrintMetrics(\"KnnClassifier - 3 - COSINE\")\n );\n\nBatchOperator.execute();\n\nKnnClassifier()\\\n .setK(7)\\\n .setVectorCol(VECTOR_COL_NAME)\\\n .setLabelCol(LABEL_COL_NAME)\\\n .setPredictionCol(PREDICTION_COL_NAME)\\\n .fit(train_data)\\\n .transform(test_data)\\\n .link(\n EvalMultiClassBatchOp()\\\n .setLabelCol(LABEL_COL_NAME)\\\n .setPredictionCol(PREDICTION_COL_NAME)\\\n .lazyPrintMetrics(\"KnnClassifier - 7 - EUCLIDEAN\")\n );\n\nBatchOperator.execute();", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb815305c64781639930ee1ad449c459cf351f1b
18,964
ipynb
Jupyter Notebook
test/FillStyleTest.ipynb
dimitar-j/spark
ce332fe46dc9608735856ee8eb2f59f9587c7d72
[ "MIT" ]
null
null
null
test/FillStyleTest.ipynb
dimitar-j/spark
ce332fe46dc9608735856ee8eb2f59f9587c7d72
[ "MIT" ]
null
null
null
test/FillStyleTest.ipynb
dimitar-j/spark
ce332fe46dc9608735856ee8eb2f59f9587c7d72
[ "MIT" ]
null
null
null
45.586538
153
0.551519
[ [ [ "import spark\n%reload_ext spark", "_____no_output_____" ], [ "%%ignite\n\n# Happy path: \n# fill_style accepts strings, rgb and rgba inputs\n# fill_style caps out-of-bound numbers to respective ranges\n\ndef setup():\n size(200, 200)\n print(\"fill_style renders fill color \")\n \ndef draw():\n global color\n clear()\n with_string()\n with_rgb_in_bounds()\n with_rgb_out_of_bounds()\n with_rgba_in_bounds()\n with_rgba_out_of_bounds()\n\ndef expect_fill_style(expected):\n global canvas\n if canvas.fill_style != expected:\n print(\"FAIL:\\n\\tExpected canvas.fill_style to be:\\n\\t\\t\" + expected)\n print(\"\\tbut received:\\n\\t\\t\" + str(canvas.fill_style))\n \ndef with_string():\n print(\"with string input\")\n fill_style('green') # Expected colour: green\n fill_rect(0, 0, 30, 30)\n expect_fill_style('green')\n \ndef with_rgb_in_bounds():\n print(\"with rgb in bounds\")\n fill_style(0, 0, 255) # Expected colour: blue\n fill_rect(0, 40, 30, 30)\n expect_fill_style('rgb(0, 0, 255)')\n \ndef with_rgb_out_of_bounds():\n print(\"with rgb out of bounds\")\n fill_style(-100, -200, 500) # Expected colour: blue\n fill_rect(40, 40, 30, 30)\n expect_fill_style('rgb(0, 0, 255)')\n \ndef with_rgba_in_bounds():\n print(\"with rgba in bounds\")\n fill_style(255, 0, 0, 0.3) # Expected colour: translucent red\n fill_rect(0, 80, 30, 30)\n expect_fill_style('rgba(255, 0, 0, 0.3)')\n \ndef with_rgba_out_of_bounds():\n print(\"with rgba out of bounds\")\n fill_style(500, -1, -1000, 2) # Expected colour: solid red. Note sending 2 instead of 2.0\n fill_rect(40, 80, 30, 30)\n expect_fill_style('rgba(255, 0, 0, 1.0)')", "_____no_output_____" ], [ "%%ignite\n\n# Unhappy path\n# Incorrect number of args is rejected\n# Non-ints are rejected for RGB\n# None as arg is rejected\n\ndef setup():\n print(\"fill_style throws exceptions\")\n \n size(100, 100)\n expect_type_error(with_missing_args, \"fill_style expected 1, 3 or 4 arguments, got 0\")\n expect_type_error(with_none_in_rgba, \"fill_style expected None to be an int\")\n expect_type_error(with_string_in_rgb, \"fill_style expected 'x' to be an int\")\n expect_type_error(with_float_in_rgb, \"fill_style expected 128.0 to be an int\")\n \n # TODO: This test expects a different error type\n # expect_type_error(with_none_in_string, \"The 'fill_style' trait of a Canvas instance expected a valid HTML color, not the NoneType None\")\n \ndef expect_type_error(func, expected_error):\n try:\n func()\n except TypeError as e:\n if str(e) != expected_error:\n print(\"FAIL:\\n\\tExpected \" + str(func.__name__) + \" to raise error:\\n\\t\\t\" + expected_error)\n print(\"\\tbut received:\\n\\t\\t\" + str(e))\n \ndef with_missing_args():\n print(\"with missing args\")\n fill_style()\n \ndef with_none_in_string():\n print(\"with None in string\")\n fill_style(None)\n \ndef with_none_in_rgba():\n print(\"with None-types in rgba\")\n fill_style(None, None, None, None)\n \ndef with_string_in_rgb():\n print(\"with string in rgb\")\n fill_style('x', 'y', 'z')\n \ndef with_float_in_rgb():\n print(\"with float in rgb\")\n fill_style(128.0, 128, 128)", "\nwith rgb in bounds\nwith rgb out of bounds\nwith rgba in bounds\nwith rgba out of bounds\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
cb815ae3ae0ae353b1dc13a483088d20de5f0221
2,906
ipynb
Jupyter Notebook
notebook/workflow_for_developers.ipynb
heroakisi/MDToolbox.jl
43c9a4a07846490dfab38078a5fa4da17393601b
[ "MIT" ]
null
null
null
notebook/workflow_for_developers.ipynb
heroakisi/MDToolbox.jl
43c9a4a07846490dfab38078a5fa4da17393601b
[ "MIT" ]
null
null
null
notebook/workflow_for_developers.ipynb
heroakisi/MDToolbox.jl
43c9a4a07846490dfab38078a5fa4da17393601b
[ "MIT" ]
null
null
null
27.415094
226
0.525809
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
cb815b8c6e8cd5204e2f4b1cc8aa67ab7ef27c7d
14,316
ipynb
Jupyter Notebook
pretrained-model/stt/conformer/evaluate/base-singlish.ipynb
ishine/malaya-speech
fd34afc7107af1656dff4b3201fa51dda54fde18
[ "MIT" ]
111
2020-08-31T04:58:54.000Z
2022-03-29T15:44:18.000Z
pretrained-model/stt/conformer/evaluate/base-singlish.ipynb
ishine/malaya-speech
fd34afc7107af1656dff4b3201fa51dda54fde18
[ "MIT" ]
14
2020-12-16T07:27:22.000Z
2022-03-15T17:39:01.000Z
pretrained-model/stt/conformer/evaluate/base-singlish.ipynb
ishine/malaya-speech
fd34afc7107af1656dff4b3201fa51dda54fde18
[ "MIT" ]
29
2021-02-09T08:57:15.000Z
2022-03-12T14:09:19.000Z
34.330935
300
0.559165
[ [ [ "import os\nos.environ['CUDA_VISIBLE_DEVICES'] = ''\nos.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/home/husein/t5/prepare/mesolitica-tpu.json'", "_____no_output_____" ], [ "import malaya_speech.train.model.conformer as conformer\nimport malaya_speech.train.model.transducer as transducer\nimport malaya_speech\nimport tensorflow as tf\nimport numpy as np\nimport json\nfrom glob import glob\nimport pandas as pd", "WARNING:tensorflow:From /home/husein/malaya-speech/malaya_speech/train/optimizer/__init__.py:39: The name tf.train.AdagradOptimizer is deprecated. Please use tf.compat.v1.train.AdagradOptimizer instead.\n\nWARNING:tensorflow:From /home/husein/malaya-speech/malaya_speech/train/optimizer/__init__.py:40: The name tf.train.AdamOptimizer is deprecated. Please use tf.compat.v1.train.AdamOptimizer instead.\n\nWARNING:tensorflow:From /home/husein/malaya-speech/malaya_speech/train/optimizer/__init__.py:41: The name tf.train.FtrlOptimizer is deprecated. Please use tf.compat.v1.train.FtrlOptimizer instead.\n\nWARNING:tensorflow:From /home/husein/malaya-speech/malaya_speech/train/optimizer/__init__.py:43: The name tf.train.RMSPropOptimizer is deprecated. Please use tf.compat.v1.train.RMSPropOptimizer instead.\n\nWARNING:tensorflow:From /home/husein/malaya-speech/malaya_speech/train/optimizer/__init__.py:44: The name tf.train.GradientDescentOptimizer is deprecated. Please use tf.compat.v1.train.GradientDescentOptimizer instead.\n\nWARNING:tensorflow:\nThe TensorFlow contrib module will not be included in TensorFlow 2.0.\nFor more information, please see:\n * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md\n * https://github.com/tensorflow/addons\n * https://github.com/tensorflow/io (for I/O related ops)\nIf you depend on functionality not listed there, please file an issue.\n\n" ], [ "subwords = malaya_speech.subword.load('transducer-singlish.subword')", "_____no_output_____" ], [ "featurizer = malaya_speech.tf_featurization.STTFeaturizer(\n normalize_per_feature = True\n)", "_____no_output_____" ], [ "n_mels = 80\nsr = 16000\nmaxlen = 18\nminlen_text = 1\n\ndef mp3_to_wav(file, sr = sr):\n audio = AudioSegment.from_file(file)\n audio = audio.set_frame_rate(sr).set_channels(1)\n sample = np.array(audio.get_array_of_samples())\n return malaya_speech.astype.int_to_float(sample), sr\n\n\ndef generate(file):\n print(file)\n with open(file) as fopen:\n audios = json.load(fopen)\n for i in range(len(audios)):\n try:\n audio = audios[i][0]\n wav_data, _ = malaya_speech.load(audio, sr = sr)\n\n if (len(wav_data) / sr) > maxlen:\n # print(f'skipped audio too long {audios[i]}')\n continue\n\n if len(audios[i][1]) < minlen_text:\n # print(f'skipped text too short {audios[i]}')\n continue\n\n t = malaya_speech.subword.encode(\n subwords, audios[i][1], add_blank = False\n )\n back = np.zeros(shape=(2000,))\n front = np.zeros(shape=(200,))\n wav_data = np.concatenate([front, wav_data, back], axis=-1)\n\n yield {\n 'waveforms': wav_data,\n 'targets': t,\n 'targets_length': [len(t)],\n }\n except Exception as e:\n print(e)\n\n\ndef preprocess_inputs(example):\n s = featurizer.vectorize(example['waveforms'])\n mel_fbanks = tf.reshape(s, (-1, n_mels))\n length = tf.cast(tf.shape(mel_fbanks)[0], tf.int32)\n length = tf.expand_dims(length, 0)\n example['inputs'] = mel_fbanks\n example['inputs_length'] = length\n example.pop('waveforms', None)\n example['targets'] = tf.cast(example['targets'], tf.int32)\n example['targets_length'] = tf.cast(example['targets_length'], 
tf.int32)\n return example\n\n\ndef get_dataset(\n file,\n batch_size = 3,\n shuffle_size = 20,\n thread_count = 24,\n maxlen_feature = 1800,\n):\n def get():\n dataset = tf.data.Dataset.from_generator(\n generate,\n {\n 'waveforms': tf.float32,\n 'targets': tf.int32,\n 'targets_length': tf.int32,\n },\n output_shapes = {\n 'waveforms': tf.TensorShape([None]),\n 'targets': tf.TensorShape([None]),\n 'targets_length': tf.TensorShape([None]),\n },\n args = (file,),\n )\n dataset = dataset.prefetch(tf.contrib.data.AUTOTUNE)\n dataset = dataset.map(\n preprocess_inputs, num_parallel_calls = thread_count\n )\n dataset = dataset.padded_batch(\n batch_size,\n padded_shapes = {\n 'inputs': tf.TensorShape([None, n_mels]),\n 'inputs_length': tf.TensorShape([None]),\n 'targets': tf.TensorShape([None]),\n 'targets_length': tf.TensorShape([None]),\n },\n padding_values = {\n 'inputs': tf.constant(0, dtype = tf.float32),\n 'inputs_length': tf.constant(0, dtype = tf.int32),\n 'targets': tf.constant(0, dtype = tf.int32),\n 'targets_length': tf.constant(0, dtype = tf.int32),\n },\n )\n return dataset\n\n return get", "_____no_output_____" ], [ "dev_dataset = get_dataset('test-set-imda.json', batch_size = 3)()\nfeatures = dev_dataset.make_one_shot_iterator().get_next()\nfeatures", "WARNING:tensorflow:From <ipython-input-6-26ff2481d716>:2: DatasetV1.make_one_shot_iterator (from tensorflow.python.data.ops.dataset_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse `for ... in dataset:` to iterate over a dataset. If using `tf.estimator`, return the `Dataset` object directly from your input function. As a last resort, you can use `tf.compat.v1.data.make_one_shot_iterator(dataset)`.\n" ], [ "training = True", "_____no_output_____" ], [ "config = malaya_speech.config.conformer_base_encoder_config\nconfig['dropout'] = 0.0\nconformer_model = conformer.Model(\n kernel_regularizer = None, bias_regularizer = None, **config\n)\ndecoder_config = malaya_speech.config.conformer_base_decoder_config\ndecoder_config['embed_dropout'] = 0.0\ntransducer_model = transducer.rnn.Model(\n conformer_model, vocabulary_size = subwords.vocab_size, **decoder_config\n)\ntargets_length = features['targets_length'][:, 0]\nv = tf.expand_dims(features['inputs'], -1)\nz = tf.zeros((tf.shape(features['targets'])[0], 1), dtype = tf.int32)\nc = tf.concat([z, features['targets']], axis = 1)\n\nlogits = transducer_model([v, c, targets_length + 1], training = training)", "WARNING:tensorflow:From /home/husein/.local/lib/python3.6/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\nInstructions for updating:\nIf using Keras pass *_constraint arguments to layers.\nWARNING:tensorflow:From /home/husein/malaya-speech/malaya_speech/train/model/transducer/layer.py:37: The name tf.get_variable is deprecated. 
Please use tf.compat.v1.get_variable instead.\n\nWARNING:tensorflow:From /home/husein/.local/lib/python3.6/site-packages/tensorflow_core/python/keras/backend.py:3994: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.\nInstructions for updating:\nUse tf.where in 2.0, which has the same broadcast rule as np.where\n" ], [ "decoded = transducer_model.greedy_decoder(v, features['inputs_length'][:, 0], training = training)\ndecoded", "_____no_output_____" ], [ "sess = tf.Session()\nsess.run(tf.global_variables_initializer())\nvar_list = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)\nsaver = tf.train.Saver(var_list = var_list)\nsaver.restore(sess, 'asr-base-conformer-transducer-singlish/model.ckpt-800000')", "INFO:tensorflow:Restoring parameters from asr-base-conformer-transducer-singlish/model.ckpt-800000\n" ], [ "wer, cer = [], []\nindex = 0\nwhile True:\n try:\n r = sess.run([decoded, features['targets']])\n for no, row in enumerate(r[0]):\n d = malaya_speech.subword.decode(subwords, row[row > 0])\n t = malaya_speech.subword.decode(subwords, r[1][no])\n wer.append(malaya_speech.metrics.calculate_wer(t, d))\n cer.append(malaya_speech.metrics.calculate_cer(t, d))\n index += 1\n except Exception as e:\n break", "b'test-set-imda.json'\n" ], [ "np.mean(wer), np.mean(cer)", "_____no_output_____" ], [ "for no, row in enumerate(r[0]):\n d = malaya_speech.subword.decode(subwords, row[row > 0])\n t = malaya_speech.subword.decode(subwords, r[1][no])\n print(no, d)\n print(t)\n print()", "0 the health ministry encourages all households to update their information so these subsidies can be calculated accurately\nthe health ministry encourages all households to update their information so these subsidies can be calculated accurately\n\n1 bus bridging will be deployed for the affected stretch for the duration of the suspension\nbus bridging will be deployed for the affected stretch for the duration of the suspension\n\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb8162bd6c12679e9fb8c31c71ae111ad9870041
429,966
ipynb
Jupyter Notebook
Project27-ProteinVariantMolecularEffectPredictor/singleProteinModels.ipynb
Vauke/Deep-Neural-Networks-HealthCare
a6e0cc9d44e06ab3b3f3a947c512ca25f3e17a14
[ "MIT" ]
1
2019-09-25T08:00:50.000Z
2019-09-25T08:00:50.000Z
Project27-ProteinVariantMolecularEffectPredictor/singleProteinModels.ipynb
Vauke/Deep-Neural-Networks-HealthCare
a6e0cc9d44e06ab3b3f3a947c512ca25f3e17a14
[ "MIT" ]
7
2020-09-26T01:27:55.000Z
2022-01-13T03:14:02.000Z
Project27-ProteinVariantMolecularEffectPredictor/singleProteinModels.ipynb
Vauke/Deep-Neural-Networks-HealthCare
a6e0cc9d44e06ab3b3f3a947c512ca25f3e17a14
[ "MIT" ]
1
2019-07-15T21:44:09.000Z
2019-07-15T21:44:09.000Z
41.172652
1,960
0.499705
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
cb81646c999330d1d7099753ee8f63781cd15cbd
60,236
ipynb
Jupyter Notebook
master/Demo.ipynb
lrittner/ia979
b4e6eb9495adb1d45cc550d2544dac88e91b9788
[ "MIT" ]
14
2017-07-12T17:32:44.000Z
2021-08-19T13:30:46.000Z
master/Demo.ipynb
lrittner/ia979
b4e6eb9495adb1d45cc550d2544dac88e91b9788
[ "MIT" ]
1
2017-06-29T13:34:26.000Z
2017-06-29T13:34:26.000Z
master/Demo.ipynb
lrittner/ia979
b4e6eb9495adb1d45cc550d2544dac88e91b9788
[ "MIT" ]
19
2017-03-05T17:40:48.000Z
2020-03-09T17:01:20.000Z
360.694611
56,708
0.922654
[ [ [ "import numpy as np", "_____no_output_____" ], [ "#(np.random.rand(10,5)*10).astype(int)", "_____no_output_____" ], [ "from PIL import Image", "_____no_output_____" ], [ "fimg = Image.open('../data/cameraman.tif')\nfimg", "_____no_output_____" ], [ "np.array(fimg)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
cb8166d5ac204a1d601fdfe9176b618cfce086cd
50,571
ipynb
Jupyter Notebook
demos/oasis/demo/pipeline/ingest/src/example.ipynb
natbusa/datalabframework-env
45d85111b82cff7602f54d2d4de925901fc2eaef
[ "MIT" ]
14
2019-10-19T01:38:07.000Z
2021-12-25T06:30:32.000Z
demos/oasis/demo/pipeline/ingest/src/example.ipynb
natbusa/datalabframework-env
45d85111b82cff7602f54d2d4de925901fc2eaef
[ "MIT" ]
3
2019-10-10T13:43:31.000Z
2019-10-18T15:43:43.000Z
demos/oasis/demo/pipeline/ingest/src/example.ipynb
natbusa/datalabframework-env
45d85111b82cff7602f54d2d4de925901fc2eaef
[ "MIT" ]
10
2019-10-12T12:00:13.000Z
2019-12-24T07:24:08.000Z
35.070042
1,622
0.424235
[ [ [ "import datafaucet as dfc", "_____no_output_____" ], [ "# start the engine\nproject = dfc.project.load()", "created SparkEngine\nInit engine \"spark\"\nConfiguring packages:\n - mysql:mysql-connector-java:8.0.12\n - org.apache.hadoop:hadoop-aws:3.2.1\nConfiguring conf:\n - spark.hadoop.fs.s3a.access.key : ****** (redacted)\n - spark.hadoop.fs.s3a.endpoint : http://minio:9000\n - spark.hadoop.fs.s3a.impl : org.apache.hadoop.fs.s3a.S3AFileSystem\n - spark.hadoop.fs.s3a.path.style.access : true\n - spark.hadoop.fs.s3a.secret.key : ****** (redacted)\nConnecting to spark master: local[*]\nEngine context spark:2.4.4 successfully started\n" ], [ "spark = dfc.context()", "_____no_output_____" ], [ "df = spark.range(100) ", "_____no_output_____" ], [ "df.data.grid()", "_____no_output_____" ], [ "(df\n .cols.get('name').obscure(alias='enc')\n .cols.get('enc').unravel(alias='dec')\n).data.grid()", "_____no_output_____" ], [ "df.data.grid().groupby(['id', 'name'])\\\n .agg({'fight':[max, 'min'], 'trade': 'count'}).stack(0)", "_____no_output_____" ], [ "from pyspark.sql import functions as F", "_____no_output_____" ], [ "df.cols.get('groupby('id', 'name')\\\n .agg({'fight':[F.max, 'min'], 'trade': 'min'}).data.grid()", "_____no_output_____" ], [ "df.cols.groupby('id', 'name')\\\n .agg({'fight':[F.max, 'min'], 'trade': 'count'}, stack=True).data.grid()", "_____no_output_____" ], [ "from pyspark.sql import functions as F\ndf.groupby('id', 'name').agg(\n F.lit('fight').alias('colname'), \n F.min('fight').alias('min'),\n F.max('fight').alias('max'),\n F.lit(None).alias('count')).union(\n df.groupby('id', 'name').agg(\n F.lit('trade').alias('colname'), \n F.lit(None).alias('min'),\n F.lit(None).alias('max'),\n F.count('trade').alias('count'))\n).data.grid()\n", "_____no_output_____" ], [ " def string2func(func):\n if isinstance(func, str):\n f = A.all.get(func)\n if f:\n return (func,f)\n else:\n raise ValueError(f'function {func} not found')\n elif isinstance(func, (type(lambda x: x), type(max))):\n return (func.__name__, func)\n else:\n raise ValueError('Invalid aggregation function')\n \n def parse_single_func(func):\n if isinstance(func, (str, type(lambda x: x), type(max))):\n return string2func(func)\n elif isinstance(func, (tuple)):\n if len(func)==2:\n return (func[0], string2func(func[1])[1])\n else:\n raise ValueError('Invalid list/tuple')\n else:\n raise ValueError(f'Invalid aggregation item {func}')\n \n def parse_list_func(func):\n func = [func] if type(func)!=list else func\n return [parse_single_func(x) for x in func]\n\n def parse_dict_func(func):\n func = {0: func} if not isinstance(func, dict) else func\n return {x[0]:parse_list_func(x[1]) for x in func.items()}\n", "_____no_output_____" ], [ "lst = [\n F.max,\n 'max',\n ('maxx', F.max),\n ('maxx', 'max'),\n ['max', F.max, ('maxx', F.max)],\n {'a': F.max},\n {'a': 'max'},\n {'a': ('maxx', F.max)},\n {'a': ('maxx', 'max')},\n {'a': ['max', F.max, ('maxx', F.max)]},\n {'a': F.max, 'b': F.max},\n {'a': 'max', 'b': 'max'},\n {'a': ('maxx', F.max), 'b': ('maxx', F.max)},\n {'a': ('maxx', 'max'), 'b': ('maxx', 'max')},\n {'a': ['max', F.max, ('maxx', F.max)], 'b': ['min', F.min, ('minn', F.min)]}\n]\nfor i in lst: \n print('=====')\n print(i)\n funcs = parse_dict_func(i)\n all_cols = set()\n for k, v in funcs.items():\n all_cols = all_cols.union(( x[0] for x in v ))\n print('all_cols:', all_cols)\n\n for c in ['a', 'b']:\n print('-----', c, '-----')\n agg_funcs = funcs.get(0, funcs.get(c))\n if agg_funcs is None:\n continue\n agg_cols = 
set([x[0] for x in agg_funcs])\n null_cols = all_cols - agg_cols \n print('column',c)\n print('all ',all_cols)\n print('agg ',agg_cols)\n print('null ', null_cols)\n\n for n,f in agg_funcs:\n print(c, n,f)", "=====\n<function _create_function.<locals>._ at 0x7f649f8aed08>\nall_cols: {'max'}\n----- a -----\ncolumn a\nall {'max'}\nagg {'max'}\nnull set()\na max <function _create_function.<locals>._ at 0x7f649f8aed08>\n----- b -----\ncolumn b\nall {'max'}\nagg {'max'}\nnull set()\nb max <function _create_function.<locals>._ at 0x7f649f8aed08>\n=====\nmax\nall_cols: {'max'}\n----- a -----\ncolumn a\nall {'max'}\nagg {'max'}\nnull set()\na max <function _create_function.<locals>._ at 0x7f649f8aed08>\n----- b -----\ncolumn b\nall {'max'}\nagg {'max'}\nnull set()\nb max <function _create_function.<locals>._ at 0x7f649f8aed08>\n=====\n('maxx', <function _create_function.<locals>._ at 0x7f649f8aed08>)\nall_cols: {'maxx'}\n----- a -----\ncolumn a\nall {'maxx'}\nagg {'maxx'}\nnull set()\na maxx <function _create_function.<locals>._ at 0x7f649f8aed08>\n----- b -----\ncolumn b\nall {'maxx'}\nagg {'maxx'}\nnull set()\nb maxx <function _create_function.<locals>._ at 0x7f649f8aed08>\n=====\n('maxx', 'max')\nall_cols: {'maxx'}\n----- a -----\ncolumn a\nall {'maxx'}\nagg {'maxx'}\nnull set()\na maxx <function _create_function.<locals>._ at 0x7f649f8aed08>\n----- b -----\ncolumn b\nall {'maxx'}\nagg {'maxx'}\nnull set()\nb maxx <function _create_function.<locals>._ at 0x7f649f8aed08>\n=====\n['max', <function _create_function.<locals>._ at 0x7f649f8aed08>, ('maxx', <function _create_function.<locals>._ at 0x7f649f8aed08>)]\nall_cols: {'maxx', 'max'}\n----- a -----\ncolumn a\nall {'maxx', 'max'}\nagg {'maxx', 'max'}\nnull set()\na max <function _create_function.<locals>._ at 0x7f649f8aed08>\na max <function _create_function.<locals>._ at 0x7f649f8aed08>\na maxx <function _create_function.<locals>._ at 0x7f649f8aed08>\n----- b -----\ncolumn b\nall {'maxx', 'max'}\nagg {'maxx', 'max'}\nnull set()\nb max <function _create_function.<locals>._ at 0x7f649f8aed08>\nb max <function _create_function.<locals>._ at 0x7f649f8aed08>\nb maxx <function _create_function.<locals>._ at 0x7f649f8aed08>\n=====\n{'a': <function _create_function.<locals>._ at 0x7f649f8aed08>}\nall_cols: {'max'}\n----- a -----\ncolumn a\nall {'max'}\nagg {'max'}\nnull set()\na max <function _create_function.<locals>._ at 0x7f649f8aed08>\n----- b -----\n=====\n{'a': 'max'}\nall_cols: {'max'}\n----- a -----\ncolumn a\nall {'max'}\nagg {'max'}\nnull set()\na max <function _create_function.<locals>._ at 0x7f649f8aed08>\n----- b -----\n=====\n{'a': ('maxx', <function _create_function.<locals>._ at 0x7f649f8aed08>)}\nall_cols: {'maxx'}\n----- a -----\ncolumn a\nall {'maxx'}\nagg {'maxx'}\nnull set()\na maxx <function _create_function.<locals>._ at 0x7f649f8aed08>\n----- b -----\n=====\n{'a': ('maxx', 'max')}\nall_cols: {'maxx'}\n----- a -----\ncolumn a\nall {'maxx'}\nagg {'maxx'}\nnull set()\na maxx <function _create_function.<locals>._ at 0x7f649f8aed08>\n----- b -----\n=====\n{'a': ['max', <function _create_function.<locals>._ at 0x7f649f8aed08>, ('maxx', <function _create_function.<locals>._ at 0x7f649f8aed08>)]}\nall_cols: {'maxx', 'max'}\n----- a -----\ncolumn a\nall {'maxx', 'max'}\nagg {'maxx', 'max'}\nnull set()\na max <function _create_function.<locals>._ at 0x7f649f8aed08>\na max <function _create_function.<locals>._ at 0x7f649f8aed08>\na maxx <function _create_function.<locals>._ at 0x7f649f8aed08>\n----- b -----\n=====\n{'a': <function 
_create_function.<locals>._ at 0x7f649f8aed08>, 'b': <function _create_function.<locals>._ at 0x7f649f8aed08>}\nall_cols: {'max'}\n----- a -----\ncolumn a\nall {'max'}\nagg {'max'}\nnull set()\na max <function _create_function.<locals>._ at 0x7f649f8aed08>\n----- b -----\ncolumn b\nall {'max'}\nagg {'max'}\nnull set()\nb max <function _create_function.<locals>._ at 0x7f649f8aed08>\n=====\n{'a': 'max', 'b': 'max'}\nall_cols: {'max'}\n----- a -----\ncolumn a\nall {'max'}\nagg {'max'}\nnull set()\na max <function _create_function.<locals>._ at 0x7f649f8aed08>\n----- b -----\ncolumn b\nall {'max'}\nagg {'max'}\nnull set()\nb max <function _create_function.<locals>._ at 0x7f649f8aed08>\n=====\n{'a': ('maxx', <function _create_function.<locals>._ at 0x7f649f8aed08>), 'b': ('maxx', <function _create_function.<locals>._ at 0x7f649f8aed08>)}\nall_cols: {'maxx'}\n----- a -----\ncolumn a\nall {'maxx'}\nagg {'maxx'}\nnull set()\na maxx <function _create_function.<locals>._ at 0x7f649f8aed08>\n----- b -----\ncolumn b\nall {'maxx'}\nagg {'maxx'}\nnull set()\nb maxx <function _create_function.<locals>._ at 0x7f649f8aed08>\n=====\n{'a': ('maxx', 'max'), 'b': ('maxx', 'max')}\nall_cols: {'maxx'}\n----- a -----\ncolumn a\nall {'maxx'}\nagg {'maxx'}\nnull set()\na maxx <function _create_function.<locals>._ at 0x7f649f8aed08>\n----- b -----\ncolumn b\nall {'maxx'}\nagg {'maxx'}\nnull set()\nb maxx <function _create_function.<locals>._ at 0x7f649f8aed08>\n=====\n{'a': ['max', <function _create_function.<locals>._ at 0x7f649f8aed08>, ('maxx', <function _create_function.<locals>._ at 0x7f649f8aed08>)], 'b': ['min', <function _create_function.<locals>._ at 0x7f649f8aed90>, ('minn', <function _create_function.<locals>._ at 0x7f649f8aed90>)]}\nall_cols: {'maxx', 'max', 'min', 'minn'}\n----- a -----\ncolumn a\nall {'maxx', 'max', 'min', 'minn'}\nagg {'maxx', 'max'}\nnull {'minn', 'min'}\na max <function _create_function.<locals>._ at 0x7f649f8aed08>\na max <function _create_function.<locals>._ at 0x7f649f8aed08>\na maxx <function _create_function.<locals>._ at 0x7f649f8aed08>\n----- b -----\ncolumn b\nall {'maxx', 'max', 'min', 'minn'}\nagg {'minn', 'min'}\nnull {'maxx', 'max'}\nb min <function _create_function.<locals>._ at 0x7f649f8aed90>\nb min <function _create_function.<locals>._ at 0x7f649f8aed90>\nb minn <function _create_function.<locals>._ at 0x7f649f8aed90>\n" ], [ "df.cols.groupby('id', 'name').agg({\n 'fight':['sum', 'min', 'max'], \n 'trade':['max', 'count']}).data.grid()", "_____no_output_____" ], [ "pdf = df.data.grid()\nhelp(pdf.agg)", "Help on method aggregate in module pandas.core.frame:\n\naggregate(func, axis=0, *args, **kwargs) method of pandas.core.frame.DataFrame instance\n Aggregate using one or more operations over the specified axis.\n \n .. versionadded:: 0.20.0\n \n Parameters\n ----------\n func : function, str, list or dict\n Function to use for aggregating the data. If a function, must either\n work when passed a DataFrame or when passed to DataFrame.apply.\n \n Accepted combinations are:\n \n - function\n - string function name\n - list of functions and/or function names, e.g. 
``[np.sum, 'mean']``\n - dict of axis labels -> functions, function names or list of such.\n axis : {0 or 'index', 1 or 'columns'}, default 0\n If 0 or 'index': apply function to each column.\n If 1 or 'columns': apply function to each row.\n *args\n Positional arguments to pass to `func`.\n **kwargs\n Keyword arguments to pass to `func`.\n \n Returns\n -------\n scalar, Series or DataFrame\n \n The return can be:\n \n * scalar : when Series.agg is called with single function\n * Series : when DataFrame.agg is called with a single function\n * DataFrame : when DataFrame.agg is called with several functions\n \n Return scalar, Series or DataFrame.\n \n The aggregation operations are always performed over an axis, either the\n index (default) or the column axis. This behavior is different from\n `numpy` aggregation functions (`mean`, `median`, `prod`, `sum`, `std`,\n `var`), where the default is to compute the aggregation of the flattened\n array, e.g., ``numpy.mean(arr_2d)`` as opposed to\n ``numpy.mean(arr_2d, axis=0)``.\n \n `agg` is an alias for `aggregate`. Use the alias.\n \n See Also\n --------\n DataFrame.apply : Perform any type of operations.\n DataFrame.transform : Perform transformation type operations.\n core.groupby.GroupBy : Perform operations over groups.\n core.resample.Resampler : Perform operations over resampled bins.\n core.window.Rolling : Perform operations over rolling window.\n core.window.Expanding : Perform operations over expanding window.\n core.window.EWM : Perform operation over exponential weighted\n window.\n \n Notes\n -----\n `agg` is an alias for `aggregate`. Use the alias.\n \n A passed user-defined-function will be passed a Series for evaluation.\n \n Examples\n --------\n >>> df = pd.DataFrame([[1, 2, 3],\n ... [4, 5, 6],\n ... [7, 8, 9],\n ... [np.nan, np.nan, np.nan]],\n ... columns=['A', 'B', 'C'])\n \n Aggregate these functions over the rows.\n \n >>> df.agg(['sum', 'min'])\n A B C\n sum 12.0 15.0 18.0\n min 1.0 2.0 3.0\n \n Different aggregations per column.\n \n >>> df.agg({'A' : ['sum', 'min'], 'B' : ['min', 'max']})\n A B\n max NaN 8.0\n min 1.0 2.0\n sum 12.0 NaN\n \n Aggregate over the columns.\n \n >>> df.agg(\"mean\", axis=\"columns\")\n 0 2.0\n 1 5.0\n 2 8.0\n 3 NaN\n dtype: float64\n\n" ], [ "# hash / rand columns which you wish to protect during ingest\ndf = (df\n .cols.find('greedy').rand()\n .cols.get('name').hashstr(salt='foobar')\n .rows.sample(3)\n)", "_____no_output_____" ], [ "df.data.grid()", "_____no_output_____" ], [ "from pyspark.sql import functions as F\ndf.cols.agg({'type':'type', 'sample':'first'}).data.grid()", "_____no_output_____" ], [ "df.save('races', 'minio')", "_____no_output_____" ], [ "dfc.list('minio', 'races').data.grid()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb816b00eff85f7bda7ab92f3c1b23e12b390ca5
3,868
ipynb
Jupyter Notebook
examples/absolute-importing.ipynb
agoose77/literary
10245977c1c35b7ae19e42de5eb20d918d5207a5
[ "BSD-3-Clause" ]
10
2020-11-05T16:00:04.000Z
2022-02-04T15:53:59.000Z
examples/absolute-importing.ipynb
agoose77/literary
10245977c1c35b7ae19e42de5eb20d918d5207a5
[ "BSD-3-Clause" ]
33
2020-10-30T15:38:23.000Z
2021-11-10T17:31:04.000Z
examples/absolute-importing.ipynb
agoose77/literary
10245977c1c35b7ae19e42de5eb20d918d5207a5
[ "BSD-3-Clause" ]
1
2021-05-13T14:44:05.000Z
2021-05-13T14:44:05.000Z
19.734694
218
0.507756
[ [ [ "# Absolute Importing\nThis notebook is a pure Jupyter notebook (doesn't use any special cell tags) that imports from `package_a`, to demonstrate absolute imports.", "_____no_output_____" ], [ "### Install notebook hook\nThis is only required once, if this notebook wishes to import from .ipynb\nOnce Literary packages have been built into pure-Python equivalents, this is not required.\n\nWe are going to use the _notebook_ hook, which adds support for importing other notebooks. This is different to the _module_ hook, which can be seen in [src/package_a/importer.ipynb](src/package_a/importer.ipynb)", "_____no_output_____" ] ], [ [ "%load_ext literary.notebook", "_____no_output_____" ] ], [ [ "### Import test package", "_____no_output_____" ] ], [ [ "import package_a", "_____no_output_____" ], [ "dir(package_a)", "_____no_output_____" ] ], [ [ "### Investigate a submodule", "_____no_output_____" ] ], [ [ "import package_a.exports as exports", "_____no_output_____" ], [ "dir(exports)", "_____no_output_____" ], [ "import pathlib", "_____no_output_____" ], [ "path = pathlib.Path()\npath.parent", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
cb81789949073478301d48f189ad4ebdb790dcd9
12,716
ipynb
Jupyter Notebook
notebooks/Sonar - Decentralized Learning Demo.ipynb
jopasserat/PySonar
877cad6ec1e180ff0f3831501bd3c30c5880731b
[ "Apache-2.0" ]
171
2017-07-29T21:51:07.000Z
2018-04-07T10:04:12.000Z
notebooks/Sonar - Decentralized Learning Demo.ipynb
jopasserat/PySonar
877cad6ec1e180ff0f3831501bd3c30c5880731b
[ "Apache-2.0" ]
36
2017-07-31T01:54:18.000Z
2017-12-06T00:15:32.000Z
notebooks/Sonar - Decentralized Learning Demo.ipynb
jopasserat/PySonar
877cad6ec1e180ff0f3831501bd3c30c5880731b
[ "Apache-2.0" ]
63
2017-08-06T18:52:35.000Z
2018-03-29T12:56:41.000Z
27.17094
371
0.575417
[ [ [ "# Sonar - Decentralized Model Training Simulation (local)\n\nDISCLAIMER: This is a proof-of-concept implementation. It does not represent a remotely product ready implementation or follow proper conventions for security, convenience, or scalability. It is part of a broader proof-of-concept demonstrating the vision of the OpenMined project, its major moving parts, and how they might work together.\n", "_____no_output_____" ], [ "# Getting Started: Installation\n\n##### Step 1: install IPFS\n\n- https://ipfs.io/docs/install/\n\n##### Step 2: Turn on IPFS Daemon\nExecute on command line:\n> ipfs daemon\n\n##### Step 3: Install Ethereum testrpc\n\n- https://github.com/ethereumjs/testrpc\n\n##### Step 4: Turn on testrpc with 1000 initialized accounts (each with some money)\nExecute on command line:\n> testrpc -a 1000\n\n##### Step 5: install openmined/sonar and all dependencies (truffle)\n\n##### Step 6: Locally Deploy Smart Contracts in openmined/sonar\nFrom the OpenMined/Sonar repository root run\n> truffle compile\n> truffle migrate\n\nyou should see something like this when you run migrate:\n```\nUsing network 'development'.\n\nRunning migration: 1_initial_migration.js\n Deploying Migrations...\n Migrations: 0xf06039885460a42dcc8db5b285bb925c55fbaeae\nSaving successful migration to network...\nSaving artifacts...\nRunning migration: 2_deploy_contracts.js\n Deploying ConvertLib...\n ConvertLib: 0x6cc86f0a80180a491f66687243376fde45459436\n Deploying ModelRepository...\n ModelRepository: 0xe26d32efe1c573c9f81d68aa823dcf5ff3356946\n Linking ConvertLib to MetaCoin\n Deploying MetaCoin...\n MetaCoin: 0x6d3692bb28afa0eb37d364c4a5278807801a95c5\n```\n\nThe address after 'ModelRepository' is something you'll need to copy paste into the code\nbelow when you initialize the \"ModelRepository\" object. In this case the address to be\ncopy pasted is `0xe26d32efe1c573c9f81d68aa823dcf5ff3356946`.\n\n##### Step 7: execute the following code", "_____no_output_____" ], [ "# The Simulation: Diabetes Prediction", "_____no_output_____" ], [ "In this example, a diabetes research center (Cure Diabetes Inc) wants to train a model to try to predict the progression of diabetes based on several indicators. They have collected a small sample (42 patients) of data but it's not enough to train a model. So, they intend to offer up a bounty of $5,000 to the OpenMined commmunity to train a high quality model.\n\nAs it turns out, there are 400 diabetics in the network who are candidates for the model (are collecting the relevant fields). In this simulation, we're going to faciliate the training of Cure Diabetes Inc incentivizing these 400 anonymous contributors to train the model using the Ethereum blockchain.\n\nNote, in this simulation we're only going to use the sonar and syft packages (and everything is going to be deployed locally on a test blockchain). 
Future simulations will incorporate mine and capsule for greater anonymity and automation.", "_____no_output_____" ], [ "### Imports and Convenience Functions", "_____no_output_____" ] ], [ [ "import warnings\nimport numpy as np\nimport phe as paillier\nfrom sonar.contracts import ModelRepository,Model\nfrom syft.he.paillier.keys import KeyPair\nfrom syft.nn.linear import LinearClassifier\nfrom sklearn.datasets import load_diabetes\n\ndef get_balance(account):\n return repo.web3.fromWei(repo.web3.eth.getBalance(account),'ether')\n\nwarnings.filterwarnings('ignore')", "_____no_output_____" ] ], [ [ "### Setting up the Experiment", "_____no_output_____" ] ], [ [ "# for the purpose of the simulation, we're going to split our dataset up amongst\n# the relevant simulated users\n\ndiabetes = load_diabetes()\ny = diabetes.target\nX = diabetes.data\n\nvalidation = (X[0:5],y[0:5])\nanonymous_diabetes_users = (X[6:],y[6:])\n\n# we're also going to initialize the model trainer smart contract, which in the\n# real world would already be on the blockchain (managing other contracts) before\n# the simulation begins\n\n# ATTENTION: copy paste the correct address (NOT THE DEFAULT SEEN HERE) from truffle migrate output.\nrepo = ModelRepository('0x6c7a23081b37e64adc5500c12ee851894d9fd500', ipfs_host='localhost', web3_host='localhost') # blockchain hosted model repository", "No account submitted... using default[2]\nConnected to OpenMined ModelRepository:0xb0f99be3d5c858efaabe19bcc54405f3858d48bc\n" ], [ "\n\n# we're going to set aside 10 accounts for our 42 patients\n# Let's go ahead and pair each data point with each patient's \n# address so that we know we don't get them confused\npatient_addresses = repo.web3.eth.accounts[1:10]\nanonymous_diabetics = list(zip(patient_addresses,\n anonymous_diabetes_users[0],\n anonymous_diabetes_users[1]))\n\n# we're going to set aside 1 account for Cure Diabetes Inc\ncure_diabetes_inc = repo.web3.eth.accounts[1]", "_____no_output_____" ] ], [ [ "## Step 1: Cure Diabetes Inc Initializes a Model and Provides a Bounty", "_____no_output_____" ] ], [ [ "pubkey,prikey = KeyPair().generate(n_length=1024)\ndiabetes_classifier = LinearClassifier(desc=\"DiabetesClassifier\",n_inputs=10,n_labels=1)\ninitial_error = diabetes_classifier.evaluate(validation[0],validation[1])\ndiabetes_classifier.encrypt(pubkey)\n\ndiabetes_model = Model(owner=cure_diabetes_inc,\n syft_obj = diabetes_classifier,\n bounty = 1,\n initial_error = initial_error,\n target_error = 10000\n )\nmodel_id = repo.submit_model(diabetes_model)", "_____no_output_____" ] ], [ [ "## Step 2: An Anonymous Patient Downloads the Model and Improves It", "_____no_output_____" ] ], [ [ "model_id", "_____no_output_____" ], [ "model = repo[model_id]", "_____no_output_____" ], [ "diabetic_address,input_data,target_data = anonymous_diabetics[0]", "_____no_output_____" ], [ "repo[model_id].submit_gradient(diabetic_address,input_data,target_data)", "_____no_output_____" ] ], [ [ "## Step 3: Cure Diabetes Inc. 
Evaluates the Gradient ", "_____no_output_____" ] ], [ [ "repo[model_id]", "_____no_output_____" ], [ "old_balance = get_balance(diabetic_address)\nprint(old_balance)", "98.999999999999645746\n" ], [ "new_error = repo[model_id].evaluate_gradient(cure_diabetes_inc,repo[model_id][0],prikey,pubkey,validation[0],validation[1])", "_____no_output_____" ], [ "new_error", "_____no_output_____" ], [ "new_balance = get_balance(diabetic_address)\nincentive = new_balance - old_balance\nprint(incentive)", "0.002461227149139725\n" ] ], [ [ "## Step 4: Rinse and Repeat", "_____no_output_____" ] ], [ [ "model", "_____no_output_____" ], [ "for i,(addr, input, target) in enumerate(anonymous_diabetics):\n try:\n \n model = repo[model_id]\n \n # patient is doing this\n model.submit_gradient(addr,input,target)\n \n # Cure Diabetes Inc does this\n old_balance = get_balance(addr)\n new_error = model.evaluate_gradient(cure_diabetes_inc,model[i+1],prikey,pubkey,validation[0],validation[1],alpha=2)\n print(\"new error = \"+str(new_error))\n incentive = round(get_balance(addr) - old_balance,5)\n print(\"incentive = \"+str(incentive))\n except:\n \"Connection Reset\"", "new error = 21637218\nincentive = 0.00473\nnew error = 21749031\nincentive = 0.00000\nnew error = 21594788\nincentive = 0.00196\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
cb8178c35941a1cc30689a0d9081b44942e978a5
38,257
ipynb
Jupyter Notebook
zJupyterNB_Script/ScriptPrecipitationAPI_Kai.ipynb
ZoeyYiZhou/141BProject.github.io
ad11d9fc9e2379a2fda78a545661a7e71826e01b
[ "CC0-1.0" ]
null
null
null
zJupyterNB_Script/ScriptPrecipitationAPI_Kai.ipynb
ZoeyYiZhou/141BProject.github.io
ad11d9fc9e2379a2fda78a545661a7e71826e01b
[ "CC0-1.0" ]
null
null
null
zJupyterNB_Script/ScriptPrecipitationAPI_Kai.ipynb
ZoeyYiZhou/141BProject.github.io
ad11d9fc9e2379a2fda78a545661a7e71826e01b
[ "CC0-1.0" ]
null
null
null
31.827787
1,264
0.411637
[ [ [ "from urllib2 import Request, urlopen\nfrom urlparse import urlparse, urlunparse\nimport requests, requests_cache\nimport pandas as pd\nimport json\nimport os\nimport numpy as np\n\nfrom matplotlib import pyplot as plt\nplt.style.use('ggplot')\n%matplotlib inline\n\n\nfrom urllib2 import Request, urlopen\n\nimport requests\n# In terminal: conda install requests\nimport requests_cache\n# In terminal: pip install requests_cache", "_____no_output_____" ], [ "pwd", "_____no_output_____" ], [ "os.chdir('/Users/kaijin/Downloads')", "_____no_output_____" ], [ "KAWEAH=pd.read_csv('San_Joaquin_Valley.csv')\nKAWEAH['count'] = pd.Series(1, index =KAWEAH.index )\nf = {'Acres':['sum'], 'WaterUsage':['mean'], 'UsageTotal':['sum'], 'count':['sum']}\nKAWEAH.groupby(['Subbasin_N', 'County_N', 'Year', 'CropName']).agg(f).head()\n", "_____no_output_____" ], [ "county_name=np.unique(KAWEAH[\"County_N\"])\n", "_____no_output_____" ] ], [ [ "Lets extract the zipcode according to the county name ", "_____no_output_____" ] ], [ [ "for i in range(9):\n print county_name[i]", "Fresno\nKern\nKings\nMadera\nMerced\nSacramento\nSan Joaquin\nStanislaus\nTulare\n" ], [ "zipcode=[93210,93263,93202,93638,93620,95641,95242,95326,93201]\nZipcodeList=[{ \"County_N\":county_name[i], \"zipcode\":zipcode[i] } for i in range(len(zipcode))]\nCOUNTYZIP=pd.DataFrame(ZipcodeList, columns=[\"County_N\", \"zipcode\"])\nCOUNTYZIP\n", "_____no_output_____" ] ], [ [ "Lets extract the zipcode and precipetation data from California Department of Water Resources \nhttp://et.water.ca.gov/Rest/Index", "_____no_output_____" ] ], [ [ "start=\"2010-01-01\"\ndef ndb_search(term,start,end,verbose = False):\n \"\"\"\n This takes all of the necessary parameters to form a query \n Input: key (data.gov API key, string), term (type, string)\n Output: JSON object\n \"\"\"\n url = \"http://et.water.ca.gov/api/data\"\n response = requests.get(url, params = {\n \"targets\": term,\n \"appKey\":\"90e36c84-3f23-48a3-becd-1865076a04fd\",\n \"startDate\":start,\n \"EndDate\":end,\n \"dataItems\": \"day-precip\" \n })\n response.raise_for_status() # check for errors\n if verbose:\n print response.url\n\n return response.json() # parse JSON", "_____no_output_____" ], [ "Tulare2010_Recode=Tulare2010[\"Data\"][\"Providers\"][0]['Records']\nlen(Tulare2010_Recode)", "_____no_output_____" ], [ "#note inside a county there may be multilple station that recode the data \n# we take the mean then times 365 to get one year rain \n# note the value is inches ", "_____no_output_____" ], [ "precip=[ Tulare2010_Recode[i]['DayPrecip']['Value'] for i in range(len(Tulare2010_Recode))]\nprecip2=np.array(precip).astype(np.float)\n#precip2", "_____no_output_____" ], [ "#WRITE INTO FUNCTIONS\ndef precip_cal(term,year,verbose = False):\n \"\"\"\n This takes zipcode and year gives precipitaion of a year \n Input: term (zipcode, int), year (year, int)\n Output: precipitation of a year and a certain county \n \"\"\"\n start=\"{}-01-01\".format(\"\".join(str(year)))\n end=\"{}-12-31\".format(\"\".join(str(year)))\n\n Tulare2010=ndb_search(term,start,end,verbose = False)\n Tulare2010_Recode=Tulare2010[\"Data\"][\"Providers\"][0]['Records']\n precip=[ Tulare2010_Recode[i]['DayPrecip']['Value'] for i in range(len(Tulare2010_Recode))]\n precip2=np.array(precip).astype(np.float)\n \n return np.nanmean(precip2)*365 # parse JSON", "_____no_output_____" ], [ "year=[2010,2011,2012,2013,2014,2015]\nZipcodeList=[{ \"County_N\":county_name[i], \"zipcode\":zipcode[i],\"year\":year[j]} for i in 
range(len(zipcode)) for j in range(6) ]\nZipcodeList\nCOUNTYYear=pd.DataFrame(ZipcodeList, columns=[\"County_N\", \"zipcode\",\"year\"])\nx=[precip_cal(COUNTYYear[\"zipcode\"][i],COUNTYYear[\"year\"][i]) for i in xrange(54) ]\n", "_____no_output_____" ], [ "COUNTYYear=pd.DataFrame(ZipcodeList, columns=[\"County_N\", \"zipcode\",\"year\"])\nCOUNTYYear[\"Precip\"]=x\nCOUNTYYear", "_____no_output_____" ], [ "COUNTYYear\n# unit for precip is inch \nnewtable=pd.merge(KAWEAH, COUNTYYear,how=\"right\")\nf = {'Acres':['sum'], 'WaterUsage':['mean'], 'UsageTotal':['sum'], 'count':['sum'],\"Precip\":['mean']}\ngrouped_data=newtable.groupby(['Subbasin_N', 'County_N', 'Year', 'CropName']).agg(f)\n", "_____no_output_____" ] ], [ [ "Crop value extract from \nhttps://www.nass.usda.gov/Statistics_by_State/California/Publications/California_Ag_Statistics/CAFieldCrops.pdf\ncwt is unit 100pounds \nhttps://www.nass.usda.gov/Statistics_by_State/California/Publications/", "_____no_output_____" ] ], [ [ "cropname=np.unique(KAWEAH[\"CropName\"])", "_____no_output_____" ], [ "cropname", "_____no_output_____" ], [ "for i in range(len(cropname)):\n print corpname[i]\nlen(cropname)", "Al Pist\nAlfalfa\nCorn\nCotton\nCucurb\nDryBean\nGrain\nOn Gar\nOth Dec\nOth Fld\nOth Trk \nPasture\nPotato\nPro Tom\nRice\nSafflwr\nSgrBeet\nSubtrop\nVine\n" ], [ "def avg(l):\n return sum(l, 0.0) / len(l)\navg([1*3,2*5])\n1628*140.00\navg([ 8.88*466 ,5.73*682 ,2.48*3390 ,19.00*391,8.33*780,14.10*429 ,5.30*664 , 1.76 *3710,1750*2.06 ]) ", "_____no_output_____" ], [ "# data from price value in 2013 \n# econ value is dollar per acers\n\nEcon_dict = { \"Al Pist\":2360*3.21, \n \"Alfalfa\":7.0*206.00,\n \"Corn\": 26.50*48.23,\n \"Cotton\":1628*140.00, \n \"Cucurb\":avg([260*20.20, 180*35.40, 200*25.90,580*13.00,300*16.00,330*15.60]),\n#Honeydew Melons 260 2,730,000 20.20 Cwt. Cwt. $/Cwt.\n#\"Squash\" 180 1,224,000 35.40 Cwt. Cwt. $/Cwt.\n#\"Cucumbers\" 200 760,000 25.90 Cwt. Cwt. $/Cwt.\n#\"Watermelons\" 580 5,800,000 13.00 Cwt. Cwt. $/Cwt.\n#\"Cantaloupes\" 300 12,750,000 16.00 Cwt. Cwt. $/Cwt.\n#\"Pumpkins 330 1,947,000 15.60 Cwt. Cwt. $/Cwt.\n\n \"DryBean\": 2320*56.80, \n \"Grain\":5.35*190.36,\n \"On Gar\":avg([ 400*13.20,165*60.30 ]), \n#\"Onions\" spring 400 2,720,000 13.20 summer 490 3,822,000 6.40 Onions, Summer Storage 399 11,700,000 9.11\n# \"Garlic\" 165 3,795,000 60.30\n\n \"Oth Dec\":avg([ 8.88*466 ,5.73*682 ,2.48*3390 ,19.00*391,8.33*780,14.10*429 ,5.30*664 , 1.76 *3710,1750*2.06 ]),\n#\"Apples\" 8.88 135,000 466 Tons Tons $/Ton\n#\"Apricots\" 5.73 54,400 682 Tons Tons $/Ton\n#\"Cherries\", 2.48 82,000 3,390 Tons Tons $/Ton\n#\"Pears\", 19.00 220,000 391 Tons Tons $/Ton\n#\"Nectarines\" 8.33 150,000 780 Tons Tons $/Ton\n#\"Peaches\", 14.10 648,000 429 Tons Tons $/Ton\n#\"Plums\", 5.30 95,400 664 Tons Tons $/Ton\n#\"Walnuts\" 1.76 492,000 3,710 #tones Tons $/Ton\n#\"Pecans\" 1,750 5,000 2.06 Pounds 1000pounds $/Pound\n \"Oth Fld\":avg([1296.00* 27.1, 17.00*37.56]),\n# sunflowers 1,296.00 751,500 27.1 Tons Tons $/Ton\n # Sorghum2009 17.00 646,000 37.56 Tons Tons $/Ton\n \"Oth Trk\":avg([320*29.60, 350*24.90, 32*152.00, 180*42.70, 107*248.00,425*41.70,385* 38.70 ,165*42.10,405*21.70 ]),\n#\"Carrots\" 320 20,000,000 29.60 Cwt. Cwt. $/Cwt.\n#\"Lettuce\" 350 33,600,000 24.90 Cwt. Cwt. $/Cwt.\n#\"Asparagus\" 32 368,000 152.00 Cwt. Cwt. $/Cwt.\n#\"Cauliflower\" 180 5,868,000 42.70 Cwt. Cwt. $/Cwt.\n# berries 107 514,000 248.00 Cwt. Cwt. $/Cwt.\n# \"Peppers Bell\", 425 8,465,000 41.70 Cwt. Cwt. 
$/Cwt.\n# peppers Chile 385 2,640,000 38.70 Cwt. Cwt. $/Cwt.\n# \"Broccoli\", 165 20,460,000 42.10 8 Cwt. Cwt. $/Cwt.\n# \"Cabbage\", 405 5,670,000 21.70 Cwt. Cwt. $/Cwt.\n \"Pasture\":0, \n \"Potato\":425*17.1, # Cwt. Cwt. $/Cwt.\n \"Pro Tom\":300*36.20, # Cwt. Cwt. $/Cwt \n \"Rice\":84.80*20.9, # Cwt. Cwt. $/Cwt\n \"Safflwr\": 2000.00*26.5, # Pounds Cwt. $/Cwt.\n \"SgrBeet\": 43.40*52.1, # Tons Tons $/Ton\n \"Subtrop\":avg([622*6.52,4.15*813 ]), \n# orange 622 109000000 6.52\n# Olives 4.15 166000 813 Tons Tons $/Ton\n\n \"Vine\":900*5.07}# Cartons 3/ Cartons $/Carton\n\n\nEcon_dict", "_____no_output_____" ] ], [ [ "find the 33rd percentile and 66th percentile of the water usage", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ] ]
cb817b16045bcb451cb43dfbec064d7bc8b143fb
67,725
ipynb
Jupyter Notebook
Sessions/Session-3.ipynb
rprasai/datascience
8f9422a648121e2cf1f0d855defc0f70b2c5b7ec
[ "CC-BY-4.0" ]
null
null
null
Sessions/Session-3.ipynb
rprasai/datascience
8f9422a648121e2cf1f0d855defc0f70b2c5b7ec
[ "CC-BY-4.0" ]
null
null
null
Sessions/Session-3.ipynb
rprasai/datascience
8f9422a648121e2cf1f0d855defc0f70b2c5b7ec
[ "CC-BY-4.0" ]
null
null
null
29.079004
591
0.430506
[ [ [ "# Session 3\n---", "_____no_output_____" ] ], [ [ "import numpy as np", "_____no_output_____" ], [ "ar = np.arange(3, 32)", "_____no_output_____" ], [ "ar", "_____no_output_____" ] ], [ [ "np.any\nnp.all\nnp.where", "_____no_output_____" ] ], [ [ "help(np.where)", "Help on built-in function where in module numpy.core.multiarray:\n\nwhere(...)\n where(condition, [x, y])\n \n Return elements, either from `x` or `y`, depending on `condition`.\n \n If only `condition` is given, return ``condition.nonzero()``.\n \n Parameters\n ----------\n condition : array_like, bool\n When True, yield `x`, otherwise yield `y`.\n x, y : array_like, optional\n Values from which to choose. `x`, `y` and `condition` need to be\n broadcastable to some shape.\n \n Returns\n -------\n out : ndarray or tuple of ndarrays\n If both `x` and `y` are specified, the output array contains\n elements of `x` where `condition` is True, and elements from\n `y` elsewhere.\n \n If only `condition` is given, return the tuple\n ``condition.nonzero()``, the indices where `condition` is True.\n \n See Also\n --------\n nonzero, choose\n \n Notes\n -----\n If `x` and `y` are given and input arrays are 1-D, `where` is\n equivalent to::\n \n [xv if c else yv for (c,xv,yv) in zip(condition,x,y)]\n \n Examples\n --------\n >>> np.where([[True, False], [True, True]],\n ... [[1, 2], [3, 4]],\n ... [[9, 8], [7, 6]])\n array([[1, 8],\n [3, 4]])\n \n >>> np.where([[0, 1], [1, 0]])\n (array([0, 1]), array([1, 0]))\n \n >>> x = np.arange(9.).reshape(3, 3)\n >>> np.where( x > 5 )\n (array([2, 2, 2]), array([0, 1, 2]))\n >>> x[np.where( x > 3.0 )] # Note: result is 1D.\n array([ 4., 5., 6., 7., 8.])\n >>> np.where(x < 5, x, -1) # Note: broadcasting.\n array([[ 0., 1., 2.],\n [ 3., 4., -1.],\n [-1., -1., -1.]])\n \n Find the indices of elements of `x` that are in `goodvalues`.\n \n >>> goodvalues = [3, 4, 7]\n >>> ix = np.isin(x, goodvalues)\n >>> ix\n array([[False, False, False],\n [ True, True, False],\n [False, True, False]], dtype=bool)\n >>> np.where(ix)\n (array([1, 1, 2]), array([0, 1, 1]))\n\n" ], [ "ar", "_____no_output_____" ], [ "np.where(ar < 10, 15, 18)", "_____no_output_____" ] ], [ [ "```python\nx if condition else y\n```", "_____no_output_____" ] ], [ [ "np.where(ar < 10, 'Y', 'N')", "_____no_output_____" ], [ "np.where(ar < 10, 'Y', 18)", "_____no_output_____" ], [ "np.where(ar < 10, 15)", "_____no_output_____" ], [ "np.where(ar < 10)", "_____no_output_____" ], [ "type(np.where(ar < 10))", "_____no_output_____" ], [ "help(np.any)", "Help on function any in module numpy.core.fromnumeric:\n\nany(a, axis=None, out=None, keepdims=<class 'numpy._globals._NoValue'>)\n Test whether any array element along a given axis evaluates to True.\n \n Returns single boolean unless `axis` is not ``None``\n \n Parameters\n ----------\n a : array_like\n Input array or object that can be converted to an array.\n axis : None or int or tuple of ints, optional\n Axis or axes along which a logical OR reduction is performed.\n The default (`axis` = `None`) is to perform a logical OR over all\n the dimensions of the input array. `axis` may be negative, in\n which case it counts from the last to the first axis.\n \n .. versionadded:: 1.7.0\n \n If this is a tuple of ints, a reduction is performed on multiple\n axes, instead of a single axis or all the axes as before.\n out : ndarray, optional\n Alternate output array in which to place the result. 
It must have\n the same shape as the expected output and its type is preserved\n (e.g., if it is of type float, then it will remain so, returning\n 1.0 for True and 0.0 for False, regardless of the type of `a`).\n See `doc.ufuncs` (Section \"Output arguments\") for details.\n \n keepdims : bool, optional\n If this is set to True, the axes which are reduced are left\n in the result as dimensions with size one. With this option,\n the result will broadcast correctly against the input array.\n \n If the default value is passed, then `keepdims` will not be\n passed through to the `any` method of sub-classes of\n `ndarray`, however any non-default value will be. If the\n sub-classes `sum` method does not implement `keepdims` any\n exceptions will be raised.\n \n Returns\n -------\n any : bool or ndarray\n A new boolean or `ndarray` is returned unless `out` is specified,\n in which case a reference to `out` is returned.\n \n See Also\n --------\n ndarray.any : equivalent method\n \n all : Test whether all elements along a given axis evaluate to True.\n \n Notes\n -----\n Not a Number (NaN), positive infinity and negative infinity evaluate\n to `True` because these are not equal to zero.\n \n Examples\n --------\n >>> np.any([[True, False], [True, True]])\n True\n \n >>> np.any([[True, False], [False, False]], axis=0)\n array([ True, False], dtype=bool)\n \n >>> np.any([-1, 0, 5])\n True\n \n >>> np.any(np.nan)\n True\n \n >>> o=np.array([False])\n >>> z=np.any([-1, 4, 5], out=o)\n >>> z, o\n (array([ True], dtype=bool), array([ True], dtype=bool))\n >>> # Check now that z is a reference to o\n >>> z is o\n True\n >>> id(z), id(o) # identity of z and o # doctest: +SKIP\n (191614240, 191614240)\n\n" ], [ "np.any([0, 1, 1, 1, 0])", "_____no_output_____" ], [ "np.any([False, True, False, True, True])", "_____no_output_____" ], [ "np.any([[True, False], [True, False]])", "_____no_output_____" ], [ "np.any([[True, True], [False, False]])", "_____no_output_____" ], [ "np.any([[True, True], [False, False]], axis=1)", "_____no_output_____" ], [ "np.any([[True, True], [False, False]], axis=0)", "_____no_output_____" ], [ "np.all([[True, False], [True, False]])", "_____no_output_____" ], [ "np.all([[True, False], [True, False]], axis=0)", "_____no_output_____" ], [ "np.all([[True, False], [True, False]], axis=1)", "_____no_output_____" ], [ "np.all(~np.array([False, False, False]))", "_____no_output_____" ], [ "not np.all(np.array([False, False, False]))", "_____no_output_____" ], [ "~np.array([False, False, False])", "_____no_output_____" ], [ "np.all(np.array([False, True, False]))", "_____no_output_____" ], [ "not np.all(np.array([False, True, False]))", "_____no_output_____" ], [ "np.all(~np.array([False, True, False]))", "_____no_output_____" ], [ "np.all(not np.array([False, True, False]))", "_____no_output_____" ], [ "~np.all(np.array([False, False, False]))", "_____no_output_____" ], [ "ar2 = np.arange(0, 12).reshape((3, 4))", "_____no_output_____" ], [ "ar2", "_____no_output_____" ], [ "ar2.sum()", "_____no_output_____" ], [ "ar2.sum(axis=1)", "_____no_output_____" ], [ "ar2.sum(axis=0)", "_____no_output_____" ], [ "ar3 = np.arange(0,12).reshape((2,2,3))", "_____no_output_____" ], [ "ar3", "_____no_output_____" ], [ "ar3.sum(axis=0)", "_____no_output_____" ], [ "ar3.sum(axis=1)", "_____no_output_____" ], [ "ar3.sum(axis=2)", "_____no_output_____" ], [ "ar4 = np.arange(0, 16).reshape((2,2,2,2))", "_____no_output_____" ], [ "ar4", "_____no_output_____" ], [ "ar4.sum(axis=0)", "_____no_output_____" 
], [ "ar4.sum(axis=1)", "_____no_output_____" ], [ "ar4.sum(axis=2)", "_____no_output_____" ], [ "ar4", "_____no_output_____" ], [ "ar4.sum(axis=3)", "_____no_output_____" ], [ "ar4.sum(axis=3).shape", "_____no_output_____" ], [ "np.add.reduce(np.array([1, 2, 3]))", "_____no_output_____" ], [ "np.add(np.array([1, 2, 3]), 2)", "_____no_output_____" ], [ "help(np.prod)", "Help on function prod in module numpy.core.fromnumeric:\n\nprod(a, axis=None, dtype=None, out=None, keepdims=<class 'numpy._globals._NoValue'>)\n Return the product of array elements over a given axis.\n \n Parameters\n ----------\n a : array_like\n Input data.\n axis : None or int or tuple of ints, optional\n Axis or axes along which a product is performed. The default,\n axis=None, will calculate the product of all the elements in the\n input array. If axis is negative it counts from the last to the\n first axis.\n \n .. versionadded:: 1.7.0\n \n If axis is a tuple of ints, a product is performed on all of the\n axes specified in the tuple instead of a single axis or all the\n axes as before.\n dtype : dtype, optional\n The type of the returned array, as well as of the accumulator in\n which the elements are multiplied. The dtype of `a` is used by\n default unless `a` has an integer dtype of less precision than the\n default platform integer. In that case, if `a` is signed then the\n platform integer is used while if `a` is unsigned then an unsigned\n integer of the same precision as the platform integer is used.\n out : ndarray, optional\n Alternative output array in which to place the result. It must have\n the same shape as the expected output, but the type of the output\n values will be cast if necessary.\n keepdims : bool, optional\n If this is set to True, the axes which are reduced are left in the\n result as dimensions with size one. With this option, the result\n will broadcast correctly against the input array.\n \n If the default value is passed, then `keepdims` will not be\n passed through to the `prod` method of sub-classes of\n `ndarray`, however any non-default value will be. If the\n sub-classes `sum` method does not implement `keepdims` any\n exceptions will be raised.\n \n Returns\n -------\n product_along_axis : ndarray, see `dtype` parameter above.\n An array shaped as `a` but with the specified axis removed.\n Returns a reference to `out` if specified.\n \n See Also\n --------\n ndarray.prod : equivalent method\n numpy.doc.ufuncs : Section \"Output arguments\"\n \n Notes\n -----\n Arithmetic is modular when using integer types, and no error is\n raised on overflow. 
That means that, on a 32-bit platform:\n \n >>> x = np.array([536870910, 536870910, 536870910, 536870910])\n >>> np.prod(x) #random\n 16\n \n The product of an empty array is the neutral element 1:\n \n >>> np.prod([])\n 1.0\n \n Examples\n --------\n By default, calculate the product of all elements:\n \n >>> np.prod([1.,2.])\n 2.0\n \n Even when the input array is two-dimensional:\n \n >>> np.prod([[1.,2.],[3.,4.]])\n 24.0\n \n But we can also specify the axis over which to multiply:\n \n >>> np.prod([[1.,2.],[3.,4.]], axis=1)\n array([ 2., 12.])\n \n If the type of `x` is unsigned, then the output type is\n the unsigned platform integer:\n \n >>> x = np.array([1, 2, 3], dtype=np.uint8)\n >>> np.prod(x).dtype == np.uint\n True\n \n If `x` is of a signed integer type, then the output type\n is the default platform integer:\n \n >>> x = np.array([1, 2, 3], dtype=np.int8)\n >>> np.prod(x).dtype == np.int\n True\n\n" ], [ "help(np.log10)", "Help on ufunc object:\n\nlog10 = class ufunc(builtins.object)\n | Functions that operate element by element on whole arrays.\n | \n | To see the documentation for a specific ufunc, use `info`. For\n | example, ``np.info(np.sin)``. Because ufuncs are written in C\n | (for speed) and linked into Python with NumPy's ufunc facility,\n | Python's help() function finds this page whenever help() is called\n | on a ufunc.\n | \n | A detailed explanation of ufuncs can be found in the docs for :ref:`ufuncs`.\n | \n | Calling ufuncs:\n | ===============\n | \n | op(*x[, out], where=True, **kwargs)\n | Apply `op` to the arguments `*x` elementwise, broadcasting the arguments.\n | \n | The broadcasting rules are:\n | \n | * Dimensions of length 1 may be prepended to either array.\n | * Arrays may be repeated along dimensions of length 1.\n | \n | Parameters\n | ----------\n | *x : array_like\n | Input arrays.\n | out : ndarray, None, or tuple of ndarray and None, optional\n | Alternate array object(s) in which to put the result; if provided, it\n | must have a shape that the inputs broadcast to. A tuple of arrays\n | (possible only as a keyword argument) must have length equal to the\n | number of outputs; use `None` for outputs to be allocated by the ufunc.\n | where : array_like, optional\n | Values of True indicate to calculate the ufunc at that position, values\n | of False indicate to leave the value in the output alone.\n | **kwargs\n | For other keyword-only arguments, see the :ref:`ufunc docs <ufuncs.kwargs>`.\n | \n | Returns\n | -------\n | r : ndarray or tuple of ndarray\n | `r` will have the shape that the arrays in `x` broadcast to; if `out` is\n | provided, `r` will be equal to `out`. 
If the function has more than one\n | output, then the result will be a tuple of arrays.\n | \n | Methods defined here:\n | \n | __call__(self, /, *args, **kwargs)\n | Call self as a function.\n | \n | __repr__(self, /)\n | Return repr(self).\n | \n | __str__(self, /)\n | Return str(self).\n | \n | accumulate(...)\n | accumulate(array, axis=0, dtype=None, out=None, keepdims=None)\n | \n | Accumulate the result of applying the operator to all elements.\n | \n | For a one-dimensional array, accumulate produces results equivalent to::\n | \n | r = np.empty(len(A))\n | t = op.identity # op = the ufunc being applied to A's elements\n | for i in range(len(A)):\n | t = op(t, A[i])\n | r[i] = t\n | return r\n | \n | For example, add.accumulate() is equivalent to np.cumsum().\n | \n | For a multi-dimensional array, accumulate is applied along only one\n | axis (axis zero by default; see Examples below) so repeated use is\n | necessary if one wants to accumulate over multiple axes.\n | \n | Parameters\n | ----------\n | array : array_like\n | The array to act on.\n | axis : int, optional\n | The axis along which to apply the accumulation; default is zero.\n | dtype : data-type code, optional\n | The data-type used to represent the intermediate results. Defaults\n | to the data-type of the output array if such is provided, or the\n | the data-type of the input array if no output array is provided.\n | out : ndarray, None, or tuple of ndarray and None, optional\n | A location into which the result is stored. If not provided or `None`,\n | a freshly-allocated array is returned. For consistency with\n | :ref:`ufunc.__call__`, if given as a keyword, this may be wrapped in a\n | 1-element tuple.\n | \n | .. versionchanged:: 1.13.0\n | Tuples are allowed for keyword argument.\n | keepdims : bool\n | Has no effect. Deprecated, and will be removed in future.\n | \n | Returns\n | -------\n | r : ndarray\n | The accumulated values. If `out` was supplied, `r` is a reference to\n | `out`.\n | \n | Examples\n | --------\n | 1-D array examples:\n | \n | >>> np.add.accumulate([2, 3, 5])\n | array([ 2, 5, 10])\n | >>> np.multiply.accumulate([2, 3, 5])\n | array([ 2, 6, 30])\n | \n | 2-D array examples:\n | \n | >>> I = np.eye(2)\n | >>> I\n | array([[ 1., 0.],\n | [ 0., 1.]])\n | \n | Accumulate along axis 0 (rows), down columns:\n | \n | >>> np.add.accumulate(I, 0)\n | array([[ 1., 0.],\n | [ 1., 1.]])\n | >>> np.add.accumulate(I) # no axis specified = axis zero\n | array([[ 1., 0.],\n | [ 1., 1.]])\n | \n | Accumulate along axis 1 (columns), through rows:\n | \n | >>> np.add.accumulate(I, 1)\n | array([[ 1., 1.],\n | [ 0., 1.]])\n | \n | at(...)\n | at(a, indices, b=None)\n | \n | Performs unbuffered in place operation on operand 'a' for elements\n | specified by 'indices'. For addition ufunc, this method is equivalent to\n | `a[indices] += b`, except that results are accumulated for elements that\n | are indexed more than once. For example, `a[[0,0]] += 1` will only\n | increment the first element once because of buffering, whereas\n | `add.at(a, [0,0], 1)` will increment the first element twice.\n | \n | .. versionadded:: 1.8.0\n | \n | Parameters\n | ----------\n | a : array_like\n | The array to perform in place operation on.\n | indices : array_like or tuple\n | Array like index object or slice object for indexing into first\n | operand. 
If first operand has multiple dimensions, indices can be a\n | tuple of array like index objects or slice objects.\n | b : array_like\n | Second operand for ufuncs requiring two operands. Operand must be\n | broadcastable over first operand after indexing or slicing.\n | \n | Examples\n | --------\n | Set items 0 and 1 to their negative values:\n | \n | >>> a = np.array([1, 2, 3, 4])\n | >>> np.negative.at(a, [0, 1])\n | >>> print(a)\n | array([-1, -2, 3, 4])\n | \n | ::\n | \n | Increment items 0 and 1, and increment item 2 twice:\n | \n | >>> a = np.array([1, 2, 3, 4])\n | >>> np.add.at(a, [0, 1, 2, 2], 1)\n | >>> print(a)\n | array([2, 3, 5, 4])\n | \n | ::\n | \n | Add items 0 and 1 in first array to second array,\n | and store results in first array:\n | \n | >>> a = np.array([1, 2, 3, 4])\n | >>> b = np.array([1, 2])\n | >>> np.add.at(a, [0, 1], b)\n | >>> print(a)\n | array([2, 4, 3, 4])\n | \n | outer(...)\n | outer(A, B, **kwargs)\n | \n | Apply the ufunc `op` to all pairs (a, b) with a in `A` and b in `B`.\n | \n | Let ``M = A.ndim``, ``N = B.ndim``. Then the result, `C`, of\n | ``op.outer(A, B)`` is an array of dimension M + N such that:\n | \n | .. math:: C[i_0, ..., i_{M-1}, j_0, ..., j_{N-1}] =\n | op(A[i_0, ..., i_{M-1}], B[j_0, ..., j_{N-1}])\n | \n | For `A` and `B` one-dimensional, this is equivalent to::\n | \n | r = empty(len(A),len(B))\n | for i in range(len(A)):\n | for j in range(len(B)):\n | r[i,j] = op(A[i], B[j]) # op = ufunc in question\n | \n | Parameters\n | ----------\n | A : array_like\n | First array\n | B : array_like\n | Second array\n | kwargs : any\n | Arguments to pass on to the ufunc. Typically `dtype` or `out`.\n | \n | Returns\n | -------\n | r : ndarray\n | Output array\n | \n | See Also\n | --------\n | numpy.outer\n | \n | Examples\n | --------\n | >>> np.multiply.outer([1, 2, 3], [4, 5, 6])\n | array([[ 4, 5, 6],\n | [ 8, 10, 12],\n | [12, 15, 18]])\n | \n | A multi-dimensional example:\n | \n | >>> A = np.array([[1, 2, 3], [4, 5, 6]])\n | >>> A.shape\n | (2, 3)\n | >>> B = np.array([[1, 2, 3, 4]])\n | >>> B.shape\n | (1, 4)\n | >>> C = np.multiply.outer(A, B)\n | >>> C.shape; C\n | (2, 3, 1, 4)\n | array([[[[ 1, 2, 3, 4]],\n | [[ 2, 4, 6, 8]],\n | [[ 3, 6, 9, 12]]],\n | [[[ 4, 8, 12, 16]],\n | [[ 5, 10, 15, 20]],\n | [[ 6, 12, 18, 24]]]])\n | \n | reduce(...)\n | reduce(a, axis=0, dtype=None, out=None, keepdims=False)\n | \n | Reduces `a`'s dimension by one, by applying ufunc along one axis.\n | \n | Let :math:`a.shape = (N_0, ..., N_i, ..., N_{M-1})`. Then\n | :math:`ufunc.reduce(a, axis=i)[k_0, ..,k_{i-1}, k_{i+1}, .., k_{M-1}]` =\n | the result of iterating `j` over :math:`range(N_i)`, cumulatively applying\n | ufunc to each :math:`a[k_0, ..,k_{i-1}, j, k_{i+1}, .., k_{M-1}]`.\n | For a one-dimensional array, reduce produces results equivalent to:\n | ::\n | \n | r = op.identity # op = ufunc\n | for i in range(len(A)):\n | r = op(r, A[i])\n | return r\n | \n | For example, add.reduce() is equivalent to sum().\n | \n | Parameters\n | ----------\n | a : array_like\n | The array to act on.\n | axis : None or int or tuple of ints, optional\n | Axis or axes along which a reduction is performed.\n | The default (`axis` = 0) is perform a reduction over the first\n | dimension of the input array. `axis` may be negative, in\n | which case it counts from the last to the first axis.\n | \n | .. 
versionadded:: 1.7.0\n | \n | If this is `None`, a reduction is performed over all the axes.\n | If this is a tuple of ints, a reduction is performed on multiple\n | axes, instead of a single axis or all the axes as before.\n | \n | For operations which are either not commutative or not associative,\n | doing a reduction over multiple axes is not well-defined. The\n | ufuncs do not currently raise an exception in this case, but will\n | likely do so in the future.\n | dtype : data-type code, optional\n | The type used to represent the intermediate results. Defaults\n | to the data-type of the output array if this is provided, or\n | the data-type of the input array if no output array is provided.\n | out : ndarray, None, or tuple of ndarray and None, optional\n | A location into which the result is stored. If not provided or `None`,\n | a freshly-allocated array is returned. For consistency with\n | :ref:`ufunc.__call__`, if given as a keyword, this may be wrapped in a\n | 1-element tuple.\n | \n | .. versionchanged:: 1.13.0\n | Tuples are allowed for keyword argument.\n | keepdims : bool, optional\n | If this is set to True, the axes which are reduced are left\n | in the result as dimensions with size one. With this option,\n | the result will broadcast correctly against the original `arr`.\n | \n | .. versionadded:: 1.7.0\n | \n | Returns\n | -------\n | r : ndarray\n | The reduced array. If `out` was supplied, `r` is a reference to it.\n | \n | Examples\n | --------\n | >>> np.multiply.reduce([2,3,5])\n | 30\n | \n | A multi-dimensional array example:\n | \n | >>> X = np.arange(8).reshape((2,2,2))\n | >>> X\n | array([[[0, 1],\n | [2, 3]],\n | [[4, 5],\n | [6, 7]]])\n | >>> np.add.reduce(X, 0)\n | array([[ 4, 6],\n | [ 8, 10]])\n | >>> np.add.reduce(X) # confirm: default axis value is 0\n | array([[ 4, 6],\n | [ 8, 10]])\n | >>> np.add.reduce(X, 1)\n | array([[ 2, 4],\n | [10, 12]])\n | >>> np.add.reduce(X, 2)\n | array([[ 1, 5],\n | [ 9, 13]])\n | \n | reduceat(...)\n | reduceat(a, indices, axis=0, dtype=None, out=None)\n | \n | Performs a (local) reduce with specified slices over a single axis.\n | \n | For i in ``range(len(indices))``, `reduceat` computes\n | ``ufunc.reduce(a[indices[i]:indices[i+1]])``, which becomes the i-th\n | generalized \"row\" parallel to `axis` in the final result (i.e., in a\n | 2-D array, for example, if `axis = 0`, it becomes the i-th row, but if\n | `axis = 1`, it becomes the i-th column). There are three exceptions to this:\n | \n | * when ``i = len(indices) - 1`` (so for the last index),\n | ``indices[i+1] = a.shape[axis]``.\n | * if ``indices[i] >= indices[i + 1]``, the i-th generalized \"row\" is\n | simply ``a[indices[i]]``.\n | * if ``indices[i] >= len(a)`` or ``indices[i] < 0``, an error is raised.\n | \n | The shape of the output depends on the size of `indices`, and may be\n | larger than `a` (this happens if ``len(indices) > a.shape[axis]``).\n | \n | Parameters\n | ----------\n | a : array_like\n | The array to act on.\n | indices : array_like\n | Paired indices, comma separated (not colon), specifying slices to\n | reduce.\n | axis : int, optional\n | The axis along which to apply the reduceat.\n | dtype : data-type code, optional\n | The type used to represent the intermediate results. Defaults\n | to the data type of the output array if this is provided, or\n | the data type of the input array if no output array is provided.\n | out : ndarray, None, or tuple of ndarray and None, optional\n | A location into which the result is stored. 
If not provided or `None`,\n | a freshly-allocated array is returned. For consistency with\n | :ref:`ufunc.__call__`, if given as a keyword, this may be wrapped in a\n | 1-element tuple.\n | \n | .. versionchanged:: 1.13.0\n | Tuples are allowed for keyword argument.\n | \n | Returns\n | -------\n | r : ndarray\n | The reduced values. If `out` was supplied, `r` is a reference to\n | `out`.\n | \n | Notes\n | -----\n | A descriptive example:\n | \n | If `a` is 1-D, the function `ufunc.accumulate(a)` is the same as\n | ``ufunc.reduceat(a, indices)[::2]`` where `indices` is\n | ``range(len(array) - 1)`` with a zero placed\n | in every other element:\n | ``indices = zeros(2 * len(a) - 1)``, ``indices[1::2] = range(1, len(a))``.\n | \n | Don't be fooled by this attribute's name: `reduceat(a)` is not\n | necessarily smaller than `a`.\n | \n | Examples\n | --------\n | To take the running sum of four successive values:\n | \n | >>> np.add.reduceat(np.arange(8),[0,4, 1,5, 2,6, 3,7])[::2]\n | array([ 6, 10, 14, 18])\n | \n | A 2-D example:\n | \n | >>> x = np.linspace(0, 15, 16).reshape(4,4)\n | >>> x\n | array([[ 0., 1., 2., 3.],\n | [ 4., 5., 6., 7.],\n | [ 8., 9., 10., 11.],\n | [ 12., 13., 14., 15.]])\n | \n | ::\n | \n | # reduce such that the result has the following five rows:\n | # [row1 + row2 + row3]\n | # [row4]\n | # [row2]\n | # [row3]\n | # [row1 + row2 + row3 + row4]\n | \n | >>> np.add.reduceat(x, [0, 3, 1, 2, 0])\n | array([[ 12., 15., 18., 21.],\n | [ 12., 13., 14., 15.],\n | [ 4., 5., 6., 7.],\n | [ 8., 9., 10., 11.],\n | [ 24., 28., 32., 36.]])\n | \n | ::\n | \n | # reduce such that result has the following two columns:\n | # [col1 * col2 * col3, col4]\n | \n | >>> np.multiply.reduceat(x, [0, 3], 1)\n | array([[ 0., 3.],\n | [ 120., 7.],\n | [ 720., 11.],\n | [ 2184., 15.]])\n | \n | ----------------------------------------------------------------------\n | Data descriptors defined here:\n | \n | identity\n | The identity value.\n | \n | Data attribute containing the identity element for the ufunc, if it has one.\n | If it does not, the attribute value is None.\n | \n | Examples\n | --------\n | >>> np.add.identity\n | 0\n | >>> np.multiply.identity\n | 1\n | >>> np.power.identity\n | 1\n | >>> print(np.exp.identity)\n | None\n | \n | nargs\n | The number of arguments.\n | \n | Data attribute containing the number of arguments the ufunc takes, including\n | optional ones.\n | \n | Notes\n | -----\n | Typically this value will be one more than what you might expect because all\n | ufuncs take the optional \"out\" argument.\n | \n | Examples\n | --------\n | >>> np.add.nargs\n | 3\n | >>> np.multiply.nargs\n | 3\n | >>> np.power.nargs\n | 3\n | >>> np.exp.nargs\n | 2\n | \n | nin\n | The number of inputs.\n | \n | Data attribute containing the number of arguments the ufunc treats as input.\n | \n | Examples\n | --------\n | >>> np.add.nin\n | 2\n | >>> np.multiply.nin\n | 2\n | >>> np.power.nin\n | 2\n | >>> np.exp.nin\n | 1\n | \n | nout\n | The number of outputs.\n | \n | Data attribute containing the number of arguments the ufunc treats as output.\n | \n | Notes\n | -----\n | Since all ufuncs can take output arguments, this will always be (at least) 1.\n | \n | Examples\n | --------\n | >>> np.add.nout\n | 1\n | >>> np.multiply.nout\n | 1\n | >>> np.power.nout\n | 1\n | >>> np.exp.nout\n | 1\n | \n | ntypes\n | The number of types.\n | \n | The number of numerical NumPy types - of which there are 18 total - on which\n | the ufunc can operate.\n | \n | See Also\n | --------\n | 
numpy.ufunc.types\n | \n | Examples\n | --------\n | >>> np.add.ntypes\n | 18\n | >>> np.multiply.ntypes\n | 18\n | >>> np.power.ntypes\n | 17\n | >>> np.exp.ntypes\n | 7\n | >>> np.remainder.ntypes\n | 14\n | \n | signature\n | \n | types\n | Returns a list with types grouped input->output.\n | \n | Data attribute listing the data-type \"Domain-Range\" groupings the ufunc can\n | deliver. The data-types are given using the character codes.\n | \n | See Also\n | --------\n | numpy.ufunc.ntypes\n | \n | Examples\n | --------\n | >>> np.add.types\n | ['??->?', 'bb->b', 'BB->B', 'hh->h', 'HH->H', 'ii->i', 'II->I', 'll->l',\n | 'LL->L', 'qq->q', 'QQ->Q', 'ff->f', 'dd->d', 'gg->g', 'FF->F', 'DD->D',\n | 'GG->G', 'OO->O']\n | \n | >>> np.multiply.types\n | ['??->?', 'bb->b', 'BB->B', 'hh->h', 'HH->H', 'ii->i', 'II->I', 'll->l',\n | 'LL->L', 'qq->q', 'QQ->Q', 'ff->f', 'dd->d', 'gg->g', 'FF->F', 'DD->D',\n | 'GG->G', 'OO->O']\n | \n | >>> np.power.types\n | ['bb->b', 'BB->B', 'hh->h', 'HH->H', 'ii->i', 'II->I', 'll->l', 'LL->L',\n | 'qq->q', 'QQ->Q', 'ff->f', 'dd->d', 'gg->g', 'FF->F', 'DD->D', 'GG->G',\n | 'OO->O']\n | \n | >>> np.exp.types\n | ['f->f', 'd->d', 'g->g', 'F->F', 'D->D', 'G->G', 'O->O']\n | \n | >>> np.remainder.types\n | ['bb->b', 'BB->B', 'hh->h', 'HH->H', 'ii->i', 'II->I', 'll->l', 'LL->L',\n | 'qq->q', 'QQ->Q', 'ff->f', 'dd->d', 'gg->g', 'OO->O']\n\n" ], [ "np.info(np.log10)", "log10(x, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj])\n\nReturn the base 10 logarithm of the input array, element-wise.\n\nParameters\n----------\nx : array_like\n Input values.\nout : ndarray, None, or tuple of ndarray and None, optional\n A location into which the result is stored. If provided, it must have\n a shape that the inputs broadcast to. If not provided or `None`,\n a freshly-allocated array is returned. A tuple (possible only as a\n keyword argument) must have length equal to the number of outputs.\nwhere : array_like, optional\n Values of True indicate to calculate the ufunc at that position, values\n of False indicate to leave the value in the output alone.\n**kwargs\n For other keyword-only arguments, see the\n :ref:`ufunc docs <ufuncs.kwargs>`.\n\nReturns\n-------\ny : ndarray\n The logarithm to the base 10 of `x`, element-wise. NaNs are\n returned where x is negative.\n\nSee Also\n--------\nemath.log10\n\nNotes\n-----\nLogarithm is a multivalued function: for each `x` there is an infinite\nnumber of `z` such that `10**z = x`. The convention is to return the\n`z` whose imaginary part lies in `[-pi, pi]`.\n\nFor real-valued input data types, `log10` always returns real output.\nFor each value that cannot be expressed as a real number or infinity,\nit yields ``nan`` and sets the `invalid` floating point error flag.\n\nFor complex-valued input, `log10` is a complex analytical function that\nhas a branch cut `[-inf, 0]` and is continuous from above on it.\n`log10` handles the floating-point negative zero as an infinitesimal\nnegative number, conforming to the C99 standard.\n\nReferences\n----------\n.. [1] M. Abramowitz and I.A. Stegun, \"Handbook of Mathematical Functions\",\n 10th printing, 1964, pp. 67. http://www.math.sfu.ca/~cbm/aands/\n.. [2] Wikipedia, \"Logarithm\". 
http://en.wikipedia.org/wiki/Logarithm\n\nExamples\n--------\n>>> np.log10([1e-15, -3.])\narray([-15., NaN])\n" ], [ "np.log10([567])", "_____no_output_____" ], [ "b = np.float64(567)\nb.log10", "_____no_output_____" ], [ "np.log10(1287648359798234792387492384923849243)", "_____no_output_____" ], [ "np.log10(np.float64(1287648359798234792387492384923849243))", "_____no_output_____" ], [ "np.int64", "_____no_output_____" ], [ "np.log10(np.float64(29348792384921384792384921387492834928374928734928734928734928734928734987234987234987239487293487293487293847))", "_____no_output_____" ], [ "def add_numbers(x):\n return x + x", "_____no_output_____" ], [ "(lambda x: x + x)(2)", "_____no_output_____" ], [ "add_two_numbers = lambda x: x + x", "_____no_output_____" ], [ "add_two_numbers(2)", "_____no_output_____" ], [ "np.sin(np.array([2, 3]))", "_____no_output_____" ], [ "add_three = lambda x: x + 3", "_____no_output_____" ], [ "add_three(np.array([2, 3]))", "_____no_output_____" ], [ "add_number = lambda x, y: x + y", "_____no_output_____" ], [ "add_number(np.array([2, 3]), 5)", "_____no_output_____" ] ], [ [ "### Notes\n\nnumpy has __vectorize__ function just like __map__ python function\n\n```python\nimport numpy as np\n\nvfunc = np.vectorize(lambda x, y: x % y == 0)\n\nvfunc(np.array([2, 3, 4, 5, 6, 7]), 2)\n\n>>> np.array([True, False, True, False, True, False])\n```\n\nAbove example vfunc returns array, *vfunc is much like np.sin/np.cos etc*\n\n\n```python\ndef sum(a, b):\n return a + b\n\nlambda a, b: a + b\n```\n\n**reduce**\n```python\n\nnp.add.reduce(np.array([1, 2, 3]))\n>>> 6\n```", "_____no_output_____" ], [ "### Find primes numbers between 1 and 1000, ( use only numpy array, slicing, masking etc. )", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ] ]
cb817ee55eb3ac69d9b8c800ab518ed4dc8b023f
21,860
ipynb
Jupyter Notebook
NSCI801_Reproducibility.ipynb
jouterleys/NSCI801-QuantNeuro
dfc7fc9c22d8a50c44bcf65772ae173b83f6ffa3
[ "CC-BY-4.0" ]
102
2020-01-09T00:13:32.000Z
2022-02-23T01:26:01.000Z
NSCI801_Reproducibility.ipynb
jouterleys/NSCI801-QuantNeuro
dfc7fc9c22d8a50c44bcf65772ae173b83f6ffa3
[ "CC-BY-4.0" ]
null
null
null
NSCI801_Reproducibility.ipynb
jouterleys/NSCI801-QuantNeuro
dfc7fc9c22d8a50c44bcf65772ae173b83f6ffa3
[ "CC-BY-4.0" ]
21
2020-01-22T15:41:59.000Z
2022-02-08T01:44:28.000Z
42.119461
8,000
0.717155
[ [ [ "# NSCI 801 - Quantitative Neuroscience\n## Reproducibility, reliability, validity\nGunnar Blohm", "_____no_output_____" ], [ "### Outline\n* statistical considerations\n * multiple comparisons\n * exploratory analyses vs hypothesis testing\n* Open Science\n * general steps toward transparency\n * pre-registration / registered report\n* Open science vs. patents", "_____no_output_____" ], [ "### Multiple comparisons\nIn [2009, Bennett et al.](https://teenspecies.github.io/pdfs/NeuralCorrelates.pdf) studies the brain of a salmon using fMRI and found and found significant activation despite the salmon being dead... (IgNobel Prize 2012)\n\nWhy did they find this?", "_____no_output_____" ], [ "They images 140 volumes (samples) of the brain and ran a standard preprocessing pipeline, including spatial realignment, co-registration of functional and anatomical volumes, and 8mm full-width at half maximum (FWHM) Gaussian smoothing. \n\nThey computed voxel-wise statistics. \n\n<img style=\"float: center; width:750px;\" src=\"stuff/salmon.png\">", "_____no_output_____" ], [ "This is a prime example of what's known as the **multiple comparison problem**!\n\n“the problem that occurs when one considers a set of statistical inferences simultaneously or infers a subset of parameters selected based on the observed values” (Wikipedia)\n* problem that arises when implementing a large number of statistical tests in the same experiment\n* the more tests we do, the higher probability of obtaining, at least, one test with statistical significance", "_____no_output_____" ], [ "### Probability(false positive) = f(number comparisons)\nIf you repeat a statistical test over and over again, the false positive ($FP$) rate ($P$) evolves as follows:\n$$P(FP)=1-(1-\\alpha)^N$$\n* $\\alpha$ is the confidence level for each individual test (e.g. 0.05)\n* $N$ is the number of comparisons\n\nLet's see how this works...", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom scipy import stats\n\nplt.style.use('dark_background')", "_____no_output_____" ] ], [ [ "Let's create some random data...", "_____no_output_____" ] ], [ [ "rvs = stats.norm.rvs(loc=0, scale=10, size=1000)\nsns.displot(rvs)", "_____no_output_____" ] ], [ [ "Now let's run a t-test to see if it's different from 0", "_____no_output_____" ] ], [ [ "statistic, pvalue = stats.ttest_1samp(rvs, 0)\nprint(pvalue)", "0.8239422967948438\n" ] ], [ [ "Now let's do this many times for different samples, e.g. different voxels of our salmon...", "_____no_output_____" ] ], [ [ "def t_test_function(alp, N):\n \"\"\"computes t-test statistics on N random samples and returns number of significant tests\"\"\"\n \n counter = 0\n for i in range(N):\n rvs = stats.norm.rvs(loc=0, scale=10, size=1000)\n statistic, pvalue = stats.ttest_1samp(rvs, 0)\n if pvalue <= alp:\n counter = counter + 1\n \n print(counter)\n return counter\n\nN = 100\ncounter = t_test_function(0.05, N)\nprint(\"The false positve rate was\", counter/N*100, \"%\")", "4\nThe false positve rate was 4.0 %\n" ] ], [ [ "Well, we wanted a $\\alpha=0.05$, so what's the problem?\n\nThe problem is that we have hugely increased the likelihood of finding something significant by chance! (**p-hacking**)\n\nTake the above example:\n* running 100 independent tests with $\\alpha=0.05$ resulted in a few positives\n* well, that's good right? Now we can see if there is astory here we can publish...\n * dead salmon!\n* remember, our data was just noise!!! 
There was NO signal!\n\nThis is why we have corrections for multiple comparisons that adjust the p-value so that the **overall chance** to find a false positive stays at $\\alpha$!\n\nWhy does this matter?", "_____no_output_____" ], [ "### Exploratory analyses vs hypothesis testing\n\nWhy do we distinguish between them?", "_____no_output_____" ], [ "<img style=\"float: center; width:750px;\" src=\"stuff/ExploreConfirm1.png\">", "_____no_output_____" ], [ "But in science, confirmatory analyses that are hypothesis-driven are often much more valued. \n\nThere is a temptation to frame *exploratory* analyses and *confirmatory*...\n\n**This leads to disaster!!!**\n* science is not solid\n* replication crisis (psychology, social science, medicine, marketing, economics, sports science, etc, etc...)\n* shaken trust in science\n\n<img style=\"float: center; width:750px;\" src=\"stuff/crisis.jpeg\">\n\n([Baker 2016](https://www.nature.com/news/1-500-scientists-lift-the-lid-on-reproducibility-1.19970))", "_____no_output_____" ], [ "### Quick excursion: survivorship bias\n\"Survivorship bias or survival bias is the logical error of concentrating on the people or things that made it past some selection process and overlooking those that did not, typically because of their lack of visibility.\" (Wikipedia)\n\n<img style=\"float: center; width:750px;\" src=\"stuff/SurvivorshipBias.png\">", "_____no_output_____" ], [ "**How does survivorship bias affect neuroscience?**\n\nThink about it...", "_____no_output_____" ], [ "E.g.\n* people select neurons to analyze\n* profs say it's absolutely achievable to become a prof\n\nJust keep it in mind...", "_____no_output_____" ], [ "### Open science - transparency\nOpen science can hugely help increasing transparency in many different ways so that findings and data can be evaluated for what they are:\n* publish data acquisition protocol and code: increases data reproducibility & credibility\n* publish data: data get second, third, etc... lives\n* publish data processing / analyses: increases reproducibility of results\n* publish figures code and stats: increases reproducibility and credibility of conclusions\n* pre-register hypotheses and analyses: ensures *confirmatory* analyses are not *exploratory* (HARKing)\n\nFor more info, see NSCI800 lectures about Open Science: [OS1](http://www.compneurosci.com/NSCI800/OpenScienceI.pdf), [OS2](http://www.compneurosci.com/NSCI800/OpenScienceII.pdf)", "_____no_output_____" ], [ "### Pre-registration / registered reports\n<img style=\"float:right; width:500px;\" src=\"stuff/RR.png\">\n\n* IPA guarantees publication\n * If original methods are followed\n * Main conclusions need to come from originally proposed analyses\n* Does not prevent exploratory analyses\n * Need to be labeled as such\n \n[https://Cos.io/rr](https://Cos.io/rr)", "_____no_output_____" ], [ "Please follow **Stage 1** instructions of [the registered report intrustions from eNeuro](https://www.eneuro.org/sites/default/files/additional_assets/pdf/eNeuro%20Registered%20Reports%20Author%20Guidelines.pdf) for the course evaluation...\n\nQuestions???", "_____no_output_____" ], [ "### Open science vs. patents\nThe goal of Open Science is to share all aspects of research with the public!\n* because knowledge should be freely available\n* because the public paid for the science to happen in the first place\n\nHowever, this prevents from patenting scientific results! 
\n* this is good for science, because patents obstruct research\n* prevents full privatization of research: companies driving research is biased by private interest", "_____no_output_____" ], [ "Turns out open science is good for business!\n* more people contribute\n* wider adoption\n * e.g. GitHub = Microsoft, Android = Google, etc\n* better for society\n * e.g. nonprofit pharma", "_____no_output_____" ], [ "**Why are patents still a thing?**\n\nWell, some people think it's an outdated and morally corrupt concept. \n* goal: maximum profit\n* enabler: capitalism\n* victims: general public\n\nThink about it and decide for yourself what to do with your research!!!", "_____no_output_____" ], [ "### THANK YOU!!!\n<img style=\"float:center; width:750px;\" src=\"stuff/empower.jpg\">", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
cb817ef8c936c4c0e0f7d4f5389f5824002e4996
7,849
ipynb
Jupyter Notebook
notebooks/run.ipynb
mrpeerat/OSKut
0c103cde154fe7b93ac8228b9bb1123a992b51c8
[ "MIT" ]
14
2021-08-01T06:18:30.000Z
2022-03-31T17:20:24.000Z
notebooks/run.ipynb
mrpeerat/OSKut
0c103cde154fe7b93ac8228b9bb1123a992b51c8
[ "MIT" ]
null
null
null
notebooks/run.ipynb
mrpeerat/OSKut
0c103cde154fe7b93ac8228b9bb1123a992b51c8
[ "MIT" ]
null
null
null
26.60678
772
0.589374
[ [ [ "from oskut import load_model, OSKut", "_____no_output_____" ], [ "load_model(engine='ws',mode='LSTM_Attension')", "_____no_output_____" ], [ "text = 'สวัสดีประเทศไทย'", "_____no_output_____" ], [ "print(OSKut(text))", "['สวัสดี', 'ประเทศ', 'ไทย']\n" ], [ "load_model(engine='tnhc',mode='LSTM_Attension')", "_____no_output_____" ], [ "print(OSKut(text))", "['สวัสดี', 'ประเทศ', 'ไทย']\n" ], [ "load_model(engine='lst20',mode='LSTM_Attension')", "_____no_output_____" ], [ "print(OSKut(text))", "['สวัสดี', 'ประเทศ', 'ไทย']\n" ], [ "load_model(engine='deepcut')", "_____no_output_____" ], [ "print(OSKut(text))", "['สวัสดี', 'ประเทศไทย']\n" ], [ "load_model(engine='tl-deepcut-best')", "best\n" ], [ "print(OSKut(text))", "WARNING:tensorflow:5 out of the last 18 calls to <function Model.make_predict_function.<locals>.predict_function at 0x0000011C474B4040> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.\n['สวัสดี', 'ประเทศไทย']\n" ], [ "load_model(engine='tl-deepcut-lst20')", "lst20\n" ], [ "print(OSKut(text))", "WARNING:tensorflow:6 out of the last 19 calls to <function Model.make_predict_function.<locals>.predict_function at 0x0000011C4A3A65E0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.\n['สวัสดี', 'ประเทศ', 'ไทย']\n" ], [ "load_model(engine='tl-deepcut-ws-augment-60p')", "ws-augment-60p\n" ], [ "print(OSKut(text))", "WARNING:tensorflow:7 out of the last 11 calls to <function Model.make_predict_function.<locals>.predict_function at 0x0000011C4CC720D0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.\n['สวัส', 'ดี', 'ประเทศ', 'ไทย']\n" ], [ "load_model(engine='tl-deepcut-tnhc')", "tnhc\n" ], [ "print(OSKut(text))", "WARNING:tensorflow:8 out of the last 12 calls to <function Model.make_predict_function.<locals>.predict_function at 0x0000011C4D0A65E0> triggered tf.function retracing. 
Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.\n['สวัสดี', 'ประเทศ', 'ไทย']\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb817f481f3993568a212b67c980bbd32e4b8109
6,557
ipynb
Jupyter Notebook
plot_objective.ipynb
vbhavank/Computed-Tomography-Reconstruction
8cef3fad418c43cb9e9335c42423b7313165d29e
[ "MIT" ]
null
null
null
plot_objective.ipynb
vbhavank/Computed-Tomography-Reconstruction
8cef3fad418c43cb9e9335c42423b7313165d29e
[ "MIT" ]
null
null
null
plot_objective.ipynb
vbhavank/Computed-Tomography-Reconstruction
8cef3fad418c43cb9e9335c42423b7313165d29e
[ "MIT" ]
1
2021-11-27T04:13:30.000Z
2021-11-27T04:13:30.000Z
39.263473
159
0.620406
[ [ [ "import numpy as np\nfrom scipy import sparse\nimport matplotlib.pyplot as plt\nimport time", "_____no_output_____" ], [ "from em import initialize, em, mle_em, em_bdct, mle_em_with_obj", "_____no_output_____" ], [ "def run(max_step, threshold, sparse, ix = 0):\n np.random.seed()\n start = time.time()\n y_rand = np.random.poisson(Ax)\n # x_initial = np.random.randn(len(x_flat))\n print(f\"process: {ix: 2d}, y[:10]: {y_rand[:10]}\")\n x_et, diff, mse, objs, step = mle_em_with_obj(max_step, A, y_rand, x_true=x_flat, threshold=threshold, x_initial=None, sparse=sparse)\n print(f\"process: {ix: 2d} finished. step: {step: 2d}, mse: {mse: 8.2f}, diff: {diff: 8.2f} time consuming: {time.time() - start: 8.1f} seconds\")\n return x_et, diff, mse, objs, step", "_____no_output_____" ], [ "A_original = sparse.load_npz(\"data/simulated_large_A_23_10.npz\")\nx_flat = np.load(\"data/simulated_large_x_23_10.npy\")", "_____no_output_____" ], [ "e_stop = 0.001", "_____no_output_____" ], [ "A = A_original\nAx = A @ x_flat\nx_et, diff, mse, objs, step = run(1000, e_stop, True)", "process: 0, y[:10]: [ 0 244 439 425 605 590 712 1100 1266 1065]\nstep: 0, diff: 1053.0667721214488, mse: 777.1636832014116\nstep: 20, diff: 8.510329088223905, mse: 294.4596068232555\nstep: 40, diff: 3.689886082985331, mse: 203.55862699587442\nstep: 60, diff: 2.132880449185456, mse: 156.62592673297576\nstep: 80, diff: 1.3942120141470329, mse: 129.22504461208814\nstep: 100, diff: 0.982539137128792, mse: 112.18344560928085\nstep: 120, diff: 0.7276835897629993, mse: 101.2434302736558\nstep: 140, diff: 0.5584956696666459, mse: 94.09433665602506\nstep: 160, diff: 0.4406326608168013, mse: 89.3627129216426\nstep: 180, diff: 0.35549480177307535, mse: 86.19535895425271\nstep: 200, diff: 0.2921328399363283, mse: 84.05163582796958\nstep: 220, diff: 0.24373011381982054, mse: 82.58518393943571\nstep: 240, diff: 0.20588362348721742, mse: 81.57200003230831\nstep: 260, diff: 0.17567455333696042, mse: 80.86570787623066\nstep: 280, diff: 0.151125941235839, mse: 80.36956628080253\nstep: 300, diff: 0.1308707292335585, mse: 80.0188819476888\nstep: 320, diff: 0.11394318404766221, mse: 79.76988853284793\nstep: 340, diff: 0.0996458952390125, mse: 79.59264854170526\nstep: 360, diff: 0.08746401867344067, mse: 79.46647140063911\nstep: 380, diff: 0.07700932516647643, mse: 79.37691972644846\nstep: 400, diff: 0.0679831791220793, mse: 79.31382955128373\nstep: 420, diff: 0.060151642649722545, mse: 79.26998591997253\nstep: 440, diff: 0.05332843562085474, mse: 79.24022733441652\nstep: 460, diff: 0.04736305693299486, mse: 79.22083413259622\nstep: 480, diff: 0.04213234992726612, mse: 79.20910692879193\nstep: 500, diff: 0.037534404222247124, mse: 79.20307357331663\nstep: 520, diff: 0.03348406912023319, mse: 79.20128383462286\nstep: 540, diff: 0.029909597117095826, mse: 79.20266447925965\nstep: 560, diff: 0.026750092810753103, mse: 79.20641627414194\nstep: 580, diff: 0.023953544902586565, mse: 79.21194030834542\nstep: 600, diff: 0.021475286824627782, mse: 79.21878496662737\nstep: 620, diff: 0.019276777100098934, mse: 79.22660754697507\nstep: 640, diff: 0.017324621574846273, mse: 79.23514632769205\nstep: 660, diff: 0.01558978107740681, mse: 79.24420013518012\nstep: 680, diff: 0.014046923038935552, mse: 79.25361332566209\nstep: 700, diff: 0.012673886210457138, mse: 79.26326469494836\nstep: 720, diff: 0.011451235228495516, mse: 79.27305925197481\nstep: 740, diff: 0.010361887301373676, mse: 79.28292208960175\nstep: 760, diff: 0.009390797356721617, mse: 
79.2927937977577\nstep: 780, diff: 0.008524691006000088, mse: 79.30262701524936\nstep: 800, diff: 0.007751836953926938, mse: 79.31238382527162\nstep: 820, diff: 0.00706185220251061, mse: 79.32203377819427\nstep: 840, diff: 0.006445534715528999, mse: 79.33155238224431\nstep: 860, diff: 0.005894719228358615, mse: 79.34091994430483\nstep: 880, diff: 0.005402152675425543, mse: 79.35012067353014\nstep: 900, diff: 0.004961386325472561, mse: 79.35914198289564\nstep: 920, diff: 0.004566682197285179, mse: 79.36797394033532\nstep: 940, diff: 0.0042129317135266185, mse: 79.3766088333696\nstep: 960, diff: 0.0038955848537513653, mse: 79.38504082021169\nstep: 980, diff: 0.003610588316844303, mse: 79.39326564710757\nprocess: 0 finished. step: 1000, mse: 79.40, diff: 0.00 time consuming: 0.7 seconds\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code" ] ]
cb817f6361b517ae89042b891a804a829cbe0e64
25,763
ipynb
Jupyter Notebook
machine_learning/mini_lessons/image_data.ipynb
krmiddlebrook/intro_to_deep_learning
800a8628f413a8ebde104c4af4117e3781861c7f
[ "MIT" ]
3
2020-06-04T15:11:33.000Z
2021-09-14T02:12:22.000Z
machine_learning/mini_lessons/image_data.ipynb
krmiddlebrook/intro_to_deep_learning
800a8628f413a8ebde104c4af4117e3781861c7f
[ "MIT" ]
19
2021-05-27T16:42:42.000Z
2022-03-22T23:37:03.000Z
machine_learning/mini_lessons/image_data.ipynb
krmiddlebrook/intro_to_deep_learning
800a8628f413a8ebde104c4af4117e3781861c7f
[ "MIT" ]
1
2020-07-08T21:35:07.000Z
2020-07-08T21:35:07.000Z
55.285408
8,838
0.689982
[ [ [ "<a href=\"https://colab.research.google.com/github/krmiddlebrook/intro_to_deep_learning/blob/master/machine_learning/mini_lessons/image_data.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Processing Image Data\nComputer vision is a field of machine learning that trains computers to interpret and understand the visual world. It is one of the most popular fields in deep learning (neural networks). In computer vision, it is common to use digital images from cameras and videos to train models to accurately identify and classify objects. \n\nBefore we can solve computer vision tasks, it is important to understand how to handle image data. To this end, we will demonstrate how to process (prepare) image data for machine learning models. \n\nWe will use the MNIST digits dataset, which is provided by Kera Datasets--a collection of ready-to-use datasets for machine learning. All datasets are available through the `tf.keras.datasets` API endpoint. \n\nHere is the lesson roadmap:\n- Load the dataset\n- Visualize the data\n- Transform the data\n- Normalize the data", "_____no_output_____" ] ], [ [ "# TensorFlow and tf.keras and TensorFlow datasets\nimport tensorflow as tf\nfrom tensorflow import keras\n\n# Commonly used modules\nimport numpy as np\n\n# Images, plots, display, and visualization\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "# Load the dataset\nWhen we want to solve a problem with machine learning methods, the first step is almost always to find a good dataset. As we mentioned above, we will retrieve the MNIST dataset using the `tf.keras.datasets` module. \n\nThe MNIST dataset contains 70k grayscale images of handwritten digits (i.e., numbers between 0 and 9). Let's load the dataset into our notebook.", "_____no_output_____" ] ], [ [ "# the data, split between train and test sets\n(train_features, train_labels), (test_features, test_labels) = keras.datasets.mnist.load_data()\n\nprint(f\"training set shape: {train_features.shape}\")\nprint(f\"test set shape: {test_features.shape}\")\n\nprint(f'dtypes of training and test set tensors: {train_features.dtype}, {test_features.dtype}')\n", "training set shape: (60000, 28, 28)\ntest set shape: (10000, 28, 28)\ndtypes of training and test set tensors: uint8, uint8\n" ] ], [ [ "We see that TensorFlow Datasets takes care of most of the processing we need to do. The `training_features` object tells us that there are 60k training images, and the `test_features` indicates there are 10k test images, so 70k total. We also see that the images are tensors of shape ($28 \\times 28$) with integers of type uint8.\n", "_____no_output_____" ], [ "## Visualize the dataset\nNow that we have the dataset, let's visualize some samples.\n\nWe will use the matplotlib plotting framework to display the images. Here are the first 5 images in the training dataset.\n\n", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(10, 10))\nfor i in range(5):\n ax = plt.subplot(3, 3, i + 1)\n plt.imshow(train_features[i], cmap=plt.cm.binary)\n plt.title(int(train_labels[i]))\n plt.axis(\"off\")", "_____no_output_____" ] ], [ [ "The above images give us a sense of the data, including samples belonging to different classes. ", "_____no_output_____" ], [ "# Transforming the data\nBefore we start transforming data, let's discuss *tensors*--a key part of the machine learning (ML) process, particularly for deep learning methods. 
\n\nAs we learned in previous lessons, data, whether it be categorical or numerical in nature, is converted to a numerical representation. This process makes the data useful for machine learning models. In deep learning (neural networks), the numerical data is often stored in objects called *tensors*. A tensor is a container that can house data in $N$ dimensions. ML researchers sometimes use the term \"tensor\" and \"matrix\" interchangeably because a matrix is a 2-dimensional tensor. But, tensors are generalizations of matrices to $N$-dimensional space. \n\n<figure>\n <img src='https://www.kdnuggets.com/wp-content/uploads/scalar-vector-matrix-tensor.jpg' width='75%'>\n <figcaption>A scalar, vector ($2 \\times 1$), matrix ($2 \\times 1$), and tensor ($2 \\times 2 \\times 2$) .</figcaption>\n</figure>", "_____no_output_____" ] ], [ [ "# a (2 x 2 x 2) tensor\nmy_tensor = np.array([\n [[1, 2], [3, 2]],\n [[1, 7],[5, 4]]\n ])\n\nprint('my_tensor shape:', my_tensor.shape)", "my_tensor shape: (2, 2, 2)\n" ] ], [ [ "Now let's discuss how images are stored in tensors. Computer screens are composed of pixels. Each pixel generates three colors of light (red, green, and blue) and the different colors we see are due to different combinations and intensities of these three primary colors. \n\n<figure>\n <img src='https://www.chem.purdue.edu/gchelp/cchem/RGBColors/BlackWhiteGray.gif' width='75%'>\n <figcaption>The colors black, white, and gray with a sketch of a pixel from each.</figcaption>\n</figure>\n\nWe use tensors to store the pixel intensities for a given image. Colorized pictures have 3 different *channels*. Each channel contains a matrix that represents the intensity values that correspond to the pixels of a particular color (red, green, and blue; RGB for short). For instance, consider a small colorized $28 \\times 28$ pixel image of a dog. Because the dog image is colorize, it has 3 channels, so its tensor shape is ($28 \\times 28 \\times 3$).\n\nLet's have a look at the shape of the images in the MNIST dataset.", "_____no_output_____" ] ], [ [ "train_features[0, :, :].shape", "_____no_output_____" ] ], [ [ "Using the `train_features.shape` method, we can extract the image shape and see that images are in the tensor shape $28 \\times 28$. The returned shape has no 3rd dimension, this indicates that we are working with grayscale images. By grayscale, we mean the pixels don't have intensities for red, green, and blue channels but rather for one grayscale channel, which describes an image using combinations of various shades of gray. Pixel intensities range between $0$ and $255$, and in our case, they correspond to black $0$ to white $255$. \n\n\nNow let's reshape the images into $784 \\times 1$ dimensional tensors. We call converting an image into an $n \\times 1$ tensor \"flattening\" the tensor. ", "_____no_output_____" ] ], [ [ "# get a subset of 5 images from the dataset\noriginal_shape = train_features.shape\n\n# Flatten the images.\ninput_shape = (-1, 28*28)\ntrain_features = train_features.reshape(input_shape)\ntest_features = test_features.reshape(input_shape) \n\nprint(f'original shape: {original_shape}, flattened shape: {train_features.shape}')", "original shape: (60000, 28, 28), flattened shape: (60000, 784)\n" ] ], [ [ "We flattened all the images by using the NumPy `reshape` method. Since one shape dimension can be -1, and we may not always know the number of samples in the dataset we used $(-1,784)$ as the parameters to `reshape`. 
In our example, this means that each $28 \\times 28$ image gets flattened into a $28 \\cdot 28 = 784$ feature array. Then the images are stacked (because of the -1) to produce a final large tensor with shape $(\\text{num samples}, 784$).\n\n", "_____no_output_____" ], [ "# Normalize the data\nAnother important transformation technique is *normalization*. We normalize data before training the model with it to encourage the model to learn generalizable features, which should lead to better results on unseen data. \n\nAt a high level, normalization makes the data more, well...normal. There are various ways to normalize data. Perhaps the most common normalization approach for image data is to subtract the mean pixel value and divide by the standard deviation (this method is applied to every pixel).\n\n ", "_____no_output_____" ], [ "Before we can do any normalization, we have to cast the \"uint8\" tensors to the \"float32\" numeric type.", "_____no_output_____" ] ], [ [ "# convert to float32 type \ntrain_features = train_features.astype('float32')\ntest_features = test_features.astype('float32')", "_____no_output_____" ] ], [ [ "Now we can normalize the data. We should mention that you always use the training set data to calculate normalization statistics like mean, standard deviation, etc.. Consequently, the test set is always normalized with the training set statistics.", "_____no_output_____" ] ], [ [ "# normalize the reshaped images\nmean = train_features.mean()\nstd = train_features.std()\n\ntrain_features -= mean\ntrain_features /= std\n\ntest_features -= mean\ntest_features /= std\n\nprint(f'pre-normalization mean and std: {round(mean, 4)}, {round(std, 4)}')\nprint(f'normalized images mean and std: {round(train_features.mean(), 4)}, {round(train_features.std(), 4)}')", "pre-normalization mean and std: 33.31840133666992, 78.56739807128906\nnormalized images mean and std: -0.0, 1.0\n" ] ], [ [ "As the output above indicates, the normalized pixel values are now centered around 0 (i.e., mean = 0) and have a standard deviation of 1.", "_____no_output_____" ], [ "# Summary\nIn this lesson we learned:\n- Keras offers ready-to-use datasets.\n- Images are represented by *tensors*\n- Tensors can be transformed (reshaped) and normalized easily using NumPy (or any other frameworks that enable tensor operations).\n ", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ] ]
cb8187d51e6592c4dd99c31d2d69518653b3e377
60,376
ipynb
Jupyter Notebook
notebooks/03_evaluate/evaluation.ipynb
bethz/Recommenders
499453cab0c2acd9c91e1a2d73ff0c147765dc90
[ "MIT" ]
2
2019-05-23T11:44:47.000Z
2021-11-14T17:34:19.000Z
notebooks/03_evaluate/evaluation.ipynb
bethz/Recommenders
499453cab0c2acd9c91e1a2d73ff0c147765dc90
[ "MIT" ]
null
null
null
notebooks/03_evaluate/evaluation.ipynb
bethz/Recommenders
499453cab0c2acd9c91e1a2d73ff0c147765dc90
[ "MIT" ]
2
2019-08-13T13:45:42.000Z
2020-08-04T11:38:35.000Z
30.962051
511
0.441599
[ [ [ "<i>Copyright (c) Microsoft Corporation. All rights reserved.</i>\n\n<i>Licensed under the MIT License.</i>", "_____no_output_____" ], [ "# Evaluation", "_____no_output_____" ], [ "Evaluation with offline metrics is pivotal to assess the quality of a recommender before it goes into production. Usually, evaluation metrics are carefully chosen based on the actual application scenario of a recommendation system. It is hence important to data scientists and AI developers that build recommendation systems to understand how each evaluation metric is calculated and what it is for.\n\nThis notebook deep dives into several commonly used evaluation metrics, and illustrates how these metrics are used in practice. The metrics covered in this notebook are merely for off-line evaluations.", "_____no_output_____" ], [ "## 0 Global settings", "_____no_output_____" ], [ "Most of the functions used in the notebook can be found in the `reco_utils` directory.", "_____no_output_____" ] ], [ [ "# set the environment path to find Recommenders\nimport sys\nsys.path.append(\"../../\")\nimport pandas as pd\nimport pyspark\nfrom sklearn.preprocessing import minmax_scale\n\nfrom reco_utils.common.spark_utils import start_or_get_spark\nfrom reco_utils.evaluation.spark_evaluation import SparkRankingEvaluation, SparkRatingEvaluation\nfrom reco_utils.evaluation.python_evaluation import auc, logloss\nfrom reco_utils.recommender.sar.sar_singlenode import SARSingleNode\nfrom reco_utils.dataset.download_utils import maybe_download\nfrom reco_utils.dataset.python_splitters import python_random_split\n\nprint(\"System version: {}\".format(sys.version))\nprint(\"Pandas version: {}\".format(pd.__version__))\nprint(\"PySpark version: {}\".format(pyspark.__version__))", "System version: 3.6.0 | packaged by conda-forge | (default, Feb 9 2017, 14:36:55) \n[GCC 4.8.2 20140120 (Red Hat 4.8.2-15)]\nPandas version: 0.23.4\nPySpark version: 2.3.1\n" ] ], [ [ "Note to successfully run Spark codes with the Jupyter kernel, one needs to correctly set the environment variables of `PYSPARK_PYTHON` and `PYSPARK_DRIVER_PYTHON` that point to Python executables with the desired version. Detailed information can be found in the setup instruction document [SETUP.md](../../SETUP.md).", "_____no_output_____" ] ], [ [ "COL_USER = \"UserId\"\nCOL_ITEM = \"MovieId\"\nCOL_RATING = \"Rating\"\nCOL_PREDICTION = \"Rating\"\n\nHEADER = {\n \"col_user\": COL_USER,\n \"col_item\": COL_ITEM,\n \"col_rating\": COL_RATING,\n \"col_prediction\": COL_PREDICTION,\n}", "_____no_output_____" ] ], [ [ "## 1 Prepare data", "_____no_output_____" ], [ "### 1.1 Prepare dummy data", "_____no_output_____" ], [ "For illustration purpose, a dummy data set is created for demonstrating how different evaluation metrics work. \n\nThe data has the schema that can be frequently found in a recommendation problem, that is, each row in the dataset is a (user, item, rating) tuple, where \"rating\" can be an ordinal rating score (e.g., discrete integers of 1, 2, 3, etc.) or an numerical float number that quantitatively indicates the preference of the user towards that item. 
\n\nFor simplicity reason, the column of rating in the dummy dataset we use in the example represent some ordinal ratings.", "_____no_output_____" ] ], [ [ "df_true = pd.DataFrame(\n {\n COL_USER: [1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3],\n COL_ITEM: [1, 2, 3, 1, 4, 5, 6, 7, 2, 5, 6, 8, 9, 10, 11, 12, 13, 14],\n COL_RATING: [5, 4, 3, 5, 5, 3, 3, 1, 5, 5, 5, 4, 4, 3, 3, 3, 2, 1],\n }\n )\ndf_pred = pd.DataFrame(\n {\n COL_USER: [1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3],\n COL_ITEM: [3, 10, 12, 10, 3, 5, 11, 13, 4, 10, 7, 13, 1, 3, 5, 2, 11, 14],\n COL_PREDICTION: [14, 13, 12, 14, 13, 12, 11, 10, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5]\n }\n)", "_____no_output_____" ] ], [ [ "Take a look at ratings of the user with ID \"1\" in the dummy dataset.", "_____no_output_____" ] ], [ [ "df_true[df_true[COL_USER] == 1]", "_____no_output_____" ], [ "df_pred[df_pred[COL_USER] == 1]", "_____no_output_____" ] ], [ [ "### 1.2 Prepare Spark data", "_____no_output_____" ], [ "Spark framework is sometimes used to evaluate metrics given datasets that are hard to fit into memory. In our example, Spark DataFrames can be created from the Python dummy dataset.", "_____no_output_____" ] ], [ [ "spark = start_or_get_spark(\"EvaluationTesting\", \"local\")\n\ndfs_true = spark.createDataFrame(df_true)\ndfs_pred = spark.createDataFrame(df_pred)", "_____no_output_____" ], [ "dfs_true.filter(dfs_true[COL_USER] == 1).show()", "+------+-------+------+\n|UserId|MovieId|Rating|\n+------+-------+------+\n| 1| 1| 5|\n| 1| 2| 4|\n| 1| 3| 3|\n+------+-------+------+\n\n" ], [ "dfs_pred.filter(dfs_pred[COL_USER] == 1).show()", "+------+-------+------+\n|UserId|MovieId|Rating|\n+------+-------+------+\n| 1| 3| 14|\n| 1| 10| 13|\n| 1| 12| 12|\n+------+-------+------+\n\n" ] ], [ [ "## 2 Evaluation metrics", "_____no_output_____" ], [ "### 2.1 Rating metrics\n\nRating metrics are similar to regression metrics used for evaluating a regression model that predicts numerical values given input observations. In the context of recommendation system, rating metrics are to evaluate how accurate a recommender is to predict ratings that users may give to items. Therefore, the metrics are **calculated exactly on the same group of (user, item) pairs that exist in both ground-truth dataset and prediction dataset** and **averaged by the total number of users**.\n\n#### 2.1.1 Use cases\n\nRating metrics are effective in measuring the model accuracy. However, in some cases, the rating metrics are limited if\n* **the recommender is to predict ranking instead of explicit rating**. For example, if the consumer of the recommender cares about the ranked recommended items, rating metrics do not apply directly. Usually a relevancy function such as top-k will be applied to generate the ranked list from predicted ratings in order to evaluate the recommender with other metrics. \n* **the recommender is to generate recommendation scores that have different scales with the original ratings (e.g., the SAR algorithm)**. In this case, the difference between the generated scores and the original scores (or, ratings) is not valid for measuring accuracy of the model.\n\n#### 2.1.2 How-to with the evaluation utilities\n\nA few notes about the interface of the Rating evaluator class:\n1. The columns of user, item, and rating (prediction) should be present in the ground-truth DataFrame (prediction DataFrame).\n2. 
There should be no duplicates of (user, item) pairs in the ground-truth and the prediction DataFrames, othewise there may be unexpected behavior in calculating certain metrics.\n3. Default column names for user, item, rating, and prediction are \"UserId\", \"ItemId\", \"Rating\", and \"Prediciton\", respectively.\n\nIn our examples below, to calculate rating metrics for input data frames in Spark, a Spark object, `SparkRatingEvaluation` is initialized. The input data schemas for the ground-truth dataset and the prediction dataset are\n\n* Ground-truth dataset.\n\n|Column|Data type|Description|\n|-------------|------------|-------------|\n|`COL_USER`|<int\\>|User ID|\n|`COL_ITEM`|<int\\>|Item ID|\n|`COL_RATING`|<float\\>|Rating or numerical value of user preference.|\n\n* Prediction dataset.\n\n|Column|Data type|Description|\n|-------------|------------|-------------|\n|`COL_USER`|<int\\>|User ID|\n|`COL_ITEM`|<int\\>|Item ID|\n|`COL_RATING`|<float\\>|Predicted rating or numerical value of user preference.|", "_____no_output_____" ] ], [ [ "spark_rate_eval = SparkRatingEvaluation(dfs_true, dfs_pred, **HEADER)", "_____no_output_____" ] ], [ [ "#### 2.1.3 Root Mean Square Error (RMSE)\n\nRMSE is for evaluating the accuracy of prediction on ratings. RMSE is the most widely used metric to evaluate a recommendation algorithm that predicts missing ratings. The benefit is that RMSE is easy to explain and calculate.", "_____no_output_____" ] ], [ [ "print(\"The RMSE is {}\".format(spark_rate_eval.rmse()))", "The RMSE is 7.254309064273455\n" ] ], [ [ "#### 2.1.4 R Squared (R2)\n\nR2 is also called \"coefficient of determination\" in some context. It is a metric that evaluates how well a regression model performs, based on the proportion of total variations of the observed results. ", "_____no_output_____" ] ], [ [ "print(\"The R2 is {}\".format(spark_rate_eval.rsquared()))", "The R2 is -31.699029126213595\n" ] ], [ [ "#### 2.1.5 Mean Absolute Error (MAE)\n\nMAE evaluates accuracy of prediction. It computes the metric value from ground truths and prediction in the same scale. Compared to RMSE, MAE is more explainable. ", "_____no_output_____" ] ], [ [ "print(\"The MAE is {}\".format(spark_rate_eval.mae()))", "The MAE is 6.375\n" ] ], [ [ "#### 2.1.6 Explained Variance \n\nExplained variance is usually used to measure how well a model performs with regard to the impact from the variation of the dataset. ", "_____no_output_____" ] ], [ [ "print(\"The explained variance is {}\".format(spark_rate_eval.exp_var()))", "The explained variance is -6.446601941747574\n" ] ], [ [ "#### 2.1.7 Summary", "_____no_output_____" ], [ "|Metric|Range|Selection criteria|Limitation|Reference|\n|------|-------------------------------|---------|----------|---------|\n|RMSE|$> 0$|The smaller the better.|May be biased, and less explainable than MSE|[link](https://en.wikipedia.org/wiki/Root-mean-square_deviation)|\n|R2|$\\leq 1$|The closer to $1$ the better.|Depend on variable distributions.|[link](https://en.wikipedia.org/wiki/Coefficient_of_determination)|\n|MSE|$\\geq 0$|The smaller the better.|Dependent on variable scale.|[link](https://en.wikipedia.org/wiki/Mean_absolute_error)|\n|Explained variance|$\\leq 1$|The closer to $1$ the better.|Depend on variable distributions.|[link](https://en.wikipedia.org/wiki/Explained_variation)|", "_____no_output_____" ], [ "### 2.2 Ranking metrics", "_____no_output_____" ], [ "\"Beyond-accuray evaluation\" was proposed to evaluate how relevant recommendations are for users. 
In this case, a recommendation system is a treated as a ranking system. Given a relency definition, recommendation system outputs a list of recommended items to each user, which is ordered by relevance. The evaluation part takes ground-truth data, the actual items that users interact with (e.g., liked, purchased, etc.), and the recommendation data, as inputs, to calculate ranking evaluation metrics. \n\n#### 2.2.1 Use cases\n\nRanking metrics are often used when hit and/or ranking of the items are considered:\n* **Hit** - defined by relevancy, a hit usually means whether the recommended \"k\" items hit the \"relevant\" items by the user. For example, a user may have clicked, viewed, or purchased an item for many times, and a hit in the recommended items indicate that the recommender performs well. Metrics like \"precision\", \"recall\", etc. measure the performance of such hitting accuracy.\n* **Ranking** - ranking metrics give more explanations about, for the hitted items, whether they are ranked in a way that is preferred by the users whom the items will be recommended to. Metrics like \"mean average precision\", \"ndcg\", etc., evaluate whether the relevant items are ranked higher than the less-relevant or irrelevant items. \n\n#### 2.2.2 How-to with evaluation utilities\n\nA few notes about the interface of the Rating evaluator class:\n1. The columns of user, item, and rating (prediction) should be present in the ground-truth DataFrame (prediction DataFrame). The column of timestamp is optional, but it is required if certain relevanc function is used. For example, timestamps will be used if the most recent items are defined as the relevant one.\n2. There should be no duplicates of (user, item) pairs in the ground-truth and the prediction DataFrames, othewise there may be unexpected behavior in calculating certain metrics.\n3. Default column names for user, item, rating, and prediction are \"UserId\", \"ItemId\", \"Rating\", and \"Prediciton\", respectively.", "_____no_output_____" ], [ "#### 2.2.1 Relevancy of recommendation\n\nRelevancy of recommendation can be measured in different ways:\n\n* **By ranking** - In this case, relevant items in the recommendations are defined as the top ranked items, i.e., top k items, which are taken from the list of the recommended items that is ordered by the predicted ratings (or other numerical scores that indicate preference of a user to an item). \n\n* **By timestamp** - Relevant items are defined as the most recently viewed k items, which are obtained from the recommended items ranked by timestamps.\n\n* **By rating** - Relevant items are defined as items with ratings (or other numerical scores that indicate preference of a user to an item) that are above a given threshold. \n\nSimilarly, a ranking metric object can be initialized as below. 
The input data schema is\n\n* Ground-truth dataset.\n\n|Column|Data type|Description|\n|-------------|------------|-------------|\n|`COL_USER`|<int\\>|User ID|\n|`COL_ITEM`|<int\\>|Item ID|\n|`COL_RATING`|<float\\>|Rating or numerical value of user preference.|\n|`COL_TIMESTAMP`|<string\\>|Timestamps.|\n\n* Prediction dataset.\n\n|Column|Data type|Description|\n|-------------|------------|-------------|\n|`COL_USER`|<int\\>|User ID|\n|`COL_ITEM`|<int\\>|Item ID|\n|`COL_RATING`|<float\\>|Predicted rating or numerical value of user preference.|\n|`COL_TIMESTAM`|<string\\>|Timestamps.|\n\nIn this case, in addition to the input datasets, there are also other arguments used for calculating the ranking metrics:\n\n|Argument|Data type|Description|\n|------------|------------|--------------|\n|`k`|<int\\>|Number of items recommended to user.|\n|`revelancy_method`|<string\\>|Methonds that extract relevant items from the recommendation list|", "_____no_output_____" ], [ "For example, the following code initializes a ranking metric object that calculates the metrics.", "_____no_output_____" ] ], [ [ "spark_rank_eval = SparkRankingEvaluation(dfs_true, dfs_pred, k=3, relevancy_method=\"top_k\", **HEADER)", "_____no_output_____" ] ], [ [ "A few ranking metrics can then be calculated.", "_____no_output_____" ], [ "#### 2.2.1 Precision\n\nPrecision@k is a metric that evaluates how many items in the recommendation list are relevant (hit) in the ground-truth data. For each user the precision score is normalized by `k` and then the overall precision scores are averaged by the total number of users. \n\nNote it is apparent that the precision@k metric grows with the number of `k`.", "_____no_output_____" ] ], [ [ "print(\"The precision at k is {}\".format(spark_rank_eval.precision_at_k()))", "The precision at k is 0.3333333333333333\n" ] ], [ [ "#### 2.2.2 Recall\n\nRecall@k is a metric that evaluates how many relevant items in the ground-truth data are in the recommendation list. For each user the recall score is normalized by the total number of ground-truth items and then the overall recall scores are averaged by the total number of users. ", "_____no_output_____" ] ], [ [ "print(\"The recall at k is {}\".format(spark_rank_eval.recall_at_k()))", "The recall at k is 0.2111111111111111\n" ] ], [ [ "#### 2.2.3 Normalized Discounted Cumulative Gain (NDCG)\n\nNDCG is a metric that evaluates how well the recommender performs in recommending ranked items to users. Therefore both hit of relevant items and correctness in ranking of these items matter to the NDCG evaluation. The total NDCG score is normalized by the total number of users.", "_____no_output_____" ] ], [ [ "print(\"The ndcg at k is {}\".format(spark_rank_eval.ndcg_at_k()))", "The ndcg at k is 0.3333333333333333\n" ] ], [ [ "#### 2.2.4 Mean Average Precision (MAP)\n\nMAP is a metric that evaluates the average precision for each user in the datasets. It also penalizes ranking correctness of the recommended items. The overall MAP score is normalized by the total number of users.", "_____no_output_____" ] ], [ [ "print(\"The map at k is {}\".format(spark_rank_eval.map_at_k()))", "The map at k is 0.15\n" ] ], [ [ "#### 2.2.5 ROC and AUC\n\nROC, as well as AUC, is a well known metric that is used for evaluating binary classification problem. It is similar in the case of binary rating typed recommendation algorithm where the \"hit\" accuracy on the relevant items is used for measuring the recommender's performance. 
\n\nTo demonstrate the evaluation method, the original data for testing is manipuldated in a way that the ratings in the testing data are arranged as binary scores, whilst the ones in the prediction are scaled in 0 to 1. ", "_____no_output_____" ] ], [ [ "# Convert the original rating to 0 and 1.\ndf_true_bin = df_true.copy()\ndf_true_bin[COL_RATING] = df_true_bin[COL_RATING].apply(lambda x: 1 if x > 3 else 0)\n\ndf_true_bin", "_____no_output_____" ], [ "# Convert the predicted ratings into a [0, 1] scale.\ndf_pred_bin = df_pred.copy()\ndf_pred_bin[COL_PREDICTION] = minmax_scale(df_pred_bin[COL_PREDICTION].astype(float))\n\ndf_pred_bin", "_____no_output_____" ], [ "# Calculate the AUC metric\nauc_score = auc(\n df_true_bin,\n df_pred_bin,\n col_user = COL_USER,\n col_item = COL_ITEM,\n col_rating = COL_RATING,\n col_prediction = COL_RATING\n)", "_____no_output_____" ], [ "print(\"The auc score is {}\".format(auc_score))", "The auc score is 0.3333\n" ] ], [ [ "It is worth mentioning that in some literature there are variants of the original AUC metric, that considers the effect of **the number of the recommended items (k)**, **grouping effect of users (compute AUC for each user group, and take the average across different groups)**. These variants are applicable to various different scenarios, and choosing an appropriate one depends on the context of the use case itself.", "_____no_output_____" ], [ "#### 2.3.2 Logistic loss", "_____no_output_____" ], [ "Logistic loss (sometimes it is called simply logloss, or cross-entropy loss) is another useful metric to evaluate the hit accuracy. It is defined as the negative log-likelihood of the true labels given the predictions of a classifier.", "_____no_output_____" ] ], [ [ "# Calculate the logloss metric\nlogloss_score = logloss(\n df_true_bin,\n df_pred_bin,\n col_user = COL_USER,\n col_item = COL_ITEM,\n col_rating = COL_RATING,\n col_prediction = COL_RATING\n)\n\nprint(\"The logloss score is {}\".format(logloss_score))", "The logloss score is 4.1061\n" ] ], [ [ "It is worth noting that logloss may be sensitive to the class balance of datasets, as it penalizes heavily classifiers that are confident about incorrect classifications. To demonstrate, the ground truth data set for testing is manipulated purposely to unbalance the binary labels. For example, the following binarizes the original rating data by using a lower threshold, i.e., 2, to create more positive feedback from the user.", "_____no_output_____" ] ], [ [ "df_true_bin_pos = df_true.copy()\ndf_true_bin_pos[COL_RATING] = df_true_bin_pos[COL_RATING].apply(lambda x: 1 if x > 2 else 0)\n\ndf_true_bin_pos", "_____no_output_____" ] ], [ [ "By using threshold of 2, the labels in the ground truth data is not balanced, and the ratio of 1 over 0 is ", "_____no_output_____" ] ], [ [ "one_zero_ratio = df_true_bin_pos[COL_PREDICTION].sum() / (df_true_bin_pos.shape[0] - df_true_bin_pos[COL_PREDICTION].sum())\n\nprint('The ratio between label 1 and label 0 is {}'.format(one_zero_ratio))", "The ratio between label 1 and label 0 is 5.0\n" ] ], [ [ "Another prediction data is also created, where the probabilities for label 1 and label 0 are fixed. Without loss of generity, the probability of predicting 1 is 0.6. 
The data set is purposely created to make the precision to be 100% given an presumption of cut-off equal to 0.5.", "_____no_output_____" ] ], [ [ "prob_true = 0.6\n\ndf_pred_bin_pos = df_true_bin_pos.copy()\ndf_pred_bin_pos[COL_PREDICTION] = df_pred_bin_pos[COL_PREDICTION].apply(lambda x: prob_true if x==1 else 1-prob_true)\n\ndf_pred_bin_pos", "_____no_output_____" ] ], [ [ "Then the logloss is calculated as follows. ", "_____no_output_____" ] ], [ [ "# Calculate the logloss metric\nlogloss_score_pos = logloss(\n df_true_bin_pos,\n df_pred_bin_pos,\n col_user = COL_USER,\n col_item = COL_ITEM,\n col_rating = COL_RATING,\n col_prediction = COL_RATING\n)\n\nprint(\"The logloss score is {}\".format(logloss_score))", "The logloss score is 4.1061\n" ] ], [ [ "For comparison, a similar process is used with a threshold value of 3 to create a more balanced dataset. Another prediction dataset is also created by using the balanced dataset. Again, the probabilities of predicting label 1 and label 0 are fixed as 0.6 and 0.4, respectively. **NOTE**, same as above, in this case, the prediction also gives us a 100% precision. The only difference is the proportion of binary labels.", "_____no_output_____" ] ], [ [ "prob_true = 0.6\n\ndf_pred_bin_balanced = df_true_bin.copy()\ndf_pred_bin_balanced[COL_PREDICTION] = df_pred_bin_balanced[COL_PREDICTION].apply(lambda x: prob_true if x==1 else 1-prob_true)\n\ndf_pred_bin_balanced", "_____no_output_____" ] ], [ [ "The ratio of label 1 and label 0 is", "_____no_output_____" ] ], [ [ "one_zero_ratio = df_true_bin[COL_PREDICTION].sum() / (df_true_bin.shape[0] - df_true_bin[COL_PREDICTION].sum())\n\nprint('The ratio between label 1 and label 0 is {}'.format(one_zero_ratio))", "The ratio between label 1 and label 0 is 1.0\n" ] ], [ [ "It is perfectly balanced.", "_____no_output_____" ], [ "Applying the logloss function to calculate the metric gives us a more promising result, as shown below.", "_____no_output_____" ] ], [ [ "# Calculate the logloss metric\nlogloss_score = logloss(\n df_true_bin,\n df_pred_bin_balanced,\n col_user = COL_USER,\n col_item = COL_ITEM,\n col_rating = COL_RATING,\n col_prediction = COL_RATING\n)\n\nprint(\"The logloss score is {}\".format(logloss_score))", "The logloss score is 0.5108\n" ] ], [ [ "It can be seen that the score is more close to 0, and, by definition, it means that the predictions are generating better results than the one before where binary labels are more biased.", "_____no_output_____" ], [ "#### 2.2.5 Summary", "_____no_output_____" ], [ "|Metric|Range|Selection criteria|Limitation|Reference|\n|------|-------------------------------|---------|----------|---------|\n|Precision|$\\geq 0$ and $\\leq 1$|The closer to $1$ the better.|Only for hits in recommendations.|[link](https://spark.apache.org/docs/2.3.0/mllib-evaluation-metrics.html#ranking-systems)|\n|Recall|$\\geq 0$ and $\\leq 1$|The closer to $1$ the better.|Only for hits in the ground truth.|[link](https://en.wikipedia.org/wiki/Precision_and_recall)|\n|NDCG|$\\geq 0$ and $\\leq 1$|The closer to $1$ the better.|Does not penalize for bad/missing items, and does not perform for several equally good items.|[link](https://spark.apache.org/docs/2.3.0/mllib-evaluation-metrics.html#ranking-systems)|\n|MAP|$\\geq 0$ and $\\leq 1$|The closer to $1$ the better.|Depend on variable distributions.|[link](https://spark.apache.org/docs/2.3.0/mllib-evaluation-metrics.html#ranking-systems)|\n|AUC|$\\geq 0$ and $\\leq 1$|The closer to $1$ the better. 
0.5 indicates an uninformative classifier|Depend on the number of recommended items (k).|[link](https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve)|\n|Logloss|$0$ to $\\infty$|The closer to $0$ the better.|Logloss can be sensitive to imbalanced datasets.|[link](https://en.wikipedia.org/wiki/Cross_entropy#Relation_to_log-likelihood)|", "_____no_output_____" ], [ "## References\n\n1. Guy Shani and Asela Gunawardana, \"Evaluating Recommendation Systems\", Recommender Systems Handbook, Springer, 2015.\n2. PySpark MLlib evaluation metrics, url: https://spark.apache.org/docs/2.3.0/mllib-evaluation-metrics.html.\n3. Dimitris Paraschakis et al, \"Comparative Evaluation of Top-N Recommenders in e-Commerce: An Industrial Perspective\", IEEE ICMLA, 2015, Miami, FL, USA.\n4. Yehuda Koren and Robert Bell, \"Advances in Collaborative Filtering\", Recommender Systems Handbook, Springer, 2015.\n5. Chris Bishop, \"Pattern Recognition and Machine Learning\", Springer, 2006.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ] ]
cb818a26ef57a397c519dd214198c35ac5c46866
823,916
ipynb
Jupyter Notebook
pyspark-tutorial/2021-07-06 - DBFS Example.ipynb
OmkarMehta/pyspark-projects
acd49737c618b8ee60626eb86210b300d4a9a504
[ "MIT" ]
null
null
null
pyspark-tutorial/2021-07-06 - DBFS Example.ipynb
OmkarMehta/pyspark-projects
acd49737c618b8ee60626eb86210b300d4a9a504
[ "MIT" ]
null
null
null
pyspark-tutorial/2021-07-06 - DBFS Example.ipynb
OmkarMehta/pyspark-projects
acd49737c618b8ee60626eb86210b300d4a9a504
[ "MIT" ]
null
null
null
823,916
823,916
0.514517
[ [ [ "# Overview\n\n## &copy; [Omkar Mehta]([email protected]) ##\n### Industrial and Enterprise Systems Engineering, The Grainger College of Engineering, UIUC ###\n\n<hr style=\"border:2px solid blue\"> </hr>\n\nThis notebook will show you how to create and query a table or DataFrame that you uploaded to DBFS. [DBFS](https://docs.databricks.com/user-guide/dbfs-databricks-file-system.html) is a Databricks File System that allows you to store data for querying inside of Databricks. This notebook assumes that you have a file already inside of DBFS that you would like to read from.\n\nThis notebook is written in **Python** so the default cell type is Python. However, you can use different languages by using the `%LANGUAGE` syntax. Python, Scala, SQL, and R are all supported.", "_____no_output_____" ] ], [ [ "# File location and type\nfile_location = \"/FileStore/tables/game_skater_stats.csv\"\nfile_type = \"csv\"\n\n# CSV options\ninfer_schema = \"false\"\nfirst_row_is_header = \"false\"\ndelimiter = \",\"\n\n# The applied options are for CSV files. For other file types, these will be ignored.\ndf = spark.read.format(file_type) \\\n .option(\"inferSchema\", infer_schema) \\\n .option(\"header\", first_row_is_header) \\\n .option(\"sep\", delimiter) \\\n .load(file_location)\n\ndisplay(df)", "_____no_output_____" ], [ "# Create a view or table\n\ntemp_table_name = \"game_skater_stats_csv\"\n\ndf.createOrReplaceTempView(temp_table_name)", "_____no_output_____" ], [ "%sql\n\n/* Query the created temp table in a SQL cell */\n\nselect * from `game_skater_stats_csv`", "_____no_output_____" ], [ "# With this registered as a temp view, it will only be available to this particular notebook. If you'd like other users to be able to query this table, you can also create a table from the DataFrame.\n# Once saved, this table will persist across cluster restarts as well as allow various users across different notebooks to query this data.\n# To do so, choose your table name and uncomment the bottom line.\n\npermanent_table_name = \"game_skater_stats_csv\"\n\n# df.write.format(\"parquet\").saveAsTable(permanent_table_name)", "_____no_output_____" ] ], [ [ "# Read Data", "_____no_output_____" ] ], [ [ "# Load data from a CSV\nfile_location = \"/FileStore/tables/game_skater_stats.csv\"\ndf = spark.read.format(\"CSV\").option(\"inferSchema\", True).option(\"header\", True).load(file_location)\ndisplay(df.take(5))", "_____no_output_____" ] ], [ [ "## Write Data", "_____no_output_____" ] ], [ [ "# Save as CSV and parquet\n\n# DBFS\ndf.write.save('/FileStore/parquet/game__stats', format='parquet')\n\n# S3\n#df.write.parquet(\"s3a://my_bucket/game_skater_stats\", mode=\"overwrite\")\n\n# DBFS\ndf.write.save('/FileStore/parquet/game__stats.csv', format='csv')\n\n# S3\n#df.coalesce(1).write.format(\"com.databricks.spark.csv\")\n# .option(\"header\", \"true\").save(\"s3a://my_bucket/game_skater_stats.csv\")\n", "_____no_output_____" ] ], [ [ "## Transforming Data", "_____no_output_____" ] ], [ [ "df.createOrReplaceTempView(\"stats\")\n\ndisplay(spark.sql(\"\"\"\n select player_id, sum(1) as games, sum(goals) as goals\n from stats\n group by 1\n order by 3 desc\n limit 5\n\"\"\"))", "_____no_output_____" ], [ "# player names\nfile_location = \"/FileStore/tables/player_info.csv\"\nnames = spark.read.format(\"CSV\").option(\"inferSchema\", True).option(\"header\", True).load(file_location)\n#display(names)", "_____no_output_____" ], [ "df.createOrReplaceTempView(\"stats\")\n\ntop_players = spark.sql(\"\"\"\nselect 
player_id, sum(1) as games, sum(goals) as goals\nfrom stats\ngroup by 1\norder by 3 desc\nlimit 5\n\"\"\")\n\ntop_players.createOrReplaceTempView(\"top_players\")\nnames.createOrReplaceTempView(\"names\")\n\ndisplay(spark.sql(\"\"\"\nselect p.player_id, goals, firstName, lastName\nfrom top_players p\njoin names n\n on p.player_id = n.player_id\norder by 2 desc \n\"\"\"))\n", "_____no_output_____" ], [ "display(spark.sql(\"\"\"\nselect cast(substring(game_id, 1, 4) || '-' \n || substring(game_id, 5, 2) || '-01' as Date) as month\n , sum(goals)/count(distinct game_id) as goals_per_goal\nfrom stats\ngroup by 1\norder by 1\n\"\"\"))", "_____no_output_____" ], [ "\ndisplay(spark.sql(\"\"\"\nselect cast(goals/shots * 50 as int)/50.0 as Goals_per_shot, sum(1) as Players \nfrom (\n select player_id, sum(shots) as shots, sum(goals) as goals\n from stats\n group by 1\n having goals >= 5\n) \ngroup by 1\norder by 1\n\"\"\")) \n ", "_____no_output_____" ], [ "display(spark.sql(\"\"\"\n select cast(substring(game_id, 1, 4) || '-' \n || substring(game_id, 5, 2) || '-01' as Date) as month\n , sum(goals)/count(distinct game_id) as goals_per_goal\n from stats\n group by 1\n order by 1\n\"\"\"))", "_____no_output_____" ], [ "display(spark.sql(\"\"\"\n select cast(goals/shots * 50 as int)/50.0 as Goals_per_shot\n ,sum(1) as Players \n from (\n select player_id, sum(shots) as shots, sum(goals) as goals\n from stats\n group by 1\n having goals >= 5\n ) \n group by 1\n order by 1\n\"\"\"))", "_____no_output_____" ] ], [ [ "## MLlib: Linear Regression", "_____no_output_____" ] ], [ [ "from pyspark.ml.feature import VectorAssembler\nfrom pyspark.ml.regression import LinearRegression\n\nassembler = VectorAssembler(inputCols=['shots', 'assists', 'penaltyMinutes', 'timeOnIce'], outputCol=\"features\" )\ntrain_df = assembler.transform(df) \n\nlr = LinearRegression(featuresCol = 'features', labelCol='goals')\nlr_model = lr.fit(train_df)\n\ntrainingSummary = lr_model.summary\nprint(\"Coefficients: \" + str(lr_model.coefficients))\nprint(\"RMSE: %f\" % trainingSummary.rootMeanSquaredError)\nprint(\"R2: %f\" % trainingSummary.r2)", "_____no_output_____" ] ], [ [ "## Pandas UDFs", "_____no_output_____" ] ], [ [ "# creating a linear fit for a single player\n\ndf.createOrReplaceTempView(\"stats\")\n\nsample_pd = spark.sql(\"\"\"\nselect * from stats\nwhere player_id = 8471214\n\"\"\").toPandas()\n\nfrom scipy.optimize import leastsq\nimport numpy as np\n\ndef fit(params, x, y):\n return (y - (params[0] + x * params[1] )) \n\nresult = leastsq(fit, [1, 0], args=(sample_pd.shots, sample_pd.hits))\nprint(result)\n", "_____no_output_____" ], [ "from pyspark.sql.functions import pandas_udf, PandasUDFType\nfrom pyspark.sql.types import *\nimport pandas as pd\n\nschema = StructType([StructField('ID', LongType(), True),\n StructField('p0', DoubleType(), True),\n StructField('p1', DoubleType(), True)]) \n\n \n@pandas_udf(schema, PandasUDFType.GROUPED_MAP)\ndef analyze_player(sample_pd):\n \n if (len(sample_pd.shots) <= 1):\n return pd.DataFrame({'ID': [sample_pd.player_id[0]], 'p0': [ 0 ], 'p1': [ 0 ]})\n \n result = leastsq(fit, [1, 0], args=(sample_pd.shots, sample_pd.hits))\n return pd.DataFrame({'ID': [sample_pd.player_id[0]], 'p0': [result[0][0]], 'p1': [result[0][1]]})\n\nplayer_df = df.groupby('player_id').apply(analyze_player)\ndisplay(player_df.take(5))", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
cb819799372f666fb761db06c0f747d9dfd2963c
7,219
ipynb
Jupyter Notebook
notebooks/parameter_tuning_sol_02.ipynb
zahramor/scikit-learn
89640fec224b6b77ea129017650fd5d78aa8f0d7
[ "CC-BY-4.0" ]
null
null
null
notebooks/parameter_tuning_sol_02.ipynb
zahramor/scikit-learn
89640fec224b6b77ea129017650fd5d78aa8f0d7
[ "CC-BY-4.0" ]
null
null
null
notebooks/parameter_tuning_sol_02.ipynb
zahramor/scikit-learn
89640fec224b6b77ea129017650fd5d78aa8f0d7
[ "CC-BY-4.0" ]
null
null
null
35.561576
91
0.598144
[ [ [ "# 📃 Solution for Exercise M3.01\n\nThe goal is to write an exhaustive search to find the best parameters\ncombination maximizing the model generalization performance.\n\nHere we use a small subset of the Adult Census dataset to make the code\nfaster to execute. Once your code works on the small subset, try to\nchange `train_size` to a larger value (e.g. 0.8 for 80% instead of\n20%).", "_____no_output_____" ] ], [ [ "import pandas as pd\n\nfrom sklearn.model_selection import train_test_split\n\nadult_census = pd.read_csv(\"../datasets/adult-census.csv\")\n\ntarget_name = \"class\"\ntarget = adult_census[target_name]\ndata = adult_census.drop(columns=[target_name, \"education-num\"])\n\ndata_train, data_test, target_train, target_test = train_test_split(\n data, target, train_size=0.2, random_state=42)", "_____no_output_____" ], [ "from sklearn.compose import ColumnTransformer\nfrom sklearn.compose import make_column_selector as selector\nfrom sklearn.preprocessing import OrdinalEncoder\n\ncategorical_preprocessor = OrdinalEncoder(handle_unknown=\"use_encoded_value\",\n unknown_value=-1)\npreprocessor = ColumnTransformer(\n [('cat_preprocessor', categorical_preprocessor,\n selector(dtype_include=object))],\n remainder='passthrough', sparse_threshold=0)\n\nfrom sklearn.ensemble import HistGradientBoostingClassifier\nfrom sklearn.pipeline import Pipeline\n\nmodel = Pipeline([\n (\"preprocessor\", preprocessor),\n (\"classifier\", HistGradientBoostingClassifier(random_state=42))\n])", "_____no_output_____" ] ], [ [ "\nUse the previously defined model (called `model`) and using two nested `for`\nloops, make a search of the best combinations of the `learning_rate` and\n`max_leaf_nodes` parameters. In this regard, you will need to train and test\nthe model by setting the parameters. The evaluation of the model should be\nperformed using `cross_val_score` on the training set. We will use the\nfollowing parameters search:\n- `learning_rate` for the values 0.01, 0.1, 1 and 10. This parameter controls\n the ability of a new tree to correct the error of the previous sequence of\n trees\n- `max_leaf_nodes` for the values 3, 10, 30. This parameter controls the\n depth of each tree.", "_____no_output_____" ] ], [ [ "# solution\nfrom sklearn.model_selection import cross_val_score\n\nlearning_rate = [0.01, 0.1, 1, 10]\nmax_leaf_nodes = [3, 10, 30]\n\nbest_score = 0\nbest_params = {}\nfor lr in learning_rate:\n for mln in max_leaf_nodes:\n print(f\"Evaluating model with learning rate {lr:.3f}\"\n f\" and max leaf nodes {mln}... \", end=\"\")\n model.set_params(\n classifier__learning_rate=lr,\n classifier__max_leaf_nodes=mln\n )\n scores = cross_val_score(model, data_train, target_train, cv=2)\n mean_score = scores.mean()\n print(f\"score: {mean_score:.3f}\")\n if mean_score > best_score:\n best_score = mean_score\n best_params = {'learning-rate': lr, 'max leaf nodes': mln}\n print(f\"Found new best model with score {best_score:.3f}!\")\n\nprint(f\"The best accuracy obtained is {best_score:.3f}\")\nprint(f\"The best parameters found are:\\n {best_params}\")", "Evaluating model with learning rate 0.010 and max leaf nodes 3... score: 0.789\nFound new best model with score 0.789!\nEvaluating model with learning rate 0.010 and max leaf nodes 10... score: 0.813\nFound new best model with score 0.813!\nEvaluating model with learning rate 0.010 and max leaf nodes 30... score: 0.842\nFound new best model with score 0.842!\nEvaluating model with learning rate 0.100 and max leaf nodes 3... 
score: 0.847\nFound new best model with score 0.847!\nEvaluating model with learning rate 0.100 and max leaf nodes 10... score: 0.859\nFound new best model with score 0.859!\nEvaluating model with learning rate 0.100 and max leaf nodes 30... score: 0.857\nEvaluating model with learning rate 1.000 and max leaf nodes 3... score: 0.852\nEvaluating model with learning rate 1.000 and max leaf nodes 10... score: 0.833\nEvaluating model with learning rate 1.000 and max leaf nodes 30... score: 0.828\nEvaluating model with learning rate 10.000 and max leaf nodes 3... score: 0.288\nEvaluating model with learning rate 10.000 and max leaf nodes 10... score: 0.480\nEvaluating model with learning rate 10.000 and max leaf nodes 30... score: 0.639\nThe best accuracy obtained is 0.859\nThe best parameters found are:\n {'learning-rate': 0.1, 'max leaf nodes': 10}\n" ] ], [ [ "\nNow use the test set to score the model using the best parameters\nthat we found using cross-validation in the training set.", "_____no_output_____" ] ], [ [ "# solution\nbest_lr = best_params['learning-rate']\nbest_mln = best_params['max leaf nodes']\n\nmodel.set_params(classifier__learning_rate=best_lr,\n classifier__max_leaf_nodes=best_mln)\nmodel.fit(data_train, target_train)\ntest_score = model.score(data_test, target_test)\n\nprint(f\"Test score after the parameter tuning: {test_score:.3f}\")", "Test score after the parameter tuning: 0.870\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cb819886028ae3074fb34af7abcae04beefb2b36
988,550
ipynb
Jupyter Notebook
lit_review/notebooks/datasets_and_pipelines.ipynb
neurodatascience/watts_up_compute
1ed41e62690f99f699b44180208689cc19616bb7
[ "MIT" ]
null
null
null
lit_review/notebooks/datasets_and_pipelines.ipynb
neurodatascience/watts_up_compute
1ed41e62690f99f699b44180208689cc19616bb7
[ "MIT" ]
null
null
null
lit_review/notebooks/datasets_and_pipelines.ipynb
neurodatascience/watts_up_compute
1ed41e62690f99f699b44180208689cc19616bb7
[ "MIT" ]
null
null
null
4,555.529954
563,083
0.897994
[ [ [ "## Look at supply and demand of available datasets and ML models", "_____no_output_____" ] ], [ [ "import os\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns", "_____no_output_____" ], [ "dataset_regular = '../datasets/Dataset_evolution_regular.csv'\ndataset_consortia = '../datasets/Dataset_evolution_consortia.csv'\n\nfreesurfer_papers = '../datasets/FreeSurfer_papers.csv'\nDL_papers = '../datasets/DL_papers.csv'", "_____no_output_____" ], [ "datasets_regular_df = pd.read_csv(dataset_regular)\ndatasets_regular_df['Number of datasets'] = 1\ndatasets_regular_df['Dataset Type'] = 'regular'\n\ndatasets_consortia_df = pd.read_csv(dataset_consortia)\ndatasets_consortia_df['Dataset'] = datasets_consortia_df['Working group']\ndatasets_consortia_df['Dataset Type'] = 'consortia'\n\nuseful_cols = ['Dataset','Sample size', 'Year', 'Number of datasets', 'Dataset Type']\ndatasets_df = datasets_regular_df[useful_cols].append(datasets_consortia_df[useful_cols])\ndatasets_df.head()", "_____no_output_____" ], [ "plot_df = datasets_df.copy()\nplot_df['Year'] = plot_df['Year'].astype(int)\nplot_df['Sample size'] = plot_df['Sample size'].astype(int)\n\nsns.set(font_scale = 6)\npalette = ['deepskyblue','navy'] #['tomato','firebrick'] # sns.color_palette(\"husl\", 2)\nwith sns.axes_style(\"whitegrid\"):\n fig, ax1 = plt.subplots(figsize=(40,25),sharex=True,sharey=True)\n g = sns.scatterplot(x='Year',y='Sample size', hue='Dataset Type', size='Sample size', sizes=(1000,5000), data=plot_df, palette=palette,ax=ax1)\n \n g.grid(True,which=\"both\",ls=\"--\",c='lightgray') \n plt.title('Dataset Sizes Over the Years')\n\n g.set(yscale='log')\n # g.set(xlim=(1e6, 1e8))\n\n # EXTRACT CURRENT HANDLES AND LABELS\n h,l = ax1.get_legend_handles_labels() \n col_lgd = plt.legend(h[:3], l[:3], loc='upper left') \n col_lgd.legendHandles[1]._sizes = [1000]\n col_lgd.legendHandles[2]._sizes = [1000]\n \n # add model names as bubble labels\n def label_point(x, y, val, ax, x_shift=0, y_shift=0):\n a = pd.concat({'x': x, 'y': y, 'val': val}, axis=1)\n for i, point in a.iterrows():\n x_shift, y_shift = 0.1*np.random.randint(-3,3, 2)\n ax.text(point['x']+x_shift, point['y']+y_shift, str(point['val']), fontsize=32)\n\n label_point(plot_df['Year'], plot_df['Sample size'], plot_df['Dataset'], plt.gca())", "_____no_output_____" ], [ "freesurfer_papers_df = pd.read_csv(freesurfer_papers)\nDL_papers_df = pd.read_csv(DL_papers)\n\ncitation_df = pd.merge(freesurfer_papers_df, DL_papers_df, on='Year', how='left')\n\ncitation_df.head()", "_____no_output_____" ], [ "plot_df = citation_df[citation_df['Year']!=2021].copy()\n\npal = sns.color_palette(\"husl\", 2)\nwith sns.axes_style(\"whitegrid\"):\n fig, ax1 = plt.subplots(figsize=(40,10),sharex=True,sharey=True)\n sns.lineplot(x='Year',y='Total', marker='d', markersize=40, data=plot_df, linewidth = 20, color=pal[1], label='FreeSurfer')\n sns.lineplot(x='Year',y='N_AI-papers',marker='d', markersize=40, data=plot_df, linewidth = 20, color=pal[0], label='Machine-learning')\n\nplt.title('Number of Citations in Neuroimaging Studies', fontsize=80)", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
cb819c729c54c760a2ac7e3b8c39481de6c7f86c
27,627
ipynb
Jupyter Notebook
course_2/course_material/Part_7_Deep_Learning/S53_L379/TensorFlow_Minimal_Example_Exercise_4_Solution.ipynb
Alexander-Meldrum/learning-data-science
a87cf6be80c67a8d1b57a96c042bdf423ba0a142
[ "MIT" ]
null
null
null
course_2/course_material/Part_7_Deep_Learning/S53_L379/TensorFlow_Minimal_Example_Exercise_4_Solution.ipynb
Alexander-Meldrum/learning-data-science
a87cf6be80c67a8d1b57a96c042bdf423ba0a142
[ "MIT" ]
null
null
null
course_2/course_material/Part_7_Deep_Learning/S53_L379/TensorFlow_Minimal_Example_Exercise_4_Solution.ipynb
Alexander-Meldrum/learning-data-science
a87cf6be80c67a8d1b57a96c042bdf423ba0a142
[ "MIT" ]
null
null
null
52.622857
10,586
0.725486
[ [ [ "# Using the same code as before, please solve the following exercises\n \n 4. Examine the code where we plot the data. Study how we managed to get the value of the outputs. \n In a similar way, find get the value of the weights and the biases and print it. This exercise will help you comprehend the TensorFlow syntax\n \n \nUseful tip: When you change something, don't forget to RERUN all cells. This can be done easily by clicking:\nKernel -> Restart & Run All\nIf you don't do that, your algorithm will keep the OLD values of all parameters.\n\n## Solution\n\nSimilar to the code for the outputs:\nout = sess.run([outputs], \n feed_dict={inputs: training_data['inputs']})\n \nWe can \"catch\" the values of the weights and the biases following the code:\n\nw = sess.run([weights], \n feed_dict={inputs: training_data['inputs']})\n \nb = sess.run([biases], \n feed_dict={inputs: training_data['inputs']})\n \nNote that we don't need to feed targets, as we just need to feed input data. We can include the targets if we want to, but the result will be the same.\n\nAt the end we print the w and b to be able to observe their values.\n\nprint (w)\nprint (b)\n\nSolution at the bottom of the file.", "_____no_output_____" ], [ "### Import the relevant libraries", "_____no_output_____" ] ], [ [ "# We must always import the relevant libraries for our problem at hand. NumPy and TensorFlow are required for this example.\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport tensorflow as tf", "_____no_output_____" ] ], [ [ "### Data generation\n\nWe generate data using the exact same logic and code as the example from the previous notebook. The only difference now is that we save it to an npz file. Npz is numpy's file type which allows you to save numpy arrays into a single .npz file. We introduce this change because in machine learning most often: \n\n* you are given some data (csv, database, etc.)\n* you preprocess it into a desired format (later on we will see methods for preprocesing)\n* you save it into npz files (if you're working in Python) to access later\n\nNothing to worry about - this is literally saving your NumPy arrays into a file that you can later access, nothing more.", "_____no_output_____" ] ], [ [ "# First, we should declare a variable containing the size of the training set we want to generate.\nobservations = 1000\n\n# We will work with two variables as inputs. You can think about them as x1 and x2 in our previous examples.\n# We have picked x and z, since it is easier to differentiate them.\n# We generate them randomly, drawing from an uniform distribution. There are 3 arguments of this method (low, high, size).\n# The size of xs and zs is observations x 1. In this case: 1000 x 1.\nxs = np.random.uniform(low=-10, high=10, size=(observations,1))\nzs = np.random.uniform(-10, 10, (observations,1))\n\n# Combine the two dimensions of the input into one input matrix. \n# This is the X matrix from the linear model y = x*w + b.\n# column_stack is a Numpy method, which combines two matrices (vectors) into one.\ngenerated_inputs = np.column_stack((xs,zs))\n\n# We add a random small noise to the function i.e. 
f(x,z) = 2x - 3z + 5 + <small noise>\nnoise = np.random.uniform(-1, 1, (observations,1))\n\n# Produce the targets according to our f(x,z) = 2x - 3z + 5 + noise definition.\n# In this way, we are basically saying: the weights should be 2 and -3, while the bias is 5.\ngenerated_targets = 2*xs - 3*zs + 5 + noise\n\n# save into an npz file called \"TF_intro\"\nnp.savez('TF_intro', inputs=generated_inputs, targets=generated_targets)", "_____no_output_____" ] ], [ [ "## Solving with TensorFlow\n\n<i/>Note: This intro is just the basics of TensorFlow which has way more capabilities and depth than that.<i>", "_____no_output_____" ] ], [ [ "# The shape of the data we've prepared above. Think about it as: number of inputs, number of outputs.\ninput_size = 2\noutput_size = 1", "_____no_output_____" ] ], [ [ "### Outlining the model", "_____no_output_____" ] ], [ [ "# Here we define a basic TensorFlow object - the placeholder.\n# As before, we will feed the inputs and targets to the model. \n# In the TensorFlow context, we feed the data to the model THROUGH the placeholders. \n# The particular inputs and targets are contained in our .npz file.\n\n# The first None parameter of the placeholders' shape means that\n# this dimension could be of any length. That's since we are mainly interested in\n# the input size, i.e. how many input variables we have and not the number of samples (observations)\n# The number of input variables changes the MODEL itself, while the number of observations doesn't.\n# Remember that the weights and biases were independent of the number of samples, so the MODEL is independent.\n# Important: NO calculation happens at this point.\ninputs = tf.placeholder(tf.float32, [None, input_size])\ntargets = tf.placeholder(tf.float32, [None, output_size])\n\n# As before, we define our weights and biases.\n# They are the other basic TensorFlow object - a variable.\n# We feed data into placeholders and they have a different value for each iteration\n# Variables, however, preserve their values across iterations.\n# To sum up, data goes into placeholders; parameters go into variables.\n\n# We use the same random uniform initialization in [-0.1,0.1] as in the minimal example but using the TF syntax\n# Important: NO calculation happens at this point.\nweights = tf.Variable(tf.random_uniform([input_size, output_size], minval=-0.1, maxval=0.1))\nbiases = tf.Variable(tf.random_uniform([output_size], minval=-0.1, maxval=0.1))\n\n# We get the outputs following our linear combination: y = xw + b\n# Important: NO calculation happens at this point.\n# This line simply tells TensorFlow what rule to apply when we feed in the training data (below).\noutputs = tf.matmul(inputs, weights) + biases", "_____no_output_____" ] ], [ [ "### Choosing the objective function and the optimization method", "_____no_output_____" ] ], [ [ "# Again, we use a loss function, this time readily available, though.\n# mean_squared_error is the scaled L2-norm (per observation)\n# We divide by two to follow our earlier definitions. That doesn't really change anything.\nmean_loss = tf.losses.mean_squared_error(labels=targets, predictions=outputs) / 2.\n\n# Note that there also exists a function tf.nn.l2_loss. 
\n# tf.nn.l2_loss calculates the loss over all samples, instead of the average loss per sample.\n# Practically it's the same, a matter of preference.\n# The difference would be a smaller or larger learning rate to achieve the exact same result.\n\n# Instead of implementing Gradient Descent on our own, in TensorFlow we can simply state\n# \"Minimize the mean loss by using Gradient Descent with a given learning rate\"\n# Simple as that.\noptimize = tf.train.GradientDescentOptimizer(learning_rate=0.05).minimize(mean_loss)", "_____no_output_____" ] ], [ [ "### Prepare for execution", "_____no_output_____" ] ], [ [ "# So far we've defined the placeholders, variables, the loss function and the optimization method.\n# We have the structure for training, but we haven't trained anything yet.\n# The actual training (and subsequent implementation of the ML algorithm) happens inside sessions.\nsess = tf.InteractiveSession()", "_____no_output_____" ] ], [ [ "### Initializing variables", "_____no_output_____" ] ], [ [ "# Before we start training, we need to initialize our variables: the weights and biases.\n# There is a specific method for initializing called global_variables_initializer().\n# Let's declare a variable \"initializer\" that will do that.\ninitializer = tf.global_variables_initializer()\n\n# Time to initialize the variables.\nsess.run(initializer)", "_____no_output_____" ] ], [ [ "### Loading training data", "_____no_output_____" ] ], [ [ "# We finally load the training data we created above.\ntraining_data = np.load('TF_intro.npz')", "_____no_output_____" ] ], [ [ "### Learning", "_____no_output_____" ] ], [ [ "# As in the previous example, we train for a set number (100) of iterations over the dataset\nfor i in range(100):\n # This expression is a bit more complex but you'll learn to appreciate its power and\n # flexibility in the following lessons.\n # sess.run is the session's function to actually do something, anything.\n # Above, we used it to initialize the variables.\n # Here, we use it to feed the training data to the computational graph, defined by the feed_dict parameter\n # and run operations (already defined above), given as the first parameter (optimize, mean_loss).\n \n # So the line of code means: \"Run the optimize and mean_loss operations by filling the placeholder\n # objects with data from the feed_dict parameter\".\n # Curr_loss catches the output from the two operations.\n # Using \"_,\" we omit the first one, because optimize has no output (it's always \"None\"). 
\n # The second one catches the value of the mean_loss for the current run, thus curr_loss actually = mean_loss \n _, curr_loss = sess.run([optimize, mean_loss], \n feed_dict={inputs: training_data['inputs'], targets: training_data['targets']})\n \n # We print the current average loss\n print(curr_loss)", "249.164\n146.448\n87.6192\n53.6138\n33.7927\n22.1245\n15.1628\n10.9296\n8.28726\n6.57972\n5.42798\n4.61236\n4.00493\n3.53068\n3.14519\n2.8217\n2.54378\n2.301\n2.08651\n1.89558\n1.72476\n1.57146\n1.43359\n1.30943\n1.19752\n1.0966\n1.00556\n0.923408\n0.849273\n0.782365\n0.721974\n0.667465\n0.618263\n0.573851\n0.533762\n0.497576\n0.464912\n0.435426\n0.408811\n0.384787\n0.3631\n0.343525\n0.325854\n0.309904\n0.295506\n0.282509\n0.270778\n0.260188\n0.250629\n0.242\n0.234211\n0.22718\n0.220833\n0.215105\n0.209933\n0.205265\n0.201052\n0.197248\n0.193815\n0.190716\n0.187918\n0.185393\n0.183113\n0.181056\n0.179198\n0.177522\n0.176009\n0.174642\n0.173409\n0.172296\n0.171291\n0.170384\n0.169566\n0.168827\n0.16816\n0.167557\n0.167014\n0.166523\n0.16608\n0.165681\n0.16532\n0.164994\n0.1647\n0.164434\n0.164195\n0.163979\n0.163783\n0.163607\n0.163448\n0.163305\n0.163175\n0.163058\n0.162952\n0.162857\n0.162771\n0.162693\n0.162623\n0.16256\n0.162503\n0.162451\n" ] ], [ [ "### Plotting the data", "_____no_output_____" ] ], [ [ "# As before, we want to plot the last output vs targets after the training is supposedly over.\n# Same notation as above but this time we don't want to train anymore, and we are not interested\n# in the loss function value.\n# What we want, however, are the outputs. \n# Therefore, instead of the optimize and mean_loss operations, we pass the \"outputs\" as the only parameter.\nout = sess.run([outputs], \n feed_dict={inputs: training_data['inputs']})\n# The model is optimized, so the outputs are calculated based on the last form of the model\n\n# We have to np.squeeze the arrays in order to fit them to what the plot function expects.\n# Doesn't change anything as we cut dimensions of size 1 - just a technicality.\nplt.plot(np.squeeze(out), np.squeeze(training_data['targets']))\nplt.xlabel('outputs')\nplt.ylabel('targets')\nplt.show()\n \n# Voila - what you see should be exactly the same as in the previous notebook!\n# You probably don't see the point of TensorFlow now - it took us more lines of code\n# to achieve this simple result. However, once we go deeper in the next chapter,\n# TensorFlow will save us hundreds of lines of code.", "_____no_output_____" ], [ "w = sess.run([weights], \n feed_dict={inputs: training_data['inputs']})\n \nb = sess.run([biases], \n feed_dict={inputs: training_data['inputs']})\n\nprint (w)\nprint (b)", "[array([[ 2.00622678],\n [-2.99534726]], dtype=float32)]\n[array([ 4.95888424], dtype=float32)]\n" ] ] ]
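Note: the notebook above relies on the TensorFlow 1.x graph-and-session API (tf.placeholder, tf.InteractiveSession, tf.train.GradientDescentOptimizer), which is not available in TensorFlow 2.x. The sketch below is only a rough, non-authoritative equivalent for readers on newer versions: the file name TF_intro.npz, the learning rate of 0.05, and the 100 training passes come from the cells above, while the use of Keras layers and everything else is an assumption rather than part of the original notebook, and the printed numbers will not match exactly (Keras trains on mini-batches and does not halve the MSE).

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 2.x, where Keras is bundled

# Load the same training data that the data-generation cell saved to disk.
data = np.load('TF_intro.npz')

# One dense unit with two inputs reproduces the linear model y = x*w + b.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(2,))])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.05),
              loss='mean_squared_error')
model.fit(data['inputs'], data['targets'], epochs=100, verbose=0)

# Counterpart of sess.run([weights]) and sess.run([biases]) in the 1.x code:
weights, biases = model.layers[0].get_weights()
print(weights)  # typically close to [[2.], [-3.]]
print(biases)   # typically close to [5.]
```

Because the parameters live inside the layer object rather than in a session, get_weights() replaces the explicit feed_dict round-trip used in the exercise solution.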
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
cb81b35b608ca9f6a6fecd455daf7321781cb1a2
104,632
ipynb
Jupyter Notebook
6- Data Visualization/6- Data Visualization-iris.ipynb
2series/10-steps-to-become-a-data-scientist
ac94920864d922e855f92b7ba9cab3e5f1aeb4ef
[ "Apache-2.0" ]
6
2019-07-15T01:02:13.000Z
2020-01-29T06:57:29.000Z
6- Data Visualization/6- Data Visualization-iris.ipynb
2series/10-steps-to-become-a-data-scientist
ac94920864d922e855f92b7ba9cab3e5f1aeb4ef
[ "Apache-2.0" ]
null
null
null
6- Data Visualization/6- Data Visualization-iris.ipynb
2series/10-steps-to-become-a-data-scientist
ac94920864d922e855f92b7ba9cab3e5f1aeb4ef
[ "Apache-2.0" ]
3
2019-02-13T08:40:45.000Z
2019-07-11T21:20:35.000Z
57.521715
2,888
0.646628
[ [ [ "## <div style=\"text-align: center\"> 20 ML Algorithms from start to Finish for Iris</div>\n\n<div style=\"text-align: center\"> I want to solve<b> iris problem</b> a popular machine learning Dataset as a comprehensive workflow with python packages. \nAfter reading, you can use this workflow to solve other real problems and use it as a template to deal with <b>machine learning</b> problems.</div>\n![iris](https://image.ibb.co/gbH3ue/iris.png)\n<div style=\"text-align:center\">last update: <b>10/28/2018</b></div>\n\n\n\n>###### you may be interested have a look at it: [**10-steps-to-become-a-data-scientist**](https://github.com/mjbahmani/10-steps-to-become-a-data-scientist)\n\n\n---------------------------------------------------------------------\nyou can Fork and Run this kernel on Github:\n> ###### [ GitHub](https://github.com/mjbahmani/Machine-Learning-Workflow-with-Python)\n\n-------------------------------------------------------------------------------------------------------------\n **I hope you find this kernel helpful and some <font color=\"red\"><b>UPVOTES</b></font> would be very much appreciated**\n \n -----------", "_____no_output_____" ], [ "## Notebook Content\n* 1- [Introduction](#1)\n* 2- [Machine learning workflow](#2)\n* 2-1 [Real world Application Vs Competitions](#2)\n\n* 3- [Problem Definition](#3)\n* 3-1 [Problem feature](#4)\n* 3-2 [Aim](#5)\n* 3-3 [Variables](#6)\n* 4-[ Inputs & Outputs](#7)\n* 4-1 [Inputs ](#8)\n* 4-2 [Outputs](#9)\n* 5- [Installation](#10)\n* 5-1 [ jupyter notebook](#11)\n* 5-2[ kaggle kernel](#12)\n* 5-3 [Colab notebook](#13)\n* 5-4 [install python & packages](#14)\n* 5-5 [Loading Packages](#15)\n* 6- [Exploratory data analysis](#16)\n* 6-1 [Data Collection](#17)\n* 6-2 [Visualization](#18)\n* 6-2-1 [Scatter plot](#19)\n* 6-2-2 [Box](#20)\n* 6-2-3 [Histogram](#21)\n* 6-2-4 [Multivariate Plots](#22)\n* 6-2-5 [Violinplots](#23)\n* 6-2-6 [Pair plot](#24)\n* 6-2-7 [Kde plot](#25)\n* 6-2-8 [Joint plot](#26)\n* 6-2-9 [Andrews curves](#27)\n* 6-2-10 [Heatmap](#28)\n* 6-2-11 [Radviz](#29)\n* 6-3 [Data Preprocessing](#30)\n* 6-4 [Data Cleaning](#31)\n* 7- [Model Deployment](#32)\n* 7-1[ KNN](#33)\n* 7-2 [Radius Neighbors Classifier](#34)\n* 7-3 [Logistic Regression](#35)\n* 7-4 [Passive Aggressive Classifier](#36)\n* 7-5 [Naive Bayes](#37)\n* 7-6 [MultinomialNB](#38)\n* 7-7 [BernoulliNB](#39)\n* 7-8 [SVM](#40)\n* 7-9 [Nu-Support Vector Classification](#41)\n* 7-10 [Linear Support Vector Classification](#42)\n* 7-11 [Decision Tree](#43)\n* 7-12 [ExtraTreeClassifier](#44)\n* 7-13 [Neural network](#45)\n* 7-13-1 [What is a Perceptron?](#45)\n* 7-14 [RandomForest](#46)\n* 7-15 [Bagging classifier ](#47)\n* 7-16 [AdaBoost classifier](#48)\n* 7-17 [Gradient Boosting Classifier](#49)\n* 7-18 [Linear Discriminant Analysis](#50)\n* 7-19 [Quadratic Discriminant Analysis](#51)\n* 7-20 [Kmeans](#52)\n* 7-21 [Backpropagation](#53)\n* 8- [Conclusion](#54)\n* 10- [References](#55)", "_____no_output_____" ], [ " <a id=\"1\"></a> <br>\n## 1- Introduction\nThis is a **comprehensive ML techniques with python** , that I have spent for more than two months to complete it.\n\nit is clear that everyone in this community is familiar with IRIS dataset but if you need to review your information about the dataset please visit this [link](https://archive.ics.uci.edu/ml/datasets/iris).\n\nI have tried to help **beginners** in Kaggle how to face machine learning problems. 
and I think it is a great opportunity for who want to learn machine learning workflow with python completely.\nI have covered most of the methods that are implemented for iris until **2018**, you can start to learn and review your knowledge about ML with a simple dataset and try to learn and memorize the workflow for your journey in Data science world.\n\n## 1-1 Courses\n\nThere are alot of Online courses that can help you develop your knowledge, here I have just listed some of them:\n\n1. [Machine Learning Certification by Stanford University (Coursera)](https://www.coursera.org/learn/machine-learning/)\n\n2. [Machine Learning A-Z™: Hands-On Python & R In Data Science (Udemy)](https://www.udemy.com/machinelearning/)\n\n3. [Deep Learning Certification by Andrew Ng from deeplearning.ai (Coursera)](https://www.coursera.org/specializations/deep-learning)\n\n4. [Python for Data Science and Machine Learning Bootcamp (Udemy)](Python for Data Science and Machine Learning Bootcamp (Udemy))\n\n5. [Mathematics for Machine Learning by Imperial College London](https://www.coursera.org/specializations/mathematics-machine-learning)\n\n6. [Deep Learning A-Z™: Hands-On Artificial Neural Networks](https://www.udemy.com/deeplearning/)\n\n7. [Complete Guide to TensorFlow for Deep Learning Tutorial with Python](https://www.udemy.com/complete-guide-to-tensorflow-for-deep-learning-with-python/)\n\n8. [Data Science and Machine Learning Tutorial with Python – Hands On](https://www.udemy.com/data-science-and-machine-learning-with-python-hands-on/)\n\n9. [Machine Learning Certification by University of Washington](https://www.coursera.org/specializations/machine-learning)\n\n10. [Data Science and Machine Learning Bootcamp with R](https://www.udemy.com/data-science-and-machine-learning-bootcamp-with-r/)\n\n\n5- [https://www.kaggle.com/startupsci/titanic-data-science-solutions](https://www.kaggle.com/startupsci/titanic-data-science-solutions)\n\nI am open to getting your feedback for improving this **kernel**\n", "_____no_output_____" ], [ "<a id=\"2\"></a> <br>\n## 2- Machine Learning Workflow\nField of \tstudy \tthat \tgives\tcomputers\tthe\tability \tto\tlearn \twithout \tbeing\nexplicitly \tprogrammed.\n\nArthur\tSamuel, 1959\n\nIf you have already read some [machine learning books](https://towardsdatascience.com/list-of-free-must-read-machine-learning-books-89576749d2ff). You have noticed that there are different ways to stream data into machine learning.\n\nmost of these books share the following steps (checklist):\n* Define the Problem(Look at the big picture)\n* Specify Inputs & Outputs\n* Data Collection\n* Exploratory data analysis\n* Data Preprocessing\n* Model Design, Training, and Offline Evaluation\n* Model Deployment, Online Evaluation, and Monitoring\n* Model Maintenance, Diagnosis, and Retraining\n\n**You can see my workflow in the below image** :\n <img src=\"http://s9.picofile.com/file/8338227634/workflow.png\" />\n\n**you should\tfeel free\tto\tadapt \tthis\tchecklist \tto\tyour needs**", "_____no_output_____" ], [ "## 2-1 Real world Application Vs Competitions\n<img src=\"http://s9.picofile.com/file/8339956300/reallife.png\" height=\"600\" width=\"500\" />", "_____no_output_____" ], [ "<a id=\"3\"></a> <br>\n## 3- Problem Definition\nI think one of the important things when you start a new machine learning project is Defining your problem. 
that means you should understand business problem.( **Problem Formalization**)\n\nProblem Definition has four steps that have illustrated in the picture below:\n<img src=\"http://s8.picofile.com/file/8338227734/ProblemDefination.png\">\n<a id=\"4\"></a> <br>\n### 3-1 Problem Feature\nwe will use the classic Iris data set. This dataset contains information about three different types of Iris flowers:\n\n* Iris Versicolor\n* Iris Virginica\n* Iris Setosa\n\nThe data set contains measurements of four variables :\n\n* sepal length \n* sepal width\n* petal length \n* petal width\n \nThe Iris data set has a number of interesting features:\n\n1. One of the classes (Iris Setosa) is linearly separable from the other two. However, the other two classes are not linearly separable.\n\n2. There is some overlap between the Versicolor and Virginica classes, so it is unlikely to achieve a perfect classification rate.\n\n3. There is some redundancy in the four input variables, so it is possible to achieve a good solution with only three of them, or even (with difficulty) from two, but the precise choice of best variables is not obvious.\n\n**Why am I using iris dataset:**\n\n1- This is a good project because it is so well understood.\n\n2- Attributes are numeric so you have to figure out how to load and handle data.\n\n3- It is a classification problem, allowing you to practice with perhaps an easier type of supervised learning algorithm.\n\n4- It is a multi-class classification problem (multi-nominal) that may require some specialized handling.\n\n5- It only has 4 attributes and 150 rows, meaning it is small and easily fits into memory (and a screen or A4 page).\n\n6- All of the numeric attributes are in the same units and the same scale, not requiring any special scaling or transforms to get started.[5]\n\n7- we can define problem as clustering(unsupervised algorithm) project too.\n<a id=\"5\"></a> <br>\n### 3-2 Aim\nThe aim is to classify iris flowers among three species (setosa, versicolor or virginica) from measurements of length and width of sepals and petals\n<a id=\"6\"></a> <br>\n### 3-3 Variables\nThe variables are :\n**sepal_length**: Sepal length, in centimeters, used as input.\n**sepal_width**: Sepal width, in centimeters, used as input.\n**petal_length**: Petal length, in centimeters, used as input.\n**petal_width**: Petal width, in centimeters, used as input.\n**setosa**: Iris setosa, true or false, used as target.\n**versicolour**: Iris versicolour, true or false, used as target.\n**virginica**: Iris virginica, true or false, used as target.\n\n**<< Note >>**\n> You must answer the following question:\nHow does your company expact to use and benfit from your model.", "_____no_output_____" ], [ "<a id=\"7\"></a> <br>\n## 4- Inputs & Outputs\n<a id=\"8\"></a> <br>\n### 4-1 Inputs\n**Iris** is a very popular **classification** and **clustering** problem in machine learning and it is such as \"Hello world\" program when you start learning a new programming language. then I decided to apply Iris on 20 machine learning method on it.\nThe Iris flower data set or Fisher's Iris data set is a **multivariate data set** introduced by the British statistician and biologist Ronald Fisher in his 1936 paper The use of multiple measurements in taxonomic problems as an example of linear discriminant analysis. It is sometimes called Anderson's Iris data set because Edgar Anderson collected the data to quantify the morphologic variation of Iris flowers in three related species. 
Two of the three species were collected in the Gaspé Peninsula \"all from the same pasture, and picked on the same day and measured at the same time by the same person with the same apparatus\".\nThe data set consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica, and Iris versicolor). Four features were measured from each sample: the length and the width of the sepals and petals, in centimeters. Based on the combination of these four features, Fisher developed a linear discriminant model to distinguish the species from each other.\n\nAs a result, **iris dataset is used as the input of all algorithms**.\n<a id=\"9\"></a> <br>\n### 4-2 Outputs\nthe outputs for our algorithms totally depend on the type of classification or clustering algorithms.\nthe outputs can be the number of clusters or predict for new input.\n\n**setosa**: Iris setosa, true or false, used as target.\n**versicolour**: Iris versicolour, true or false, used as target.\n**virginica**: Iris virginica, true or false, used as a target.", "_____no_output_____" ], [ "<a id=\"10\"></a> <br>\n## 5-Installation\n#### Windows:\n* Anaconda (from https://www.continuum.io) is a free Python distribution for SciPy stack. It is also available for Linux and Mac.\n* Canopy (https://www.enthought.com/products/canopy/) is available as free as well as commercial distribution with full SciPy stack for Windows, Linux and Mac.\n* Python (x,y) is a free Python distribution with SciPy stack and Spyder IDE for Windows OS. (Downloadable from http://python-xy.github.io/)\n#### Linux\nPackage managers of respective Linux distributions are used to install one or more packages in SciPy stack.\n\nFor Ubuntu Users:\nsudo apt-get install python-numpy python-scipy python-matplotlibipythonipythonnotebook\npython-pandas python-sympy python-nose", "_____no_output_____" ], [ "<a id=\"11\"></a> <br>\n## 5-1 Jupyter notebook\nI strongly recommend installing **Python** and **Jupyter** using the **[Anaconda Distribution](https://www.anaconda.com/download/)**, which includes Python, the Jupyter Notebook, and other commonly used packages for scientific computing and data science.\n\nFirst, download Anaconda. We recommend downloading Anaconda’s latest Python 3 version.\n\nSecond, install the version of Anaconda which you downloaded, following the instructions on the download page.\n\nCongratulations, you have installed Jupyter Notebook! To run the notebook, run the following command at the Terminal (Mac/Linux) or Command Prompt (Windows):", "_____no_output_____" ], [ "> jupyter notebook\n> ", "_____no_output_____" ], [ "<a id=\"12\"></a> <br>\n## 5-2 Kaggle Kernel\nKaggle kernel is an environment just like you use jupyter notebook, it's an **extension** of the where in you are able to carry out all the functions of jupyter notebooks plus it has some added tools like forking et al.", "_____no_output_____" ], [ "<a id=\"13\"></a> <br>\n## 5-3 Colab notebook\n**Colaboratory** is a research tool for machine learning education and research. It’s a Jupyter notebook environment that requires no setup to use.\n### 5-3-1 What browsers are supported?\nColaboratory works with most major browsers, and is most thoroughly tested with desktop versions of Chrome and Firefox.\n### 5-3-2 Is it free to use?\nYes. Colaboratory is a research project that is free to use.\n### 5-3-3 What is the difference between Jupyter and Colaboratory?\nJupyter is the open source project on which Colaboratory is based. 
Colaboratory allows you to use and share Jupyter notebooks with others without having to download, install, or run anything on your own computer other than a browser.", "_____no_output_____" ], [ "<a id=\"15\"></a> <br>\n## 5-5 Loading Packages\nIn this kernel we are using the following packages:", "_____no_output_____" ], [ " <img src=\"http://s8.picofile.com/file/8338227868/packages.png\">\n", "_____no_output_____" ], [ "### 5-5-1 Import", "_____no_output_____" ] ], [ [ "from sklearn.cross_validation import train_test_split\nfrom sklearn.metrics import classification_report\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.decomposition import PCA\nimport matplotlib.pyplot as plt\nfrom pandas import get_dummies\nimport plotly.graph_objs as go\nfrom sklearn import datasets\nimport plotly.plotly as py\nimport seaborn as sns\nimport pandas as pd\nimport numpy as np\nimport matplotlib\nimport warnings\nimport sklearn\nimport scipy\nimport numpy\nimport json\nimport sys\nimport csv\nimport os", "_____no_output_____" ] ], [ [ "### 5-5-2 Print", "_____no_output_____" ] ], [ [ "print('matplotlib: {}'.format(matplotlib.__version__))\nprint('sklearn: {}'.format(sklearn.__version__))\nprint('scipy: {}'.format(scipy.__version__))\nprint('seaborn: {}'.format(sns.__version__))\nprint('pandas: {}'.format(pd.__version__))\nprint('numpy: {}'.format(np.__version__))\nprint('Python: {}'.format(sys.version))", "_____no_output_____" ], [ "#show plot inline\n%matplotlib inline", "_____no_output_____" ] ], [ [ "<a id=\"16\"></a> <br>\n## 6- Exploratory Data Analysis(EDA)\n In this section, you'll learn how to use graphical and numerical techniques to begin uncovering the structure of your data. \n \n* Which variables suggest interesting relationships?\n* Which observations are unusual?\n\nBy the end of the section, you'll be able to answer these questions and more, while generating graphics that are both insightful and beautiful. then We will review analytical and statistical operations:\n\n* 5-1 Data Collection\n* 5-2 Visualization\n* 5-3 Data Preprocessing\n* 5-4 Data Cleaning\n<img src=\"http://s9.picofile.com/file/8338476134/EDA.png\">", "_____no_output_____" ], [ "<a id=\"17\"></a> <br>\n## 6-1 Data Collection\n**Data collection** is the process of gathering and measuring data, information or any variables of interest in a standardized and established manner that enables the collector to answer or test hypothesis and evaluate outcomes of the particular collection.[techopedia]\n\n**Iris dataset** consists of 3 different types of irises’ (Setosa, Versicolour, and Virginica) petal and sepal length, stored in a 150x4 numpy.ndarray\n\nThe rows being the samples and the columns being: Sepal Length, Sepal Width, Petal Length and Petal Width.[6]\n", "_____no_output_____" ] ], [ [ "# import Dataset to play with it\ndataset = pd.read_csv('../input/Iris.csv')", "_____no_output_____" ] ], [ [ "**<< Note 1 >>**\n\n* Each row is an observation (also known as : sample, example, instance, record)\n* Each column is a feature (also known as: Predictor, attribute, Independent Variable, input, regressor, Covariate)", "_____no_output_____" ], [ "After loading the data via **pandas**, we should checkout what the content is, description and via the following:", "_____no_output_____" ] ], [ [ "type(dataset)", "_____no_output_____" ] ], [ [ "<a id=\"18\"></a> <br>\n## 6-2 Visualization\n**Data visualization** is the presentation of data in a pictorial or graphical format. 
It enables decision makers to see analytics presented visually, so they can grasp difficult concepts or identify new patterns.\n\nWith interactive visualization, you can take the concept a step further by using technology to drill down into charts and graphs for more detail, interactively changing what data you see and how it’s processed.[SAS]\n\n In this section I show you **11 plots** with **matplotlib** and **seaborn** that is listed in the blew picture:\n <img src=\"http://s8.picofile.com/file/8338475500/visualization.jpg\" />\n", "_____no_output_____" ], [ "<a id=\"19\"></a> <br>\n### 6-2-1 Scatter plot\n\nScatter plot Purpose To identify the type of relationship (if any) between two quantitative variables\n\n\n", "_____no_output_____" ] ], [ [ "# Modify the graph above by assigning each species an individual color.\nsns.FacetGrid(dataset, hue=\"Species\", size=5) \\\n .map(plt.scatter, \"SepalLengthCm\", \"SepalWidthCm\") \\\n .add_legend()\nplt.show()", "_____no_output_____" ] ], [ [ "<a id=\"20\"></a> <br>\n### 6-2-2 Box\nIn descriptive statistics, a **box plot** or boxplot is a method for graphically depicting groups of numerical data through their quartiles. Box plots may also have lines extending vertically from the boxes (whiskers) indicating variability outside the upper and lower quartiles, hence the terms box-and-whisker plot and box-and-whisker diagram.[wikipedia]", "_____no_output_____" ] ], [ [ "dataset.plot(kind='box', subplots=True, layout=(2,3), sharex=False, sharey=False)\nplt.figure()\n#This gives us a much clearer idea of the distribution of the input attributes:\n\n", "_____no_output_____" ], [ "# To plot the species data using a box plot:\n\nsns.boxplot(x=\"Species\", y=\"PetalLengthCm\", data=dataset )\nplt.show()", "_____no_output_____" ], [ "# Use Seaborn's striplot to add data points on top of the box plot \n# Insert jitter=True so that the data points remain scattered and not piled into a verticle line.\n# Assign ax to each axis, so that each plot is ontop of the previous axis. \n\nax= sns.boxplot(x=\"Species\", y=\"PetalLengthCm\", data=dataset)\nax= sns.stripplot(x=\"Species\", y=\"PetalLengthCm\", data=dataset, jitter=True, edgecolor=\"gray\")\nplt.show()", "_____no_output_____" ], [ "# Tweek the plot above to change fill and border color color using ax.artists.\n# Assing ax.artists a variable name, and insert the box number into the corresponding brackets\n\nax= sns.boxplot(x=\"Species\", y=\"PetalLengthCm\", data=dataset)\nax= sns.stripplot(x=\"Species\", y=\"PetalLengthCm\", data=dataset, jitter=True, edgecolor=\"gray\")\n\nboxtwo = ax.artists[2]\nboxtwo.set_facecolor('red')\nboxtwo.set_edgecolor('black')\nboxthree=ax.artists[1]\nboxthree.set_facecolor('yellow')\nboxthree.set_edgecolor('black')\n\nplt.show()", "_____no_output_____" ] ], [ [ "<a id=\"21\"></a> <br>\n### 6-2-3 Histogram\nWe can also create a **histogram** of each input variable to get an idea of the distribution.\n\n", "_____no_output_____" ] ], [ [ "# histograms\ndataset.hist(figsize=(15,20))\nplt.figure()", "_____no_output_____" ] ], [ [ "It looks like perhaps two of the input variables have a Gaussian distribution. This is useful to note as we can use algorithms that can exploit this assumption.\n\n", "_____no_output_____" ] ], [ [ "dataset[\"PetalLengthCm\"].hist();", "_____no_output_____" ] ], [ [ "<a id=\"22\"></a> <br>\n### 6-2-4 Multivariate Plots\nNow we can look at the interactions between the variables.\n\nFirst, let’s look at scatterplots of all pairs of attributes. 
This can be helpful to spot structured relationships between input variables.", "_____no_output_____" ] ], [ [ "\n# scatter plot matrix\npd.plotting.scatter_matrix(dataset,figsize=(10,10))\nplt.figure()", "_____no_output_____" ] ], [ [ "Note the diagonal grouping of some pairs of attributes. This suggests a high correlation and a predictable relationship.", "_____no_output_____" ], [ "<a id=\"23\"></a> <br>\n### 6-2-5 violinplots", "_____no_output_____" ] ], [ [ "# violinplots on petal-length for each species\nsns.violinplot(data=dataset,x=\"Species\", y=\"PetalLengthCm\")", "_____no_output_____" ] ], [ [ "<a id=\"24\"></a> <br>\n### 6-2-6 pairplot", "_____no_output_____" ] ], [ [ "# Using seaborn pairplot to see the bivariate relation between each pair of features\nsns.pairplot(dataset, hue=\"Species\")", "_____no_output_____" ] ], [ [ "From the plot, we can see that the species setosa is separataed from the other two across all feature combinations\n\nWe can also replace the histograms shown in the diagonal of the pairplot by kde.", "_____no_output_____" ] ], [ [ "# updating the diagonal elements in a pairplot to show a kde\nsns.pairplot(dataset, hue=\"Species\",diag_kind=\"kde\")", "_____no_output_____" ] ], [ [ "<a id=\"25\"></a> <br>\n### 6-2-7 kdeplot", "_____no_output_____" ] ], [ [ "# seaborn's kdeplot, plots univariate or bivariate density estimates.\n#Size can be changed by tweeking the value used\nsns.FacetGrid(dataset, hue=\"Species\", size=5).map(sns.kdeplot, \"PetalLengthCm\").add_legend()\nplt.show()", "_____no_output_____" ] ], [ [ "<a id=\"26\"></a> <br>\n### 6-2-8 jointplot", "_____no_output_____" ] ], [ [ "# Use seaborn's jointplot to make a hexagonal bin plot\n#Set desired size and ratio and choose a color.\nsns.jointplot(x=\"SepalLengthCm\", y=\"SepalWidthCm\", data=dataset, size=10,ratio=10, kind='hex',color='green')\nplt.show()", "_____no_output_____" ] ], [ [ "<a id=\"27\"></a> <br>\n### 6-2-9 andrews_curves", "_____no_output_____" ] ], [ [ "#In Pandas use Andrews Curves to plot and visualize data structure.\n#Each multivariate observation is transformed into a curve and represents the coefficients of a Fourier series.\n#This useful for detecting outliers in times series data.\n#Use colormap to change the color of the curves\n\nfrom pandas.tools.plotting import andrews_curves\nandrews_curves(dataset.drop(\"Id\", axis=1), \"Species\",colormap='rainbow')\nplt.show()", "_____no_output_____" ], [ "# we will use seaborn jointplot shows bivariate scatterplots and univariate histograms with Kernel density \n# estimation in the same figure\nsns.jointplot(x=\"SepalLengthCm\", y=\"SepalWidthCm\", data=dataset, size=6, kind='kde', color='#800000', space=0)", "_____no_output_____" ] ], [ [ "<a id=\"28\"></a> <br>\n### 6-2-10 Heatmap", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(7,4)) \nsns.heatmap(dataset.corr(),annot=True,cmap='cubehelix_r') #draws heatmap with input as the correlation matrix calculted by(iris.corr())\nplt.show()", "_____no_output_____" ] ], [ [ "<a id=\"29\"></a> <br>\n### 6-2-11 radviz", "_____no_output_____" ] ], [ [ "# A final multivariate visualization technique pandas has is radviz\n# Which puts each feature as a point on a 2D plane, and then simulates\n# having each sample attached to those points through a spring weighted\n# by the relative value for that feature\nfrom pandas.tools.plotting import radviz\nradviz(dataset.drop(\"Id\", axis=1), \"Species\")", "_____no_output_____" ] ], [ [ "### 6-2-12 Bar Plot", "_____no_output_____" ] ], [ [ 
"dataset['Species'].value_counts().plot(kind=\"bar\");", "_____no_output_____" ] ], [ [ "### 6-2-14 visualization with Plotly", "_____no_output_____" ] ], [ [ "import plotly.offline as py\nimport plotly.graph_objs as go\npy.init_notebook_mode(connected=True)\nfrom plotly import tools\nimport plotly.figure_factory as ff\niris = datasets.load_iris()\nX = iris.data[:, :2] # we only take the first two features.\nY = iris.target\n\nx_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5\ny_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5\ntrace = go.Scatter(x=X[:, 0],\n y=X[:, 1],\n mode='markers',\n marker=dict(color=np.random.randn(150),\n size=10,\n colorscale='Viridis',\n showscale=False))\n\nlayout = go.Layout(title='Training Points',\n xaxis=dict(title='Sepal length',\n showgrid=False),\n yaxis=dict(title='Sepal width',\n showgrid=False),\n )\n \nfig = go.Figure(data=[trace], layout=layout)", "_____no_output_____" ], [ "py.iplot(fig)", "_____no_output_____" ] ], [ [ "**<< Note >>**\n\n**Yellowbrick** is a suite of visual diagnostic tools called “Visualizers” that extend the Scikit-Learn API to allow human steering of the model selection process. In a nutshell, Yellowbrick combines scikit-learn with matplotlib in the best tradition of the scikit-learn documentation, but to produce visualizations for your models! ", "_____no_output_____" ], [ "### 6-2-13 Conclusion\nwe have used Python to apply data visualization tools to the Iris dataset. Color and size changes were made to the data points in scatterplots. I changed the border and fill color of the boxplot and violin, respectively.", "_____no_output_____" ], [ "<a id=\"30\"></a> <br>\n## 6-3 Data Preprocessing\n**Data preprocessing** refers to the transformations applied to our data before feeding it to the algorithm.\n \nData Preprocessing is a technique that is used to convert the raw data into a clean data set. In other words, whenever the data is gathered from different sources it is collected in raw format which is not feasible for the analysis.\nthere are plenty of steps for data preprocessing and we just listed some of them :\n* removing Target column (id)\n* Sampling (without replacement)\n* Making part of iris unbalanced and balancing (with undersampling and SMOTE)\n* Introducing missing values and treating them (replacing by average values)\n* Noise filtering\n* Data discretization\n* Normalization and standardization\n* PCA analysis\n* Feature selection (filter, embedded, wrapper)", "_____no_output_____" ], [ "## 6-3-1 Features\nFeatures:\n* numeric\n* categorical\n* ordinal\n* datetime\n* coordinates\n\nfind the type of features in titanic dataset\n<img src=\"http://s9.picofile.com/file/8339959442/titanic.png\" height=\"700\" width=\"600\" />", "_____no_output_____" ], [ "### 6-3-2 Explorer Dataset\n1- Dimensions of the dataset.\n\n2- Peek at the data itself.\n\n3- Statistical summary of all attributes.\n\n4- Breakdown of the data by the class variable.[7]\n\nDon’t worry, each look at the data is **one command**. 
These are useful commands that you can use again and again on future projects.", "_____no_output_____" ] ], [ [ "# shape\nprint(dataset.shape)", "_____no_output_____" ], [ "#columns*rows\ndataset.size", "_____no_output_____" ] ], [ [ "how many NA elements in every column\n", "_____no_output_____" ] ], [ [ "dataset.isnull().sum()", "_____no_output_____" ], [ "# remove rows that have NA's\ndataset = dataset.dropna()", "_____no_output_____" ] ], [ [ "\nWe can get a quick idea of how many instances (rows) and how many attributes (columns) the data contains with the shape property.\n\nYou should see 150 instances and 5 attributes:", "_____no_output_____" ], [ "for getting some information about the dataset you can use **info()** command", "_____no_output_____" ] ], [ [ "print(dataset.info())", "_____no_output_____" ] ], [ [ "you see number of unique item for Species with command below:", "_____no_output_____" ] ], [ [ "dataset['Species'].unique()", "_____no_output_____" ], [ "dataset[\"Species\"].value_counts()\n", "_____no_output_____" ] ], [ [ "to check the first 5 rows of the data set, we can use head(5).", "_____no_output_____" ] ], [ [ "dataset.head(5) ", "_____no_output_____" ] ], [ [ "to check out last 5 row of the data set, we use tail() function", "_____no_output_____" ] ], [ [ "dataset.tail() ", "_____no_output_____" ] ], [ [ "to pop up 5 random rows from the data set, we can use **sample(5)** function", "_____no_output_____" ] ], [ [ "dataset.sample(5) ", "_____no_output_____" ] ], [ [ "to give a statistical summary about the dataset, we can use **describe()", "_____no_output_____" ] ], [ [ "dataset.describe() ", "_____no_output_____" ] ], [ [ "to check out how many null info are on the dataset, we can use **isnull().sum()", "_____no_output_____" ] ], [ [ "dataset.isnull().sum()", "_____no_output_____" ], [ "dataset.groupby('Species').count()", "_____no_output_____" ] ], [ [ "to print dataset **columns**, we can use columns atribute", "_____no_output_____" ] ], [ [ "dataset.columns", "_____no_output_____" ] ], [ [ "**<< Note 2 >>**\nin pandas's data frame you can perform some query such as \"where\"", "_____no_output_____" ] ], [ [ "dataset.where(dataset ['Species']=='Iris-setosa')", "_____no_output_____" ] ], [ [ "as you can see in the below in python, it is so easy perform some query on the dataframe:", "_____no_output_____" ] ], [ [ "dataset[dataset['SepalLengthCm']>7.2]", "_____no_output_____" ], [ "# Seperating the data into dependent and independent variables\nX = dataset.iloc[:, :-1].values\ny = dataset.iloc[:, -1].values", "_____no_output_____" ] ], [ [ "**<< Note >>**\n>**Preprocessing and generation pipelines depend on a model type**", "_____no_output_____" ], [ "<a id=\"31\"></a> <br>\n## 6-4 Data Cleaning\nWhen dealing with real-world data, dirty data is the norm rather than the exception. We continuously need to predict correct values, impute missing ones, and find links between various data artefacts such as schemas and records. We need to stop treating data cleaning as a piecemeal exercise (resolving different types of errors in isolation), and instead leverage all signals and resources (such as constraints, available statistics, and dictionaries) to accurately predict corrective actions.\n\nThe primary goal of data cleaning is to detect and remove errors and **anomalies** to increase the value of data in analytics and decision making. While it has been the focus of many researchers for several years, individual problems have been addressed separately. 
These include missing value imputation, outliers detection, transformations, integrity constraints violations detection and repair, consistent query answering, deduplication, and many other related problems such as profiling and constraints mining.[8]", "_____no_output_____" ] ], [ [ "cols = dataset.columns\nfeatures = cols[0:4]\nlabels = cols[4]\nprint(features)\nprint(labels)", "_____no_output_____" ], [ "#Well conditioned data will have zero mean and equal variance\n#We get this automattically when we calculate the Z Scores for the data\n\ndata_norm = pd.DataFrame(dataset)\n\nfor feature in features:\n dataset[feature] = (dataset[feature] - dataset[feature].mean())/dataset[feature].std()\n\n#Show that should now have zero mean\nprint(\"Averages\")\nprint(dataset.mean())\n\nprint(\"\\n Deviations\")\n#Show that we have equal variance\nprint(pow(dataset.std(),2))", "_____no_output_____" ], [ "#Shuffle The data\nindices = data_norm.index.tolist()\nindices = np.array(indices)\nnp.random.shuffle(indices)\n", "_____no_output_____" ], [ "# One Hot Encode as a dataframe\nfrom sklearn.cross_validation import train_test_split\ny = get_dummies(y)\n\n# Generate Training and Validation Sets\nX_train, X_test, y_train, y_test = train_test_split(X,y, test_size=.3)\n\n# Convert to np arrays so that we can use with TensorFlow\nX_train = np.array(X_train).astype(np.float32)\nX_test = np.array(X_test).astype(np.float32)\ny_train = np.array(y_train).astype(np.float32)\ny_test = np.array(y_test).astype(np.float32)", "_____no_output_____" ], [ "#Check to make sure split still has 4 features and 3 labels\nprint(X_train.shape, y_train.shape)\nprint(X_test.shape, y_test.shape)", "_____no_output_____" ] ], [ [ "<a id=\"32\"></a> <br>\n## 7- Model Deployment\nIn this section have been applied more than **20 learning algorithms** that play an important rule in your experiences and improve your knowledge in case of ML technique.\n\n> **<< Note 3 >>** : The results shown here may be slightly different for your analysis because, for example, the neural network algorithms use random number generators for fixing the initial value of the weights (starting points) of the neural networks, which often result in obtaining slightly different (local minima) solutions each time you run the analysis. 
Also note that changing the seed for the random number generator used to create the train, test, and validation samples can change your results.", "_____no_output_____" ], [ "## Families of ML algorithms\nThere are several categories for machine learning algorithms, below are some of these categories:\n* Linear\n * Linear Regression\n * Logistic Regression\n * Support Vector Machines\n* Tree-Based\n * Decision Tree\n * Random Forest\n * GBDT\n* KNN\n* Neural Networks\n\n-----------------------------\nAnd if we want to categorize ML algorithms with the type of learning, there are below type:\n* Classification\n\n * k-Nearest \tNeighbors\n * LinearRegression\n * SVM\n * DT \n * NN\n \n* clustering\n\n * K-means\n * HCA\n * Expectation Maximization\n \n* Visualization \tand\tdimensionality \treduction:\n\n * Principal \tComponent \tAnalysis(PCA)\n * Kernel PCA\n * Locally -Linear\tEmbedding \t(LLE)\n * t-distributed\tStochastic\tNeighbor\tEmbedding \t(t-SNE)\n \n* Association \trule\tlearning\n\n * Apriori\n * Eclat\n* Semisupervised learning\n* Reinforcement Learning\n * Q-learning\n* Batch learning & Online learning\n* Ensemble Learning\n\n**<< Note >>**\n> Here is no method which outperforms all others for all tasks\n\n", "_____no_output_____" ], [ "<a id=\"33\"></a> <br>\n## Prepare Features & Targets\nFirst of all seperating the data into dependent(Feature) and independent(Target) variables.\n\n**<< Note 4 >>**\n* X==>>Feature\n* y==>>Target", "_____no_output_____" ] ], [ [ "\nX = dataset.iloc[:, :-1].values\ny = dataset.iloc[:, -1].values\n\n# Splitting the dataset into the Training set and Test set\nfrom sklearn.cross_validation import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)", "_____no_output_____" ] ], [ [ "## Accuracy and precision\n* **precision** : \n\nIn pattern recognition, information retrieval and binary classification, precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances, \n* **recall** : \n\nrecall is the fraction of relevant instances that have been retrieved over the total amount of relevant instances. \n* **F-score** :\n\nthe F1 score is a measure of a test's accuracy. It considers both the precision p and the recall r of the test to compute the score: p is the number of correct positive results divided by the number of all positive results returned by the classifier, and r is the number of correct positive results divided by the number of all relevant samples (all samples that should have been identified as positive). The F1 score is the harmonic average of the precision and recall, where an F1 score reaches its best value at 1 (perfect precision and recall) and worst at 0.\n**What is the difference between accuracy and precision?\n\"Accuracy\" and \"precision\" are general terms throughout science. A good way to internalize the difference are the common \"bullseye diagrams\". In machine learning/statistics as a whole, accuracy vs. precision is analogous to bias vs. variance.", "_____no_output_____" ], [ "<a id=\"33\"></a> <br>\n## 7-1 K-Nearest Neighbours\nIn **Machine Learning**, the **k-nearest neighbors algorithm** (k-NN) is a non-parametric method used for classification and regression. In both cases, the input consists of the k closest training examples in the feature space. The output depends on whether k-NN is used for classification or regression:\n\nIn k-NN classification, the output is a class membership. 
An object is classified by a majority vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor.\nIn k-NN regression, the output is the property value for the object. This value is the average of the values of its k nearest neighbors.\nk-NN is a type of instance-based learning, or lazy learning, where the function is only approximated locally and all computation is deferred until classification. The k-NN algorithm is among the simplest of all machine learning algorithms.", "_____no_output_____" ] ], [ [ "# K-Nearest Neighbours\nfrom sklearn.neighbors import KNeighborsClassifier\n\nModel = KNeighborsClassifier(n_neighbors=8)\nModel.fit(X_train, y_train)\n\ny_pred = Model.predict(X_test)\n\n# Summary of the predictions made by the classifier\nprint(classification_report(y_test, y_pred))\nprint(confusion_matrix(y_test, y_pred))\n# Accuracy score\n\nprint('accuracy is',accuracy_score(y_pred,y_test))", "_____no_output_____" ] ], [ [ "<a id=\"34\"></a> <br>\n## 7-2 Radius Neighbors Classifier\nClassifier implementing a **vote** among neighbors within a given **radius**\n\nIn scikit-learn **RadiusNeighborsClassifier** is very similar to **KNeighborsClassifier** with the exception of two parameters. First, in RadiusNeighborsClassifier we need to specify the radius of the fixed area used to determine if an observation is a neighbor using radius. Unless there is some substantive reason for setting radius to some value, it is best to treat it like any other hyperparameter and tune it during model selection. The second useful parameter is outlier_label, which indicates what label to give an observation that has no observations within the radius - which itself can often be a useful tool for identifying outliers.", "_____no_output_____" ] ], [ [ "from sklearn.neighbors import RadiusNeighborsClassifier\nModel=RadiusNeighborsClassifier(radius=8.0)\nModel.fit(X_train,y_train)\ny_pred=Model.predict(X_test)\n#summary of the predictions made by the classifier\nprint(classification_report(y_test,y_pred))\nprint(confusion_matrix(y_test,y_pred))\n#Accouracy score\nprint('accuracy is ', accuracy_score(y_test,y_pred))", "_____no_output_____" ] ], [ [ "<a id=\"35\"></a> <br>\n## 7-3 Logistic Regression\nLogistic regression is the appropriate regression analysis to conduct when the dependent variable is **dichotomous** (binary). Like all regression analyses, the logistic regression is a **predictive analysis**.\n\nIn statistics, the logistic model (or logit model) is a widely used statistical model that, in its basic form, uses a logistic function to model a binary dependent variable; many more complex extensions exist. In regression analysis, logistic regression (or logit regression) is estimating the parameters of a logistic model; it is a form of binomial regression. 
Mathematically, a binary logistic model has a dependent variable with two possible values, such as pass/fail, win/lose, alive/dead or healthy/sick; these are represented by an indicator variable, where the two values are labeled \"0\" and \"1\"", "_____no_output_____" ] ], [ [ "# LogisticRegression\nfrom sklearn.linear_model import LogisticRegression\nModel = LogisticRegression()\nModel.fit(X_train, y_train)\n\ny_pred = Model.predict(X_test)\n\n# Summary of the predictions made by the classifier\nprint(classification_report(y_test, y_pred))\nprint(confusion_matrix(y_test, y_pred))\n# Accuracy score\nprint('accuracy is',accuracy_score(y_pred,y_test))", "_____no_output_____" ] ], [ [ "<a id=\"36\"></a> <br>\n## 7-4 Passive Aggressive Classifier", "_____no_output_____" ] ], [ [ "from sklearn.linear_model import PassiveAggressiveClassifier\nModel = PassiveAggressiveClassifier()\nModel.fit(X_train, y_train)\n\ny_pred = Model.predict(X_test)\n\n# Summary of the predictions made by the classifier\nprint(classification_report(y_test, y_pred))\nprint(confusion_matrix(y_test, y_pred))\n# Accuracy score\nprint('accuracy is',accuracy_score(y_pred,y_test))", "_____no_output_____" ] ], [ [ "<a id=\"37\"></a> <br>\n## 7-5 Naive Bayes\nIn machine learning, naive Bayes classifiers are a family of simple \"**probabilistic classifiers**\" based on applying Bayes' theorem with strong (naive) independence assumptions between the features.", "_____no_output_____" ] ], [ [ "# Naive Bayes\nfrom sklearn.naive_bayes import GaussianNB\nModel = GaussianNB()\nModel.fit(X_train, y_train)\n\ny_pred = Model.predict(X_test)\n\n# Summary of the predictions made by the classifier\nprint(classification_report(y_test, y_pred))\nprint(confusion_matrix(y_test, y_pred))\n# Accuracy score\nprint('accuracy is',accuracy_score(y_pred,y_test))", "_____no_output_____" ] ], [ [ "<a id=\"39\"></a> <br>\n## 7-7 BernoulliNB\nLike MultinomialNB, this classifier is suitable for **discrete data**. The difference is that while MultinomialNB works with occurrence counts, BernoulliNB is designed for binary/boolean features.", "_____no_output_____" ] ], [ [ "# BernoulliNB\nfrom sklearn.naive_bayes import BernoulliNB\nModel = BernoulliNB()\nModel.fit(X_train, y_train)\n\ny_pred = Model.predict(X_test)\n\n# Summary of the predictions made by the classifier\nprint(classification_report(y_test, y_pred))\nprint(confusion_matrix(y_test, y_pred))\n# Accuracy score\nprint('accuracy is',accuracy_score(y_pred,y_test))", "_____no_output_____" ] ], [ [ "<a id=\"40\"></a> <br>\n## 7-8 SVM\n\nThe advantages of support vector machines are:\n* Effective in high dimensional spaces.\n* Still effective in cases where number of dimensions is greater than the number of samples. \n* Uses a subset of training points in the decision function (called support vectors), so it is also memory efficient.\n* Versatile: different Kernel functions can be specified for the decision function. 
Common kernels are provided, but it is also possible to specify custom kernels.\n\nThe disadvantages of support vector machines include:\n\n* If the number of features is much greater than the number of samples, avoid over-fitting in choosing Kernel functions and regularization term is crucial.\n* SVMs do not directly provide probability estimates, these are calculated using an expensive five-fold cross-validation", "_____no_output_____" ] ], [ [ "# Support Vector Machine\nfrom sklearn.svm import SVC\n\nModel = SVC()\nModel.fit(X_train, y_train)\n\ny_pred = Model.predict(X_test)\n\n# Summary of the predictions made by the classifier\nprint(classification_report(y_test, y_pred))\nprint(confusion_matrix(y_test, y_pred))\n# Accuracy score\n\nprint('accuracy is',accuracy_score(y_pred,y_test))", "_____no_output_____" ] ], [ [ "<a id=\"41\"></a> <br>\n## 7-9 Nu-Support Vector Classification\n\n> Similar to SVC but uses a parameter to control the number of support vectors.", "_____no_output_____" ] ], [ [ "# Support Vector Machine's \nfrom sklearn.svm import NuSVC\n\nModel = NuSVC()\nModel.fit(X_train, y_train)\n\ny_pred = Model.predict(X_test)\n\n# Summary of the predictions made by the classifier\nprint(classification_report(y_test, y_pred))\nprint(confusion_matrix(y_test, y_pred))\n# Accuracy score\n\nprint('accuracy is',accuracy_score(y_pred,y_test))", "_____no_output_____" ] ], [ [ "<a id=\"42\"></a> <br>\n## 7-10 Linear Support Vector Classification\n\nSimilar to **SVC** with parameter kernel=’linear’, but implemented in terms of liblinear rather than libsvm, so it has more flexibility in the choice of penalties and loss functions and should scale better to large numbers of samples.", "_____no_output_____" ] ], [ [ "# Linear Support Vector Classification\nfrom sklearn.svm import LinearSVC\n\nModel = LinearSVC()\nModel.fit(X_train, y_train)\n\ny_pred = Model.predict(X_test)\n\n# Summary of the predictions made by the classifier\nprint(classification_report(y_test, y_pred))\nprint(confusion_matrix(y_test, y_pred))\n# Accuracy score\n\nprint('accuracy is',accuracy_score(y_pred,y_test))", "_____no_output_____" ] ], [ [ "<a id=\"43\"></a> <br>\n## 7-11 Decision Tree\nDecision Trees (DTs) are a non-parametric supervised learning method used for **classification** and **regression**. The goal is to create a model that predicts the value of a target variable by learning simple **decision rules** inferred from the data features.", "_____no_output_____" ] ], [ [ "# Decision Tree's\nfrom sklearn.tree import DecisionTreeClassifier\n\nModel = DecisionTreeClassifier()\n\nModel.fit(X_train, y_train)\n\ny_pred = Model.predict(X_test)\n\n# Summary of the predictions made by the classifier\nprint(classification_report(y_test, y_pred))\nprint(confusion_matrix(y_test, y_pred))\n# Accuracy score\nprint('accuracy is',accuracy_score(y_pred,y_test))", "_____no_output_____" ] ], [ [ "<a id=\"44\"></a> <br>\n## 7-12 ExtraTreeClassifier\nAn extremely randomized tree classifier.\n\nExtra-trees differ from classic decision trees in the way they are built. When looking for the best split to separate the samples of a node into two groups, random splits are drawn for each of the **max_features** randomly selected features and the best split among those is chosen. 
When max_features is set 1, this amounts to building a totally random decision tree.\n\n**Warning**: Extra-trees should only be used within ensemble methods.", "_____no_output_____" ] ], [ [ "# ExtraTreeClassifier\nfrom sklearn.tree import ExtraTreeClassifier\n\nModel = ExtraTreeClassifier()\n\nModel.fit(X_train, y_train)\n\ny_pred = Model.predict(X_test)\n\n# Summary of the predictions made by the classifier\nprint(classification_report(y_test, y_pred))\nprint(confusion_matrix(y_test, y_pred))\n# Accuracy score\nprint('accuracy is',accuracy_score(y_pred,y_test))", "_____no_output_____" ] ], [ [ "<a id=\"45\"></a> <br>\n## 7-13 Neural network\n\nI have used multi-layer Perceptron classifier.\nThis model optimizes the log-loss function using **LBFGS** or **stochastic gradient descent**.", "_____no_output_____" ], [ "## 7-13-1 What is a Perceptron?", "_____no_output_____" ], [ "There are many online examples and tutorials on perceptrons and learning. Here is a list of some articles:\n- [Wikipedia on Perceptrons](https://en.wikipedia.org/wiki/Perceptron)\n- Jurafsky and Martin (ed. 3), Chapter 8", "_____no_output_____" ], [ "This is an example that I have taken from a draft of the 3rd edition of Jurafsky and Martin, with slight modifications:\nWe import *numpy* and use its *exp* function. We could use the same function from the *math* module, or some other module like *scipy*. The *sigmoid* function is defined as in the textbook:\n", "_____no_output_____" ] ], [ [ "import numpy as np\n\ndef sigmoid(z):\n return 1 / (1 + np.exp(-z))", "_____no_output_____" ] ], [ [ "Our example data, **weights** $w$, **bias** $b$, and **input** $x$ are defined as:", "_____no_output_____" ] ], [ [ "w = np.array([0.2, 0.3, 0.8])\nb = 0.5\nx = np.array([0.5, 0.6, 0.1])", "_____no_output_____" ] ], [ [ "Our neural unit would compute $z$ as the **dot-product** $w \\cdot x$ and add the **bias** $b$ to it. The sigmoid function defined above will convert this $z$ value to the **activation value** $a$ of the unit:", "_____no_output_____" ] ], [ [ "z = w.dot(x) + b\nprint(\"z:\", z)\nprint(\"a:\", sigmoid(z))", "_____no_output_____" ] ], [ [ "### The XOR Problem\nThe power of neural units comes from combining them into larger networks. 
Minsky and Papert (1969): A single neural unit cannot compute the simple logical function XOR.\n\nThe task is to implement a simple **perceptron** to compute logical operations like AND, OR, and XOR.\n\n- Input: $x_1$ and $x_2$\n- Bias: $b = -1$ for AND; $b = 0$ for OR\n- Weights: $w = [1, 1]$\n\nwith the following activation function:\n\n$$\ny = \begin{cases}\n \ 0 & \quad \text{if } w \cdot x + b \leq 0\\\n \ 1 & \quad \text{if } w \cdot x + b > 0\n \end{cases}\n$$", "_____no_output_____" ], [ "We can define this activation function in Python as:", "_____no_output_____" ] ], [ [ "def activation(z):\n    if z > 0:\n        return 1\n    return 0", "_____no_output_____" ] ], [ [ "For AND we could implement a perceptron as:", "_____no_output_____" ] ], [ [ "w = np.array([1, 1])\nb = -1\nx = np.array([0, 0])\nprint(\"0 AND 0:\", activation(w.dot(x) + b))\nx = np.array([1, 0])\nprint(\"1 AND 0:\", activation(w.dot(x) + b))\nx = np.array([0, 1])\nprint(\"0 AND 1:\", activation(w.dot(x) + b))\nx = np.array([1, 1])\nprint(\"1 AND 1:\", activation(w.dot(x) + b))", "_____no_output_____" ] ], [ [ "For OR we could implement a perceptron as:", "_____no_output_____" ] ], [ [ "w = np.array([1, 1])\nb = 0\nx = np.array([0, 0])\nprint(\"0 OR 0:\", activation(w.dot(x) + b))\nx = np.array([1, 0])\nprint(\"1 OR 0:\", activation(w.dot(x) + b))\nx = np.array([0, 1])\nprint(\"0 OR 1:\", activation(w.dot(x) + b))\nx = np.array([1, 1])\nprint(\"1 OR 1:\", activation(w.dot(x) + b))", "_____no_output_____" ] ], [ [ "There is no way to implement a perceptron for XOR this way.", "_____no_output_____" ], [ "Now let's see our prediction for the Iris data.", "_____no_output_____" ] ], [ [ "from sklearn.neural_network import MLPClassifier\nModel=MLPClassifier()\nModel.fit(X_train,y_train)\ny_pred=Model.predict(X_test)\n# Summary of the predictions\nprint(classification_report(y_test,y_pred))\nprint(confusion_matrix(y_test,y_pred))\n#Accuracy Score\nprint('accuracy is ',accuracy_score(y_pred,y_test))", "_____no_output_____" ] ], [ [ "<a id=\"46\"></a> <br>\n## 7-14 RandomForest\nA random forest is a meta estimator that **fits a number of decision tree classifiers** on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. \n\nThe sub-sample size is always the same as the original input sample size but the samples are drawn with replacement if bootstrap=True (default).", "_____no_output_____" ] ], [ [ "from sklearn.ensemble import RandomForestClassifier\nModel=RandomForestClassifier(max_depth=2)\nModel.fit(X_train,y_train)\ny_pred=Model.predict(X_test)\nprint(classification_report(y_test,y_pred))\nprint(confusion_matrix(y_pred,y_test))\n#Accuracy Score\nprint('accuracy is ',accuracy_score(y_pred,y_test))", "_____no_output_____" ] ], [ [ "<a id=\"47\"></a> <br>\n## 7-15 Bagging classifier \nA Bagging classifier is an ensemble **meta-estimator** that fits base classifiers each on random subsets of the original dataset and then aggregates their individual predictions (either by voting or by averaging) to form a final prediction. Such a meta-estimator can typically be used as a way to reduce the variance of a black-box estimator (e.g., a decision tree), by introducing randomization into its construction procedure and then making an ensemble out of it.\n\nThis algorithm encompasses several works from the literature. When random subsets of the dataset are drawn as random subsets of the samples, then this algorithm is known as Pasting.
If samples are drawn with replacement, then the method is known as Bagging . When random subsets of the dataset are drawn as random subsets of the features, then the method is known as Random Subspaces . Finally, when base estimators are built on subsets of both samples and features, then the method is known as Random Patches .[http://scikit-learn.org]", "_____no_output_____" ] ], [ [ "from sklearn.ensemble import BaggingClassifier\nModel=BaggingClassifier()\nModel.fit(X_train,y_train)\ny_pred=Model.predict(X_test)\nprint(classification_report(y_test,y_pred))\nprint(confusion_matrix(y_pred,y_test))\n#Accuracy Score\nprint('accuracy is ',accuracy_score(y_pred,y_test))", "_____no_output_____" ] ], [ [ "<a id=\"48\"></a> <br>\n## 7-16 AdaBoost classifier\n\nAn AdaBoost classifier is a meta-estimator that begins by fitting a classifier on the original dataset and then fits additional copies of the classifier on the same dataset but where the weights of incorrectly classified instances are adjusted such that subsequent classifiers focus more on difficult cases.\nThis class implements the algorithm known as **AdaBoost-SAMME** .", "_____no_output_____" ] ], [ [ "from sklearn.ensemble import AdaBoostClassifier\nModel=AdaBoostClassifier()\nModel.fit(X_train,y_train)\ny_pred=Model.predict(X_test)\nprint(classification_report(y_test,y_pred))\nprint(confusion_matrix(y_pred,y_test))\n#Accuracy Score\nprint('accuracy is ',accuracy_score(y_pred,y_test))", "_____no_output_____" ] ], [ [ "<a id=\"49\"></a> <br>\n## 7-17 Gradient Boosting Classifier\nGB builds an additive model in a forward stage-wise fashion; it allows for the optimization of arbitrary differentiable loss functions.", "_____no_output_____" ] ], [ [ "from sklearn.ensemble import GradientBoostingClassifier\nModel=GradientBoostingClassifier()\nModel.fit(X_train,y_train)\ny_pred=Model.predict(X_test)\nprint(classification_report(y_test,y_pred))\nprint(confusion_matrix(y_pred,y_test))\n#Accuracy Score\nprint('accuracy is ',accuracy_score(y_pred,y_test))", "_____no_output_____" ] ], [ [ "<a id=\"50\"></a> <br>\n## 7-18 Linear Discriminant Analysis\nLinear Discriminant Analysis (discriminant_analysis.LinearDiscriminantAnalysis) and Quadratic Discriminant Analysis (discriminant_analysis.QuadraticDiscriminantAnalysis) are two classic classifiers, with, as their names suggest, a **linear and a quadratic decision surface**, respectively.\n\nThese classifiers are attractive because they have closed-form solutions that can be easily computed, are inherently multiclass, have proven to work well in practice, and have no **hyperparameters** to tune.", "_____no_output_____" ] ], [ [ "from sklearn.discriminant_analysis import LinearDiscriminantAnalysis\nModel=LinearDiscriminantAnalysis()\nModel.fit(X_train,y_train)\ny_pred=Model.predict(X_test)\nprint(classification_report(y_test,y_pred))\nprint(confusion_matrix(y_pred,y_test))\n#Accuracy Score\nprint('accuracy is ',accuracy_score(y_pred,y_test))", "_____no_output_____" ] ], [ [ "<a id=\"51\"></a> <br>\n## 7-19 Quadratic Discriminant Analysis\nA classifier with a quadratic decision boundary, generated by fitting class conditional densities to the data and using Bayes’ rule.\n\nThe model fits a **Gaussian** density to each class.", "_____no_output_____" ] ], [ [ "from sklearn.discriminant_analysis import 
QuadraticDiscriminantAnalysis\nModel=QuadraticDiscriminantAnalysis()\nModel.fit(X_train,y_train)\ny_pred=Model.predict(X_test)\nprint(classification_report(y_test,y_pred))\nprint(confusion_matrix(y_pred,y_test))\n#Accuracy Score\nprint('accuracy is ',accuracy_score(y_pred,y_test))", "_____no_output_____" ] ], [ [ "<a id=\"52\"></a> <br>\n## 7-20 Kmeans \nK-means clustering is a type of unsupervised learning, which is used when you have unlabeled data (i.e., data without defined categories or groups). \n\nThe goal of this algorithm is **to find groups in the data**, with the number of groups represented by the variable K. The algorithm works iteratively to assign each data point to one of K groups based on the features that are provided.\n\n", "_____no_output_____" ] ], [ [ "from sklearn.cluster import KMeans\niris_SP = dataset[['SepalLengthCm','SepalWidthCm','PetalLengthCm','PetalWidthCm']]\n# k-means cluster analysis for 1-15 clusters \nfrom scipy.spatial.distance import cdist\nclusters=range(1,15)\nmeandist=[]\n\n# loop through each cluster and fit the model to the train set\n# generate the predicted cluster assingment and append the mean \n# distance my taking the sum divided by the shape\nfor k in clusters:\n model=KMeans(n_clusters=k)\n model.fit(iris_SP)\n clusassign=model.predict(iris_SP)\n meandist.append(sum(np.min(cdist(iris_SP, model.cluster_centers_, 'euclidean'), axis=1))\n / iris_SP.shape[0])\n\n\"\"\"\nPlot average distance from observations from the cluster centroid\nto use the Elbow Method to identify number of clusters to choose\n\"\"\"\nplt.plot(clusters, meandist)\nplt.xlabel('Number of clusters')\nplt.ylabel('Average distance')\nplt.title('Selecting k with the Elbow Method') \n# pick the fewest number of clusters that reduces the average distance\n# If you observe after 3 we can see graph is almost linear", "_____no_output_____" ] ], [ [ "<a id=\"53\"></a> <br>\n## 7-21- Backpropagation", "_____no_output_____" ], [ "Backpropagation is a method used in artificial neural networks to calculate a gradient that is needed in the calculation of the weights to be used in the network.It is commonly used to train deep neural networks,a term referring to neural networks with more than one hidden layer.", "_____no_output_____" ], [ "In this example we will use a very simple network to start with. The network will only have one input and one output layer. We want to make the following predictions from the input:\n\n| Input | Output |\n| ------ |:------:|\n| 0 0 1 | 0 |\n| 1 1 1 | 1 |\n| 1 0 1 | 1 |\n| 0 1 1 | 0 |", "_____no_output_____" ], [ "We will use **Numpy** to compute the network parameters, weights, activation, and outputs:", "_____no_output_____" ], [ "We will use the *[Sigmoid](http://ml-cheatsheet.readthedocs.io/en/latest/activation_functions.html#sigmoid)* activation function:", "_____no_output_____" ] ], [ [ "def sigmoid(z):\n \"\"\"The sigmoid activation function.\"\"\"\n return 1 / (1 + np.exp(-z))", "_____no_output_____" ] ], [ [ "We could use the [ReLU](http://ml-cheatsheet.readthedocs.io/en/latest/activation_functions.html#activation-relu) activation function instead:", "_____no_output_____" ] ], [ [ "def relu(z):\n \"\"\"The ReLU activation function.\"\"\"\n return max(0, z)", "_____no_output_____" ] ], [ [ "The [Sigmoid](http://ml-cheatsheet.readthedocs.io/en/latest/activation_functions.html#sigmoid) activation function introduces non-linearity to the computation. 
It maps the input value to an output value between $0$ and $1$.", "_____no_output_____" ], [ "<img src=\"http://s8.picofile.com/file/8339774900/SigmoidFunction1.png\" style=\"max-width:100%; width: 30%; max-width: none\">", "_____no_output_____" ], [ "The derivative of the sigmoid function is maximal at $x=0$ and minimal for lower or higher values of $x$:", "_____no_output_____" ], [ "<img src=\"http://s9.picofile.com/file/8339770650/sigmoid_prime.png\" style=\"max-width:100%; width: 25%; max-width: none\">", "_____no_output_____" ], [ "The *sigmoid_prime* function returns the derivative of the sigmoid for any given $z$. The derivative of the sigmoid is $z * (1 - z)$. This is basically the slope of the sigmoid function at any given point: ", "_____no_output_____" ] ], [ [ "def sigmoid_prime(z):\n    \"\"\"The derivative of sigmoid for z.\"\"\"\n    return z * (1 - z)", "_____no_output_____" ] ], [ [ "We define the inputs as rows in *X*. There are three input nodes (three columns per vector in $X$). Each row is one training example:", "_____no_output_____" ] ], [ [ "X = np.array([ [ 0, 0, 1 ],\n                [ 0, 1, 1 ],\n                [ 1, 0, 1 ],\n                [ 1, 1, 1 ] ])\nprint(X)", "_____no_output_____" ] ], [ [ "The outputs are stored in *y*, where each row represents the output for the corresponding input vector (row) in *X*. The vector is initiated as a single row vector with four columns and transposed (using the $.T$ method) into a column vector with four rows:", "_____no_output_____" ] ], [ [ "y = np.array([[0,0,1,1]]).T\nprint(y)", "_____no_output_____" ] ], [ [ "To make the outputs deterministic, we seed the random number generator with a constant. This will guarantee that every time you run the code, you will get the same random distribution:", "_____no_output_____" ] ], [ [ "np.random.seed(1)", "_____no_output_____" ] ], [ [ "We create a weight matrix ($Wo$) with randomly initialized weights:", "_____no_output_____" ] ], [ [ "n_inputs = 3\nn_outputs = 1\n#Wo = 2 * np.random.random( (n_inputs, n_outputs) ) - 1\nWo = np.random.random( (n_inputs, n_outputs) ) * np.sqrt(2.0/n_inputs)\nprint(Wo)", "_____no_output_____" ] ], [ [ "The reason for the output weight matrix ($Wo$) to have 3 rows and 1 column is that it represents the weights of the connections from the three input neurons to the single output neuron. The initialization of the weight matrix is random with a mean of $0$ and a variance of $1$. There is a good reason for choosing a mean of zero in the weight initialization. For details, see the section on Weight Initialization in the [Stanford course CS231n on Convolutional Neural Networks for Visual Recognition](https://cs231n.github.io/neural-networks-2/#init).", "_____no_output_____" ], [ "The core representation of this network is basically the weight matrix *Wo*. The rest (input matrix, output vector, and so on) are components that we need for learning and evaluation. The learning result is stored in the *Wo* weight matrix.", "_____no_output_____" ], [ "We loop in the optimization and learning cycle 10,000 times. In the *forward propagation* line we process the entire input matrix for training. This is called **full batch** training. I do not use an alternative variable name to represent the input layer, instead I use the input matrix $X$ directly here. Think of this as the different inputs to the input neurons computed at once.
In principle the input or training data could have many more training examples, the code would stay the same.", "_____no_output_____" ] ], [ [ "for n in range(10000):\n # forward propagation\n l1 = sigmoid(np.dot(X, Wo))\n \n # compute the loss\n l1_error = y - l1\n #print(\"l1_error:\\n\", l1_error)\n \n # multiply the loss by the slope of the sigmoid at l1\n l1_delta = l1_error * sigmoid_prime(l1)\n #print(\"l1_delta:\\n\", l1_delta)\n \n #print(\"error:\", l1_error, \"\\nderivative:\", sigmoid(l1, True), \"\\ndelta:\", l1_delta, \"\\n\", \"-\"*10, \"\\n\")\n # update weights\n Wo += np.dot(X.T, l1_delta)\n\nprint(\"l1:\\n\", l1)", "_____no_output_____" ] ], [ [ "The dots in $l1$ represent the lines in the graphic below. The lines represent the slope of the sigmoid in the particular position. The slope is highest with a value $x = 0$ (blue dot). It is rather shallow with $x = 2$ (green dot), and not so shallow and not as high with $x = -1$. All derivatives are between $0$ and $1$, of course, that is, no slope or a maximal slope of $1$. There is no negative slope in a sigmoid function.", "_____no_output_____" ], [ "<img src=\"http://s8.picofile.com/file/8339770734/sigmoid_deriv_2.png\" style=\"max-width:100%; width: 50%; max-width: none\">", "_____no_output_____" ], [ "The matrix $l1\\_error$ is a 4 by 1 matrix (4 rows, 1 column). The derivative matrix $sigmoid\\_prime(l1)$ is also a 4 by one matrix. The returned matrix of the element-wise product $l1\\_delta$ is also the 4 by 1 matrix.", "_____no_output_____" ], [ "The product of the error and the slopes **reduces the error of high confidence predictions**. When the sigmoid slope is very shallow, the network had a very high or a very low value, that is, it was rather confident. If the network guessed something close to $x=0, y=0.5$, it was not very confident. Such predictions without confidence are updated most significantly. 
The other peripheral scores are multiplied with a number closer to $0$.", "_____no_output_____" ], [ "In the prediction line $l1 = sigmoid(np.dot(X, Wo))$ we compute the dot-product of the input vectors with the weights and compute the sigmoid on the sums.\nThe result of the dot-product is the number of rows of the first matrix ($X$) and the number of columns of the second matrix ($Wo$).\nIn the computation of the difference between the true (or gold) values in $y$ and the \"guessed\" values in $l1$ we have an estimate of the miss.", "_____no_output_____" ], [ "An example computation for the input $[ 1, 0, 1 ]$ and the weights $[ 9.5, 0.2, -0.1 ]$ and an output of $0.99$: If $y = 1$, the $l1\\_error = y - l2 = 0.01$, and $l1\\_delta = 0.01 * tiny\\_deriv$:", "_____no_output_____" ], [ "<img src=\"http://s8.picofile.com/file/8339770792/toy_network_deriv.png\" style=\"max-width:100%; width: 40%; max-width: none\">", "_____no_output_____" ], [ "## 7-21-1 More Complex Example with Backpropagation", "_____no_output_____" ], [ "Consider now a more complicated example where no column has a correlation with the output:\n\n| Input | Output |\n| ------ |:------:|\n| 0 0 1 | 0 |\n| 0 1 1 | 1 |\n| 1 0 1 | 1 |\n| 1 1 1 | 0 |", "_____no_output_____" ], [ "The pattern here is our XOR pattern or problem: If there is a $1$ in either column $1$ or $2$, but not in both, the output is $1$ (XOR over column $1$ and $2$).", "_____no_output_____" ], [ "From our discussion of the XOR problem we remember that this is a *non-linear pattern*, a **one-to-one relationship between a combination of inputs**.", "_____no_output_____" ], [ "To cope with this problem, we need a network with another layer, that is a layer that will combine and transform the input, and an additional layer will map it to the output. We will add a *hidden layer* with randomized weights and then train those to optimize the output probabilities of the table above.", "_____no_output_____" ], [ "We will define a new $X$ input matrix that reflects the above table:", "_____no_output_____" ] ], [ [ "X = np.array([[0, 0, 1],\n [0, 1, 1],\n [1, 0, 1],\n [1, 1, 1]])\nprint(X)", "_____no_output_____" ] ], [ [ "We also define a new output matrix $y$:", "_____no_output_____" ] ], [ [ "y = np.array([[ 0, 1, 1, 0]]).T\nprint(y)", "_____no_output_____" ] ], [ [ "We initialize the random number generator with a constant again:", "_____no_output_____" ] ], [ [ "np.random.seed(1)", "_____no_output_____" ] ], [ [ "Assume that our 3 inputs are mapped to 4 hidden layer ($Wh$) neurons, we have to initialize the hidden layer weights in a 3 by 4 matrix. 
The outout layer ($Wo$) is a single neuron that is connected to the hidden layer, thus the output layer is a 4 by 1 matrix:", "_____no_output_____" ] ], [ [ "n_inputs = 3\nn_hidden_neurons = 4\nn_output_neurons = 1\nWh = np.random.random( (n_inputs, n_hidden_neurons) ) * np.sqrt(2.0/n_inputs)\nWo = np.random.random( (n_hidden_neurons, n_output_neurons) ) * np.sqrt(2.0/n_hidden_neurons)\nprint(\"Wh:\\n\", Wh)\nprint(\"Wo:\\n\", Wo)", "_____no_output_____" ] ], [ [ "We will loop now 60,000 times to optimize the weights:", "_____no_output_____" ] ], [ [ "for i in range(100000):\n l1 = sigmoid(np.dot(X, Wh))\n l2 = sigmoid(np.dot(l1, Wo))\n \n l2_error = y - l2\n \n if (i % 10000) == 0:\n print(\"Error:\", np.mean(np.abs(l2_error)))\n \n # gradient, changing towards the target value\n l2_delta = l2_error * sigmoid_prime(l2)\n \n # compute the l1 contribution by value to the l2 error, given the output weights\n l1_error = l2_delta.dot(Wo.T)\n \n # direction of the l1 target:\n # in what direction is the target l1?\n l1_delta = l1_error * sigmoid_prime(l1)\n \n Wo += np.dot(l1.T, l2_delta)\n Wh += np.dot(X.T, l1_delta)\n\nprint(\"Wo:\\n\", Wo)\nprint(\"Wh:\\n\", Wh)", "_____no_output_____" ] ], [ [ "The new computation in this new loop is $l1\\_error = l2\\_delta.dot(Wo.T)$, a **confidence weighted error** from $l2$ to compute an error for $l1$. The computation sends the error across the weights from $l2$ to $l1$. The result is a **contribution weighted error**, because we learn how much each node value in $l1$ **contributed** to the error in $l2$. This step is called **backpropagation**. We update $Wh$ using the same steps we did in the 2 layer implementation.", "_____no_output_____" ] ], [ [ "from sklearn import datasets\niris = datasets.load_iris()\nX_iris = iris.data\ny_iris = iris.target", "_____no_output_____" ], [ "plt.figure('sepal')\ncolormarkers = [ ['red','s'], ['greenyellow','o'], ['blue','x']]\nfor i in range(len(colormarkers)):\n px = X_iris[:, 0][y_iris == i]\n py = X_iris[:, 1][y_iris == i]\n plt.scatter(px, py, c=colormarkers[i][0], marker=colormarkers[i][1])\n\nplt.title('Iris Dataset: Sepal width vs sepal length')\nplt.legend(iris.target_names)\nplt.xlabel('Sepal length')\nplt.ylabel('Sepal width')\nplt.figure('petal')\n\nfor i in range(len(colormarkers)):\n px = X_iris[:, 2][y_iris == i]\n py = X_iris[:, 3][y_iris == i]\n plt.scatter(px, py, c=colormarkers[i][0], marker=colormarkers[i][1])\n\nplt.title('Iris Dataset: petal width vs petal length')\nplt.legend(iris.target_names)\nplt.xlabel('Petal length')\nplt.ylabel('Petal width')\nplt.show()", "_____no_output_____" ] ], [ [ "-----------------\n<a id=\"54\"></a> <br>\n# 8- Conclusion", "_____no_output_____" ], [ "In this kernel, I have tried to cover all the parts related to the process of **Machine Learning** with a variety of Python packages and I know that there are still some problems then I hope to get your feedback to improve it.\n", "_____no_output_____" ], [ "you can follow me on:\n\n> #### [ GitHub](https://github.com/mjbahmani)\n\n--------------------------------------\n\n **I hope you find this kernel helpful and some <font color=\"red\"><b>UPVOTES</b></font> would be very much appreciated** ", "_____no_output_____" ], [ "<a id=\"55\"></a> <br>\n\n-----------\n\n# 9- References\n* [1] [Iris image](https://rpubs.com/wjholst/322258)\n* [2] [IRIS](https://archive.ics.uci.edu/ml/datasets/iris)\n* [3] [https://skymind.ai/wiki/machine-learning-workflow](https://skymind.ai/wiki/machine-learning-workflow)\n* [4] 
[IRIS-wiki](https://archive.ics.uci.edu/ml/datasets/iris)\n* [5] [Problem-define](https://machinelearningmastery.com/machine-learning-in-python-step-by-step/)\n* [6] [Sklearn](http://scikit-learn.org/)\n* [7] [machine-learning-in-python-step-by-step](https://machinelearningmastery.com/machine-learning-in-python-step-by-step/)\n* [8] [Data Cleaning](http://wp.sigmod.org/?p=2288)\n* [9] [competitive data science](https://www.coursera.org/learn/competitive-data-science/)\n\n\n-------------\n", "_____no_output_____" ] ] ]
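The notebook above states that no single neural unit with this step activation can compute XOR, but does not demonstrate it. The sketch below is a minimal, self-contained check (the `step` helper mirrors the notebook's `activation`, and the weight/bias grid range and resolution are arbitrary assumptions): it brute-forces candidate weights and biases and finds settings for AND and OR but none for XOR.

```python
import itertools
import numpy as np

def step(z):
    # Same thresholding as the notebook's activation(): 1 if w.x + b > 0, else 0.
    return 1 if z > 0 else 0

inputs = [np.array(p) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]]
targets = {"AND": [0, 0, 0, 1], "OR": [0, 1, 1, 1], "XOR": [0, 1, 1, 0]}

grid = np.linspace(-2, 2, 41)  # arbitrary search range and resolution (assumption)
for name, wanted in targets.items():
    solution = None
    for w1, w2, b in itertools.product(grid, repeat=3):
        w = np.array([w1, w2])
        if all(step(w.dot(x) + b) == y for x, y in zip(inputs, wanted)):
            solution = (round(w1, 1), round(w2, 1), round(b, 1))
            break
    print(f"{name}: single-unit solution -> {solution}")
```

The search reports a weight/bias triple for AND and OR and `None` for XOR, which is exactly the non-linear separability problem the two-layer network later in the notebook is built to solve.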
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ] ]
cb81c674f2ae99b4baa51b020c699a60d48b11b2
3,491
ipynb
Jupyter Notebook
testjs.ipynb
lovemoganna/jupyterworkspace
6de89389a23140bcecd758bb77647caa5c1cb396
[ "MIT" ]
null
null
null
testjs.ipynb
lovemoganna/jupyterworkspace
6de89389a23140bcecd758bb77647caa5c1cb396
[ "MIT" ]
null
null
null
testjs.ipynb
lovemoganna/jupyterworkspace
6de89389a23140bcecd758bb77647caa5c1cb396
[ "MIT" ]
null
null
null
24.758865
131
0.509883
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
cb81cb966bca11c8d542d758b0c7335a96ad93d4
50,054
ipynb
Jupyter Notebook
cpd3.5/notebooks/rest_api/curl/experiments/deep_learning/Use PyTorch to recognize hand-written digits.ipynb
muthukumarbala07/watson-machine-learning-samples
ecc66faf7a7c60ca168b9c7ef0bca3c766babb94
[ "Apache-2.0" ]
null
null
null
cpd3.5/notebooks/rest_api/curl/experiments/deep_learning/Use PyTorch to recognize hand-written digits.ipynb
muthukumarbala07/watson-machine-learning-samples
ecc66faf7a7c60ca168b9c7ef0bca3c766babb94
[ "Apache-2.0" ]
null
null
null
cpd3.5/notebooks/rest_api/curl/experiments/deep_learning/Use PyTorch to recognize hand-written digits.ipynb
muthukumarbala07/watson-machine-learning-samples
ecc66faf7a7c60ca168b9c7ef0bca3c766babb94
[ "Apache-2.0" ]
null
null
null
33.038944
2,660
0.53682
[ [ [ "# Use PyTorch to recognize hand-written digits with Watson Machine Learning REST API", "_____no_output_____" ], [ "This notebook contains steps and code to demonstrate support of PyTorch Deep Learning experiments in Watson Machine Learning Service. It introduces commands for getting data, training experiments, persisting pipelines, publishing models, deploying models and scoring.\n\nSome familiarity with cURL is helpful. This notebook uses cURL examples.\n\n\n## Learning goals\n\nThe learning goals of this notebook are:\n\n- Working with Watson Machine Learning experiments to train Deep Learning models.\n- Downloading computed models to local storage.\n- Online deployment and scoring of trained model.\n\n\n## Contents\n\nThis notebook contains the following parts:\n\n1.\t[Setup](#setup) \n2. [Model definition](#model_definition) \n3.\t[Experiment Run](#run) \n4.\t[Historical runs](#runs) \n5.\t[Deploy and Score](#deploy_and_score) \n6.\t[Cleaning](#cleaning) \n7.\t[Summary and next steps](#summary)", "_____no_output_____" ], [ "<a id=\"setup\"></a>\n## 1. Set up the environment\n\nBefore you use the sample code in this notebook, you must perform the following setup tasks:\n\n- Contact with your Cloud Pack for Data administrator and ask him for your account credentials", "_____no_output_____" ], [ "### Connection to WML\n\nAuthenticate the Watson Machine Learning service on IBM Cloud Pack for Data. You need to provide platform `url`, your `username` and `password`.", "_____no_output_____" ] ], [ [ "%env USERNAME=\n%env PASSWORD=\n%env DATAPLATFORM_URL=\n\n%env SPACE_ID=", "_____no_output_____" ] ], [ [ "<a id=\"wml_token\"></a>\n### Getting WML authorization token for further cURL calls", "_____no_output_____" ], [ "<a href=\"https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-curl#curl-token\" target=\"_blank\" rel=\"noopener no referrer\">Example of cURL call to get WML token</a>", "_____no_output_____" ] ], [ [ "%%bash --out token\n\ntoken=$(curl -sk -X GET \\\n --user $USERNAME:$PASSWORD \\\n --header \"Accept: application/json\" \\\n \"$DATAPLATFORM_URL/v1/preauth/validateAuth\")\n\ntoken=${token#*accessToken\\\":\\\"}\ntoken=${token%%\\\"*}\necho $token", "_____no_output_____" ], [ "%env TOKEN=$token ", "env: TOKEN=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6ImphbiIsInJvbGUiOiJVc2VyIiwicGVybWlzc2lvbnMiOlsiYWNjZXNzX2NhdGFsb2ciLCJjYW5fcHJvdmlzaW9uIl0sImdyb3VwcyI6WzEwMDAwXSwic3ViIjoiamFuIiwiaXNzIjoiS05PWFNTTyIsImF1ZCI6IkRTWCIsInVpZCI6IjEwMDAzMzEwMDkiLCJhdXRoZW50aWNhdG9yIjoiZGVmYXVsdCIsImlhdCI6MTYwNzYwOTAxOCwiZXhwIjoxNjA3NjUyMTgyfQ.nUrtxSaxEuaRhVWUQN3pOGBj3HLI2Rqi3-jlJSy3xZUHXCm5ac6RmdopSdVo-gvEaB1lhaKP2QeDO_5WgE_nULgPvjhOP-GhJlceCPf8dySM1DC5jbMFWEAFHd1AtKlbBUT-rh1AA5xa6mWEcRemtFm1KwgrdARuyvGrl5o6zX3NL8FDPZxK3_nZu1kan4MZ-B2p_9PEK1HXHMyWjAx_h8ktBn5UHgRn_BdfvpDX39C6-AdziJVxIe1gK1fGgJuLsJ8HIYQIH7UkS1GiZ11I719T4Q8g4bj25wjNdCaKi5GK-HibgxtURZAwQN-QXh9lMaZnxUipI6ZNy_Gupf3-BA\n" ] ], [ [ "<a id=\"space_creation\"></a>\n### Space creation\n**Tip:** If you do not have `space` already created, please convert below three cells to `code` and run them.\n\nFirst of all, you need to create a `space` that will be used in all of your further cURL calls. 
\nIf you do not have `space` already created, below is the cURL call to create one.", "_____no_output_____" ], [ "<a href=\"https://cpd-spaces-api.eu-gb.cf.appdomain.cloud/#/Spaces/spaces_create\" \ntarget=\"_blank\" rel=\"noopener no referrer\">Space creation</a>", "_____no_output_____" ] ], [ [ "%%bash --out space_id\n\ncurl -sk -X POST \\\n --header \"Authorization: Bearer $TOKEN\" \\\n --header \"Content-Type: application/json\" \\\n --header \"Accept: application/json\" \\\n --data '{\"name\": \"curl_DL\"}' \\\n \"$DATAPLATFORM_URL/v2/spaces\" \\\n | grep '\"id\": ' | awk -F '\"' '{ print $4 }'", "_____no_output_____" ], [ "space_id = space_id.split('\\n')[1]\n%env SPACE_ID=$space_id", "_____no_output_____" ] ], [ [ "Space creation is asynchronous. This means that you need to check space creation status after creation call.\nMake sure that your newly created space is `active`.", "_____no_output_____" ], [ "<a href=\"https://cpd-spaces-api.eu-gb.cf.appdomain.cloud/#/Spaces/spaces_get\" \ntarget=\"_blank\" rel=\"noopener no referrer\">Get space information</a>", "_____no_output_____" ] ], [ [ "%%bash\n\ncurl -sk -X GET \\\n --header \"Authorization: Bearer $TOKEN\" \\\n --header \"Content-Type: application/json\" \\\n --header \"Accept: application/json\" \\\n \"$DATAPLATFORM_URL/v2/spaces/$SPACE_ID\"", "_____no_output_____" ] ], [ [ "<a id=\"model_definition\"></a>\n<a id=\"experiment_definition\"></a>\n## 2. Model definition\n\nThis section provides samples about how to store model definition via cURL calls.", "_____no_output_____" ], [ "<a href=\"https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Model%20Definitions/model_definitions_create\" \ntarget=\"_blank\" rel=\"noopener no referrer\">Store a model definition for Deep Learning experiment</a>", "_____no_output_____" ] ], [ [ "%%bash --out model_definition_payload\n\nMODEL_DEFINITION_PAYLOAD='{\"name\": \"PyTorch Hand-written Digit Recognition\", \"space_id\": \"'\"$SPACE_ID\"'\", \"description\": \"PyTorch Hand-written Digit Recognition\", \"tags\": [\"DL\", \"PyTorch\"], \"version\": \"v1\", \"platform\": {\"name\": \"python\", \"versions\": [\"3.7\"]}, \"command\": \"pytorch_v_1.1_mnist_onnx.py --epochs 10 --debug-level debug\"}'\necho $MODEL_DEFINITION_PAYLOAD | python -m json.tool", "_____no_output_____" ], [ "%env MODEL_DEFINITION_PAYLOAD=$model_definition_payload", "env: MODEL_DEFINITION_PAYLOAD={\n \"name\": \"PyTorch Hand-written Digit Recognition\",\n \"space_id\": \"dfabc53a-c862-4a4c-9161-e74d6726629a\",\n \"description\": \"PyTorch Hand-written Digit Recognition\",\n \"tags\": [\n \"DL\",\n \"PyTorch\"\n ],\n \"version\": \"v1\",\n \"platform\": {\n \"name\": \"python\",\n \"versions\": [\n \"3.7\"\n ]\n },\n \"command\": \"pytorch_v_1.1_mnist_onnx.py --epochs 10 --debug-level debug\"\n}\n" ], [ "%%bash --out model_definition_id\n\ncurl -sk -X POST \\\n --header \"Authorization: Bearer $TOKEN\" \\\n --header \"Content-Type: application/json\" \\\n --header \"Accept: application/json\" \\\n --data \"$MODEL_DEFINITION_PAYLOAD\" \\\n \"$DATAPLATFORM_URL/ml/v4/model_definitions?version=2020-08-01\"| grep '\"id\": ' | awk -F '\"' '{ print $4 }'", "_____no_output_____" ], [ "%env MODEL_DEFINITION_ID=$model_definition_id", "env: MODEL_DEFINITION_ID=ecea5add-e901-4416-9592-18bcfcf84557\n" ] ], [ [ "<a id=\"model_preparation\"></a>\n### Model preparation\n\nDownload files with pytorch code. 
You can either download it via link below or run the cell below the link.", "_____no_output_____" ], [ "<a href=\"https://github.com/IBM/watson-machine-learning-samples/raw/master/cpd3.5/definitions/pytorch/mnist/pytorch-onnx_v1_3.zip\" \ntarget=\"_blank\" rel=\"noopener no referrer\">Download pytorch-model.zip</a>", "_____no_output_____" ] ], [ [ "%%bash\n\nwget https://github.com/IBM/watson-machine-learning-samples/raw/master/cpd3.5/definitions/pytorch/mnist/pytorch_onnx_v1_3.zip \\\n -O pytorch-onnx_v1_3.zip", "--2020-12-10 15:03:45-- https://github.com/IBM/watson-machine-learning-samples/raw/master/cpd3.5/definitions/pytorch/mnist/pytorch_onnx_v1_3.zip\nResolving github.com... 140.82.121.3\nConnecting to github.com|140.82.121.3|:443... connected.\nHTTP request sent, awaiting response... 302 Found\nLocation: https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cpd3.5/definitions/pytorch/mnist/pytorch_onnx_v1_3.zip [following]\n--2020-12-10 15:03:45-- https://raw.githubusercontent.com/IBM/watson-machine-learning-samples/master/cpd3.5/definitions/pytorch/mnist/pytorch_onnx_v1_3.zip\nResolving raw.githubusercontent.com... 151.101.64.133, 151.101.128.133, 151.101.192.133, ...\nConnecting to raw.githubusercontent.com|151.101.64.133|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 4061 (4.0K) [application/zip]\nSaving to: 'pytorch-onnx_v1_3.zip'\n\n 0K ... 100% 10.4M=0s\n\n2020-12-10 15:03:46 (10.4 MB/s) - 'pytorch-onnx_v1_3.zip' saved [4061/4061]\n\n" ] ], [ [ "**Tip**: Convert below cell to code and run it to see model deinition's code.", "_____no_output_____" ] ], [ [ "!unzip -oqd . pytorch-onnx_v1_3.zip && cat pytorch_v_1.1_mnist_onnx.py", "_____no_output_____" ] ], [ [ "<a id=\"def_upload\"></a>\n### Upload model for the model definition", "_____no_output_____" ], [ "<a href=\"https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Model%20Definitions/model_definitions_upload_model\" \ntarget=\"_blank\" rel=\"noopener no referrer\">Upload model for the model definition</a>", "_____no_output_____" ] ], [ [ "%%bash\n\ncurl -sk -X PUT \\\n --header \"Authorization: Bearer $TOKEN\" \\\n --header \"Content-Type: application/json\" \\\n --header \"Accept: application/json\" \\\n --data-binary \"@pytorch-onnx_v1_3.zip\" \\\n \"$DATAPLATFORM_URL/ml/v4/model_definitions/$MODEL_DEFINITION_ID/model?version=2020-08-01&space_id=$SPACE_ID\" \\\n | python -m json.tool", "{\n \"attachment_id\": \"3d0b34b5-8eb7-4fd0-813c-3ab13a94f75b\",\n \"content_format\": \"native\",\n \"persisted\": true\n}\n" ] ], [ [ "<a id=\"run\"></a>\n## 3. Experiment run\n\nThis section provides samples about how to trigger Deep Learning experiment via cURL calls.", "_____no_output_____" ], [ "<a href=\"https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Trainings/trainings_create\" \ntarget=\"_blank\" rel=\"noopener no referrer\">Schedule a training job for Deep Learning experiment</a>", "_____no_output_____" ], [ "Specify the source files folder where you have stored your training data. 
The path should point to a local repository on Watson Machine Learning Accelerator that your system administrator has set up for your use.\n\n**Action:**\nChange `training_data_references: location: path: ...`", "_____no_output_____" ] ], [ [ "%%bash --out training_payload\n\nTRAINING_PAYLOAD='{\"training_data_references\": [{\"name\": \"training_input_data\", \"type\": \"fs\", \"connection\": {}, \"location\": {\"path\": \"pytorch-mnist\"}, \"schema\": {\"id\": \"idmlp_schema\", \"fields\": [{\"name\": \"text\", \"type\": \"string\"}]}}], \"results_reference\": {\"name\": \"MNIST results\", \"connection\": {}, \"location\": {\"path\": \"spaces/'\"$SPACE_ID\"'/assets/experiment\"}, \"type\": \"fs\"}, \"tags\": [{\"value\": \"tags_pytorch\", \"description\": \"Tags PyTorch\"}], \"name\": \"PyTorch hand-written Digit Recognition\", \"description\": \"PyTorch hand-written Digit Recognition\", \"model_definition\": {\"id\": \"'\"$MODEL_DEFINITION_ID\"'\", \"command\": \"pytorch_v_1.1_mnist_onnx.py --epochs 10 --debug-level debug\", \"hardware_spec\": {\"name\": \"K80\", \"nodes\": 1}, \"software_spec\": {\"name\": \"pytorch-onnx_1.3-py3.7\"}, \"parameters\": {\"name\": \"PyTorch_mnist\", \"description\": \"PyTorch mnist recognition\"}}, \"space_id\": \"'\"$SPACE_ID\"'\"}'\necho $TRAINING_PAYLOAD | python -m json.tool", "_____no_output_____" ], [ "%env TRAINING_PAYLOAD=$training_payload", "env: TRAINING_PAYLOAD={\n \"training_data_references\": [\n {\n \"name\": \"training_input_data\",\n \"type\": \"fs\",\n \"connection\": {},\n \"location\": {\n \"path\": \"pytorch-mnist\"\n },\n \"schema\": {\n \"id\": \"idmlp_schema\",\n \"fields\": [\n {\n \"name\": \"text\",\n \"type\": \"string\"\n }\n ]\n }\n }\n ],\n \"results_reference\": {\n \"name\": \"MNIST results\",\n \"connection\": {},\n \"location\": {\n \"path\": \"spaces/dfabc53a-c862-4a4c-9161-e74d6726629a/assets/experiment\"\n },\n \"type\": \"fs\"\n },\n \"tags\": [\n {\n \"value\": \"tags_pytorch\",\n \"description\": \"Tags PyTorch\"\n }\n ],\n \"name\": \"PyTorch hand-written Digit Recognition\",\n \"description\": \"PyTorch hand-written Digit Recognition\",\n \"model_definition\": {\n \"id\": \"ecea5add-e901-4416-9592-18bcfcf84557\",\n \"command\": \"pytorch_v_1.1_mnist_onnx.py --epochs 10 --debug-level debug\",\n \"hardware_spec\": {\n \"name\": \"K80\",\n \"nodes\": 1\n },\n \"software_spec\": {\n \"name\": \"pytorch-onnx_1.3-py3.7\"\n },\n \"parameters\": {\n \"name\": \"PyTorch_mnist\",\n \"description\": \"PyTorch mnist recognition\"\n }\n },\n \"space_id\": \"dfabc53a-c862-4a4c-9161-e74d6726629a\"\n}\n" ], [ "%%bash --out training_id\n\ncurl -sk -X POST \\\n --header \"Authorization: Bearer $TOKEN\" \\\n --header \"Content-Type: application/json\" \\\n --header \"Accept: application/json\" \\\n --data \"$TRAINING_PAYLOAD\" \\\n \"$DATAPLATFORM_URL/ml/v4/trainings?version=2020-08-01\" | awk -F'\"id\":' '{print $2}' | cut -c2-37", "_____no_output_____" ], [ "%env TRAINING_ID=$training_id", "env: TRAINING_ID=bf4447f7-b854-4786-b207-4c2784b00882\n" ] ], [ [ "<a id=\"training_details\"></a>\n### Get training details\nTreining is an asynchronous endpoint. 
In case you want to monitor training status and details,\nyou need to use a GET method and specify which training you want to monitor by usage of training ID.", "_____no_output_____" ], [ "<a href=\"https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Trainings/trainings_get\" \ntarget=\"_blank\" rel=\"noopener no referrer\">Get information about training job</a>", "_____no_output_____" ] ], [ [ "%%bash\n\ncurl -sk -X GET \\\n --header \"Authorization: Bearer $TOKEN\" \\\n --header \"Content-Type: application/json\" \\\n --header \"Accept: application/json\" \\\n \"$DATAPLATFORM_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01\" \\\n | python -m json.tool", "_____no_output_____" ] ], [ [ "### Get training status", "_____no_output_____" ] ], [ [ "%%bash\n\nSTATUS=$(curl -sk -X GET\\\n --header \"Authorization: Bearer $TOKEN\" \\\n --header \"Content-Type: application/json\" \\\n --header \"Accept: application/json\" \\\n \"$DATAPLATFORM_URL/ml/v4/trainings/$TRAINING_ID?space_id=$SPACE_ID&version=2020-08-01\")\n \nSTATUS=${STATUS#*state\\\":\\\"}\nSTATUS=${STATUS%%\\\"*}\necho $STATUS", "completed\n" ] ], [ [ "Please make sure that training is completed before you go to the next sections.\nMonitor `state` of your training by running above cell couple of times.", "_____no_output_____" ], [ "<a id=\"runs\"></a>\n## 4. Historical runs\n\nIn this section you will see cURL examples describing how to get historical training runs information.", "_____no_output_____" ], [ "Output should be similar to the output from training creation but you should see more trainings entries. \nListing trainings:", "_____no_output_____" ], [ "<a href=\"https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Trainings/trainings_list\" \ntarget=\"_blank\" rel=\"noopener no referrer\">Get list of historical training jobs information</a>", "_____no_output_____" ] ], [ [ "%%bash\n\nHISTORICAL_TRAINING_LIMIT_TO_GET=2\n\ncurl -sk -X GET \\\n --header \"Authorization: Bearer $TOKEN\" \\\n --header \"Content-Type: application/json\" \\\n --header \"Accept: application/json\" \\\n \"$DATAPLATFORM_URL/ml/v4/trainings?space_id=$SPACE_ID&version=2020-08-01&limit=$HISTORICAL_TRAINING_LIMIT_TO_GET\" \\\n | python -m json.tool", "_____no_output_____" ] ], [ [ "<a id=\"training_cancel\"></a>\n### Cancel training run\n\n**Tip:** If you want to cancel your training, please convert below cell to `code`, specify training ID and run.", "_____no_output_____" ], [ "<a href=\"https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Trainings/trainings_delete\" \ntarget=\"_blank\" rel=\"noopener no referrer\">Canceling training</a>", "_____no_output_____" ] ], [ [ "%%bash\n\nTRAINING_ID_TO_CANCEL=...\n\ncurl -sk -X DELETE \\\n --header \"Authorization: Bearer $TOKEN\" \\\n --header \"Content-Type: application/json\" \\\n --header \"Accept: application/json\" \\\n \"$DATAPLATFORM_URL/ml/v4/trainings/$TRAINING_ID_TO_DELETE?space_id=$SPACE_ID&version=2020-08-01\"", "_____no_output_____" ] ], [ [ "---", "_____no_output_____" ], [ "<a id=\"deploy_and_score\"></a>\n## 5. 
Deploy and Score\n\nIn this section you will learn how to deploy and score pipeline model as webservice using WML instance.", "_____no_output_____" ], [ "Before deployment creation, you need store your model in WML repository.\nPlease see below cURL call example how to do it.", "_____no_output_____" ], [ "Download `request.json` with repository request json for model storing.", "_____no_output_____" ] ], [ [ "%%bash --out request_json\n\ncurl -sk -X GET \\\n --header \"Authorization: Bearer $TOKEN\" \\\n --header \"Content-Type: application/json\" \\\n --header \"Accept: application/json\" \\\n \"$DATAPLATFORM_URL/v2/asset_files/experiment/$TRAINING_ID/assets/$TRAINING_ID/resources/wml_model/request.json?space_id=$SPACE_ID&version=2020-08-01\" \\\n | python -m json.tool", "_____no_output_____" ], [ "%env MODEL_PAYLOAD=$request_json", "env: MODEL_PAYLOAD={\n \"content_location\": {\n \"connection\": {},\n \"contents\": [\n {\n \"content_format\": \"native\",\n \"file_name\": \"twmla-2694.zip\",\n \"location\": \"spaces/dfabc53a-c862-4a4c-9161-e74d6726629a/assets/experiment/bf4447f7-b854-4786-b207-4c2784b00882/assets/bf4447f7-b854-4786-b207-4c2784b00882/resources/wml_model/twmla-2694.zip\"\n }\n ],\n \"location\": {\n \"path\": \"spaces/dfabc53a-c862-4a4c-9161-e74d6726629a/assets/experiment\",\n \"model\": \"spaces/dfabc53a-c862-4a4c-9161-e74d6726629a/assets/experiment/bf4447f7-b854-4786-b207-4c2784b00882/data/model\",\n \"training\": \"spaces/dfabc53a-c862-4a4c-9161-e74d6726629a/assets/experiment/bf4447f7-b854-4786-b207-4c2784b00882\",\n \"training_status\": \"spaces/dfabc53a-c862-4a4c-9161-e74d6726629a/assets/experiment/bf4447f7-b854-4786-b207-4c2784b00882/training-status.json\",\n \"logs\": \"spaces/dfabc53a-c862-4a4c-9161-e74d6726629a/assets/experiment/bf4447f7-b854-4786-b207-4c2784b00882/logs\",\n \"assets_path\": \"spaces/dfabc53a-c862-4a4c-9161-e74d6726629a/assets/experiment/bf4447f7-b854-4786-b207-4c2784b00882/assets\"\n },\n \"type\": \"fs\"\n },\n \"model_definition\": {\n \"id\": \"ecea5add-e901-4416-9592-18bcfcf84557\"\n },\n \"name\": \"model_twmla-2694\",\n \"software_spec\": {\n \"name\": \"pytorch-onnx_1.3-py3.7\"\n },\n \"space_id\": \"dfabc53a-c862-4a4c-9161-e74d6726629a\",\n \"training_data_references\": [\n {\n \"connection\": {},\n \"location\": {\n \"path\": \"pytorch-mnist\"\n },\n \"schema\": {\n \"fields\": [\n {\n \"name\": \"text\",\n \"type\": \"string\"\n }\n ],\n \"id\": \"idmlp_schema\"\n },\n \"type\": \"fs\"\n }\n ],\n \"type\": \"pytorch-onnx_1.3\"\n}\n" ] ], [ [ "<a id=\"model_store\"></a>\n### Store Deep Learning model\n\nStore information about your model to WML repository.", "_____no_output_____" ], [ "<a href=\"https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Models/models_create\" \ntarget=\"_blank\" rel=\"noopener no referrer\">Model storing</a>", "_____no_output_____" ] ], [ [ "%%bash --out model_details\n\ncurl -sk -X POST \\\n --header \"Authorization: Bearer $TOKEN\" \\\n --header \"Content-Type: application/json\" \\\n --header \"Accept: application/json\" \\\n --data \"$MODEL_PAYLOAD\" \\\n \"$DATAPLATFORM_URL/ml/v4/models?version=2020-08-01&space_id=$SPACE_ID\"", "_____no_output_____" ], [ "%env MODEL_DETAILS=$model_details", "env: MODEL_DETAILS={\n \"entity\": {\n \"content_import_state\": \"running\",\n \"model_definition\": {\n \"id\": \"ecea5add-e901-4416-9592-18bcfcf84557\"\n },\n \"software_spec\": {\n \"id\": \"8d5d8a87-a912-54cf-81ec-3914adaa988d\",\n \"name\": \"pytorch-onnx_1.3-py3.7\"\n },\n 
\"training_data_references\": [{\n \"connection\": {\n\n },\n \"location\": {\n \"path\": \"pytorch-mnist\"\n },\n \"schema\": {\n \"fields\": [{\n \"name\": \"text\",\n \"type\": \"string\"\n }],\n \"id\": \"idmlp_schema\"\n },\n \"type\": \"fs\"\n }],\n \"type\": \"pytorch-onnx_1.3\"\n },\n \"metadata\": {\n \"created_at\": \"2020-12-10T14:08:18.803Z\",\n \"id\": \"a3c15f28-ffe2-4e5a-b7bb-92898150738d\",\n \"modified_at\": \"2020-12-10T14:08:18.803Z\",\n \"name\": \"model_twmla-2694\",\n \"owner\": \"1000331009\",\n \"space_id\": \"dfabc53a-c862-4a4c-9161-e74d6726629a\"\n },\n \"system\": {\n \"warnings\": []\n }\n}\n" ], [ "%%bash --out model_id\n\necho $MODEL_DETAILS | awk -F '\"id\": ' '{ print $5 }' | cut -d '\"' -f 2", "_____no_output_____" ], [ "%env MODEL_ID=$model_id", "env: MODEL_ID=a3c15f28-ffe2-4e5a-b7bb-92898150738d\n" ] ], [ [ "<a id=\"deployment_creation\"></a>\n### Deployment creation\n\nAn Deep Learning online deployment creation is presented below.", "_____no_output_____" ], [ "<a href=\"https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Deployments/deployments_create\" \ntarget=\"_blank\" rel=\"noopener no referrer\">Create deployment</a>", "_____no_output_____" ] ], [ [ "%%bash --out deployment_payload\n\nDEPLOYMENT_PAYLOAD='{\"space_id\": \"'\"$SPACE_ID\"'\",\"name\": \"PyTorch Mnist deployment\", \"description\": \"PyTorch model to recognize hand-written digits\",\"online\": {},\"hardware_spec\": {\"name\": \"S\"},\"asset\": {\"id\": \"'\"$MODEL_ID\"'\"}}'\necho $DEPLOYMENT_PAYLOAD | python -m json.tool", "_____no_output_____" ], [ "%env DEPLOYMENT_PAYLOAD=$deployment_payload", "env: DEPLOYMENT_PAYLOAD={\n \"space_id\": \"dfabc53a-c862-4a4c-9161-e74d6726629a\",\n \"name\": \"PyTorch Mnist deployment\",\n \"description\": \"PyTorch model to recognize hand-written digits\",\n \"online\": {},\n \"hardware_spec\": {\n \"name\": \"S\"\n },\n \"asset\": {\n \"id\": \"a3c15f28-ffe2-4e5a-b7bb-92898150738d\"\n }\n}\n" ], [ "%%bash\n\ncurl -sk -X POST \\\n --header \"Authorization: Bearer $TOKEN\" \\\n --header \"Content-Type: application/json\" \\\n --header \"Accept: application/json\" \\\n --data \"$DEPLOYMENT_PAYLOAD\" \\\n \"$DATAPLATFORM_URL/ml/v4/deployments?version=2020-08-01\"", "{\n \"entity\": {\n \"asset\": {\n \"id\": \"a3c15f28-ffe2-4e5a-b7bb-92898150738d\"\n },\n \"custom\": {\n\n },\n \"deployed_asset_type\": \"model\",\n \"description\": \"PyTorch model to recognize hand-written digits\",\n \"hardware_spec\": {\n \"id\": \"e7ed1d6c-2e89-42d7-aed5-863b972c1d2b\",\n \"name\": \"S\",\n \"num_nodes\": 1\n },\n \"name\": \"PyTorch Mnist deployment\",\n \"online\": {\n\n },\n \"space_id\": \"dfabc53a-c862-4a4c-9161-e74d6726629a\",\n \"status\": {\n \"online_url\": {\n \"url\": \"https://wmlgm-cpd-wmlgm.apps.wml1x180.ma.platformlab.ibm.com/ml/v4/deployments/e5133928-182a-436d-b544-4e15f17ebba6/predictions\"\n },\n \"state\": \"initializing\"\n }\n },\n \"metadata\": {\n \"created_at\": \"2020-12-10T14:08:20.222Z\",\n \"description\": \"PyTorch model to recognize hand-written digits\",\n \"id\": \"e5133928-182a-436d-b544-4e15f17ebba6\",\n \"modified_at\": \"2020-12-10T14:08:20.222Z\",\n \"name\": \"PyTorch Mnist deployment\",\n \"owner\": \"1000331009\",\n \"space_id\": \"dfabc53a-c862-4a4c-9161-e74d6726629a\"\n }\n}" ], [ "%%bash --out deployment_id\n\ncurl -sk -X POST \\\n --header \"Authorization: Bearer $TOKEN\" \\\n --header \"Content-Type: application/json\" \\\n --header \"Accept: application/json\" \\\n --data \"$DEPLOYMENT_PAYLOAD\" \\\n 
\"$DATAPLATFORM_URL/ml/v4/deployments?version=2020-08-01\" \\\n | grep '\"id\": ' | awk -F '\"' '{ print $4 }' | sed -n 3p", "_____no_output_____" ], [ "%env DEPLOYMENT_ID=$deployment_id", "env: DEPLOYMENT_ID=c772c05f-a2ff-4618-87f5-c8f6dbc26726\n" ] ], [ [ "<a id=\"deployment_details\"></a>\n### Get deployment details\nAs deployment API is asynchronous, please make sure your deployment is in `ready` state before going to the next points.", "_____no_output_____" ], [ "<a href=\"https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Deployments/deployments_get\" \ntarget=\"_blank\" rel=\"noopener no referrer\">Get deployment details</a>", "_____no_output_____" ] ], [ [ "%%bash\n\ncurl -sk -X GET \\\n --header \"Authorization: Bearer $TOKEN\" \\\n --header \"Content-Type: application/json\" \\\n \"$DATAPLATFORM_URL/ml/v4/deployments/$DEPLOYMENT_ID?space_id=$SPACE_ID&version=2020-08-01\" \\\n | python -m json.tool", "{\n \"entity\": {\n \"asset\": {\n \"id\": \"a3c15f28-ffe2-4e5a-b7bb-92898150738d\"\n },\n \"custom\": {},\n \"deployed_asset_type\": \"model\",\n \"description\": \"PyTorch model to recognize hand-written digits\",\n \"hardware_spec\": {\n \"id\": \"Not_Applicable\",\n \"name\": \"S\",\n \"num_nodes\": 1\n },\n \"name\": \"PyTorch Mnist deployment\",\n \"online\": {},\n \"space_id\": \"dfabc53a-c862-4a4c-9161-e74d6726629a\",\n \"status\": {\n \"online_url\": {\n \"url\": \"https://wmlgm-cpd-wmlgm.apps.wml1x180.ma.platformlab.ibm.com/ml/v4/deployments/c772c05f-a2ff-4618-87f5-c8f6dbc26726/predictions\"\n },\n \"state\": \"ready\"\n }\n },\n \"metadata\": {\n \"created_at\": \"2020-12-10T14:08:22.766Z\",\n \"description\": \"PyTorch model to recognize hand-written digits\",\n \"id\": \"c772c05f-a2ff-4618-87f5-c8f6dbc26726\",\n \"modified_at\": \"2020-12-10T14:08:22.766Z\",\n \"name\": \"PyTorch Mnist deployment\",\n \"owner\": \"1000331009\",\n \"space_id\": \"dfabc53a-c862-4a4c-9161-e74d6726629a\"\n }\n}\n" ] ], [ [ "<a id=\"input_score\"></a>\n### Prepare scoring input data\n**Hint:** You may need to install numpy using following command `!pip install numpy`", "_____no_output_____" ] ], [ [ "!wget -q https://github.com/IBM/watson-machine-learning-samples/raw/master/cpd3.5/data/mnist/mnist.npz", "_____no_output_____" ], [ "import numpy as np\n\nmnist_dataset = np.load('mnist.npz')\ntest_mnist = mnist_dataset['x_test']", "_____no_output_____" ], [ "image_1 = [test_mnist[0].tolist()]\nimage_2 = [test_mnist[1].tolist()]", "_____no_output_____" ], [ "%matplotlib inline\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "for i, image in enumerate([test_mnist[0], test_mnist[1]]):\n plt.subplot(2, 2, i + 1)\n plt.axis('off')\n plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')", "_____no_output_____" ] ], [ [ "<a id=\"webservice_score\"></a>\n### Scoring of a webservice\nIf you want to make a `score` call on your deployment, please follow a below method:", "_____no_output_____" ], [ "<a href=\"https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Deployment%20Jobs/deployment_jobs_create\" \ntarget=\"_blank\" rel=\"noopener no referrer\">Create deployment job</a>", "_____no_output_____" ] ], [ [ "%%bash -s \"$image_1\" \"$image_2\"\n\ncurl -sk -X POST \\\n --header \"Authorization: Bearer $TOKEN\" \\\n --header \"Content-Type: application/json\" \\\n --header \"Accept: application/json\" \\\n --data '{\"space_id\": \"$SPACE_ID\",\"input_data\": [{\"values\": ['\"$1\"', '\"$2\"']}]}' \\\n 
\"$DATAPLATFORM_URL/ml/v4/deployments/$DEPLOYMENT_ID/predictions?version=2020-08-01\" \\\n | python -m json.tool", "{\n \"predictions\": [\n {\n \"values\": [\n [\n 0.0,\n 0.0,\n 0.0,\n 0.0,\n 0.0,\n 0.0,\n 0.0,\n 1.0,\n 0.0,\n 0.0\n ],\n [\n 0.0,\n 0.0,\n 1.0,\n 0.0,\n 0.0,\n 0.0,\n 0.0,\n 0.0,\n 0.0,\n 0.0\n ]\n ]\n }\n ]\n}\n" ] ], [ [ "<a id=\"deployments_list\"></a>\n### Listing all deployments", "_____no_output_____" ], [ "<a href=\"https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Deployments/deployments_list\" \ntarget=\"_blank\" rel=\"noopener no referrer\">List deployments details</a>", "_____no_output_____" ] ], [ [ "%%bash\n\ncurl -sk -X GET \\\n --header \"Authorization: Bearer $TOKEN\" \\\n --header \"Content-Type: application/json\" \\\n \"$DATAPLATFORM_URL/ml/v4/deployments?space_id=$SPACE_ID&version=2020-08-01\" \\\n | python -m json.tool", "_____no_output_____" ] ], [ [ "<a id=\"cleaning\"></a>\n## 6. Cleaning section\n\nBelow section is useful when you want to clean all of your previous work within this notebook.\nJust convert below cells into the `code` and run them.", "_____no_output_____" ], [ "<a id=\"training_delete\"></a>\n### Delete training run\n**Tip:** You can completely delete a training run with its metadata.", "_____no_output_____" ], [ "<a href=\"https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Trainings/trainings_delete\" \ntarget=\"_blank\" rel=\"noopener no referrer\">Deleting training</a>", "_____no_output_____" ] ], [ [ "%%bash\n\nTRAINING_ID_TO_DELETE=...\n\ncurl -sk -X DELETE \\\n --header \"Authorization: Bearer $TOKEN\" \\\n --header \"Content-Type: application/json\" \\\n --header \"Accept: application/json\" \\\n \"$DATAPLATFORM_URL/ml/v4/trainings/$TRAINING_ID_TO_DELETE?space_id=$SPACE_ID&version=2020-08-01&hard_delete=true\"", "_____no_output_____" ] ], [ [ "<a id=\"deployment_delete\"></a>\n### Deleting deployment\n**Tip:** You can delete existing deployment by calling DELETE method.", "_____no_output_____" ], [ "<a href=\"https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Deployments/deployments_delete\" \ntarget=\"_blank\" rel=\"noopener no referrer\">Delete deployment</a>", "_____no_output_____" ] ], [ [ "%%bash\n\ncurl -sk -X DELETE \\\n --header \"Authorization: Bearer $TOKEN\" \\\n --header \"Content-Type: application/json\" \\\n --header \"Accept: application/json\" \\\n \"$DATAPLATFORM_URL/ml/v4/deployments/$DEPLOYMENT_ID?space_id=$SPACE_ID&version=2020-08-01\"", "_____no_output_____" ] ], [ [ "<a id=\"model_delete\"></a>\n### Delete model from repository\n**Tip:** If you want to completely remove your stored model and model metadata, just use a DELETE method.", "_____no_output_____" ], [ "<a href=\"https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Models/models_delete\" \ntarget=\"_blank\" rel=\"noopener no referrer\">Delete model from repository</a>", "_____no_output_____" ] ], [ [ "%%bash\n\ncurl -sk -X DELETE \\\n --header \"Authorization: Bearer $TOKEN\" \\\n --header \"Content-Type: application/json\" \\\n \"$DATAPLATFORM_URL/ml/v4/models/$MODEL_ID?space_id=$SPACE_ID&version=2020-08-01\"", "_____no_output_____" ] ], [ [ "<a id=\"def_delete\"></a>\n### Delete model definition\n**Tip:** If you want to completely remove your model definition, just use a DELETE method.", "_____no_output_____" ], [ "<a href=\"https://watson-ml-v4-api.mybluemix.net/wml-restapi-cloud.html#/Model%20Definitions/model_definitions_delete\" \ntarget=\"_blank\" rel=\"noopener no referrer\">Delete model 
definition</a>", "_____no_output_____" ] ], [ [ "%%bash\n\ncurl -sk -X DELETE \\\n --header \"Authorization: Bearer $TOKEN\" \\\n --header \"Content-Type: application/json\" \\\n \"$DATAPLATFORM_URL/ml/v4/model_definitions/$MODEL_DEFINITION_ID?space_id=$SPACE_ID&version=2020-08-01\"", "_____no_output_____" ] ], [ [ "<a id=\"summary\"></a>\n## 7. Summary and next steps\n\n You successfully completed this notebook!.\n \n You learned how to use `cURL` calls to store, deploy and score a PyTorch Deep Learning model in WML. ", "_____no_output_____" ], [ "### Authors\n\n**Jan Sołtysik**, Intern in Watson Machine Learning at IBM", "_____no_output_____" ], [ "Copyright © 2020, 2021 IBM. This notebook and its source code are released under the terms of the MIT License.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "raw", "markdown", "raw", "markdown", "code", "markdown", "code", "markdown", "raw", "markdown", "code", "markdown", "code", "markdown", "raw", "markdown", "code", "markdown", "code", "markdown", "raw", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "raw", "markdown", "raw", "markdown", "raw", "markdown", "raw", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "raw", "raw" ], [ "markdown", "markdown" ], [ "raw" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "raw" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "raw" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "raw" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "raw" ], [ "markdown", "markdown" ], [ "raw" ], [ "markdown", "markdown" ], [ "raw" ], [ "markdown", "markdown" ], [ "raw" ], [ "markdown", "markdown", "markdown" ] ]
cb81ecc7d4adf71e5f996e778fdc580158779147
610,171
ipynb
Jupyter Notebook
nbs/dl2/selfmade/08_data_block_selfmade.ipynb
Hustens0hn/fastai-course-v3
5f803c0a9b6b7169b8675f16ee81758c3b5f2f14
[ "Apache-2.0" ]
null
null
null
nbs/dl2/selfmade/08_data_block_selfmade.ipynb
Hustens0hn/fastai-course-v3
5f803c0a9b6b7169b8675f16ee81758c3b5f2f14
[ "Apache-2.0" ]
null
null
null
nbs/dl2/selfmade/08_data_block_selfmade.ipynb
Hustens0hn/fastai-course-v3
5f803c0a9b6b7169b8675f16ee81758c3b5f2f14
[ "Apache-2.0" ]
null
null
null
359.770637
181,472
0.930383
[ [ [ "%load_ext autoreload\n%autoreload 2\n\n%matplotlib inline", "_____no_output_____" ], [ "#export \nfrom exp.nb_07a import *", "_____no_output_____" ], [ "path = untar_data(URLs.IMAGENETTE_160)", "_____no_output_____" ], [ "path", "_____no_output_____" ], [ "#export\nimport os, PIL, mimetypes\nPath.ls = lambda x: list(x.iterdir())", "_____no_output_____" ], [ "path.ls()", "_____no_output_____" ], [ "(path/'val').ls()", "_____no_output_____" ], [ "path_tench = path/'val'/'n01440764'", "_____no_output_____" ], [ "img_fn = path_tench.ls()[0]\nimg_fn", "_____no_output_____" ], [ "img = PIL.Image.open(img_fn)\nimg", "_____no_output_____" ], [ "plt.imshow(img);", "_____no_output_____" ], [ "import numpy\nimg_arr = numpy.array(img)\nimg_arr.shape", "_____no_output_____" ], [ "img_arr[:10,:10,0]", "_____no_output_____" ], [ "#export\nimage_extensions = set(k for k,v in mimetypes.types_map.items() if v.startswith('image/'))", "_____no_output_____" ], [ "' '.join(image_extensions)", "_____no_output_____" ], [ "#export\ndef setify(o): return o if isinstance(o, set) else set(listify(o))", "_____no_output_____" ], [ "test_eq(setify({1}), {1})\ntest_eq(setify({1,2,1}), {1,2})\ntest_eq(setify([1,2,1]), {1,2})\ntest_eq(setify(1), {1})\ntest_eq(setify(None), set())\ntest_eq(setify('a'), {'a'})", "_____no_output_____" ], [ "#export\ndef _get_files(p, fs, extensions=None):\n p = Path(p)\n res = [p/f for f in fs if not f.startswith('.') and ((not extensions) or f'.{f.split(\".\")[-1].lower()}' in extensions)]\n return res", "_____no_output_____" ], [ "p = [o.name for o in os.scandir(path_tench)]\np[:3]", "_____no_output_____" ], [ "t = _get_files(path_tench, p, extensions=image_extensions)\nt[:3]", "_____no_output_____" ], [ "#export\ndef get_files(path, extensions=None, recurse=False, include=None):\n path = Path(path)\n extensions = setify(extensions)\n extensions = {e.lower() for e in extensions}\n if recurse:\n res = []\n for i, (p,d,f) in enumerate(os.walk(path)):\n if include is not None and i == 0: d[:] = [o for o in d if o in include]\n else: d[:] = [o for o in d if not o.startswith('.')]\n res += _get_files(p, f, extensions)\n return res\n \n else:\n fs = [o.name for o in os.scandir(path) if o.is_file()]\n return _get_files(path, fs, extensions)", "_____no_output_____" ], [ "get_files(path_tench, image_extensions)[:3]", "_____no_output_____" ], [ "get_files(path, image_extensions, recurse=True)[:3]", "_____no_output_____" ], [ "all_fns = get_files(path, image_extensions, recurse=True)\nlen(all_fns)", "_____no_output_____" ], [ "%timeit -n 10 get_files(path, image_extensions, recurse=True)", "53.4 ms ± 5.52 ms per loop (mean ± std. dev. 
of 7 runs, 10 loops each)\n" ] ], [ [ "# Prepare for modeling", "_____no_output_____" ], [ "## Get files", "_____no_output_____" ] ], [ [ "#export\ndef compose(x, funcs, *args, order_key='_order', **kwargs):\n key = lambda o: getattr(o, order_key, 0)\n for f in sorted(listify(funcs), key=key): x = f(x, **kwargs)\n return x", "_____no_output_____" ], [ "ListContainer??", "_____no_output_____" ], [ "#export\nclass ItemList(ListContainer):\n def __init__(self, items, path='.', tfms=None):\n super().__init__(items)\n self.path, self.tfms = path, tfms\n \n def __repr__(self): return f'{super().__repr__()}\\nPath: {self.path}'\n \n def new(self, items, cls=None):\n if cls is None: cls = self.__class__\n return cls(items, self.path, self.tfms)\n \n def get(self, i): return i\n def _get(self, i): return compose(self.get(i), self.tfms)\n \n def __getitem__(self, idx):\n res = super().__getitem__(idx)\n if isinstance(res, list): return [self._get(o) for o in res]\n return self._get(res)", "_____no_output_____" ], [ "#export\nclass ImageList(ItemList):\n @classmethod\n def from_files(cls, path, extensions=None, recurse=True, include=None, **kwargs):\n if extensions is None: extensions = image_extensions\n return cls(get_files(path, extensions, recurse, include), path, **kwargs)\n \n def get(self, fn): return PIL.Image.open(fn)", "_____no_output_____" ], [ "#export\nclass Transform(): _order = 0\n \nclass MakeRGB(Transform):\n def __call__(self, item): return item.convert('RGB')\n \ndef make_rgb(item): return item.convert('RGB')", "_____no_output_____" ], [ "il = ImageList.from_files(path, tfms=make_rgb)", "_____no_output_____" ], [ "il", "_____no_output_____" ], [ "img = il[0]; img", "_____no_output_____" ], [ "il[:1]", "_____no_output_____" ] ], [ [ "## Split validation set", "_____no_output_____" ] ], [ [ "fn = il.items[0]; fn", "_____no_output_____" ], [ "fn.parent.parent.name", "_____no_output_____" ], [ "#export\ndef grandparent_splitter(fn, train_name='train', valid_name='valid'):\n gp = fn.parent.parent.name\n return True if gp == valid_name else False if gp == train_name else None\n\ndef split_by_func(items, f):\n mask = [f(o) for o in items]\n \n t = [o for o,m in zip(items, mask) if m == False]\n v = [o for o,m in zip(items, mask) if m == True]\n \n return t,v", "_____no_output_____" ], [ "splitter = partial(grandparent_splitter, valid_name='val')", "_____no_output_____" ], [ "%time train, valid = split_by_func(il, splitter)", "CPU times: user 29.1 ms, sys: 0 ns, total: 29.1 ms\nWall time: 28.1 ms\n" ], [ "len(train), len(valid)", "_____no_output_____" ], [ "#export\nclass SplitData():\n def __init__(self, train, valid): self.train, self.valid = train, valid\n \n def __getattr__(self, k): return getattr(self.train, k)\n \n def __setstate__(self, data:Any): self.__dict__.update(data)\n \n @classmethod\n def split_by_func(cls, il, f):\n lists = map(il.new, split_by_func(il.items, f))\n return cls(*lists)\n \n def __repr__(self): return f'{self.__class__.__name__}\\Train: {self.train}\\nValid: {self.valid}'", "_____no_output_____" ], [ "sd = SplitData.split_by_func(il, splitter); sd", "_____no_output_____" ] ], [ [ "## Labeling", "_____no_output_____" ] ], [ [ "#export\nfrom collections import OrderedDict\n\ndef uniqueify(x, sort=False):\n res = list(OrderedDict.fromkeys(x).keys())\n if sort: res.sort()\n return res", "_____no_output_____" ], [ "#export\nclass Processor():\n def process(self, items): return items\n \nclass CategoryProcessor(Processor):\n def __init__(self): self.vocab = None\n 
\n def __call__(self, items):\n if self.vocab is None:\n self.vocab = uniqueify(items)\n self.otoi = {v:k for k,v in enumerate(self.vocab)}\n return [self.proc1(o) for o in items]\n \n def proc1(self, item): return self.otoi[item]\n \n def deprocess(self, idxs):\n assert self.vocab is not None\n return [self.deproc1(idx) for idx in idxs]\n \n def deproc1(self, idx): return self.vocab[idx]", "_____no_output_____" ], [ "#export\ndef parent_labeler(fn): return fn.parent.name\n\ndef _label_by_func(il, f, cls=ItemList): return cls([f(o) for o in il.items], path=il.path)", "_____no_output_____" ], [ "#export\nclass LabeledData():\n def process(self, il, proc): return il.new(compose(il.items, proc))\n \n def __init__(self, x, y, proc_x=None, proc_y=None):\n self.x, self.y = self.process(x, proc_x), self.process(y, proc_y)\n self.proc_x, self.proc_y = proc_x, proc_y\n \n def __repr__(self): return f'{self.__class__.__name__}\\nx: {self.x}\\ny: {self.y}\\n'\n def __getitem__(self, idx): return self.x[idx], self.y[idx]\n def __len__(self): return len(self.x)\n \n def x_obj(self, idx): return self.obj(self.x, idx, self.proc_x)\n def y_obj(self, idx): return self.obj(self.y, idx, self.proc_y)\n \n def obj(self, items, idx, procs):\n isint = isinstance(idx, int) or (isinstance(idx, torch.LongTensor) and not idx.ndim)\n item = items[idx]\n for proc in reversed(listify(procs)):\n item = proc.deproc1(item) if isint else proc.deprocess(item)\n return item\n \n @classmethod\n def label_by_func(cls, il, f, proc_x=None, proc_y=None):\n return cls(il, _label_by_func(il, f), proc_x, proc_y)\n \ndef label_by_func(sd, f, proc_x=None, proc_y=None):\n train = LabeledData.label_by_func(sd.train, f, proc_x, proc_y)\n valid = LabeledData.label_by_func(sd.valid, f, proc_x, proc_y)\n return SplitData(train, valid) ", "_____no_output_____" ], [ "ll = label_by_func(sd, parent_labeler, proc_y=CategoryProcessor())", "_____no_output_____" ], [ "ll", "_____no_output_____" ], [ "assert ll.train.proc_y is ll.valid.proc_y", "_____no_output_____" ], [ "ll.train.y", "_____no_output_____" ], [ "ll.train.y.items[0], ll.train.y_obj(0), ll.train.y_obj(range(2))", "_____no_output_____" ] ], [ [ "## Transform to tensor", "_____no_output_____" ] ], [ [ "ll.train[0]", "_____no_output_____" ], [ "ll.train[0][0]", "_____no_output_____" ], [ "ll.train[0][0].resize((128,128))", "_____no_output_____" ], [ "#export\nclass ResizeFixed(Transform):\n _order = 10\n def __init__(self, size):\n if isinstance(size, int): size = (size,size)\n self.size = size\n \n def __call__(self, item): return item.resize(self.size, PIL.Image.BILINEAR)\n \ndef to_byte_tensor(item):\n res = torch.ByteTensor(torch.ByteStorage.from_buffer(item.tobytes()))\n w, h = item.size\n return res.view(h,w,-1).permute(2,0,1)\nto_byte_tensor._order = 20\n\ndef to_float_tensor(item): return item.float().div_(255.)\nto_float_tensor._order = 30", "_____no_output_____" ], [ "tfms = [make_rgb, ResizeFixed(128), to_byte_tensor, to_float_tensor]\n\nil = ImageList.from_files(path, tfms=tfms)\nsd = SplitData.split_by_func(il, splitter)\nll = label_by_func(sd, parent_labeler, proc_y=CategoryProcessor())", "_____no_output_____" ], [ "#export\ndef show_image(im, figsize=(3,3)):\n plt.figure(figsize=figsize)\n plt.axis('off')\n plt.imshow(im.permute(1,2,0))", "_____no_output_____" ], [ "x, y = ll.train[0]\nx.shape", "_____no_output_____" ], [ "show_image(x)", "_____no_output_____" ] ], [ [ "# Modeling", "_____no_output_____" ] ], [ [ "bs = 64", "_____no_output_____" ], [ "train_dl, valid_dl = 
get_dls(ll.train, ll.valid, bs, num_workers=4)", "_____no_output_____" ], [ "x,y = next(iter(train_dl))", "_____no_output_____" ], [ "x.shape", "_____no_output_____" ], [ "show_image(x[0])\nll.train.proc_y.vocab[y[0]]", "_____no_output_____" ], [ "y", "_____no_output_____" ], [ "#export\nclass DataBunch():\n def __init__(self, train_dl, valid_dl, c_in=None, c_out=None):\n self.train_dl, self.valid_dl, self.c_in, self.c_out = train_dl, valid_dl, c_in, c_out\n \n @property\n def train_ds(self): return self.train_dl.dataset\n @property\n def valid_ds(self): return self.valid_dl.dataset", "_____no_output_____" ], [ "#export\ndef databunchify(sd, bs, c_in=None, c_out=None, **kwargs):\n dls = get_dls(sd.train, sd.valid, bs, **kwargs)\n return DataBunch(*dls, c_in=c_in, c_out=c_out)\n\nSplitData.to_databunch = databunchify", "_____no_output_____" ], [ "path = untar_data(URLs.IMAGENETTE_160)\ntfms = [make_rgb, ResizeFixed(128), to_byte_tensor, to_float_tensor]\n\nil = ImageList.from_files(path, tfms=tfms)\nsd = SplitData.split_by_func(il, partial(grandparent_splitter, valid_name='val'))\nll = label_by_func(sd, parent_labeler, proc_y=CategoryProcessor())\ndata = ll.to_databunch(bs, c_in=3, c_out=10, num_workers=4)", "_____no_output_____" ] ], [ [ "## Model", "_____no_output_____" ] ], [ [ "cbfs = [CudaCallback, Recorder,\n partial(AvgStatsCallback, accuracy)]", "_____no_output_____" ], [ "m, s = x.mean((0,2,3)).cuda(), x.std((0,2,3)).cuda()\nm, s", "_____no_output_____" ], [ "#export\ndef normalize_chan(x, mean, std):\n return (x - mean[...,None,None]) / std[...,None,None]\n\n_m = tensor([0.47, 0.48, 0.45])\n_s = tensor([0.29, 0.28, 0.30])\nnorm_imagenette = partial(normalize_chan, mean=_m.cuda(), std=_s.cuda())", "_____no_output_____" ], [ "cbfs.append(partial(BatchTransformXCallback, norm_imagenette))", "_____no_output_____" ], [ "nfs = [64,64,128,256]", "_____no_output_____" ], [ "#export\nimport math\ndef prev_pow_2(x): return 2**math.floor(math.log2(x))", "_____no_output_____" ], [ "#export\ndef get_cnn_layers(data, nfs, layer, **kwargs):\n def f(ni, nf, stride=2): return layer(ni, nf, 3, stride=stride, **kwargs)\n l1 = data.c_in\n l2 = prev_pow_2(l1*3*3)\n layers = [f(l1, l2, stride=1),\n f(l2, l2*2, stride=2),\n f(l2*2, l2*4, stride=2)]\n nfs = [l2*4] + nfs\n \n layers += [f(nfs[i], nfs[i+1]) for i in range(len(nfs) - 1)]\n layers += [nn.AdaptiveAvgPool2d(1), Lambda(flatten), nn.Linear(nfs[-1], data.c_out)]\n \n return layers\n\ndef get_cnn_model(data, nfs, layer, **kwargs):\n return nn.Sequential(*get_cnn_layers(data, nfs, layer, **kwargs))\n\ndef get_learn_run(nfs, data, lr, layer, cbs=None, opt_func=None, **kwargs):\n model = get_cnn_model(data, nfs, layer, **kwargs)\n init_cnn(model)\n return get_runner(model, data, lr=lr, cbs=cbs, opt_func=opt_func)", "_____no_output_____" ], [ "sched = combine_scheds([0.3, 0.7], cos_1cycle_anneal(0.1, 0.3, 0.05))", "_____no_output_____" ], [ "learn, run = get_learn_run(nfs, data, 0.2, conv_layer, cbs=cbfs+[partial(ParamScheduler, 'lr', sched)])", "_____no_output_____" ], [ "#export\ndef model_summary(run, learn, data, find_all=False):\n xb, yb = get_batch(data.valid_dl, run)\n device = next(learn.model.parameters()).device\n xb, yb, = xb.to(device), yb.to(device)\n mods = find_modules(learn.model, is_lin_layer) if find_all else learn.model.children()\n f = lambda hook,mod,inp,outp: print(f'{mod}\\n{outp.shape}\\n')\n with Hooks(mods, f) as hooks: learn.model(xb)", "_____no_output_____" ], [ "model_summary(run, learn, data)", "Sequential(\n (0): 
Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)\n (1): GeneralRelu()\n (2): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n)\ntorch.Size([128, 16, 128, 128])\n\nSequential(\n (0): Conv2d(16, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n (1): GeneralRelu()\n (2): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n)\ntorch.Size([128, 32, 64, 64])\n\nSequential(\n (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n (1): GeneralRelu()\n (2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n)\ntorch.Size([128, 64, 32, 32])\n\nSequential(\n (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n (1): GeneralRelu()\n (2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n)\ntorch.Size([128, 64, 16, 16])\n\nSequential(\n (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n (1): GeneralRelu()\n (2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n)\ntorch.Size([128, 64, 8, 8])\n\nSequential(\n (0): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n (1): GeneralRelu()\n (2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n)\ntorch.Size([128, 128, 4, 4])\n\nSequential(\n (0): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)\n (1): GeneralRelu()\n (2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\n)\ntorch.Size([128, 256, 2, 2])\n\nAdaptiveAvgPool2d(output_size=1)\ntorch.Size([128, 256, 1, 1])\n\nLambda()\ntorch.Size([128, 256])\n\nLinear(in_features=256, out_features=10, bias=True)\ntorch.Size([128, 10])\n\n" ], [ "%time run.fit(5, learn)", "train: [1.7865281197196115, tensor(0.3803, device='cuda:0')]\nvalid: [1.6788043640525478, tensor(0.4346, device='cuda:0')]\ntrain: [1.4219498742607455, tensor(0.5268, device='cuda:0')]\nvalid: [1.3536269655652866, tensor(0.5483, device='cuda:0')]\ntrain: [1.0226649337641252, tensor(0.6621, device='cuda:0')]\nvalid: [1.2033512888136944, tensor(0.6125, device='cuda:0')]\ntrain: [0.6602305570314975, tensor(0.7879, device='cuda:0')]\nvalid: [1.1022392515923567, tensor(0.6499, device='cuda:0')]\ntrain: [0.37651648739472227, tensor(0.8951, device='cuda:0')]\nvalid: [1.094550283638535, tensor(0.6586, device='cuda:0')]\nCPU times: user 43.3 s, sys: 20.5 s, total: 1min 3s\nWall time: 1min 25s\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cb8209bbb3726b0db976face78dfa4dbf70645d5
6,580
ipynb
Jupyter Notebook
_sources/shorts/Matplotlib.ipynb
callysto/shorts-book
aeecdd8c475ad382388095261ce45c81c8fac764
[ "CC0-1.0", "CC-BY-4.0" ]
null
null
null
_sources/shorts/Matplotlib.ipynb
callysto/shorts-book
aeecdd8c475ad382388095261ce45c81c8fac764
[ "CC0-1.0", "CC-BY-4.0" ]
null
null
null
_sources/shorts/Matplotlib.ipynb
callysto/shorts-book
aeecdd8c475ad382388095261ce45c81c8fac764
[ "CC0-1.0", "CC-BY-4.0" ]
null
null
null
28.362069
377
0.578723
[ [ [ "![Callysto.ca Banner](https://github.com/callysto/curriculum-notebooks/blob/master/callysto-notebook-banner-top.jpg?raw=true)\n\n<a href=\"https://hub.callysto.ca/jupyter/hub/user-redirect/git-pull?repo=https%3A%2F%2Fgithub.com%2Fcallysto%2Fshorts&branch=master&subPath=Matplotlib.ipynb&depth=1\" target=\"_parent\"><img src=\"https://raw.githubusercontent.com/callysto/curriculum-notebooks/master/open-in-callysto-button.svg?sanitize=true\" width=\"123\" height=\"24\" alt=\"Open in Callysto\"/></a>", "_____no_output_____" ], [ "# Plotting with matplotlib\n\nMatplotlib is a rich collection of commands to create mathematical plots in a Jupyter notebook. It is best to search online for extensive examples.\n\nIn this notebook, we will just touch on the basics, to get you started. Read online to get many more examples and details. \n\nInside Matplotlib is a module called pyplot, that does most of the work for us. This is the module that will be loaded into the notebook before plotting. \n\nIt is also important to tell the notebook that you want your plots to appear \"inline\", which is to say they will appear in a cell inside your notebook. The following two commands set this up for you.", "_____no_output_____" ] ], [ [ "%matplotlib inline\nfrom matplotlib.pyplot import *", "_____no_output_____" ] ], [ [ "## Example 1. A simple plot\n\nWe plot five data points, connecting with lines.\n\nNote the semicolon after the plot command. It removes some extraneous message from the computer. ", "_____no_output_____" ] ], [ [ "plot([1,2,2,3,5]);", "_____no_output_____" ] ], [ [ "## Example 2. Another simple plot\n\nWe plot five data points, marked with circles.", "_____no_output_____" ] ], [ [ "plot([1,2,2,3,5],'o');", "_____no_output_____" ] ], [ [ "## Example 3. Another simple plot, with x and y values\n\nWe plot five data points, y versus x, marked with circles. \n\nNote the x axis now starts at coordinate x=1.", "_____no_output_____" ] ], [ [ "x = [1,2,3,4,5]\ny = [1,2,2,3,5]\nplot(x,y,'o');", "_____no_output_____" ] ], [ [ "## Example 4. We can also do bar plots\n\nWe plot five data points, y versus x, as a bar chart. We also will add a title and axis labels.", "_____no_output_____" ] ], [ [ "x = [1,2,3,4,5]\ny = [1,2,2,3,5]\nbar(x,y);\ntitle(\"Hey, this is my bar chart\");\nxlabel(\"x values\");\nylabel(\"y values\");", "_____no_output_____" ] ], [ [ "## Example 5. Object oriented plotting\n\nFor more precise control of your plotting effort, you can create a figure and axis object, and modify it as necessary. Best to read up on this online, but here is the baseic example.\n\nWe plot five data points, y versus x, by attaching them to the figure or axis object, as appropriate.", "_____no_output_____" ] ], [ [ "fig, ax = subplots()\nax.plot([1,2,3,4,5], [1,2,2,3,5], 'o')\nax.set_title('Object oriented version of plotting');\nshow()", "_____no_output_____" ] ], [ [ "## Example 6: More mathematical plotting\n\nMatplotlib likes to work with Numpy (numerical Python), which gives us arrays and mathematical functions like sine and cosine. \n\nWe must first import the numpy module. I'll do it in the next cell, but keep in mind you can also include it up above when we loaded in matplotlib.\n\nWe then create an array of x values, running from 0 to 1, then evaluate the sine function on those values. We then do an x,y plot of the function. 
\n\nAs follows:", "_____no_output_____" ] ], [ [ "from numpy import *\nx = linspace(0,1)\ny = sin(2*pi*x)\nplot(x,y);\ntitle(\"One period of the sine function\")\nxlabel(\"x values\")\nylabel(\"sin(2 pi x) values\");", "_____no_output_____" ] ], [ [ "## Example 7: Math notation in labels\n\nJust for fun, we note that Latex can be used in the plot labels, so you can make your graphs look more professional. The key is to use the \"$\" delimiter to identify a section of text that is to be typeset as if in Latex. \n\nCheck out these labels:\n", "_____no_output_____" ] ], [ [ "x = linspace(0,1)\ny = sin(2*pi*x)\nplot(x,y);\ntitle(\"One period of the function $\\sin(2 \\pi x)$\")\nxlabel(\"$x$ values\")\nylabel(\"$\\sin(2 \\pi x)$ values\");", "_____no_output_____" ] ], [ [ "[![Callysto.ca License](https://github.com/callysto/curriculum-notebooks/blob/master/callysto-notebook-banner-bottom.jpg?raw=true)](https://github.com/callysto/curriculum-notebooks/blob/master/LICENSE.md)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
cb821264cec998481c918110e082f370a2769d3a
19,669
ipynb
Jupyter Notebook
notebooks/introduction_to_tensorflow/labs/write_low_level_code.ipynb
henrypurbreadcom/asl-ml-immersion
d74d919be561f97d19f4af62b90fd67944cee403
[ "Apache-2.0" ]
11
2021-09-08T05:39:02.000Z
2022-03-25T14:35:22.000Z
notebooks/introduction_to_tensorflow/labs/write_low_level_code.ipynb
henrypurbreadcom/asl-ml-immersion
d74d919be561f97d19f4af62b90fd67944cee403
[ "Apache-2.0" ]
118
2021-08-28T03:09:44.000Z
2022-03-31T00:38:44.000Z
notebooks/introduction_to_tensorflow/labs/write_low_level_code.ipynb
henrypurbreadcom/asl-ml-immersion
d74d919be561f97d19f4af62b90fd67944cee403
[ "Apache-2.0" ]
110
2021-09-02T15:01:35.000Z
2022-03-31T12:32:48.000Z
25.812336
553
0.54985
[ [ [ "# Writing Low-Level TensorFlow Code\n\n\n**Learning Objectives**\n\n 1. Practice defining and performing basic operations on constant Tensors\n 2. Use Tensorflow's automatic differentiation capability\n 3. Learn how to train a linear regression from scratch with TensorFLow\n\n\n## Introduction \n\nIn this notebook, we will start by reviewing the main operations on Tensors in TensorFlow and understand how to manipulate TensorFlow Variables. We explain how these are compatible with python built-in list and numpy arrays. \n\nThen we will jump to the problem of training a linear regression from scratch with gradient descent. The first order of business will be to understand how to compute the gradients of a function (the loss here) with respect to some of its arguments (the model weights here). The TensorFlow construct allowing us to do that is `tf.GradientTape`, which we will describe. \n\nAt last we will create a simple training loop to learn the weights of a 1-dim linear regression using synthetic data generated from a linear model. \n\nAs a bonus exercise, we will do the same for data generated from a non linear model, forcing us to manual engineer non-linear features to improve our linear model performance.\n\nEach learning objective will correspond to a #TODO in the [student lab notebook](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/introduction_to_tensorflow/labs/write_low_level_code.ipynb) -- try to complete that notebook first before reviewing this solution notebook.", "_____no_output_____" ] ], [ [ "!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst", "_____no_output_____" ], [ "# Ensure the right version of Tensorflow is installed.\n!pip freeze | grep tensorflow==2.1 || pip install tensorflow==2.1", "_____no_output_____" ], [ "import numpy as np\nimport tensorflow as tf\nfrom matplotlib import pyplot as plt", "_____no_output_____" ], [ "print(tf.__version__)", "_____no_output_____" ] ], [ [ "## Operations on Tensors", "_____no_output_____" ], [ "### Variables and Constants", "_____no_output_____" ], [ "Tensors in TensorFlow are either contant (`tf.constant`) or variables (`tf.Variable`).\nConstant values can not be changed, while variables values can be.\n\nThe main difference is that instances of `tf.Variable` have methods allowing us to change \ntheir values while tensors constructed with `tf.constant` don't have these methods, and\ntherefore their values can not be changed. When you want to change the value of a `tf.Variable`\n`x` use one of the following method: \n\n* `x.assign(new_value)`\n* `x.assign_add(value_to_be_added)`\n* `x.assign_sub(value_to_be_subtracted`\n\n", "_____no_output_____" ] ], [ [ "x = tf.constant([2, 3, 4])\nx", "_____no_output_____" ], [ "x = tf.Variable(2.0, dtype=tf.float32, name=\"my_variable\")", "_____no_output_____" ], [ "x.assign(45.8)\nx", "_____no_output_____" ], [ "x.assign_add(4)\nx", "_____no_output_____" ], [ "x.assign_sub(3)\nx", "_____no_output_____" ] ], [ [ "### Point-wise operations", "_____no_output_____" ], [ "Tensorflow offers similar point-wise tensor operations as numpy does:\n \n* `tf.add` allows to add the components of a tensor \n* `tf.multiply` allows us to multiply the components of a tensor\n* `tf.subtract` allow us to substract the components of a tensor\n* `tf.math.*` contains the usual math operations to be applied on the components of a tensor\n* and many more...\n\nMost of the standard aritmetic operations (`tf.add`, `tf.substrac`, etc.) 
are overloaded by the usual corresponding arithmetic symbols (`+`, `-`, etc.)", "_____no_output_____" ], [ "**Lab Task #1:** Performing basic operations on Tensors \n1. Compute the sum of the constants `a` and `b` below using `tf.add` and `+` and verify both operations produce the same values.\n2. Compute the product of the constants `a` and `b` below using `tf.multiply` and `*` and verify both operations produce the same values.\n3. Compute the exponential of the constant `a` using `tf.math.exp`. Note, you'll need to specify the type for this operation.\n", "_____no_output_____" ] ], [ [ "# TODO 1a\na = # TODO -- Your code here.\nb = # TODO -- Your code here.\nc = # TODO -- Your code here.\nd = # TODO -- Your code here.\n\nprint(\"c:\", c)\nprint(\"d:\", d)", "_____no_output_____" ], [ "# TODO 1b\na = # TODO -- Your code here.\nb = # TODO -- Your code here.\nc = # TODO -- Your code here.\nd = # TODO -- Your code here.\n\nprint(\"c:\", c)\nprint(\"d:\", d)", "_____no_output_____" ], [ "# TODO 1c\n# tf.math.exp expects floats so we need to explicitly give the type\na = # TODO -- Your code here.\nb = # TODO -- Your code here.\n\nprint(\"b:\", b)", "_____no_output_____" ] ], [ [ "### NumPy Interoperability\n\nIn addition to native TF tensors, tensorflow operations can take native python types and NumPy arrays as operands. ", "_____no_output_____" ] ], [ [ "# native python list\na_py = [1, 2]\nb_py = [3, 4]", "_____no_output_____" ], [ "tf.add(a_py, b_py)", "_____no_output_____" ], [ "# numpy arrays\na_np = np.array([1, 2])\nb_np = np.array([3, 4])", "_____no_output_____" ], [ "tf.add(a_np, b_np)", "_____no_output_____" ], [ "# native TF tensor\na_tf = tf.constant([1, 2])\nb_tf = tf.constant([3, 4])", "_____no_output_____" ], [ "tf.add(a_tf, b_tf)", "_____no_output_____" ] ], [ [ "You can convert a native TF tensor to a NumPy array using .numpy()", "_____no_output_____" ] ], [ [ "a_tf.numpy()", "_____no_output_____" ] ], [ [ "## Linear Regression\n\nNow let's use low level tensorflow operations to implement linear regression.\n\nLater in the course you'll see abstracted ways to do this using high level TensorFlow.", "_____no_output_____" ], [ "### Toy Dataset\n\nWe'll model the following function:\n\n\\begin{equation}\ny= 2x + 10\n\\end{equation}", "_____no_output_____" ] ], [ [ "X = tf.constant(range(10), dtype=tf.float32)\nY = 2 * X + 10\n\nprint(f\"X:{X}\")\nprint(f\"Y:{Y}\")", "_____no_output_____" ] ], [ [ "Let's also create a test dataset to evaluate our models:", "_____no_output_____" ] ], [ [ "X_test = tf.constant(range(10, 20), dtype=tf.float32)\nY_test = 2 * X_test + 10\n\nprint(f\"X_test:{X_test}\")\nprint(f\"Y_test:{Y_test}\")", "_____no_output_____" ] ], [ [ "#### Loss Function", "_____no_output_____" ], [ "The simplest model we can build is a model that for each value of x returns the sample mean of the training set:", "_____no_output_____" ] ], [ [ "y_mean = Y.numpy().mean()\n\n\ndef predict_mean(X):\n y_hat = [y_mean] * len(X)\n return y_hat\n\n\nY_hat = predict_mean(X_test)", "_____no_output_____" ] ], [ [ "Using mean squared error, our loss is:\n\\begin{equation}\nMSE = \\frac{1}{m}\\sum_{i=1}^{m}(\\hat{Y}_i-Y_i)^2\n\\end{equation}", "_____no_output_____" ], [ "For this simple model the loss is then:", "_____no_output_____" ] ], [ [ "errors = (Y_hat - Y) ** 2\nloss = tf.reduce_mean(errors)\nloss.numpy()", "_____no_output_____" ] ], [ [ "This values for the MSE loss above will give us a baseline to compare how a more complex model is doing.", "_____no_output_____" ], [ "Now, 
if $\\hat{Y}$ represents the vector containing our model's predictions when we use a linear regression model\n\\begin{equation}\n\\hat{Y} = w_0X + w_1\n\\end{equation}\n\nwe can write a loss function taking as arguments the coefficients of the model:", "_____no_output_____" ] ], [ [ "def loss_mse(X, Y, w0, w1):\n Y_hat = w0 * X + w1\n errors = (Y_hat - Y) ** 2\n return tf.reduce_mean(errors)", "_____no_output_____" ] ], [ [ "### Gradient Function\n\nTo use gradient descent we need to take the partial derivatives of the loss function with respect to each of the weights. We could manually compute the derivatives, but with Tensorflow's automatic differentiation capabilities we don't have to!\n\nDuring gradient descent we think of the loss as a function of the parameters $w_0$ and $w_1$. Thus, we want to compute the partial derivative with respect to these variables. \n\nFor that we need to wrap our loss computation within the context of `tf.GradientTape` instance which will reccord gradient information:\n\n```python\nwith tf.GradientTape() as tape:\n loss = # computation \n```\n\nThis will allow us to later compute the gradients of any tensor computed within the `tf.GradientTape` context with respect to instances of `tf.Variable`:\n\n```python\ngradients = tape.gradient(loss, [w0, w1])\n```", "_____no_output_____" ], [ "We illustrate this procedure with by computing the loss gradients with respect to the model weights:", "_____no_output_____" ], [ "**Lab Task #2:** Complete the function below to compute the loss gradients with respect to the model weights `w0` and `w1`. ", "_____no_output_____" ] ], [ [ "# TODO 2\ndef compute_gradients(X, Y, w0, w1):\n # TODO -- Your code here.", "_____no_output_____" ], [ "w0 = tf.Variable(0.0)\nw1 = tf.Variable(0.0)\n\ndw0, dw1 = compute_gradients(X, Y, w0, w1)", "_____no_output_____" ], [ "print(\"dw0:\", dw0.numpy())", "_____no_output_____" ], [ "print(\"dw1\", dw1.numpy())", "_____no_output_____" ] ], [ [ "### Training Loop\n\nHere we have a very simple training loop that converges. Note we are ignoring best practices like batching, creating a separate test set, and random weight initialization for the sake of simplicity.", "_____no_output_____" ], [ "**Lab Task #3:** Complete the `for` loop below to train a linear regression. \n1. Use `compute_gradients` to compute `dw0` and `dw1`.\n2. Then, re-assign the value of `w0` and `w1` using the `.assign_sub(...)` method with the computed gradient values and the `LEARNING_RATE`.\n3. Finally, for every 100th step , we'll compute and print the `loss`. Use the `loss_mse` function we created above to compute the `loss`. 
", "_____no_output_____" ] ], [ [ "# TODO 3\nSTEPS = 1000\nLEARNING_RATE = .02\nMSG = \"STEP {step} - loss: {loss}, w0: {w0}, w1: {w1}\\n\"\n\n\nw0 = tf.Variable(0.0)\nw1 = tf.Variable(0.0)\n\n\nfor step in range(0, STEPS + 1):\n\n dw0, dw1 = # TODO -- Your code here.\n\n if step % 100 == 0:\n loss = # TODO -- Your code here.\n print(MSG.format(step=step, loss=loss, w0=w0.numpy(), w1=w1.numpy()))", "_____no_output_____" ] ], [ [ "Now let's compare the test loss for this linear regression to the test loss from the baseline model that outputs always the mean of the training set:", "_____no_output_____" ] ], [ [ "loss = loss_mse(X_test, Y_test, w0, w1)\nloss.numpy()", "_____no_output_____" ] ], [ [ "This is indeed much better!", "_____no_output_____" ], [ "## Bonus", "_____no_output_____" ], [ "Try modelling a non-linear function such as: $y=xe^{-x^2}$", "_____no_output_____" ] ], [ [ "X = tf.constant(np.linspace(0, 2, 1000), dtype=tf.float32)\nY = X * tf.exp(-(X ** 2))", "_____no_output_____" ], [ "%matplotlib inline\n\nplt.plot(X, Y)", "_____no_output_____" ], [ "def make_features(X):\n f1 = tf.ones_like(X) # Bias.\n f2 = X\n f3 = tf.square(X)\n f4 = tf.sqrt(X)\n f5 = tf.exp(X)\n return tf.stack([f1, f2, f3, f4, f5], axis=1)", "_____no_output_____" ], [ "def predict(X, W):\n return tf.squeeze(X @ W, -1)", "_____no_output_____" ], [ "def loss_mse(X, Y, W):\n Y_hat = predict(X, W)\n errors = (Y_hat - Y) ** 2\n return tf.reduce_mean(errors)", "_____no_output_____" ], [ "def compute_gradients(X, Y, W):\n with tf.GradientTape() as tape:\n loss = loss_mse(Xf, Y, W)\n return tape.gradient(loss, W)", "_____no_output_____" ], [ "STEPS = 2000\nLEARNING_RATE = 0.02\n\n\nXf = make_features(X)\nn_weights = Xf.shape[1]\n\nW = tf.Variable(np.zeros((n_weights, 1)), dtype=tf.float32)\n\n# For plotting\nsteps, losses = [], []\nplt.figure()\n\n\nfor step in range(1, STEPS + 1):\n\n dW = compute_gradients(X, Y, W)\n W.assign_sub(dW * LEARNING_RATE)\n\n if step % 100 == 0:\n loss = loss_mse(Xf, Y, W)\n steps.append(step)\n losses.append(loss)\n plt.clf()\n plt.plot(steps, losses)\n\n\nprint(f\"STEP: {STEPS} MSE: {loss_mse(Xf, Y, W)}\")\n\nplt.figure()\nplt.plot(X, Y, label=\"actual\")\nplt.plot(X, predict(Xf, W), label=\"predicted\")\nplt.legend()", "_____no_output_____" ] ], [ [ "Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
cb821f3fe6718a2596ec0a3bf4157471c35c281d
247,579
ipynb
Jupyter Notebook
my_solutions/chapter_03.ipynb
WilyLynx/handson-ml2
a998e14383fe8c1614c640ed13f83fc3d38edb02
[ "Apache-2.0" ]
null
null
null
my_solutions/chapter_03.ipynb
WilyLynx/handson-ml2
a998e14383fe8c1614c640ed13f83fc3d38edb02
[ "Apache-2.0" ]
null
null
null
my_solutions/chapter_03.ipynb
WilyLynx/handson-ml2
a998e14383fe8c1614c640ed13f83fc3d38edb02
[ "Apache-2.0" ]
null
null
null
197.746805
155,348
0.913038
[ [ [ "# Clasification", "_____no_output_____" ], [ "## MNIST dataset", "_____no_output_____" ] ], [ [ "from sklearn.datasets import fetch_openml\nmnist = fetch_openml('mnist_784', version=1)\nmnist.keys()", "_____no_output_____" ], [ "X, y = mnist['data'], mnist['target']\nprint(X.shape)\nprint(y.shape)", "(70000, 784)\n(70000,)\n" ], [ "%matplotlib inline\n\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\n\n\nidx = 2\nsome_digit = X[idx]\nsome_digit_image = some_digit.reshape(28, 28)\n\nplt.imshow(some_digit_image, cmap='binary')\nplt.axis('off')\nplt.show()", "_____no_output_____" ], [ "y[idx]", "_____no_output_____" ], [ "import numpy as np\ny = y.astype(np.uint8)", "_____no_output_____" ], [ "split_idx = 60_000\nX_train, X_test, y_train, y_test = X[:split_idx], X[split_idx:], y[:split_idx], y[split_idx:]", "_____no_output_____" ] ], [ [ "## Binary classifier", "_____no_output_____" ] ], [ [ "y_train_5 = (y_train == 5)\ny_test_5 = (y_test == 5)", "_____no_output_____" ], [ "from sklearn.linear_model import SGDClassifier\n\nsgd_clf = SGDClassifier(random_state=42)\nsgd_clf.fit(X_train, y_train_5)", "_____no_output_____" ], [ "sgd_clf.predict([some_digit])", "_____no_output_____" ] ], [ [ "## Performance measurement", "_____no_output_____" ], [ "### Cross-validation", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import cross_val_score\ncross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring='accuracy', verbose=2, n_jobs=-1)", "[Parallel(n_jobs=-1)]: Using backend LokyBackend with 4 concurrent workers.\n[Parallel(n_jobs=-1)]: Done 3 out of 3 | elapsed: 10.8s finished\n" ], [ "from sklearn.model_selection import StratifiedKFold\nfrom sklearn.base import clone\n\nskfolds = StratifiedKFold(n_splits=3, random_state=42, shuffle=True)\n\nfor train_index, test_index in skfolds.split(X_train, y_train_5):\n clone_clf = clone(sgd_clf)\n X_train_folds = X_train[train_index]\n y_train_folds = y_train_5[train_index]\n X_test_fold = X_train[test_index]\n y_test_fold = y_train_5[test_index]\n \n clone_clf.fit(X_train_folds, y_train_folds)\n y_pred = clone_clf.predict(X_test_fold)\n n_correct = sum(y_pred == y_test_fold)\n print(n_correct / len(y_pred))", "0.9669\n0.91625\n0.96785\n" ], [ "from sklearn.base import BaseEstimator\n\nclass Never5Classifier(BaseEstimator):\n def fit(self, X, y=None):\n return self\n def predict(self, X):\n return np.zeros((len(X), 1), dtype=bool)", "_____no_output_____" ], [ "never_5_clf = Never5Classifier()\ncross_val_score(never_5_clf, X_train, y_train_5, cv=3, scoring='accuracy', verbose=2, n_jobs=-1)", "[Parallel(n_jobs=-1)]: Using backend LokyBackend with 4 concurrent workers.\n[Parallel(n_jobs=-1)]: Done 3 out of 3 | elapsed: 0.6s finished\n" ] ], [ [ "### Confusion matrix", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import cross_val_predict\n\ny_train_pred = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3, verbose=2, n_jobs=-1)", "[Parallel(n_jobs=-1)]: Using backend LokyBackend with 4 concurrent workers.\n[Parallel(n_jobs=-1)]: Done 3 out of 3 | elapsed: 10.7s finished\n" ], [ "from sklearn.metrics import confusion_matrix\n\nconfusion_matrix(y_train_5, y_train_pred)", "_____no_output_____" ] ], [ [ "### Precision and recall", "_____no_output_____" ] ], [ [ "from sklearn.metrics import precision_score, recall_score\nprint('Precision: ', precision_score(y_train_5, y_train_pred))\nprint('Recall: ', recall_score(y_train_5, y_train_pred))", "Precision: 0.8370879772350012\nRecall: 0.6511713705958311\n" ], [ "from 
sklearn.metrics import f1_score\n\nf1_score(y_train_5, y_train_pred)", "_____no_output_____" ], [ "y_scores = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3, method='decision_function', \n n_jobs=-1, verbose=2)", "[Parallel(n_jobs=-1)]: Using backend LokyBackend with 4 concurrent workers.\n[Parallel(n_jobs=-1)]: Done 3 out of 3 | elapsed: 10.6s finished\n" ], [ "from sklearn.metrics import precision_recall_curve\n\nprecisions, recalls, thresholds = precision_recall_curve(y_train_5, y_scores)", "_____no_output_____" ], [ "def plot_precision_recall_vs_threshold(precisions, recalls, thresholds):\n plt.plot(thresholds, precisions[:-1], 'b--', label='Precision')\n plt.plot(thresholds, recalls[:-1], 'g-', label='Recall')\n plt.grid(True)\n plt.xlabel('Threshold')\n plt.xlim((-50_000, 45_000))", "_____no_output_____" ], [ "plot_precision_recall_vs_threshold(precisions, recalls, thresholds)\nplt.show()", "_____no_output_____" ], [ "thresholds_90_precision = thresholds[np.argmax(precisions >= 0.90)]\nthresholds_90_precision", "_____no_output_____" ], [ "y_train_pred_90 = (y_scores > thresholds_90_precision)\nprint('Precision: ',precision_score(y_train_5, y_train_pred_90))\nprint('Recall: ',recall_score(y_train_5, y_train_pred_90))", "Precision: 0.9\nRecall: 0.47980077476480354\n" ] ], [ [ "### ROC curve", "_____no_output_____" ] ], [ [ "from sklearn.metrics import roc_curve\n\nfpr, tpr, thresholds = roc_curve(y_train_5, y_scores)", "_____no_output_____" ], [ "def plot_roc_curve(fpr, tpr, label=None):\n plt.plot(fpr, tpr, linewidth=2, label=label)\n plt.plot([0, 1], [0, 1], 'k--')\n \n \nplot_roc_curve(fpr, tpr)\nplt.show()", "_____no_output_____" ], [ "from sklearn.metrics import roc_auc_score\n\nroc_auc_score(y_train_5, y_scores)", "_____no_output_____" ], [ "from sklearn.ensemble import RandomForestClassifier\n\nforest_clf = RandomForestClassifier(random_state=42)\ny_proba_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3,\n method='predict_proba',\n n_jobs=-1, verbose=2)", "[Parallel(n_jobs=-1)]: Using backend LokyBackend with 4 concurrent workers.\n[Parallel(n_jobs=-1)]: Done 3 out of 3 | elapsed: 22.8s finished\n" ], [ "y_scores_forest = y_proba_forest[:, 1]\nfpr_forest, tpr_forest, threshold_forest = roc_curve(y_train_5, y_scores_forest)", "_____no_output_____" ], [ "plt.plot(fpr, tpr, 'b:', label='SGD')\nplot_roc_curve(fpr_forest, tpr_forest, 'Random forrest')\nplt.legend(loc='lower right')\nplt.show()", "_____no_output_____" ], [ "roc_auc_score(y_train_5, y_scores_forest)", "_____no_output_____" ] ], [ [ "## Multi classification", "_____no_output_____" ] ], [ [ "# LONG\nfrom sklearn.svm import SVC\n\nsvm_clf = SVC()\nsvm_clf.fit(X_train, y_train)\nsvm_clf.predict([some_digit])", "_____no_output_____" ], [ "some_digit_scores = svm_clf.decision_function([some_digit])\nsome_digit_scores", "_____no_output_____" ], [ "print(np.argmax(some_digit_scores))\nprint(svm_clf.classes_)\nprint(svm_clf.classes_[4])", "_____no_output_____" ], [ "# LONG\nfrom sklearn.multiclass import OneVsRestClassifier\novr_clf = OneVsRestClassifier(SVC())\novr_clf.fit(X_train, y_train)\nprint(ovr_clf.predict([some_digit]))\nprint(len(ovr_clf.estimators_))", "_____no_output_____" ], [ "sgd_clf.fit(X_train, y_train)\nsgd_clf.predict([some_digit])", "_____no_output_____" ], [ "sgd_clf.decision_function([some_digit])", "_____no_output_____" ], [ "cross_val_score(sgd_clf, X_train, y_train, cv=3, scoring='accuracy')", "_____no_output_____" ], [ "from sklearn.preprocessing import StandardScaler\n\nscaler = 
StandardScaler()\nX_train_scaled = scaler.fit_transform(X_train.astype(np.float64))\ncross_val_score(sgd_clf, X_train_scaled, y_train, cv=3, scoring='accuracy',\n verbose=2, n_jobs=-1)", "_____no_output_____" ] ], [ [ "## Error analysis", "_____no_output_____" ] ], [ [ "y_train_pred = cross_val_predict(sgd_clf, X_train_scaled, y_train, cv=3,\n verbose=2, n_jobs=-1)\nconf_mx = confusion_matrix(y_train, y_train_pred)\nconf_mx", "_____no_output_____" ], [ "plt.matshow(conf_mx, cmap=plt.cm.gray)\nplt.show()", "_____no_output_____" ], [ "row_sums = conf_mx.sum(axis=1, keepdims=True)\nnorm_conf_mx = conf_mx / row_sums\nnp.fill_diagonal(norm_conf_mx, 0)\nplt.matshow(norm_conf_mx, cmap=plt.cm.gray)\nplt.show()", "_____no_output_____" ], [ "# EXTRA\ndef plot_digits(instances, images_per_row=10, **options):\n size = 28\n images_per_row = min(len(instances), images_per_row)\n images = [instance.reshape(size,size) for instance in instances]\n n_rows = (len(instances) - 1) // images_per_row + 1\n row_images = []\n n_empty = n_rows * images_per_row - len(instances)\n images.append(np.zeros((size, size * n_empty)))\n for row in range(n_rows):\n rimages = images[row * images_per_row : (row + 1) * images_per_row]\n row_images.append(np.concatenate(rimages, axis=1))\n image = np.concatenate(row_images, axis=0)\n plt.imshow(image, cmap = mpl.cm.binary, **options)\n plt.axis(\"off\")", "_____no_output_____" ], [ "cl_a, cl_b = 3, 5\nX_aa = X_train[(y_train == cl_a) & (y_train_pred == cl_a)]\nX_ab = X_train[(y_train == cl_a) & (y_train_pred == cl_b)]\nX_ba = X_train[(y_train == cl_b) & (y_train_pred == cl_a)]\nX_bb = X_train[(y_train == cl_b) & (y_train_pred == cl_b)]\n\nplt.figure(figsize=(8,8))\nplt.subplot(221); plot_digits(X_aa[:25], images_per_row=5)\nplt.subplot(222); plot_digits(X_ab[:25], images_per_row=5)\nplt.subplot(223); plot_digits(X_ba[:25], images_per_row=5)\nplt.subplot(224); plot_digits(X_bb[:25], images_per_row=5)\nplt.show()", "_____no_output_____" ] ], [ [ "## Multilablel classification", "_____no_output_____" ] ], [ [ "from sklearn.neighbors import KNeighborsClassifier\n\ny_train_large = (y_train >=7)\ny_train_odd = (y_train % 2 == 1)\ny_multilabel = np.c_[y_train_large, y_train_odd]\n\nknn_clf = KNeighborsClassifier()\nknn_clf.fit(X_train, y_multilabel)\nknn_clf.predict([some_digit])", "_____no_output_____" ], [ "y_train_knn_pred = cross_val_predict(knn_clf, X_train, y_multilabel, cv=3,\n verbose=2, n_jobs=-1)\nf1_score(y_multilabel, y_train_knn_pred, average='macro')", "[Parallel(n_jobs=-1)]: Using backend LokyBackend with 4 concurrent workers.\n" ], [ "noise = np.random.randint(0, 100, (len(X_train), 28*28))\nX_train_mod = X_train + noise\nnoise = np.random.randint(0, 100, (len(X_train), 28*28))\nX_test_mod = X_test + noise\ny_train_mod = X_train\ny_test_mod = X_test", "_____no_output_____" ], [ "fig = plt.figure(figsize=(12,6))\nplt.subplot(121); plot_digit(X_test_mod[some_index])\nplt.subplot(122); plot_digit(y_test_mod[some_index])", "_____no_output_____" ], [ "knn_clf.fit(X_train_mod, y_train_mod)\nclean_digit = knn_clf.predit([X_test_mod[some_index]])\nplot_digit(clean_digit)", "_____no_output_____" ] ], [ [ "## Tasks", "_____no_output_____" ], [ "### Task 1 - MNIST acc 97%", "_____no_output_____" ] ], [ [ "from sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.model_selection import GridSearchCV\n\nknn_clf = KNeighborsClassifier()\nparam_grid = [\n {'weights': ['uniform', 'distance'], 'n_neighbors': [ 3, 4, 5]}\n]\ngrid_search = GridSearchCV(knn_clf, param_grid, cv=3,\n 
scoring='accuracy',\n n_jobs=-1, verbose=3)\ngrid_search.fit(X_train, y_train)", "Fitting 3 folds for each of 6 candidates, totalling 18 fits\n" ], [ "final_model = grid_search.best_estimator_\ny_pred = final_model.predict(X_test)", "_____no_output_____" ], [ "from sklearn.metrics import accuracy_score\n\nprint(grid_search.best_params_)\nprint(grid_search.best_score_)\nprint(accuracy_score(y_test, y_pred))", "{'n_neighbors': 4, 'weights': 'distance'}\n0.9703500000000002\n0.9714\n" ] ], [ [ "### Task 2 - data augmention", "_____no_output_____" ] ], [ [ "from scipy.ndimage.interpolation import shift\n\ndef shift_set(X, vector):\n return [shift(img.reshape(28,28), vector, cval=0).flatten() for img in X]\n\nX_train_aug_U = shift_set(X_train, [-1, 0])\nX_train_aug_R = shift_set(X_train, [ 0, 1])\nX_train_aug_D = shift_set(X_train, [ 1, 0])\nX_train_aug_L = shift_set(X_train, [ 0,-1])", "_____no_output_____" ], [ "X_train_aug = np.concatenate([X_train, X_train_aug_U, X_train_aug_R, X_train_aug_D, X_train_aug_L])\ny_train_aug = np.tile(y_train, 5)\nprint(len(X_train_aug))\nprint(len(y_train_aug))", "_____no_output_____" ], [ "knn_clf_aug = KNeighborsClassifier(n_neighbors=4, weights='distance',\n n_jobs=-1)\nknn_clf_aug.fit(X_train_aug, y_train_aug)", "_____no_output_____" ], [ "y_pred = knn_clf_aug.predict(X_test)\nprint(accuracy_score(y_test, y_pred))", "0.9763\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
cb8224e6dbeeec84efa26745adc83e96fd26a1f5
3,427
ipynb
Jupyter Notebook
preprocessing.ipynb
alexcoda/independent_study
be6bfb5667860195e92d462b45b572ad27768686
[ "MIT" ]
null
null
null
preprocessing.ipynb
alexcoda/independent_study
be6bfb5667860195e92d462b45b572ad27768686
[ "MIT" ]
null
null
null
preprocessing.ipynb
alexcoda/independent_study
be6bfb5667860195e92d462b45b572ad27768686
[ "MIT" ]
null
null
null
20.39881
83
0.515028
[ [ [ "%load_ext autoreload\n%autoreload 2", "_____no_output_____" ], [ "import pandas as pd\nfrom collections import Counter\nfrom tqdm import tqdm, tqdm_notebook\n\n# Local imports\nfrom preprocessing import clean_entry", "_____no_output_____" ] ], [ [ "# Load data", "_____no_output_____" ] ], [ [ "PATH = 'data/'\nfname = f\"{PATH}raw_emails.csv\"\ndf = pd.read_csv(fname)", "_____no_output_____" ] ], [ [ "# Clean base data", "_____no_output_____" ] ], [ [ "df.head()", "_____no_output_____" ], [ "results_dict = {}\nfor row in tqdm_notebook(df.iterrows()):\n message_id, contents = clean_entry(*row)\n results_dict[message_id] = contents", "_____no_output_____" ], [ "%%time\n# Takes ~1:30\nresults = pd.DataFrame(results_dict).T", "_____no_output_____" ], [ "# Re-organize the resulting dataframe\nnew_cols = ['date', 'from_email', 'message_id', 'subject', 'to_email',\n 'from_name', 'to_name', 'bcc_name', 'cc_name', 'fname_prefix',\n 'orig_fname', 'orig_i', 'email_content']\nresults.columns = new_cols\nresults.index.name = 'message_id'\n\nresults.sort_values('orig_i', inplace=True)\nresults.drop(['message_id', 'orig_i'], axis=1, inplace=True)\n\nresults = results[['date', 'from_email', 'to_email', 'subject',\n 'email_content', 'from_name', 'to_name', 'cc_name',\n 'bcc_name', 'orig_fname', 'fname_prefix']]", "_____no_output_____" ], [ "out_path = f\"{PATH}cleaned_emails.csv\"\nresults.to_csv(out_path)", "_____no_output_____" ], [ "results.head()", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
cb822d142cf6854c64ad0f375bb626ff84c42ac5
10,272
ipynb
Jupyter Notebook
Exp_Numpy-Numba.ipynb
prisae/tmp-share
c09713f6860106cdf203b0644077ed276a9412ca
[ "MIT" ]
9
2018-08-06T18:03:27.000Z
2019-05-10T16:51:10.000Z
Exp_Numpy-Numba.ipynb
prisae/tmp-share
c09713f6860106cdf203b0644077ed276a9412ca
[ "MIT" ]
null
null
null
Exp_Numpy-Numba.ipynb
prisae/tmp-share
c09713f6860106cdf203b0644077ed276a9412ca
[ "MIT" ]
4
2018-09-27T13:53:18.000Z
2020-02-13T07:14:28.000Z
35.05802
224
0.516258
[ [ [ "# Simple Test between NumPy and Numba\n\n$$\nx = \\exp(-\\Gamma_s d)\n$$", "_____no_output_____" ] ], [ [ "import numba\nimport cython\nimport numexpr\nimport numpy as np\n\n%load_ext cython", "_____no_output_____" ], [ "from empymod import filters\nfrom scipy.constants import mu_0 # Magn. permeability of free space [H/m]\nfrom scipy.constants import epsilon_0 # Elec. permittivity of free space [F/m]\n\nres = np.array([2e14, 0.3, 1, 50, 1]) # nlay\nfreq = np.arange(1, 201)/20. # nfre\noff = np.arange(1, 101)*1000 # noff\nlambd = filters.key_201_2009().base/off[:, None] # nwav\n\naniso = np.array([1, 1, 1.5, 2, 1])\nepermH = np.array([1, 80, 9, 20, 1])\nepermV = np.array([1, 40, 9, 10, 1])\nmpermH = np.array([1, 1, 3, 5, 1])\n\netaH = 1/res + np.outer(2j*np.pi*freq, epermH*epsilon_0)\netaV = 1/(res*aniso*aniso) + np.outer(2j*np.pi*freq, epermV*epsilon_0)\nzetaH = np.outer(2j*np.pi*freq, mpermH*mu_0)\n\nGam = np.sqrt((etaH/etaV)[:, None, :, None] * (lambd*lambd)[None, :, None, :] + (zetaH*etaH)[:, None, :, None])", "_____no_output_____" ] ], [ [ "## NumPy\n\nNumpy version to check result and compare times", "_____no_output_____" ] ], [ [ "def test_numpy(lGam, d):\n return np.exp(-lGam*d)", "_____no_output_____" ] ], [ [ "## Numba @vectorize\n\nThis is exactly the same function as with NumPy, just added the @vectorize decorater.", "_____no_output_____" ] ], [ [ "@numba.vectorize('c16(c16, f8)')\ndef test_numba_vnp(lGam, d):\n return np.exp(-lGam*d)\n\[email protected]('c16(c16, f8)', target='parallel')\ndef test_numba_v(lGam, d):\n return np.exp(-lGam*d)", "_____no_output_____" ] ], [ [ "## Numba @njit", "_____no_output_____" ] ], [ [ "@numba.njit\ndef test_numba_nnp(lGam, d):\n out = np.empty_like(lGam)\n for nf in numba.prange(lGam.shape[0]):\n for no in numba.prange(lGam.shape[1]):\n for ni in numba.prange(lGam.shape[2]):\n out[nf, no, ni] = np.exp(-lGam[nf, no, ni] * d)\n return out\n \[email protected](nogil=True, parallel=True)\ndef test_numba_n(lGam, d):\n out = np.empty_like(lGam)\n for nf in numba.prange(lGam.shape[0]):\n for no in numba.prange(lGam.shape[1]):\n for ni in numba.prange(lGam.shape[2]):\n out[nf, no, ni] = np.exp(-lGam[nf, no, ni] * d)\n return out", "_____no_output_____" ] ], [ [ "## Run comparison for a small and a big matrix", "_____no_output_____" ] ], [ [ "lGam = Gam[:, :, 1, :]\nd = 100\n\n# Output shape\nout_shape = (freq.size, off.size, filters.key_201_2009().base.size)\n\nprint(' Shape Test Matrix ::', out_shape, '; total # elements:: '+str(freq.size*off.size*filters.key_201_2009().base.size))\nprint('------------------------------------------------------------------------------------------')\n\nprint(' NumPy :: ', end='')\n# Get NumPy result for comparison\nnumpy_result = test_numpy(lGam, d)\n# Get runtime\n%timeit test_numpy(lGam, d)\n\nprint(' Numba @vectorize :: ', end='')\n# Ensure it agrees with NumPy\nnumba_vnp_result = test_numba_vnp(lGam, d)\nif not np.allclose(numpy_result, numba_vnp_result, atol=0, rtol=1e-10):\n print('\\n * FAIL, DOES NOT AGREE WITH NumPy RESULT!')\n# Get runtime\n%timeit test_numba_vnp(lGam, d)\n\nprint(' Numba @vectorize par :: ', end='')\n# Ensure it agrees with NumPy\nnumba_v_result = test_numba_v(lGam, d)\nif not np.allclose(numpy_result, numba_v_result, atol=0, rtol=1e-10):\n print('\\n * FAIL, DOES NOT AGREE WITH NumPy RESULT!')\n# Get runtime\n%timeit test_numba_v(lGam, d)\n\nprint(' Numba @njit :: ', end='')\n# Ensure it agrees with NumPy\nnumba_nnp_result = test_numba_nnp(lGam, d)\nif not np.allclose(numpy_result, 
numba_nnp_result, atol=0, rtol=1e-10):\n print('\\n * FAIL, DOES NOT AGREE WITH NumPy RESULT!')\n# Get runtime\n%timeit test_numba_nnp(lGam, d)\n\nprint(' Numba @njit par :: ', end='')\n# Ensure it agrees with NumPy\nnumba_n_result = test_numba_n(lGam, d)\nif not np.allclose(numpy_result, numba_n_result, atol=0, rtol=1e-10):\n print('\\n * FAIL, DOES NOT AGREE WITH NumPy RESULT!')\n# Get runtime\n%timeit test_numba_n(lGam, d)", " Shape Test Matrix :: (200, 100, 201) ; total # elements:: 4020000\n------------------------------------------------------------------------------------------\n NumPy :: 305 ms ± 6.26 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n Numba @vectorize :: 280 ms ± 13.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n Numba @vectorize par :: 236 ms ± 34.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n Numba @njit :: 248 ms ± 11.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n Numba @njit par :: 117 ms ± 704 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)\n" ], [ "from empymod import versions\nversions('HTML', add_pckg=[cython, numba], ncol=5)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]