| Column | Dtype | Min | Max |
|---|---|---|---|
| hexsha | stringlengths | 40 | 40 |
| size | int64 | 6 | 14.9M |
| ext | stringclasses (1 value) | | |
| lang | stringclasses (1 value) | | |
| max_stars_repo_path | stringlengths | 6 | 260 |
| max_stars_repo_name | stringlengths | 6 | 119 |
| max_stars_repo_head_hexsha | stringlengths | 40 | 41 |
| max_stars_repo_licenses | list | | |
| max_stars_count | int64 | 1 | 191k |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24 | 24 |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24 | 24 |
| max_issues_repo_path | stringlengths | 6 | 260 |
| max_issues_repo_name | stringlengths | 6 | 119 |
| max_issues_repo_head_hexsha | stringlengths | 40 | 41 |
| max_issues_repo_licenses | list | | |
| max_issues_count | int64 | 1 | 67k |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24 | 24 |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24 | 24 |
| max_forks_repo_path | stringlengths | 6 | 260 |
| max_forks_repo_name | stringlengths | 6 | 119 |
| max_forks_repo_head_hexsha | stringlengths | 40 | 41 |
| max_forks_repo_licenses | list | | |
| max_forks_count | int64 | 1 | 105k |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24 | 24 |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24 | 24 |
| avg_line_length | float64 | 2 | 1.04M |
| max_line_length | int64 | 2 | 11.2M |
| alphanum_fraction | float64 | 0 | 1 |
| cells | list | | |
| cell_types | list | | |
| cell_type_groups | list | | |
cbf3d0931a424128688ddd8fa046b2b8e5c521fb
9,032
ipynb
Jupyter Notebook
BestSpiderWeb.ipynb
LorenzoTinfena/BestSpiderWeb
3c911808a96bafbe4d75ac95f2f9430681b6cebc
[ "MIT" ]
null
null
null
BestSpiderWeb.ipynb
LorenzoTinfena/BestSpiderWeb
3c911808a96bafbe4d75ac95f2f9430681b6cebc
[ "MIT" ]
null
null
null
BestSpiderWeb.ipynb
LorenzoTinfena/BestSpiderWeb
3c911808a96bafbe4d75ac95f2f9430681b6cebc
[ "MIT" ]
null
null
null
37.477178
312
0.500554
[ [ [ "<a href=\"https://colab.research.google.com/github/LorenzoTinfena/BestSpiderWeb/blob/master/BestSpiderWeb.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# BestSpiderWeb\n# Problem\nA city without roads has a wheat producer, an egg producer and a hotel.\nThe mayor also wants to build a pasta producer and a restaurant in the future. He also wants to build roads like in the picture, so that the producer can easily take the wheat and eggs to make pasta, and the restaurant can easily buy pasta, welcome hotel people, and buy eggs for other preparations.\n\n<img src=\"https://github.com/LorenzoTinfena/BestSpiderWeb/blob/master/assets/city0.png?raw=1\" width=\"300\"/>\n\n**Goal:** to build roads costs, you have to make them as short as possible.\n\n<img src=\"https://github.com/LorenzoTinfena/BestSpiderWeb/blob/master/assets/city1.png?raw=1\" width=\"300\"/>\n\n\n---\n\n\n**In other words:** In an Euclidean space there is a graph with constant edges and with 2 types of nodes, one with constant coordinates, the other with a variable coordinates.\n\n**Goal:** To find the positions of the variable nodes in order to have the smaller sum of the length of the edges\n", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "# Solution\n$$\nN_0[c] = \\sum_{i \\in N}\\sum_{v \\in P_{N_0 \\longleftrightarrow i}}\\frac{\\sum O_i[c]}{v}\n$$\nwhere\n* $$N_0$$\n\n* $$c$$\ncoordinates\n* $$N$$\nset of nodes with variable coordinates reachable from N with 0 passing only through nodes belonging to N\n* $$O$$\nset of nodes with constant coordinates\n* $$O_i$$\nset of nodes belonging to \"O\" adjacent to \"i\"\n* $$P_{N_0 \\rightarrow i}$$\nset of all possible paths (infinite for lenght of \"N\" greater than 1\") between node \"N with 0\" to node \"i\",\npassing only through nodes belonging to N\n* $$v$$\nOr path, is a multiplication of the number of edges for all the nodes it crosses, \"N with 0\" included, \"i\" included,\n(e.g. 
if it starts from a node that has 7 adjacent edges, then goes through one that has 2,\nand ends up with one having 3, the calculation will be 7 * 2 * 3 = 42", "_____no_output_____" ], [ "# Implementation", "_____no_output_____" ] ], [ [ "import numpy as np\n\n\nclass Node:\n NoCoordinates = None\n def __init__(self, coordinates: np.ndarray = None):\n self.AdjacentNodes = []\n if coordinates is None:\n self.Constant = False\n else:\n if len(coordinates) != Node.NoCoordinates:\n raise Exception('wrong number of coordinates')\n self.Coordinates = coordinates\n self.Constant = True\n\n def AddAdjacentNode(self, item: 'Node'):\n self.AdjacentNodes.append(item)\n\n class _VirtualNode:\n def __init__(self, nodeBase: 'Node' = None):\n if nodeBase is not None:\n self.ActualNode = nodeBase\n self.SumConstantNodes = np.zeros(Node.NoCoordinates)\n for item in nodeBase.AdjacentNodes:\n if item.Constant:\n self.SumConstantNodes += item.Coordinates\n self.NumTmpPath = len(nodeBase.AdjacentNodes)\n\n def Copy(self, actualNode: 'Node') -> '_VirtualNode':\n item = Node._VirtualNode()\n item.ActualNode = actualNode\n item.SumConstantNodes = self.SumConstantNodes\n item.NumTmpPath = self.NumTmpPath * len(actualNode.AdjacentNodes)\n return item\n\n\ndef ComputeBestSpiderWeb(variablesNodes: list):\n # initialize coordinates of variables nodes\n for item in variablesNodes:\n item.Coordinates = np.zeros(Node.NoCoordinates)\n\n # initialize virtual nodes\n _VirtualNodes = []\n for item in variablesNodes:\n _VirtualNodes.append(Node._VirtualNode(item))\n\n # ALGORITHM\n # more iterations means more accuracy (exponential)\n for i in range(40):\n next_VirtualNodes = []\n # iterate through all variables virtual nodes\n for item in _VirtualNodes:\n # update the coordinates of the actual node\n item.ActualNode.Coordinates += item.SumConstantNodes / item.NumTmpPath\n # iterate through adjacent nodes of the actual node\n for AdjacentItem in item.ActualNode.AdjacentNodes:\n # if the adjacent node is variable add it in a new virtual node (like a tree)\n if not AdjacentItem.Constant:\n next_VirtualNodes.append(item.Copy(AdjacentItem))\n _VirtualNodes = next_VirtualNodes", "_____no_output_____" ], [ "def main():\n Node.NoCoordinates = 2\n\n # constant nodes\n Wheat = Node(np.array([0, 0]))\n eggs = Node(np.array([5, 40]))\n hotel = Node(np.array([50, 10]))\n\n # variables nodes\n pastaProducer = Node()\n restaurant = Node()\n\n # define edges\n pastaProducer.AddAdjacentNode(Wheat)\n pastaProducer.AddAdjacentNode(eggs)\n pastaProducer.AddAdjacentNode(restaurant)\n restaurant.AddAdjacentNode(pastaProducer)\n restaurant.AddAdjacentNode(eggs)\n restaurant.AddAdjacentNode(hotel)\n\n ComputeBestSpiderWeb([pastaProducer, restaurant])\n print('pastaProducer: ' + str(pastaProducer.Coordinates))\n print('restaurant: ' + str(restaurant.Coordinates))\n\n\nif __name__ == '__main__':\n main()", "pastaProducer: [ 8.75 21.25]\nrestaurant: [21.25 23.75]\n" ] ], [ [ "<img src=\"https://github.com/LorenzoTinfena/BestSpiderWeb/blob/master/assets/example.png?raw=1\" width=\"500\"/>", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ] ]
cbf3e652dba6a170fbe0f3c097c8e7b2e7009828
11,044
ipynb
Jupyter Notebook
week1-python/day2.ipynb
scottshepard/bbi-training
27fb0a9c6ca77bb363b4e55defe7004cec58dffe
[ "MIT" ]
null
null
null
week1-python/day2.ipynb
scottshepard/bbi-training
27fb0a9c6ca77bb363b4e55defe7004cec58dffe
[ "MIT" ]
null
null
null
week1-python/day2.ipynb
scottshepard/bbi-training
27fb0a9c6ca77bb363b4e55defe7004cec58dffe
[ "MIT" ]
null
null
null
22.356275
497
0.485422
[ [ [ "# Day 2 - Conditionals", "_____no_output_____" ] ], [ [ "x = 5\nif x > 2:\n print('Bigger than 2')", "Bigger than 2\n" ], [ "for i in range(5):\n print(i)\n if i > 2:\n print('Bigger than 2')", "0\n1\n2\n3\nBigger than 2\n4\nBigger than 2\n" ], [ "x is None", "_____no_output_____" ] ], [ [ "# Labs", "_____no_output_____" ], [ "## Lab 1 Excercise 1\n\n\nWrite a program to prompt the user for hours and rate per hour and compute gross pay (i.e. gross pay = hrs x rate). \n<input prompt>\tEnter Hours: 35 (example input) \n<input prompt>\tEnter Rate ($ per hr): 2.75 (example input) \n\n<output>\t\tPay: $96.25 (output) \n\n\nVerify your program output for the above example input values.\nSave this into a file named W1-D1-lab1-1.\n\n", "_____no_output_____" ] ], [ [ "hrs = int(input('Hours? '))\nrte = float(input('Pay rate? '))\nprint(hrs * rte)", "Hours? 35\nPay rate? 2.75\n" ] ], [ [ "## Lab 2 Exercise 2\n\nDefine the following variables and assign some appropriate values.\n\ncar_name e.g. “Tesla”, “Toyota”, …\nprice1, price2, price3 e.g. any appropriate $ values\n\n\nDefine another variable called average and assign the computed average of the three prices..\n\n\nOutput showing the car name and the average price in the following format:\n \nOn an average, <car_name> costs around $<average>. \n\nBONUS: Use the following line of code to print it rounded up to decimal places only.\n\t\t avg = “{:.2f}\".format(avg)\n", "_____no_output_____" ] ], [ [ "Tesla = 90\nToyota = 45\navg = (Tesla + Toyota) / 2\nprint(\"On average, cars cost about\", \"{:2f}\".format(avg))", "On average, cars cost about 67.500000\n" ] ], [ [ "## Lab 2 Exercise 1\n\nWrite a program that does the following.\n\nPrompt the user for a new file name. \nEnter: “W1-D1-lab2-1.txt” \nOpen that file with write permission. \nWrite the following text into this file. \n“The rain in Spain stays mainly on the plain! \nDid Eliza get drenched in the rainy plains of Spain?” \nClose the file \n\nSave as (this program file) W1-D1-lab2-1 \nVerify that there is a file “W1-D1-lab2-1.ipynb” (Home view).\n\nExecute the program multiple times & look at the file size. Does it make sense?\n\n", "_____no_output_____" ] ], [ [ "filename = input('Filename to save: ')\nfh = open(filename, 'w')\ntxt = '''The rain in Spain stays mainly on the plain! \nDid Eliza get drenched in the rainy plains of Spain?\n'''\nfh.write(txt)\nfh.close()\n", "Filename to save: W1-D1-lab2-1.txt\n" ] ], [ [ "## Lab 2 Exercise 2\n\nWrite a program that does the following. \n\nPrompt the user for a new file name. \nEnter: “W1-D1-lab2-2.txt” \nOpen that file with append permission. \nfh = open(fname, ‘a’) \nWrite the following text into this file. \n“Happy 2020!” \nClose the file \nRe-open the file for read. \n\nSave as (this program file) W1-D1-lab2-2. \nVerify that there is a file “W1-D1-lab2-2.ipynb” (Home view). \n\nExecute the program multiple times & look at the file size. 
Does it make sense?\n", "_____no_output_____" ] ], [ [ "filename = input(\"File to append: \")\nfh = open(filename, 'a')\nfh.write(\"Happy 2020!\")\nfh.close()", "File to append: W1-D1-lab2-2.txt\n" ], [ "x = \"Mary\"\ny = 'Mary'\nprint(id(x))\nprint(id(y))", "140341284059992\n140341284059992\n" ], [ "x = 3.14\ny = 3.14\nz = 3.14\nprint(id(x))\nprint(id(y))\nprint(id(z))", "140341284327904\n140341284327688\n140341284327544\n" ] ], [ [ "# Lists", "_____no_output_____" ] ], [ [ "x = [1, 2, 3]\ny = x\ny[2] = 4\nx", "_____no_output_____" ] ], [ [ "# Quiz", "_____no_output_____" ] ], [ [ "myDict = dict()\nmyDict['stuff']", "_____no_output_____" ], [ "myDict.get('stuff', -1)", "_____no_output_____" ], [ "myDict['a'] = 1\nmyDict['b'] = 2\nfor i in myDict:\n print(i)", "a\nb\n" ], [ "fruit = 'Banana'\nfruit[0] = 'b'", "_____no_output_____" ], [ "a = [1,2,3]\nb = [4,5,6]\na + b", "_____no_output_____" ], [ "bdfl = [\"Rossum\", \"Guido\", \"van\"]\nbdfl.sort()\nbdfl[0]", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
cbf3f10051b7df698256e07b3dbaa61081651b04
42,955
ipynb
Jupyter Notebook
Analysis/0.1_getFollowerData_500players_20200603.ipynb
dreandes/LinearRegression_PROJECT
2fe2aa95737c76029c9276ee60c5fc6409d1cf4a
[ "MIT" ]
null
null
null
Analysis/0.1_getFollowerData_500players_20200603.ipynb
dreandes/LinearRegression_PROJECT
2fe2aa95737c76029c9276ee60c5fc6409d1cf4a
[ "MIT" ]
null
null
null
Analysis/0.1_getFollowerData_500players_20200603.ipynb
dreandes/LinearRegression_PROJECT
2fe2aa95737c76029c9276ee60c5fc6409d1cf4a
[ "MIT" ]
null
null
null
38.455685
1,668
0.485648
[ [ [ "from selenium import webdriver\nfrom selenium.webdriver.common.keys import Keys\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\nfrom selenium.webdriver.common.by import By\nBe\ndataFrame= pd.DataFrame(columns=['Name', 'Values'])\nfor i in range(1,20+1):\n url = 'https://www.transfermarkt.com/spieler-statistik/wertvollstespieler/marktwertetop?ajax=yw1&page=' + str(i)\n \n\n options = webdriver.ChromeOptions()\n options.add_argument('headless')\n chrome_driver = r'C:\\Users\\Gk\\Desktop\\DSS\\install\\chromedriver_win32\\chromedriver.exe'\n driver = webdriver.Chrome(chrome_driver, options=options)\n driver.implicitly_wait(3)\n driver.get(url)\n \n src = driver.page_source\n \n driver.close()\n \n resp = BeautifulSoup(src, \"html.parser\")\n values_data = resp.select('table')\n table_html = str(values_data)\n num = 0\n name = ' '\n value = ' '\n for index, row in pd.read_html(table_html)[1].iterrows():\n if index%3 == 0:\n num = row['#']\n value = row['Market value']\n elif index%3 == 1:\n name = row['Player']\n else : \n dataFrame.loc[num] = [name, value]\n\ndataFrame", "_____no_output_____" ], [ "ul = dataFrame['Name'].tolist()", "_____no_output_____" ], [ "dataFrame.to_csv('userlist.csv', encoding='utf-8-sig')", "_____no_output_____" ], [ "ul[9]", "_____no_output_____" ], [ "#userList = ul[0:50]\n#userList = ul[49:78]\n#userList = ul[79:122]\n#userList = ul[123:178]\n#userList = ul[179:188]\n#userList = ul[189:200]\nuserList = ul[200:300]", "_____no_output_____" ], [ "from selenium import webdriver\nfrom bs4 import BeautifulSoup\nimport requests\nimport re\nfrom selenium.common.exceptions import NoSuchElementException\nfrom selenium.common.exceptions import StaleElementReferenceException\nfrom selenium.common.exceptions import ElementNotInteractableException\n\n#userList = ['chris eriksen', 'cristiano ronaldo', 'lionel messi', 'dreandes']\n\n\ndriver = webdriver.Chrome(r'C:\\Users\\Gk\\Desktop\\DSS\\install\\chromedriver_win32\\chromedriver.exe')\ndriver.get('https://www.instagram.com/')\ndelay = 3\ndriver.implicitly_wait(delay)\n\nid = '' #Instagram ID\npw = '' #Instagram PW\n\ndriver.find_element_by_xpath('//*[@id=\"react-root\"]/section/main/article/div[2]/div[1]/div/form/div[2]/div/label/input').send_keys(id)\ndriver.find_element_by_xpath('//*[@id=\"react-root\"]/section/main/article/div[2]/div[1]/div/form/div[3]/div/label/input').send_keys(pw)\ndriver.find_element_by_xpath('//*[@id=\"react-root\"]/section/main/article/div[2]/div[1]/div/form/div[4]/button').click()\n\ndriver.implicitly_wait(delay)\n\nlistUser = []\nlistFollower = []\n\ndef checkInstaFollowers(user):\n driver.find_element_by_xpath('//*[@id=\"react-root\"]/section/nav/div[2]/div/div/div[2]/input').send_keys(user)\n time.sleep(5)\n driver.find_element_by_xpath('//*[@id=\"react-root\"]/section/nav/div[2]/div/div/div[2]/div[2]/div[2]/div/a[1]/div').click()\n\n r = requests.get(driver.current_url).text\n followers = re.search('\"edge_followed_by\":{\"count\":([0-9]+)}',r).group(1)\n \n if (r.find('\"is_verified\":true')!=-1):\n# print('{} : {}'.format(user, followers))\n listUser.append(user)\n listFollower.append(followers)\n else:\n# print('{} : user not verified'.format(user))\n listUser.append(user)\n listFollower.append('not verified')\n\nfor a in userList:\n try:\n checkInstaFollowers(a)\n except AttributeError:\n print(\"{}'s top search is returned as hashtag. 
Continue to next item.\".format(a))\n listUser.append(a)\n listFollower.append('Hashtag')\n except StaleElementReferenceException:\n print(\"{} called StaleElementReferenceException\".format(a))\n try:\n checkInstaFollowers(a)\n except AttributeError:\n listUser.append(a)\n listFollower.append('SERE/Hashtag')\n except NoSuchElementException:\n print(\"{} called NoSuchElementException\".format(a))\n try:\n checkInstaFollowers(a)\n except AttributeError:\n listUser.append(a)\n listFollower.append('NSEE/Hashtag')\n except ElementNotInteractableException:\n print(\"{} called ElementNotInteractableException\".format(a))\n try:\n checkInstaFollowers(a)\n except AttributeError:\n listUser.append(a)\n listFollower.append('ENIE/Hashtag')\n\ndriver.quit()", "Mason Greenwood's top search is returned as hashtag. Continue to next item.\n" ], [ "ul[208]", "_____no_output_____" ], [ "df_follower = pd.DataFrame(list(zip(listUser, listFollower)), columns=['name', 'follower'])\ndf_follower[df_follower['name']=='Rúben Dias']", "_____no_output_____" ], [ "resDf = resDf.append(df_follower, ignore_index = True)", "_____no_output_____" ], [ "resDf", "_____no_output_____" ], [ "#resDf.to_csv('mktval_inst_data.csv', encoding='utf-8')", "_____no_output_____" ], [ "readdf = pd.read_csv('mktval_inst_data.csv', encoding='utf-8')", "_____no_output_____" ], [ "col = ['name', 'follower']\nreaddf = readdf[col]\nreaddf", "_____no_output_____" ], [ "readdf = readdf.append(df_follower, ignore_index = True)\nreaddf", "_____no_output_____" ], [ "readdf.to_csv('test.csv', encoding='utf-8-sig')", "_____no_output_____" ], [ "readdf.truncate(before=0, after=199)", "_____no_output_____" ], [ "from selenium import webdriver\nfrom bs4 import BeautifulSoup\nimport requests\nimport re\nfrom selenium.common.exceptions import NoSuchElementException\nfrom selenium.common.exceptions import StaleElementReferenceException\nfrom selenium.common.exceptions import ElementNotInteractableException\n\n#userList = ['chris eriksen', 'cristiano ronaldo', 'lionel messi', 'dreandes']\n\n\ndriver = webdriver.Chrome(r'C:\\Users\\Gk\\Desktop\\DSS\\install\\chromedriver_win32\\chromedriver.exe')\ndriver.get('https://www.instagram.com/')\ndelay = 3\ndriver.implicitly_wait(delay)\n\nid = '' #Instagram ID\npw = '' #Instagram PW\n\ndriver.find_element_by_xpath('//*[@id=\"react-root\"]/section/main/article/div[2]/div[1]/div/form/div[2]/div/label/input').send_keys(id)\ndriver.find_element_by_xpath('//*[@id=\"react-root\"]/section/main/article/div[2]/div[1]/div/form/div[3]/div/label/input').send_keys(pw)\ndriver.find_element_by_xpath('//*[@id=\"react-root\"]/section/main/article/div[2]/div[1]/div/form/div[4]/button').click()\n\ndriver.implicitly_wait(delay)\n\nlistUser = []\nlistFollower = []\n\ndef checkInstaFollowers(user):\n driver.find_element_by_xpath('//*[@id=\"react-root\"]/section/nav/div[2]/div/div/div[2]/input').send_keys(user)\n time.sleep(5)\n driver.find_element_by_xpath('//*[@id=\"react-root\"]/section/nav/div[2]/div/div/div[2]/div[2]/div[2]/div/a[1]/div').click()\n\n r = requests.get(driver.current_url).text\n followers = re.search('\"edge_followed_by\":{\"count\":([0-9]+)}',r).group(1)\n \n if (r.find('\"is_verified\":true')!=-1):\n# print('{} : {}'.format(user, followers))\n listUser.append(user)\n listFollower.append(followers)\n else:\n# print('{} : user not verified'.format(user))\n listUser.append(user)\n listFollower.append('not verified')\n\nn = 1\n\n\nfor a in userList:\n try:\n checkInstaFollowers(a)\n except AttributeError:\n 
print(\"{}'s top search is returned as hashtag. Continue to next item.\".format(a))\n listUser.append(a)\n listFollower.append('Hashtag')\n except StaleElementReferenceException:\n print(\"{} called StaleElementReferenceException\".format(a))\n try:\n checkInstaFollowers(a)\n except AttributeError:\n listUser.append(a)\n listFollower.append('SERE/Hashtag')\n except NoSuchElementException:\n print(\"{} called NoSuchElementException\".format(a))\n try:\n checkInstaFollowers(a)\n except AttributeError:\n listUser.append(a)\n listFollower.append('NSEE/Hashtag')\n except ElementNotInteractableException:\n print(\"{} called ElementNotInteractableException\".format(a))\n try:\n checkInstaFollowers(a)\n except AttributeError:\n listUser.append(a)\n listFollower.append('ENIE/Hashtag')\n\ndriver.quit()", "_____no_output_____" ], [ "from selenium import webdriver\nfrom bs4 import BeautifulSoup\nimport requests\nimport re\nfrom selenium.common.exceptions import NoSuchElementException\nfrom selenium.common.exceptions import StaleElementReferenceException\nfrom selenium.common.exceptions import ElementNotInteractableException\n\nlistUser = []\nlistFollower = []\n\ndef checkInstaFollowers(user):\n\n try: \n driver.find_element_by_xpath('//*[@id=\"react-root\"]/section/nav/div[2]/div/div/div[2]/input').send_keys(user)\n time.sleep(5)\n driver.find_element_by_xpath('//*[@id=\"react-root\"]/section/nav/div[2]/div/div/div[2]/div[2]/div[2]/div/a[1]/div').click()\n\n r = requests.get(driver.current_url).text\n followers = re.search('\"edge_followed_by\":{\"count\":([0-9]+)}',r).group(1)\n\n except AttributeError:\n print(\"{}'s top search is returned as hashtag. Continue to next item.\".format(a))\n listUser.append(a)\n listFollower.append('Hashtag')\n except StaleElementReferenceException:\n print(\"{} called StaleElementReferenceException\".format(a))\n try:\n checkInstaFollowers(a)\n except AttributeError:\n listUser.append(a)\n listFollower.append('SERE/Hashtag')\n except NoSuchElementException:\n print(\"{} called NoSuchElementException\".format(a))\n try:\n checkInstaFollowers(a)\n except AttributeError:\n listUser.append(a)\n listFollower.append('NSEE/Hashtag')\n except ElementNotInteractableException:\n print(\"{} called ElementNotInteractableException\".format(a))\n try:\n checkInstaFollowers(a)\n except AttributeError:\n listUser.append(a)\n listFollower.append('ENIE/Hashtag')\n \n else:\n if (r.find('\"is_verified\":true')!=-1):\n # print('{} : {}'.format(user, followers))\n listUser.append(user)\n listFollower.append(followers)\n else:\n # print('{} : user not verified'.format(user))\n listUser.append(user)\n listFollower.append('not verified')\n \n# finally:\n# driver.quit()\n \n \nfor a in range(1):\n \n driver = webdriver.Chrome(r'C:\\Users\\Gk\\Desktop\\DSS\\install\\chromedriver_win32\\chromedriver.exe')\n driver.get('https://www.instagram.com/')\n delay = 3\n driver.implicitly_wait(delay)\n\n id = '' #Instagram ID\n pw = '' #Instagram PW\n\n driver.find_element_by_xpath('//*[@id=\"react-root\"]/section/main/article/div[2]/div[1]/div/form/div[2]/div/label/input').send_keys(id)\n driver.find_element_by_xpath('//*[@id=\"react-root\"]/section/main/article/div[2]/div[1]/div/form/div[3]/div/label/input').send_keys(pw)\n driver.find_element_by_xpath('//*[@id=\"react-root\"]/section/main/article/div[2]/div[1]/div/form/div[4]/button').click()\n\n driver.implicitly_wait(delay)\n \n for b in range(10):\n# print('(a*10)+b = {}, a={}, b={}'.format(((a*10) + b), a, b))\n num = (a*10) + b\n userName = 
ul[num]\n checkInstaFollowers(userName)\n# print('==============================================')\n driver.quit()\n\ndf_follower = pd.DataFrame(list(zip(listUser, listFollower)), columns=['name', 'follower'])\ndf_follower", "0 called StaleElementReferenceException\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbf40407bb0d39797ad6840acf82546a38f8dba5
24,890
ipynb
Jupyter Notebook
examples/Highlight_Function.ipynb
artnikitin/folium
4a7532d34c22532167d128dddd07b1578ed08f37
[ "MIT" ]
3
2020-01-08T18:30:07.000Z
2021-07-25T06:54:32.000Z
examples/Highlight_Function.ipynb
artnikitin/folium
4a7532d34c22532167d128dddd07b1578ed08f37
[ "MIT" ]
1
2020-05-21T11:13:30.000Z
2020-05-21T11:13:30.000Z
examples/Highlight_Function.ipynb
artnikitin/folium
4a7532d34c22532167d128dddd07b1578ed08f37
[ "MIT" ]
7
2019-07-21T03:30:26.000Z
2021-12-14T04:41:27.000Z
115.231481
19,180
0.864805
[ [ [ "import os\nimport folium\n\nprint(folium.__version__)", "0.6.0+4.gcecbc85.dirty\n" ], [ "import pandas as pd\n\n\ndf = pd.read_csv(\n os.path.join('data', 'highlight_flight_trajectories.csv')\n)", "_____no_output_____" ] ], [ [ "Let us take a glance at the data.\nEach row represents the trajectory of a flight,\nand the last column contains the coordinates of the flight path in `GeoJSON` format.", "_____no_output_____" ] ], [ [ "df", "_____no_output_____" ], [ "m = folium.Map(\n location=[40, 10],\n zoom_start=4,\n control_scale=True,\n prefer_canvas=True\n)\n\n\ndef style_function(feature):\n return {\n 'fillColor': '#ffaf00',\n 'color': 'blue',\n 'weight': 1.5,\n 'dashArray': '5, 5'\n }\n\n\ndef highlight_function(feature):\n return {\n 'fillColor': '#ffaf00',\n 'color': 'green',\n 'weight': 3,\n 'dashArray': '5, 5'\n }\n\n\nfor index, row in df.iterrows():\n c = folium.GeoJson(\n row['geojson'],\n name=('{}{}'.format(row['dep'], row['dest'])),\n overlay=True,\n style_function=style_function,\n highlight_function=highlight_function\n )\n folium.Popup('{}\\n{}'.format(row['dep'], row['dest'])).add_to(c)\n c.add_to(m)\n\nfolium.LayerControl().add_to(m)\nm.save(os.path.join('results', 'Highlight_Function.html'))\n\nm", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
cbf40b216e385249f41f7ee6393d54463197482d
67,218
ipynb
Jupyter Notebook
examples/dagster_examples/airline_demo/notebooks/Fares_vs_Delays.ipynb
flowersw/dagster
0de6baf2bd6a41bfacf0be532b954e23305fb6b4
[ "Apache-2.0" ]
4,606
2018-06-21T17:45:20.000Z
2022-03-31T23:39:42.000Z
examples/dagster_examples/airline_demo/notebooks/Fares_vs_Delays.ipynb
flowersw/dagster
0de6baf2bd6a41bfacf0be532b954e23305fb6b4
[ "Apache-2.0" ]
6,221
2018-06-12T04:36:01.000Z
2022-03-31T21:43:05.000Z
examples/dagster_examples/airline_demo/notebooks/Fares_vs_Delays.ipynb
flowersw/dagster
0de6baf2bd6a41bfacf0be532b954e23305fb6b4
[ "Apache-2.0" ]
619
2018-08-22T22:43:09.000Z
2022-03-31T22:48:06.000Z
184.664835
37,376
0.897855
[ [ [ "import dagstermill", "_____no_output_____" ], [ "from dagster import ModeDefinition, ResourceDefinition\nfrom collections import namedtuple\n\nurl = 'postgresql://{username}:{password}@{hostname}:5432/{db_name}'.format(\n username='test', password='test', hostname='localhost', db_name='test'\n)\nDbInfo = namedtuple('DbInfo', 'url')\ncontext = dagstermill.get_context(\n mode_def=ModeDefinition(\n resource_defs={'db_info': ResourceDefinition(lambda _: DbInfo(url))}\n )\n)\n\ntable_name = 'delays_vs_fares'", "_____no_output_____" ], [ "db_url = context.resources.db_info.url", "_____no_output_____" ], [ "import os\n\nimport sqlalchemy as sa\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\nfrom dagster.utils import mkdir_p", "_____no_output_____" ], [ "engine = sa.create_engine(db_url)", "/Users/max/.virtualenvs/dagster/lib/python3.6/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use \"pip install psycopg2-binary\" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.\n \"\"\")\n" ], [ "from matplotlib.backends.backend_pdf import PdfPages\nplots_path = os.path.join(os.getcwd(), 'plots')\nmkdir_p(plots_path)\npdf_path = os.path.join(plots_path, 'fares_vs_delays.pdf')\npp = PdfPages(pdf_path)", "_____no_output_____" ], [ "fares_vs_delays = pd.read_sql('select * from {table_name}'.format(table_name=table_name), engine)", "_____no_output_____" ], [ "fares_vs_delays.head()", "_____no_output_____" ], [ "fares_vs_delays['avg_arrival_delay'].describe()", "_____no_output_____" ], [ "plt.scatter(fares_vs_delays['avg_arrival_delay'], fares_vs_delays['avg_fare'])\n\ntry:\n z = np.polyfit(fares_vs_delays['avg_arrival_delay'], fares_vs_delays['avg_fare'], 1)\n f = np.poly1d(z)\n\n x_fit = np.linspace(fares_vs_delays['avg_arrival_delay'].min(), fares_vs_delays['avg_arrival_delay'].max(), 50)\n y_fit = f(x_fit)\n plt.plot(x_fit, y_fit, 'k--', alpha=0.5)\nexcept:\n pass\n\nplt.title('Arrival Delays vs. Fares (Origin SFO)')\nplt.xlabel('Average Delay at Arrival (Minutes)')\nplt.ylabel('Average Fare ($)')\npp.savefig()", "_____no_output_____" ], [ "fig, ax = plt.subplots(figsize=(10,10))\n\nfor i, _ in enumerate(fares_vs_delays.index):\n plt.text(\n fares_vs_delays['avg_arrival_delay'][i],\n fares_vs_delays['avg_fare_per_mile'][i],\n fares_vs_delays['dest'][i],\n fontsize=8)\n\nplt.scatter(fares_vs_delays['avg_arrival_delay'], fares_vs_delays['avg_fare_per_mile'], alpha=0)\nplt.title('Flight Delays (Origin SFO)')\nplt.xlabel('Average Delay at Arrival (Minutes)')\nplt.ylabel('Average Fare per Mile Flown($)')\n\npp.savefig()", "_____no_output_____" ], [ "pp.close()", "_____no_output_____" ], [ "\n", "_____no_output_____" ], [ "from dagster import LocalFileHandle\ndagstermill.yield_result(LocalFileHandle(pdf_path))", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbf40ce966b879438cc92ce2aae5d302508af643
2,106
ipynb
Jupyter Notebook
notebooks/explore_zinc_dataset.ipynb
CS-savvy/Essay-scoring
eaad7c974e59e3eb75669d2a2a64b1ce16cfbeb5
[ "MIT" ]
null
null
null
notebooks/explore_zinc_dataset.ipynb
CS-savvy/Essay-scoring
eaad7c974e59e3eb75669d2a2a64b1ce16cfbeb5
[ "MIT" ]
null
null
null
notebooks/explore_zinc_dataset.ipynb
CS-savvy/Essay-scoring
eaad7c974e59e3eb75669d2a2a64b1ce16cfbeb5
[ "MIT" ]
null
null
null
24.206897
113
0.490978
[ [ [ "import numpy as np\nimport torch\nimport pickle\nimport time\nimport os\n%matplotlib inline\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "if not os.path.isfile('Dataset/molecules.zip'):\n print('downloading..')\n !curl https://www.dropbox.com/s/feo9qle74kg48gy/molecules.zip?dl=1 -o Dataset/molecules.zip -J -L -k\n !unzip Dataset/molecules.zip -d Dataset/\n # !tar -xvf molecules.zip -C ../\nelse:\n print('File already downloaded')", "downloading..\n % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n100 137 0 137 0 0 186 0 --:--:-- --:--:-- --:--:-- 185\n100 320 100 320 0 0 253 0 0:00:01 0:00:01 --:--:-- 253\n 42 366M 42 156M 0 0 5073k 0 0:01:13 0:00:31 0:00:42 5571k" ] ] ]
[ "code" ]
[ [ "code", "code" ] ]
cbf410bb49e85b665e9031fa9bf98938c2cf1755
9,265
ipynb
Jupyter Notebook
Pandas Data Series/main.ipynb
AkashKumarSingh11032001/Pandas-Practise-Problems
b4c0c57e51e25117a750ac4d7fd24868009cfbe3
[ "MIT" ]
null
null
null
Pandas Data Series/main.ipynb
AkashKumarSingh11032001/Pandas-Practise-Problems
b4c0c57e51e25117a750ac4d7fd24868009cfbe3
[ "MIT" ]
null
null
null
Pandas Data Series/main.ipynb
AkashKumarSingh11032001/Pandas-Practise-Problems
b4c0c57e51e25117a750ac4d7fd24868009cfbe3
[ "MIT" ]
null
null
null
21.298851
140
0.435186
[ [ [ "# Pandas Data Series [40 exercises]", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np", "_____no_output_____" ], [ "#1 Write a Pandas program to create and display a one-dimensional array-like object containing an array of data using Pandas module\ndata = pd.Series([9,8,7,6,5,4,3,2,1,])\nprint(data)\n", "0 9\n1 8\n2 7\n3 6\n4 5\n5 4\n6 3\n7 2\n8 1\ndtype: int64\n" ], [ "# 2. Write a Pandas program to convert a Panda module Series to Python list and it's type.\ndata = pd.Series([9,8,7,6,5,4,3,2,1,])\nprint(type(data))\nlis = data.tolist()\nprint(type(lis))", "<class 'pandas.core.series.Series'>\n[9, 8, 7, 6, 5, 4, 3, 2, 1]\n<class 'list'>\n" ], [ "# 3. Write a Pandas program to add, subtract, multiple and divide two Pandas Series\n\ndata1 = pd.Series([2, 4, 6, 8, 10])\ndata2 = pd.Series([1, 3, 5, 7, 9])\n\nprint(data1 + data2, data1 - data2, data1 * data2 ,data1 / data2)", "0 3\n1 7\n2 11\n3 15\n4 19\ndtype: int64 0 1\n1 1\n2 1\n3 1\n4 1\ndtype: int64 0 2\n1 12\n2 30\n3 56\n4 90\ndtype: int64 0 2.000000\n1 1.333333\n2 1.200000\n3 1.142857\n4 1.111111\ndtype: float64\n" ], [ "#4 Write a Pandas program to compare the elements of the two Pandas Series.\ndata1 = pd.Series([2, 4, 6, 8, 10])\ndata2 = pd.Series([1, 3, 5, 7, 10])\n\nprint(\"Equal : \")\nprint(data1 == data2)\n\nprint(\"greater : \")\nprint(data1 > data2)\n\nprint(\"lesser : \")\nprint(data1 < data2)\n", "Equal : \n0 False\n1 False\n2 False\n3 False\n4 True\ndtype: bool\ngreater : \n0 True\n1 True\n2 True\n3 True\n4 False\ndtype: bool\nlesser : \n0 False\n1 False\n2 False\n3 False\n4 False\ndtype: bool\n" ], [ "#5 Write a Pandas program to convert a dictionary to a Pandas series\ndic = {'a': 100, 'b': 200, 'c': 300, 'd': 400, 'e': 800}\nser = pd.Series(dic)\nprint(ser)", "a 100\nb 200\nc 300\nd 400\ne 800\ndtype: int64\n" ], [ "# 6. Write a Pandas program to convert a NumPy array to a Pandas series\nnp_arr = np.array([10, 20, 30, 40, 50])\nser = pd.Series(np_arr)\nprint(ser)", "0 10\n1 20\n2 30\n3 40\n4 50\ndtype: int32\n" ], [ "# 7. Write a Pandas program to change the data type of given a column or a Series.\n\ndata = pd.Series([100,200,'python',300.12,400])\ndata = pd.to_numeric(data,errors='coerce')\nprint(data)", "0 100.00\n1 200.00\n2 NaN\n3 300.12\n4 400.00\ndtype: float64\n" ], [ "# 8. Write a Pandas program to convert the first column of a DataFrame as a Series.\n\nd = {'col1': [1, 2, 3, 4, 7, 11], 'col2': [4, 5, 6, 9, 5, 0], 'col3': [7, 5, 8, 12, 1,11]}\ndf = pd.DataFrame(data=d)\n\nprint(pd.Series(df['col1']))", "0 1\n1 2\n2 3\n3 4\n4 7\n5 11\nName: col1, dtype: int64\n" ], [ "# 9. Write a Pandas program to convert a given Series to an array.\n\ndata = pd.Series([100,200,'python',300.12,400])\n\nprint(np.array(data.tolist()))", "['100' '200' 'python' '300.12' '400']\n" ], [ "# 10. Write a Pandas program to convert Series of lists to one Series\n\ns = pd.Series([\n ['Red', 'Green', 'White'],\n ['Red', 'Black'],\n ['Yellow']])\n\nprint(s.apply(pd.Series).stack().reset_index(drop=True))", "0 Red\n1 Green\n2 White\n3 Red\n4 Black\n5 Yellow\ndtype: object\n" ], [ "# 11. Write a Pandas program to sort a given Series. \ndata = pd.Series(['100', '200', 'python', '300.12', '400'])\ndata.sort_values()", "_____no_output_____" ], [ "# 12. Write a Pandas program to add some data to an existing Series.\ndata = pd.Series(['100', '200', 'python', '300.12', '400'])\ndata = data.append(pd.Series(['500','php']))\nprint(data)", "0 100\n1 200\n2 python\n3 300.12\n4 400\n0 500\n1 php\ndtype: object\n" ], [ "# 13. 
Write a Pandas program to create a subset of a given series based on value and condition. \n\ndata = pd.Series([0,1,2,3,4,5,6,7,8,9])\ndata = data[data < 6]\nprint(data)", "0 0\n1 1\n2 2\n3 3\n4 4\n5 5\ndtype: int64\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbf42438a5eb2718ac479f597f6c7afd3612c280
2,086
ipynb
Jupyter Notebook
220-heighway-dragon.ipynb
arkeros/projecteuler
c95db97583034af8fc61d5786692d82eabe50c12
[ "MIT" ]
2
2017-02-19T12:37:13.000Z
2021-01-19T04:58:09.000Z
220-heighway-dragon.ipynb
arkeros/projecteuler
c95db97583034af8fc61d5786692d82eabe50c12
[ "MIT" ]
null
null
null
220-heighway-dragon.ipynb
arkeros/projecteuler
c95db97583034af8fc61d5786692d82eabe50c12
[ "MIT" ]
4
2018-01-05T14:29:09.000Z
2020-01-27T13:37:40.000Z
37.25
340
0.535475
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
cbf42969525517d43e821b97e6cda2246f9a0251
114,838
ipynb
Jupyter Notebook
pysal/spatial_dynamics/interaction.ipynb
Aniq55/pysal
2a9218feb81880dd8e00100f4619fd4316f28c36
[ "BSD-3-Clause" ]
null
null
null
pysal/spatial_dynamics/interaction.ipynb
Aniq55/pysal
2a9218feb81880dd8e00100f4619fd4316f28c36
[ "BSD-3-Clause" ]
null
null
null
pysal/spatial_dynamics/interaction.ipynb
Aniq55/pysal
2a9218feb81880dd8e00100f4619fd4316f28c36
[ "BSD-3-Clause" ]
1
2021-07-19T01:46:17.000Z
2021-07-19T01:46:17.000Z
27.665141
87
0.2344
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
cbf42bc37771dab5096a9e95a868afb01b7f1b30
44,853
ipynb
Jupyter Notebook
introduction_to_applying_machine_learning/xgboost_customer_churn/xgboost_customer_churn.ipynb
vikramelango/amazon-sagemaker-examples
9a3b8de17c253fc18fc089120885afc6ff36111d
[ "Apache-2.0" ]
null
null
null
introduction_to_applying_machine_learning/xgboost_customer_churn/xgboost_customer_churn.ipynb
vikramelango/amazon-sagemaker-examples
9a3b8de17c253fc18fc089120885afc6ff36111d
[ "Apache-2.0" ]
1
2022-03-15T20:04:30.000Z
2022-03-15T20:04:30.000Z
introduction_to_applying_machine_learning/xgboost_customer_churn/xgboost_customer_churn.ipynb
vivekmadan2/amazon-sagemaker-examples
4ccb050067c5305a50db750df3444dbc85600d5f
[ "Apache-2.0" ]
1
2022-03-19T17:04:30.000Z
2022-03-19T17:04:30.000Z
34.932243
775
0.613827
[ [ [ "# Customer Churn Prediction with XGBoost\n_**Using Gradient Boosted Trees to Predict Mobile Customer Departure**_\n\n---\n\n---\n\n## Runtime\n\nThis notebook takes approximately 8 minutes to run.\n\n## Contents\n\n1. [Background](#Background)\n1. [Setup](#Setup)\n1. [Data](#Data)\n1. [Train](#Train)\n1. [Host](#Host)\n 1. [Evaluate](#Evaluate)\n 1. [Relative cost of errors](#Relative-cost-of-errors)\n1. [Extensions](#Extensions)\n\n---\n\n## Background\n\n_This notebook has been adapted from an [AWS blog post](https://aws.amazon.com/blogs/ai/predicting-customer-churn-with-amazon-machine-learning/)_\n\nLosing customers is costly for any business. Identifying unhappy customers early on gives you a chance to offer them incentives to stay. This notebook describes using machine learning (ML) for the automated identification of unhappy customers, also known as customer churn prediction. ML models rarely give perfect predictions though, so this notebook is also about how to incorporate the relative costs of prediction mistakes when determining the financial outcome of using ML.\n\nWe use a familiar example of churn: leaving a mobile phone operator. Seems like one can always find fault with their provider du jour! And if the provider knows that a customer is thinking of leaving, it can offer timely incentives - such as a phone upgrade or perhaps having a new feature activated – and the customer may stick around. Incentives are often much more cost-effective than losing and reacquiring a customer.\n\n---\n\n## Setup\n\n_This notebook was created and tested on a `ml.m4.xlarge` notebook instance._\n\nLet's start by updating the required packages i.e. SageMaker Python SDK, `pandas` and `numpy`, and specifying:\n\n- The S3 bucket and prefix that you want to use for training and model data. This should be within the same region as the Notebook Instance or Studio, training, and hosting.\n- The IAM role ARN used to give training and hosting access to your data. See the documentation for how to create these. Note: if more than one role is required for notebook instances, training, and/or hosting, please replace the boto regexp with the appropriate full IAM role ARN string(s).", "_____no_output_____" ] ], [ [ "import sys\n\n!{sys.executable} -m pip install sagemaker pandas numpy --upgrade", "_____no_output_____" ], [ "import sagemaker\n\nsess = sagemaker.Session()\nbucket = sess.default_bucket()\nprefix = \"sagemaker/DEMO-xgboost-churn\"\n\n# Define IAM role\nimport boto3\nimport re\nfrom sagemaker import get_execution_role\n\nrole = get_execution_role()", "_____no_output_____" ] ], [ [ "Next, we'll import the Python libraries we'll need for the remainder of the example.", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport io\nimport os\nimport sys\nimport time\nimport json\nfrom IPython.display import display\nfrom time import strftime, gmtime\nfrom sagemaker.inputs import TrainingInput\nfrom sagemaker.serializers import CSVSerializer", "_____no_output_____" ] ], [ [ "---\n## Data\n\nMobile operators have historical records on which customers ultimately ended up churning and which continued using the service. We can use this historical information to construct an ML model of one mobile operator’s churn using a process called training. 
After training the model, we can pass the profile information of an arbitrary customer (the same profile information that we used to train the model) to the model, and have the model predict whether this customer is going to churn. Of course, we expect the model to make mistakes. After all, predicting the future is tricky business! But we'll learn how to deal with prediction errors.\n\nThe dataset we use is publicly available and was mentioned in the book [Discovering Knowledge in Data](https://www.amazon.com/dp/0470908742/) by Daniel T. Larose. It is attributed by the author to the University of California Irvine Repository of Machine Learning Datasets. Let's download and read that dataset in now:", "_____no_output_____" ] ], [ [ "s3 = boto3.client(\"s3\")\ns3.download_file(f\"sagemaker-sample-files\", \"datasets/tabular/synthetic/churn.txt\", \"churn.txt\")", "_____no_output_____" ], [ "churn = pd.read_csv(\"./churn.txt\")\npd.set_option(\"display.max_columns\", 500)\nchurn", "_____no_output_____" ], [ "len(churn.columns)", "_____no_output_____" ] ], [ [ "By modern standards, it’s a relatively small dataset, with only 5,000 records, where each record uses 21 attributes to describe the profile of a customer of an unknown US mobile operator. The attributes are:\n\n- `State`: the US state in which the customer resides, indicated by a two-letter abbreviation; for example, OH or NJ\n- `Account Length`: the number of days that this account has been active\n- `Area Code`: the three-digit area code of the corresponding customer’s phone number\n- `Phone`: the remaining seven-digit phone number\n- `Int’l Plan`: whether the customer has an international calling plan: yes/no\n- `VMail Plan`: whether the customer has a voice mail feature: yes/no\n- `VMail Message`: the average number of voice mail messages per month\n- `Day Mins`: the total number of calling minutes used during the day\n- `Day Calls`: the total number of calls placed during the day\n- `Day Charge`: the billed cost of daytime calls\n- `Eve Mins, Eve Calls, Eve Charge`: the billed cost for calls placed during the evening\n- `Night Mins`, `Night Calls`, `Night Charge`: the billed cost for calls placed during nighttime\n- `Intl Mins`, `Intl Calls`, `Intl Charge`: the billed cost for international calls\n- `CustServ Calls`: the number of calls placed to Customer Service\n- `Churn?`: whether the customer left the service: true/false\n\nThe last attribute, `Churn?`, is known as the target attribute: the attribute that we want the ML model to predict. Because the target attribute is binary, our model will be performing binary prediction, also known as binary classification.\n\nLet's begin exploring the data:", "_____no_output_____" ] ], [ [ "# Frequency tables for each categorical feature\nfor column in churn.select_dtypes(include=[\"object\"]).columns:\n display(pd.crosstab(index=churn[column], columns=\"% observations\", normalize=\"columns\"))\n\n# Histograms for each numeric features\ndisplay(churn.describe())\n%matplotlib inline\nhist = churn.hist(bins=30, sharey=True, figsize=(10, 10))", "_____no_output_____" ] ], [ [ "We can see immediately that:\n- `State` appears to be quite evenly distributed.\n- `Phone` takes on too many unique values to be of any practical use. It's possible that parsing out the prefix could have some value, but without more context on how these are allocated, we should avoid using it.\n- Most of the numeric features are surprisingly nicely distributed, with many showing bell-like `gaussianity`. 
`VMail Message` is a notable exception (and `Area Code` showing up as a feature we should convert to non-numeric).", "_____no_output_____" ] ], [ [ "churn = churn.drop(\"Phone\", axis=1)\nchurn[\"Area Code\"] = churn[\"Area Code\"].astype(object)", "_____no_output_____" ] ], [ [ "Next let's look at the relationship between each of the features and our target variable.", "_____no_output_____" ] ], [ [ "for column in churn.select_dtypes(include=[\"object\"]).columns:\n if column != \"Churn?\":\n display(pd.crosstab(index=churn[column], columns=churn[\"Churn?\"], normalize=\"columns\"))\n\nfor column in churn.select_dtypes(exclude=[\"object\"]).columns:\n print(column)\n hist = churn[[column, \"Churn?\"]].hist(by=\"Churn?\", bins=30)\n plt.show()", "_____no_output_____" ], [ "display(churn.corr())\npd.plotting.scatter_matrix(churn, figsize=(12, 12))\nplt.show()", "_____no_output_____" ] ], [ [ "We see several features that essentially have 100% correlation with one another. Including these feature pairs in some machine learning algorithms can create catastrophic problems, while in others it will only introduce minor redundancy and bias. Let's remove one feature from each of the highly correlated pairs: `Day Charge` from the pair with `Day Mins`, `Night Charge` from the pair with `Night Mins`, `Intl Charge` from the pair with `Intl Mins`:", "_____no_output_____" ] ], [ [ "churn = churn.drop([\"Day Charge\", \"Eve Charge\", \"Night Charge\", \"Intl Charge\"], axis=1)", "_____no_output_____" ] ], [ [ "Now that we've cleaned up our dataset, let's determine which algorithm to use. As mentioned above, there appear to be some variables where both high and low (but not intermediate) values are predictive of churn. In order to accommodate this in an algorithm like linear regression, we'd need to generate polynomial (or bucketed) terms. Instead, let's attempt to model this problem using gradient boosted trees. Amazon SageMaker provides an XGBoost container that we can use to train in a managed, distributed setting, and then host as a real-time prediction endpoint. XGBoost uses gradient boosted trees which naturally account for non-linear relationships between features and the target variable, as well as accommodating complex interactions between features.\n\nAmazon SageMaker XGBoost can train on data in either a CSV or LibSVM format. For this example, we'll stick with CSV. It should:\n- Have the predictor variable in the first column\n- Not have a header row\n\nBut first, let's convert our categorical features into numeric features.", "_____no_output_____" ] ], [ [ "model_data = pd.get_dummies(churn)\nmodel_data = pd.concat(\n [model_data[\"Churn?_True.\"], model_data.drop([\"Churn?_False.\", \"Churn?_True.\"], axis=1)], axis=1\n)", "_____no_output_____" ] ], [ [ "And now let's split the data into training, validation, and test sets. 
This will help prevent us from overfitting the model, and allow us to test the model's accuracy on data it hasn't already seen.", "_____no_output_____" ] ], [ [ "train_data, validation_data, test_data = np.split(\n model_data.sample(frac=1, random_state=1729),\n [int(0.7 * len(model_data)), int(0.9 * len(model_data))],\n)\ntrain_data.to_csv(\"train.csv\", header=False, index=False)\nvalidation_data.to_csv(\"validation.csv\", header=False, index=False)", "_____no_output_____" ], [ "len(train_data.columns)", "_____no_output_____" ] ], [ [ "Now we'll upload these files to S3.", "_____no_output_____" ] ], [ [ "boto3.Session().resource(\"s3\").Bucket(bucket).Object(\n os.path.join(prefix, \"train/train.csv\")\n).upload_file(\"train.csv\")\nboto3.Session().resource(\"s3\").Bucket(bucket).Object(\n os.path.join(prefix, \"validation/validation.csv\")\n).upload_file(\"validation.csv\")", "_____no_output_____" ] ], [ [ "---\n## Train\n\nMoving onto training, first we'll need to specify the locations of the XGBoost algorithm containers.", "_____no_output_____" ] ], [ [ "container = sagemaker.image_uris.retrieve(\"xgboost\", sess.boto_region_name, \"latest\")\ndisplay(container)", "_____no_output_____" ] ], [ [ "Then, because we're training with the CSV file format, we'll create `TrainingInput`s that our training function can use as a pointer to the files in S3.", "_____no_output_____" ] ], [ [ "s3_input_train = TrainingInput(\n s3_data=\"s3://{}/{}/train\".format(bucket, prefix), content_type=\"csv\"\n)\ns3_input_validation = TrainingInput(\n s3_data=\"s3://{}/{}/validation/\".format(bucket, prefix), content_type=\"csv\"\n)", "_____no_output_____" ] ], [ [ "Now, we can specify a few parameters like what type of training instances we'd like to use and how many, as well as our XGBoost hyperparameters. A few key hyperparameters are:\n- `max_depth` controls how deep each tree within the algorithm can be built. Deeper trees can lead to better fit, but are more computationally expensive and can lead to overfitting. There is typically some trade-off in model performance that needs to be explored between numerous shallow trees and a smaller number of deeper trees.\n- `subsample` controls sampling of the training data. This technique can help reduce overfitting, but setting it too low can also starve the model of data.\n- `num_round` controls the number of boosting rounds. This is essentially the subsequent models that are trained using the residuals of previous iterations. Again, more rounds should produce a better fit on the training data, but can be computationally expensive or lead to overfitting.\n- `eta` controls how aggressive each round of boosting is. Larger values lead to more conservative boosting.\n- `gamma` controls how aggressively trees are grown. 
Larger values lead to more conservative models.\n\nMore detail on XGBoost's hyper-parameters can be found on their GitHub [page](https://github.com/dmlc/xgboost/blob/master/doc/parameter.md).", "_____no_output_____" ] ], [ [ "sess = sagemaker.Session()\n\nxgb = sagemaker.estimator.Estimator(\n container,\n role,\n instance_count=1,\n instance_type=\"ml.m4.xlarge\",\n output_path=\"s3://{}/{}/output\".format(bucket, prefix),\n sagemaker_session=sess,\n)\nxgb.set_hyperparameters(\n max_depth=5,\n eta=0.2,\n gamma=4,\n min_child_weight=6,\n subsample=0.8,\n silent=0,\n objective=\"binary:logistic\",\n num_round=100,\n)\n\nxgb.fit({\"train\": s3_input_train, \"validation\": s3_input_validation})", "_____no_output_____" ] ], [ [ "---\n## Host\n\nNow that we've trained the algorithm, let's create a model and deploy it to a hosted endpoint.", "_____no_output_____" ] ], [ [ "xgb_predictor = xgb.deploy(\n initial_instance_count=1, instance_type=\"ml.m4.xlarge\", serializer=CSVSerializer()\n)", "_____no_output_____" ] ], [ [ "### Evaluate\n\nNow that we have a hosted endpoint running, we can make real-time predictions from our model very easily, simply by making a `http` POST request. But first, we'll need to set up serializers and deserializers for passing our `test_data` NumPy arrays to the model behind the endpoint.", "_____no_output_____" ], [ "Now, we'll use a simple function to:\n1. Loop over our test dataset\n1. Split it into mini-batches of rows \n1. Convert those mini-batchs to CSV string payloads\n1. Retrieve mini-batch predictions by invoking the XGBoost endpoint\n1. Collect predictions and convert from the CSV output our model provides into a NumPy array", "_____no_output_____" ] ], [ [ "def predict(data, rows=500):\n split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))\n predictions = \"\"\n for array in split_array:\n predictions = \",\".join([predictions, xgb_predictor.predict(array).decode(\"utf-8\")])\n\n return np.fromstring(predictions[1:], sep=\",\")\n\n\npredictions = predict(test_data.to_numpy()[:, 1:])", "_____no_output_____" ], [ "print(predictions)", "_____no_output_____" ] ], [ [ "There are many ways to compare the performance of a machine learning model, but let's start by simply by comparing actual to predicted values. In this case, we're simply predicting whether the customer churned (`1`) or not (`0`), which produces a confusion matrix.", "_____no_output_____" ] ], [ [ "pd.crosstab(\n index=test_data.iloc[:, 0],\n columns=np.round(predictions),\n rownames=[\"actual\"],\n colnames=[\"predictions\"],\n)", "_____no_output_____" ] ], [ [ "_Note, due to randomized elements of the algorithm, your results may differ slightly._\n\nOf the 48 churners, we've correctly predicted 39 of them (true positives). We also incorrectly predicted 4 customers would churn who then ended up not doing so (false positives). There are also 9 customers who ended up churning, that we predicted would not (false negatives).\n\nAn important point here is that because of the `np.round()` function above, we are using a simple threshold (or cutoff) of 0.5. Our predictions from `xgboost` yield continuous values between 0 and 1, and we force them into the binary classes that we began with. However, because a customer that churns is expected to cost the company more than proactively trying to retain a customer who we think might churn, we should consider lowering this cutoff. 
That will almost certainly increase the number of false positives, but it can also be expected to increase the number of true positives and reduce the number of false negatives.\n\nTo get a rough intuition here, let's look at the continuous values of our predictions.", "_____no_output_____" ] ], [ [ "plt.hist(predictions)\nplt.xlabel(\"Predicted churn probability\")\nplt.ylabel(\"Number of customers\")\nplt.show()", "_____no_output_____" ] ], [ [ "The continuous valued predictions coming from our model tend to skew toward 0 or 1, but there is sufficient mass between 0.1 and 0.9 that adjusting the cutoff should indeed shift a number of customers' predictions. For example...", "_____no_output_____" ] ], [ [ "pd.crosstab(index=test_data.iloc[:, 0], columns=np.where(predictions > 0.3, 1, 0))", "_____no_output_____" ] ], [ [ "We can see that lowering the cutoff from 0.5 to 0.3 results in 1 more true positive, 3 more false positives, and 1 fewer false negative. The numbers are small overall here, but that's 6-10% of customers overall that are shifting because of a change to the cutoff. Was this the right decision? We may end up retaining 3 extra customers, but we also unnecessarily incentivized 5 more customers who would have stayed anyway. Determining optimal cutoffs is a key step in properly applying machine learning in a real-world setting. Let's discuss this more broadly and then apply a specific, hypothetical solution for our current problem.\n\n### Relative cost of errors\n\nAny practical binary classification problem is likely to produce a similarly sensitive cutoff. That by itself isn’t a problem. After all, if the scores for two classes are really easy to separate, the problem probably isn’t very hard to begin with and might even be solvable with deterministic rules instead of ML.\n\nMore important, if we put an ML model into production, there are costs associated with the model erroneously assigning false positives and false negatives. We also need to look at similar costs associated with correct predictions of true positives and true negatives. Because the choice of the cutoff affects all four of these statistics, we need to consider the relative costs to the business for each of these four outcomes for each prediction.\n\n#### Assigning costs\n\nWhat are the costs for our problem of mobile operator churn? The costs, of course, depend on the specific actions that the business takes. Let's make some assumptions here.\n\nFirst, assign the true negatives the cost of \\$0. Our model essentially correctly identified a happy customer in this case, and we don’t need to do anything.\n\nFalse negatives are the most problematic, because they incorrectly predict that a churning customer will stay. We lose the customer and will have to pay all the costs of acquiring a replacement customer, including foregone revenue, advertising costs, administrative costs, point of sale costs, and likely a phone hardware subsidy. A quick search on the Internet reveals that such costs typically run in the hundreds of dollars so, for the purposes of this example, let's assume \\$500. This is the cost of false negatives.\n\nFinally, for customers that our model identifies as churning, let's assume a retention incentive in the amount of \\\\$100. If a provider offered a customer such a concession, they may think twice before leaving. This is the cost of both true positive and false positive outcomes. 
In the case of false positives (the customer is happy, but the model mistakenly predicted churn), we will “waste” the \\\\$100 concession. We probably could have spent that \\$100 more effectively, but it's possible we increased the loyalty of an already loyal customer, so that’s not so bad.", "_____no_output_____" ], [ "#### Finding the optimal cutoff\n\nIt’s clear that false negatives are substantially more costly than false positives. Instead of optimizing for error based on the number of customers, we should be minimizing a cost function that looks like this:\n\n```\n$500 * FN(C) + $0 * TN(C) + $100 * FP(C) + $100 * TP(C)\n```\n\nFN(C) means that the false negative percentage is a function of the cutoff, C, and similar for TN, FP, and TP. We need to find the cutoff, C, where the result of the expression is smallest.\n\nA straightforward way to do this is to simply run a simulation over numerous possible cutoffs. We test 100 possible values in the for-loop below.", "_____no_output_____" ] ], [ [ "cutoffs = np.arange(0.01, 1, 0.01)\ncosts = []\nfor c in cutoffs:\n costs.append(\n np.sum(\n np.sum(\n np.array([[0, 100], [500, 100]])\n * pd.crosstab(index=test_data.iloc[:, 0], columns=np.where(predictions > c, 1, 0))\n )\n )\n )\n\ncosts = np.array(costs)\nplt.plot(cutoffs, costs)\nplt.xlabel(\"Cutoff\")\nplt.ylabel(\"Cost\")\nplt.show()", "_____no_output_____" ], [ "print(\n \"Cost is minimized near a cutoff of:\",\n cutoffs[np.argmin(costs)],\n \"for a cost of:\",\n np.min(costs),\n)", "_____no_output_____" ] ], [ [ "The above chart shows how picking a threshold too low results in costs skyrocketing as all customers are given a retention incentive. Meanwhile, setting the threshold too high results in too many lost customers, which ultimately grows to be nearly as costly. The overall cost can be minimized at \\\\$8400 by setting the cutoff to 0.46, which is substantially better than the \\$20k+ we would expect to lose by not taking any action.", "_____no_output_____" ], [ "---\n## Extensions\n\nThis notebook showcased how to build a model that predicts whether a customer is likely to churn, and then how to optimally set a threshold that accounts for the cost of true positives, false positives, and false negatives. There are several means of extending it including:\n- Some customers who receive retention incentives will still churn. Including a probability of churning despite receiving an incentive in our cost function would provide a better ROI on our retention programs.\n- Customers who switch to a lower-priced plan or who deactivate a paid feature represent different kinds of churn that could be modeled separately.\n- Modeling the evolution of customer behavior. If usage is dropping and the number of calls placed to Customer Service is increasing, you are more likely to experience churn then if the trend is the opposite. A customer profile should incorporate behavior trends.\n- Actual training data and monetary cost assignments could be more complex.\n- Multiple models for each type of churn could be needed.\n\nRegardless of additional complexity, similar principles described in this notebook are likely applied.", "_____no_output_____" ], [ "### (Optional) Clean-up\n\nIf you're ready to be done with this notebook, please run the cell below. This will remove the hosted endpoint you created and avoid any charges from a stray instance being left on.", "_____no_output_____" ] ], [ [ "xgb_predictor.delete_endpoint()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ] ]
cbf4383f7d4f6bcbd5444be3408e9581a1c5cc67
4,657
ipynb
Jupyter Notebook
Dataset/Translation.ipynb
SaiSakethAluru/DE-LIMIT
dafb16e86e3b7d777afb73bf94dd5c6b7f69a627
[ "MIT" ]
51
2020-04-14T06:05:17.000Z
2021-03-01T10:36:54.000Z
Dataset/Translation.ipynb
DUT-lujunyu/DE-LIMIT
7bb0e9d9c9a10b998d14b5a0c8483d13583893dc
[ "MIT" ]
4
2020-04-15T19:09:00.000Z
2021-02-08T04:12:26.000Z
Dataset/Translation.ipynb
DUT-lujunyu/DE-LIMIT
7bb0e9d9c9a10b998d14b5a0c8483d13583893dc
[ "MIT" ]
9
2021-03-04T17:21:27.000Z
2022-03-04T08:30:09.000Z
28.224242
149
0.533176
[ [ [ "# Necessary imports\nimport re\nimport emoji\nfrom gtrans import translate_text, translate_html\nimport random\nimport pandas as pd\nimport numpy as np\nfrom multiprocessing import Pool\nimport time", "_____no_output_____" ], [ "# Function to remove emojis in text, since these conflict during translation\ndef remove_emoji(text):\n return emoji.get_emoji_regexp().sub(u'', text)\n\n\ndef approximate_emoji_insert(string, index,char):\n if(index<(len(string)-1)):\n \n while(string[index]!=' ' ):\n if(index+1==len(string)):\n break\n index=index+1\n return string[:index] + ' '+char + ' ' + string[index:]\n else:\n return string + ' '+char + ' ' \n \n\n\ndef extract_emojis(str1):\n try:\n return [(c,i) for i,c in enumerate(str1) if c in emoji.UNICODE_EMOJI]\n except AttributeError:\n return []", "_____no_output_____" ], [ "# Use multiprocessing framework for speeding up translation process\ndef parallelize_dataframe(df, func, n_cores=4):\n '''parallelize the dataframe'''\n df_split = np.array_split(df, n_cores)\n pool = Pool(n_cores)\n df = pd.concat(pool.map(func, df_split))\n pool.close()\n pool.join()\n return df\n\n# Main function for translation\ndef translate(x,lang):\n '''provide the translation given text and the language'''\n #x=preprocess_lib.preprocess_multi(x,lang,multiple_sentences=False,stop_word_remove=False, tokenize_word=False, tokenize_sentence=False)\n emoji_list=extract_emojis(x)\n try:\n translated_text=translate_text(x,lang,'en')\n except:\n translated_text=x\n for ele in emoji_list:\n translated_text=approximate_emoji_insert(translated_text, ele[1],ele[0])\n return translated_text\n\ndef add_features(df):\n '''adding new features to the dataframe'''\n translated_text=[]\n for index,row in df.iterrows():\n if(row['lang']in ['en','unk']):\n translated_text.append(row['text'])\n else:\n translated_text.append(translate(row['text'],row['lang'])) \n df[\"translated\"]=translated_text\n return df", "_____no_output_____" ], [ "import glob \ntrain_files = glob.glob('train/*.csv')\ntest_files = glob.glob('test/*.csv')\nval_files = glob.glob('val/*.csv')\nfiles= train_files+test_files+val_files", "_____no_output_____" ], [ "from tqdm import tqdm_notebook\nsize=10\n\nfor file in files:\n wp_data=pd.read_csv(file)\n list_df=[]\n for i in tqdm_notebook(range(0,100,size)):\n print(i,\"_iteration\")\n df_new=parallelize_dataframe(wp_data[i:i+size],add_features,n_cores=20)\n list_df.append(df_new)\n df_translated=pd.concat(list_df,ignore_index=True)\n file_name='translated'+file\n df_translated.to_csv(file_name)\n", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
cbf44aed2352466782762ced79c16444a6d21f66
292,176
ipynb
Jupyter Notebook
VideoGame.ipynb
eyesimk/CS210-Introduction-to-Data-Science
14a0f47348a3b7bf6f845f1e2512e71bb80273b7
[ "MIT" ]
null
null
null
VideoGame.ipynb
eyesimk/CS210-Introduction-to-Data-Science
14a0f47348a3b7bf6f845f1e2512e71bb80273b7
[ "MIT" ]
null
null
null
VideoGame.ipynb
eyesimk/CS210-Introduction-to-Data-Science
14a0f47348a3b7bf6f845f1e2512e71bb80273b7
[ "MIT" ]
null
null
null
129.395926
102,704
0.801688
[ [ [ "import pandas\nimport matplotlib.pyplot\nimport numpy\nimport seaborn\nimport sys\nimport os\nfrom os.path import join\nfrom scipy import stats\nfrom sklearn.svm import SVC\nfrom sklearn.metrics import accuracy_score, f1_score\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.neighbors import KNeighborsClassifier", "_____no_output_____" ] ], [ [ "First, we will open the csv file and shows its structure.", "_____no_output_____" ] ], [ [ "dataFrame=pandas.read_csv(join(\"./data\", \"vgsales.csv\"))\ndataFrame.head()", "_____no_output_____" ] ], [ [ "# Project Detail", "_____no_output_____" ], [ "## Dataset", "_____no_output_____" ], [ "Dataset used in the project is taken from https://www.kaggle.com/gregorut/videogamesales and it is about video game sales around the world. Dataset contains 16598 data points in which each of them has 11 attributes. Attributes and their data types will be introduced in the data preprocessing section.", "_____no_output_____" ], [ "## Project Description", "_____no_output_____" ], [ "In this project, we will explore above mentioned dataset about video games as well as perform statistical analysis, hypothesis testing and machine learning techniques on the dataset respectively. Main focus of our whole project will be to explore relationships between attributes of the dataset and exploring ways to relate some of them with others and finally accurately predict category of data points from our learnings.", "_____no_output_____" ], [ "# Data Preprocessing", "_____no_output_____" ], [ "We will drop NaN values from the table. After that, we will introduce some columns to the table in order to represent some string columns with their numeric representations.", "_____no_output_____" ] ], [ [ "dataFrame=dataFrame.dropna(axis=0,how='any')\nplatforms=dict()\ngenres=dict()\npublishers=dict()\npcount=1\ngcount=1\npubcount=1\nfor index,row in dataFrame.iterrows():\n if row['Platform'] not in platforms.keys():\n platforms[row['Platform']]=pcount\n pcount+=1\n if row['Genre'] not in genres.keys():\n genres[row['Genre']]=gcount\n gcount+=1\n if row['Publisher'] not in publishers.keys():\n publishers[row['Publisher']]=pubcount\n pubcount+=1\ndataFrame['PlatformNum']=0\ndataFrame['GenreNum']=0\ndataFrame['PublisherNum']=0\nfor index, row in dataFrame.iterrows():\n dataFrame.loc[index,'PlatformNum'] = platforms[row['Platform']]\n dataFrame.loc[index,'GenreNum'] = genres[row['Genre']]\n dataFrame.loc[index,'PublisherNum'] = publishers[row['Publisher']]\ndataFrame", "_____no_output_____" ] ], [ [ "Now lets call the dtypes method to see the attributes of final dataframe", "_____no_output_____" ] ], [ [ "dataFrame.dtypes", "_____no_output_____" ] ], [ [ "At this point, our dataset is ready to work on with no null data and complete numeric representation of each of its attributes except name of each game which is not required to be a numeric data due to lack of representative quality.", "_____no_output_____" ], [ "# Data Exploration", "_____no_output_____" ] ], [ [ "fig=matplotlib.pyplot.figure(figsize=(15,20))\nmatplotlib.pyplot.subplot(2, 1, 1)\nseaborn.distplot(dataFrame[\"Year\"].values, norm_hist=True)\nmatplotlib.pyplot.title(\"Distribution of Release Year of Games\")\nmatplotlib.pyplot.xlabel(\"Years\")\nmatplotlib.pyplot.ylabel(\"Density\")\nmatplotlib.pyplot.subplot(2, 1, 2)\nseaborn.distplot(dataFrame[\"Global_Sales\"].values, norm_hist=True)\nmatplotlib.pyplot.title(\"Distribution of Sales of Games\")\nmatplotlib.pyplot.xlabel(\"Sales in 
Millions\")\nmatplotlib.pyplot.ylabel(\"Density\")\nmatplotlib.pyplot.show()\n", "C:\\Anaconda3\\lib\\site-packages\\scipy\\stats\\stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval\n" ] ], [ [ " - When we look at the plot of release years, we can see the imploding point at video game releases. It is convenient that peak point of the sales coincides with the introduction of first smart phone in 2009. After that point, it is safe to assume that mobile phones become more accessible and less risky for game companies to invest and therefore other platforms such as Wii in our dataset took a hit.\n - On the other hand sales graph does not provide much information to us besides the fact that most of the games released only sold in the amount of few millons.", "_____no_output_____" ], [ "Now we will look at some relations between data attributes and try to deduce as much as we can from the graphs.", "_____no_output_____" ] ], [ [ "seaborn.pairplot(data=dataFrame, vars=[\"Year\",\"Global_Sales\"])\nmatplotlib.pyplot.show()", "_____no_output_____" ] ], [ [ "This pairplot between Year and Global Sales attribute shows us that these two attributes are not much related. Even though highest sales also coincides with the peak year for game releases, this can only show us statistical relation. It is arguably sample size bias. We can see that by looking at the number of high global sales in around 90s. Even though 90s are the least productive era of game releases, it has couple of hit games appearantly.", "_____no_output_____" ], [ "Similar to other industries, video game companies as well as video game platforms have peak times. Now, lets look at following plots and try to deduce which platforms dominated which years. But first, lets remember platform codes with the following.", "_____no_output_____" ] ], [ [ "platformNames=[]\ngenreNames=[]\nfor key in platforms:\n platformNames.append(str(key))\nfor key in genres:\n genreNames.append(str(key))\nfig=matplotlib.pyplot.figure(figsize=(10,10))\nmatplotlib.pyplot.hexbin(dataFrame['GenreNum'].values, dataFrame['PlatformNum'].values , gridsize=(15,15),cmap=matplotlib.pyplot.cm.Greens )\nmatplotlib.pyplot.colorbar()\nmatplotlib.pyplot.xticks(numpy.arange(1, len(genreNames)+1, step=1),genreNames,rotation=60)\nmatplotlib.pyplot.yticks(numpy.arange(1, len(platformNames)+1, step=1),platformNames)\nmatplotlib.pyplot.show()\n", "_____no_output_____" ] ], [ [ " - Action games are released more than any other genre.\n - PS2, PS3, XBOX 360 and Nintendo DS are the platforms that has most number of games that can be played on.\n - Except for Action genre and above mentioned platforms, there is no visible pattern in the data. This can mean that our observations are heavily affected by some outliers in the data. ", "_____no_output_____" ], [ "# Hypothesis Testing", "_____no_output_____" ], [ "In this section, we will perform some hypothesis testing operations to explore our dataset even more. Selection of hypothesis and related attributes will be arbitary.", "_____no_output_____" ], [ "We think that political tension all over the world is affecting video game genre selections as well as it affects every possible cultural aspect. 
For this reason, we will test the following hypothesis: \"After the year of 2001 (Terorist attack to US), Action and Shooter genres have more average sales compared to their average sales before.\"", "_____no_output_____" ], [ "For this hypothesis, we define following sets:\nK : Sales of action and shooter genre video games after the year 2001(exclusive)\nL : Sales of action and shooter genre video games before the year 2001(exclusive)\nThus, our hypothesis becomes\n - $H_0 : \\mu_K = \\mu_L$\n - $H_1 : \\mu_K > \\mu_L$", "_____no_output_____" ], [ "At first, we need to find data points for both sets.", "_____no_output_____" ] ], [ [ "K=dataFrame[(dataFrame[\"Year\"]>2001) & ((dataFrame['Genre']=='Action') | (dataFrame['Genre']=='Shooter'))]['Global_Sales']\nK", "_____no_output_____" ], [ "L=dataFrame[(dataFrame[\"Year\"]<=2001) & ((dataFrame['Genre']=='Action') | (dataFrame['Genre']=='Shooter'))]['Global_Sales']\nL", "_____no_output_____" ] ], [ [ "Now, lets see their distribution on a plot", "_____no_output_____" ] ], [ [ "ax = seaborn.kdeplot(K.rename('After 2001'), shade=True)\nseaborn.kdeplot(L.rename('Before 2001'), ax=ax, shade=True)\nmatplotlib.pyplot.show()", "_____no_output_____" ] ], [ [ "Now, with the 0.05 significance level, we call the difference between means hypothesis testing function", "_____no_output_____" ] ], [ [ "t, p = stats.ttest_ind(a=K.values, b=L.values, equal_var=False)\nprint(\"T-statistşic is \", t)\nprint(\"P-value is, \", p)", "T-statistşic is -2.773132477189671\nP-value is, 0.0057087266959941815\n" ] ], [ [ "We can reject the null hypothesis however, t-statistic is negative. Meaning that we sales after 2001 appears to be lowered compared to the value before 2001. Meaning that our hypothesis is not valid.", "_____no_output_____" ], [ "# Regression Analysis", "_____no_output_____" ], [ "In this section, we will look at the correlation among NA Sales and EU Sales. As we said earlier, they should be increasing and decreasing together with strong correlation coefficient.", "_____no_output_____" ] ], [ [ "a, b, correlation, p, sigma = stats.linregress(dataFrame['NA_Sales'],dataFrame['EU_Sales'])\nprint(\"Slope is \",a)\nprint(\"Intercept is \", b)\nprint(\"Correlation is \", correlation)", "Slope is 0.47616662965288575\nIntercept is 0.021239180233422694\nCorrelation is 0.768922992756561\n" ] ], [ [ "As we can see correlation coefficient is high. This means two variables are very correlated. Now lets see on a graph. To do this we will create a sample.", "_____no_output_____" ] ], [ [ "x_vals=[i for i in range(1,20)]\ny_vals=[a*i+b for i in x_vals]\nmatplotlib.pyplot.plot(x_vals,y_vals)\nmatplotlib.pyplot.xlabel(\"NA Sales\")\nmatplotlib.pyplot.ylabel(\"EU Sales\")\nmatplotlib.pyplot.title(\"Correlation of NA and EU Sales\")\nmatplotlib.pyplot.show()", "_____no_output_____" ] ], [ [ "# Machine Learning", "_____no_output_____" ], [ "As machine learning algorithms, we have chosen Support Vector Machine and Naive Bayes. 
We will classify our data by genres.", "_____no_output_____" ] ], [ [ "from sklearn.svm import SVC\nfrom sklearn.metrics import accuracy_score, f1_score\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.naive_bayes import GaussianNB\nimport warnings\nwarnings.filterwarnings('ignore')", "_____no_output_____" ], [ "Classes=dataFrame['GenreNum'].values\nFeatures=dataFrame.drop(['GenreNum','Genre','Name','Platform','Publisher'],axis=1).values", "_____no_output_____" ] ], [ [ "## Support Vector Machine", "_____no_output_____" ] ], [ [ "X_train, X_test, y_train, y_test = train_test_split(Features, Classes, test_size=0.33, random_state=42)\nclf = SVC(kernel=\"rbf\")\nclf.fit(X_train, y_train)\ny_pred = clf.predict(X_test)\nprint(\"Accuracy is \",accuracy_score(y_test, y_pred))\nprint(\"F1 score is \",f1_score(y_test, y_pred,average='weighted'))", "Accuracy is 0.20159940487260555\nF1 score is 0.07560120837029409\n" ] ], [ [ "## Naive Bayes", "_____no_output_____" ] ], [ [ "clf = GaussianNB()\nX_train, X_test, y_train, y_test = train_test_split(Features, Classes, test_size=0.33, random_state=42)\nclf.fit(X_train, y_train)\ny_pred = clf.predict(X_test)\nprint(\"Accuracy is \",accuracy_score(y_test, y_pred))\nprint(\"F1 score is \",f1_score(y_test, y_pred,average='weighted'))", "Accuracy is 0.13260182257764552\nF1 score is 0.0954174405452282\n" ] ], [ [ "As we can see in the above calculations, both machine learning algorithms work pretty bad on the data. This can mean that attributes in our dataset does not contain sufficient information about the Genre information of games. Maybe adding some other features to the dataset increase the accuracy of the algorithms.", "_____no_output_____" ] ] ]
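One detail worth flagging about the hypothesis test in this notebook: `scipy.stats.ttest_ind` returns a two-sided p-value, while $H_1$ here is one-sided (it claims $\mu_K > \mu_L$). A minimal sketch of the usual one-sided adjustment, assuming `K` and `L` as defined above:

```python
from scipy import stats

t, p_two_sided = stats.ttest_ind(a=K.values, b=L.values, equal_var=False)

# For H1: mu_K > mu_L, halve the two-sided p-value when t is positive;
# when t <= 0 the sample means point in the opposite direction of H1.
p_one_sided = p_two_sided / 2 if t > 0 else 1 - p_two_sided / 2

print("One-sided p-value:", p_one_sided)
# Here t < 0, so the one-sided p-value is large and H1 is not supported,
# consistent with the conclusion reached above.
```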
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
cbf44d71c169152cfec62e7591c60dd9b46a4bdc
69,283
ipynb
Jupyter Notebook
Sequence Models/Cities -- Character level language model final - v3-Copy1.ipynb
dfmooreqqq/deeplearning
5d90731a1e56a41dda4a586e30f3e43d7dc6e3d9
[ "MIT" ]
null
null
null
Sequence Models/Cities -- Character level language model final - v3-Copy1.ipynb
dfmooreqqq/deeplearning
5d90731a1e56a41dda4a586e30f3e43d7dc6e3d9
[ "MIT" ]
null
null
null
Sequence Models/Cities -- Character level language model final - v3-Copy1.ipynb
dfmooreqqq/deeplearning
5d90731a1e56a41dda4a586e30f3e43d7dc6e3d9
[ "MIT" ]
null
null
null
31.194507
1,436
0.526161
[ [ [ "# Character level language model - Dinosaurus land\n\nWelcome to Dinosaurus Island! 65 million years ago, dinosaurs existed, and in this assignment they are back. You are in charge of a special task. Leading biology researchers are creating new breeds of dinosaurs and bringing them to life on earth, and your job is to give names to these dinosaurs. If a dinosaur does not like its name, it might go beserk, so choose wisely! \n\n<table>\n<td>\n<img src=\"images/dino.jpg\" style=\"width:250;height:300px;\">\n\n</td>\n\n</table>\n\nLuckily you have learned some deep learning and you will use it to save the day. Your assistant has collected a list of all the dinosaur names they could find, and compiled them into this [dataset](cities.txt). (Feel free to take a look by clicking the previous link.) To create new dinosaur names, you will build a character level language model to generate new names. Your algorithm will learn the different name patterns, and randomly generate new names. Hopefully this algorithm will keep you and your team safe from the dinosaurs' wrath! \n\nBy completing this assignment you will learn:\n\n- How to store text data for processing using an RNN \n- How to synthesize data, by sampling predictions at each time step and passing it to the next RNN-cell unit\n- How to build a character-level text generation recurrent neural network\n- Why clipping the gradients is important\n\nWe will begin by loading in some functions that we have provided for you in `rnn_utils`. Specifically, you have access to functions such as `rnn_forward` and `rnn_backward` which are equivalent to those you've implemented in the previous assignment. ", "_____no_output_____" ] ], [ [ "import numpy as np\nfrom utils import *\nimport random", "_____no_output_____" ] ], [ [ "## 1 - Problem Statement\n\n### 1.1 - Dataset and Preprocessing\n\nRun the following cell to read the dataset of dinosaur names, create a list of unique characters (such as a-z), and compute the dataset and vocabulary size. ", "_____no_output_____" ] ], [ [ "data = open('Britishcities.txt', 'r').read()\ndata = data.lower()\nchars = list(set(data))\ndata_size, vocab_size = len(data), len(chars)\nprint('There are %d total characters and %d unique characters in your data.' % (data_size, vocab_size))", "There are 6330 total characters and 27 unique characters in your data.\n" ] ], [ [ "The characters are a-z (26 characters) plus the \"\\n\" (or newline character), which in this assignment plays a role similar to the `<EOS>` (or \"End of sentence\") token we had discussed in lecture, only here it indicates the end of the dinosaur name rather than the end of a sentence. In the cell below, we create a python dictionary (i.e., a hash table) to map each character to an index from 0-26. We also create a second python dictionary that maps each index back to the corresponding character character. This will help you figure out what index corresponds to what character in the probability distribution output of the softmax layer. Below, `char_to_ix` and `ix_to_char` are the python dictionaries. 
", "_____no_output_____" ] ], [ [ "char_to_ix = { ch:i for i,ch in enumerate(sorted(chars)) }\nix_to_char = { i:ch for i,ch in enumerate(sorted(chars)) }\nprint(ix_to_char)", "{0: '\\n', 1: 'a', 2: 'b', 3: 'c', 4: 'd', 5: 'e', 6: 'f', 7: 'g', 8: 'h', 9: 'i', 10: 'j', 11: 'k', 12: 'l', 13: 'm', 14: 'n', 15: 'o', 16: 'p', 17: 'q', 18: 'r', 19: 's', 20: 't', 21: 'u', 22: 'v', 23: 'w', 24: 'x', 25: 'y', 26: 'z'}\n" ] ], [ [ "### 1.2 - Overview of the model\n\nYour model will have the following structure: \n\n- Initialize parameters \n- Run the optimization loop\n - Forward propagation to compute the loss function\n - Backward propagation to compute the gradients with respect to the loss function\n - Clip the gradients to avoid exploding gradients\n - Using the gradients, update your parameter with the gradient descent update rule.\n- Return the learned parameters \n \n<img src=\"images/rnn1.png\" style=\"width:450;height:300px;\">\n<caption><center> **Figure 1**: Recurrent Neural Network, similar to what you had built in the previous notebook \"Building a RNN - Step by Step\". </center></caption>\n\nAt each time-step, the RNN tries to predict what is the next character given the previous characters. The dataset $X = (x^{\\langle 1 \\rangle}, x^{\\langle 2 \\rangle}, ..., x^{\\langle T_x \\rangle})$ is a list of characters in the training set, while $Y = (y^{\\langle 1 \\rangle}, y^{\\langle 2 \\rangle}, ..., y^{\\langle T_x \\rangle})$ is such that at every time-step $t$, we have $y^{\\langle t \\rangle} = x^{\\langle t+1 \\rangle}$. ", "_____no_output_____" ], [ "## 2 - Building blocks of the model\n\nIn this part, you will build two important blocks of the overall model:\n- Gradient clipping: to avoid exploding gradients\n- Sampling: a technique used to generate characters\n\nYou will then apply these two functions to build the model.", "_____no_output_____" ], [ "### 2.1 - Clipping the gradients in the optimization loop\n\nIn this section you will implement the `clip` function that you will call inside of your optimization loop. Recall that your overall loop structure usually consists of a forward pass, a cost computation, a backward pass, and a parameter update. Before updating the parameters, you will perform gradient clipping when needed to make sure that your gradients are not \"exploding,\" meaning taking on overly large values. \n\nIn the exercise below, you will implement a function `clip` that takes in a dictionary of gradients and returns a clipped version of gradients if needed. There are different ways to clip gradients; we will use a simple element-wise clipping procedure, in which every element of the gradient vector is clipped to lie between some range [-N, N]. More generally, you will provide a `maxValue` (say 10). In this example, if any component of the gradient vector is greater than 10, it would be set to 10; and if any component of the gradient vector is less than -10, it would be set to -10. If it is between -10 and 10, it is left alone. \n\n<img src=\"images/clip.png\" style=\"width:400;height:150px;\">\n<caption><center> **Figure 2**: Visualization of gradient descent with and without gradient clipping, in a case where the network is running into slight \"exploding gradient\" problems. </center></caption>\n\n**Exercise**: Implement the function below to return the clipped gradients of your dictionary `gradients`. Your function takes in a maximum threshold and returns the clipped versions of your gradients. 
You can check out this [hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.clip.html) for examples of how to clip in numpy. You will need to use the argument `out = ...`.", "_____no_output_____" ] ], [ [ "### GRADED FUNCTION: clip\n\ndef clip(gradients, maxValue):\n '''\n Clips the gradients' values between minimum and maximum.\n \n Arguments:\n gradients -- a dictionary containing the gradients \"dWaa\", \"dWax\", \"dWya\", \"db\", \"dby\"\n maxValue -- everything above this number is set to this number, and everything less than -maxValue is set to -maxValue\n \n Returns: \n gradients -- a dictionary with the clipped gradients.\n '''\n \n dWaa, dWax, dWya, db, dby = gradients['dWaa'], gradients['dWax'], gradients['dWya'], gradients['db'], gradients['dby']\n \n ### START CODE HERE ###\n # clip to mitigate exploding gradients, loop over [dWax, dWaa, dWya, db, dby]. (≈2 lines)\n for gradient in [dWax, dWaa, dWya, db, dby]:\n np.clip(gradient, -maxValue, maxValue, out=gradient)\n ### END CODE HERE ###\n \n gradients = {\"dWaa\": dWaa, \"dWax\": dWax, \"dWya\": dWya, \"db\": db, \"dby\": dby}\n \n return gradients", "_____no_output_____" ], [ "np.random.seed(3)\ndWax = np.random.randn(5,3)*10\ndWaa = np.random.randn(5,5)*10\ndWya = np.random.randn(2,5)*10\ndb = np.random.randn(5,1)*10\ndby = np.random.randn(2,1)*10\ngradients = {\"dWax\": dWax, \"dWaa\": dWaa, \"dWya\": dWya, \"db\": db, \"dby\": dby}\ngradients = clip(gradients, 10)\nprint(\"gradients[\\\"dWaa\\\"][1][2] =\", gradients[\"dWaa\"][1][2])\nprint(\"gradients[\\\"dWax\\\"][3][1] =\", gradients[\"dWax\"][3][1])\nprint(\"gradients[\\\"dWya\\\"][1][2] =\", gradients[\"dWya\"][1][2])\nprint(\"gradients[\\\"db\\\"][4] =\", gradients[\"db\"][4])\nprint(\"gradients[\\\"dby\\\"][1] =\", gradients[\"dby\"][1])", "gradients[\"dWaa\"][1][2] = 10.0\ngradients[\"dWax\"][3][1] = -10.0\ngradients[\"dWya\"][1][2] = 0.29713815361\ngradients[\"db\"][4] = [ 10.]\ngradients[\"dby\"][1] = [ 8.45833407]\n" ] ], [ [ "** Expected output:**\n\n<table>\n<tr>\n <td> \n **gradients[\"dWaa\"][1][2] **\n </td>\n <td> \n 10.0\n </td>\n</tr>\n\n<tr>\n <td> \n **gradients[\"dWax\"][3][1]**\n </td>\n <td> \n -10.0\n </td>\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"dWya\"][1][2]**\n </td>\n <td> \n0.29713815361\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"db\"][4]**\n </td>\n <td> \n[ 10.]\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"dby\"][1]**\n </td>\n <td> \n[ 8.45833407]\n </td>\n</tr>\n\n</table>", "_____no_output_____" ], [ "### 2.2 - Sampling\n\nNow assume that your model is trained. You would like to generate new text (characters). The process of generation is explained in the picture below:\n\n<img src=\"images/dinos3.png\" style=\"width:500;height:300px;\">\n<caption><center> **Figure 3**: In this picture, we assume the model is already trained. We pass in $x^{\\langle 1\\rangle} = \\vec{0}$ at the first time step, and have the network then sample one character at a time. </center></caption>\n\n**Exercise**: Implement the `sample` function below to sample characters. You need to carry out 4 steps:\n\n- **Step 1**: Pass the network the first \"dummy\" input $x^{\\langle 1 \\rangle} = \\vec{0}$ (the vector of zeros). This is the default input before we've generated any characters. We also set $a^{\\langle 0 \\rangle} = \\vec{0}$\n\n- **Step 2**: Run one step of forward propagation to get $a^{\\langle 1 \\rangle}$ and $\\hat{y}^{\\langle 1 \\rangle}$. 
Here are the equations:\n\n$$ a^{\\langle t+1 \\rangle} = \\tanh(W_{ax} x^{\\langle t \\rangle } + W_{aa} a^{\\langle t \\rangle } + b)\\tag{1}$$\n\n$$ z^{\\langle t + 1 \\rangle } = W_{ya} a^{\\langle t + 1 \\rangle } + b_y \\tag{2}$$\n\n$$ \\hat{y}^{\\langle t+1 \\rangle } = softmax(z^{\\langle t + 1 \\rangle })\\tag{3}$$\n\nNote that $\\hat{y}^{\\langle t+1 \\rangle }$ is a (softmax) probability vector (its entries are between 0 and 1 and sum to 1). $\\hat{y}^{\\langle t+1 \\rangle}_i$ represents the probability that the character indexed by \"i\" is the next character. We have provided a `softmax()` function that you can use.\n\n- **Step 3**: Carry out sampling: Pick the next character's index according to the probability distribution specified by $\\hat{y}^{\\langle t+1 \\rangle }$. This means that if $\\hat{y}^{\\langle t+1 \\rangle }_i = 0.16$, you will pick the index \"i\" with 16% probability. To implement it, you can use [`np.random.choice`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.random.choice.html).\n\nHere is an example of how to use `np.random.choice()`:\n```python\nnp.random.seed(0)\np = np.array([0.1, 0.0, 0.7, 0.2])\nindex = np.random.choice([0, 1, 2, 3], p = p.ravel())\n```\nThis means that you will pick the `index` according to the distribution: \n$P(index = 0) = 0.1, P(index = 1) = 0.0, P(index = 2) = 0.7, P(index = 3) = 0.2$.\n\n- **Step 4**: The last step to implement in `sample()` is to overwrite the variable `x`, which currently stores $x^{\\langle t \\rangle }$, with the value of $x^{\\langle t + 1 \\rangle }$. You will represent $x^{\\langle t + 1 \\rangle }$ by creating a one-hot vector corresponding to the character you've chosen as your prediction. You will then forward propagate $x^{\\langle t + 1 \\rangle }$ in Step 1 and keep repeating the process until you get a \"\\n\" character, indicating you've reached the end of the dinosaur name. ", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: sample\n\ndef sample(parameters, char_to_ix, seed):\n \"\"\"\n Sample a sequence of characters according to a sequence of probability distributions output of the RNN\n\n Arguments:\n parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b. \n char_to_ix -- python dictionary mapping each character to an index.\n seed -- used for grading purposes. Do not worry about it.\n\n Returns:\n indices -- a list of length n containing the indices of the sampled characters.\n \"\"\"\n \n # Retrieve parameters and relevant shapes from \"parameters\" dictionary\n Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b']\n vocab_size = by.shape[0]\n n_a = Waa.shape[1]\n \n ### START CODE HERE ###\n # Step 1: Create the one-hot vector x for the first character (initializing the sequence generation). (≈1 line)\n x = np.zeros((vocab_size, 1))\n # Step 1': Initialize a_prev as zeros (≈1 line)\n a_prev = np.zeros((n_a, 1))\n \n # Create an empty list of indices, this is the list which will contain the list of indices of the characters to generate (≈1 line)\n indices = []\n \n # Idx is a flag to detect a newline character, we initialize it to -1\n idx = -1 \n \n # Loop over time-steps t. At each time-step, sample a character from a probability distribution and append \n # its index to \"indices\". We'll stop if we reach 50 characters (which should be very unlikely with a well \n # trained model), which helps debugging and prevents entering an infinite loop. 
\n counter = 0\n newline_character = char_to_ix['\\n']\n \n while (idx != newline_character and counter != 50):\n \n # Step 2: Forward propagate x using the equations (1), (2) and (3)\n a = np.tanh(np.dot(Wax, x) + np.dot(Waa, a_prev) + b)\n z = np.dot(Wya, a) + by\n y = softmax(z)\n \n # for grading purposes\n np.random.seed(counter + seed) \n \n # Step 3: Sample the index of a character within the vocabulary from the probability distribution y\n idx = np.random.choice(list(range(vocab_size)), p=y.ravel())\n\n # Append the index to \"indices\"\n indices.append(idx)\n \n # Step 4: Overwrite the input character as the one corresponding to the sampled index.\n x = np.zeros((vocab_size, 1))\n x[idx] = 1\n \n # Update \"a_prev\" to be \"a\"\n a_prev = a\n \n # for grading purposes\n seed += 1\n counter +=1\n \n ### END CODE HERE ###\n\n if (counter == 50):\n indices.append(char_to_ix['\\n'])\n \n return indices", "_____no_output_____" ], [ "np.random.seed(2)\n_, n_a = 20, 100\nWax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)\nb, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)\nparameters = {\"Wax\": Wax, \"Waa\": Waa, \"Wya\": Wya, \"b\": b, \"by\": by}\n\n\nindices = sample(parameters, char_to_ix, 0)\nprint(\"Sampling:\")\nprint(\"list of sampled indices:\", indices)\nprint(\"list of sampled characters:\", [ix_to_char[i] for i in indices])", "Sampling:\nlist of sampled indices: [15, 5, 21, 21, 16, 10, 27, 26, 5, 23, 16, 10, 16, 13, 10, 23, 26, 7, 9, 16, 13, 16, 23, 10, 0]\nlist of sampled characters: ['m', 'c', 't', 't', 'n', 'h', 'z', 'y', 'c', 'v', 'n', 'h', 'n', 'k', 'h', 'v', 'y', 'e', 'g', 'n', 'k', 'n', 'v', 'h', '\\n']\n" ] ], [ [ "** Expected output:**\n<table>\n<tr>\n <td> \n **list of sampled indices:**\n </td>\n <td> \n [12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, <br>\n 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 5, 6, 12, 25, 0, 0]\n </td>\n </tr><tr>\n <td> \n **list of sampled characters:**\n </td>\n <td> \n ['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', 'f', 'u', 'o', <br>\n 'u', 'n', 'c', 'b', 'a', 'u', 'r', 'x', 'g', 'y', 'f', 'y', 'r', 'j', 'p', 'b', 'c', 'h', 'o', <br>\n 'l', 'k', 'g', 'a', 'l', 'j', 'b', 'g', 'g', 'k', 'e', 'f', 'l', 'y', '\\n', '\\n']\n </td>\n \n \n \n</tr>\n</table>", "_____no_output_____" ], [ "## 3 - Building the language model \n\nIt is time to build the character-level language model for text generation. \n\n\n### 3.1 - Gradient descent \n\nIn this section you will implement a function performing one step of stochastic gradient descent (with clipped gradients). You will go through the training examples one at a time, so the optimization algorithm will be stochastic gradient descent. As a reminder, here are the steps of a common optimization loop for an RNN:\n\n- Forward propagate through the RNN to compute the loss\n- Backward propagate through time to compute the gradients of the loss with respect to the parameters\n- Clip the gradients if necessary \n- Update your parameters using gradient descent \n\n**Exercise**: Implement this optimization process (one step of stochastic gradient descent). 
\n\nWe provide you with the following functions: \n\n```python\ndef rnn_forward(X, Y, a_prev, parameters):\n \"\"\" Performs the forward propagation through the RNN and computes the cross-entropy loss.\n It returns the loss' value as well as a \"cache\" storing values to be used in the backpropagation.\"\"\"\n ....\n return loss, cache\n \ndef rnn_backward(X, Y, parameters, cache):\n \"\"\" Performs the backward propagation through time to compute the gradients of the loss with respect\n to the parameters. It returns also all the hidden states.\"\"\"\n ...\n return gradients, a\n\ndef update_parameters(parameters, gradients, learning_rate):\n \"\"\" Updates parameters using the Gradient Descent Update Rule.\"\"\"\n ...\n return parameters\n```", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: optimize\n\ndef optimize(X, Y, a_prev, parameters, learning_rate = 0.01):\n \"\"\"\n Execute one step of the optimization to train the model.\n \n Arguments:\n X -- list of integers, where each integer is a number that maps to a character in the vocabulary.\n Y -- list of integers, exactly the same as X but shifted one index to the left.\n a_prev -- previous hidden state.\n parameters -- python dictionary containing:\n Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)\n Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)\n Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)\n b -- Bias, numpy array of shape (n_a, 1)\n by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)\n learning_rate -- learning rate for the model.\n \n Returns:\n loss -- value of the loss function (cross-entropy)\n gradients -- python dictionary containing:\n dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)\n dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)\n dWya -- Gradients of hidden-to-output weights, of shape (n_y, n_a)\n db -- Gradients of bias vector, of shape (n_a, 1)\n dby -- Gradients of output bias vector, of shape (n_y, 1)\n a[len(X)-1] -- the last hidden state, of shape (n_a, 1)\n \"\"\"\n \n ### START CODE HERE ###\n \n # Forward propagate through time (≈1 line)\n loss, cache = rnn_forward(X, Y, a_prev, parameters)\n \n # Backpropagate through time (≈1 line)\n gradients, a = rnn_backward(X, Y, parameters, cache)\n \n # Clip your gradients between -5 (min) and 5 (max) (≈1 line)\n gradients = clip(gradients, 5)\n \n # Update parameters (≈1 line)\n parameters = update_parameters(parameters, gradients, learning_rate)\n \n ### END CODE HERE ###\n \n return loss, gradients, a[len(X)-1]", "_____no_output_____" ], [ "np.random.seed(1)\nvocab_size, n_a = 27, 100\na_prev = np.random.randn(n_a, 1)\nWax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)\nb, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)\nparameters = {\"Wax\": Wax, \"Waa\": Waa, \"Wya\": Wya, \"b\": b, \"by\": by}\nX = [12,3,5,11,22,3]\nY = [4,14,11,22,25, 26]\n\nloss, gradients, a_last = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)\nprint(\"Loss =\", loss)\nprint(\"gradients[\\\"dWaa\\\"][1][2] =\", gradients[\"dWaa\"][1][2])\nprint(\"np.argmax(gradients[\\\"dWax\\\"]) =\", np.argmax(gradients[\"dWax\"]))\nprint(\"gradients[\\\"dWya\\\"][1][2] =\", gradients[\"dWya\"][1][2])\nprint(\"gradients[\\\"db\\\"][4] =\", gradients[\"db\"][4])\nprint(\"gradients[\\\"dby\\\"][1] =\", gradients[\"dby\"][1])\nprint(\"a_last[4] =\", 
a_last[4])", "Loss = 126.503975722\ngradients[\"dWaa\"][1][2] = 0.194709315347\nnp.argmax(gradients[\"dWax\"]) = 93\ngradients[\"dWya\"][1][2] = -0.007773876032\ngradients[\"db\"][4] = [-0.06809825]\ngradients[\"dby\"][1] = [ 0.01538192]\na_last[4] = [-1.]\n" ] ], [ [ "** Expected output:**\n\n<table>\n\n\n<tr>\n <td> \n **Loss **\n </td>\n <td> \n 126.503975722\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"dWaa\"][1][2]**\n </td>\n <td> \n 0.194709315347\n </td>\n<tr>\n <td> \n **np.argmax(gradients[\"dWax\"])**\n </td>\n <td> 93\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"dWya\"][1][2]**\n </td>\n <td> -0.007773876032\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"db\"][4]**\n </td>\n <td> [-0.06809825]\n </td>\n</tr>\n<tr>\n <td> \n **gradients[\"dby\"][1]**\n </td>\n <td>[ 0.01538192]\n </td>\n</tr>\n<tr>\n <td> \n **a_last[4]**\n </td>\n <td> [-1.]\n </td>\n</tr>\n\n</table>", "_____no_output_____" ], [ "### 3.2 - Training the model ", "_____no_output_____" ], [ "Given the dataset of dinosaur names, we use each line of the dataset (one name) as one training example. Every 100 steps of stochastic gradient descent, you will sample 10 randomly chosen names to see how the algorithm is doing. Remember to shuffle the dataset, so that stochastic gradient descent visits the examples in random order. \n\n**Exercise**: Follow the instructions and implement `model()`. When `examples[index]` contains one dinosaur name (string), to create an example (X, Y), you can use this:\n```python\n index = j % len(examples)\n X = [None] + [char_to_ix[ch] for ch in examples[index]] \n Y = X[1:] + [char_to_ix[\"\\n\"]]\n```\nNote that we use: `index= j % len(examples)`, where `j = 1....num_iterations`, to make sure that `examples[index]` is always a valid statement (`index` is smaller than `len(examples)`).\nThe first entry of `X` being `None` will be interpreted by `rnn_forward()` as setting $x^{\\langle 0 \\rangle} = \\vec{0}$. Further, this ensures that `Y` is equal to `X` but shifted one step to the left, and with an additional \"\\n\" appended to signify the end of the dinosaur name. ", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: model\n\ndef model(data, ix_to_char, char_to_ix, num_iterations = 200000, n_a = 50, dino_names = 10, vocab_size = 27):\n \"\"\"\n Trains the model and generates dinosaur names. \n \n Arguments:\n data -- text corpus\n ix_to_char -- dictionary that maps the index to a character\n char_to_ix -- dictionary that maps a character to an index\n num_iterations -- number of iterations to train the model for\n n_a -- number of units of the RNN cell\n dino_names -- number of dinosaur names you want to sample at each iteration. 
\n vocab_size -- number of unique characters found in the text, size of the vocabulary\n \n Returns:\n parameters -- learned parameters\n \"\"\"\n \n # Retrieve n_x and n_y from vocab_size\n n_x, n_y = vocab_size, vocab_size\n \n # Initialize parameters\n parameters = initialize_parameters(n_a, n_x, n_y)\n \n # Initialize loss (this is required because we want to smooth our loss, don't worry about it)\n loss = get_initial_loss(vocab_size, dino_names)\n \n # Build list of all dinosaur names (training examples).\n with open(\"Britishcities.txt\") as f:\n examples = f.readlines()\n examples = [x.lower().strip() for x in examples]\n \n # Shuffle list of all dinosaur names\n np.random.seed(0)\n np.random.shuffle(examples)\n \n # Initialize the hidden state of your LSTM\n a_prev = np.zeros((n_a, 1))\n \n # Optimization loop\n for j in range(num_iterations):\n \n ### START CODE HERE ###\n \n # Use the hint above to define one training example (X,Y) (≈ 2 lines)\n index = j % len(examples)\n X = [None] + [char_to_ix[ch] for ch in examples[index]] \n Y = X[1:] + [char_to_ix[\"\\n\"]]\n \n # Perform one optimization step: Forward-prop -> Backward-prop -> Clip -> Update parameters\n # Choose a learning rate of 0.01\n curr_loss, gradients, a_prev = optimize(X, Y, a_prev, parameters)\n \n ### END CODE HERE ###\n \n # Use a latency trick to keep the loss smooth. It happens here to accelerate the training.\n loss = smooth(loss, curr_loss)\n\n # Every 2000 Iteration, generate \"n\" characters thanks to sample() to check if the model is learning properly\n if j % 2000 == 0:\n \n print('Iteration: %d, Loss: %f' % (j, loss) + '\\n')\n \n # The number of dinosaur names to print\n seed = 0\n for name in range(dino_names):\n \n # Sample indices and print them\n sampled_indices = sample(parameters, char_to_ix, seed)\n print_sample(sampled_indices, ix_to_char)\n \n seed += 1 # To get the same result for grading purposed, increment the seed by one. \n \n print('\\n')\n \n return parameters", "_____no_output_____" ] ], [ [ "Run the following cell, you should observe your model outputting random-looking characters at the first iteration. After a few thousand iterations, your model should learn to generate reasonable-looking names. 
", "_____no_output_____" ] ], [ [ "parameters = model(data, ix_to_char, char_to_ix)", "Iteration: 0, Loss: 32.955075\n\nNkzxwtdmeqoeyhsqwasjjjvu\nKneb\nKzxwtdmeqoeyhsqwasjjjvu\nNeb\nZxwtdmeqoeyhsqwasjjjvu\nEb\nXwtdmeqoeyhsqwasjjjvu\nB\nWtdmeqoeyhsqwasjjjvu\n\n\n\nIteration: 2000, Loss: 28.968484\n\nNeystomedhilwesqsarghbrp\nIn\nKwstomedhilwesqsarghbrp\nNac\nXutomedinbwernocpembsr\nE\nUtomedinbwernocpembsr\nA\nTomedinbwernocpembsr\n\n\n\nIteration: 4000, Loss: 25.359447\n\nNeystom\nKfeabestef\nLwoutgh\nNbaadpsecblonshamton\nWprodgherctestod\nE\nTon\nBalsted\nTon\n\n\n\nIteration: 6000, Loss: 23.566671\n\nNeytpool\nKek\nLutrickbsolthinsburthor\nNeaberwfickpish\nVouth\nD\nToodbirtfury\nBalpon\nTonchingverouch\n\n\n\nIteration: 8000, Loss: 22.812100\n\nLeyston\nHam\nKstor\nLea\nWout\nD\nTord\nBamnmebary\nTone\n\n\n\nIteration: 10000, Loss: 22.088897\n\nMewton\nKhal\nKurysfordecsbury\nMal\nWotten\nCabnoveaborlydbrsom\nTor\nBampsom\nTon\n\n\n\nIteration: 12000, Loss: 21.853953\n\nLeyouth\nHel\nHutrm\nLea\nWouth\nCaaclombery\nTon\nBalynfaampord\nTon\nAloth\n\n\nIteration: 14000, Loss: 21.532611\n\nLeycorambrokthinsarileyabmory\nHel\nHrrordingnby\nLea\nWort\nCacesseaalooreigstonudinscaveschelmiplenioleogorii\nStoge\nBalssfaehton\nStinchmavesnolringtochintle\nAlnon\n\n\nIteration: 16000, Loss: 21.228467\n\nLeymoudgesintesry\nHelcamoweaciskscerton\nHustonbrod\nLecbery\nWort\nCacfrrd\nTpokinnkey\nBainsom\nStinburowglns\nAlnpol\n\n\nIteration: 18000, Loss: 21.127703\n\nLeymoukham\nHeid\nKutrock\nLe\nWosten\nCaadpraccholsamiton\nTos\nBaistocglonon\nTole\n\n\n\nIteration: 20000, Loss: 20.978429\n\nLeypstepdor\nHel\nHuton\nLea\nWotte\nCaallford\nStofkarney\nBaiton\nStle\n\n\n\nIteration: 22000, Loss: 20.673604\n\nLeyton\nHelb\nHuton\nLeabenton\nWotten\nCacheskbort\nTrohleighumotrash\nBalton\nStapenonter\nAlton\n\n\nIteration: 24000, Loss: 20.756749\n\nLeytothamarewesoubraldtockley\nHam\nHutpong\nLacchiseabithupertockwalpthattnbridghinger\nWorthamorey\nCaalntolanorth\nWmingdan\nBalptocfirhfharwichum\nTombericlgritcleigstbishwar\nAlstocchorthamricewandwactond\n\n\nIteration: 26000, Loss: 20.770037\n\nLeymoudd\nHelb\nHustole\nLeaberyak\nWiton\nCackmol\nStodeckhatdinscombesk\nBadmood\nStbeigh\n\n\n\nIteration: 28000, Loss: 20.353107\n\nLeystoke\nHalbamptol\nHutstbury\nLad\nWnstapfordemouth\nCaaford\nTridgerfnudley\nBakpthafington\nTonbast\nAlstfaddrith\n\n\nIteration: 30000, Loss: 20.149160\n\nLey\nHal\nHurton\nLad\nWipter\nCacklon\nStode\nBalstlabnort\nStambridgeinsanellviastlvegbradpcal\nAlton\n\n\nIteration: 32000, Loss: 20.144834\n\nLeynstdebordshiot\nHelcalstok\nHyrymarcheltondsarghamparreyfoedchudch\nLea\nWouthamptor\nCabestee\nWishamptor\nBalyracbounshamptontbrire\nWick\nAlnonbarshol\n\n\nIteration: 34000, Loss: 20.094242\n\nLeysstanburyrastollemist\nHelbaossbackley\nHurtham\nLeabchilbarungeary\nWister\nCabhishad\nStock\nBalsteh\nStanburyshilt\nAlsted\n\n\nIteration: 36000, Loss: 20.015942\n\nLeyton\nHeid\nHutton\nLeabasted\nWitmer\nCaaliok\nTorbigh\nBalyhal\nStatesbrretfoombury\nAlniig\n\n\nIteration: 38000, Loss: 20.071488\n\nHeyton\nChefahmpeafmonthamomeey\nDutton\nHacblisccamplington\nWitolestontaort\nChampsol\nStockesbounhevasherstanplylfard\nBackledbridge\nStckepsey\nAltom\n\n\nIteration: 40000, Loss: 19.989844\n\nLeymorde\nHeicalnord\nHwingham\nLed\nWouth\nCaalmrecclleugh\nWmick\nBalsteabordindishaly\nTon\nAlrom\n\n\nIteration: 42000, Loss: 19.967023\n\nLeyton\nKigcalpricansbury\nKwnouth\nLe\nWotter\nCaacosgadfore\nStockeskey\nBalstid\nStchestex\nAlnom\n\n\nIteration: 44000, Loss: 
19.939794\n\nLeystone\nHedbadgreadington\nHurymandmantinknarigherantowe\nLeabershalfore\nWinham\nCaafrombartholdey\nStoke\nBaklon\nStce\nAlton\n\n\nIteration: 46000, Loss: 19.900838\n\nLeynruckinghorthfburley\nHeicalning\nHurton\nLeabe\nWorthamineyd\nCabdook\nWinfland\nBaenmeadesey\nWick\nAlton\n\n\nIteration: 48000, Loss: 19.863210\n\nLittor\nHencarring\nHurwill\nLeb\nWinker\nBeadin\nWinchester\nBalstlaclinskermincwarpsadpton\nWifrapstteripburwest\nAlton\n\n\nIteration: 50000, Loss: 19.945864\n\nLitnsreleshat\nFord\nHuton\nLeballom\nWotter\nCacesseachooth\nTridgekboveves\nBahktol\nStbrigh\nAlnon\n\n\nIteration: 52000, Loss: 19.601671\n\nListon\nKigcalmon\nKutpschesteteretanbury\nLeaalmon\nUxton\nCacfronbe\nStogley\nBadipleaqton\nSten\nAlton\n\n\nIteration: 54000, Loss: 19.760352\n\nMivgster\nHigcamishaffretforl\nHurton\nMedalnne\nWotter\nCabdove\nTosborougonford\nAbiosba\nTonfinghondovaweonouchinver\nAlstofeinhim\n\n\nIteration: 56000, Loss: 19.677359\n\nLeytpond\nLiccampton\nLyry\nLad\nWlusbridge\nCabllye\nSutbridgeamchoby\nAbermal\nStbrengawding\nAlpon\n\n\nIteration: 58000, Loss: 19.720987\n\nMitouthmertftestoburontofordon\nHelcamnthaford\nHutosbridge\nMad\nWoster\nCadbury\nTriesburtonept\nBalstmbelter\nSterbrids\nAlstoe\n\n\nIteration: 60000, Loss: 19.862926\n\nKivgton\nCheabeste\nCripter\nKeabestek\nWlotenathas\nBaakmon\nStole\nAbesthagealmbort\nPockerfotdoniclelforampowhaliant\nAllincampton\n\n\nIteration: 62000, Loss: 19.441459\n\nMavisham\nHigd\nHutston\nMadalmon\nWostgresshoorow\nBabestok\nWordbore\nAchiplackinton\nTodford\nAlton\n\n\nIteration: 64000, Loss: 19.558540\n\nNewton\nLich\nLystlescheltede\nNeg\nWostdill\nEgadmilbarplinghumbey\nStoggey\nBadlingcopmpomowordwallieatmilledge\nStapingdshinsbury\nAlmon\n\n\nIteration: 66000, Loss: 19.684427\n\nLewrstincorewastock\nHeld\nKwoster\nLacalmrech\nWnstbore\nCaakmord\nWlld\nBadgor\nTrard\nAlton\n\n\nIteration: 68000, Loss: 19.530390\n\nHevpord\nChedalmorabridge\nExpton\nHalae\nWlnonferidleworandfils\nBacesteichriten\nStolfield\nAcherd\nRololincreston\nAlton\n\n\nIteration: 70000, Loss: 19.538280\n\nHittton\nDodbajowee\nEyossamastlverstarle\nHal\nWottanburts\nBe\nTrickcombt\nAbhescadgrewichlichwalfoldorham\nStapleigomouth\nAlpstackneweldriddwbirscastleiggriste\n\n\nIteration: 72000, Loss: 19.697274\n\nLitmouth\nElidchurd\nGunton\nLeabers\nWorth\nBaberpod\nStodford\nAchipod\nStanbury\nAlstofburymallton\n\n\nIteration: 74000, Loss: 19.638456\n\nMitston\nHelbacleige\nHutton\nMeccasted\nWlores\nCaamithallinon\nTreaseorey\nAchercalleoree\nTraresteves\nAlton\n\n\nIteration: 76000, Loss: 19.908440\n\nFithurhalleclespodlunios\nCle\nCuston\nFad\nWowraugh\nBabershaghtowel\nSurhelrooughas\nAbeston\nSulschektleiparielos\nAlnon\n\n\nIteration: 78000, Loss: 19.911243\n\nNhutwich\nHincalpton\nHystonbury\nNe\nWyster\nCeaditge\nWyrhinlemortincomley\nAberrea\nWolefordinort\nAltrea\n\n\nIteration: 80000, Loss: 19.742198\n\nMitton\nHekbarsteb\nKtotter\nMacbisteaddontencurgdumburgershameliothambe\nWoster\nCaadiseechliod\nStone\nAbistie\nSter\nAlton\n\n\nIteration: 82000, Loss: 19.555120\n\nLeyton\nHigaching\nHtrotford\nLeabennedanonseonupody\nWith\nCacford\nTrickerney\nAberse\nTraletonser\nAlton\n\n\nIteration: 84000, Loss: 19.839611\n\nMewpridge\nGodbamowe\nHystomerpoot\nMacbersea\nWowpeokington\nBaacounbemort\nWoterburnwestrarford\nAberpe\nWigham\nAlnspackreyametle\n\n\nIteration: 86000, Loss: 19.767074\n\nMittridge\nHilb\nHurton\nMedalmincburnerhastertalloffrorkincheton\nWostbr\nBeadford\nWirham\nAbistol\nTodfingdunins\nAlstinburton\n\n\nIteration: 
88000, Loss: 19.944323\n\nMiton\nHilccesten\nKvister\nNecblish\nWotter\nCacclfiad\nWorehamelungruckenipsanomurh\nBalston\nWill\nAlstok\n\n\nIteration: 90000, Loss: 19.855567\n\nMewstonbhingtleigdon\nHelcampton\nHurton\nMacalmora\nWoster\nCacclila\nStogily\nAbistfe\nSrembridonkryckemcnick\nAlrmoidgsrumbeth\n\n\nIteration: 92000, Loss: 19.733712\n\nHewston\nEple\nFringham\nHam\nWorten\nBabershal\nWloiney\nAbesseg\nWield\nAinsim\n\n\nIteration: 94000, Loss: 19.931160\n\nMittrich\nFiecberkil\nHurwich\nMacalrte\nWlrich\nBabesre\nWinchark\nAbestea\nWies\nAlnon\n\n\nIteration: 96000, Loss: 19.599791\n\nMautns\nHelcamswah\nHurton\nMaf\nWotol\nCaalough\nWordgetoltinstarlell\nBalton\nWill\nAgmophamelfforthampe\n\n\nIteration: 98000, Loss: 19.484045\n\nGluton\nChicchord\nCrismbrarl\nGad\nWirlbomfort\nBabeson\nSurchestet\nAbletfe\nPich\nAipsefarootelishalvellfiarnencalletile\n\n\nIteration: 100000, Loss: 19.490945\n\nMitonidinincuminsbursham\nEtefanewad\nGurss\nMidalmon\nWouth\nBachipgden\nWotforough\nAbespol\nWolpningtminn\nActrecckorton\n\n\nIteration: 102000, Loss: 19.401346\n\nNewtorbury\nLidd\nLyrton\nNerbetolbelrischevilewallol\nWoutford\nCaaboshallhere\nWorford\nAblisfe\nWlescliftonis\nAgnonborthsealune\n\n\nIteration: 104000, Loss: 19.487333\n\nOltrosford\nLilcampton\nLyrtonbrooknersoburshop\nOld\nWlotd\nCaalhal\nWlmarell\nBaghom\nWilleymavermodine\nAlswicglonham\n\n\nIteration: 106000, Loss: 19.437009\n\nElverwaldon\nChef\nCrister\nEld\nWrutford\nBachlreadis\nRoud\nAbisle\nPedillinghbrock\nAfmood\n\n\nIteration: 108000, Loss: 20.331300\n\nGlyotol\nBreabaster\nCrovon\nGad\nWirkareeshwingtan\nAgampton\nWinchcam\nAbisten\nWiek\nAlnidd\n\n\nIteration: 110000, Loss: 20.629682\n\nLitlutbury\nElcbedford\nFurton\nLeabloun\nWurtford\nBabesshadmorteg\nSurd\nAbiston\nStenburgt\nAkste\n\n\nIteration: 112000, Loss: 20.352637\n\nKitstoek\nEld\nFurton\nKedagloncamley\nWoutham\nBabendea\nTutbury\nAberthackinster\nStcherley\nAhuriaffrowel\n\n\nIteration: 114000, Loss: 20.219901\n\nHgrrough\nChecalough\nCrowtham\nHalberopcanthord\nWottford\nBabeston\nStokingh\nAblotgcamslook\nRock\nAgowel\n\n\n" ] ], [ [ "## Conclusion\n\nYou can see that your algorithm has started to generate plausible dinosaur names towards the end of the training. At first, it was generating random characters, but towards the end you could see dinosaur names with cool endings. Feel free to run the algorithm even longer and play with hyperparameters to see if you can get even better results. Our implemetation generated some really cool names like `maconucon`, `marloralus` and `macingsersaurus`. Your model hopefully also learned that dinosaur names tend to end in `saurus`, `don`, `aura`, `tor`, etc.\n\nIf your model generates some non-cool names, don't blame the model entirely--not all actual dinosaur names sound cool. (For example, `dromaeosauroides` is an actual dinosaur name and is in the training set.) But this model should give you a set of candidates from which you can pick the coolest! \n\nThis assignment had used a relatively small dataset, so that you could train an RNN quickly on a CPU. Training a model of the english language requires a much bigger dataset, and usually needs much more computation, and could run for many hours on GPUs. 
We ran our dinosaur name generator for quite some time, and so far our favorite name is the great, undefeatable, and fierce: Mangosaurus!\n\n<img src=\"images/mangosaurus.jpeg\" style=\"width:250;height:300px;\">", "_____no_output_____" ], [ "## 4 - Writing like Shakespeare\n\nThe rest of this notebook is optional and is not graded, but we hope you'll do it anyway since it's quite fun and informative. \n\nA similar (but more complicated) task is to generate Shakespeare poems. Instead of learning from a dataset of dinosaur names you can use a collection of Shakespearian poems. Using LSTM cells, you can learn longer-term dependencies that span many characters in the text--e.g., where a character appearing somewhere in a sequence can influence what should be a different character much later in the sequence. These long-term dependencies were less important with dinosaur names, since the names were quite short. \n\n\n<img src=\"images/shakespeare.jpg\" style=\"width:500;height:400px;\">\n<caption><center> Let's become poets! </center></caption>\n\nWe have implemented a Shakespeare poem generator with Keras. Run the following cell to load the required packages and models. This may take a few minutes. ", "_____no_output_____" ] ], [ [ "from __future__ import print_function\nfrom keras.callbacks import LambdaCallback\nfrom keras.models import Model, load_model, Sequential\nfrom keras.layers import Dense, Activation, Dropout, Input, Masking\nfrom keras.layers import LSTM\nfrom keras.utils.data_utils import get_file\nfrom keras.preprocessing.sequence import pad_sequences\nfrom shakespeare_utils import *\nimport sys\nimport io", "_____no_output_____" ] ], [ [ "To save you some time, we have already trained a model for ~1000 epochs on a collection of Shakespearian poems called [*\"The Sonnets\"*](shakespeare.txt). ", "_____no_output_____" ], [ "Let's train the model for one more epoch. When it finishes training for an epoch---this will also take a few minutes---you can run `generate_output`, which will prompt you for an input (`<`40 characters). The poem will start with your sentence, and our RNN-Shakespeare will complete the rest of the poem for you! For example, try \"Forsooth this maketh no sense \" (don't enter the quotation marks). Depending on whether you include the space at the end, your results might also differ--try it both ways, and try other inputs as well. \n", "_____no_output_____" ] ], [ [ "print_callback = LambdaCallback(on_epoch_end=on_epoch_end)\n\nmodel.fit(x, y, batch_size=128, epochs=1, callbacks=[print_callback])", "Epoch 1/1\n31412/31412 [==============================] - 206s - loss: 2.5631    \n" ], [ "# Run this cell to try with different inputs without having to re-train the model \ngenerate_output()", "Write the beginning of your poem, the Shakespeare machine will complete it. Your input is: What a day!\n\n\nHere is your poem: \n\nWhat a day!\nthat lif how ir shi the naal be have not,\nsered but as hise wething my meauty grace.\nthen you wey dae libe me of infeer,\nthe earting that with thing lime thise shourl's stound be goaned.\n.\n\n\nwo true ther pas osed rute as ismebs resed\nthing wate our frull swilt, the leatere i srand be see,\nme you hor julkts date as head fart ever.\nhack she morper reenson doth fow other?\nhow, his moen ound make faw" ] ], [ [ "The RNN-Shakespeare model is very similar to the one you have built for dinosaur names. 
The only major differences are:\n- LSTMs instead of the basic RNN to capture longer-range dependencies\n- The model is a deeper, stacked LSTM model (2-layer)\n- Using Keras instead of raw Python to simplify the code \n\nIf you want to learn more, you can also check out the Keras Team's text generation implementation on GitHub: https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py.\n\nCongratulations on finishing this notebook! ", "_____no_output_____" ], [ "**References**:\n- This exercise took inspiration from Andrej Karpathy's implementation: https://gist.github.com/karpathy/d4dee566867f8291f086. To learn more about text generation, also check out Karpathy's [blog post](http://karpathy.github.io/2015/05/21/rnn-effectiveness/).\n- For the Shakespearian poem generator, our implementation was based on the Keras team's LSTM text generator: https://github.com/keras-team/keras/blob/master/examples/lstm_text_generation.py ", "_____no_output_____" ] ] ]
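To make those bullet points concrete, here is a minimal sketch of a 2-layer stacked character-level LSTM of the kind described above. It is an illustration only: `Tx` (the input window length) and `vocab_size` are assumed placeholder values, not the actual configuration hidden inside `shakespeare_utils`.

```python
# Minimal sketch of the 2-layer stacked character-level LSTM described above.
# `Tx` (window length) and `vocab_size` are assumed placeholder values, not
# the actual configuration inside shakespeare_utils.
from keras.models import Sequential
from keras.layers import LSTM, Dense

Tx, vocab_size = 40, 38  # assumptions for illustration

model = Sequential([
    # First LSTM returns the full sequence so it can feed the second LSTM
    LSTM(128, input_shape=(Tx, vocab_size), return_sequences=True),
    # Second LSTM returns only its final hidden state
    LSTM(128),
    # Softmax over the character vocabulary predicts the next character
    Dense(vocab_size, activation='softmax'),
])
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.summary()
```

Each training example is a `Tx`-character one-hot window, and the softmax predicts the following character, which mirrors the next-character setup the notebook samples from.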
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ] ]
cbf4573b5fc95be501e6e146cd34b1e85d2907d6
23,536
ipynb
Jupyter Notebook
.ipynb_checkpoints/speed_sigmoid-5-narrative-week-of-7.1-checkpoint.ipynb
annierak/odor_tracking_sim
4600a7be942666c3a5a0f366dab6d14838f332a0
[ "MIT" ]
null
null
null
.ipynb_checkpoints/speed_sigmoid-5-narrative-week-of-7.1-checkpoint.ipynb
annierak/odor_tracking_sim
4600a7be942666c3a5a0f366dab6d14838f332a0
[ "MIT" ]
null
null
null
.ipynb_checkpoints/speed_sigmoid-5-narrative-week-of-7.1-checkpoint.ipynb
annierak/odor_tracking_sim
4600a7be942666c3a5a0f366dab6d14838f332a0
[ "MIT" ]
null
null
null
35.128358
290
0.529147
[ [ [ "import numpy as np\nimport matplotlib.pyplot as plt\nimport warnings\nwarnings.filterwarnings('ignore')\nfrom ipywidgets import interactive,FloatSlider\nimport matplotlib", "_____no_output_____" ], [ "def sigmoid(x,x_0,L,y_0,k):\n return (x-x_0)+(L/2)+y_0 - L/(np.exp(-k*(x-x_0))+1)\n\n\ndef speed_sigmoid_func(x,x_0a,x_0b,L,k,y_0):\n output = np.zeros_like(x)\n output[(x>=x_0a)&(x<=x_0b)] = y_0\n output[x<x_0a] = sigmoid(x[x<x_0a],x_0a,L,y_0,k)\n output[x>x_0b] = sigmoid(x[x>x_0b],x_0b,L,y_0,k)\n return output\n", "_____no_output_____" ], [ "def f(x_0a,x_0b,L,k,y_0):\n yl,yu = -4,8\n inputs = np.linspace(-5,8,100)\n fig = plt.figure(figsize=(12,8))\n plt.plot(inputs,speed_sigmoid_func(inputs,x_0a,x_0b,L,k,y_0))\n plt.ylim([yl,yu])\n \ndef slider(start,stop,step,init):#,init):\n return FloatSlider(\n value=init,\n min=start,\n max=stop,\n step=step,\n disabled=False,\n continuous_update=False,\n orientation='horizontal',\n readout=True,\n readout_format='.2f',\n)\n \ninteractive_plot = interactive(f, x_0a =slider(-2,6,0.1,-0.4),x_0b =slider(-2,6,0.1,3.6),\n L=slider(0,4,0.1,0.8),k=slider(.1,10,0.01,4.),y_0=slider(0,4,0.1,1.6))\noutput = interactive_plot.children[-1]\noutput.layout.height = '1000px'\ninteractive_plot\n", "_____no_output_____" ] ], [ [ "**Will's explanation of the perfect PID controller for windspeed/groundspeed.**", "_____no_output_____" ], [ "Start with the force equation:\n$$m\\dot{v} = -c(v+w)$$", "_____no_output_____" ], [ "where $v$ is the fly's groundspeed, and $w$, the wind speed (along the fly body axis) is positive when the wind is blowing against the fly's direct.\n\nBefore we consider the force the fly exerts, the force the fly experiences (right side) is a constant $c$ times the sum of the groundspeed and the windspeed.\n\nFor example in the case where the groundspeed $v=1$ and the windspeed is $-1$ (wind going with the fly), the force is 0. If $v=1$ and $w=1$ (wind going against the fly), the fly is experiencing a force of 2.\n\nThen add in the force (thrust) the fly can produce:", "_____no_output_____" ], [ "$$m\\dot{v} = -c(v+w) + F(v,v_{sp})$$", "_____no_output_____" ], [ "$F$ is some function, $v$ is the current fly speed, $v_{sp}$ is the set point velocity the fly wants.", "_____no_output_____" ], [ "If we set the acceleration in the above to 0, \n\n$$c(v+w) = F(v,v_{sp})$$\n$$ v = \\frac{F}{c} - w $$", "_____no_output_____" ], [ "If we plot groundspeed as a function of windspeed, the system described above will look like this:", "_____no_output_____" ], [ "<img src=\"files/simple_airspeed_controller.JPG\" width=400px></center>", "_____no_output_____" ], [ "there are range of wind values for which the fly's thrust can completely compensate for the wind and achieve equilibrium $\\dot{v} = 0$.\n\n$w_1$ is the maximum postive (into the fly) wind velocity for which the fly can produce a fully compensating counter-force (call this $F_{max}$) into the wind. After this point, the sum of forces becomes negative and so then does $\\dot{v}$. 
(Why is it linear with respect to $w$? Once the thrust is pinned at $F_{max}$, the speed settles at the steady state of $m\dot{v} = -c(v+w) + F_{max}$, which is $v = F_{max}/c - w$: a line of slope $-1$ in $w$.)\n\nAs we head towards $w_2$, the thrust decreases and could become negative, i.e., the fly is applying force backwards to stop itself from being pushed forwards (negative $w$) by the wind.\n\nAt $w_2$, we have the largest backward force the fly can produce in the face of a negative wind (wind going in the direction of the fly), after which point the fly starts getting pushed forward.", "_____no_output_____" ] ], [ [ "def pre_sigmoid(x,x_0,L,y_0,k):\n return (L/2)+y_0 - L/(np.exp(-k*(x-x_0))+1)\n\n\ndef sigmoid(x,x_0,L,y_0,k,m):\n return m*(x-x_0)+(L/2)+y_0 - L/(np.exp(-k*(x-x_0))+1)\n\n\ndef speed_sigmoid_func(x,x_0a,x_0b,L,k,y_0,m):\n output = np.zeros_like(x)\n output[(x>=x_0a)&(x<=x_0b)] = y_0\n output[x<x_0a] = sigmoid(x[x<x_0a],x_0a,L,y_0,k,m)\n output[x>x_0b] = sigmoid(x[x>x_0b],x_0b,L,y_0,k,m)\n return output\n\ndef f(x_0a,x_0b,L,k,y_0,m):\n yl,yu = -4,8\n inputs = np.linspace(-5,8,100)\n fig = plt.figure(figsize=(12,8))\n plt.subplot(2,1,1)\n plt.plot(inputs,pre_sigmoid(inputs,x_0a,L,y_0,k))\n plt.plot(inputs,y_0*np.ones_like(inputs),'--')\n plt.ylim([yl,yu])\n ax= plt.subplot(2,1,2)\n plt.plot(inputs,sigmoid(inputs,x_0a,L,y_0,k,m))\n plt.plot(inputs,sigmoid(inputs,x_0b,L,y_0,k,m))\n plt.plot(inputs,speed_sigmoid_func(inputs,x_0a,x_0b,L,k,y_0,m),label='final curve',color='blue')\n plt.plot(inputs,m*(inputs-x_0a)+(L/2)+y_0,'--')\n plt.plot(inputs,m*(inputs-x_0b)-(L/2)+y_0,'--')\n \n ax.spines['left'].set_position('center')\n ax.spines['bottom'].set_position('center')\n\n # Eliminate upper and right axes\n ax.spines['right'].set_color('none')\n ax.spines['top'].set_color('none')\n\n # Show ticks in the left and lower axes only\n ax.xaxis.set_ticks_position('bottom')\n ax.yaxis.set_ticks_position('left')\n \n ax.spines['bottom'].set_position('zero')\n ax.spines['left'].set_position('zero')\n\n\n \n plt.ylim([yl,yu])\n plt.legend()\n \ndef slider(start,stop,step,init):#,init):\n return FloatSlider(\n value=init,\n min=start,\n max=stop,\n step=step,\n disabled=False,\n continuous_update=False,\n orientation='horizontal',\n readout=True,\n readout_format='.2f',\n)\n \ninteractive_plot = interactive(f, x_0a =slider(-2,6,0.01,-0.4),x_0b =slider(-2,6,0.01,3.6),\n L=slider(0,4,0.1,0.8),k=slider(.1,10,0.01,4.),y_0=slider(0,4,0.1,1.6),\n m=slider(0,4,0.1,1.))\noutput = interactive_plot.children[-1]\noutput.layout.height = '600px'\ninteractive_plot\n", "_____no_output_____" ], [ "\ndef f(x_0a,x_0b,L,k,y_0,m,theta):\n yl,yu = -4,8\n inputs = np.linspace(-5,8,100)\n fig = plt.figure(figsize=(12,8))\n ax= plt.subplot(1,3,1)\n ax.set_aspect('equal')\n plt.plot(inputs,pre_sigmoid(inputs,x_0a,L,y_0,k))\n plt.plot(inputs,y_0*np.ones_like(inputs),'--')\n plt.ylim([yl,yu])\n ax = plt.subplot(1,3,2)\n ax.set_aspect('equal')\n plt.plot(inputs,sigmoid(inputs,x_0a,L,y_0,k,m))\n plt.plot(inputs,sigmoid(inputs,x_0b,L,y_0,k,m))\n outputs = speed_sigmoid_func(inputs,x_0a,x_0b,L,k,y_0,m)\n plt.plot(inputs,m*(inputs-x_0a)+(L/2)+y_0,'--')\n plt.plot(inputs,m*(inputs-x_0b)-(L/2)+y_0,'--')\n plt.plot(inputs,outputs,label='final curve',color='blue')\n plt.ylim([yl,yu])\n xlim = ax.get_xlim()\n plt.legend()\n \n rot_mat = np.array([[np.cos(theta),-1.*np.sin(theta)],[np.sin(theta),np.cos(theta)]])\n rotation_origin = np.array([x_0a+(x_0b-x_0a)/2,y_0])\n plt.plot(rotation_origin[0],rotation_origin[1],'o',color='r')\n rotation_origin_ones = np.repeat(rotation_origin[:,None],100,axis=1)\n inputs1,outputs1 = np.dot(rot_mat,np.vstack((inputs,outputs))-rotation_origin_ones)+rotation_origin_ones\n ax 
= plt.subplot(1,3,3)\n ax.set_aspect('equal')\n plt.plot(inputs,outputs,color='blue')\n plt.plot(inputs1,outputs1,label='rotated curve')\n plt.ylim([yl,yu])\n plt.xlim(xlim)\n plt.legend()\n \n\ninteractive_plot = interactive(f, x_0a =slider(-2,6,0.01,-0.4),x_0b =slider(-2,6,0.01,3.6),\n L=slider(0,4,0.1,0.8),k=slider(.1,10,0.01,4.),y_0=slider(0,4,0.1,1.6),\n m=slider(0,4,0.1,1.),theta=slider(0,np.pi/2,0.1,np.pi/6))\n \n \noutput = interactive_plot.children[-1]\noutput.layout.height = '300px'\ninteractive_plot\n", "_____no_output_____" ], [ "#Concept-proofing the rotation input-output\ndef find_nearest(array, value):\n #For each element in value, returns the index of array it is closest to.\n #array should be 1 x n and value should be m x 1\n idx = (np.abs(array - value)).argmin(axis=1) #this rounds up and down (\n #of the two values in array closest to value, picks the closer. (not the larger or the smaller)\n return idx\n\n\ndef f(x_0a,x_0b,L,k,y_0,m,theta):\n yl,yu = -4,8\n buffer = 10\n num_points = 1000\n inputs = np.linspace(yl-buffer,yu+buffer,num_points)\n outputs = speed_sigmoid_func(inputs,x_0a,x_0b,L,k,y_0,m)\n fig = plt.figure(figsize=(12,8))\n rot_mat = np.array([[np.cos(theta),-1.*np.sin(theta)],[np.sin(theta),np.cos(theta)]])\n rotation_origin = np.array([x_0a+(x_0b-x_0a)/2,y_0])\n plt.plot(rotation_origin[0],rotation_origin[1],'o',color='r')\n rotation_origin_ones = np.repeat(rotation_origin[:,None],num_points,axis=1)\n inputs1,outputs1 = np.dot(rot_mat,np.vstack((inputs,outputs))-rotation_origin_ones)+rotation_origin_ones\n ax = plt.subplot()\n ax.set_aspect('equal')\n plt.plot(inputs,outputs,color='blue')\n plt.plot(inputs1,outputs1,label='rotated curve')\n \n which_inputs = find_nearest(inputs1,inputs[:,None])\n \n plt.plot(inputs,outputs1[which_inputs],'o',color='orange')\n \n plt.ylim([yl,yu])\n# plt.xlim(xlim)\n plt.legend()\n \n\ninteractive_plot = interactive(f, x_0a =slider(-2,6,0.1,-0.4),x_0b =slider(-2,6,0.1,3.6),\n L=slider(0,4,0.1,0.8),k=slider(.1,10,0.01,4.),y_0=slider(0,4,0.1,1.6),\n m=slider(0,4,0.1,1.),theta=slider(0,np.pi/2,0.1,np.pi/6))\n \n \noutput = interactive_plot.children[-1]\noutput.layout.height = '300px'\ninteractive_plot", "_____no_output_____" ] ], [ [ "It follows then that we can define the rotated function as", "_____no_output_____" ] ], [ [ "def f_rotated(inputs,x_0a,x_0b,L,k,y_0,m,theta):\n yl,yu = -4,8\n buffer = 10\n num_points = len(inputs)\n outputs = speed_sigmoid_func(inputs,x_0a,x_0b,L,k,y_0,m)\n rot_mat = np.array([[np.cos(theta),-1.*np.sin(theta)],[np.sin(theta),np.cos(theta)]])\n rotation_origin = np.array([x_0a+(x_0b-x_0a)/2,y_0])\n rotation_origin_ones = np.repeat(rotation_origin[:,None],num_points,axis=1)\n inputs1,outputs1 = np.dot(rot_mat,np.vstack((inputs,outputs))-rotation_origin_ones)+rotation_origin_ones \n which_inputs = find_nearest(inputs1,inputs[:,None])\n return outputs1[which_inputs]\n\n", "_____no_output_____" ], [ "def plot_f_rotated(x_0a,x_0b,L,k,y_0,m,theta):\n yl,yu = -4,8\n buffer = 10\n num_points = 1000\n inputs = np.linspace(yl-buffer,yu+buffer,num_points)\n outputs = f_rotated(inputs,x_0a,x_0b,L,k,y_0,m,theta)\n plt.figure(figsize=(8,8))\n ax = plt.subplot()\n ax.set_aspect('equal')\n plt.plot(inputs,outputs,'o',color='orange')\n plt.xlim([-10,10])\n plt.ylim([-10,10])\n ax.spines['left'].set_position('center')\n ax.spines['bottom'].set_position('center')\n\n # Eliminate upper and right axes\n ax.spines['right'].set_color('none')\n ax.spines['top'].set_color('none')\n\n # Show ticks in the left and lower 
axes only\n ax.xaxis.set_ticks_position('bottom')\n ax.yaxis.set_ticks_position('left')\n\n\n\ninteractive_plot = interactive(plot_f_rotated, x_0a =slider(-2,6,0.1,-0.4),x_0b =slider(-2,6,0.1,3.6),\n L=slider(0,4,0.1,0.8),k=slider(.1,10,0.01,4.),y_0=slider(0,4,0.1,1.6),\n m=slider(0,4,0.1,1.),theta=slider(0,np.pi/2,0.1,np.pi/6))\n \noutput = interactive_plot.children[-1]\noutput.layout.height = '500px'\ninteractive_plot", "_____no_output_____" ] ], [ [ "Now, replicate the above, and add in the un-rotated version with fixed parameters (the working version of the sigmoid function up till this point), and drag to find the parameters that best work for the rotated to match up with it in the left and right limit sections. ", "_____no_output_____" ] ], [ [ "def plot_f_rotated(x_0a,x_0b,L,k,y_0,m,theta):\n yl,yu = -4,8\n buffer = 10\n num_points = 1000\n inputs = np.linspace(yl-buffer,yu+buffer,num_points)\n \n plt.figure(figsize=(8,8))\n ax = plt.subplot()\n ax.set_aspect('equal')\n \n #The updating part of the plot is the (scatter) plot of the rotated function\n outputs = f_rotated(inputs,x_0a,x_0b,L,k,y_0,m,theta)\n plt.plot(inputs,outputs,'o',color='orange')\n \n #The fixed part is the non-rotated plot of the sigmoid with the previously determined parameters\n outputs1 = f_rotated(inputs, \n x_0a = -0.4,\n x_0b= 1.45,\n L=0.8,\n k=4.,\n y_0=1.6,\n m=1.,\n theta=0.)\n \n plt.plot(inputs,outputs1,color='blue')\n \n \n plt.xlim([-10,10])\n plt.ylim([-10,10])\n ax.spines['left'].set_position('center')\n ax.spines['bottom'].set_position('center')\n\n # Eliminate upper and right axes\n ax.spines['right'].set_color('none')\n ax.spines['top'].set_color('none')\n\n # Show ticks in the left and lower axes only\n ax.xaxis.set_ticks_position('bottom')\n ax.yaxis.set_ticks_position('left')\n ax.set_xticks(np.arange(-10,10,1))\n ax.set_yticks(np.arange(-10,10,1))\n\n\n\ninteractive_plot = interactive(plot_f_rotated, x_0a =slider(-2,6,0.01,-0.4),x_0b =slider(-2,6,0.01,1.45),\n L=slider(0,4,0.1,0.8),k=slider(.1,10,0.01,4.),y_0=slider(0,4,0.1,1.6),\n m=slider(0,1,0.01,1.),theta=slider(0,np.pi/4,0.01,np.pi/6))\n \noutput = interactive_plot.children[-1]\noutput.layout.height = '500px'\ninteractive_plot", "_____no_output_____" ] ], [ [ "Final plot to check selected parameter values\nx_0a = -0.4\nx_0b = 1.45\nL = 0.8\nk = 2.4\ny0 = 1.6\nm = 0.43\ntheta = 0.37", "_____no_output_____" ] ], [ [ "def plot_f_rotated(x_0a,x_0b,L,k,y_0,m,theta):\n yl,yu = -4,8\n buffer = 10\n num_points = 1000\n inputs = np.linspace(yl-buffer,yu+buffer,num_points)\n \n plt.figure(figsize=(8,8))\n ax = plt.subplot()\n ax.set_aspect('equal')\n \n #The updating part of the plot is the (scatter) plot of the rotated function\n outputs = f_rotated(inputs,x_0a,x_0b,L,k,y_0,m,theta)\n plt.plot(inputs,outputs,'o',color='orange',label='Leaky Controller \\'A\\'')\n \n #The fixed part is the non-rotated plot of the sigmoid with the previously determined parameters\n outputs1 = f_rotated(inputs, \n x_0a = -0.4,\n x_0b= 1.45,\n L=0.8,\n k=4.,\n y_0=1.6,\n m=1.,\n theta=0.)\n \n plt.plot(inputs,outputs1,color='blue',label='Perfect Controller')\n \n \n plt.xlim([-10,10])\n plt.ylim([-10,10])\n ax.spines['left'].set_position('center')\n ax.spines['bottom'].set_position('center')\n\n # Eliminate upper and right axes\n ax.spines['right'].set_color('none')\n ax.spines['top'].set_color('none')\n\n # Show ticks in the left and lower axes only\n ax.xaxis.set_ticks_position('bottom')\n ax.yaxis.set_ticks_position('left')\n ax.set_xticks(np.arange(-10,10,1))\n 
ax.set_yticks(np.arange(-10,10,1))\n plt.legend()\n\n\n\ninteractive_plot = interactive(plot_f_rotated, x_0a =slider(-2,6,0.01,-0.4),x_0b =slider(-2,6,0.01,1.45),\n L=slider(0,4,0.1,0.8),k=slider(.1,10,0.01,2.4),y_0=slider(0,4,0.1,1.6),\n m=slider(0,1,0.01,0.43),theta=slider(0,np.pi/4,0.01,0.37))\n \noutput = interactive_plot.children[-1]\noutput.layout.height = '500px'\ninteractive_plot", "_____no_output_____" ] ], [ [ "Now let's first use the blue curve as the modified wind-to-groundspeed map in the direct arrival computations, and then try the orange curve second.", "_____no_output_____" ] ] ]
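Tying the interactive curves back to Will's force-balance argument, here is a minimal non-interactive sketch of the saturating controller's steady state; the constants `c`, `F_max`, and `v_sp` are illustrative assumptions, not values fitted anywhere in this notebook.

```python
import numpy as np

# Minimal sketch of the saturating airspeed controller's steady state,
# assuming illustrative constants (not values fitted in this notebook).
c, F_max, v_sp = 1.0, 2.0, 1.6   # drag constant, max thrust, set-point speed

def equilibrium_groundspeed(w):
    # Thrust needed to hold v_sp exactly: F = c * (v_sp + w)
    F_needed = c * (v_sp + w)
    # The fly can only produce thrust in [-F_max, F_max]
    F = np.clip(F_needed, -F_max, F_max)
    # Steady state of m*dv/dt = -c(v + w) + F  =>  v = F/c - w
    return F / c - w

w = np.linspace(-5, 5, 11)
print(np.round(equilibrium_groundspeed(w), 2))
# Flat at v_sp inside the compensable wind band, slope -1 outside it.
```

Inside the band the output sits at `v_sp` (the "perfect controller" plateau); outside it the clipped thrust reproduces the slope of $-1$ discussed above.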
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
cbf45b6a1575bfbf0b25410e534050177833128d
1,008
ipynb
Jupyter Notebook
Dishify/notebooks/ingredient_populater/6_future_ideas.ipynb
NealWhitlock/MyDish-DS
a6d6e87bf7f6d697a145811f65532f83bd742bf9
[ "MIT" ]
1
2020-06-05T14:41:29.000Z
2020-06-05T14:41:29.000Z
Dishify/notebooks/ingredient_populater/6_future_ideas.ipynb
NealWhitlock/MyDish-DS
a6d6e87bf7f6d697a145811f65532f83bd742bf9
[ "MIT" ]
7
2021-03-31T19:48:24.000Z
2021-09-08T02:05:28.000Z
Dishify/notebooks/ingredient_populater/6_future_ideas.ipynb
NealWhitlock/MyDish-DS
a6d6e87bf7f6d697a145811f65532f83bd742bf9
[ "MIT" ]
8
2020-04-08T06:36:29.000Z
2020-06-05T14:34:46.000Z
17.684211
63
0.517857
[ [ [ "## Part 6 of creating auto-populate feature\n\n## Possible ideas to implement in the future", "_____no_output_____" ], [ "### Reorganize the data science team's database on AWS\n\n", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown" ] ]
cbf48e4157e30a1ab9248828c0cefb2806995eea
191,448
ipynb
Jupyter Notebook
1/1.ipynb
jrieke/aand
64731f8401f70783722afabb412fe850fd3ab685
[ "MIT" ]
null
null
null
1/1.ipynb
jrieke/aand
64731f8401f70783722afabb412fe850fd3ab685
[ "MIT" ]
null
null
null
1/1.ipynb
jrieke/aand
64731f8401f70783722afabb412fe850fd3ab685
[ "MIT" ]
null
null
null
898.816901
171,204
0.944424
[ [ [ "# Estimating Firing Rates", "_____no_output_____" ] ], [ [ "from __future__ import division, print_function\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline", "_____no_output_____" ], [ "spike_times = np.loadtxt('ExampleSpikeTimes1.dat')\nspike_times[:10]", "_____no_output_____" ] ], [ [ "#### 1. Construct spike histograms", "_____no_output_____" ] ], [ [ "dts = [30, 90, 200]\n\nfig, axes = plt.subplots(1, 3, figsize=(15, 5))\n\nfor ax, dt in zip(axes, dts):\n plt.sca(ax)\n plt.hist(spike_times, np.arange(0, 4001, dt))\n plt.xlabel('t / ms')\n plt.ylabel('Spike Count')\n plt.title('$\\Delta t$ = {} ms'.format(dt))", "_____no_output_____" ] ], [ [ "The firing rates are the spike counts (vertical axis) divided by $\\Delta t$.", "_____no_output_____" ], [ "#### 2. Response function", "_____no_output_____" ] ], [ [ "sliding_window_function = lambda tau, dt: 1 / dt if tau >= -dt/2 and tau < dt/2 else 0\ngaussian_window_function = lambda tau, sigma_w: 1 / (np.sqrt(2 * np.pi) * sigma_w) * np.exp(- tau**2 / (2 * sigma_w**2))\nalpha_function = lambda tau, inverse_alpha: max(0, (1 / inverse_alpha)**2 * tau * np.exp(- 1 / inverse_alpha * tau))", "_____no_output_____" ], [ "dts = [30, 90, 200]\nresponse_functions = [sliding_window_function, gaussian_window_function, alpha_function]\nresponse_function_names = ['Sliding Window Function', 'Gaussian Window Function', 'Alpha Function']\n\nfig, axes = plt.subplots(3, 3, figsize=(15, 15))\n\nfor i, (horizontal_axes, response_function, response_function_name) in enumerate(zip(axes, response_functions, response_function_names)):\n for j, (ax, dt) in enumerate(zip(horizontal_axes, dts)):\n \n response = np.zeros(4001)\n for t in range(len(response)):\n # Compute the response function for time t by summing over all spike times.\n for spike_time in spike_times:\n response[t] += response_function(t - spike_time, dt)\n \n plt.sca(ax)\n plt.plot(response)\n \n if i == 0:\n plt.title('$\\Delta t$ = {} ms'.format(dt))\n \n if i == len(horizontal_axes) - 1:\n plt.xlabel('t / ms')\n \n if j == 0:\n plt.ylabel(response_function_name)", "_____no_output_____" ] ], [ [ "#### 3. Calculate the spike count rate", "_____no_output_____" ] ], [ [ "r = len(spike_times) / 4000 # in 1 / ms\nr", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
cbf4a983b2a970bf7db43370b92a8b0a9abbd1c9
7,294
ipynb
Jupyter Notebook
python-base/jupyter/quantitative-investment/quant-2-11.ipynb
lovelifeming/AI-Studies-Road
d92e234211f89cc92c74dd49e9e5b9394b7fa4ed
[ "Apache-2.0" ]
null
null
null
python-base/jupyter/quantitative-investment/quant-2-11.ipynb
lovelifeming/AI-Studies-Road
d92e234211f89cc92c74dd49e9e5b9394b7fa4ed
[ "Apache-2.0" ]
null
null
null
python-base/jupyter/quantitative-investment/quant-2-11.ipynb
lovelifeming/AI-Studies-Road
d92e234211f89cc92c74dd49e9e5b9394b7fa4ed
[ "Apache-2.0" ]
null
null
null
31.304721
130
0.510282
[ [ [ "import pandas as pd\nimport matplotlib.pyplot as plt\nimport mplfinance as mpf\nimport numpy as np\n#分析上证指数 数据", "_____no_output_____" ], [ "# df=pd.read_csv('D:/Program Files/收藏/笔记/量化投资/上证指数000001.csv',encoding='gbk',parse_dates=['candle_end_time'])\ndf=pd.read_csv('E:/Personal/中证白酒指数399997.csv',encoding='gbk',parse_dates=['日期'])\ndf.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 2864 entries, 0 to 2863\nData columns (total 12 columns):\n日期 2864 non-null datetime64[ns]\n股票代码 2864 non-null object\n名称 2864 non-null object\n收盘价 2864 non-null float64\n最高价 2864 non-null float64\n最低价 2864 non-null float64\n开盘价 2864 non-null float64\n前收盘 2864 non-null object\n涨跌额 2864 non-null object\n涨跌幅 2864 non-null object\n成交量 2864 non-null int64\n成交金额 2864 non-null float64\ndtypes: datetime64[ns](1), float64(5), int64(1), object(5)\nmemory usage: 268.6+ KB\n" ], [ "df['ndate']=pd.to_datetime(df.date)\ndf['month']=df.ndate.apply(lambda x:x.month)\ndf.head()\n# 日期\t股票代码\t名称\t收盘价\t最高价\t最低价\t开盘价\t前收盘\t涨跌额\t涨跌幅\t换手率\t成交量\t成交金额\t总市值\t流通市值", "_____no_output_____" ], [ "# df_1=df[df.month ==6][df.ndate>'2019-01-01']\ndf_1=df\nfig,axl=plt.subplots(figsize=(20,20))\nmpf.candlestick2_ohlc(axl,df_1.openingPrice.values,df_1.topPrice.values,df_1.bottomPrice.values,df_1.closingPrice.values,\n width=1,colorup='b',colordown='r')\n#显示均线\ndf['ma5']=df.openingPrice.rolling(window=5).mean()\ndf['ma30']=df.openingPrice.rolling(window=30).mean()\nl=[i for i in range(df_1.date.count())]\n", "_____no_output_____" ], [ "plt.plot(df.date.values,df.closingPrice.values,color='red',linewidth=2.0,linestyle='--')\nplt.plot(df.date.values,df.openingPrice.values,color='blue',linewidth=3.0,linestyle='-.')\nplt.show()", "_____no_output_____" ], [ "da=np.asarray(df)\nda", "_____no_output_____" ], [ "#统计沪深300每个工作日的涨跌幅\npd.set_option('expand_frame_repr',False)\ndf['涨跌幅']=df['收盘价']/df['收盘价'].shift(1)-1\n\n#计算工作日\ndf['星期']=df['日期'].dt.dayofweek\ndf['星期']+=1\n#统计各个工作日的均值,涨跌幅等特征\nresult=df.groupby('星期')['涨跌幅'].describe()\ntmp1=df.groupby('星期')['涨跌幅'].size()\ntmp2=df[df['涨跌幅']>0].groupby('星期')['涨跌幅'].size()\nresult['胜率']=tmp2/tmp1\nprint(result.T)", "星期 1 2 3 4 5\ncount 559.000000 578.000000 582.000000 576.000000 568.000000\nmean -0.001913 -0.000131 0.000627 -0.001144 -0.000870\nstd 0.018893 0.018982 0.019501 0.019890 0.024695\nmin -0.068989 -0.066143 -0.078916 -0.068841 -0.080367\n25% -0.012628 -0.010876 -0.010444 -0.012559 -0.014502\n50% -0.001761 0.000447 0.000988 -0.001441 -0.002552\n75% 0.008417 0.010457 0.011063 0.009916 0.012291\nmax 0.083527 0.094096 0.106899 0.102037 0.105317\n胜率 0.447227 0.510381 0.529210 0.451389 0.453427\n" ], [ "# 查看在牛熊不同状况下的周内效应\n# 牛市中,周一和周五表现较好,周二和周四表现较差\n# 熊市中,周二和周三表现较好,周一和周四表现较差\n#计算工作日\ndf['星期']=df['日期'].dt.dayofweek\ndf['星期']+=1\n# 插入均线计算以及判断上涨市和下跌市\ndf.reset_index(drop=True,inplace=True)\ndf.loc[(df['收盘价']>df['收盘价'].rolling(20,min_periods=1).mean()),'上涨市_mean']=True\ndf['上涨市_mean'].fillna(value=False,inplace=True)\n# 选择上涨市还是下跌市\ndf=df[df['上涨市_mean']==True]\n\n#统计各个工作日的均值,涨跌幅等特征\nresult=df.groupby('星期')['涨跌幅'].describe()\ntmp1=df.groupby('星期')['涨跌幅'].size()\ntmp2=df[df['涨跌幅']>0].groupby('星期')['涨跌幅'].size()\nresult['胜率']=tmp2/tmp1\nprint(result.T)", "星期 1 2 3 4 5\ncount 217.000000 238.000000 242.000000 237.000000 232.000000\nmean 0.005127 0.004848 0.006889 0.006021 0.011035\nstd 0.018992 0.019524 0.019100 0.020861 0.025411\nmin -0.049751 -0.060953 -0.050163 -0.052100 -0.061338\n25% -0.005965 -0.005468 -0.002871 -0.004682 -0.003910\n50% 0.003578 0.005412 0.005886 0.004694 0.008738\n75% 0.014483 
0.015488 0.015767 0.015543 0.023584\nmax 0.083527 0.094096 0.106899 0.102037 0.105317\n胜率 0.617512 0.634454 0.673554 0.594937 0.676724\n" ] ] ]
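For reuse, the weekday statistics above can be wrapped in one helper. A minimal sketch, assuming a dataframe with the same Chinese column names as the CSV loaded earlier ('日期' = date, '收盘价' = close):

```python
import pandas as pd

# Minimal sketch of the weekday win-rate computation as a reusable function,
# assuming the Chinese column names from the CSV above.
def weekday_stats(df, date_col='日期', close_col='收盘价'):
    df = df.sort_values(date_col).copy()
    df['ret'] = df[close_col].pct_change()          # daily return, close/prev_close - 1
    df['weekday'] = df[date_col].dt.dayofweek + 1   # 1 = Monday ... 5 = Friday
    stats = df.groupby('weekday')['ret'].describe()
    # Win rate: fraction of days per weekday with a positive return
    stats['win_rate'] = df[df['ret'] > 0].groupby('weekday').size() / df.groupby('weekday').size()
    return stats
```

`pct_change()` is equivalent to the `close / close.shift(1) - 1` expression used in the cells above, so the two computations should agree.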
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbf4ab20f286a97fd544c4a3c0ed87a495fcb396
146,457
ipynb
Jupyter Notebook
v0.1.0/notebooks/alpha_cpu/RMSORN_pattern_recognition.ipynb
Saran-nns/PySORN_0.1
f3d684cb92ed22692b8cbf7ad79729bf7ffd20b7
[ "Apache-2.0" ]
9
2019-07-01T17:01:49.000Z
2021-06-22T18:56:29.000Z
v0.1.0/notebooks/alpha_cpu/RMSORN_pattern_recognition.ipynb
Saran-nns/PySORN_0.1
f3d684cb92ed22692b8cbf7ad79729bf7ffd20b7
[ "Apache-2.0" ]
1
2019-09-13T09:45:40.000Z
2019-09-13T11:48:46.000Z
v0.1.0/notebooks/alpha_cpu/RMSORN_pattern_recognition.ipynb
Saran-nns/PySORN_0.1
f3d684cb92ed22692b8cbf7ad79729bf7ffd20b7
[ "Apache-2.0" ]
1
2021-03-13T08:06:10.000Z
2021-03-13T08:06:10.000Z
67.429558
1,422
0.485938
[ [ [ "# REWARD-MODULATED SELF ORGANISING RECURRENT NEURAL NETWORK", "_____no_output_____" ], [ "https://www.frontiersin.org/articles/10.3389/fncom.2015.00036/full", "_____no_output_____" ], [ "### IMPORT REQUIRED LIBRARIES\n\n\n\n", "_____no_output_____" ] ], [ [ "from __future__ import division\nimport numpy as np\nfrom scipy.stats import norm\nimport random\nimport tqdm\nimport pandas as pd\nfrom collections import OrderedDict\nimport matplotlib.pyplot as plt\nimport heapq\nimport pickle\nimport torch as torch\n\nfrom sorn.utils import Initializer\ntorch.manual_seed(1)\nrandom.seed(1)\nnp.random.seed(1)\n", "_____no_output_____" ] ], [ [ "### UTILS", "_____no_output_____" ] ], [ [ "def normalize_weight_matrix(weight_matrix):\n \n # Applied only while initializing the weight. Later Synaptic scalling applied on weight matrices\n \n \"\"\" Normalize the weights in the matrix such that incoming connections to a neuron sum up to 1\n \n Args:\n weight_matrix(array) -- Incoming Weights from W_ee or W_ei or W_ie\n \n Returns:\n weight_matrix(array) -- Normalized weight matrix\"\"\"\n\n normalized_weight_matrix = weight_matrix / np.sum(weight_matrix,axis = 0)\n\n return normalized_weight_matrix", "_____no_output_____" ] ], [ [ "### Implement lambda incoming connections for Excitatory neurons and outgoing connections per Inhibitory neuron", "_____no_output_____" ] ], [ [ "\n\ndef generate_lambd_connections(synaptic_connection,ne,ni, lambd_w,lambd_std):\n \n \n \"\"\"\n Args:\n synaptic_connection - Type of sysnpatic connection (EE,EI or IE)\n ne - Number of excitatory units\n ni - Number of inhibitory units\n lambd_w - Average number of incoming connections\n lambd_std - Standard deviation of average number of connections per neuron\n \n Returns:\n \n connection_weights - Weight matrix\n \n \"\"\"\n \n \n if synaptic_connection == 'EE':\n \n \n \"\"\"Choose random lamda connections per neuron\"\"\"\n\n # Draw normally distribued ne integers with mean lambd_w\n\n lambdas_incoming = norm.ppf(np.random.random(ne), loc=lambd_w, scale=lambd_std).astype(int)\n \n # lambdas_outgoing = norm.ppf(np.random.random(ne), loc=lambd_w, scale=lambd_std).astype(int)\n \n # List of neurons \n\n list_neurons= list(range(ne))\n\n # Connection weights\n\n connection_weights = np.zeros((ne,ne))\n\n # For each lambd value in the above list,\n # generate weights for incoming and outgoing connections\n \n #-------------Gaussian Distribution of weights --------------\n \n # weight_matrix = np.random.randn(Sorn.ne, Sorn.ni) + 2 # Small random values from gaussian distribution\n # Centered around 2 to make all values positive \n \n # ------------Uniform Distribution --------------------------\n global_incoming_weights = np.random.uniform(0.0,0.1,sum(lambdas_incoming))\n \n # Index Counter\n global_incoming_weights_idx = 0\n \n # Choose the neurons in order [0 to 199]\n \n for neuron in list_neurons:\n\n ### Choose ramdom unique (lambdas[neuron]) neurons from list_neurons\n possible_connections = list_neurons.copy()\n \n possible_connections.remove(neuron) # Remove the selected neuron from possible connections i!=j\n \n # Choose random presynaptic neurons\n possible_incoming_connections = random.sample(possible_connections,lambdas_incoming[neuron]) \n\n \n incoming_weights_neuron = global_incoming_weights[global_incoming_weights_idx:global_incoming_weights_idx+lambdas_incoming[neuron]]\n \n # ---------- Update the connection weight matrix ------------\n\n # Update incoming connection weights for selected 'neuron'\n\n for 
incoming_idx,incoming_weight in enumerate(incoming_weights_neuron): \n connection_weights[possible_incoming_connections[incoming_idx]][neuron] = incoming_weight\n \n global_incoming_weights_idx += lambdas_incoming[neuron]\n \n return connection_weights\n \n if synaptic_connection == 'EI':\n \n \"\"\"Choose random lamda connections per neuron\"\"\"\n\n # Draw normally distribued ni integers with mean lambd_w\n lambdas = norm.ppf(np.random.random(ni), loc=lambd_w, scale=lambd_std).astype(int)\n \n # List of neurons \n\n list_neurons= list(range(ni)) # Each i can connect with random ne neurons \n\n # Initializing connection weights variable\n\n connection_weights = np.zeros((ni,ne))\n\n # ------------Uniform Distribution -----------------------------\n global_outgoing_weights = np.random.uniform(0.0,0.1,sum(lambdas))\n \n # Index Counter\n global_outgoing_weights_idx = 0\n \n # Choose the neurons in order [0 to 40]\n\n for neuron in list_neurons:\n\n ### Choose ramdom unique (lambdas[neuron]) neurons from list_neurons\n possible_connections = list(range(ne))\n \n possible_outgoing_connections = random.sample(possible_connections,lambdas[neuron]) # possible_outgoing connections to the neuron\n\n # Update weights\n outgoing_weights = global_outgoing_weights[global_outgoing_weights_idx:global_outgoing_weights_idx+lambdas[neuron]]\n\n # ---------- Update the connection weight matrix ------------\n\n # Update outgoing connections for the neuron\n\n for outgoing_idx,outgoing_weight in enumerate(outgoing_weights): # Update the columns in the connection matrix\n connection_weights[neuron][possible_outgoing_connections[outgoing_idx]] = outgoing_weight\n \n # Update the global weight values index\n global_outgoing_weights_idx += lambdas[neuron]\n \n \n return connection_weights\n \n ", "_____no_output_____" ] ], [ [ "### More Util functions", "_____no_output_____" ] ], [ [ "def get_incoming_connection_dict(weights):\n \n # Get the non-zero entires in columns is the incoming connections for the neurons\n \n # Indices of nonzero entries in the columns\n connection_dict=dict.fromkeys(range(1,len(weights)+1),0)\n \n for i in range(len(weights[0])): # For each neuron\n connection_dict[i] = list(np.nonzero(weights[:,i])[0])\n \n return connection_dict\n ", "_____no_output_____" ], [ "def get_outgoing_connection_dict(weights):\n # Get the non-zero entires in rows is the outgoing connections for the neurons\n \n # Indices of nonzero entries in the rows\n connection_dict=dict.fromkeys(range(1,len(weights)+1),1)\n \n for i in range(len(weights[0])): # For each neuron\n connection_dict[i] = list(np.nonzero(weights[i,:])[0])\n \n return connection_dict", "_____no_output_____" ], [ "def prune_small_weights(weights,cutoff_weight):\n \n \"\"\" Prune the connections with negative connection strength\"\"\"\n weights[weights <= cutoff_weight] = cutoff_weight\n \n return weights\n ", "_____no_output_____" ], [ "def set_max_cutoff_weight(weights, cutoff_weight):\n \n \"\"\" Set cutoff limit for the values in given array\"\"\"\n \n weights[weights > cutoff_weight] = cutoff_weight\n \n return weights", "_____no_output_____" ], [ "def get_unconnected_indexes(wee):\n \n \"\"\"\n Helper function for Structural plasticity to randomly select the unconnected units\n \n Args: \n wee - Weight matrix\n \n Returns:\n list (indices) // indices = (row_idx,col_idx)\"\"\"\n \n\n i,j = np.where(wee <= 0.)\n indices = list(zip(i,j))\n \n self_conn_removed = []\n for i,idxs in enumerate(indices):\n \n if idxs[0] != idxs[1]:\n \n 
self_conn_removed.append(indices[i])\n \n return self_conn_removed", "_____no_output_____" ], [ "def white_gaussian_noise(mu, sigma,t):\n\n \"\"\"Generates white gaussian noise with mean mu, standard deviation sigma and\n the noise length equals t \"\"\"\n \n noise = np.random.normal(mu, sigma, t) \n \n return np.expand_dims(noise,1)\n", "_____no_output_____" ], [ "### SANITY CHECK EACH WEIGHTS\n#### Note this function has no influence in weight matrix, will be deprecated in next version\n\ndef zero_sum_incoming_check(weights):\n \n zero_sum_incomings = np.where(np.sum(weights,axis = 0) == 0.)\n \n if len(zero_sum_incomings[-1]) == 0:\n return weights\n else:\n for zero_sum_incoming in zero_sum_incomings[-1]:\n \n rand_indices = np.random.randint(40,size = 2) # 5 because each excitatory neuron connects with 5 inhibitory neurons \n # given the probability of connections 0.2\n rand_values = np.random.uniform(0.0,0.1,2)\n \n for i,idx in enumerate(rand_indices):\n \n weights[:,zero_sum_incoming][idx] = rand_values[i]\n \n return weights", "_____no_output_____" ] ], [ [ "### SORN ", "_____no_output_____" ] ], [ [ "class Sorn(object):\n \n \"\"\"SORN 1 network model Initialization\"\"\"\n\n def __init__(self):\n pass\n\n \"\"\"Initialize network variables as class variables of SORN\"\"\"\n \n nu = 4 # Number of input units\n ne = 30 # Number of excitatory units\n ni = int(0.2*ne) # Number of inhibitory units in the network\n no = 1\n eta_stdp = 0.004\n eta_inhib = 0.001\n eta_ip = 0.01\n te_max = 1.0 \n ti_max = 0.5\n ti_min = 0.0\n te_min = 0.0\n mu_ip = 0.1\n sigma_ip = 0.0 # Standard deviation, variance == 0 \n \n \n # Initialize weight matrices\n\n def initialize_weight_matrix(self, network_type,synaptic_connection, self_connection, lambd_w): \n\n \n \"\"\"\n Args:\n \n network_type(str) - Spare or Dense\n synaptic_connection(str) - EE,EI,IE: Note that Spare connection is defined only for EE connections\n self_connection(str) - True or False: i-->i ; Network is tested only using j-->i\n lambd_w(int) - Average number of incoming and outgoing connections per neuron\n \n Returns:\n weight_matrix(array) - Array of connection strengths \n \"\"\"\n \n if (network_type == \"Sparse\") and (self_connection == \"False\"):\n\n \"\"\"Generate weight matrix for E-E/ E-I connections with mean lamda incoming and outgiong connections per neuron\"\"\"\n \n weight_matrix = generate_lambd_connections(synaptic_connection,Sorn.ne,Sorn.ni,lambd_w,lambd_std = 1)\n \n # Dense matrix for W_ie\n\n elif (network_type == 'Dense') and (self_connection == 'False'):\n\n # Gaussian distribution of weights\n # weight_matrix = np.random.randn(Sorn.ne, Sorn.ni) + 2 # Small random values from gaussian distribution\n # Centered around 1 \n # weight_matrix.reshape(Sorn.ne, Sorn.ni) \n # weight_matrix *= 0.01 # Setting spectral radius \n \n # Uniform distribution of weights\n weight_matrix = np.random.uniform(0.0,0.1,(Sorn.ne, Sorn.ni))\n weight_matrix.reshape((Sorn.ne,Sorn.ni))\n \n elif (network_type == 'Dense_output') and (self_connection == 'False'):\n\n # Gaussian distribution of weights\n # weight_matrix = np.random.randn(Sorn.ne, Sorn.ni) + 2 # Small random values from gaussian distribution\n # Centered around 1 \n # weight_matrix.reshape(Sorn.ne, Sorn.ni) \n # weight_matrix *= 0.01 # Setting spectral radius \n \n # Uniform distribution of weights\n weight_matrix = np.random.uniform(0.0,0.1,(Sorn.no, Sorn.ne))\n weight_matrix.reshape((Sorn.no,Sorn.ne))\n\n return weight_matrix\n\n def initialize_threshold_matrix(self, 
te_min,te_max, ti_min,ti_max):\n\n # Initialize the threshold for excitatory and inhibitory neurons\n \n \"\"\"Args:\n te_min(float) -- Min threshold value for excitatory units\n ti_min(float) -- Min threshold value for inhibitory units\n te_max(float) -- Max threshold value for excitatory units\n ti_max(float) -- Max threshold value for inhibitory units\n Returns:\n te(vector) -- Threshold values for excitatory units\n ti(vector) -- Threshold values for inhibitory units\"\"\"\n\n te = np.random.uniform(0., te_max, (Sorn.ne, 1))\n ti = np.random.uniform(0., ti_max, (Sorn.ni, 1))\n \n # For patter recognition task: Heavyside step function with fixed threshold\n to = 0.5\n\n return te, ti,to\n\n def initialize_activity_vector(self,ne, ni, no):\n \n # Initialize the activity vectors X and Y for excitatory and inhibitory neurons\n \n \"\"\"Args:\n ne(int) -- Number of excitatory neurons\n ni(int) -- Number of inhibitory neurons\n Returns:\n x(array) -- Array of activity vectors of excitatory population\n y(array) -- Array of activity vectors of inhibitory population\"\"\"\n\n x = np.zeros((ne, 2))\n y = np.zeros((ni, 2))\n o = np.zeros((no, 2))\n\n return x, y, o", "_____no_output_____" ], [ "class Plasticity(Sorn):\n \"\"\"\n Instance of class Sorn. Inherits the variables and functions defined in class Sorn\n Encapsulates all plasticity mechanisms mentioned in the article \"\"\"\n\n # Initialize the global variables for the class //Class attributes\n\n def __init__(self):\n \n super().__init__()\n self.nu = Sorn.nu # Number of input units\n self.ne = Sorn.ne # Number of excitatory units\n self.no = Sorn.no\n self.eta_stdp = Sorn.eta_stdp # STDP plasticity Learning rate constant; SORN1 and SORN2\n self.eta_ip = Sorn.eta_ip # Intrinsic plasticity learning rate constant; SORN1 and SORN2\n self.eta_inhib = Sorn.eta_inhib # Intrinsic plasticity learning rate constant; SORN2 only\n self.h_ip = 2 * Sorn.nu / Sorn.ne # Target firing rate\n self.mu_ip = Sorn.mu_ip # Mean target firing rate \n self.ni = Sorn.ni # Number of inhibitory units in the network\n self.time_steps = Sorn.time_steps # Total time steps of simulation\n self.te_min = Sorn.te_min # Excitatory minimum Threshold\n self.te_max = Sorn.te_max # Excitatory maximum Threshold\n \n def stdp(self, wee, x, mr, cutoff_weights):\n \n \"\"\" Apply STDP rule : Regulates synaptic strength between the pre(Xj) and post(Xi) synaptic neurons\"\"\"\n\n x = np.asarray(x)\n xt_1 = x[:,0]\n xt = x[:,1]\n wee_t = wee.copy()\n \n # STDP applies only on the neurons which are connected.\n \n for i in range(len(wee_t[0])): # Each neuron i, Post-synaptic neuron\n \n for j in range(len(wee_t[0:])): # Incoming connection from jth pre-synaptic neuron to ith neuron\n \n if wee_t[j][i] != 0. 
: # Check connectivity\n \n # Get the change in weight\n delta_wee_t = mr*self.eta_stdp * (xt[i] * xt_1[j] - xt_1[i]*xt[j])\n\n # Update the weight between jth neuron to i \"\"Different from notation in article \n\n wee_t[j][i] = wee[j][i] + delta_wee_t\n \n \"\"\" Prune the smallest weights induced by plasticity mechanisms; Apply lower cutoff weight\"\"\"\n wee_t = prune_small_weights(wee_t,cutoff_weights[0])\n \n \"\"\"Check and set all weights < upper cutoff weight \"\"\"\n wee_t = set_max_cutoff_weight(wee_t,cutoff_weights[1])\n\n return wee_t\n\n def ostdp(self,woe, x, mo):\n \n \"\"\" Apply STDP rule : Regulates synaptic strength between the pre(Xj) and post(Xi) synaptic neurons\"\"\"\n x = np.asarray(x)\n xt_1 = x[:, 0]\n xt = x[:, 1]\n woe_t = woe.copy()\n # STDP applies only on the neurons which are connected.\n for i in range(len(woe_t[0])): # Each neuron i, Post-synaptic neuron\n for j in range(len(woe_t[0:])): # Incoming connection from jth pre-synaptic neuron to ith neuron\n if woe_t[j][i] != 0.: # Check connectivity\n # Get the change in weight\n delta_woe_t = mo*self.eta_stdp * (xt[i] * xt_1[j] - xt_1[i] * xt[j])\n # Update the weight between jth neuron to i \"\"Different from notation in article\n woe_t[j][i] = woe[j][i] + delta_woe_t\n return woe_t\n \n def ip(self, te, x):\n \n # IP rule: Active unit increases its threshold and inactive decreases its threshold.\n\n xt = x[:, 1]\n\n te_update = te + self.eta_ip * (xt.reshape(self.ne, 1) - self.h_ip)\n \n \"\"\" Check whether all te are in range [0.0,1.0] and update acordingly\"\"\"\n \n # Update te < 0.0 ---> 0.0\n # te_update = prune_small_weights(te_update,self.te_min)\n \n # Set all te > 1.0 --> 1.0\n # te_update = set_max_cutoff_weight(te_update,self.te_max)\n\n return te_update\n\n def ss(self, wee_t):\n \n \"\"\"Synaptic Scaling or Synaptic Normalization\"\"\"\n \n wee_t = wee_t / np.sum(wee_t,axis=0)\n\n return wee_t\n\n @staticmethod\n def modulation_factor(reward_history, current_reward ,window_sizes):\n \"\"\" Grid search for Modulation factor. 
Returns the maximum moving average over the history of rewards with the corresponding window\n Args:\n reward_history (list): List with the history of rewards \n window_sizes (list): List of window sizes for the grid search\n Returns:\n (mo, mr, best_reward, best_reward_window): Modulation factors for the output and recurrent weights, the best rolling-average reward, and its window size\n \"\"\"\n \n def running_mean(x, K):\n cumsum = np.cumsum(np.insert(x, 0, 0)) \n return (cumsum[K:] - cumsum[:-K]) / float(K)\n\n reward_avgs = [] # Holds the mean of all rolling averages for each window\n for window_size in window_sizes:\n if window_size<=len(reward_history):\n reward_avgs.append(np.mean(running_mean(reward_history, window_size)))\n \n best_reward= np.max(reward_avgs) \n best_reward_window = window_sizes[np.argmax(reward_avgs)] # argmax over the list of averages (argmax of the scalar best_reward is always 0)\n print(\"current_reward %s | Best rolling average reward %s | Best Rolling average window %s\"%(current_reward, best_reward, best_reward_window ))\n mo = current_reward - best_reward\n mr = mo.copy()\n # TODO: What if mo != mr ?\n return mo, mr, best_reward, best_reward_window\n\n ###########################################################\n\n @staticmethod\n def initialize_plasticity():\n\n \n \"\"\"NOTE: DO NOT TRANSPOSE THE WEIGHT MATRIX WEI FOR SORN 2 MODEL\"\"\"\n\n # Create and initialize sorn object and variables\n\n sorn_init = Sorn()\n WEE_init = sorn_init.initialize_weight_matrix(network_type=\"Sparse\", synaptic_connection='EE',\n self_connection='False',\n lambd_w=20)\n WEI_init = sorn_init.initialize_weight_matrix(network_type=\"Dense\", synaptic_connection='EI',\n self_connection='False',\n lambd_w=100)\n WIE_init = sorn_init.initialize_weight_matrix(network_type=\"Dense\", synaptic_connection='IE',\n self_connection='False',\n lambd_w=100)\n WOE_init = sorn_init.initialize_weight_matrix(network_type=\"Dense_output\", synaptic_connection='OE',\n self_connection='False',\n lambd_w=100)\n\n Wee_init = Initializer.zero_sum_incoming_check(WEE_init)\n Wei_init = Initializer.zero_sum_incoming_check(WEI_init.T) # For SORN 1\n# Wei_init = Initializer.zero_sum_incoming_check(WEI_init)\n Wie_init = Initializer.zero_sum_incoming_check(WIE_init)\n Woe_init = Initializer.zero_sum_incoming_check(WOE_init.T)\n c = np.count_nonzero(Wee_init)\n v = np.count_nonzero(Wei_init)\n b = np.count_nonzero(Wie_init)\n d = np.count_nonzero(Woe_init)\n print('Network Initialized')\n print('Number of connections in Wee %s , Wei %s, Wie %s Woe %s' %(c, v, b, d))\n print('Shapes Wee %s Wei %s Wie %s Woe %s' % (Wee_init.shape, Wei_init.shape, Wie_init.shape, Woe_init.shape))\n\n # Normalize the incoming weights\n\n normalized_wee = Initializer.normalize_weight_matrix(Wee_init)\n normalized_wei = Initializer.normalize_weight_matrix(Wei_init)\n normalized_wie = Initializer.normalize_weight_matrix(Wie_init)\n\n te_init, ti_init, to_init = sorn_init.initialize_threshold_matrix(Sorn.te_min, Sorn.te_max, Sorn.ti_min, Sorn.ti_max)\n x_init, y_init, o_init = sorn_init.initialize_activity_vector(Sorn.ne, Sorn.ni,Sorn.no)\n \n return Wee_init, Wei_init, Wie_init,Woe_init, te_init, ti_init, to_init,x_init, y_init, o_init\n\n @staticmethod\n def reorganize_network():\n pass", "_____no_output_____" ], [ "class MatrixCollection(Sorn):\n def __init__(self,phase, matrices = None):\n super().__init__()\n \n self.phase = phase\n self.matrices = matrices\n if self.phase == 'Plasticity' and self.matrices == None :\n\n self.time_steps = Sorn.time_steps + 1 # Total training steps\n self.Wee, self.Wei, self.Wie,self.Woe, self.Te, self.Ti, self.To, self.X, self.Y, self.O = [0] * self.time_steps, [0] * self.time_steps, \\\n [0] * 
self.time_steps, [0] * self.time_steps, \\\n [0] * self.time_steps, [0] * self.time_steps, \\\n [0] * self.time_steps, [0] * self.time_steps, \\\n [0] * self.time_steps, [0] * self.time_steps\n \n wee, wei, wie, woe, te, ti, to, x, y, o = Plasticity.initialize_plasticity()\n\n # Assign initial matrix to the master matrices\n self.Wee[0] = wee\n self.Wei[0] = wei\n self.Wie[0] = wie\n self.Woe[0] = woe\n self.Te[0] = te\n self.Ti[0] = ti\n self.To[0] = to\n self.X[0] = x\n self.Y[0] = y\n self.O[0] = o\n \n elif self.phase == 'Plasticity' and self.matrices != None:\n \n self.time_steps = Sorn.time_steps + 1 # Total training steps\n self.Wee, self.Wei, self.Wie,self.Woe, self.Te, self.Ti,self.To, self.X, self.Y,self.O = [0] * self.time_steps, [0] * self.time_steps, \\\n [0] * self.time_steps, [0] * self.time_steps, \\\n [0] * self.time_steps, [0] * self.time_steps, \\\n [0] * self.time_steps, [0] * self.time_steps, [0] * self.time_steps, \\\n [0] * self.time_steps\n # Assign matrices from plasticity phase to the new master matrices for training phase\n self.Wee[0] = matrices['Wee']\n self.Wei[0] = matrices['Wei']\n self.Wie[0] = matrices['Wie']\n self.Woe[0] = matrices['Woe']\n self.Te[0] = matrices['Te']\n self.Ti[0] = matrices['Ti']\n self.To[0] = matrices['To']\n self.X[0] = matrices['X']\n self.Y[0] = matrices['Y']\n self.O[0] = matrices['O']\n \n elif self.phase == 'Training':\n\n \"\"\"NOTE:\n time_steps here is diferent for plasticity or trianing phase\"\"\"\n self.time_steps = Sorn.time_steps + 1 # Total training steps\n self.Wee, self.Wei, self.Wie,self.Woe, self.Te, self.Ti,self.To, self.X, self.Y,self.O = [0] * self.time_steps, [0] * self.time_steps, \\\n [0] * self.time_steps, [0] * self.time_steps, \\\n [0] * self.time_steps, [0] * self.time_steps, \\\n [0] * self.time_steps, [0] * self.time_steps, [0] * self.time_steps, \\\n [0] * self.time_steps\n # Assign matrices from plasticity phase to new respective matrices for training phase\n self.Wee[0] = matrices['Wee']\n self.Wei[0] = matrices['Wei']\n self.Wie[0] = matrices['Wie']\n self.Woe[0] = matrices['Woe']\n self.Te[0] = matrices['Te']\n self.Ti[0] = matrices['Ti']\n self.To[0] = matrices['To']\n self.X[0] = matrices['X']\n self.Y[0] = matrices['Y']\n self.O[0] = matrices['O']\n \n # @staticmethod\n def weight_matrix(self, wee, wei, wie, woe, i):\n # Get delta_weight from Plasticity.stdp \n # i - training step\n self.Wee[i + 1] = wee\n self.Wei[i + 1] = wei\n self.Wie[i + 1] = wie\n self.Woe[i + 1] = woe\n return self.Wee, self.Wei, self.Wie, self.Woe\n\n # @staticmethod\n def threshold_matrix(self, te, ti,to, i):\n self.Te[i + 1] = te\n self.Ti[i + 1] = ti\n self.To[i + 1] = to\n return self.Te, self.Ti, self.To\n\n # @staticmethod\n def network_activity_t(self, excitatory_net, inhibitory_net, output_net, i):\n self.X[i + 1] = excitatory_net\n self.Y[i + 1] = inhibitory_net\n self.O[i + 1] = output_net\n return self.X, self.Y, self.O\n\n # @staticmethod\n def network_activity_t_1(self, x, y,o, i):\n x_1, y_1, o_1 = [0] * self.time_steps, [0] * self.time_steps, [0] * self.time_steps\n x_1[i] = x\n y_1[i] = y\n o_1[i] = o\n\n return x_1, y_1, o_1", "_____no_output_____" ], [ "class NetworkState(Plasticity):\n \n \"\"\"The evolution of network states\"\"\"\n\n def __init__(self, v_t):\n super().__init__()\n self.v_t = v_t\n \n def incoming_drive(self,weights,activity_vector):\n \n # Broadcasting weight*acivity vectors \n \n incoming = weights* activity_vector\n incoming = np.array(incoming.sum(axis=0))\n return incoming\n \n 
def excitatory_network_state(self, wee, wei, te, x, y,white_noise_e):\n \n \"\"\" Activity of Excitatory neurons in the network\"\"\"\n xt = x[:, 1] \n xt = xt.reshape(self.ne, 1)\n yt = y[:, 1]\n yt = yt.reshape(self.ni, 1)\n\n incoming_drive_e = np.expand_dims(self.incoming_drive(weights = wee,activity_vector=xt),1)\n incoming_drive_i = np.expand_dims(self.incoming_drive(weights = wei,activity_vector=yt),1)\n \n if self.v_t.shape[0] < self.ne:\n \n inp = [0]*self.ne\n inp[:len(self.v_t)] = self.v_t\n self.v_t = inp.copy()\n \n tot_incoming_drive = incoming_drive_e - incoming_drive_i + white_noise_e + np.expand_dims(np.asarray(self.v_t),1) - te\n \n \"\"\"Heaviside step function\"\"\"\n\n \"\"\"Implement Heaviside step function\"\"\"\n heaviside_step = np.expand_dims([0.] * len(tot_incoming_drive),1)\n heaviside_step[tot_incoming_drive > 0] = 1.\n\n xt_next = np.asarray(heaviside_step.copy())\n\n return xt_next\n\n def inhibitory_network_state(self, wie, ti, x,white_noise_i):\n\n # Activity of inhibitory neurons\n wie = np.asarray(wie)\n xt = x[:, 1]\n xt = xt.reshape(Sorn.ne, 1) \n incoming_drive_e = np.expand_dims(self.incoming_drive(weights = wie, activity_vector=xt),1) \n tot_incoming_drive = incoming_drive_e + white_noise_i - ti\n\n \"\"\"Implement Heaviside step function\"\"\"\n heaviside_step = np.expand_dims([0.] * len(tot_incoming_drive),1)\n heaviside_step[tot_incoming_drive > 0] = 1.\n\n yt_next = np.asarray(heaviside_step.copy()) \n\n return yt_next\n\n \n def recurrent_drive(self, wee, wei, te, x, y,white_noise_e):\n \n \"\"\"Network state due to recurrent drive received by the each unit at time t+1\"\"\"\n \n \n xt = x[:, 1] \n xt = xt.reshape(self.ne, 1)\n yt = y[:, 1]\n yt = yt.reshape(self.ni, 1)\n \n incoming_drive_e = np.expand_dims(self.incoming_drive(weights = wee,activity_vector=xt),1)\n incoming_drive_i = np.expand_dims(self.incoming_drive(weights = wei,activity_vector=yt),1)\n \n tot_incoming_drive = incoming_drive_e - incoming_drive_i + white_noise_e - te\n \n \"\"\"Implement Heaviside step function\"\"\"\n heaviside_step = np.expand_dims([0.] * len(tot_incoming_drive),1)\n heaviside_step[tot_incoming_drive > 0] = 1.\n\n xt_next = np.asarray(heaviside_step.copy())\n\n return xt_next\n \n def output_network_state(self,woe, to, x):\n \"\"\" Output layer states\n Args:\n woe (array): Connection weights between Reurrent network and Output layer\n to (array): Threshold of Ouput layer neurons\n x (array): Excitatory recurrent network states\n \"\"\"\n woe = np.asarray(woe)\n xt = x[:, 1]\n xt = xt.reshape(Sorn.ne, 1)\n \n incoming_drive_o = np.expand_dims(self.incoming_drive(weights=woe, activity_vector=xt), 1)\n tot_incoming_drive = incoming_drive_o - to\n \n # TODO: If output neuron is 1, the use Heavyside step function\n if type(to) == list:\n \n \"\"\"Winner takes all\"\"\"\n ot_next = np.where(tot_incoming_drive == tot_incoming_drive.max(), tot_incoming_drive, 0.)\n return ot_next\n else:\n \"\"\"Implement Heaviside step function\"\"\"\n heaviside_step = np.expand_dims([0.] 
* len(tot_incoming_drive),1)\n heaviside_step[tot_incoming_drive > 0] = 1.\n return heaviside_step\n \n ", "_____no_output_____" ] ], [ [ "### Helper class for training SORN", "_____no_output_____" ], [ "#### Build separate class to feed inputs to SORN with plasticity ON", "_____no_output_____" ] ], [ [ "class SimulateRMSorn(Sorn):\n \"\"\"\n Args:\n inputs - one hot vector of inputs\n Returns:\n matrix_collection - collection of all weight matrices in dictionaries\n \"\"\"\n def __init__(self,phase,matrices,inputs,sequence_length, targets, reward_window_sizes, epochs):\n super().__init__()\n self.time_steps = np.shape(inputs)[0]*sequence_length * epochs\n Sorn.time_steps = np.shape(inputs)[0]*sequence_length* epochs\n# self.inputs = np.asarray(np.tile(inputs,(1,epochs)))\n self.inputs = inputs\n self.phase = phase\n self.matrices = matrices\n self.epochs = epochs\n self.reward_window_sizes = reward_window_sizes\n self.sequence_length = sequence_length\n \n def train_sorn(self): \n # Collect the network activity at all time steps\n X_all = [0]*self.time_steps\n Y_all = [0]*self.time_steps\n R_all = [0]*self.time_steps\n O_all = [0]*self.time_steps\n Rewards,Mo,Mr = [],[],[]\n frac_pos_active_conn = []\n \n \"\"\" DONOT INITIALIZE WEIGHTS\"\"\"\n matrix_collection = MatrixCollection(phase = self.phase, matrices = self.matrices) \n time_steps_counter= 0\n \"\"\" Generate white noise\"\"\"\n white_noise_e = white_gaussian_noise(mu= 0., sigma = 0.04,t = Sorn.ne)\n white_noise_i = white_gaussian_noise(mu= 0., sigma = 0.04,t = Sorn.ni)\n\n # Buffers to get the resulting x, y and o vectors at the current time step and update the master matrix\n x_buffer, y_buffer, o_buffer = np.zeros(( Sorn.ne, 2)), np.zeros((Sorn.ni, 2)), np.zeros(( Sorn.no, 2))\n\n te_buffer, ti_buffer, to_buffer = np.zeros((Sorn.ne, 1)), np.zeros((Sorn.ni, 1)), np.zeros(( Sorn.no, 1))\n\n # Get the matrices and rename them for ease of reading\n Wee, Wei, Wie,Woe = matrix_collection.Wee, matrix_collection.Wei, matrix_collection.Wie, matrix_collection.Woe\n Te, Ti,To = matrix_collection.Te, matrix_collection.Ti,matrix_collection.To\n X, Y, O = matrix_collection.X, matrix_collection.Y, matrix_collection.O\n i = 0 \n for k in tqdm.tqdm(range(self.inputs.shape[0])):\n \n for j in range(self.sequence_length):\n \"\"\" Fraction of active connections between E-E network\"\"\"\n frac_pos_active_conn.append((Wee[i] > 0.0).sum())\n network_state = NetworkState(self.inputs[k][j]) # Feed Input as an argument to the class\n # Recurrent drive,excitatory, inhibitory and output network states \n r = network_state.recurrent_drive(Wee[i], Wei[i], Te[i], X[i], Y[i], white_noise_e = 0.)\n excitatory_state_xt_buffer = network_state.excitatory_network_state(Wee[i], Wei[i], Te[i], X[i], Y[i],white_noise_e = 0.)\n inhibitory_state_yt_buffer = network_state.inhibitory_network_state(Wie[i], Ti[i], X[i],white_noise_i = 0.)\n output_state_ot_buffer = network_state.output_network_state(Woe[i], To[i], X[i])\n \"\"\" Update X and Y \"\"\"\n x_buffer[:, 0] = X[i][:, 1] # xt -->(becomes) xt_1\n x_buffer[:, 1] = excitatory_state_xt_buffer.T # New_activation; x_buffer --> xt\n y_buffer[:, 0] = Y[i][:, 1]\n y_buffer[:, 1] = inhibitory_state_yt_buffer.T\n o_buffer[:, 0] = O[i][:, 1]\n o_buffer[:, 1] = output_state_ot_buffer.T\n\n \"\"\"Plasticity phase\"\"\"\n plasticity = Plasticity()\n # Reward and mo, mr \n current_reward = output_state_ot_buffer*targets[k][j]\n Rewards.extend(current_reward)\n mo, mr, best_reward, best_reward_window = 
plasticity.modulation_factor(Rewards, current_reward, self.reward_window_sizes) \n print('Input %s | Target %s | predicted %s | mr %s, mo %s'%(self.inputs[k].tolist(), targets[k][j],output_state_ot_buffer, mr, mo))\n Mo.append(mo)\n Mr.append(mr)\n \n # STDP, Intrinsic plasticity and Synaptic scaling\n Wee_t = plasticity.stdp(Wee[i],x_buffer,mr, cutoff_weights = (0.0,1.0))\n Woe_t = plasticity.ostdp(Woe[i],x_buffer,mo)\n Te_t = plasticity.ip(Te[i],x_buffer)\n Wee_t = Plasticity().ss(Wee_t)\n Woe_t = Plasticity().ss(Woe_t)\n\n \"\"\"Assign the matrices to the matrix collections\"\"\"\n matrix_collection.weight_matrix(Wee_t, Wei[i], Wie[i],Woe_t, i)\n matrix_collection.threshold_matrix(Te_t, Ti[i],To[i], i)\n matrix_collection.network_activity_t(x_buffer, y_buffer,o_buffer, i)\n\n X_all[i] = x_buffer[:,1]\n Y_all[i] = y_buffer[:,1]\n R_all[i] = r\n O_all[i] = o_buffer[:,1]\n i+=1 \n plastic_matrices = {'Wee':matrix_collection.Wee[-1], \n 'Wei': matrix_collection.Wei[-1], \n 'Wie':matrix_collection.Wie[-1],\n 'Woe':matrix_collection.Woe[-1],\n 'Te': matrix_collection.Te[-1], 'Ti': matrix_collection.Ti[-1],\n 'X': X[-1], 'Y': Y[-1]}\n \n return plastic_matrices,X_all,Y_all,R_all,frac_pos_active_conn", "_____no_output_____" ], [ "\ntraining_sequence = np.repeat(np.array([[[0,0,0,1], [0,0,1,0], [0,1,0,0], [1,0,0,0]],\n [[1,0,0,0], [0,1,0,0], [0,0,1,0], [0,0,0,1]], \n [[1,0,0,0], [0,0,1,0], [0,0,0,1], [0,1,0,0]], \n [[0,0,1,0], [0,0,0,0], [0,1,0,0], [0,0,0,1]]]), \n repeats=1000, axis=0)\n\nsequence_targets = np.repeat(np.array([1,0,0,0]),repeats=1000,axis=0)\n", "_____no_output_____" ], [ "input_str = ['1234','4321', '4213', '2431']\ntraining_input = []\ntargets = []\nfor i in range(100):\n idx = random.randint(0,3)\n inp = input_str[idx]\n if inp == '1234':\n input_seq = [[0,0,0,1], [0,0,1,0], [0,1,0,0], [1,0,0,0]]\n target = [1,1,1,1]\n elif inp == '4321':\n input_seq = [[1,0,0,0], [0,1,0,0], [0,0,1,0], [0,0,0,1]]\n target = [0,0,0,0]\n elif inp == '4213':\n input_seq = [[1,0,0,0], [0,0,1,0], [0,0,0,1], [0,1,0,0]]\n target = [0,0,0,0]\n else:\n input_seq = [[0,0,1,0], [0,0,0,0], [0,1,0,0], [0,0,0,1]]\n target = [0,0,0,0]\n training_input.append(input_seq)\n targets.append(target)\n \nprint(np.asarray(training_input).shape, targets)\n", "(100, 4, 4) [[0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 
1, 1], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]\n" ], [ "train_plast_inp_mat,X_all_inp,Y_all_inp,R_all, frac_pos_active_conn = SimulateRMSorn(phase = 'Plasticity', \n matrices = None,\n inputs = np.asarray(training_input),sequence_length = 4, targets = targets,\n reward_window_sizes = [1,5,10,20],\n epochs = 1).train_sorn() ", "Network Initialized\nNumber of connections in Wee 579 , Wei 180, Wie 180 Woe 30\nShapes Wee (30, 30) Wei (6, 30) Wie (30, 6) Woe (30, 1)\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ] ]
cbf4bf0c522fe526ecc6dc94ffd1c893c97dddbd
151,525
ipynb
Jupyter Notebook
BiLSTM(GloVe)_Sampled.ipynb
GourabGoswami/AIML-Capstone_Project-
67ccf8b90e6dfabc6d706bbcaf7ce4ed5c87176e
[ "MIT" ]
null
null
null
BiLSTM(GloVe)_Sampled.ipynb
GourabGoswami/AIML-Capstone_Project-
67ccf8b90e6dfabc6d706bbcaf7ce4ed5c87176e
[ "MIT" ]
null
null
null
BiLSTM(GloVe)_Sampled.ipynb
GourabGoswami/AIML-Capstone_Project-
67ccf8b90e6dfabc6d706bbcaf7ce4ed5c87176e
[ "MIT" ]
1
2020-11-28T12:37:39.000Z
2020-11-28T12:37:39.000Z
119.593528
69,414
0.788563
[ [ [ "# Mount the drive\n\nfrom google.colab import drive\ndrive.mount('/content/drive')", "Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount(\"/content/drive\", force_remount=True).\n" ], [ "# Import the necessary libraries\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport tensorflow as tf\n\nfrom IPython.display import display\nfrom keras.preprocessing.text import Tokenizer\n\n# Model preprocessing APIs\nfrom sklearn import preprocessing\nfrom sklearn.utils import resample\n\n# Model accuracy plotting APIs\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import accuracy_score \nfrom sklearn.metrics import classification_report\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.metrics import f1_score\nfrom sklearn.metrics import precision_score\nfrom sklearn.metrics import recall_score\nfrom sklearn.metrics import roc_auc_score\n\n# Model building APIs\nfrom tensorflow.keras.layers import Activation\nfrom tensorflow.keras.layers import Bidirectional\nfrom tensorflow.keras.layers import Dropout\nfrom tensorflow.keras.layers import Dense\nfrom tensorflow.keras.layers import Embedding\nfrom tensorflow.keras.layers import Input\nfrom tensorflow.keras.layers import LSTM\n\nfrom tensorflow.keras.models import Model\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.preprocessing.sequence import pad_sequences\nfrom tensorflow.keras.preprocessing.text import Tokenizer\n\nfrom tensorflow.keras.callbacks import ModelCheckpoint\nfrom tensorflow.keras.callbacks import ReduceLROnPlateau\nfrom tensorflow.keras.utils import plot_model\n\nfrom zipfile import ZipFile\n\npd.options.display.max_columns = None\npd.options.display.max_rows = None", "_____no_output_____" ], [ "# Set path variables\n\nproject_path = '/content/drive/My Drive/Colab/'\nfile_name ='TempOutput_1.xlsx'", "_____no_output_____" ], [ "# Import the dataframe\n\nunsampled_df=pd.read_excel(project_path+file_name)\nunsampled_df.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 7909 entries, 0 to 7908\nData columns (total 7 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Unnamed: 0 7909 non-null int64 \n 1 Short description 7909 non-null object\n 2 Description 7906 non-null object\n 3 Assignment group 7909 non-null object\n 4 New Description 7876 non-null object\n 5 Language 7909 non-null object\n 6 Lemmatized clean 7909 non-null object\ndtypes: int64(1), object(6)\nmemory usage: 432.6+ KB\n" ], [ "# Drop columns not needed for Model building\n\nunsampled_df.drop([\"Unnamed: 0\",\"Short description\", \"Description\", \"Language\"],axis=1,inplace=True)", "_____no_output_____" ], [ "unsampled_df.head(5)", "_____no_output_____" ], [ "unsampled_df = unsampled_df.dropna(axis=0)", "_____no_output_____" ], [ "# Non-Grp_0 dataframe\n\nothers_df = unsampled_df[unsampled_df['Assignment group'] != 'GRP_0']", "_____no_output_____" ], [ "# Get the upper/lower limit to resample\n\nmaxOthers = others_df['Assignment group'].value_counts().max()\nmaxOthers", "_____no_output_____" ], [ "# Upsample the minority classes and downsample the majority classes\n\ndf_to_process = unsampled_df[0:0]\nfor grp in unsampled_df['Assignment group'].unique():\n assign_grp_df = unsampled_df[unsampled_df['Assignment group'] == grp]\n resampled = resample(assign_grp_df, replace=True, n_samples=maxOthers, random_state=123)\n df_to_process = df_to_process.append(resampled)", 
"_____no_output_____" ], [ "# Label encode the assignment groups\n\nlabel_encoder = preprocessing.LabelEncoder() \n \ndf_to_process['Assignment group ID']= label_encoder.fit_transform(df_to_process['Assignment group']) \ndf_to_process['Assignment group ID'].unique()", "_____no_output_____" ], [ "# Function to generate word tokens\n\ndef wordTokenizer(dataframe):\n tokenizer = Tokenizer(num_words=numWords,filters='!\"#$%&()*+,-./:;<=>?@[\\\\]^_`{|}~\\t\\n',lower=True,split=' ', char_level=False)\n tokenizer.fit_on_texts(dataframe)\n dataframe = tokenizer.texts_to_sequences(dataframe)\n return tokenizer,dataframe", "_____no_output_____" ], [ "# GloVe the dataframe and store the embedding result\n\nglove_file = project_path + \"glove.6B.zip\"\nprint(glove_file)", "/content/drive/My Drive/Colab/glove.6B.zip\n" ], [ "#Extract Glove embedding zip file\n\nwith ZipFile(glove_file, 'r') as z:\n z.extractall()", "_____no_output_____" ], [ "# Perform GloVe embeddings\n\nEMBEDDING_FILE = './glove.6B.100d.txt'\nembeddings_glove = {}\nfor o in open(EMBEDDING_FILE):\n word = o.split(\" \")[0]\n embd = o.split(\" \")[1:]\n embd = np.asarray(embd, dtype='float32')\n embeddings_glove[word] = embd", "_____no_output_____" ], [ "results = pd.DataFrame()\npredictedResults = pd.DataFrame()", "_____no_output_____" ], [ "max_len = 300\ntokenizer = Tokenizer(split=' ')\ntokenizer.fit_on_texts(df_to_process[\"New Description\"].values)\nX_seq = tokenizer.texts_to_sequences(df_to_process[\"New Description\"].values)\nX_padded = pad_sequences(X_seq, maxlen=max_len)", "_____no_output_____" ], [ "numWords = len(tokenizer.word_index) + 1\nepochs = 10\nbatch_size=100\nnumWords", "_____no_output_____" ], [ "# Try the BiLSTM model on the sampled data and predict the accuracy.\n\ntokenizer, X = wordTokenizer(df_to_process['New Description'])\ny = np.asarray(df_to_process['Assignment group ID'])\nX = pad_sequences(X,maxlen=max_len)", "_____no_output_____" ], [ "# Create embedding matrix\n\nembedding_matrix = np.zeros((numWords+1,100))\n\nfor i,word in tokenizer.index_word.items():\n if i<numWords+1:\n embedding_vector = embeddings_glove.get(word)\n if embedding_vector is not None:\n embedding_matrix[i] = embedding_vector\n\nembedding_matrix", "_____no_output_____" ], [ "# Perform the train-test split\n\nX_train, X_test, y_train, y_test = train_test_split(X,y,test_size = 0.3)\nX_train,X_test,y_train,y_test", "_____no_output_____" ], [ "# Build the Bi-LSTM model\n\ninput_layer = Input(shape=(max_len,),dtype=tf.int64)\nembed = Embedding(numWords+1,output_dim=100,input_length=max_len,weights=[embedding_matrix], trainable=True)(input_layer) #weights=[embedding_matrix]\nlstm=Bidirectional(LSTM(128))(embed)\ndrop=Dropout(0.3)(lstm)\ndense =Dense(100,activation='relu')(drop)\nout=Dense((len((pd.Series(y_train)).unique())+1),activation='softmax')(dense) \n\nmodel = Model(input_layer,out)\nmodel.compile(loss='sparse_categorical_crossentropy',optimizer=\"adam\",metrics=['accuracy'])\n\nmodel.summary()\nplot_model(model,to_file=project_path + \"Bi-LSTM(GloVe)_Sampled_Model.jpg\")\n\ncheckpoint = ModelCheckpoint('model-{epoch:03d}-{val_accuracy:03f}.h5', verbose=1, monitor='val_accuracy',save_best_only=True, mode='auto') \nreduceLoss = ReduceLROnPlateau(monitor='val_loss', factor=0.2,patience=2, min_lr=0.0001)", "Model: \"functional_1\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ninput_1 (InputLayer) [(None, 300)] 
0 \n_________________________________________________________________\nembedding (Embedding) (None, 300, 100) 1234400 \n_________________________________________________________________\nbidirectional (Bidirectional (None, 256) 234496 \n_________________________________________________________________\ndropout (Dropout) (None, 256) 0 \n_________________________________________________________________\ndense (Dense) (None, 100) 25700 \n_________________________________________________________________\ndense_1 (Dense) (None, 75) 7575 \n=================================================================\nTotal params: 1,502,171\nTrainable params: 1,502,171\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "# Run the model and get the model history\n\nmodel_history = model.fit(X_train,y_train,batch_size=batch_size, epochs=epochs, callbacks=[checkpoint,reduceLoss], validation_data=(X_test,y_test))", "Epoch 1/10\n334/335 [============================>.] - ETA: 0s - loss: 2.2523 - accuracy: 0.4653\nEpoch 00001: val_accuracy improved from -inf to 0.72226, saving model to model-001-0.722257.h5\n335/335 [==============================] - 22s 64ms/step - loss: 2.2518 - accuracy: 0.4654 - val_loss: 1.0282 - val_accuracy: 0.7223\nEpoch 2/10\n334/335 [============================>.] - ETA: 0s - loss: 0.7540 - accuracy: 0.7902\nEpoch 00002: val_accuracy improved from 0.72226 to 0.85956, saving model to model-002-0.859557.h5\n335/335 [==============================] - 21s 63ms/step - loss: 0.7540 - accuracy: 0.7902 - val_loss: 0.5015 - val_accuracy: 0.8596\nEpoch 3/10\n334/335 [============================>.] - ETA: 0s - loss: 0.4198 - accuracy: 0.8784\nEpoch 00003: val_accuracy improved from 0.85956 to 0.90244, saving model to model-003-0.902437.h5\n335/335 [==============================] - 21s 63ms/step - loss: 0.4198 - accuracy: 0.8783 - val_loss: 0.3491 - val_accuracy: 0.9024\nEpoch 4/10\n334/335 [============================>.] - ETA: 0s - loss: 0.2815 - accuracy: 0.9163\nEpoch 00004: val_accuracy improved from 0.90244 to 0.92423, saving model to model-004-0.924227.h5\n335/335 [==============================] - 21s 63ms/step - loss: 0.2816 - accuracy: 0.9163 - val_loss: 0.2682 - val_accuracy: 0.9242\nEpoch 5/10\n334/335 [============================>.] - ETA: 0s - loss: 0.2224 - accuracy: 0.9325\nEpoch 00005: val_accuracy improved from 0.92423 to 0.93191, saving model to model-005-0.931909.h5\n335/335 [==============================] - 21s 64ms/step - loss: 0.2223 - accuracy: 0.9325 - val_loss: 0.2360 - val_accuracy: 0.9319\nEpoch 6/10\n334/335 [============================>.] - ETA: 0s - loss: 0.1879 - accuracy: 0.9427\nEpoch 00006: val_accuracy improved from 0.93191 to 0.93568, saving model to model-006-0.935680.h5\n335/335 [==============================] - 21s 63ms/step - loss: 0.1878 - accuracy: 0.9427 - val_loss: 0.2179 - val_accuracy: 0.9357\nEpoch 7/10\n334/335 [============================>.] - ETA: 0s - loss: 0.1675 - accuracy: 0.9477\nEpoch 00007: val_accuracy improved from 0.93568 to 0.94238, saving model to model-007-0.942384.h5\n335/335 [==============================] - 21s 63ms/step - loss: 0.1675 - accuracy: 0.9477 - val_loss: 0.1970 - val_accuracy: 0.9424\nEpoch 8/10\n334/335 [============================>.] 
- ETA: 0s - loss: 0.1607 - accuracy: 0.9486\nEpoch 00008: val_accuracy improved from 0.94238 to 0.94315, saving model to model-008-0.943152.h5\n335/335 [==============================] - 21s 63ms/step - loss: 0.1606 - accuracy: 0.9486 - val_loss: 0.1971 - val_accuracy: 0.9432\nEpoch 9/10\n334/335 [============================>.] - ETA: 0s - loss: 0.1456 - accuracy: 0.9528\nEpoch 00009: val_accuracy did not improve from 0.94315\n335/335 [==============================] - 21s 63ms/step - loss: 0.1457 - accuracy: 0.9528 - val_loss: 0.1967 - val_accuracy: 0.9411\nEpoch 10/10\n334/335 [============================>.] - ETA: 0s - loss: 0.1364 - accuracy: 0.9546\nEpoch 00010: val_accuracy improved from 0.94315 to 0.94490, saving model to model-010-0.944898.h5\n335/335 [==============================] - 21s 63ms/step - loss: 0.1363 - accuracy: 0.9547 - val_loss: 0.1918 - val_accuracy: 0.9449\n" ], [ "# predict probabilities for test set\nyhat_probs = model.predict(X_test, verbose=0)\n# predict crisp classes for test set\n# use argmax per Jason, instead of model.predict_classes if not available. https://machinelearningmastery.com/how-to-calculate-precision-recall-f1-and-more-for-deep-learning-models/\nyhat_classes = np.argmax(yhat_probs, axis=1)", "_____no_output_____" ], [ "# Generate the target label to plot the Confusion Matrix\n\ntarget_names = df_to_process['Assignment group'].unique()\ntarget_names", "_____no_output_____" ], [ "# Generate the Confusion Matrix\nmatrix = confusion_matrix(y_test, yhat_classes)\nprint(matrix)", "[[106 0 1 ... 0 0 0]\n [ 0 192 0 ... 0 0 0]\n [ 3 0 166 ... 0 0 0]\n ...\n [ 0 0 0 ... 172 0 0]\n [ 0 4 2 ... 0 111 0]\n [ 4 0 0 ... 0 1 56]]\n" ], [ "# Generate the Classification Report to print the class level accuracies, here, in the case of multiclass classification.\n\nprint('Classification Report')\nprint(classification_report(y_test, yhat_classes, target_names=target_names))\n", "Classification Report\n precision recall f1-score support\n\n GRP_0 0.77 0.51 0.61 207\n GRP_1 0.89 1.00 0.94 192\n GRP_3 0.98 0.85 0.91 196\n GRP_4 0.98 1.00 0.99 195\n GRP_5 0.97 0.84 0.90 199\n GRP_6 1.00 0.98 0.99 188\n GRP_7 0.99 0.96 0.97 186\n GRP_8 1.00 1.00 1.00 173\n GRP_9 0.97 1.00 0.98 198\n GRP_10 0.99 1.00 1.00 191\n GRP_11 0.98 0.95 0.97 175\n GRP_12 0.97 0.84 0.90 180\n GRP_13 0.83 0.89 0.86 197\n GRP_14 0.99 1.00 1.00 189\n GRP_15 1.00 1.00 1.00 202\n GRP_16 0.98 1.00 0.99 197\n GRP_17 0.98 1.00 0.99 188\n GRP_18 0.97 0.95 0.96 194\n GRP_19 0.99 1.00 0.99 205\n GRP_2 0.99 1.00 0.99 198\n GRP_20 0.98 1.00 0.99 200\n GRP_21 0.94 1.00 0.97 202\n GRP_22 0.98 0.97 0.97 193\n GRP_23 0.87 0.94 0.90 186\n GRP_24 0.99 0.91 0.95 192\n GRP_25 0.97 0.93 0.95 182\n GRP_26 0.99 1.00 1.00 198\n GRP_27 0.95 0.94 0.95 202\n GRP_28 0.97 0.98 0.98 202\n GRP_29 1.00 1.00 1.00 181\n GRP_30 0.99 1.00 0.99 193\n GRP_31 1.00 1.00 1.00 187\n GRP_33 1.00 1.00 1.00 203\n GRP_34 0.99 0.97 0.98 202\n GRP_35 0.98 0.96 0.97 192\n GRP_36 0.98 1.00 0.99 205\n GRP_37 1.00 1.00 1.00 207\n GRP_38 0.98 1.00 0.99 193\n GRP_39 1.00 1.00 1.00 192\n GRP_40 1.00 0.94 0.97 185\n GRP_41 0.99 0.83 0.90 196\n GRP_42 1.00 1.00 1.00 190\n GRP_43 0.77 0.93 0.84 208\n GRP_44 0.89 1.00 0.94 179\n GRP_45 1.00 1.00 1.00 182\n GRP_46 0.95 0.62 0.75 185\n GRP_47 0.97 1.00 0.99 204\n GRP_49 1.00 1.00 1.00 206\n GRP_50 1.00 1.00 1.00 195\n GRP_51 1.00 1.00 1.00 213\n GRP_52 1.00 1.00 1.00 176\n GRP_53 0.99 1.00 1.00 189\n GRP_54 0.99 1.00 0.99 198\n GRP_55 0.36 1.00 0.53 208\n GRP_48 1.00 1.00 1.00 183\n GRP_56 0.99 1.00 1.00 
206\n GRP_57 0.88 0.60 0.72 197\n GRP_58 0.98 0.92 0.95 203\n GRP_59 1.00 1.00 1.00 200\n GRP_60 0.98 1.00 0.99 189\n GRP_61 1.00 1.00 1.00 196\n GRP_32 1.00 1.00 1.00 198\n GRP_62 0.99 1.00 1.00 193\n GRP_63 1.00 1.00 1.00 199\n GRP_64 1.00 1.00 1.00 187\n GRP_65 1.00 1.00 1.00 194\n GRP_66 1.00 1.00 1.00 201\n GRP_67 0.97 1.00 0.98 180\n GRP_68 1.00 1.00 1.00 170\n GRP_69 1.00 1.00 1.00 174\n GRP_70 0.97 1.00 0.98 194\n GRP_71 1.00 1.00 1.00 172\n GRP_72 0.67 0.57 0.61 195\n GRP_73 0.97 0.26 0.41 212\n\n accuracy 0.94 14319\n macro avg 0.96 0.95 0.95 14319\nweighted avg 0.96 0.94 0.95 14319\n\n" ], [ "# Plot the Confusion matrix\n\nax= plt.subplot()\n#plt.subplots(figsize=(10,10))\nsns.heatmap(matrix,annot=True,ax=ax,cmap='Blues',fmt='d');\nax.set_xlabel('Predicted labels');ax.set_ylabel('True/Actual labels');\nax.set_title('Confusion Matrix');\nax.xaxis.set_ticklabels(target_names);\nax.yaxis.set_ticklabels(target_names);", "_____no_output_____" ], [ "# Summarize the accuracies\n\nfrom sklearn.metrics import accuracy_score\n\naccuracy = accuracy_score(y_test, yhat_classes)\nprint('Accuracy: %f' % accuracy)\nprecision = precision_score(y_test, yhat_classes, average='micro')\nprint('Precision: %f' % precision)\nrecall = recall_score(y_test, yhat_classes, average='micro')\nprint('Recall: %f' % recall)\nf1 = f1_score(y_test, yhat_classes,average='micro')\nprint('F1 score: %f' % f1)", "Accuracy: 0.944898\nPrecision: 0.944898\nRecall: 0.944898\nF1 score: 0.944898\n" ], [ "# Plot model losses\n\nloss_values = model_history.history['loss']\nval_loss_values = model_history.history['val_loss']\n\nepochs = range(1, len(loss_values) + 1)\n\nplt.plot(epochs, loss_values, 'bo', label=\"Training Loss\")\nplt.plot(epochs, val_loss_values, 'b', label=\"Validation Loss\")\n\nplt.title('Bi-LSTM(GloVe) on sampled data - Training and Validation Loss')\nplt.xlabel('Epochs')\nplt.ylabel('Loss Value')\nplt.legend()\n\nplt.show()", "_____no_output_____" ], [ "# Plot training and validation accuracies\n\nacc_values = model_history.history['accuracy']\nval_acc_values = model_history.history['val_accuracy']\n\nepochs = range(1, len(loss_values) + 1)\n\nplt.plot(epochs, acc_values, 'ro', label=\"Training Accuracy\")\nplt.plot(epochs, val_acc_values, 'r', label=\"Validation Accuracy\")\n\nplt.title('Bi-LSTM(GloVe) on sampled data - Training and Validation Accuraccy')\nplt.xlabel('Epochs')\nplt.ylabel('Accuracy')\nplt.legend()\n\nplt.show()", "_____no_output_____" ], [ "# Summarize the model history\n\nmodel_df = pd.DataFrame(model_history.history)\nmodel_df", "_____no_output_____" ], [ "# Thank you.", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbf4cc590ecd482f8eb799f6bf5a864b21f2b4d1
221,166
ipynb
Jupyter Notebook
content/ch-machine-learning/machine-learning-qiskit-pytorch.ipynb
muneerqu/qiskit-textbook
b574b7e55c3d737e477f47316812d1d227763b7e
[ "Apache-2.0" ]
null
null
null
content/ch-machine-learning/machine-learning-qiskit-pytorch.ipynb
muneerqu/qiskit-textbook
b574b7e55c3d737e477f47316812d1d227763b7e
[ "Apache-2.0" ]
2
2021-09-28T05:31:05.000Z
2022-02-26T09:51:13.000Z
content/ch-machine-learning/machine-learning-qiskit-pytorch.ipynb
muneerqu/qiskit-textbook
b574b7e55c3d737e477f47316812d1d227763b7e
[ "Apache-2.0" ]
1
2022-02-23T02:43:58.000Z
2022-02-23T02:43:58.000Z
50.026238
5,677
0.602783
[ [ [ "# Hybrid quantum-classical Neural Networks with PyTorch and Qiskit", "_____no_output_____" ], [ "Machine learning (ML) has established itself as a successful interdisciplinary field which seeks to mathematically extract generalizable information from data. Throwing in quantum computing gives rise to interesting areas of research which seek to leverage the principles of quantum mechanics to augment machine learning or vice-versa. Whether you're aiming to enhance classical ML algorithms by outsourcing difficult calculations to a quantum computer or optimise quantum algorithms using classical ML architectures - both fall under the diverse umbrella of quantum machine learning (QML).\n\nIn this chapter, we explore how a classical neural network can be partially quantized to create a hybrid quantum-classical neural network. We will code up a simple example that integrates **Qiskit** with a state-of-the-art open-source software package - **[PyTorch](https://pytorch.org/)**. The purpose of this example is to demonstrate the ease of integrating Qiskit with existing ML tools and to encourage ML practitioners to explore what is possible with quantum computing.\n\n## Contents\n\n1. [How Does it Work?](#how) \n 1.1 [Preliminaries](#prelims) \n2. [So How Does Quantum Enter the Picture?](#quantumlayer)\n3. [Let's code!](#code) \n 3.1 [Imports](#imports) \n 3.2 [Create a \"Quantum Class\" with Qiskit](#q-class) \n 3.3 [Create a \"Quantum-Classical Class\" with PyTorch](#qc-class) \n 3.4 [Data Loading and Preprocessing](#data-loading-preprocessing) \n 3.5 [Creating the Hybrid Neural Network](#hybrid-nn) \n 3.6 [Training the Network](#training) \n 3.7 [Testing the Network](#testing)\n4. [What Now?](#what-now)", "_____no_output_____" ], [ "## 1. How does it work? <a id='how'></a>\n<img src=\"hybridnetwork.png\" />\n\n**Fig.1** Illustrates the framework we will construct in this chapter. Ultimately, we will create a hybrid quantum-classical neural network that seeks to classify hand drawn digits. Note that the edges shown in this image are all directed downward; however, the directionality is not visually indicated. ", "_____no_output_____" ], [ "### 1.1 Preliminaries <a id='prelims'></a>\nThe background presented here on classical neural networks is included to establish relevant ideas and shared terminology; however, it is still extremely high-level. __If you'd like to dive one step deeper into classical neural networks, see the well made video series by youtuber__ [3Blue1Brown](https://youtu.be/aircAruvnKk). Alternatively, if you are already familiar with classical networks, you can [skip to the next section](#quantumlayer).\n\n###### Neurons and Weights\nA neural network is ultimately just an elaborate function that is built by composing smaller building blocks called neurons. A ***neuron*** is typically a simple, easy-to-compute, and nonlinear function that maps one or more inputs to a single real number. The single output of a neuron is typically copied and fed as input into other neurons. Graphically, we represent neurons as nodes in a graph and we draw directed edges between nodes to indicate how the output of one neuron will be used as input to other neurons. It's also important to note that each edge in our graph is often associated with a scalar-value called a [***weight***](https://en.wikipedia.org/wiki/Artificial_neural_network#Connections_and_weights). 
The idea here is that each of the inputs to a neuron will be multiplied by a different scalar before being collected and processed into a single value. The objective when training a neural network consists primarily of choosing our weights such that the network behaves in a particular way. \n\n###### Feed Forward Neural Networks\nIt is also worth noting that the particular type of neural network we will concern ourselves with is called a **[feed-forward neural network (FFNN)](https://en.wikipedia.org/wiki/Feedforward_neural_network)**. This means that as data flows through our neural network, it will never return to a neuron it has already visited. Equivalently, you could say that the graph which describes our neural network is a **[directed acyclic graph (DAG)](https://en.wikipedia.org/wiki/Directed_acyclic_graph)**. Furthermore, we will stipulate that neurons within the same layer of our neural network will not have edges between them. \n\n###### IO Structure of Layers\nThe input to a neural network is a classical (real-valued) vector. Each component of the input vector is multiplied by a different weight and fed into a layer of neurons according to the graph structure of the network. After each neuron in the layer has been evaluated, the results are collected into a new vector where the i'th component records the output of the i'th neuron. This new vector can then be treated as input for a new layer, and so on. We will use the standard term ***hidden layer*** to describe all but the first and last layers of our network.\n", "_____no_output_____" ], [ "## 2. So How Does Quantum Enter the Picture? <a id='quantumlayer'> </a>\n\nTo create a quantum-classical neural network, one can implement a hidden layer for our neural network using a parameterized quantum circuit. By \"parameterized quantum circuit\", we mean a quantum circuit where the rotation angles for each gate are specified by the components of a classical input vector. The outputs from our neural network's previous layer will be collected and used as the inputs for our parameterized circuit. The measurement statistics of our quantum circuit can then be collected and used as inputs for the following layer. A simple example is depicted below:\n\n<img src=\"neuralnetworkQC.png\" />\n\nHere, $\\sigma$ is a [nonlinear function](https://en.wikipedia.org/wiki/Activation_function) and $h_i$ is the value of neuron $i$ at each hidden layer. $R(h_i)$ represents any rotation gate about an angle equal to $h_i$ and $y$ is the final prediction value generated from the hybrid network. \n\n### What about backpropagation?\nIf you're familiar with classical ML, you may immediately be wondering *how do we calculate gradients when quantum circuits are involved?* This would be necessary to enlist powerful optimisation techniques such as **[gradient descent](https://en.wikipedia.org/wiki/Gradient_descent)**. It gets a bit technical, but in short, we can view a quantum circuit as a black box and the gradient of this black box with respect to its parameters can be calculated as follows: \n\n<img src=\"quantumgradient.png\" />\n\nwhere $\\theta$ represents the parameters of the quantum circuit and $s$ is a macroscopic shift. The gradient is then simply the difference between our quantum circuit evaluated at $\\theta+s$ and $\\theta - s$. Thus, we can systematically differentiate our quantum circuit as part of a larger backpropagation routine. 
This closed form rule for calculating the gradient of quantum circuit parameters is known as **[the parameter shift rule](https://arxiv.org/pdf/1905.13311.pdf)**. ", "_____no_output_____" ], [ "## 3. Let's code! <a id='code'></a>\n\n\n### 3.1 Imports <a id='imports'></a>\nFirst, we import some handy packages that we will need, including Qiskit and PyTorch.", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\n\nimport torch\nfrom torch.autograd import Function\nfrom torchvision import datasets, transforms\nimport torch.optim as optim\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nimport qiskit\nfrom qiskit.visualization import *", "_____no_output_____" ] ], [ [ "### 3.2 Create a \"Quantum Class\" with Qiskit <a id='q-class'></a>\nWe can conveniently put our Qiskit quantum functions into a class. First, we specify how many trainable quantum parameters and how many shots we wish to use in our quantum circuit. In this example, we will keep it simple and use a 1-qubit circuit with one trainable quantum parameter $\\theta$. We hard code the circuit for simplicity and use a $RY-$rotation by the angle $\\theta$ to train the output of our circuit. The circuit looks like this:\n\n<img src=\"1qubitcirc.png\" width=\"400\"/>\n\nIn order to measure the output in the $z-$basis, we calculate the $\\sigma_\\mathbf{z}$ expectation. \n$$\\sigma_\\mathbf{z} = \\sum_i z_i p(z_i)$$\nWe will see later how this all ties into the hybrid neural network.", "_____no_output_____" ] ], [ [ "class QuantumCircuit:\n \"\"\" \n This class provides a simple interface for interaction \n with the quantum circuit \n \"\"\"\n \n def __init__(self, n_qubits, backend, shots):\n # --- Circuit definition ---\n self._circuit = qiskit.QuantumCircuit(n_qubits)\n \n all_qubits = [i for i in range(n_qubits)]\n self.theta = qiskit.circuit.Parameter('theta')\n \n self._circuit.h(all_qubits)\n self._circuit.barrier()\n self._circuit.ry(self.theta, all_qubits)\n \n self._circuit.measure_all()\n # ---------------------------\n\n self.backend = backend\n self.shots = shots\n \n def run(self, thetas):\n job = qiskit.execute(self._circuit, \n self.backend, \n shots = self.shots,\n parameter_binds = [{self.theta: theta} for theta in thetas])\n result = job.result().get_counts(self._circuit)\n \n counts = np.array(list(result.values()))\n states = np.array(list(result.keys())).astype(float)\n \n # Compute probabilities for each state\n probabilities = counts / self.shots\n # Get state expectation\n expectation = np.sum(states * probabilities)\n \n return np.array([expectation])", "_____no_output_____" ] ], [ [ "Let's test the implementation", "_____no_output_____" ] ], [ [ "simulator = qiskit.Aer.get_backend('qasm_simulator')\n\ncircuit = QuantumCircuit(1, simulator, 100)\nprint('Expected value for rotation pi {}'.format(circuit.run([np.pi])[0]))\ncircuit._circuit.draw()", "Expected value for rotation pi 0.55\n" ] ], [ [ "### 3.3 Create a \"Quantum-Classical Class\" with PyTorch <a id='qc-class'></a>\nNow that our quantum circuit is defined, we can create the functions needed for backpropagation using PyTorch. [The forward and backward passes](http://www.ai.mit.edu/courses/6.034b/backprops.pdf) contain elements from our Qiskit class. 
The backward pass directly computes the analytical gradients using the finite difference formula we introduced above.", "_____no_output_____" ] ], [ [ "class HybridFunction(Function):\n \"\"\" Hybrid quantum - classical function definition \"\"\"\n \n @staticmethod\n def forward(ctx, input, quantum_circuit, shift):\n \"\"\" Forward pass computation \"\"\"\n ctx.shift = shift\n ctx.quantum_circuit = quantum_circuit\n\n expectation_z = ctx.quantum_circuit.run(input[0].tolist())\n result = torch.tensor([expectation_z])\n ctx.save_for_backward(input, result)\n\n return result\n \n @staticmethod\n def backward(ctx, grad_output):\n \"\"\" Backward pass computation \"\"\"\n input, expectation_z = ctx.saved_tensors\n input_list = np.array(input.tolist())\n \n shift_right = input_list + np.ones(input_list.shape) * ctx.shift\n shift_left = input_list - np.ones(input_list.shape) * ctx.shift\n \n gradients = []\n for i in range(len(input_list)):\n expectation_right = ctx.quantum_circuit.run(shift_right[i])\n expectation_left = ctx.quantum_circuit.run(shift_left[i])\n \n gradient = torch.tensor([expectation_right]) - torch.tensor([expectation_left])\n gradients.append(gradient)\n gradients = np.array([gradients]).T\n return torch.tensor([gradients]).float() * grad_output.float(), None, None\n\nclass Hybrid(nn.Module):\n \"\"\" Hybrid quantum - classical layer definition \"\"\"\n \n def __init__(self, backend, shots, shift):\n super(Hybrid, self).__init__()\n self.quantum_circuit = QuantumCircuit(1, backend, shots)\n self.shift = shift\n \n def forward(self, input):\n return HybridFunction.apply(input, self.quantum_circuit, self.shift)", "_____no_output_____" ] ], [ [ "### 3.4 Data Loading and Preprocessing <a id='data-loading-preprocessing'></a>\n##### Putting this all together:\nWe will create a simple hybrid neural network to classify images of two types of digits (0 or 1) from the [MNIST dataset](http://yann.lecun.com/exdb/mnist/). We first load MNIST and filter for pictures containing 0's and 1's. 
These will serve as inputs for our neural network to classify.", "_____no_output_____" ], [ "#### Training data", "_____no_output_____" ] ], [ [ "# Concentrating on the first 100 samples\nn_samples = 100\n\nX_train = datasets.MNIST(root='./data', train=True, download=True,\n transform=transforms.Compose([transforms.ToTensor()]))\n\n# Leaving only labels 0 and 1 \nidx = np.append(np.where(X_train.targets == 0)[0][:n_samples], \n np.where(X_train.targets == 1)[0][:n_samples])\n\nX_train.data = X_train.data[idx]\nX_train.targets = X_train.targets[idx]\n\ntrain_loader = torch.utils.data.DataLoader(X_train, batch_size=1, shuffle=True)", "_____no_output_____" ], [ "n_samples_show = 6\n\ndata_iter = iter(train_loader)\nfig, axes = plt.subplots(nrows=1, ncols=n_samples_show, figsize=(10, 3))\n\nwhile n_samples_show > 0:\n images, targets = data_iter.__next__()\n\n axes[n_samples_show - 1].imshow(images[0].numpy().squeeze(), cmap='gray')\n axes[n_samples_show - 1].set_xticks([])\n axes[n_samples_show - 1].set_yticks([])\n axes[n_samples_show - 1].set_title(\"Labeled: {}\".format(targets.item()))\n \n n_samples_show -= 1", "_____no_output_____" ] ], [ [ "#### Testing data", "_____no_output_____" ] ], [ [ "n_samples = 50\n\nX_test = datasets.MNIST(root='./data', train=False, download=True,\n transform=transforms.Compose([transforms.ToTensor()]))\n\nidx = np.append(np.where(X_test.targets == 0)[0][:n_samples], \n np.where(X_test.targets == 1)[0][:n_samples])\n\nX_test.data = X_test.data[idx]\nX_test.targets = X_test.targets[idx]\n\ntest_loader = torch.utils.data.DataLoader(X_test, batch_size=1, shuffle=True)", "_____no_output_____" ] ], [ [ "So far, we have loaded the data and coded a class that creates our quantum circuit which contains 1 trainable parameter. This quantum parameter will be inserted into a classical neural network along with the other classical parameters to form the hybrid neural network. We also created backward and forward pass functions that allow us to do backpropagation and optimise our neural network. Lastly, we need to specify our neural network architecture such that we can begin to train our parameters using optimisation techniques provided by PyTorch. \n\n\n### 3.5 Creating the Hybrid Neural Network <a id='hybrid-nn'></a>\nWe can use a neat PyTorch pipeline to create a neural network architecture. The network will need to be compatible in terms of its dimensionality when we insert the quantum layer (i.e. our quantum circuit). Since our quantum circuit in this example contains 1 parameter, we must ensure the network condenses neurons down to size 1. We create a typical Convolutional Neural Network with two fully-connected layers at the end. The value of the last neuron of the fully-connected layer is fed as the parameter $\theta$ into our quantum circuit. 
The circuit measurement then serves as the final prediction for 0 or 1 as provided by a $\\sigma_z$ measurement.", "_____no_output_____" ] ], [ [ "class Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.conv1 = nn.Conv2d(1, 32, kernel_size=5)\n self.conv2 = nn.Conv2d(32, 64, kernel_size=5)\n self.dropout = nn.Dropout2d()\n self.fc1 = nn.Linear(256, 64)\n self.fc2 = nn.Linear(64, 1)\n self.hybrid = Hybrid(qiskit.Aer.get_backend('qasm_simulator'), 100, np.pi / 2)\n\n def forward(self, x):\n x = F.relu(self.conv1(x))\n x = F.relu(self.conv2(x))\n x = F.max_pool2d(x, 2)\n x = self.dropout(x)\n x = x.view(-1, 256)\n x = F.relu(self.fc1(x))\n x = self.fc2(x)\n x = self.hybrid(x)\n return torch.cat((x, 1 - x), -1)", "_____no_output_____" ] ], [ [ "### 3.6 Training the Network <a id='training'></a>\nWe now have all the ingredients to train our hybrid network! We can specify any [PyTorch optimiser](https://pytorch.org/docs/stable/optim.html), [learning rate](https://en.wikipedia.org/wiki/Learning_rate) and [cost/loss function](https://en.wikipedia.org/wiki/Loss_function) in order to train over multiple epochs. In this instance, we use the [Adam optimiser](https://arxiv.org/abs/1412.6980), a learning rate of 0.001 and the [negative log-likelihood loss function](https://pytorch.org/docs/stable/_modules/torch/nn/modules/loss.html).", "_____no_output_____" ] ], [ [ "model = Net()\noptimizer = optim.Adam(model.parameters(), lr=0.001)\nloss_func = nn.NLLLoss()\n\nepochs = 20\nloss_list = []\n\nmodel.train()\nfor epoch in range(epochs):\n total_loss = []\n for batch_idx, (data, target) in enumerate(train_loader):\n optimizer.zero_grad()\n # Forward pass\n output = model(data)\n # Calculating loss\n loss = loss_func(output, target)\n # Backward pass\n loss.backward()\n # Optimize the weights\n optimizer.step()\n \n total_loss.append(loss.item())\n loss_list.append(sum(total_loss)/len(total_loss))\n print('Training [{:.0f}%]\\tLoss: {:.4f}'.format(\n 100. * (epoch + 1) / epochs, loss_list[-1]))", "Training [5%]\tLoss: -0.6586\n" ] ], [ [ "Plot the training graph", "_____no_output_____" ] ], [ [ "plt.plot(loss_list)\nplt.title('Hybrid NN Training Convergence')\nplt.xlabel('Training Iterations')\nplt.ylabel('Neg Log Likelihood Loss')", "_____no_output_____" ] ], [ [ "### 3.7 Testing the Network <a id='testing'></a>", "_____no_output_____" ] ], [ [ "model.eval()\nwith torch.no_grad():\n \n correct = 0\n for batch_idx, (data, target) in enumerate(test_loader):\n output = model(data)\n \n pred = output.argmax(dim=1, keepdim=True) \n correct += pred.eq(target.view_as(pred)).sum().item()\n \n loss = loss_func(output, target)\n total_loss.append(loss.item())\n \n print('Performance on test data:\\n\\tLoss: {:.4f}\\n\\tAccuracy: {:.1f}%'.format(\n sum(total_loss) / len(total_loss),\n correct / len(test_loader) * 100)\n )", "Performance on test data:\n\tLoss: -0.8734\n\tAccuracy: 100.0%\n" ], [ "n_samples_show = 6\ncount = 0\nfig, axes = plt.subplots(nrows=1, ncols=n_samples_show, figsize=(10, 3))\n\nmodel.eval()\nwith torch.no_grad():\n for batch_idx, (data, target) in enumerate(test_loader):\n if count == n_samples_show:\n break\n output = model(data)\n \n pred = output.argmax(dim=1, keepdim=True) \n\n axes[count].imshow(data[0].numpy().squeeze(), cmap='gray')\n\n axes[count].set_xticks([])\n axes[count].set_yticks([])\n axes[count].set_title('Predicted {}'.format(pred.item()))\n \n count += 1", "_____no_output_____" ] ], [ [ "## 4. What Now? 
<a id='what-now'></a>\n\n#### While it is totally possible to create hybrid neural networks, does this actually have any benefit? \n\nIn fact, the classical layers of this network train perfectly fine (in fact, better) without the quantum layer. Furthermore, you may have noticed that the quantum layer we trained here **generates no entanglement**, and will, therefore, continue to be classically simulatable as we scale up this particular architecture. This means that if you hope to achieve a quantum advantage using hybrid neural networks, you'll need to start by extending this code to include a more sophisticated quantum layer. \n\n\nThe point of this exercise was to get you thinking about integrating techniques from ML and quantum computing in order to investigate if there is indeed some element of interest - and thanks to PyTorch and Qiskit, this becomes a little bit easier. ", "_____no_output_____" ] ], [ [ "import qiskit\nqiskit.__qiskit_version__", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
cbf4d05ac930e0893d431a21f47954b789312fe7
116,560
ipynb
Jupyter Notebook
src/abstractive/BART_Text_Summarization.ipynb
aj-naik/Text-Summarization
764ff95783fdb69cf03c710f7e90094c05567877
[ "MIT" ]
20
2021-05-28T05:23:16.000Z
2022-03-13T10:55:05.000Z
src/abstractive/BART_Text_Summarization.ipynb
aj-naik/Text-Summarization
764ff95783fdb69cf03c710f7e90094c05567877
[ "MIT" ]
null
null
null
src/abstractive/BART_Text_Summarization.ipynb
aj-naik/Text-Summarization
764ff95783fdb69cf03c710f7e90094c05567877
[ "MIT" ]
2
2021-06-23T20:08:22.000Z
2021-12-20T12:10:38.000Z
37.563648
525
0.507893
[ [ [ "!pip install transformers", "Collecting transformers\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/d5/43/cfe4ee779bbd6a678ac6a97c5a5cdeb03c35f9eaebbb9720b036680f9a2d/transformers-4.6.1-py3-none-any.whl (2.2MB)\n\u001b[K |████████████████████████████████| 2.3MB 33.5MB/s \n\u001b[?25hRequirement already satisfied: packaging in /usr/local/lib/python3.7/dist-packages (from transformers) (20.9)\nCollecting tokenizers<0.11,>=0.10.1\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/d4/e2/df3543e8ffdab68f5acc73f613de9c2b155ac47f162e725dcac87c521c11/tokenizers-0.10.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (3.3MB)\n\u001b[K |████████████████████████████████| 3.3MB 30.2MB/s \n\u001b[?25hRequirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.7/dist-packages (from transformers) (4.41.1)\nRequirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.7/dist-packages (from transformers) (1.19.5)\nRequirement already satisfied: importlib-metadata; python_version < \"3.8\" in /usr/local/lib/python3.7/dist-packages (from transformers) (4.5.0)\nCollecting sacremoses\n\u001b[?25l Downloading https://files.pythonhosted.org/packages/75/ee/67241dc87f266093c533a2d4d3d69438e57d7a90abb216fa076e7d475d4a/sacremoses-0.0.45-py3-none-any.whl (895kB)\n\u001b[K |████████████████████████████████| 901kB 31.1MB/s \n\u001b[?25hRequirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from transformers) (2.23.0)\nRequirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.7/dist-packages (from transformers) (2019.12.20)\nRequirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from transformers) (3.0.12)\nCollecting huggingface-hub==0.0.8\n Downloading https://files.pythonhosted.org/packages/a1/88/7b1e45720ecf59c6c6737ff332f41c955963090a18e72acbcbeac6b25e86/huggingface_hub-0.0.8-py3-none-any.whl\nRequirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging->transformers) (2.4.7)\nRequirement already satisfied: typing-extensions>=3.6.4; python_version < \"3.8\" in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < \"3.8\"->transformers) (3.7.4.3)\nRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < \"3.8\"->transformers) (3.4.1)\nRequirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers) (1.15.0)\nRequirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers) (7.1.2)\nRequirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers) (1.0.1)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (3.0.4)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (2021.5.30)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (2.10)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (1.24.3)\nInstalling collected packages: tokenizers, sacremoses, huggingface-hub, transformers\nSuccessfully installed huggingface-hub-0.0.8 sacremoses-0.0.45 tokenizers-0.10.3 transformers-4.6.1\n" 
], [ "from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig", "_____no_output_____" ], [ "tokenizer = BartTokenizer.from_pretrained('facebook/bart-large-cnn')\nmodel = BartForConditionalGeneration.from_pretrained('facebook/bart-large-cnn')", "_____no_output_____" ], [ "sequence = (\"In May, Churchill was still generally unpopular with many Conservatives and probably most of the Labour Party. Chamberlain \"\n \"remained Conservative Party leader until October when ill health forced his resignation. By that time, Churchill had won the \"\n \"doubters over and his succession as party leader was a formality.\"\n \" \"\n \"He began his premiership by forming a five-man war cabinet which included Chamberlain as Lord President of the Council, \"\n \"Labour leader Clement Attlee as Lord Privy Seal (later as Deputy Prime Minister), Halifax as Foreign Secretary and Labour's \"\n \"Arthur Greenwood as a minister without portfolio. In practice, these five were augmented by the service chiefs and ministers \"\n \"who attended the majority of meetings. The cabinet changed in size and membership as the war progressed, one of the key \"\n \"appointments being the leading trades unionist Ernest Bevin as Minister of Labour and National Service. In response to \"\n \"previous criticisms that there had been no clear single minister in charge of the prosecution of the war, Churchill created \"\n \"and took the additional position of Minister of Defence, making him the most powerful wartime Prime Minister in British \"\n \"history. He drafted outside experts into government to fulfil vital functions, especially on the Home Front. These included \"\n \"personal friends like Lord Beaverbrook and Frederick Lindemann, who became the government's scientific advisor.\"\n \" \"\n \"At the end of May, with the British Expeditionary Force in retreat to Dunkirk and the Fall of France seemingly imminent, \"\n \"Halifax proposed that the government should explore the possibility of a negotiated peace settlement using the still-neutral \"\n \"Mussolini as an intermediary. There were several high-level meetings from 26 to 28 May, including two with the French \"\n \"premier Paul Reynaud. Churchill's resolve was to fight on, even if France capitulated, but his position remained precarious \"\n \"until Chamberlain resolved to support him. Churchill had the full support of the two Labour members but knew he could not \"\n \"survive as Prime Minister if both Chamberlain and Halifax were against him. In the end, by gaining the support of his outer \"\n \"cabinet, Churchill outmanoeuvred Halifax and won Chamberlain over. Churchill believed that the only option was to fight on \"\n \"and his use of rhetoric hardened public opinion against a peaceful resolution and prepared the British people for a long war \"\n \"– Jenkins says Churchill's speeches were 'an inspiration for the nation, and a catharsis for Churchill himself'.\"\n \" \"\n \"His first speech as Prime Minister, delivered to the Commons on 13 May was the 'blood, toil, tears and sweat' speech. It was \"\n \"little more than a short statement but, Jenkins says, 'it included phrases which have reverberated down the decades'.\")\n", "_____no_output_____" ], [ "inputs = tokenizer([sequence], max_length=1024, return_tensors='pt')", "Truncation was not explicitly activated but `max_length` is provided a specific value, please use `truncation=True` to explicitly truncate examples to max length. Defaulting to 'longest_first' truncation strategy. 
If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.\n" ], [ "summary_ids = model.generate(inputs['input_ids'])", "_____no_output_____" ], [ "summary = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids]", "_____no_output_____" ], [ "summary", "_____no_output_____" ], [ "from transformers import pipeline", "_____no_output_____" ], [ "sequence = (\"In May, Churchill was still generally unpopular with many Conservatives and probably most of the Labour Party. Chamberlain \"\n \"remained Conservative Party leader until October when ill health forced his resignation. By that time, Churchill had won the \"\n \"doubters over and his succession as party leader was a formality.\"\n \" \"\n \"He began his premiership by forming a five-man war cabinet which included Chamberlain as Lord President of the Council, \"\n \"Labour leader Clement Attlee as Lord Privy Seal (later as Deputy Prime Minister), Halifax as Foreign Secretary and Labour's \"\n \"Arthur Greenwood as a minister without portfolio. In practice, these five were augmented by the service chiefs and ministers \"\n \"who attended the majority of meetings. The cabinet changed in size and membership as the war progressed, one of the key \"\n \"appointments being the leading trades unionist Ernest Bevin as Minister of Labour and National Service. In response to \"\n \"previous criticisms that there had been no clear single minister in charge of the prosecution of the war, Churchill created \"\n \"and took the additional position of Minister of Defence, making him the most powerful wartime Prime Minister in British \"\n \"history. He drafted outside experts into government to fulfil vital functions, especially on the Home Front. These included \"\n \"personal friends like Lord Beaverbrook and Frederick Lindemann, who became the government's scientific advisor.\"\n \" \"\n \"At the end of May, with the British Expeditionary Force in retreat to Dunkirk and the Fall of France seemingly imminent, \"\n \"Halifax proposed that the government should explore the possibility of a negotiated peace settlement using the still-neutral \"\n \"Mussolini as an intermediary. There were several high-level meetings from 26 to 28 May, including two with the French \"\n \"premier Paul Reynaud. Churchill's resolve was to fight on, even if France capitulated, but his position remained precarious \"\n \"until Chamberlain resolved to support him. Churchill had the full support of the two Labour members but knew he could not \"\n \"survive as Prime Minister if both Chamberlain and Halifax were against him. In the end, by gaining the support of his outer \"\n \"cabinet, Churchill outmanoeuvred Halifax and won Chamberlain over. Churchill believed that the only option was to fight on \"\n \"and his use of rhetoric hardened public opinion against a peaceful resolution and prepared the British people for a long war \"\n \"– Jenkins says Churchill's speeches were 'an inspiration for the nation, and a catharsis for Churchill himself'.\"\n \" \"\n \"His first speech as Prime Minister, delivered to the Commons on 13 May was the 'blood, toil, tears and sweat' speech. 
It was \"\n \"little more than a short statement but, Jenkins says, 'it included phrases which have reverberated down the decades'.\")\n", "_____no_output_____" ], [ "summarizer = pipeline(\"summarization\")", "_____no_output_____" ], [ "summarized = summarizer(sequence, min_length = 75, max_length=1024)", "Your max_length is set to 1024, but you input_length is only 509. You might consider decreasing max_length manually, e.g. summarizer('...', max_length=50)\n" ], [ "summarized", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbf50a55fbeb657321fbadd20e7f916929f3fab8
315,459
ipynb
Jupyter Notebook
gpt-eda-combined.ipynb
uSaiPrashanth/eleutherai-experiments
1b8a99c6cc76ffdc0419face9b87a3d2197aa9e9
[ "MIT" ]
1
2021-08-23T16:19:22.000Z
2021-08-23T16:19:22.000Z
gpt-eda-combined.ipynb
uSaiPrashanth/eleutherai-experiments
1b8a99c6cc76ffdc0419face9b87a3d2197aa9e9
[ "MIT" ]
null
null
null
gpt-eda-combined.ipynb
uSaiPrashanth/eleutherai-experiments
1b8a99c6cc76ffdc0419face9b87a3d2197aa9e9
[ "MIT" ]
null
null
null
643.793878
123,896
0.949619
[ [ [ "# About This Notebook\n* The following notebooks utilizes the [generated outputs](https://www.kaggle.com/usaiprashanth/gptmodel-outputs) and performs some Exploratory Data Analysis", "_____no_output_____" ] ], [ [ "#loading the outputs\nimport joblib\nwithoutshuffle = joblib.load('../input/gptmodel-outputs/results (4)/withoutshuffle.pkl')\nwithshuffle = joblib.load('../input/gptmodel-outputs/results (3)/withshuffle.pkl')\ndata29 = joblib.load('../input/gptmodel-outputs/results (5)/data29.pkl')", "_____no_output_____" ] ], [ [ "* Data @param withshuffle, @param withoutshuffle @param data29 are nested arrays with following structure\n\n> array[0] index of the document with respect to THE PILE dataset\n\n> array[1] length of the document\n\n> array[2] the score of the document (number of correctly predicted labels)", "_____no_output_____" ], [ "* The folllowing two graphs compare the score of the model with and without shuffling the evaluation data\n* More information about shuffling can be found [here](https://www.kaggle.com/usaiprashanth/gpt-1-3b-model?scriptVersionId=72760342) and [here](https://www.kaggle.com/usaiprashanth/gpt-1-3b-model?scriptVersionId=72761073)", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nplt.plot(withshuffle[0],withshuffle[2],'r+')", "_____no_output_____" ], [ "plt.plot(withoutshuffle[0],withoutshuffle[2],'r+')", "_____no_output_____" ] ], [ [ "* My original interpretation of this idea (which has been proved wrong) was that the order in which the data would be evaluated would effect the evaluation loss of model. Which is inherently false. The reasoning for this is due to the fact of there being randomness involved with the model.", "_____no_output_____" ] ], [ [ "plt.plot(data29[0],data29[2],'r+')", "_____no_output_____" ] ], [ [ "* Dividing the arrays of 0th and 29th shard into 1000 buckets and plotting their average score", "_____no_output_____" ] ], [ [ "buckets = []\nplt.rcParams[\"figure.figsize\"] = (25,3)\nimport numpy as np\nfor i in range(0,10000,10):\n buckets.append(np.nanmean(withoutshuffle[2][i:i+10]))\nplt.plot(buckets)", "_____no_output_____" ] ], [ [ "* Atleast for the first 10,000 samples, there doesn't seem to be any difference in the memorization of data with respect to it's position in the dataset.\n* However, It is worth noting that 10,000 samples is a very small sampling for a dataset as big as [The Pile](https://pile.eleuther.ai/) and the results can significantly differ when evaluated with another shard of the dataset.\n* This can be generalized by plotting the bucketed version of data29 (outputs of 29th shard of THE PILE)", "_____no_output_____" ] ], [ [ "buckets = []\nplt.rcParams[\"figure.figsize\"] = (25,3)\nimport numpy as np\nfor i in range(0,10000,10):\n buckets.append(np.nanmean(data29[2][i:i+10]))\nplt.plot(buckets)", "_____no_output_____" ], [ "#Finding means and variances\nprint(np.nanmean(withoutshuffle[2]),np.nanmean(data29[2]))\nprint(np.nanvar(withoutshuffle[2]),np.nanvar(data29[2]))", "2.827240506329114 2.868052738336714\n21.352837701650376 21.301352597624344\n" ] ], [ [ "* Atleast of gpt-neo-1.3B model, there doesn't seem to be any correlation between the way data is memorized and the position of data within training dataset", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
cbf512bf74309aacdb05e4cc1fd89607bde4d15f
674
ipynb
Jupyter Notebook
Module 008.ipynb
thecortex/ml-course
8b2bf7740adca0d5f27cd9fedab7e9542c360b59
[ "MIT" ]
3
2019-05-25T11:55:03.000Z
2019-12-08T12:29:10.000Z
Module 008.ipynb
thecortex/ml-course
8b2bf7740adca0d5f27cd9fedab7e9542c360b59
[ "MIT" ]
null
null
null
Module 008.ipynb
thecortex/ml-course
8b2bf7740adca0d5f27cd9fedab7e9542c360b59
[ "MIT" ]
6
2019-07-17T16:43:56.000Z
2022-02-15T05:04:33.000Z
16.85
42
0.519288
[ [ [ "<h1>Gradient Boosting (XGBoost)</h1>", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown" ] ]
cbf51be3916e570beb14b4eea866bc628908d871
5,049
ipynb
Jupyter Notebook
data/get-region.ipynb
linflytang/databook
04ced06c85aff50f5e8b8aef1e94bdec788abff5
[ "Apache-2.0" ]
20
2018-07-27T15:14:44.000Z
2022-03-10T06:44:46.000Z
data/get-region.ipynb
linflytang/databook
04ced06c85aff50f5e8b8aef1e94bdec788abff5
[ "Apache-2.0" ]
1
2020-11-18T22:15:54.000Z
2020-11-18T22:15:54.000Z
data/get-region.ipynb
linflytang/databook
04ced06c85aff50f5e8b8aef1e94bdec788abff5
[ "Apache-2.0" ]
19
2018-07-27T07:42:22.000Z
2021-05-12T01:36:10.000Z
28.851429
379
0.517726
[ [ [ "# coding:utf-8\n# 引入requests包和正则表达式包re\nimport requests\nimport re\nimport pprint\nfrom bs4 import BeautifulSoup", "_____no_output_____" ] ], [ [ "Accept:\ttext/html,application/xhtml+xm…ml;q=0.9,image/webp,*/*;q=0.8\nAccept-Encoding: gzip, deflate\nAccept-Language: zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2\nConnection: keep-alive\nCookie:\tHm_lvt_83424855675cf222978f8cc…22978f8cc8a317290a=1552462473\nHost: pic.rivermap.cn\nUser-Agent:\tMozilla/5.0 (X11; Ubuntu; Linu…) Gecko/20100101 Firefox/65.0", "_____no_output_____" ] ], [ [ "headers = {'Accept':'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8', \\\n 'Accept-Encoding':'gzip, deflate', \\\n 'Accept-Language':'zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2', \\\n 'Connection':'keep-alive', \\\n 'Cookie':'', \\\n 'Host':'www.rivermap.cn', \\\n 'User-Agent':'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:65.0) Gecko/20100101 Firefox/65.0'} ", "_____no_output_____" ], [ "print(headers)", "{'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8', 'Accept-Encoding': 'gzip, deflate', 'Accept-Language': 'zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2', 'Connection': 'keep-alive', 'Cookie': '', 'Host': 'www.rivermap.cn', 'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:65.0) Gecko/20100101 Firefox/65.0'}\n" ] ], [ [ "url ='http://www.rivermap.cn/help/show-2097.html'\n#response=requests.get(url,headers = headers)\nresponse=requests.get(url)\nprint(response.content)", "_____no_output_____" ] ], [ [ "def load_page(url):\n response=requests.get(url,headers = headers)\n data=response.content\n return data\n\ndef load_index():\n url ='http://www.rivermap.cn/help/show-2097.html'\n html = load_page(url)\n #pprint.pprint(html)\n return html\n\ndef get_index():\n html = load_index() \n regx=r'show-[\\S]*html' # 定义图片正则表达式\n pattern=re.compile(regx) # 编译表达式构造匹配模式\n get_images=re.findall(pattern,repr(html)) # 在页面中匹配图片链接\n print(\"发现Region,共计:\",len(get_images))\n return get_images\n\ndef get_bs4():\n html_doc = load_index() \n soup = BeautifulSoup(html_doc)\n #print(soup.prettify())\n return soup\n\nsoup = get_bs4()\n#get_index()", "_____no_output_____" ], [ "print(soup.title)", "<title>天津市谷歌高清卫星地图下载(百度网盘离线包下载) - 技术文章 - 谷歌卫星地图下载器_谷歌地图高清卫星地图_北斗卫星地图_水经注万能地图下载器-水经注软件</title>\n" ], [ "alist = soup.find_all('a')\nwith open('region.txt', \"w\") as fregion:\n fregion.write('{') \n for a in alist:\n if 'data-href' in a.attrs:\n if a.attrs['title'].find('地图离线包下载')>2 and len(str(a.attrs['class']))>=13 :\n #print(\"#\",str(a.attrs['class']).strip()[7:11],\",\",a.attrs['title'])\n fregion.write(\"'\" + str(a.attrs['class']).strip()[7:11] + \"-\" + a.attrs['title'] + \"':\")\n #print(\"http://www.rivermap.cn\"+a.attrs['href'])\n fregion.write(\"'http://www.rivermap.cn\" + a.attrs['href'] + \"',\\n\")\n fregion.write('}')\n \n#print(alist.attrs['data-href'])", "_____no_output_____" ] ] ]
[ "code", "raw", "code", "raw", "code" ]
[ [ "code" ], [ "raw" ], [ "code", "code" ], [ "raw" ], [ "code", "code", "code" ] ]
cbf51e2a86a2ac583219359b6ab89944e046b996
86,910
ipynb
Jupyter Notebook
Simple-SIR-model-in-network-and-MFT/SIR-MF.ipynb
shahmari/Some-minor-effort-in-the-epidemic
227dcd4ca5846faa832d3c82f4c1d7c551b6a398
[ "MIT" ]
2
2021-08-25T09:56:26.000Z
2021-10-17T16:11:17.000Z
Simple-SIR-model-in-network-and-MFT/SIR-MF.ipynb
shahmari/Some-minor-effort-in-the-epidemic
227dcd4ca5846faa832d3c82f4c1d7c551b6a398
[ "MIT" ]
null
null
null
Simple-SIR-model-in-network-and-MFT/SIR-MF.ipynb
shahmari/Some-minor-effort-in-the-epidemic
227dcd4ca5846faa832d3c82f4c1d7c551b6a398
[ "MIT" ]
null
null
null
877.878788
47,732
0.778276
[ [ [ "import scipy.integrate as spi\nimport numpy as np\nimport pylab as pl\n\nbeta=1/3\ngamma=1/5\nTS=1.0\nND=100\nS0=1-1/1600\nI0=1/1600\nINPUT = (S0, I0, 0.0)\n\n\ndef diff_eqs(INP,t): \n\t'''The main set of equations'''\n\tY=np.zeros((3))\n\tV = INP \n\tY[0] = - beta * V[0] * V[1]\n\tY[1] = beta * V[0] * V[1] - gamma * V[1]\n\tY[2] = gamma * V[1]\n\treturn Y # For odeint\n\nt_start = 0.0; t_end = ND; t_inc = TS\nt_range = np.arange(t_start, t_end+t_inc, t_inc)\nRES = spi.odeint(diff_eqs,INPUT,t_range)\n\nfor i in range(len(RES)):\n\tRES[i,0]*=1600\n\tRES[i,1]*=1600\n\tRES[i,2]*=1600\n\npl.figure(figsize=(10,7))\npl.rcParams.update({'font.size': 10})\npl.title('SIR Mean-field simulation')\npl.plot(RES[:,0], '-b', label='Susceptibles')\npl.plot(RES[:,1], '-r', label='Infectious')\npl.plot(RES[:,2], '-g', label='Recovereds')\npl.legend(loc=0)\npl.xlabel('Time(day)')\npl.ylabel('people number')\npl.savefig('SIR-Mean-field')\npl.show()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
cbf51f74fa7d3388491a1d73a1e5cf74fa579c97
24,164
ipynb
Jupyter Notebook
notebooks/Preprocessing.ipynb
bereketkibru/AChallenge
4bfb33626c8647a6d1493b4cb6531bf67c2ce36c
[ "MIT" ]
null
null
null
notebooks/Preprocessing.ipynb
bereketkibru/AChallenge
4bfb33626c8647a6d1493b4cb6531bf67c2ce36c
[ "MIT" ]
null
null
null
notebooks/Preprocessing.ipynb
bereketkibru/AChallenge
4bfb33626c8647a6d1493b4cb6531bf67c2ce36c
[ "MIT" ]
null
null
null
32.522207
116
0.407052
[ [ [ "## Import Libraries\r\nimport sys\r\nimport os\r\nsys.path.append(os.path.abspath(os.path.join('..')))\r\nimport pandas as pd\r\nimport seaborn as sns\r\nimport matplotlib.pyplot as plt\r\nimport numpy as np \r\nfrom pandas.api.types import is_string_dtype, is_numeric_dtype\r\n%matplotlib inline", "_____no_output_____" ], [ "CSV_PATH = \"../data/impression_log.csv\"", "_____no_output_____" ], [ "# taking a csv file path and reading a dataframe\r\n\r\ndef read_proccessed_data(csv_path):\r\n try: \r\n df = pd.read_csv(csv_path)\r\n print(\"file read as csv\")\r\n return df\r\n except FileNotFoundError:\r\n print(\"file not found\")", "_____no_output_____" ], [ "## getting number of columns, row and column information\r\ndef get_data_info(Ilog_df: pd.DataFrame):\r\n \r\n row_count, col_count = Ilog_df.shape\r\n \r\n print(f\"Number of rows: {row_count}\")\r\n print(f\"Number of columns: {col_count}\")\r\n\r\n return Ilog_df.info()", "_____no_output_____" ], [ "## basic statistics of each column and see the data at glance\r\ndef get_statistics_info(Ilog_df: pd.DataFrame):\r\n \r\n return Ilog_df.describe(include='all')", "_____no_output_____" ], [ "# reading the extracted impression_log data and getting information\r\nIlog_df = read_proccessed_data(CSV_PATH)\r\nget_data_info(Ilog_df)\r\n\r\nget_statistics_info(Ilog_df)\r\nIlog_df.head()", "file read as csv\nNumber of rows: 100000\nNumber of columns: 24\n<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 100000 entries, 0 to 99999\nData columns (total 24 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Unnamed: 0 100000 non-null object \n 1 LogEntryTime 100000 non-null object \n 2 AdvertiserId 100000 non-null object \n 3 CampaignId 100000 non-null object \n 4 AdGroupId 100000 non-null object \n 5 AudienceID 96546 non-null object \n 6 CreativeId 100000 non-null object \n 7 AdFormat 100000 non-null object \n 8 Frequency 100000 non-null int64 \n 9 Site 100000 non-null object \n 10 FoldPosition 100000 non-null int64 \n 11 Country 100000 non-null object \n 12 Region 99999 non-null object \n 13 City 99999 non-null object \n 14 DeviceType 100000 non-null int64 \n 15 OSFamily 99995 non-null float64\n 16 OS 99993 non-null float64\n 17 Browser 99993 non-null float64\n 18 DeviceMake 99993 non-null object \n 19 AdvertiserCurrency 100000 non-null float64\n 20 click 100000 non-null int64 \n 21 engagement 100000 non-null int64 \n 22 video-end 100000 non-null int64 \n 23 video-start 100000 non-null int64 \ndtypes: float64(4), int64(7), object(13)\nmemory usage: 18.3+ MB\n" ] ], [ [ "## Missing Values", "_____no_output_____" ] ], [ [ "def percent_missing(df):\r\n\r\n totalCells = np.product(df.shape)\r\n missingCount = df.isnull().sum()\r\n totalMissing = missingCount.sum()\r\n return round((totalMissing / totalCells) * 100, 2)\r\nprint(\"The Impression_log data dataset contains\", percent_missing(Ilog_df), \"%\", \"missing values.\")", "The Impression_log data dataset contains 0.15 % missing values.\n" ] ], [ [ "## Handling Missing Values", "_____no_output_____" ] ], [ [ "def percent_missing_for_col(df, col_name: str):\r\n total_count = len(df[col_name])\r\n if total_count <= 0:\r\n return 0.0\r\n missing_count = df[col_name].isnull().sum()\r\n \r\n return round((missing_count / total_count) * 100, 2)", "_____no_output_____" ], [ "null_percent_df = pd.DataFrame(columns = ['column', 'null_percent'])\r\ncolumns = Ilog_df.columns.values.tolist()\r\nnull_percent_df['column'] = columns\r\nnull_percent_df['null_percent'] = 
null_percent_df['column'].map(lambda x: percent_missing_for_col(Ilog_df, x))", "_____no_output_____" ], [ "null_percent_df.sort_values(by=['null_percent'], ascending = False)", "_____no_output_____" ] ], [ [ "### I used forward fill method to fill the missing values", "_____no_output_____" ] ], [ [ "Ilog_df['AudienceID'] = Ilog_df['AudienceID'].fillna(method='ffill')\r\nIlog_df['DeviceMake'] = Ilog_df['DeviceMake'].fillna(method='ffill')\r\nIlog_df['Browser'] = Ilog_df['Browser'].fillna(method='ffill')\r\nIlog_df['OS'] = Ilog_df['OS'].fillna(method='ffill')", "_____no_output_____" ], [ "Ilog_df['OSFamily'] = Ilog_df['OSFamily'].fillna(method='ffill')\r\nIlog_df['Region'] = Ilog_df['Region'].fillna(method='ffill')\r\nIlog_df['City'] = Ilog_df['City'].fillna(method='ffill')", "_____no_output_____" ], [ "#checking after handling the missing values\r\ndef percent_missing(df):\r\n totalCells = np.product(df.shape)\r\n missingCount = df.isnull().sum()\r\n totalMissing = missingCount.sum()\r\n return round((totalMissing / totalCells) * 100, 2)\r\nprint(\"The Impression_log data dataset contains\", percent_missing(Ilog_df), \"%\", \"missing values.\")", "The Impression_log data dataset contains 0.0 % missing values.\n" ] ], [ [ "Remove dupilicate rows", "_____no_output_____" ] ], [ [ "Ilog_df.drop_duplicates(inplace=True)", "_____no_output_____" ], [ "Ilog_df.info()", "<class 'pandas.core.frame.DataFrame'>\nInt64Index: 99999 entries, 0 to 99999\nData columns (total 24 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 Unnamed: 0 99999 non-null object \n 1 LogEntryTime 99999 non-null object \n 2 AdvertiserId 99999 non-null object \n 3 CampaignId 99999 non-null object \n 4 AdGroupId 99999 non-null object \n 5 AudienceID 99999 non-null object \n 6 CreativeId 99999 non-null object \n 7 AdFormat 99999 non-null object \n 8 Frequency 99999 non-null int64 \n 9 Site 99999 non-null object \n 10 FoldPosition 99999 non-null int64 \n 11 Country 99999 non-null object \n 12 Region 99999 non-null object \n 13 City 99999 non-null object \n 14 DeviceType 99999 non-null int64 \n 15 OSFamily 99999 non-null float64\n 16 OS 99999 non-null float64\n 17 Browser 99999 non-null float64\n 18 DeviceMake 99999 non-null object \n 19 AdvertiserCurrency 99999 non-null float64\n 20 click 99999 non-null int64 \n 21 engagement 99999 non-null int64 \n 22 video-end 99999 non-null int64 \n 23 video-start 99999 non-null int64 \ndtypes: float64(4), int64(7), object(13)\nmemory usage: 19.1+ MB\n" ], [ "Ilog_df.to_csv(\"../data/processed.csv\",index=False)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
cbf521fac18986e339d6ff78eb80a9637a674eaf
11,627
ipynb
Jupyter Notebook
autoencoder/denoising-autoencoder/Denoising_Autoencoder_Exercise.ipynb
ruddyscent/deep-learning-v2-pytorch
330a229c197bd145701b41a97707ac7143f5532b
[ "MIT" ]
null
null
null
autoencoder/denoising-autoencoder/Denoising_Autoencoder_Exercise.ipynb
ruddyscent/deep-learning-v2-pytorch
330a229c197bd145701b41a97707ac7143f5532b
[ "MIT" ]
1
2022-02-10T06:56:37.000Z
2022-02-10T06:56:37.000Z
autoencoder/denoising-autoencoder/Denoising_Autoencoder_Exercise.ipynb
ruddyscent/deep-learning-v2-pytorch
330a229c197bd145701b41a97707ac7143f5532b
[ "MIT" ]
null
null
null
36.108696
359
0.567472
[ [ [ "# Denoising Autoencoder\n\nSticking with the MNIST dataset, let's add noise to our data and see if we can define and train an autoencoder to _de_-noise the images.\n\n<img src='notebook_ims/autoencoder_denoise.png' width=70%/>\n\nLet's get started by importing our libraries and getting the dataset.", "_____no_output_____" ] ], [ [ "import torch\nimport numpy as np\nfrom torchvision import datasets\nfrom torchvision import transforms\n\n# convert data to torch.FloatTensor\ntransform = transforms.ToTensor()\n\n# load the training and test datasets\ntrain_data = datasets.MNIST(root='data', train=True,\n download=True, transform=transform)\ntest_data = datasets.MNIST(root='data', train=False,\n download=True, transform=transform)\n\n# Create training and test dataloaders\nnum_workers = 0\n# how many samples per batch to load\nbatch_size = 20\n\n# prepare data loaders\ntrain_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, num_workers=num_workers)\ntest_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers)", "_____no_output_____" ], [ "train_on_gpu = torch.cuda.is_available()", "_____no_output_____" ] ], [ [ "### Visualize the Data", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\n%matplotlib inline\n \n# obtain one batch of training images\ndataiter = iter(train_loader)\nimages, labels = dataiter.next()\nimages = images.numpy()\n\n# get one image from the batch\nimg = np.squeeze(images[0])\n\nfig = plt.figure(figsize = (5,5)) \nax = fig.add_subplot(111)\nax.imshow(img, cmap='gray')", "_____no_output_____" ] ], [ [ "---\n# Denoising\n\nAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practive. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1.\n\n>**We'll use noisy images as input and the original, clean images as targets.** \n\nBelow is an example of some of the noisy images I generated and the associated, denoised images.\n\n<img src='notebook_ims/denoising.png' />\n\n\nSince this is a harder problem for the network, we'll want to use _deeper_ convolutional layers here; layers with more feature maps. You might also consider adding additional layers. I suggest starting with a depth of 32 for the convolutional layers in the encoder, and the same depths going backward through the decoder.\n\n#### TODO: Build the network for the denoising autoencoder. 
Add deeper and/or additional layers compared to the model above.", "_____no_output_____" ] ], [ [ "import torch.nn as nn\nimport torch.nn.functional as F\n\n# define the NN architecture\nclass ConvDenoiser(nn.Module):\n def __init__(self):\n super(ConvDenoiser, self).__init__()\n ## encoder layers ##\n self.conv1 = nn.Conv2d(1, 4, 3, padding=1)\n self.conv2 = nn.Conv2d(4, 16, 3, padding=1)\n self.conv3 = nn.Conv2d(16, 32, 3, padding=1)\n self.maxpool = nn.MaxPool2d(2)\n \n ## decoder layers ##\n ## a kernel of 2 and a stride of 2 will increase the spatial dims by 2\n self.t_conv1 = nn.ConvTranspose2d(32, 16, 3, stride=2)\n self.t_conv2 = nn.ConvTranspose2d(16, 4, 2, stride=2)\n self.t_conv3 = nn.ConvTranspose2d(4, 1, 2, stride=2)\n\n def forward(self, x):\n ## encode ##\n x = F.relu(self.conv1(x))\n x = self.maxpool(x)\n x = F.relu(self.conv2(x))\n x = self.maxpool(x)\n x = F.relu(self.conv3(x))\n x = self.maxpool(x)\n \n ## decode ##\n ## apply ReLu to all hidden layers *except for the output layer\n ## apply a sigmoid to the output layer\n x = F.relu(self.t_conv1(x))\n x = F.relu(self.t_conv2(x))\n x = torch.sigmoid(self.t_conv3(x))\n \n return x\n\n# initialize the NN\nmodel = ConvDenoiser()\nif train_on_gpu:\n model.cuda()\nprint(model)\n", "ConvDenoiser(\n (conv1): Conv2d(1, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (conv2): Conv2d(4, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (conv3): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))\n (maxpool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n (t_conv1): ConvTranspose2d(32, 16, kernel_size=(3, 3), stride=(2, 2))\n (t_conv2): ConvTranspose2d(16, 4, kernel_size=(2, 2), stride=(2, 2))\n (t_conv3): ConvTranspose2d(4, 1, kernel_size=(2, 2), stride=(2, 2))\n)\n" ] ], [ [ "---\n## Training\n\nWe are only concerned with the training images, which we can get from the `train_loader`.\n\n>In this case, we are actually **adding some noise** to these images and we'll feed these `noisy_imgs` to our model. The model will produce reconstructed images based on the noisy input. But, we want it to produce _normal_ un-noisy images, and so, when we calculate the loss, we will still compare the reconstructed outputs to the original images!\n\nBecause we're comparing pixel values in input and output images, it will be best to use a loss that is meant for a regression task. Regression is all about comparing quantities rather than probabilistic values. So, in this case, I'll use `MSELoss`. 
We'll compare output images and input images as follows:\n```\nloss = criterion(outputs, images)\n```", "_____no_output_____" ] ], [ [ "# specify loss function\ncriterion = nn.MSELoss()\n\n# specify optimizer\noptimizer = torch.optim.Adam(model.parameters(), lr=0.001)", "_____no_output_____" ], [ "# number of epochs to train the model\nn_epochs = 20\n\n# for adding noise to images\nnoise_factor=0.5\n\nfor epoch in range(1, n_epochs+1):\n # monitor training loss\n train_loss = 0.0\n \n ###################\n # train the model #\n ###################\n for images, _ in train_loader:\n ## add random noise to the input images\n noisy_imgs = images + noise_factor * torch.randn(*images.shape)\n # Clip the images to be between 0 and 1\n noisy_imgs = np.clip(noisy_imgs, 0., 1.)\n \n if train_on_gpu:\n images, noisy_imgs = images.cuda(), noisy_imgs.cuda()\n \n # clear the gradients of all optimized variables\n optimizer.zero_grad()\n ## forward pass: compute predicted outputs by passing *noisy* images to the model\n outputs = model(noisy_imgs)\n # calculate the loss\n # the \"target\" is still the original, not-noisy images\n loss = criterion(outputs, images)\n # backward pass: compute gradient of the loss with respect to model parameters\n loss.backward()\n # perform a single optimization step (parameter update)\n optimizer.step()\n # update running training loss\n train_loss += loss.item()*images.size(0)\n \n # print avg training statistics \n train_loss = train_loss/len(train_loader)\n print(f\"Epoch: {epoch} \\tTraining Loss: {train_loss:.6f}\")", "_____no_output_____" ] ], [ [ "## Checking out the results\n\nHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.", "_____no_output_____" ] ], [ [ "# obtain one batch of test images\ndataiter = iter(test_loader)\nimages, _ = dataiter.next()\n\n# add noise to the test images\nnoisy_imgs = images + noise_factor * torch.randn(*images.shape)\nnoisy_imgs = np.clip(noisy_imgs, 0, 1)\n\nif train_on_gpu:\n images, noisy_imgs = images.cuda(), noisy_imgs.cuda()\n \n# get sample outputs\noutput = model(noisy_imgs)\n# prep images for display\nnoisy_imgs = noisy_imgs.cpu().numpy() if train_on_gpu else noisy_imgs.numpy()\n\n# output is resized into a batch of images\noutput = output.view(batch_size, 1, 28, 28)\n# use detach when it's an output that requires_grad\noutput = output.detach().cpu().numpy()\n\n# plot the first ten input images and then reconstructed images\nfig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(25,4))\n\n# input images on top row, reconstructions on bottom\nfor noisy_imgs, row in zip([noisy_imgs, output], axes):\n for img, ax in zip(noisy_imgs, row):\n ax.imshow(np.squeeze(img), cmap='gray')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
cbf5270689cf943deffe35f42bce2afa2c2635b4
4,505
ipynb
Jupyter Notebook
tests/benchmarks/CompilerServiceBenchmarks/benchmarks.ipynb
MecuStefan/fsharp
746b5024b3e0390f17d66d49241b39f6fed6a058
[ "MIT" ]
1,672
2019-05-15T21:18:43.000Z
2022-03-30T04:35:50.000Z
tests/benchmarks/CompilerServiceBenchmarks/benchmarks.ipynb
MecuStefan/fsharp
746b5024b3e0390f17d66d49241b39f6fed6a058
[ "MIT" ]
3,530
2019-05-15T20:32:51.000Z
2022-03-31T23:44:10.000Z
tests/benchmarks/CompilerServiceBenchmarks/benchmarks.ipynb
MecuStefan/fsharp
746b5024b3e0390f17d66d49241b39f6fed6a058
[ "MIT" ]
389
2019-05-15T20:24:05.000Z
2022-03-31T22:01:45.000Z
29.253247
182
0.497447
[ [ [ "#!pwsh\r\ndotnet build -c release\r\n", "_____no_output_____" ], [ "#r \"../../../artifacts/bin/FSharp.Compiler.Benchmarks/Release/net5.0/FSharp.Compiler.Benchmarks.dll\"\r\n#r \"../../../artifacts/bin/FSharp.Compiler.Benchmarks/Release/net5.0/BenchmarkDotNet.dll\"", "_____no_output_____" ], [ "open BenchmarkDotNet.Running\r\nopen FSharp.Compiler.Benchmarks\r\n\r\nlet summary = BenchmarkRunner.Run<TypeCheckingBenchmark1>()", "_____no_output_____" ], [ "// https://benchmarkdotnet.org/api/BenchmarkDotNet.Reports.BenchmarkReport.html\r\n#r \"nuget: XPlot.Plotly.Interactive, 4.0.2\"\r\n\r\nopen XPlot.Plotly\r\n\r\nlet gcStats = summary.Reports |> Seq.map (fun x -> x.GcStats)\r\n\r\nlet gen0Series =\r\n Bar(\r\n name = \"Gen 0\",\r\n y = (gcStats |> Seq.map (fun x -> x.Gen0Collections))\r\n )\r\n\r\nlet gen1Series =\r\n Bar(\r\n name = \"Gen 1\",\r\n y = (gcStats |> Seq.map (fun x -> x.Gen1Collections))\r\n )\r\n\r\nlet gen2Series =\r\n Bar(\r\n name = \"Gen 2\",\r\n y = (gcStats |> Seq.map (fun x -> x.Gen2Collections))\r\n )\r\n\r\n[gen0Series;gen1Series;gen2Series]\r\n|> Chart.Plot\r\n|> Chart.WithTitle(\"F# Type-Checking Benchmark 1 - GC Collection Counts\")", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
cbf53320fa6f4bf30ef192b881505717afdae27c
258,653
ipynb
Jupyter Notebook
try_runs/Sparkify_2nd_run.ipynb
kkitsara/Udacity_Capstone_Sparkify
2cd7fb6c923bdd7679afa25575b3b48087d57e9f
[ "MIT" ]
null
null
null
try_runs/Sparkify_2nd_run.ipynb
kkitsara/Udacity_Capstone_Sparkify
2cd7fb6c923bdd7679afa25575b3b48087d57e9f
[ "MIT" ]
null
null
null
try_runs/Sparkify_2nd_run.ipynb
kkitsara/Udacity_Capstone_Sparkify
2cd7fb6c923bdd7679afa25575b3b48087d57e9f
[ "MIT" ]
null
null
null
99.904596
12,244
0.765187
[ [ [ "# Sparkify Project Workspace\nThis workspace contains a tiny subset (128MB) of the full dataset available (12GB). Feel free to use this workspace to build your project, or to explore a smaller subset with Spark before deploying your cluster on the cloud. Instructions for setting up your Spark cluster is included in the last lesson of the Extracurricular Spark Course content.\n\nYou can follow the steps below to guide your data analysis and model building portion of this project.", "_____no_output_____" ] ], [ [ "# import libraries\nfrom pyspark.sql import SparkSession\nimport pandas as pd\nfrom pyspark.sql.functions import isnan, when, count, col, countDistinct, to_timestamp\nfrom pyspark.sql import functions as F\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom pyspark.ml.feature import MinMaxScaler, VectorAssembler\nfrom pyspark.sql.types import IntegerType\nfrom pyspark.ml import Pipeline\nfrom pyspark.ml.classification import LogisticRegression, RandomForestClassifier, LinearSVC, GBTClassifier\nfrom pyspark.ml.evaluation import MulticlassClassificationEvaluator\nfrom pyspark.ml.tuning import CrossValidator, ParamGridBuilder", "_____no_output_____" ], [ "# create a Spark session\nspark = SparkSession \\\n .builder \\\n .appName(\"Python Spark SQL\") \\\n .getOrCreate()", "_____no_output_____" ] ], [ [ "# Load and Clean Dataset\nIn this workspace, the mini-dataset file is `mini_sparkify_event_data.json`. Load and clean the dataset, checking for invalid or missing data - for example, records without userids or sessionids. ", "_____no_output_____" ] ], [ [ "df = spark.read.json('mini_sparkify_event_data.json')", "_____no_output_____" ], [ "df.show(5)", "+----------------+---------+---------+------+-------------+--------+---------+-----+--------------------+------+--------+-------------+---------+--------------------+------+-------------+--------------------+------+\n| artist| auth|firstName|gender|itemInSession|lastName| length|level| location|method| page| registration|sessionId| song|status| ts| userAgent|userId|\n+----------------+---------+---------+------+-------------+--------+---------+-----+--------------------+------+--------+-------------+---------+--------------------+------+-------------+--------------------+------+\n| Martha Tilston|Logged In| Colin| M| 50| Freeman|277.89016| paid| Bakersfield, CA| PUT|NextSong|1538173362000| 29| Rockpools| 200|1538352117000|Mozilla/5.0 (Wind...| 30|\n|Five Iron Frenzy|Logged In| Micah| M| 79| Long|236.09424| free|Boston-Cambridge-...| PUT|NextSong|1538331630000| 8| Canada| 200|1538352180000|\"Mozilla/5.0 (Win...| 9|\n| Adam Lambert|Logged In| Colin| M| 51| Freeman| 282.8273| paid| Bakersfield, CA| PUT|NextSong|1538173362000| 29| Time For Miracles| 200|1538352394000|Mozilla/5.0 (Wind...| 30|\n| Enigma|Logged In| Micah| M| 80| Long|262.71302| free|Boston-Cambridge-...| PUT|NextSong|1538331630000| 8|Knocking On Forbi...| 200|1538352416000|\"Mozilla/5.0 (Win...| 9|\n| Daft Punk|Logged In| Colin| M| 52| Freeman|223.60771| paid| Bakersfield, CA| PUT|NextSong|1538173362000| 29|Harder Better Fas...| 200|1538352676000|Mozilla/5.0 (Wind...| 30|\n+----------------+---------+---------+------+-------------+--------+---------+-----+--------------------+------+--------+-------------+---------+--------------------+------+-------------+--------------------+------+\nonly showing top 5 rows\n\n" ], [ "print((df.count(), len(df.columns)))", "(286500, 18)\n" ], [ "df.select([count(when(isnan(c) | col(c).isNull(), c)).alias(c) for c in 
df.columns]).show()", "+------+----+---------+------+-------------+--------+------+-----+--------+------+----+------------+---------+-----+------+---+---------+------+\n|artist|auth|firstName|gender|itemInSession|lastName|length|level|location|method|page|registration|sessionId| song|status| ts|userAgent|userId|\n+------+----+---------+------+-------------+--------+------+-----+--------+------+----+------------+---------+-----+------+---+---------+------+\n| 58392| 0| 8346| 8346| 0| 8346| 58392| 0| 8346| 0| 0| 8346| 0|58392| 0| 0| 8346| 0|\n+------+----+---------+------+-------------+--------+------+-----+--------+------+----+------------+---------+-----+------+---+---------+------+\n\n" ], [ "df.select(col('location')).groupBy('location').count().count()", "_____no_output_____" ], [ "for column in df.columns:\n if df.select(col(column)).groupBy(column).count().count()<30:\n print('\\033[1m' + column + '\\033[0m') , print(df.select(col(column)).groupBy(column).count().show(30, False))", "\u001b[1mauth\u001b[0m\n+----------+------+\n|auth |count |\n+----------+------+\n|Logged Out|8249 |\n|Cancelled |52 |\n|Guest |97 |\n|Logged In |278102|\n+----------+------+\n\nNone\n\u001b[1mgender\u001b[0m\n+------+------+\n|gender|count |\n+------+------+\n|F |154578|\n|null |8346 |\n|M |123576|\n+------+------+\n\nNone\n\u001b[1mlevel\u001b[0m\n+-----+------+\n|level|count |\n+-----+------+\n|free |58338 |\n|paid |228162|\n+-----+------+\n\nNone\n\u001b[1mmethod\u001b[0m\n+------+------+\n|method|count |\n+------+------+\n|PUT |261064|\n|GET |25436 |\n+------+------+\n\nNone\n\u001b[1mpage\u001b[0m\n+-------------------------+------+\n|page |count |\n+-------------------------+------+\n|Cancel |52 |\n|Submit Downgrade |63 |\n|Thumbs Down |2546 |\n|Home |14457 |\n|Downgrade |2055 |\n|Roll Advert |3933 |\n|Logout |3226 |\n|Save Settings |310 |\n|Cancellation Confirmation|52 |\n|About |924 |\n|Submit Registration |5 |\n|Settings |1514 |\n|Login |3241 |\n|Register |18 |\n|Add to Playlist |6526 |\n|Add Friend |4277 |\n|NextSong |228108|\n|Thumbs Up |12551 |\n|Help |1726 |\n|Upgrade |499 |\n|Error |258 |\n|Submit Upgrade |159 |\n+-------------------------+------+\n\nNone\n\u001b[1mstatus\u001b[0m\n+------+------+\n|status|count |\n+------+------+\n|307 |26430 |\n|404 |258 |\n|200 |259812|\n+------+------+\n\nNone\n" ], [ "df.where(col(\"firstName\").isNull()).select(col('auth')).groupBy('auth').count().show()", "+----------+-----+\n| auth|count|\n+----------+-----+\n|Logged Out| 8249|\n| Guest| 97|\n+----------+-----+\n\n" ], [ "df.where(col(\"firstName\").isNull()).select(col('level')).groupBy('level').count().show()", "+-----+-----+\n|level|count|\n+-----+-----+\n| free| 2617|\n| paid| 5729|\n+-----+-----+\n\n" ], [ "df.where(col(\"firstName\").isNull()).select(col('page')).groupBy('page').count().show()", "+-------------------+-----+\n| page|count|\n+-------------------+-----+\n| Home| 4375|\n| About| 429|\n|Submit Registration| 5|\n| Login| 3241|\n| Register| 18|\n| Help| 272|\n| Error| 6|\n+-------------------+-----+\n\n" ], [ "df.where(col(\"artist\").isNotNull()).select(col('page')).groupBy('page').count().show()", "+--------+------+\n| page| count|\n+--------+------+\n|NextSong|228108|\n+--------+------+\n\n" ], [ "df.where(col(\"artist\").isNull()).select(col('page')).groupBy('page').count().show()", "+--------------------+-----+\n| page|count|\n+--------------------+-----+\n| Cancel| 52|\n| Submit Downgrade| 63|\n| Thumbs Down| 2546|\n| Home|14457|\n| Downgrade| 2055|\n| Roll Advert| 3933|\n| 
Logout| 3226|\n|       Save Settings|  310|\n|Cancellation Conf...|   52|\n|               About|  924|\n| Submit Registration|    5|\n|            Settings| 1514|\n|               Login| 3241|\n|            Register|   18|\n|     Add to Playlist| 6526|\n|          Add Friend| 4277|\n|           Thumbs Up|12551|\n|                Help| 1726|\n|             Upgrade|  499|\n|               Error|  258|\n+--------------------+-----+\nonly showing top 20 rows\n\n" ] ], [ [ "We have 2 different types of missing values. \n\n1. Missing user data for 8346 entries. From the analysis above it seems that the users that have null data are users that have not logged in to the app yet. As these events cannot be correlated with the userId, we cannot use them, so we will drop them.\n2. Missing song data for 58392 entries. From the analysis above it seems that the missing songs are reasonable. The song data are populated only when the page is NextSong, so we will keep all these entries for now.", "_____no_output_____" ] ], [ [ "df = df.na.drop(subset=[\"firstName\"])", "_____no_output_____" ], [ "df.select([count(when(isnan(c) | col(c).isNull(), c)).alias(c) for c in df.columns]).show()", "+------+----+---------+------+-------------+--------+------+-----+--------+------+----+------------+---------+-----+------+---+---------+------+\n|artist|auth|firstName|gender|itemInSession|lastName|length|level|location|method|page|registration|sessionId| song|status| ts|userAgent|userId|\n+------+----+---------+------+-------------+--------+------+-----+--------+------+----+------------+---------+-----+------+---+---------+------+\n| 50046|   0|        0|     0|            0|       0| 50046|    0|       0|     0|   0|           0|        0|50046|     0|  0|        0|     0|\n+------+----+---------+------+-------------+--------+------+-----+--------+------+----+------------+---------+-----+------+---+---------+------+\n\n" ] ], [ [ "# Exploratory Data Analysis\nWhen you're working with the full dataset, perform EDA by loading a small subset of the data and doing basic manipulations within Spark. In this workspace, you are already provided a small subset of data you can explore.\n\n### Define Churn\n\nOnce you've done some preliminary analysis, create a column `Churn` to use as the label for your model. I suggest using the `Cancellation Confirmation` events to define your churn, which happen for both paid and free users. As a bonus task, you can also look into the `Downgrade` events.\n\n### Explore Data\nOnce you've defined churn, perform some exploratory data analysis to observe the behavior for users who stayed vs users who churned. 
You can start by exploring aggregates on these two groups of users, observing how much of a specific action they experienced per a certain time unit or number of songs played.", "_____no_output_____" ] ], [ [ "df.createOrReplaceTempView(\"DATA\")", "_____no_output_____" ], [ "spark.sql(\"\"\"\nSELECT count(distinct userId) FROM DATA \n\"\"\").show(10, False)", "+----------------------+\n|count(DISTINCT userId)|\n+----------------------+\n|225 |\n+----------------------+\n\n" ], [ "spark.sql(\"\"\"\nSELECT distinct userId,page \nFROM DATA \nwhere page in ('Cancellation Confirmation','Downgrade') \norder by userId,page\n\"\"\").show(10, False)", "+------+-------------------------+\n|userId|page |\n+------+-------------------------+\n|10 |Downgrade |\n|100 |Downgrade |\n|100001|Cancellation Confirmation|\n|100002|Downgrade |\n|100003|Cancellation Confirmation|\n|100004|Downgrade |\n|100005|Cancellation Confirmation|\n|100006|Cancellation Confirmation|\n|100007|Cancellation Confirmation|\n|100007|Downgrade |\n+------+-------------------------+\nonly showing top 10 rows\n\n" ], [ "spark.sql(\"\"\"\nSELECT page,to_timestamp(ts/1000) as ts,level\nFROM DATA \nwhere userId='100001'\norder by ts\n\"\"\").show(500, False)", "+-------------------------+-------------------+-----+\n|page |ts |level|\n+-------------------------+-------------------+-----+\n|Home |2018-10-01 06:48:24|free |\n|NextSong |2018-10-01 06:48:29|free |\n|Roll Advert |2018-10-01 06:49:02|free |\n|NextSong |2018-10-01 06:52:27|free |\n|Roll Advert |2018-10-01 06:53:03|free |\n|NextSong |2018-10-01 07:02:29|free |\n|NextSong |2018-10-01 07:09:08|free |\n|NextSong |2018-10-01 07:12:12|free |\n|NextSong |2018-10-01 07:17:25|free |\n|NextSong |2018-10-01 07:21:23|free |\n|NextSong |2018-10-01 07:24:47|free |\n|NextSong |2018-10-01 07:27:57|free |\n|NextSong |2018-10-01 07:30:41|free |\n|NextSong |2018-10-01 07:33:27|free |\n|Roll Advert |2018-10-01 07:33:50|free |\n|NextSong |2018-10-01 07:37:00|free |\n|NextSong |2018-10-01 07:41:08|free |\n|NextSong |2018-10-01 07:46:40|free |\n|NextSong |2018-10-01 07:49:39|free |\n|Logout |2018-10-01 07:49:40|free |\n|Home |2018-10-01 07:53:49|free |\n|NextSong |2018-10-01 07:54:41|free |\n|NextSong |2018-10-01 08:00:29|free |\n|NextSong |2018-10-01 08:07:53|free |\n|Thumbs Down |2018-10-01 08:07:54|free |\n|NextSong |2018-10-01 08:10:49|free |\n|NextSong |2018-10-01 08:15:08|free |\n|NextSong |2018-10-01 08:19:38|free |\n|NextSong |2018-10-01 08:23:37|free |\n|NextSong |2018-10-01 08:27:56|free |\n|Roll Advert |2018-10-01 08:29:10|free |\n|NextSong |2018-10-01 08:31:14|free |\n|Roll Advert |2018-10-01 08:32:55|free |\n|NextSong |2018-10-01 08:36:04|free |\n|NextSong |2018-10-01 08:40:34|free |\n|NextSong |2018-10-01 08:45:32|free |\n|NextSong |2018-10-01 08:49:17|free |\n|Logout |2018-10-01 08:49:18|free |\n|Home |2018-10-01 08:50:46|free |\n|NextSong |2018-10-01 08:53:23|free |\n|Add Friend |2018-10-01 08:53:24|free |\n|NextSong |2018-10-01 08:57:20|free |\n|NextSong |2018-10-01 09:08:15|free |\n|NextSong |2018-10-01 09:11:45|free |\n|NextSong |2018-10-01 09:15:53|free |\n|Thumbs Up |2018-10-01 09:15:54|free |\n|NextSong |2018-10-01 09:21:22|free |\n|NextSong |2018-10-01 09:25:17|free |\n|Add to Playlist |2018-10-01 09:27:46|free |\n|NextSong |2018-10-01 09:28:02|free |\n|Logout |2018-10-01 09:28:03|free |\n|Home |2018-10-02 03:41:36|free |\n|NextSong |2018-10-02 03:41:40|free |\n|NextSong |2018-10-02 03:50:19|free |\n|Thumbs Up |2018-10-02 03:50:20|free |\n|NextSong |2018-10-02 03:53:14|free 
|\n|NextSong |2018-10-02 03:56:57|free |\n|NextSong |2018-10-02 04:01:21|free |\n|NextSong |2018-10-02 04:05:20|free |\n|NextSong |2018-10-02 04:10:04|free |\n|Roll Advert |2018-10-02 04:10:36|free |\n|NextSong |2018-10-02 04:14:15|free |\n|NextSong |2018-10-02 04:19:10|free |\n|NextSong |2018-10-02 04:22:24|free |\n|NextSong |2018-10-02 04:27:43|free |\n|NextSong |2018-10-02 04:31:22|free |\n|NextSong |2018-10-02 04:37:11|free |\n|NextSong |2018-10-02 04:43:07|free |\n|NextSong |2018-10-02 04:48:55|free |\n|Settings |2018-10-02 04:51:11|free |\n|NextSong |2018-10-02 04:51:25|free |\n|NextSong |2018-10-02 04:54:59|free |\n|NextSong |2018-10-02 04:59:21|free |\n|NextSong |2018-10-02 05:03:13|free |\n|Home |2018-10-02 05:03:44|free |\n|NextSong |2018-10-02 05:07:06|free |\n|NextSong |2018-10-02 05:09:17|free |\n|Roll Advert |2018-10-02 05:11:01|free |\n|NextSong |2018-10-02 05:13:11|free |\n|NextSong |2018-10-02 05:18:46|free |\n|NextSong |2018-10-02 05:21:41|free |\n|NextSong |2018-10-02 05:27:55|free |\n|NextSong |2018-10-02 05:31:50|free |\n|Home |2018-10-02 05:32:01|free |\n|NextSong |2018-10-02 05:40:51|free |\n|Thumbs Down |2018-10-02 05:40:52|free |\n|NextSong |2018-10-02 05:45:23|free |\n|NextSong |2018-10-02 05:46:11|free |\n|NextSong |2018-10-02 05:48:02|free |\n|NextSong |2018-10-02 05:51:48|free |\n|Upgrade |2018-10-02 05:53:33|free |\n|NextSong |2018-10-02 05:53:59|free |\n|NextSong |2018-10-02 05:57:05|free |\n|NextSong |2018-10-02 06:02:18|free |\n|Roll Advert |2018-10-02 06:02:53|free |\n|NextSong |2018-10-02 06:06:45|free |\n|NextSong |2018-10-02 06:11:23|free |\n|NextSong |2018-10-02 06:14:17|free |\n|NextSong |2018-10-02 06:16:57|free |\n|Thumbs Up |2018-10-02 06:16:58|free |\n|NextSong |2018-10-02 06:20:56|free |\n|NextSong |2018-10-02 06:24:11|free |\n|NextSong |2018-10-02 06:29:19|free |\n|NextSong |2018-10-02 06:33:12|free |\n|NextSong |2018-10-02 06:38:50|free |\n|Thumbs Up |2018-10-02 06:38:51|free |\n|NextSong |2018-10-02 06:43:11|free |\n|NextSong |2018-10-02 06:46:26|free |\n|NextSong |2018-10-02 06:51:21|free |\n|Upgrade |2018-10-02 06:51:38|free |\n|NextSong |2018-10-02 06:57:27|free |\n|Roll Advert |2018-10-02 06:58:09|free |\n|Roll Advert |2018-10-02 06:58:54|free |\n|NextSong |2018-10-02 07:00:06|free |\n|NextSong |2018-10-02 07:03:55|free |\n|NextSong |2018-10-02 07:08:18|free |\n|NextSong |2018-10-02 07:10:35|free |\n|NextSong |2018-10-02 07:12:22|free |\n|NextSong |2018-10-02 07:14:55|free |\n|Add to Playlist |2018-10-02 07:16:53|free |\n|NextSong |2018-10-02 12:19:15|free |\n|Logout |2018-10-02 12:19:16|free |\n|Home |2018-10-02 12:24:29|free |\n|NextSong |2018-10-02 12:25:41|free |\n|Roll Advert |2018-10-02 12:26:04|free |\n|NextSong |2018-10-02 12:30:24|free |\n|Roll Advert |2018-10-02 12:35:27|free |\n|NextSong |2018-10-02 12:35:38|free |\n|NextSong |2018-10-02 12:39:01|free |\n|Logout |2018-10-02 12:39:02|free |\n|Home |2018-10-02 12:51:02|free |\n|NextSong |2018-10-02 12:56:37|free |\n|NextSong |2018-10-02 12:58:25|free |\n|Thumbs Up |2018-10-02 12:58:26|free |\n|NextSong |2018-10-02 13:02:20|free |\n|NextSong |2018-10-02 13:07:03|free |\n|Add Friend |2018-10-02 13:07:04|free |\n|NextSong |2018-10-02 13:12:00|free |\n|Home |2018-10-02 13:20:29|free |\n|NextSong |2018-10-02 13:21:26|free |\n|NextSong |2018-10-02 13:24:52|free |\n|Thumbs Up |2018-10-02 13:24:53|free |\n|NextSong |2018-10-02 13:28:55|free |\n|Thumbs Up |2018-10-02 13:28:56|free |\n|NextSong |2018-10-02 13:34:56|free |\n|NextSong |2018-10-02 13:39:11|free |\n|NextSong |2018-10-02 
13:44:06|free |\n|Add to Playlist |2018-10-02 13:44:07|free |\n|Thumbs Up |2018-10-02 13:44:08|free |\n|NextSong |2018-10-02 13:48:26|free |\n|NextSong |2018-10-02 13:52:38|free |\n|NextSong |2018-10-02 13:56:20|free |\n|NextSong |2018-10-02 14:01:00|free |\n|NextSong |2018-10-02 14:03:16|free |\n|NextSong |2018-10-02 14:10:01|free |\n|NextSong |2018-10-02 14:14:41|free |\n|NextSong |2018-10-02 14:18:01|free |\n|Error |2018-10-02 14:18:21|free |\n|NextSong |2018-10-02 14:24:38|free |\n|NextSong |2018-10-02 14:29:01|free |\n|NextSong |2018-10-02 14:34:19|free |\n|NextSong |2018-10-02 14:37:17|free |\n|NextSong |2018-10-02 14:41:33|free |\n|NextSong |2018-10-02 14:46:01|free |\n|NextSong |2018-10-02 14:50:00|free |\n|NextSong |2018-10-02 14:53:56|free |\n|Logout |2018-10-02 14:53:57|free |\n|Home |2018-10-02 15:33:45|free |\n|NextSong |2018-10-02 15:39:43|free |\n|NextSong |2018-10-02 15:42:36|free |\n|NextSong |2018-10-02 15:50:27|free |\n|NextSong |2018-10-02 16:00:21|free |\n|NextSong |2018-10-02 16:03:18|free |\n|NextSong |2018-10-02 16:07:00|free |\n|NextSong |2018-10-02 16:12:33|free |\n|Roll Advert |2018-10-02 16:13:59|free |\n|NextSong |2018-10-02 16:16:24|free |\n|Help |2018-10-02 16:16:50|free |\n|NextSong |2018-10-02 16:19:54|free |\n|NextSong |2018-10-02 16:24:34|free |\n|NextSong |2018-10-02 16:29:18|free |\n|Logout |2018-10-02 16:29:19|free |\n|Home |2018-10-02 16:29:46|free |\n|NextSong |2018-10-02 16:33:52|free |\n|Roll Advert |2018-10-02 16:34:33|free |\n|Cancel |2018-10-02 16:34:34|free |\n|Cancellation Confirmation|2018-10-02 16:36:45|free |\n+-------------------------+-------------------+-----+\n\n" ], [ "spark.sql(\"\"\"\nSELECT page,to_timestamp(ts/1000) as ts,level\nFROM DATA \nwhere userId='100002'\norder by ts\n\"\"\").show(500, False)", "+---------------+-------------------+-----+\n|page |ts |level|\n+---------------+-------------------+-----+\n|Home |2018-10-08 22:57:25|paid |\n|NextSong |2018-10-08 22:57:34|paid |\n|NextSong |2018-10-08 23:00:57|paid |\n|NextSong |2018-10-08 23:04:58|paid |\n|Add to Playlist|2018-10-08 23:05:07|paid |\n|NextSong |2018-11-06 15:29:54|paid |\n|NextSong |2018-11-06 15:34:18|paid |\n|NextSong |2018-11-06 15:37:56|paid |\n|NextSong |2018-11-06 15:42:00|paid |\n|Downgrade |2018-11-06 15:42:01|paid |\n|Downgrade |2018-11-06 15:42:01|paid |\n|NextSong |2018-11-06 15:46:13|paid |\n|NextSong |2018-11-06 15:57:08|paid |\n|NextSong |2018-11-06 16:03:09|paid |\n|NextSong |2018-11-06 16:06:30|paid |\n|NextSong |2018-11-06 16:11:19|paid |\n|NextSong |2018-11-06 16:14:03|paid |\n|NextSong |2018-11-06 16:16:42|paid |\n|NextSong |2018-11-06 16:21:40|paid |\n|NextSong |2018-11-06 16:25:36|paid |\n|NextSong |2018-11-06 16:29:47|paid |\n|NextSong |2018-11-06 16:31:49|paid |\n|Thumbs Up |2018-11-06 16:31:50|paid |\n|NextSong |2018-11-06 16:36:13|paid |\n|Roll Advert |2018-11-06 16:36:16|paid |\n|Logout |2018-11-06 16:36:17|paid |\n|Home |2018-11-06 16:36:28|paid |\n|Home |2018-11-14 15:35:19|paid |\n|NextSong |2018-11-14 15:35:21|paid |\n|NextSong |2018-11-14 15:40:03|paid |\n|NextSong |2018-11-14 15:43:03|paid |\n|NextSong |2018-11-14 15:46:54|paid |\n|NextSong |2018-11-14 15:50:35|paid |\n|NextSong |2018-11-14 15:54:19|paid |\n|NextSong |2018-11-14 16:02:26|paid |\n|NextSong |2018-11-14 16:06:27|paid |\n|NextSong |2018-11-14 16:11:13|paid |\n|NextSong |2018-11-14 16:14:51|paid |\n|NextSong |2018-11-14 16:18:06|paid |\n|Thumbs Up |2018-11-14 16:18:07|paid |\n|NextSong |2018-11-14 16:23:05|paid |\n|NextSong |2018-11-14 16:27:50|paid |\n|NextSong 
|2018-11-14 16:30:01|paid |\n|NextSong |2018-11-14 16:34:39|paid |\n|NextSong |2018-11-14 16:39:26|paid |\n|NextSong |2018-11-14 16:44:28|paid |\n|NextSong |2018-11-14 16:47:45|paid |\n|Roll Advert |2018-11-14 16:48:12|paid |\n|NextSong |2018-11-14 16:52:18|paid |\n|NextSong |2018-11-14 16:55:43|paid |\n|NextSong |2018-11-14 17:03:17|paid |\n|NextSong |2018-11-14 17:06:38|paid |\n|NextSong |2018-11-14 17:09:46|paid |\n|NextSong |2018-11-14 17:12:48|paid |\n|NextSong |2018-11-14 17:16:22|paid |\n|Thumbs Up |2018-11-14 17:16:23|paid |\n|Add to Playlist|2018-11-14 17:16:47|paid |\n|NextSong |2018-11-14 17:20:34|paid |\n|NextSong |2018-11-14 17:23:00|paid |\n|NextSong |2018-11-14 17:28:37|paid |\n|NextSong |2018-11-14 17:31:52|paid |\n|NextSong |2018-11-14 17:36:46|paid |\n|NextSong |2018-11-14 17:40:49|paid |\n|NextSong |2018-11-14 17:44:24|paid |\n|NextSong |2018-11-14 17:50:10|paid |\n|NextSong |2018-11-14 17:55:19|paid |\n|NextSong |2018-11-14 17:58:02|paid |\n|Thumbs Up |2018-11-14 17:58:03|paid |\n|NextSong |2018-11-14 18:01:41|paid |\n|NextSong |2018-11-14 18:05:48|paid |\n|NextSong |2018-11-14 18:08:37|paid |\n|Home |2018-11-14 18:08:49|paid |\n|NextSong |2018-11-14 18:18:26|paid |\n|NextSong |2018-11-14 18:21:27|paid |\n|NextSong |2018-11-14 18:24:53|paid |\n|NextSong |2018-11-14 18:28:31|paid |\n|NextSong |2018-11-14 18:32:40|paid |\n|NextSong |2018-11-14 18:36:31|paid |\n|NextSong |2018-11-14 18:41:03|paid |\n|NextSong |2018-11-14 18:49:26|paid |\n|NextSong |2018-11-14 18:55:46|paid |\n|NextSong |2018-11-14 19:00:11|paid |\n|NextSong |2018-11-14 19:04:12|paid |\n|NextSong |2018-11-14 19:08:43|paid |\n|NextSong |2018-11-14 19:12:49|paid |\n|NextSong |2018-11-14 19:16:15|paid |\n|NextSong |2018-11-14 19:17:23|paid |\n|NextSong |2018-11-14 19:20:26|paid |\n|NextSong |2018-11-14 19:23:48|paid |\n|NextSong |2018-11-14 19:29:19|paid |\n|NextSong |2018-11-14 19:33:31|paid |\n|NextSong |2018-11-14 19:36:00|paid |\n|NextSong |2018-11-14 19:41:25|paid |\n|NextSong |2018-11-14 19:44:24|paid |\n|NextSong |2018-11-14 19:49:26|paid |\n|NextSong |2018-11-14 19:54:44|paid |\n|NextSong |2018-11-14 19:58:39|paid |\n|NextSong |2018-11-14 20:04:36|paid |\n|NextSong |2018-11-14 20:08:55|paid |\n|NextSong |2018-11-14 20:12:21|paid |\n|NextSong |2018-11-14 20:17:38|paid |\n|Home |2018-11-14 20:17:38|paid |\n|NextSong |2018-11-14 20:20:35|paid |\n|NextSong |2018-11-14 20:24:25|paid |\n|NextSong |2018-11-14 20:28:52|paid |\n|NextSong |2018-11-14 20:33:06|paid |\n|NextSong |2018-11-14 20:38:22|paid |\n|NextSong |2018-11-14 20:41:22|paid |\n|NextSong |2018-11-14 20:44:35|paid |\n|NextSong |2018-11-14 20:48:19|paid |\n|NextSong |2018-11-14 20:52:03|paid |\n|NextSong |2018-11-14 20:55:28|paid |\n|NextSong |2018-11-14 20:58:56|paid |\n|NextSong |2018-11-14 21:02:01|paid |\n|NextSong |2018-11-14 21:05:46|paid |\n|NextSong |2018-11-14 21:10:50|paid |\n|NextSong |2018-11-14 21:14:36|paid |\n|NextSong |2018-11-14 21:18:10|paid |\n|NextSong |2018-11-14 21:24:55|paid |\n|NextSong |2018-11-14 21:30:19|paid |\n|NextSong |2018-11-14 21:34:08|paid |\n|NextSong |2018-11-14 21:37:45|paid |\n|NextSong |2018-11-14 21:44:26|paid |\n|NextSong |2018-11-14 21:48:06|paid |\n|NextSong |2018-11-14 21:51:36|paid |\n|NextSong |2018-11-14 21:59:29|paid |\n|NextSong |2018-11-14 22:01:56|paid |\n|NextSong |2018-11-14 22:05:02|paid |\n|NextSong |2018-11-14 22:08:47|paid |\n|NextSong |2018-11-14 22:13:31|paid |\n|NextSong |2018-11-14 22:16:48|paid |\n|NextSong |2018-11-14 22:20:24|paid |\n|NextSong |2018-11-14 22:24:49|paid |\n|NextSong 
|2018-11-14 22:28:48|paid |\n|NextSong       |2018-11-14 22:33:16|paid |\n|NextSong       |2018-11-14 22:36:49|paid |\n|Home           |2018-11-14 22:36:57|paid |\n|NextSong       |2018-11-14 22:40:32|paid |\n|NextSong       |2018-11-14 22:44:09|paid |\n|NextSong       |2018-11-14 22:48:12|paid |\n|NextSong       |2018-11-14 22:52:03|paid |\n|NextSong       |2018-11-14 22:59:46|paid |\n|NextSong       |2018-11-14 23:04:07|paid |\n|NextSong       |2018-11-14 23:08:14|paid |\n|NextSong       |2018-11-14 23:12:25|paid |\n|NextSong       |2018-11-14 23:16:24|paid |\n|NextSong       |2018-11-14 23:20:26|paid |\n|NextSong       |2018-11-14 23:24:38|paid |\n|NextSong       |2018-11-14 23:29:10|paid |\n|NextSong       |2018-11-14 23:32:02|paid |\n|NextSong       |2018-11-14 23:34:17|paid |\n|NextSong       |2018-11-14 23:38:06|paid |\n|NextSong       |2018-11-14 23:41:13|paid |\n|Thumbs Up      |2018-11-14 23:41:14|paid |\n|Add to Playlist|2018-11-14 23:41:20|paid |\n|NextSong       |2018-11-14 23:45:08|paid |\n|NextSong       |2018-11-14 23:47:56|paid |\n|NextSong       |2018-11-14 23:51:50|paid |\n|NextSong       |2018-11-14 23:56:18|paid |\n|NextSong       |2018-11-14 23:59:36|paid |\n|NextSong       |2018-11-15 00:02:25|paid |\n|NextSong       |2018-11-15 00:05:48|paid |\n|NextSong       |2018-11-15 00:09:28|paid |\n|NextSong       |2018-11-15 00:15:58|paid |\n|NextSong       |2018-11-15 00:19:38|paid |\n|NextSong       |2018-11-15 00:22:55|paid |\n|NextSong       |2018-11-15 00:30:02|paid |\n|Roll Advert    |2018-11-15 00:30:19|paid |\n|NextSong       |2018-11-15 00:34:52|paid |\n|NextSong       |2018-11-15 00:38:54|paid |\n|NextSong       |2018-11-15 00:43:01|paid |\n|Add to Playlist|2018-11-15 00:43:07|paid |\n|NextSong       |2018-11-15 00:46:38|paid |\n|NextSong       |2018-11-15 00:49:12|paid |\n|NextSong       |2018-11-15 00:54:09|paid |\n|NextSong       |2018-11-15 00:57:21|paid |\n|NextSong       |2018-11-15 01:00:00|paid |\n|NextSong       |2018-11-15 01:04:21|paid |\n|NextSong       |2018-11-15 01:08:33|paid |\n|Add to Playlist|2018-11-15 01:08:41|paid |\n|NextSong       |2018-11-15 01:12:03|paid |\n|NextSong       |2018-11-15 01:15:52|paid |\n|NextSong       |2018-11-15 01:18:43|paid |\n|NextSong       |2018-11-15 01:22:19|paid |\n|NextSong       |2018-11-15 01:25:26|paid |\n|NextSong       |2018-11-15 01:30:19|paid |\n|Add Friend     |2018-11-15 01:30:20|paid |\n|NextSong       |2018-11-15 01:33:47|paid |\n|NextSong       |2018-11-15 01:38:21|paid |\n|NextSong       |2018-11-15 01:43:56|paid |\n|NextSong       |2018-11-15 01:56:08|paid |\n|NextSong       |2018-11-15 02:00:42|paid |\n|NextSong       |2018-11-15 02:03:19|paid |\n|NextSong       |2018-11-15 02:10:16|paid |\n|NextSong       |2018-11-15 02:13:40|paid |\n|NextSong       |2018-11-15 02:18:41|paid |\n|NextSong       |2018-11-15 02:29:36|paid |\n|NextSong       |2018-11-15 02:33:25|paid |\n|NextSong       |2018-11-15 02:37:00|paid |\n|NextSong       |2018-11-15 02:40:22|paid |\n|NextSong       |2018-11-15 02:44:03|paid |\n|NextSong       |2018-11-15 02:47:23|paid |\n|NextSong       |2018-11-15 02:51:30|paid |\n|NextSong       |2018-11-15 02:55:45|paid |\n|NextSong       |2018-11-15 02:59:33|paid |\n|NextSong       |2018-11-15 03:02:44|paid |\n|NextSong       |2018-11-15 03:07:21|paid |\n|NextSong       |2018-11-15 03:11:08|paid |\n|NextSong       |2018-11-15 03:15:05|paid |\n|NextSong       |2018-11-15 03:20:13|paid |\n|NextSong       |2018-11-15 03:23:49|paid |\n|NextSong       |2018-11-15 03:27:31|paid |\n|NextSong       |2018-11-15 03:30:36|paid |\n|NextSong       |2018-11-15 03:34:02|paid |\n|NextSong       |2018-11-15 03:38:54|paid |\n|NextSong       |2018-11-15 03:42:19|paid |\n|NextSong       |2018-11-15 03:45:47|paid |\n|NextSong       |2018-12-03 01:11:16|paid |\n+---------------+-------------------+-----+\n\n" ] ], [ [ "We can see that even though the user went to the Downgrade page, he remained paid. 
I assume he must go through the Submit Downgrade page for the downgrade to be considered valid.", "_____no_output_____" ] ], [ [ "spark.sql(\"\"\"\nSELECT distinct userId,page \nFROM DATA \nwhere page in ('Cancellation Confirmation','Submit Downgrade') \norder by userId,page\n\"\"\").show(10, False)", "+------+-------------------------+\n|userId|page                     |\n+------+-------------------------+\n|100   |Submit Downgrade         |\n|100001|Cancellation Confirmation|\n|100003|Cancellation Confirmation|\n|100004|Submit Downgrade         |\n|100005|Cancellation Confirmation|\n|100006|Cancellation Confirmation|\n|100007|Cancellation Confirmation|\n|100008|Submit Downgrade         |\n|100009|Cancellation Confirmation|\n|100009|Submit Downgrade         |\n+------+-------------------------+\nonly showing top 10 rows\n\n" ], [ "spark.sql(\"\"\"\nSELECT page,to_timestamp(ts/1000) as ts,level\nFROM DATA \nwhere userId='100009'\norder by ts\n\"\"\").show(500, False)", "+----------------+-------------------+-----+\n|page            |ts                 |level|\n+----------------+-------------------+-----+\n|NextSong        |2018-10-01 06:12:51|free |\n|NextSong        |2018-10-01 06:16:03|free |\n|Thumbs Up       |2018-10-01 06:16:04|free |\n|NextSong        |2018-10-01 06:20:02|free |\n|NextSong        |2018-10-01 06:23:33|free |\n|NextSong        |2018-10-01 06:27:30|free |\n|NextSong        |2018-10-01 06:33:44|free |\n|NextSong        |2018-10-01 06:38:16|free |\n|NextSong        |2018-10-01 06:44:12|free |\n|NextSong        |2018-10-01 06:47:28|free |\n|Roll Advert     |2018-10-01 06:47:54|free |\n|Roll Advert     |2018-10-01 06:47:54|free |\n|NextSong        |2018-10-01 06:51:45|free |\n|Roll Advert     |2018-10-01 06:52:39|free |\n|NextSong        |2018-10-01 06:55:30|free |\n|NextSong        |2018-10-01 06:59:52|free |\n|NextSong        |2018-10-01 07:03:22|free |\n|NextSong        |2018-10-01 07:06:36|free |\n|NextSong        |2018-10-01 07:10:19|free |\n|NextSong        |2018-10-01 07:13:43|free |\n|NextSong        |2018-10-01 07:18:46|free |\n|Upgrade         |2018-10-01 07:19:20|free |\n|NextSong        |2018-10-01 07:24:37|free |\n|NextSong        |2018-10-01 07:28:07|free |\n|Settings        |2018-10-01 07:28:23|free |\n|NextSong        |2018-10-01 07:36:24|free |\n|NextSong        |2018-10-01 07:39:59|free |\n|Logout          |2018-10-01 07:40:00|free |\n|Home            |2018-10-05 03:19:07|free |\n|NextSong        |2018-10-05 03:19:28|free |\n|NextSong        |2018-10-05 03:23:22|free |\n|Roll Advert     |2018-10-05 03:23:38|free |\n|NextSong        |2018-10-09 01:44:03|free |\n|NextSong        |2018-10-09 01:47:14|free |\n|NextSong        |2018-10-09 01:50:35|free |\n|NextSong        |2018-10-09 01:58:18|free |\n|Roll Advert     |2018-10-09 02:01:06|free |\n|Roll Advert     |2018-10-09 02:01:34|free |\n|NextSong        |2018-10-09 02:01:42|free |\n|Thumbs Down     |2018-10-09 02:01:43|free |\n|NextSong        |2018-10-09 02:03:59|free |\n|NextSong        |2018-10-09 02:07:17|free |\n|NextSong        |2018-10-09 02:10:18|free |\n|Logout          |2018-10-09 02:10:19|free |\n|Home            |2018-10-09 02:10:30|free |\n|NextSong        |2018-10-09 02:13:36|free |\n|Thumbs Up       |2018-10-09 02:13:37|free |\n|NextSong        |2018-10-09 02:16:53|free |\n|NextSong        |2018-10-09 02:20:18|free |\n|Add to Playlist |2018-10-09 02:20:50|free |\n|NextSong        |2018-10-09 02:23:47|free |\n|NextSong        |2018-10-09 02:27:37|free |\n|Logout          |2018-10-09 02:27:38|free |\n|Home            |2018-10-09 02:31:25|free |\n|NextSong        |2018-10-09 02:31:35|free |\n|NextSong        |2018-10-09 02:38:22|free |\n|Logout          |2018-10-09 02:38:23|free |\n|Home            |2018-10-09 02:40:00|free |\n|NextSong        |2018-10-09 02:41:56|free |\n|NextSong        |2018-10-09 02:45:51|free |\n|Logout          |2018-10-09 02:45:52|free |\n|Home            |2018-10-09 02:48:25|free |\n|NextSong        |2018-10-09 02:49:52|free |\n|NextSong        |2018-10-09 02:54:08|free |\n|NextSong        |2018-10-09 
02:57:30|free |\n|NextSong |2018-10-09 03:00:21|free |\n|NextSong |2018-10-09 03:05:12|free |\n|NextSong |2018-10-09 03:09:15|free |\n|Roll Advert |2018-10-09 03:09:16|free |\n|NextSong |2018-10-09 03:15:47|free |\n|NextSong |2018-10-09 03:19:27|free |\n|NextSong |2018-10-09 03:22:02|free |\n|Roll Advert |2018-10-09 03:22:09|free |\n|Roll Advert |2018-10-09 03:22:59|free |\n|NextSong |2018-10-09 03:24:33|free |\n|NextSong |2018-10-09 03:31:01|free |\n|NextSong |2018-10-09 03:35:06|free |\n|NextSong |2018-10-09 03:40:22|free |\n|NextSong |2018-10-09 03:43:21|free |\n|NextSong |2018-10-09 03:45:32|free |\n|Roll Advert |2018-10-09 03:46:00|free |\n|NextSong |2018-10-09 03:49:21|free |\n|NextSong |2018-10-09 03:52:01|free |\n|Thumbs Down |2018-10-09 03:52:02|free |\n|NextSong |2018-10-09 03:56:13|free |\n|Logout |2018-10-09 03:56:14|free |\n|Home |2018-10-09 07:12:00|free |\n|NextSong |2018-10-09 07:12:24|free |\n|NextSong |2018-10-09 07:16:25|free |\n|NextSong |2018-10-09 07:20:58|free |\n|NextSong |2018-10-09 07:24:55|free |\n|NextSong |2018-10-09 07:29:35|free |\n|Help |2018-10-09 07:29:47|free |\n|Home |2018-10-09 07:29:52|free |\n|NextSong |2018-10-09 07:33:51|free |\n|NextSong |2018-10-09 07:37:03|free |\n|NextSong |2018-10-09 07:39:36|free |\n|Thumbs Down |2018-10-09 07:39:37|free |\n|NextSong |2018-10-09 07:43:26|free |\n|NextSong |2018-10-09 07:49:58|free |\n|NextSong |2018-10-09 07:52:47|free |\n|NextSong |2018-10-09 07:55:10|free |\n|NextSong |2018-10-09 07:59:10|free |\n|NextSong |2018-10-09 08:01:17|free |\n|NextSong |2018-10-09 08:03:39|free |\n|Roll Advert |2018-10-09 08:03:54|free |\n|NextSong |2018-10-09 08:07:26|free |\n|Roll Advert |2018-10-09 08:08:24|free |\n|NextSong |2018-10-09 08:14:20|free |\n|NextSong |2018-10-09 08:16:32|free |\n|NextSong |2018-10-09 08:20:58|free |\n|NextSong |2018-10-09 08:26:02|free |\n|NextSong |2018-10-09 08:29:10|free |\n|NextSong |2018-10-09 08:32:53|free |\n|NextSong |2018-10-09 08:35:47|free |\n|NextSong |2018-10-09 08:39:37|free |\n|NextSong |2018-10-09 08:43:25|free |\n|NextSong |2018-10-09 08:47:48|free |\n|Home |2018-10-14 21:51:12|free |\n|NextSong |2018-10-14 21:52:09|free |\n|Roll Advert |2018-10-14 21:52:17|free |\n|NextSong |2018-10-14 21:56:06|free |\n|Roll Advert |2018-10-14 21:56:08|free |\n|NextSong |2018-10-14 21:59:33|free |\n|Roll Advert |2018-10-14 21:59:37|free |\n|NextSong |2018-10-14 22:04:06|free |\n|NextSong |2018-10-14 22:06:57|free |\n|Add Friend |2018-10-14 22:06:58|free |\n|NextSong |2018-10-14 22:10:25|free |\n|Roll Advert |2018-10-14 22:11:00|free |\n|NextSong |2018-10-14 22:14:17|free |\n|Add to Playlist |2018-10-14 22:14:36|free |\n|NextSong |2018-10-14 22:19:10|free |\n|NextSong |2018-10-14 22:21:42|free |\n|NextSong |2018-10-14 22:25:32|free |\n|Thumbs Up |2018-10-14 22:25:33|free |\n|NextSong |2018-10-14 22:27:01|free |\n|NextSong |2018-10-14 22:30:17|free |\n|NextSong |2018-10-14 22:35:18|free |\n|NextSong |2018-10-14 22:38:28|free |\n|Upgrade |2018-10-14 22:39:02|free |\n|Submit Upgrade |2018-10-14 22:39:03|free |\n|Home |2018-10-14 22:39:23|paid |\n|NextSong |2018-10-14 22:42:06|paid |\n|NextSong |2018-10-14 22:46:37|paid |\n|NextSong |2018-10-14 22:51:05|paid |\n|NextSong |2018-10-14 22:54:52|paid |\n|NextSong |2018-10-14 22:58:02|paid |\n|NextSong |2018-10-14 23:01:56|paid |\n|NextSong |2018-10-14 23:05:49|paid |\n|NextSong |2018-10-14 23:09:06|paid |\n|NextSong |2018-10-14 23:11:30|paid |\n|NextSong |2018-10-14 23:16:40|paid |\n|Home |2018-10-14 23:17:37|paid |\n|NextSong |2018-10-14 23:20:40|paid 
|\n|NextSong |2018-10-14 23:25:34|paid |\n|NextSong |2018-10-14 23:29:06|paid |\n|NextSong |2018-10-14 23:33:16|paid |\n|NextSong |2018-10-14 23:40:08|paid |\n|NextSong |2018-10-14 23:43:24|paid |\n|NextSong |2018-10-14 23:47:00|paid |\n|NextSong |2018-10-14 23:50:39|paid |\n|NextSong |2018-10-14 23:53:25|paid |\n|NextSong |2018-10-14 23:57:02|paid |\n|NextSong |2018-10-15 00:00:47|paid |\n|NextSong |2018-10-15 00:06:09|paid |\n|NextSong |2018-10-15 00:10:40|paid |\n|NextSong |2018-10-15 00:15:33|paid |\n|Settings |2018-10-15 00:16:15|paid |\n|Save Settings |2018-10-15 00:16:16|paid |\n|Settings |2018-10-15 00:17:11|paid |\n|Home |2018-10-15 00:18:06|paid |\n|Downgrade |2018-10-15 00:18:11|paid |\n|NextSong |2018-10-15 00:20:30|paid |\n|NextSong |2018-10-15 00:22:56|paid |\n|NextSong |2018-10-15 00:27:07|paid |\n|NextSong |2018-10-15 00:31:32|paid |\n|NextSong |2018-10-15 00:35:19|paid |\n|NextSong |2018-10-15 00:39:28|paid |\n|NextSong |2018-10-15 00:42:56|paid |\n|Add to Playlist |2018-10-15 00:43:15|paid |\n|NextSong |2018-10-15 00:45:25|paid |\n|Help |2018-10-15 00:45:36|paid |\n|NextSong |2018-10-15 00:49:47|paid |\n|Add Friend |2018-10-15 00:49:48|paid |\n|Add Friend |2018-10-15 00:49:49|paid |\n|NextSong |2018-10-15 00:53:08|paid |\n|NextSong |2018-10-15 00:56:22|paid |\n|Downgrade |2018-10-15 00:56:32|paid |\n|NextSong |2018-10-15 00:59:47|paid |\n|NextSong |2018-10-15 01:03:38|paid |\n|NextSong |2018-10-15 01:08:01|paid |\n|NextSong |2018-10-15 01:11:25|paid |\n|NextSong |2018-10-15 01:20:07|paid |\n|NextSong |2018-10-15 01:24:16|paid |\n|NextSong |2018-10-15 01:28:20|paid |\n|NextSong |2018-10-15 01:32:44|paid |\n|NextSong |2018-10-15 01:35:26|paid |\n|NextSong |2018-10-15 01:45:45|paid |\n|NextSong |2018-10-15 01:51:47|paid |\n|Logout |2018-10-15 01:51:48|paid |\n|Home |2018-10-15 01:53:59|paid |\n|NextSong |2018-10-15 01:55:54|paid |\n|NextSong |2018-10-15 01:59:24|paid |\n|NextSong |2018-10-15 02:02:40|paid |\n|NextSong |2018-10-15 02:05:40|paid |\n|NextSong |2018-10-15 02:09:17|paid |\n|NextSong |2018-10-15 02:12:55|paid |\n|NextSong |2018-10-15 02:15:33|paid |\n|NextSong |2018-10-15 02:19:59|paid |\n|NextSong |2018-10-15 02:24:16|paid |\n|NextSong |2018-10-15 02:28:03|paid |\n|NextSong |2018-10-15 02:31:57|paid |\n|NextSong |2018-10-15 02:37:38|paid |\n|NextSong |2018-10-15 02:42:06|paid |\n|NextSong |2018-10-15 02:44:19|paid |\n|NextSong |2018-10-15 02:47:45|paid |\n|Settings |2018-10-15 02:50:10|paid |\n|NextSong |2018-10-15 02:51:09|paid |\n|NextSong |2018-10-15 02:56:32|paid |\n|NextSong |2018-10-15 03:00:11|paid |\n|NextSong |2018-10-15 03:03:37|paid |\n|NextSong |2018-10-15 03:06:46|paid |\n|NextSong |2018-10-15 03:10:39|paid |\n|NextSong |2018-10-15 03:14:44|paid |\n|NextSong |2018-10-15 03:19:10|paid |\n|NextSong |2018-10-15 03:22:48|paid |\n|NextSong |2018-10-15 03:26:27|paid |\n|NextSong |2018-10-15 03:28:29|paid |\n|NextSong |2018-10-15 03:32:04|paid |\n|NextSong |2018-10-15 03:35:57|paid |\n|NextSong |2018-10-15 03:39:21|paid |\n|NextSong |2018-10-15 03:43:14|paid |\n|NextSong |2018-10-15 03:48:41|paid |\n|Add to Playlist |2018-10-15 03:49:06|paid |\n|NextSong |2018-10-15 03:52:44|paid |\n|NextSong |2018-10-15 03:56:54|paid |\n|NextSong |2018-10-15 04:00:17|paid |\n|NextSong |2018-10-15 04:04:13|paid |\n|NextSong |2018-10-15 04:07:35|paid |\n|NextSong |2018-10-15 04:12:01|paid |\n|NextSong |2018-10-15 04:19:36|paid |\n|Help |2018-10-15 04:20:37|paid |\n|NextSong |2018-10-15 04:23:45|paid |\n|NextSong |2018-10-15 04:27:44|paid |\n|Downgrade |2018-10-15 
04:27:44|paid |\n|NextSong |2018-10-15 04:30:40|paid |\n|NextSong |2018-10-15 04:39:10|paid |\n|NextSong |2018-10-15 04:42:49|paid |\n|NextSong |2018-10-15 04:46:51|paid |\n|NextSong |2018-10-15 04:51:11|paid |\n|Logout |2018-10-15 04:51:12|paid |\n|Home |2018-10-15 04:52:47|paid |\n|NextSong |2018-10-15 04:57:46|paid |\n|NextSong |2018-10-15 05:00:59|paid |\n|NextSong |2018-10-15 05:11:54|paid |\n|NextSong |2018-10-15 05:16:21|paid |\n|NextSong |2018-10-15 05:19:05|paid |\n|NextSong |2018-10-15 05:23:25|paid |\n|NextSong |2018-10-15 05:26:26|paid |\n|NextSong |2018-10-15 05:29:23|paid |\n|NextSong |2018-10-15 05:32:12|paid |\n|NextSong |2018-10-15 05:35:55|paid |\n|NextSong |2018-10-15 05:40:10|paid |\n|NextSong |2018-10-15 05:45:19|paid |\n|Add to Playlist |2018-10-15 05:45:22|paid |\n|NextSong |2018-10-15 05:49:52|paid |\n|Thumbs Up |2018-10-15 05:49:53|paid |\n|NextSong |2018-10-15 05:55:40|paid |\n|NextSong |2018-10-15 06:02:00|paid |\n|NextSong |2018-10-15 06:06:12|paid |\n|NextSong |2018-10-15 06:10:15|paid |\n|Thumbs Up |2018-10-15 06:10:16|paid |\n|NextSong |2018-10-15 06:16:34|paid |\n|NextSong |2018-10-15 06:20:30|paid |\n|NextSong |2018-10-15 06:24:21|paid |\n|NextSong |2018-10-15 06:30:05|paid |\n|NextSong |2018-10-15 06:34:54|paid |\n|NextSong |2018-10-15 06:39:20|paid |\n|NextSong |2018-10-15 06:39:41|paid |\n|NextSong |2018-10-15 06:44:14|paid |\n|NextSong |2018-10-15 06:47:42|paid |\n|NextSong |2018-10-15 06:50:35|paid |\n|NextSong |2018-10-15 06:55:08|paid |\n|NextSong |2018-10-15 06:59:14|paid |\n|NextSong |2018-10-15 07:02:55|paid |\n|NextSong |2018-10-15 07:06:28|paid |\n|NextSong |2018-10-15 07:09:36|paid |\n|NextSong |2018-10-15 07:10:47|paid |\n|NextSong |2018-10-15 07:15:15|paid |\n|NextSong |2018-10-15 07:18:56|paid |\n|NextSong |2018-10-15 07:23:49|paid |\n|NextSong |2018-10-15 07:26:15|paid |\n|NextSong |2018-10-15 07:29:43|paid |\n|NextSong |2018-10-15 07:33:41|paid |\n|NextSong |2018-10-15 07:37:47|paid |\n|NextSong |2018-10-15 07:41:02|paid |\n|Add Friend |2018-10-15 07:41:03|paid |\n|NextSong |2018-10-15 07:45:32|paid |\n|About |2018-10-15 07:47:01|paid |\n|NextSong |2018-10-15 07:49:44|paid |\n|Thumbs Up |2018-10-15 07:49:45|paid |\n|NextSong |2018-10-15 07:53:40|paid |\n|NextSong |2018-10-15 07:57:57|paid |\n|Thumbs Up |2018-10-15 07:57:58|paid |\n|NextSong |2018-10-15 07:59:55|paid |\n|NextSong |2018-10-15 08:03:00|paid |\n|NextSong |2018-10-15 08:06:54|paid |\n|NextSong |2018-10-15 08:11:06|paid |\n|NextSong |2018-10-15 08:16:29|paid |\n|Home |2018-10-15 08:16:42|paid |\n|NextSong |2018-10-15 08:20:28|paid |\n|NextSong |2018-10-15 08:23:37|paid |\n|NextSong |2018-10-15 08:27:26|paid |\n|NextSong |2018-10-15 08:31:15|paid |\n|NextSong |2018-10-15 08:34:59|paid |\n|NextSong |2018-10-15 08:38:04|paid |\n|NextSong |2018-10-15 08:41:40|paid |\n|NextSong |2018-10-15 08:48:51|paid |\n|NextSong |2018-10-15 08:53:47|paid |\n|NextSong |2018-10-15 08:57:32|paid |\n|NextSong |2018-10-17 00:34:17|paid |\n|Thumbs Up |2018-10-17 00:34:18|paid |\n|NextSong |2018-10-17 00:37:49|paid |\n|NextSong |2018-10-17 00:43:42|paid |\n|Add to Playlist |2018-10-17 00:43:50|paid |\n|NextSong |2018-10-18 18:19:51|paid |\n|NextSong |2018-10-18 18:24:01|paid |\n|NextSong |2018-10-18 18:28:22|paid |\n|Thumbs Up |2018-10-18 18:28:23|paid |\n|NextSong |2018-10-18 18:33:34|paid |\n|NextSong |2018-10-18 18:38:36|paid |\n|NextSong |2018-10-18 18:44:33|paid |\n|NextSong |2018-10-18 18:48:07|paid |\n|NextSong |2018-10-18 18:51:04|paid |\n|NextSong |2018-10-18 18:54:57|paid |\n|NextSong 
|2018-10-18 19:02:07|paid |\n|NextSong |2018-10-18 19:06:14|paid |\n|NextSong |2018-10-18 19:09:25|paid |\n|NextSong |2018-10-18 19:13:19|paid |\n|NextSong |2018-10-18 19:16:35|paid |\n|NextSong |2018-10-18 19:20:40|paid |\n|NextSong |2018-10-18 19:24:09|paid |\n|NextSong |2018-10-18 19:28:06|paid |\n|NextSong |2018-10-18 19:31:07|paid |\n|NextSong |2018-10-18 19:34:56|paid |\n|NextSong |2018-10-18 19:40:22|paid |\n|NextSong |2018-10-18 19:46:42|paid |\n|NextSong |2018-10-18 19:50:39|paid |\n|NextSong |2018-10-18 19:59:21|paid |\n|NextSong |2018-10-18 20:04:10|paid |\n|NextSong |2018-10-18 20:07:37|paid |\n|NextSong |2018-10-18 20:12:04|paid |\n|NextSong |2018-10-18 20:15:16|paid |\n|NextSong |2018-10-18 20:20:29|paid |\n|NextSong |2018-10-18 20:24:09|paid |\n|NextSong |2018-10-18 20:29:40|paid |\n|NextSong |2018-10-18 20:32:13|paid |\n|NextSong |2018-10-18 20:37:01|paid |\n|NextSong |2018-10-18 20:40:45|paid |\n|NextSong |2018-10-18 20:44:08|paid |\n|NextSong |2018-10-18 20:48:35|paid |\n|Home |2018-10-18 20:49:24|paid |\n|Add Friend |2018-10-18 20:49:25|paid |\n|NextSong |2018-10-18 20:52:50|paid |\n|NextSong |2018-10-18 20:56:55|paid |\n|NextSong |2018-10-18 21:03:07|paid |\n|NextSong |2018-10-18 21:07:38|paid |\n|NextSong |2018-10-18 21:11:50|paid |\n|NextSong |2018-10-18 21:18:00|paid |\n|NextSong |2018-10-18 21:21:42|paid |\n|NextSong |2018-10-18 21:26:57|paid |\n|NextSong |2018-10-18 21:32:45|paid |\n|NextSong |2018-10-18 21:36:41|paid |\n|NextSong |2018-10-18 21:40:12|paid |\n|NextSong |2018-10-18 21:44:21|paid |\n|NextSong |2018-10-18 21:51:55|paid |\n|Add to Playlist |2018-10-18 21:52:29|paid |\n|NextSong |2018-10-18 21:55:57|paid |\n|NextSong |2018-10-18 21:59:42|paid |\n|NextSong |2018-10-18 22:03:26|paid |\n|NextSong |2018-10-18 22:07:52|paid |\n|NextSong |2018-10-18 22:11:30|paid |\n|NextSong |2018-10-18 22:17:09|paid |\n|NextSong |2018-10-18 22:20:40|paid |\n|NextSong |2018-10-18 22:23:55|paid |\n|NextSong |2018-10-18 22:27:12|paid |\n|NextSong |2018-10-18 22:31:24|paid |\n|NextSong |2018-10-18 22:36:30|paid |\n|NextSong |2018-10-18 22:40:03|paid |\n|NextSong |2018-10-18 22:43:58|paid |\n|NextSong |2018-10-18 22:47:47|paid |\n|Logout |2018-10-18 22:47:48|paid |\n|Home |2018-10-18 22:48:56|paid |\n|NextSong |2018-10-18 22:52:15|paid |\n|NextSong |2018-10-18 22:55:45|paid |\n|NextSong |2018-10-18 22:59:46|paid |\n|NextSong |2018-10-18 23:05:10|paid |\n|NextSong |2018-10-18 23:09:36|paid |\n|NextSong |2018-10-18 23:15:41|paid |\n|Thumbs Up |2018-10-18 23:15:42|paid |\n|NextSong |2018-10-18 23:18:00|paid |\n|NextSong |2018-10-18 23:19:52|paid |\n|NextSong |2018-10-18 23:23:02|paid |\n|Add to Playlist |2018-10-18 23:23:20|paid |\n|NextSong |2018-10-18 23:28:07|paid |\n|NextSong |2018-10-18 23:31:35|paid |\n|Add to Playlist |2018-10-18 23:31:52|paid |\n|NextSong |2018-10-18 23:35:59|paid |\n|NextSong |2018-10-18 23:39:41|paid |\n|NextSong |2018-10-18 23:44:07|paid |\n|NextSong |2018-10-18 23:48:18|paid |\n|NextSong |2018-10-18 23:53:05|paid |\n|NextSong |2018-10-18 23:58:01|paid |\n|NextSong |2018-10-21 09:45:22|paid |\n|NextSong |2018-10-21 09:48:42|paid |\n|NextSong |2018-10-21 09:53:27|paid |\n|NextSong |2018-10-21 09:57:23|paid |\n|NextSong |2018-10-21 10:01:33|paid |\n|NextSong |2018-10-21 10:05:20|paid |\n|NextSong |2018-10-21 10:08:21|paid |\n|NextSong |2018-10-21 10:12:48|paid |\n|NextSong |2018-10-21 10:16:56|paid |\n|NextSong |2018-10-21 10:21:51|paid |\n|NextSong |2018-10-21 10:25:54|paid |\n|NextSong |2018-10-21 10:29:06|paid |\n|NextSong |2018-10-21 10:33:15|paid 
|\n|NextSong        |2018-10-21 10:36:42|paid |\n|NextSong        |2018-10-21 10:40:17|paid |\n|NextSong        |2018-10-21 10:42:24|paid |\n|NextSong        |2018-10-21 10:46:18|paid |\n|NextSong        |2018-10-21 10:50:37|paid |\n|NextSong        |2018-10-21 10:54:36|paid |\n|NextSong        |2018-10-21 10:58:05|paid |\n|NextSong        |2018-10-21 11:01:39|paid |\n|NextSong        |2018-10-21 11:05:52|paid |\n|NextSong        |2018-10-21 11:08:44|paid |\n|NextSong        |2018-10-21 11:13:08|paid |\n|NextSong        |2018-10-21 11:18:03|paid |\n|Home            |2018-10-21 11:19:37|paid |\n|NextSong        |2018-10-21 11:21:52|paid |\n|NextSong        |2018-10-21 11:25:40|paid |\n|NextSong        |2018-10-21 11:29:03|paid |\n|NextSong        |2018-10-21 11:35:02|paid |\n|NextSong        |2018-10-21 11:37:46|paid |\n|NextSong        |2018-10-21 11:41:07|paid |\n|NextSong        |2018-10-21 11:47:05|paid |\n|NextSong        |2018-10-21 11:58:39|paid |\n|NextSong        |2018-10-21 12:02:20|paid |\n|NextSong        |2018-10-21 12:06:47|paid |\n|NextSong        |2018-10-21 12:11:07|paid |\n|NextSong        |2018-10-21 12:15:06|paid |\n|Downgrade       |2018-10-21 12:15:07|paid |\n|NextSong        |2018-10-21 12:18:49|paid |\n|NextSong        |2018-10-21 12:22:46|paid |\n|NextSong        |2018-10-21 12:29:12|paid |\n|NextSong        |2018-10-21 12:32:34|paid |\n|NextSong        |2018-10-21 12:36:43|paid |\n|NextSong        |2018-10-21 12:42:18|paid |\n|NextSong        |2018-10-21 12:46:36|paid |\n|Roll Advert     |2018-10-21 12:47:40|paid |\n|NextSong        |2018-10-21 12:55:00|paid |\n|Thumbs Down     |2018-10-21 12:55:01|paid |\n|NextSong        |2018-10-21 12:57:39|paid |\n|Downgrade       |2018-10-21 12:58:22|paid |\n|Submit Downgrade|2018-10-21 12:58:23|paid |\n|Home            |2018-10-21 12:58:37|free |\n|NextSong        |2018-10-21 13:01:13|free |\n|Thumbs Down     |2018-10-21 13:01:14|free |\n|NextSong        |2018-10-21 13:05:34|free |\n|NextSong        |2018-10-21 13:09:31|free |\n|NextSong        |2018-10-21 13:13:41|free |\n|NextSong        |2018-10-21 13:16:19|free |\n|NextSong        |2018-10-21 13:20:04|free |\n|NextSong        |2018-10-21 13:27:36|free |\n|NextSong        |2018-10-21 13:31:44|free |\n|NextSong        |2018-10-21 13:35:25|free |\n|NextSong        |2018-10-21 13:39:18|free |\n|NextSong        |2018-10-21 13:42:55|free |\n|Thumbs Up       |2018-10-21 13:42:56|free |\n|NextSong        |2018-10-21 13:46:19|free |\n|NextSong        |2018-10-21 13:55:28|free |\n|Logout          |2018-10-21 13:55:29|free |\n|Home            |2018-10-21 13:56:17|free |\n|NextSong        |2018-10-21 14:00:12|free |\n|Roll Advert     |2018-10-21 14:00:26|free |\n|NextSong        |2018-10-21 14:03:12|free |\n|NextSong        |2018-10-21 14:06:33|free |\n|Thumbs Up       |2018-10-21 14:06:34|free |\n|Home            |2018-10-22 14:35:47|free |\n|NextSong        |2018-10-22 14:35:53|free |\n|NextSong        |2018-10-22 14:42:39|free |\n|Roll Advert     |2018-10-22 14:43:53|free |\n|NextSong        |2018-10-22 14:47:19|free |\n|NextSong        |2018-10-22 14:50:40|free |\n|NextSong        |2018-10-22 14:55:19|free |\n|NextSong        |2018-10-22 14:58:11|free |\n|NextSong        |2018-10-22 15:01:52|free |\n|NextSong        |2018-10-22 15:05:28|free |\n|Roll Advert     |2018-10-22 15:05:39|free |\n+----------------+-------------------+-----+\nonly showing top 500 rows\n\n" ] ], [ [ "This user became paid after the Submit Upgrade event and became free again after the Submit Downgrade event.", "_____no_output_____" ] ], [ [ "spark.sql(\"\"\"\nSELECT distinct userId,page \nFROM DATA \nwhere page in ('Cancellation Confirmation','Submit Downgrade') \norder by userId,page\n\"\"\").count()", "_____no_output_____" ], [ "df = df.withColumn(\"churn\", when((col(\"page\")=='Cancellation Confirmation') | (col(\"page\")=='Submit Downgrade'),1).otherwise(0))", "_____no_output_____" ], [ "df.show(5)", 
"+----------------+---------+---------+------+-------------+--------+---------+-----+--------------------+------+--------+-------------+---------+--------------------+------+-------------+--------------------+------+-----+\n| artist| auth|firstName|gender|itemInSession|lastName| length|level| location|method| page| registration|sessionId| song|status| ts| userAgent|userId|churn|\n+----------------+---------+---------+------+-------------+--------+---------+-----+--------------------+------+--------+-------------+---------+--------------------+------+-------------+--------------------+------+-----+\n| Martha Tilston|Logged In| Colin| M| 50| Freeman|277.89016| paid| Bakersfield, CA| PUT|NextSong|1538173362000| 29| Rockpools| 200|1538352117000|Mozilla/5.0 (Wind...| 30| 0|\n|Five Iron Frenzy|Logged In| Micah| M| 79| Long|236.09424| free|Boston-Cambridge-...| PUT|NextSong|1538331630000| 8| Canada| 200|1538352180000|\"Mozilla/5.0 (Win...| 9| 0|\n| Adam Lambert|Logged In| Colin| M| 51| Freeman| 282.8273| paid| Bakersfield, CA| PUT|NextSong|1538173362000| 29| Time For Miracles| 200|1538352394000|Mozilla/5.0 (Wind...| 30| 0|\n| Enigma|Logged In| Micah| M| 80| Long|262.71302| free|Boston-Cambridge-...| PUT|NextSong|1538331630000| 8|Knocking On Forbi...| 200|1538352416000|\"Mozilla/5.0 (Win...| 9| 0|\n| Daft Punk|Logged In| Colin| M| 52| Freeman|223.60771| paid| Bakersfield, CA| PUT|NextSong|1538173362000| 29|Harder Better Fas...| 200|1538352676000|Mozilla/5.0 (Wind...| 30| 0|\n+----------------+---------+---------+------+-------------+--------+---------+-----+--------------------+------+--------+-------------+---------+--------------------+------+-------------+--------------------+------+-----+\nonly showing top 5 rows\n\n" ], [ "df.createOrReplaceTempView(\"DATA\")", "_____no_output_____" ], [ "spark.sql(\"\"\"\nSELECT distinct userId,page,churn\nFROM DATA \nwhere page in ('Cancellation Confirmation','Submit Downgrade') \norder by userId,page\n\"\"\").show(5, False)", "+------+-------------------------+-----+\n|userId|page |churn|\n+------+-------------------------+-----+\n|100 |Submit Downgrade |1 |\n|100001|Cancellation Confirmation|1 |\n|100003|Cancellation Confirmation|1 |\n|100004|Submit Downgrade |1 |\n|100005|Cancellation Confirmation|1 |\n+------+-------------------------+-----+\nonly showing top 5 rows\n\n" ] ], [ [ "# Feature Engineering\nOnce you've familiarized yourself with the data, build out the features you find promising to train your model on. To work with the full dataset, you can follow the following steps.\n- Write a script to extract the necessary features from the smaller subset of data\n- Ensure that your script is scalable, using the best practices discussed in Lesson 3\n- Try your script on the full data set, debugging your script if necessary\n\nIf you are working in the classroom workspace, you can just extract features based on the small subset of data contained here. 
Be sure to transfer over this work to the larger dataset when you work on your Spark cluster.", "_____no_output_____" ] ], [ [ "spark.sql(\"\"\"\nSELECT max(to_timestamp(ts/1000)) as max_ts,min(to_timestamp(ts/1000)) as min_ts\nFROM DATA \n\"\"\").show(5, False)", "+-------------------+-------------------+\n|max_ts |min_ts |\n+-------------------+-------------------+\n|2018-12-03 01:11:16|2018-10-01 00:01:57|\n+-------------------+-------------------+\n\n" ], [ "df_dataset = spark.sql(\"\"\"\nSELECT DATA.userId,\n case when gender='M' then 1 else 0 end as is_male_flag,\n max(churn) as churn,\n count(distinct ts_day) as days_in_app,\n count(distinct song)/sum(case when song is not null then 1 else 0 end) as avg_songs,\n count(distinct artist)/sum(case when song is not null then 1 else 0 end) as avg_artists,\n round(sum(length/60)/sum(case when song is not null then 1 else 0 end),2) as avg_song_length,\n count(1) as events_cnt, \n count(1)/count(distinct ts_day) as avg_sessions_per_day,\n sum(case when DATA.page='NextSong' then 1 else 0 end)/count(distinct ts_day) as avg_pg_song_cnt,\n sum(case when DATA.page='Roll Advert' then 1 else 0 end)/count(distinct ts_day) as avg_pg_advert_cnt,\n sum(case when DATA.page='Logout' then 1 else 0 end)/count(distinct ts_day) as avg_pg_logout_cnt,\n sum(case when DATA.page='Thumbs Down' then 1 else 0 end)/count(distinct ts_day) as avg_pg_down_cnt,\n sum(case when DATA.page='Thumbs Up' then 1 else 0 end)/count(distinct ts_day) as avg_pg_up_cnt,\n sum(case when DATA.page='Add Friend' then 1 else 0 end)/count(distinct ts_day) as avg_pg_friend_cnt,\n sum(case when DATA.page='Add to Playlist' then 1 else 0 end)/count(distinct ts_day) as avg_pg_playlist_cnt,\n sum(case when DATA.page='Help' then 1 else 0 end)/count(distinct ts_day) as avg_pg_help_cnt,\n sum(case when DATA.page='Home' then 1 else 0 end)/count(distinct ts_day) as avg_pg_home_cnt,\n sum(case when DATA.page='Save Settings' then 1 else 0 end)/count(distinct ts_day) as avg_pg_save_settings_cnt,\n sum(case when DATA.page='About' then 1 else 0 end)/count(distinct ts_day) as avg_pg_about_cnt,\n sum(case when DATA.page='Settings' then 1 else 0 end)/count(distinct ts_day) as avg_pg_settings_cnt,\n sum(case when DATA.page='Login' then 1 else 0 end)/count(distinct ts_day) as avg_pg_login_cnt,\n sum(case when DATA.page='Submit Registration' then 1 else 0 end)/count(distinct ts_day) as avg_pg_sub_reg_cnt,\n sum(case when DATA.page='Register' then 1 else 0 end)/count(distinct ts_day) as avg_pg_reg_cnt,\n sum(case when DATA.page='Upgrade' then 1 else 0 end)/count(distinct ts_day) as avg_pg_upg_cnt,\n sum(case when DATA.page='Submit Upgrade' then 1 else 0 end)/count(distinct ts_day) as avg_pg_sub_upg_cnt,\n sum(case when DATA.page='Error' then 1 else 0 end)/count(distinct ts_day) as avg_pg_error_cnt\n FROM DATA\n LEFT JOIN \n (\n SELECT distinct DATE_TRUNC('day', to_timestamp(ts/1000)) as ts_day, userId FROM DATA \n ) day_ts\n ON day_ts.userId=DATA.userId\n GROUP BY DATA.userId,gender\n\"\"\")", "_____no_output_____" ], [ "churn_cnt = df_dataset.select(col('churn'),col('userId')).groupby('churn').count().toPandas()\n#churn_cnt.show()\nsns.barplot('churn','count', data=churn_cnt)\nplt.title('Churn Distribution')\nplt.xticks(rotation = 90)", "_____no_output_____" ], [ "is_male_flag_dstr = df_dataset.select(col('is_male_flag'),col('churn')).groupby('is_male_flag','churn').agg(count(\"churn\").alias(\"churn_cnt\")).toPandas()\n#is_male_flag_dstr.show()\nsns.barplot('churn', 'churn_cnt', hue = 'is_male_flag', 
data=is_male_flag_dstr)\nplt.title('Churn Distribution Per Gender')\nplt.xticks(rotation = 90)", "_____no_output_____" ], [ "is_male_flag_dstr = df_dataset.select(col('is_male_flag'),col('churn')).groupby('is_male_flag').agg(F.mean(\"churn\").alias(\"avg_churn_cnt\")).toPandas()\n#is_male_flag_dstr.show()\nsns.barplot('is_male_flag', 'avg_churn_cnt', data=is_male_flag_dstr)\nplt.title('Average Churn Distribution Per Gender')\nplt.xticks(rotation = 90)", "_____no_output_____" ], [ "days_in_app_dstr = df_dataset.select(col('days_in_app'),col('churn')).groupby('churn').agg(F.mean(\"days_in_app\").alias(\"avg_days_in_app\")).toPandas()\n#is_male_flag_dstr.show()\nsns.barplot('churn', 'avg_days_in_app', data=days_in_app_dstr)\nplt.title('Churn Distribution Per Average Days in App')\nplt.xticks(rotation = 90)", "_____no_output_____" ], [ "days_in_app_dstr = df_dataset.select(col('avg_sessions_per_day'),col('churn')).groupby('churn').agg(F.mean(\"avg_sessions_per_day\").alias(\"avg_sessions_per_day\")).toPandas()\n#is_male_flag_dstr.show()\nsns.barplot('churn', 'avg_sessions_per_day', data=days_in_app_dstr)\nplt.title('Churn Distribution Per Average Sessions Per Day')\nplt.xticks(rotation = 90)", "_____no_output_____" ], [ "days_in_app_dstr = df_dataset.select(col('avg_pg_down_cnt'),col('churn')).groupby('churn').agg(F.mean(\"avg_pg_down_cnt\").alias(\"avg_pg_thumbs_down\")).toPandas()\n#is_male_flag_dstr.show()\nsns.barplot('churn', 'avg_pg_thumbs_down', data=days_in_app_dstr)\nplt.title('Churn Distribution Per Average Thumbs Down Per Day')\nplt.xticks(rotation = 90)", "_____no_output_____" ], [ "days_in_app_dstr = df_dataset.select(col('avg_pg_up_cnt'),col('churn')).groupby('churn').agg(F.mean(\"avg_pg_up_cnt\").alias(\"avg_pg_thumps_up\")).toPandas()\n#is_male_flag_dstr.show()\nsns.barplot('churn', 'avg_pg_thumps_up', data=days_in_app_dstr)\nplt.title('Churn Distribution Per Average Thumbs Up Per Day')\nplt.xticks(rotation = 90)", "_____no_output_____" ], [ "days_in_app_dstr = df_dataset.select(col('avg_pg_friend_cnt'),col('churn')).groupby('churn').agg(F.mean(\"avg_pg_friend_cnt\").alias(\"avg_pg_friend_cnt\")).toPandas()\n#is_male_flag_dstr.show()\nsns.barplot('churn', 'avg_pg_friend_cnt', data=days_in_app_dstr)\nplt.title('Churn Distribution Per Average Add Friends Per Day')\nplt.xticks(rotation = 90)", "_____no_output_____" ], [ "days_in_app_dstr = df_dataset.select(col('avg_pg_playlist_cnt'),col('churn')).groupby('churn').agg(F.mean(\"avg_pg_playlist_cnt\").alias(\"avg_pg_playlist_cnt\")).toPandas()\n#is_male_flag_dstr.show()\nsns.barplot('churn', 'avg_pg_playlist_cnt', data=days_in_app_dstr)\nplt.title('Churn Distribution Per Average Add to Playlist Per Day')\nplt.xticks(rotation = 90)", "_____no_output_____" ], [ "days_in_app_dstr = df_dataset.select(col('avg_pg_advert_cnt'),col('churn')).groupby('churn').agg(F.mean(\"avg_pg_advert_cnt\").alias(\"avg_pg_advert_cnt\")).toPandas()\n#is_male_flag_dstr.show()\nsns.barplot('churn', 'avg_pg_advert_cnt', data=days_in_app_dstr)\nplt.title('Churn Distribution Per Average Advert Per Day')\nplt.xticks(rotation = 90)", "_____no_output_____" ], [ "days_in_app_dstr = df_dataset.select(col('avg_pg_error_cnt'),col('churn')).groupby('churn').agg(F.mean(\"avg_pg_error_cnt\").alias(\"avg_pg_error_cnt\")).toPandas()\n#is_male_flag_dstr.show()\nsns.barplot('churn', 'avg_pg_error_cnt', data=days_in_app_dstr)\nplt.title('Churn Distribution Per Error Per Day')\nplt.xticks(rotation = 90)", "_____no_output_____" ], [ "days_in_app_dstr = 
df_dataset.select(col('events_cnt'),col('churn')).groupby('churn').agg(F.mean(\"events_cnt\").alias(\"avg_events_cnt\")).toPandas()\n#is_male_flag_dstr.show()\nsns.barplot('churn', 'avg_events_cnt', data=days_in_app_dstr)\nplt.title('Churn Distribution Per Events Average')\nplt.xticks(rotation = 90)", "_____no_output_____" ], [ "days_in_app_dstr = df_dataset.select(col('avg_pg_song_cnt'),col('churn')).groupby('churn').agg(F.mean(\"avg_pg_song_cnt\").alias(\"avg_pg_song_cnt\")).toPandas()\n#is_male_flag_dstr.show()\nsns.barplot('churn', 'avg_pg_song_cnt', data=days_in_app_dstr)\nplt.title('Churn Distribution Per Songs Average')\nplt.xticks(rotation = 90)", "_____no_output_____" ], [ "days_in_app_dstr = df_dataset.select(col('avg_pg_logout_cnt'),col('churn')).groupby('churn').agg(F.mean(\"avg_pg_logout_cnt\").alias(\"avg_pg_logout_cnt\")).toPandas()\n#is_male_flag_dstr.show()\nsns.barplot('churn', 'avg_pg_logout_cnt', data=days_in_app_dstr)\nplt.title('Churn Distribution Per LogOut Average')\nplt.xticks(rotation = 90)", "_____no_output_____" ], [ "days_in_app_dstr = df_dataset.select(col('avg_pg_sub_upg_cnt'),col('churn')).groupby('churn').agg(F.mean(\"avg_pg_sub_upg_cnt\").alias(\"avg_pg_sub_upg_cnt\")).toPandas()\n#is_male_flag_dstr.show()\nsns.barplot('churn', 'avg_pg_sub_upg_cnt', data=days_in_app_dstr)\nplt.title('Churn Distribution Per Upgrade Average')\nplt.xticks(rotation = 90)", "_____no_output_____" ] ], [ [ "# Modeling\nSplit the full dataset into train, test, and validation sets. Test out several of the machine learning methods you learned. Evaluate the accuracy of the various models, tuning parameters as necessary. Determine your winning model based on test accuracy and report results on the validation set. Since the churned users are a fairly small subset, I suggest using F1 score as the metric to optimize.", "_____no_output_____" ] ], [ [ "df_dataset = spark.sql(\"\"\"\nSELECT DATA.userId,\n case when gender='M' then 1 else 0 end as is_male_flag,\n max(churn) as churn,\n count(distinct ts_day) as days_in_app,\n count(distinct song)/sum(case when song is not null then 1 else 0 end) as avg_songs,\n count(distinct artist)/sum(case when song is not null then 1 else 0 end) as avg_artists,\n round(sum(length/60)/sum(case when song is not null then 1 else 0 end),2) as avg_song_length,\n count(1) as events_cnt, \n count(1)/count(distinct ts_day) as avg_sessions_per_day,\n sum(case when DATA.page='NextSong' then 1 else 0 end)/count(distinct ts_day) as avg_pg_song_cnt,\n sum(case when DATA.page='Roll Advert' then 1 else 0 end)/count(distinct ts_day) as avg_pg_advert_cnt,\n sum(case when DATA.page='Logout' then 1 else 0 end)/count(distinct ts_day) as avg_pg_logout_cnt,\n sum(case when DATA.page='Thumbs Down' then 1 else 0 end)/count(distinct ts_day) as avg_pg_down_cnt,\n sum(case when DATA.page='Thumbs Up' then 1 else 0 end)/count(distinct ts_day) as avg_pg_up_cnt,\n sum(case when DATA.page='Add Friend' then 1 else 0 end)/count(distinct ts_day) as avg_pg_friend_cnt,\n sum(case when DATA.page='Add to Playlist' then 1 else 0 end)/count(distinct ts_day) as avg_pg_playlist_cnt,\n sum(case when DATA.page='Help' then 1 else 0 end)/count(distinct ts_day) as avg_pg_help_cnt,\n sum(case when DATA.page='Home' then 1 else 0 end)/count(distinct ts_day) as avg_pg_home_cnt,\n sum(case when DATA.page='Save Settings' then 1 else 0 end)/count(distinct ts_day) as avg_pg_save_settings_cnt,\n sum(case when DATA.page='About' then 1 else 0 end)/count(distinct ts_day) as avg_pg_about_cnt,\n sum(case when 
DATA.page='Settings' then 1 else 0 end)/count(distinct ts_day) as avg_pg_settings_cnt,\n sum(case when DATA.page='Login' then 1 else 0 end)/count(distinct ts_day) as avg_pg_login_cnt,\n sum(case when DATA.page='Submit Registration' then 1 else 0 end)/count(distinct ts_day) as avg_pg_sub_reg_cnt,\n sum(case when DATA.page='Register' then 1 else 0 end)/count(distinct ts_day) as avg_pg_reg_cnt,\n sum(case when DATA.page='Upgrade' then 1 else 0 end)/count(distinct ts_day) as avg_pg_upg_cnt,\n sum(case when DATA.page='Submit Upgrade' then 1 else 0 end)/count(distinct ts_day) as avg_pg_sub_upg_cnt,\n sum(case when DATA.page='Error' then 1 else 0 end)/count(distinct ts_day) as avg_pg_error_cnt\n FROM DATA\n LEFT JOIN \n (\n SELECT distinct DATE_TRUNC('day', to_timestamp(ts/1000)) as ts_day, userId FROM DATA \n ) day_ts\n ON day_ts.userId=DATA.userId\n GROUP BY DATA.userId,gender\n\"\"\")", "_____no_output_____" ], [ "#for column in ['days_in_app','events_cnt','avg_sessions_per_day','avg_pg_song_cnt','avg_pg_advert_cnt',\n# 'avg_pg_friend_cnt','avg_pg_playlist_cnt','avg_songs','avg_artists','avg_song_length',\n# 'avg_pg_logout_cnt','avg_pg_sub_upg_cnt','avg_pg_upg_cnt','avg_pg_down_cnt','avg_pg_up_cnt',\n# 'avg_pg_error_cnt'\n# ]:\nfor column in [ 'days_in_app',\n 'avg_songs',\n 'avg_artists',\n 'avg_song_length',\n 'events_cnt', \n 'avg_sessions_per_day',\n 'avg_pg_song_cnt',\n 'avg_pg_advert_cnt',\n 'avg_pg_logout_cnt',\n 'avg_pg_down_cnt',\n 'avg_pg_up_cnt',\n 'avg_pg_friend_cnt',\n 'avg_pg_playlist_cnt'\n ]:\n # VectorAssembler Transformation - Converting column to vector type\n vector_assempler = VectorAssembler(inputCols=[column],outputCol=column+\"_vect\")\n\n # MinMaxScaler Transformation\n scaler = MinMaxScaler(inputCol=column+\"_vect\", outputCol=column+\"_scaled\")\n\n # Pipeline of VectorAssembler and MinMaxScaler\n pipeline = Pipeline(stages=[vector_assempler, scaler])\n\n # Fitting pipeline on dataframe\n df_dataset = pipeline.fit(df_dataset).transform(df_dataset).drop(column+\"_vect\")", "_____no_output_____" ], [ "#features_vector_assempler = VectorAssembler(inputCols=['days_in_app_scaled','events_cnt_scaled',\n# 'avg_sessions_per_day_scaled','avg_pg_song_cnt_scaled','avg_pg_advert_cnt_scaled',\n# 'avg_pg_friend_cnt_scaled','avg_pg_playlist_cnt_scaled','avg_songs_scaled','avg_artists_scaled',\n# 'avg_song_length_scaled','avg_pg_logout_cnt_scaled','avg_pg_sub_upg_cnt_scaled',\n# 'avg_pg_upg_cnt_scaled','avg_pg_down_cnt_scaled','avg_pg_up_cnt_scaled',\n# 'avg_pg_error_cnt_scaled'\n# ],outputCol=\"features\")\nfeatures_vector_assempler = VectorAssembler(inputCols=['is_male_flag',\n 'days_in_app_scaled',\n 'avg_songs_scaled',\n 'avg_artists_scaled',\n 'avg_song_length_scaled',\n 'events_cnt_scaled', \n 'avg_sessions_per_day_scaled',\n 'avg_pg_song_cnt_scaled',\n 'avg_pg_advert_cnt_scaled',\n 'avg_pg_logout_cnt_scaled',\n 'avg_pg_down_cnt_scaled',\n 'avg_pg_up_cnt_scaled',\n 'avg_pg_friend_cnt_scaled',\n 'avg_pg_playlist_cnt_scaled'],outputCol=\"features\")\ndf_dataset_model = features_vector_assempler.transform(df_dataset)", "_____no_output_____" ], [ "df_dataset_model = df_dataset_model.select(col(\"churn\").alias(\"label\"),col(\"features\"))", "_____no_output_____" ], [ "#Test 1\ntrain, test = df_dataset_model.randomSplit([0.8, 0.2], seed=7)\n#sub_test, validation = test.randomSplit([0.5, 0.5], seed = 7)\nprint(\"Training Dataset Count: \" + str(train.count()))\nprint(\"Test Dataset Count: \" + str(test.count()))", "Training Dataset Count: 187\nTest Dataset Count: 38\n" ], [ "gbt = 
GBTClassifier(featuresCol = 'features', labelCol = \"label\", maxIter = 10, maxDepth = 10, seed = 7)\ngbt_fitted_model = gbt.fit(train)\npredictions = gbt_fitted_model.transform(test)\nf1 = MulticlassClassificationEvaluator(metricName = 'f1')\nacc = MulticlassClassificationEvaluator(metricName = 'accuracy')\nprec = MulticlassClassificationEvaluator(metricName = 'weightedPrecision')\nrec = MulticlassClassificationEvaluator(metricName = 'weightedRecall')\ngbt_f1_score = f1.evaluate(predictions)\ngbt_acc_score = acc.evaluate(predictions)\ngbt_prec_score = prec.evaluate(predictions)\ngbt_rec_score = rec.evaluate(predictions)\nprint('GBT Accuracy: {}, GBT Precision: {}, GBT Recall: {}, GBT F1-Score: {}'.format(round(gbt_acc_score*100,2),round(gbt_prec_score*100,2),round(gbt_rec_score*100,2),round(gbt_f1_score*100,2)))", "GBT Accuracy: 57.89, GBT Precision: 57.89, GBT Recall: 57.89, GBT F1-Score: 57.89\n" ], [ "rf = RandomForestClassifier()\nrf_fitted_model = rf.fit(train)\npredictions = rf_fitted_model.transform(test)\nf1 = MulticlassClassificationEvaluator(metricName = 'f1')\nacc = MulticlassClassificationEvaluator(metricName = 'accuracy')\nprec = MulticlassClassificationEvaluator(metricName = 'weightedPrecision')\nrec = MulticlassClassificationEvaluator(metricName = 'weightedRecall')\nrf_f1_score = f1.evaluate(predictions)\nrf_acc_score = acc.evaluate(predictions)\nrf_prec_score = prec.evaluate(predictions)\nrf_rec_score = rec.evaluate(predictions)\nprint('Random Forest Accuracy: {}, Random Forest Precision: {}, Random Forest Recall: {}, Random Forest F1-Score: {}'.format(round(rf_acc_score*100,2),round(rf_prec_score*100,2),round(rf_rec_score*100,2),round(rf_f1_score*100,2)))", "Random Forest Accuracy: 63.16, Random Forest Precision: 62.9, Random Forest Recall: 63.16, Random Forest F1-Score: 62.3\n" ], [ "lr = LogisticRegression(featuresCol=\"features\", labelCol=\"label\", maxIter=10, regParam=0.01)\nlr_fitted_model = lr.fit(train)\npredictions = lr_fitted_model.transform(test)\nf1 = MulticlassClassificationEvaluator(metricName = 'f1')\nacc = MulticlassClassificationEvaluator(metricName = 'accuracy')\nprec = MulticlassClassificationEvaluator(metricName = 'weightedPrecision')\nrec = MulticlassClassificationEvaluator(metricName = 'weightedRecall')\nlr_f1_score = f1.evaluate(predictions)\nlr_acc_score = acc.evaluate(predictions)\nlr_prec_score = prec.evaluate(predictions)\nlr_rec_score = rec.evaluate(predictions)\nprint('Logistic Regression Accuracy: {}, Logistic Regression Precision: {}, Logistic Regression Recall: {}, Logistic Regression F1-Score: {}'.format(round(lr_acc_score*100,2),round(lr_prec_score*100,2),round(lr_rec_score*100,2),round(lr_f1_score*100,2)))", "Logistic Regression Accuracy: 63.16, Logistic Regression Precision: 63.26, Logistic Regression Recall: 63.16, Logistic Regression F1-Score: 61.51\n" ], [ "svm = LinearSVC(featuresCol=\"features\", labelCol=\"label\", maxIter=10, regParam=0.1)\nsvm_fitted_model = svm.fit(train)\npredictions = svm_fitted_model.transform(test)\nf1 = MulticlassClassificationEvaluator(metricName = 'f1')\nacc = MulticlassClassificationEvaluator(metricName = 'accuracy')\nprec = MulticlassClassificationEvaluator(metricName = 'weightedPrecision')\nrec = MulticlassClassificationEvaluator(metricName = 'weightedRecall')\nsvm_f1_score = f1.evaluate(predictions)\nsvm_acc_score = acc.evaluate(predictions)\nsvm_prec_score = prec.evaluate(predictions)\nsvm_rec_score = rec.evaluate(predictions)\nprint('SVM Accuracy: {}, SVM Precision: {}, SVM Recall: {}, SVM 
F1-Score: {}'.format(round(svm_acc_score*100,2),round(svm_prec_score*100,2),round(svm_rec_score*100,2),round(svm_f1_score*100,2)))", "SVM Accuracy: 63.16, SVM Precision: 69.28, SVM Recall: 63.16, SVM F1-Score: 57.2\n" ] ] ], [ [ "From the executions and evaluations above, the Random Forest classifier achieved the best F1-score (62.3), the metric we chose to optimize because churned users are a small subset, so we will select it as the best-performing algorithm. \nThis is the algorithm that we will use to calculate the churn score from these KPIs.\n\nOf course, the next step is to evaluate and validate the results by running the code on the full dataset.\nIf we are happy with the results, we can deploy the churn calculation algorithm in production.", "_____no_output_____" ], [ "# Final Steps\nClean up your code, adding comments and renaming variables to make the code easier to read and maintain. Refer to the Spark Project Overview page and Data Scientist Capstone Project Rubric to make sure you are including all components of the capstone project and meet all expectations. Remember, this includes thorough documentation in a README file in a Github repository, as well as a web app or blog post.", "_____no_output_____" ] ] ]
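One gap worth closing before the full-dataset run: the Modeling section calls for tuning parameters, yet each classifier above is fit once with fixed hyperparameters. The sketch below shows how that tuning could look with Spark's CrossValidator. It is a minimal sketch, not the notebook's actual method: it reuses the `train` and `test` DataFrames built above (columns `label` and `features`), keeps F1 as the selection metric, and the grid values are illustrative guesses.

from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

# Base estimator; the grid below overrides these defaults during the search.
rf = RandomForestClassifier(featuresCol="features", labelCol="label", seed=7)

# Illustrative grid; widen it when running on the full dataset on a cluster.
param_grid = (ParamGridBuilder()
              .addGrid(rf.numTrees, [20, 50, 100])
              .addGrid(rf.maxDepth, [5, 10])
              .build())

f1_evaluator = MulticlassClassificationEvaluator(metricName='f1')

# 3-fold cross-validation on the training set, selecting the grid point
# with the highest mean F1 and refitting it on all of `train`.
cv = CrossValidator(estimator=rf,
                    estimatorParamMaps=param_grid,
                    evaluator=f1_evaluator,
                    numFolds=3,
                    seed=7)
cv_model = cv.fit(train)

predictions = cv_model.transform(test)
print('Tuned RF F1-Score: {}'.format(round(f1_evaluator.evaluate(predictions)*100, 2)))

Because CrossValidator refits the winning parameter set on the whole training split, the F1 printed here stays directly comparable to the single-fit scores above.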
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ] ]
cbf5383bf5e3a0971bbe6de98836fbc93cc628cd
130,338
ipynb
Jupyter Notebook
source/archive/ACS_CA_counties.ipynb
benmerrilll/milestone-ii
8e743b4f7bd54a49c8327a36663b71a26d0a818c
[ "MIT" ]
null
null
null
source/archive/ACS_CA_counties.ipynb
benmerrilll/milestone-ii
8e743b4f7bd54a49c8327a36663b71a26d0a818c
[ "MIT" ]
null
null
null
source/archive/ACS_CA_counties.ipynb
benmerrilll/milestone-ii
8e743b4f7bd54a49c8327a36663b71a26d0a818c
[ "MIT" ]
null
null
null
52.896916
22,174
0.481655
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
cbf53a5b054b90823f0d1af85c5efec96adbe886
24,328
ipynb
Jupyter Notebook
ROC.ipynb
douglasbarbosadelima/30-seconds-of-code
7d40d53ac34d8f1a1f27cf85c6554a75212785b7
[ "CC0-1.0" ]
null
null
null
ROC.ipynb
douglasbarbosadelima/30-seconds-of-code
7d40d53ac34d8f1a1f27cf85c6554a75212785b7
[ "CC0-1.0" ]
null
null
null
ROC.ipynb
douglasbarbosadelima/30-seconds-of-code
7d40d53ac34d8f1a1f27cf85c6554a75212785b7
[ "CC0-1.0" ]
null
null
null
95.403922
10,686
0.820577
[ [ [ "<a href=\"https://colab.research.google.com/github/douglasbarbosadelima/30-seconds-of-code/blob/master/ROC.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# ROC Curve logistic regression x KNN_res", "_____no_output_____" ] ], [ [ "import numpy as np\nfrom sklearn import metrics\nfrom sklearn import linear_model\nfrom sklearn.neighbors import KNeighborsClassifier\n\n\nimport matplotlib.pyplot as plt\nimport math\n\n\n", "_____no_output_____" ], [ "arq = open('c:\\\\dados\\\\mimimi_raro.csv', 'r')\ntexto = arq.readlines()\narq.close()\n\n\nlx1=[]\nlx2=[]\n\nX=[]\nly=[]\n\nfor l in texto:\n l1=l.split(\";\")\n lx1.append(float(l1[0]))\n lx2.append(float(l1[1]))\n ly.append(float(l1[2]))\n \n\n \n\nX = list(zip(lx1,lx2))\n\n\nX=np.array(X)\n\nY = ly\nY=np.array(Y)\n\n", "_____no_output_____" ], [ "#Gráfico com as amostras para treinamento\n\nfor i in range(len(X)):\n if(Y[i]==1.0):plt.plot(lx1[i],lx2[i],'r+')\n else:plt.plot(lx1[i],lx2[i],'bo')\nplt.title('Amostras para treinamento') \nplt.show()\n\n#É criado o objeto de classificação\n#com Logistic Regression\n\nlogreg = linear_model.LogisticRegression()\n\n#aprendizado\nmodel=logreg.fit(X, Y)\n\n\n#predição\nZ=logreg.predict(X)\n\nprint(logreg.coef_)\nprint(logreg.intercept_)\n\nscr=[]\nfor i in range(len(Y)):\n scr.append(1.0/(1+math.exp(-(-3.096+0.0234*lx1[i]+0.0146597*lx2[i]))))\n\nfprx, tprx, thresholdsx = metrics.roc_curve(Y, scr, pos_label=1)\n \n", "_____no_output_____" ], [ "# Classificando com KNN\n\n\nneigh = KNeighborsClassifier(n_neighbors=3)\nneigh.fit(X,Y) \nscore_knn=neigh.predict_proba(X)\n\n\nscore_1=[]\nfor i in range(len(Y)):\n score_1.append(score_knn[i][1])\nfpr_knn, tpr_knn, thresholds_knn = metrics.roc_curve(Y, score_1, pos_label=1)\n\n\n", "_____no_output_____" ], [ "plt.plot(fprx,tprx,color='blue')\nplt.plot(fpr_knn,tpr_knn,color='black')\nplt.show()\n\n", "_____no_output_____" ], [ "print('AUC logistic=',metrics.auc(fprx,tprx))\nprint('AUC KNN=',metrics.auc(fpr_knn,tpr_knn))", "AUC logistic= 0.9452247191011236\nAUC KNN= 0.9719101123595506\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
cbf542ccdd89db2d569a5eae2afbf93e019d1c58
50,078
ipynb
Jupyter Notebook
eLibrary_testing.ipynb
jjmilburn/eLibrarian
cfd88161cbe9bb54079ac1751118633ab246404a
[ "MIT" ]
null
null
null
eLibrary_testing.ipynb
jjmilburn/eLibrarian
cfd88161cbe9bb54079ac1751118633ab246404a
[ "MIT" ]
null
null
null
eLibrary_testing.ipynb
jjmilburn/eLibrarian
cfd88161cbe9bb54079ac1751118633ab246404a
[ "MIT" ]
null
null
null
208.658333
13,954
0.639602
[ [ [ "import requests\n\n# Get token from Hoopla\n\nusername = 'HOOPLA_LOGIN'\n\n# for test, fake\npassword = 'HOOPLA_PWD'\n\nhoopla_headers = {'accept':'application/json, text/plain, */*', 'accept-encoding': 'gzip, deflate, br',\n 'content-type':'application/x-www-form-urlencoded', 'device-version': 'Chrome', 'referer':'https://www.hoopladigital.com/'}\n\ndata = {'username':username, 'password':password}\n\nresp = requests.post('https://hoopla-ws.hoopladigital.com/tokens', \n headers=hoopla_headers, data=data)\n ", "_____no_output_____" ], [ "# Extract the token from the response.\nhoopla_token = None\nimport json\n\nif resp.status_code == 200:\n print(\"Raw Content: {}\".format(resp.content))\n content = resp.content.decode('utf-8')\n json_content = json.loads(content)\n if json_content['tokenStatus'] == \"SUCCESS\":\n hoopla_token = json_content['token']\n else:\n print(\"Invalid credentials, could not obtain token\")\nelse:\n print(\"Error getting token!\")\n \nprint(hoopla_token)", "Raw Content: b'{\"tokenStatus\":\"SUCCESS\",\"token\":\"a6f91c68-7245-4eb3-b7b4-6a65c3c0915d\"}'\na6f91c68-7245-4eb3-b7b4-6a65c3c0915d\n" ], [ "# Search the raw API\nsearch_param = 'tolkien'\n\n# Try a search against the 'raw' search. Requires an \"OPTIONS\" query first?\nhoopla_headers = {'accept':'application/json, text/plain, */*', 'accept-encoding': 'gzip, deflate, br',\n 'content-type':'application/x-www-form-urlencoded', 'device-version': 'Chrome', 'referer':'https://hoopla-ws.hoopladigital.com',\n 'ws-api': '2.1',\n 'authorization': \"Bearer {}\".format(hoopla_token)}\nraw_search_url = 'https://hoopla-ws.hoopladigital.com/categories/search?q={}&sort=TOP&wwwVersion=4.20.3'.format(search_param)\n\nresp = requests.get(raw_search_url, headers=hoopla_headers)\nprint(resp.status_code)\nprint(\"Attempted a search for {}, result={}\".format(search_param, resp.content))\n\n# Search against the Audiobooks endpoint\naudiobooks_search_url = 'https://hoopla-ws.hoopladigital.com/v2/search/AUDIOBOOKS?limit=50&offset=0&q={}&sort=TOP&wwwVersion=4.20.3'.format(search_param)\nab_resp = requests.get(audiobooks_search_url, headers=hoopla_headers)\nprint(ab_resp.status_code)\n\nebooks_search_url = 'https://hoopla-ws.hoopladigital.com/v2/search/EBOOKS?limit=50&offset=0&q={}&sort=TOP&wwwVersion=4.20.3'.format(search_param)\nebooks_resp = requests.get(audiobooks_search_url, headers=hoopla_headers)\nprint(ebooks_resp.status_code)\n\npeople_search_url = 'https://hoopla-ws.hoopladigital.com/v2/search/PEOPLE?limit=50&offset=0&q={}&sort=TOP&wwwVersion=4.20.3'.format(search_param)\npeople_resp = requests.get(audiobooks_search_url, headers=hoopla_headers)\nprint(people_resp.status_code)\n\n# Now try to search against the 'artist-suggestions' ('unauthorized' if called directly)\n\n#search_artist_sugg_url = 'https://search-api.hoopladigital.com/prod/artist-suggestions?q={}&suggester=name&size=5'.format(search_param)\n##resp = requests.get(search_artist_sugg_url,headers=hoopla_headers)\n#print(resp.status_code)\n#print(resp.content)\n\n#search_title_sugg_url = 'https://search-api.hoopladigital.com/prod/title-suggestions?q=elizabeth+bear&suggester=series&size=5'\n", "200\nAttempted a search for tolkien, result=b'[]'\n200\n200\n200\n" ], [ "ab_results = json.loads(ab_resp.content.decode('utf-8'))\nebook_results = json.loads(ebooks_resp.content.decode('utf-8'))\npeople_results = json.loads(people_resp.content.decode('utf-8'))", "_____no_output_____" ], [ 
"print(\"Audiobooks:\")\nprint(ab_results)\n\nprint(\"Ebooks\")\nprint(ebook_results)\nprint(\"People\")\n\nprint(people_results)", "Audiobooks:\n{'found': 32, 'titles': [{'titleId': 11001681, 'title': 'Voices of Poetry - Volume 1', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'J. R. R. Tolkien', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'lla_9781593165185', 'year': 2017, 'children': False, 'fixedLayout': False, 'readAlong': False}, {'titleId': 11419448, 'title': 'The Return of the King (Dramatized)', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'J. R. R. Tolkien', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'rcb_9781598874525', 'year': 1979, 'issueNumberDescription': '0', 'children': False, 'fixedLayout': False, 'readAlong': False}, {'titleId': 11419431, 'title': 'The Fellowship of the Ring', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'J. R. R. Tolkien', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'rcb_9781598874501', 'year': 1979, 'issueNumberDescription': '0', 'children': False, 'fixedLayout': False, 'readAlong': False}, {'titleId': 11419149, 'title': 'The Hobbit (Dramatized)', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'J. R. R. Tolkien', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'rcb_9781598874495', 'year': 1979, 'issueNumberDescription': '0', 'children': False, 'fixedLayout': False, 'readAlong': False}, {'titleId': 11419458, 'title': 'The Two Towers (Dramatized)', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'J. R. R. Tolkien', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'rcb_9781598874518', 'year': 1979, 'issueNumberDescription': '0', 'children': False, 'fixedLayout': False, 'readAlong': False}, {'titleId': 11497676, 'title': 'The Gospel According to Tolkien', 'subtitle': 'Visions of the Kingdom in Middle Earth', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'Ralph Wood', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'ecr_9781596442139', 'year': 2005, 'children': False, 'fixedLayout': False, 'readAlong': False}, {'titleId': 11638917, 'title': 'Tolkien and the Great War', 'subtitle': 'The Threshold of Middle-earth', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'John Garth', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'hpc_9780007429219', 'year': 2011, 'children': False, 'fixedLayout': False, 'readAlong': False}, {'titleId': 10026401, 'title': \"Tolkien's Ordinary Virtues\", 'subtitle': 'Exploring the Spiritual Themes of The Lord of the Rings', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'Mark Eddy Smith', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'bsa_9781433274237', 'year': 2008, 'children': False, 'fixedLayout': False, 'readAlong': False}, {'titleId': 11030719, 'title': 'A Hobbit Journey', 'subtitle': \"Discovering the Enchantment of J. R. R. 
Tolkien's Middle-earth\", 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'Matthew Dickerson', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'oas_9781621881001', 'year': 2012, 'children': False, 'fixedLayout': False, 'readAlong': False}, {'titleId': 11497900, 'title': 'Finding God in the Hobbit', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'Jim Ware', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'ecr_9781610455893', 'year': 2012, 'children': False, 'fixedLayout': False, 'readAlong': False}, {'titleId': 11983096, 'title': \"Bilbo's Journey\", 'subtitle': 'Discovering the Hidden Meaning in The Hobbit', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'Joseph Pearce', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'aut_9781505101317', 'year': 2017, 'children': True, 'fixedLayout': False, 'readAlong': False}, {'titleId': 11994474, 'title': 'The Inklings', 'subtitle': 'C. S. Lewis, J. R. R. Tolkien, Charles Williams, and Their Friends', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'Humphrey Carpenter', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'bsa_9781538483923', 'year': 1991, 'children': False, 'fixedLayout': False, 'readAlong': False}, {'titleId': 11760760, 'title': 'A Hobbit, a Wardrobe, and a Great War', 'subtitle': 'How J.R.R. Tolkien and C.S. Lewis Rediscovered Faith, Friendship, and Heroism in the Cataclysm of 19', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'Joseph Loconte', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'hpc_9780718079383', 'year': 2015, 'children': False, 'fixedLayout': False, 'readAlong': False}, {'titleId': 11030878, 'title': 'The Spiritual World Of The Hobbit', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'James Stuart Bell', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'oas_9781621881308', 'year': 2013, 'children': False, 'fixedLayout': False, 'readAlong': False}, {'titleId': 10078531, 'title': 'The Princess and the Goblin', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'George Macdonald', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'bsa_9781441796097', 'year': 2006, 'children': False, 'fixedLayout': False, 'readAlong': False}, {'titleId': 10024575, 'title': 'Titus Groan', 'subtitle': 'The Gormenghast Trilogy, Book 1', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'Mervyn Peake', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'bsa_9781441787118', 'year': 2010, 'children': False, 'fixedLayout': False, 'readAlong': False}, {'titleId': 11810406, 'title': 'The Gray Wolf and Other Fantasy Stories', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'George Macdonald', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'bsa_9781538422335', 'year': 2017, 'children': True, 'fixedLayout': False, 'readAlong': False}, {'titleId': 11030810, 'title': 'Finding God In The Lord Of The Rings', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'Kurt Bruner', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'oas_9781621882053', 'year': 2003, 'children': False, 'fixedLayout': False, 'readAlong': False}, {'titleId': 10025773, 'title': 'C.S. 
Lewis', 'subtitle': 'Memories and Reflections', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'John Lawlor', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'bsa_9781441799913', 'year': 2010, 'children': False, 'fixedLayout': False, 'readAlong': False}, {'titleId': 10026560, 'title': 'That Hideous Strength', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'C. S. Lewis', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'bsa_9781441713001', 'year': 2009, 'children': False, 'fixedLayout': False, 'readAlong': False}, {'titleId': 11653879, 'title': 'The Princess and The Goblin and The Goblin and the Grocer', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'George Macdonald', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'aut_9781944785048', 'year': 2016, 'children': True, 'fixedLayout': False, 'readAlong': False}, {'titleId': 11983095, 'title': \"Frodo's Journey\", 'subtitle': 'Discover the Hidden Meaning of The Lord of the Rings', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'Joseph Pearce', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'aut_9781505101324', 'year': 2017, 'children': True, 'fixedLayout': False, 'readAlong': False}, {'titleId': 10756539, 'title': 'Bored of the Rings', 'subtitle': 'A Parody', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'The Harvard Lampoon', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'ttm_9781452690346', 'year': 2012, 'children': False, 'fixedLayout': False, 'readAlong': False}, {'titleId': 10755153, 'title': 'The Story of the Volsungs', 'subtitle': 'The Volsunga Saga', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'Anonymous', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'ttm_9781452623566', 'year': 2011, 'children': False, 'fixedLayout': False, 'readAlong': False}, {'titleId': 11413322, 'title': 'The Night Of The Swarm', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'Robert V. S. Redick', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'ttm_9781400192960', 'year': 2013, 'children': False, 'fixedLayout': False, 'readAlong': False}, {'titleId': 11419317, 'title': \"Heaven's Net Is Wide\", 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'Lian Hearn', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'rcb_9781598875171', 'year': 2007, 'issueNumberDescription': '0', 'children': False, 'fixedLayout': False, 'readAlong': False}, {'titleId': 11994170, 'title': 'C.S. 
Lewis and the Catholic Church', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'Joseph Pearce', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'aut_9781618908087', 'year': 2017, 'children': False, 'fixedLayout': False, 'readAlong': False}, {'titleId': 11497505, 'title': 'George MacDonald: His Life and Works', 'subtitle': 'A Short Biography by Roland Hein', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'Rolland Hein', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'ecr_9781596440982', 'year': 2005, 'children': False, 'fixedLayout': False, 'readAlong': False}, {'titleId': 11665303, 'title': 'The View from the Cheap Seats', 'subtitle': 'Selected Nonfiction', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'Neil Gaiman', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'hpc_9780062262295', 'year': 2016, 'children': False, 'fixedLayout': False, 'readAlong': False}, {'titleId': 11970592, 'title': \"Han'gul, Tengwar, and Other Featural Scripts\", 'subtitle': 'Lecture 22 of 24', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'Marc Zender', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'grc_a224122', 'year': 2013, 'children': True, 'fixedLayout': False, 'readAlong': False}, {'titleId': 11342301, 'title': 'Operation Arcana', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'Various Authors', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'bsa_9781504618922', 'year': 2015, 'children': False, 'fixedLayout': False, 'readAlong': False}, {'titleId': 11954619, 'title': 'Reviewing Vocabulary through Literature', 'subtitle': 'Lecture 24 of 36', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'Kevin Flanigan', 'demo': False, 'pa': False, 'edited': False, 'artKey': 'grc_937324', 'year': 2015, 'children': False, 'fixedLayout': False, 'readAlong': False}], 'kindFacets': [{'value': 'AUDIOBOOK', 'label': 'Audiobooks', 'hits': 32, 'kind': 'AUDIOBOOK'}], 'facets': {'kind': [{'value': 'AUDIOBOOK', 'label': 'Audiobooks', 'hits': 32, 'kind': 'AUDIOBOOK'}], 'artists_literal': [{'value': 'Various Readers', 'label': 'Various Readers', 'hits': 6}, {'value': 'J. R. R. Tolkien', 'label': 'J. R. R. 
Tolkien', 'hits': 5}, {'value': 'George Macdonald', 'label': 'George Macdonald', 'hits': 3}, {'value': 'Joseph Pearce', 'label': 'Joseph Pearce', 'hits': 3}, {'value': 'Simon Vance', 'label': 'Simon Vance', 'hits': 3}, {'value': 'Bernard Mayes', 'label': 'Bernard Mayes', 'hits': 2}, {'value': \"Kevin O'Brien\", 'label': \"Kevin O'Brien\", 'hits': 2}, {'value': 'Alan Sklar', 'label': 'Alan Sklar', 'hits': 1}, {'value': 'Alison Larkin', 'label': 'Alison Larkin', 'hits': 1}, {'value': 'Anonymous', 'label': 'Anonymous', 'hits': 1}], 'series_literal': [{'value': 'Lord of the Rings', 'label': 'Lord of the Rings', 'hits': 3}, {'value': 'Building a Better Vocabulary', 'label': 'Building a Better Vocabulary', 'hits': 1}, {'value': 'Cardboard Box Of The Rings', 'label': 'Cardboard Box Of The Rings', 'hits': 1}, {'value': 'Chathrand Voyage', 'label': 'Chathrand Voyage', 'hits': 1}, {'value': 'Space Trilogy', 'label': 'Space Trilogy', 'hits': 1}, {'value': 'Tales of the Otori', 'label': 'Tales of the Otori', 'hits': 1}, {'value': 'Writing and Civilization: From Ancient Worlds to Modernity', 'label': 'Writing and Civilization: From Ancient Worlds to Modernity', 'hits': 1}], 'genres': [{'value': 'Religious', 'label': 'Religious', 'hits': 8}, {'value': 'Sci-Fi & Fantasy', 'label': 'Sci-Fi & Fantasy', 'hits': 6}, {'value': 'Biography', 'label': 'Biography', 'hits': 4}, {'value': \"Children's\", 'label': \"Children's\", 'hits': 4}, {'value': 'Classics', 'label': 'Classics', 'hits': 4}, {'value': 'Nonfiction', 'label': 'Nonfiction', 'hits': 2}, {'value': 'Comedy', 'label': 'Comedy', 'hits': 1}, {'value': 'Fiction', 'label': 'Fiction', 'hits': 1}, {'value': 'History', 'label': 'History', 'hits': 1}, {'value': 'Poetry', 'label': 'Poetry', 'hits': 1}], 'publisher': [{'value': 'Blackstone Audio, Inc.', 'label': 'Blackstone Audio, Inc.', 'hits': 7}, {'value': 'HighBridge, a division of Recorded Books', 'label': 'HighBridge, a division of Recorded Books', 'hits': 5}, {'value': \"Author's Republic\", 'label': \"Author's Republic\", 'hits': 3}, {'value': 'Oasis Audio', 'label': 'Oasis Audio', 'hits': 3}, {'value': 'Tantor Audio', 'label': 'Tantor Audio', 'hits': 3}, {'value': 'christianaudio.com', 'label': 'christianaudio.com', 'hits': 3}, {'value': 'The Great Courses', 'label': 'The Great Courses', 'hits': 2}, {'value': 'Blackstone Audio, Inc., and Skyboat Media, Inc.', 'label': 'Blackstone Audio, Inc., and Skyboat Media, Inc.', 'hits': 1}, {'value': 'British Classic Audio', 'label': 'British Classic Audio', 'hits': 1}, {'value': 'Harper Collins Publishers', 'label': 'Harper Collins Publishers', 'hits': 1}], 'awards': [], 'rating': [], 'language_literal': [{'value': 'English', 'label': 'English', 'hits': 32}], 'audience': [{'value': '01', 'label': 'general/trade', 'hits': 3}], 'fiction': [{'value': 'false', 'label': 'false', 'hits': 17}, {'value': 'true', 'label': 'true', 'hits': 15}], 'year': [{'value': '2000', 'label': '2000-2009', 'hits': 7}, {'value': '1970', 'label': '1970-1979', 'hits': 4}, {'value': '1990', 'label': '1990-1999', 'hits': 1}, {'value': '2010', 'label': '2010-2019', 'hits': 20}], 'parental_advisory': [{'value': 'false', 'label': 'false', 'hits': 32}]}}\nEbooks\n{'found': 32, 'titles': [{'titleId': 11001681, 'title': 'Voices of Poetry - Volume 1', 'kind': 'Audiobooks', 'kindName': 'AUDIOBOOK', 'artistName': 'J. R. R. 
Tolkien', ...}, ...]} [the remainder of this response is essentially identical to the Audiobooks response above: the same 32 audiobook titles and the same facets, so the full output is omitted]\nPeople\n{'found': 32, ...} [likewise essentially identical to the Audiobooks response above; full output omitted]\n" ], [ "### RBDIGITAL!\nusername = 'RBDIGITAL_LOGIN'\npassword = 'RBDIGITAL_PASSWORD'\nhome_library_url = 'mycitymystate.rbdigital.com'\n\nrbdigital_headers = {'accept': '*/*', 'accept-encoding': 'gzip, deflate',\n 'Access-Control-Request-Headers': 'authorization,content-type',\n 'Access-Control-Request-Method': 'POST',\n 'Origin':
'{}'.format(home_library_url),\n 'content-type': 'application/x-www-form-urlencoded', 'device-version': 'Chrome',\n 'referer': 'https://{}/'.format(home_library_url)} # referer should point at the library site, not at Hoopla\n\n# CORS preflight check; an OPTIONS request carries no body.\noptions_resp = requests.options('http://auth.rbdigital.com/v1/authenticate',\n headers=rbdigital_headers)\nprint(options_resp.status_code)\n\n# requests computes Content-Length and Host automatically, so they are not set here.\nrbdigital_headers = {'accept': '*/*', 'accept-encoding': 'gzip, deflate',\n 'Content-Type': 'application/json',\n 'Accept-Language': 'en-US',\n 'Authorization': 'bearer 5ab487ad749bbe02e0aef7c8',\n 'Referer': 'http://auth.rbdigital.com',\n 'Origin': '{}'.format(home_library_url)}\n\npatron_data = {'PatronIdentifier': username, 'PatronSecret': password, 'Source': 'oneclick',\n 'auth_state': 'auth_internal', 'libraryId': 75} # XXX where does libraryId come from?\n\n# The endpoint expects a JSON body (it rejected form-encoded data with the\n# \"Invalid (empty) request data\" error below), so the payload is sent with json=.\nauth_resp = requests.post('http://auth.rbdigital.com/v1/authenticate',\n headers=rbdigital_headers,\n json=patron_data)\nprint(auth_resp.status_code)\nprint(auth_resp.content)\n\n# http://developer.rbdigital.com/documents/patron-login\n# Still having trouble getting the bearer token; it is unclear where it comes from.", "200\n400\nb'{\\r\\n \"message\": \"Invalid (empty) request data. Source/LibraryId/PatronIdentifier/PatronSecret values must be provided.\"\\r\\n}'\n" ] ] ]
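A minimal sketch of the patron-login call attempted above, assuming the endpoint accepts a JSON body (which the 400 message suggests) and that `library_id` and the bearer token are obtained elsewhere; the function name is illustrative, not part of the RBdigital API:

```python
import requests

AUTH_URL = 'http://auth.rbdigital.com/v1/authenticate'  # endpoint used above

def authenticate_patron(username, password, library_id, bearer_token):
    """Sketch of the patron login request; payload keys mirror the cell above."""
    headers = {
        'Content-Type': 'application/json',
        # the source of this bearer token is still unclear (see note above)
        'Authorization': 'bearer {}'.format(bearer_token),
    }
    payload = {
        'PatronIdentifier': username,
        'PatronSecret': password,
        'Source': 'oneclick',
        'auth_state': 'auth_internal',
        'libraryId': library_id,
    }
    # json= serializes the payload and sets a matching Content-Type
    resp = requests.post(AUTH_URL, headers=headers, json=payload)
    resp.raise_for_status()
    return resp.json()
```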
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code" ] ]
cbf55364497a7e21a19b34193aa28bb8d9cdadba
168,676
ipynb
Jupyter Notebook
examples/Examples.ipynb
tueda/mympltools
80b81e9e2c75bf5c0c7e8795d4ce5491978ee54c
[ "MIT" ]
null
null
null
examples/Examples.ipynb
tueda/mympltools
80b81e9e2c75bf5c0c7e8795d4ce5491978ee54c
[ "MIT" ]
null
null
null
examples/Examples.ipynb
tueda/mympltools
80b81e9e2c75bf5c0c7e8795d4ce5491978ee54c
[ "MIT" ]
null
null
null
455.881081
42,942
0.938023
[ [ [ "[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/tueda/mympltools/HEAD?labpath=examples/Examples.ipynb)\n[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tueda/mympltools/blob/HEAD/examples/Examples.ipynb)", "_____no_output_____" ] ], [ [ "# Install mympltools 22.5.1 only when running on Binder/Colab.\n! [ -n \"$BINDER_SERVICE_HOST$COLAB_GPU\" ] && pip install \"git+https://github.com/tueda/[email protected]#egg=mympltools[fitting]\"", "_____no_output_____" ] ], [ [ "## Basic style", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nimport numpy as np\n\nimport mympltools as mt\n\n# Use the style.\nmt.use(\"21.10\")\n\nx = np.linspace(-5, 5)\n\nfig, ax = plt.subplots()\nax.plot(x, x**2)\n\n# Show grid lines.\nmt.grid(ax)\n\nplt.show()", "_____no_output_____" ] ], [ [ "## Annotation text for lines", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nimport numpy as np\n\nimport mympltools as mt\n\nmt.use(\"21.10\")\n\nx = np.linspace(-5, 5)\n\nfig, ax = plt.subplots()\n\n# Plot a curve with an annotation.\nl1 = ax.plot(x, np.exp(x))\nmt.line_annotate(\"awesome function\", l1)\n\n# Plot another curve with an annotation.\nl2 = ax.plot(x, np.exp(-0.2 * x**2))\nmt.line_annotate(\"another function\", l2, x=3.5)\n\nax.set_yscale(\"log\")\nmt.grid(ax)\nplt.show()", "_____no_output_____" ] ], [ [ "To fine-tune the text position, use the `xytext` (default: `(0, 5)`) and `rotation` options.", "_____no_output_____" ], [ "## Handling uncertainties", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nimport numpy as np\n\nimport mympltools as mt\n\nmt.use(\"21.10\")\n\n# Bounded(x, dx) represents a curve with a symmetric error bound.\nx = np.linspace(0, 10)\ny1 = mt.Bounded(x, 0.1)\ny2 = mt.Bounded(1.5 + np.sin(x), 0.2)\ny3 = y1 / y2 # The error bound of this result is not symmetric.\n\nfig, ax = plt.subplots()\n\n# Plot a curve with an error band.\na1 = mt.errorband(ax, x, y3.x, y3.err, label=\"f\")\nax.legend([a1], ax.get_legend_handles_labels()[1])\n\nmt.grid(ax)\nplt.show()", "_____no_output_____" ] ], [ [ "## Curve fitting (SciPy wrapper)", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nimport numpy as np\n\nimport mympltools as mt\n\nmt.use(\"21.10\")\n\nnp.random.seed(1)\n\n# Fitting function.\ndef f(x, a, b, c):\n return a * np.exp(-b * x) + c\n\n\n# Data to be fitted.\nn = 10\nx = np.linspace(0, 4, 40)\ny = []\ne = []\nfor xi in x:\n di = np.zeros(n)\n di += f(xi, 2.5, 1.3, 0.5)\n di += np.random.randn(n) * 0.5 # noise\n yi = np.average(di)\n ei = np.std(di) / np.sqrt(n)\n y += [yi]\n e += [ei]\n\n# Perform fitting.\nmodel = mt.fit(f, x, y, e)\nprint(model)\n\n# Plot with the fitted curve.\nfig, ax = plt.subplots()\nax.errorbar(x, y, e, fmt=\"o\")\n\nax.plot(x, model(x))\n\nmt.grid(ax)\nplt.show()", "Model(f=<function f at 0x7fa8cf738550>, popt=array([2.4167779 , 1.10284522, 0.47084201]), perr=array([0.09016611, 0.08578057, 0.04401175]), pcov=array([[ 0.00812993, 0.00311033, -0.00031362],\n [ 0.00311033, 0.00735831, 0.00297383],\n [-0.00031362, 0.00297383, 0.00193703]]), chi2=26.552835140540836, ndf=37, p_value=0.8984415249043353)\n" ], [ "# Example taken from https://root.cern.ch/doc/v626/FittingDemo_8C.html\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nimport mympltools as mt\n\nmt.use(\"21.10\")\n\n# Data.\nxdata = np.linspace(0, 3, 61)\nxdata = (xdata[1:] + xdata[:-1]) / 2\n\nydata_str = \"\"\"\n 6, 1, 10, 12, 6, 13, 
23, 22, 15, 21,\n23, 26, 36, 25, 27, 35, 40, 44, 66, 81,\n75, 57, 48, 45, 46, 41, 35, 36, 53, 32,\n40, 37, 38, 31, 36, 44, 42, 37, 32, 32,\n43, 44, 35, 33, 33, 39, 29, 41, 32, 44,\n26, 39, 29, 35, 32, 21, 21, 15, 25, 15\n\"\"\"\nydata = [float(s) for s in ydata_str.split(\",\")]\n\nedata = np.sqrt(ydata)\n\n# Fitting function.\ndef background(x, c0, c1, c2):\n    return c0 + c1 * x + c2 * x**2\n\n\ndef signal(x, a0, a1, a2):\n    return 0.5 * a0 * a1 / np.pi / ((x - a2) ** 2 + 0.25 * a1**2)\n\n\ndef fit_f(x, c0, c1, c2, a0, a1, a2):\n    return background(x, c0, c1, c2) + signal(x, a0, a1, a2)\n\n\n# Perform fitting.\nmodel = mt.fit(fit_f, xdata, ydata, edata, p0=(1, 1, 1, 1, 0.2, 1))\nprint(model)\n\n# Plot with the fitted curve.\nfig, ax = plt.subplots()\nax.errorbar(xdata, ydata, edata, (xdata[1] - xdata[0]) / 2, \".\", c=\"k\", label=\"Data\")  # xerr is half of the bin width\n\nx = np.linspace(0, 3, 200)\nax.plot(x, background(x, *model.popt[0:3]), c=\"r\", label=\"Background fit\")\nax.plot(x, signal(x, *model.popt[3:6]), c=\"b\", label=\"Signal fit\")\nax.plot(x, model(x), c=\"m\", label=\"Global fit\")\n\nax.legend()\nmt.grid(ax)\nplt.show()", "Model(f=<function fit_f at 0x7fa89c1a35b0>, popt=array([ -0.86471399,  45.84336458, -13.32141334, -13.80745103,\n        -0.1723098 ,   0.98728099]), perr=array([0.89170822, 2.63256047, 0.97317695, 2.15310141, 0.0350562 ,\n       0.01134052]), pcov=array([[ 7.95143543e-01, -1.20887505e+00,  3.49733979e-01,\n         1.54541205e-01,  3.61744963e-03,  4.70726944e-04],\n       [-1.20887505e+00,  6.93037463e+00, -2.50603781e+00,\n         2.95356490e+00,  3.53801750e-02, -1.43118168e-03],\n       [ 3.49733979e-01, -2.50603781e+00,  9.47073369e-01,\n        -1.13066258e+00, -1.37041940e-02,  4.38089954e-04],\n       [ 1.54541205e-01,  2.95356490e+00, -1.13066258e+00,\n         4.63584569e+00,  5.29738409e-02, -1.15224975e-03],\n       [ 3.61744963e-03,  3.53801750e-02, -1.37041940e-02,\n         5.29738409e-02,  1.22893722e-03, -1.45002101e-05],\n       [ 4.70726944e-04, -1.43118168e-03,  4.38089954e-04,\n        -1.15224975e-03, -1.45002101e-05,  1.28607462e-04]]), chi2=58.92838606212506, ndf=54, p_value=0.3000478240204053)\n" ] ] ]
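`mt.fit` wraps `scipy.optimize.curve_fit`; the following is a rough sketch of the quantities it reports. This is not mympltools' actual implementation, just the standard recipe behind the `Model` fields (`popt`, `perr`, `pcov`, `chi2`, `ndf`, `p_value`) printed above:

```python
import numpy as np
from scipy import optimize, stats

def fit_sketch(f, x, y, yerr, p0=None):
    """Sketch of a curve_fit wrapper: weighted least squares plus a
    chi-square goodness-of-fit summary."""
    popt, pcov = optimize.curve_fit(f, x, y, p0=p0, sigma=yerr,
                                    absolute_sigma=True)
    perr = np.sqrt(np.diag(pcov))      # 1-sigma parameter uncertainties
    resid = (np.asarray(y) - f(np.asarray(x), *popt)) / np.asarray(yerr)
    chi2 = float(np.sum(resid**2))     # chi-square of the fit
    ndf = len(x) - len(popt)           # degrees of freedom
    p_value = stats.chi2.sf(chi2, ndf) # goodness-of-fit p-value
    return popt, perr, pcov, chi2, ndf, p_value
```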
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
cbf55b6e1047b55ca1127661b45d6bf39d07b90f
24,993
ipynb
Jupyter Notebook
1D_AdvDiff_Stat.ipynb
luiggix/GeoMac_Examen
810193fa858b68d4e4ac5ab6de84f17bac496b7c
[ "CC0-1.0" ]
null
null
null
1D_AdvDiff_Stat.ipynb
luiggix/GeoMac_Examen
810193fa858b68d4e4ac5ab6de84f17bac496b7c
[ "CC0-1.0" ]
null
null
null
1D_AdvDiff_Stat.ipynb
luiggix/GeoMac_Examen
810193fa858b68d4e4ac5ab6de84f17bac496b7c
[ "CC0-1.0" ]
null
null
null
25.633846
181
0.476373
[ [ [ "# Geofísica Matemática y Computacional.\n\n## Examen\n\n### 23 de noviembre de 2021\n\nAntes de entregar este *notebook*, asegúrese de que la ejecución se realiza como se espera.\n1. Reinicie el kernel.\n - Para ello seleccione en el menú principal: Kernel$\\rightarrow$Restart.\n2. Llene todos las celdas que indican:\n - `YOUR CODE HERE` o\n - \"YOUR ANSWER HERE\"\n3. Ponga su nombre en la celda siguiente (y el de sus colaboradores si es el caso).\n4. Una vez terminado el ejercicio haga clic en el botón Validate y asegúrese de que no hay ningún error en la ejecución.", "_____no_output_____" ] ], [ [ "NAME = \"\"\nCOLLABORATORS = \"\"", "_____no_output_____" ] ], [ [ "---", "_____no_output_____" ], [ "# Convección-difusión de calor estacionaria\nConsidere el siguiente problema:\n\n$$\n\\begin{eqnarray*}\nc_p \\rho \\frac{\\partial}{\\partial x} \\left( u T \\right) -\n\\frac{\\partial }{\\partial x} \\left( \\kappa \\frac{\\partial T}{\\partial x}\\right) & = &\nS \\\\\nT(0) & = & 1 \\\\\nT(L) & = & 0\n\\end{eqnarray*}\n$$\n\n<img src=\"conv03.png\" width=\"300\" align=\"middle\">\n\nLa solución analítica es la siguiente:\n\n$$\n\\displaystyle\nT(x) = \\frac{\\exp\\left(\\frac{\\rho u x}{\\kappa}\\right) - 1 }{\\exp\\left(\\frac{\\rho v L}{\\kappa}\\right) - 1} (T_L - T_0) + T_0\n$$", "_____no_output_____" ], [ "Implementar la solución numérica con diferencias finitas en Python.\n\nUtilice los siguientes datos:\n\n- $L = 1.0$ [m], \n- $c_p = 1.0$ [J / Kg $^\\text{o}$K], \n- $\\rho = 1.0$ [kg/m$^3$], \n- $\\kappa = 0.1$ [kg/m s], \n- $S = 0$ ", "_____no_output_____" ], [ "## Diferencias Centradas\n1. Realice la implementación usando el esquema de **Diferencias Centradas para el término advectivo** y haga las siguientes pruebas:\n\n 1. $u = 0.1$ [m/s], con $6$ nodos.<br> \n 2. $u = 2.5$ [m/s], con $6$ nodos.<br>\n 3. $u = 2.5$ [m/s], con $N = $ tal que el error sea menor a $0.005$.<br>\n \nEn todos los casos compare la solución numérica con la analítica calculando el error con la fórmula: $E = ||T_a - T_n||_\\infty$. 
Genere figuras similares a las siguientes:\n\n<table>\n <tr>\n <td><img src=\"caso1c.png\" width=\"300\"></td>\n <td><img src=\"caso2c.png\" width=\"300\"></td>\n </tr>\n</table>", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\n# Parámetros para el estilo de las gráficas\nplt.style.use('seaborn-paper')\nparams = {'figure.figsize' : (10,7),\n# 'text.usetex' : True,\n 'xtick.labelsize': 20,\n 'ytick.labelsize': 20,\n 'axes.labelsize' : 24,\n 'axes.titlesize' : 24,\n 'legend.fontsize': 24,\n 'lines.linewidth': 3,\n 'lines.markersize': 10,\n 'grid.color' : 'darkgray',\n 'grid.linewidth' : 0.5,\n 'grid.linestyle' : '--',\n 'font.family': 'DejaVu Serif',\n }\nplt.rcParams.update(params)", "_____no_output_____" ], [ "def mesh(L,N):\n \"\"\"\n Esta función calcula el h y las coordenadas de la malla\n \n Parameters\n ----------\n L : float\n Longitud del dominio.\n \n N : int\n Número de incógnitas (sin las fronteras)\n \n Returns\n -------\n h, x: el tamaño h de la malla y las coordenadas en la dirección x\n \"\"\"\n # YOUR CODE HERE\n raise NotImplementedError()", "_____no_output_____" ], [ "def Laplaciano1D(par):\n \"\"\"\n Esta función calcula los coeficientes de la matriz de \n diferencias finitas.\n \n Paremeters\n ----------\n par: dict\n Diccionario que contiene todos los datos del problema.\n \n Returns\n ----------\n A : la matriz de la discretización.\n \"\"\"\n N = par['N'] \n h = par['h']\n alpha = par['alpha']\n cp = par['cp']\n rho = par['rho']\n u = par['u'] \n\n # YOUR CODE HERE\n raise NotImplementedError()\n \n a = b + c\n A = np.zeros((N,N))\n A[0,0] = a \n A[0,1] = -b\n for i in range(1,N-1):\n A[i,i] = a \n A[i,i+1] = -b \n A[i,i-1] = -c \n A[N-1,N-2] = -c\n A[N-1,N-1] = a\n \n return A", "_____no_output_____" ], [ "def RHS(par):\n \"\"\"\n Esta función calcula el lado derecho del sistema lineal.\n \n Paremeters\n ----------\n par: dict\n Diccionario que contiene todos los datos del problema.\n \n Returns\n ----------\n f : el vector del lado derecho del sistema.\n \"\"\"\n N = par['N'] \n h = par['h']\n alpha = par['alpha']\n cp = par['cp']\n rho = par['rho']\n u = par['u'] \n T0 = par['BC'][0]\n TL = par['BC'][1]\n \n f = np.zeros(N) \n\n # YOUR CODE HERE\n raise NotImplementedError()\n \n f[0] = c * T0 \n f[N-1] = b * TL\n \n return f", "_____no_output_____" ], [ "def plotSol(par, x, T, E):\n \"\"\"\n Función de graficación de la solución analítica y la numérica\n \"\"\"\n titulo = 'u = {}, N = {}'.format(par['u'], par['N'])\n error = '$||E||_2$ = {:10.8f}'.format(E)\n plt.figure(figsize=(10,5))\n plt.title(titulo + ', ' + error)\n plt.scatter(x,T, zorder=5, s=100, fc='C1', ec='k', alpha=0.75, label='Numérica')\n plt.plot(x,T, 'C1--', lw=1.0)\n xa, Ta = analyticSol(par)\n plt.plot(xa,Ta,'k-', label='Analítica')\n plt.xlim(-0.1,1.1)\n plt.ylim(-0.1,1.3)\n plt.xlabel('x [m]')\n plt.ylabel('T[$^o$C]')\n plt.grid()\n plt.legend(loc='lower left')\n plt.show()", "_____no_output_____" ], [ "def analyticSol(par, NP = 100):\n \"\"\"\n Calcula la solución analítica\n \n Paremeters\n ----------\n par: dict\n Diccionario que contiene todos los datos del problema.\n \n NP: int\n Número de puntos para calcular la solución analítica. 
Si no se da\n ningún valor usa 100 puntos para hacer el cálculo.\n \n Returns\n ----------\n xa, Ta : un arreglo (xa) con las coordenadas donde se calcula la \n solución analítica y otro arreglo (Ta) con la solución analítica.\n \"\"\"\n L = par['L']\n rho = par['rho']\n u = par['u']\n alpha = par['alpha']\n T0 = par['BC'][0]\n TL = par['BC'][1]\n\n # YOUR CODE HERE\n raise NotImplementedError()", "_____no_output_____" ], [ "def numSol(par):\n \"\"\"\n Función que calcula la matriz del sistema (A), el lado derecho (f)\n y con esta información resuelve el sistema lineal para obtener la \n solución.\n \n Paremeters\n ----------\n par: dict\n Diccionario que contiene todos los datos del problema.\n \n Returns\n ----------\n T : un arreglo (T) con la solución analítica.\n \"\"\"\n \n # YOUR CODE HERE\n raise NotImplementedError()", "_____no_output_____" ], [ "def error(Ta, Tn):\n \"\"\"\n Función que calcula el error de la solución numérica.\n \n Paremeters\n ----------\n Ta: array\n Arreglo con la solución analítica.\n \n T: array\n Arreglo con la solución numérica.\n \n Returns\n ----------\n E : float\n Error de la solución numérica con respecto a la analítica.\n \"\"\"\n # YOUR CODE HERE\n raise NotImplementedError()", "_____no_output_____" ], [ "def casos(u, N):\n \"\"\"\n Función para resolver cada caso que usa las funciones anteriores.\n\n Paremeters\n ----------\n u: float\n Velocidad.\n \n N: int\n Número de incógnitas.\n \"\"\" \n # Definición de un diccionario para almancenar los datos del problema\n par = {}\n par['L'] = 1.0 # m\n par['cp'] = 1.0 # [J / Kg K]\n par['rho'] = 1.0 # kg/m^3\n par['u'] = u # m/s\n par['alpha'] = 0.1 # kg / m.s\n par['BC'] = (1.0, 0.0) # Condiciones de frontera\n par['N'] = N # Número de incógnitas\n h, x = mesh(par['L'], par['N'])\n par['h'] = h\n\n # Definición del arreglo donde se almacenará la solución numérica\n N = par['N']\n T = np.zeros(N+2)\n T[0] = par['BC'][0] # Condición de frontera en x = 0\n T[N+1] = par['BC'][1] # Condición de frontera en x = L\n\n # Se ejecuta la función para obtener la solución\n T[1:N+1] = numSol(par)\n\n # Se calcula la función para calcular la solución analítica\n _, Ta = analyticSol(par, N+2)\n\n # Se calcula el error\n Error = error(Ta, T)\n\n # Se grafica la solución\n plotSol(par, x, T, Error)", "_____no_output_____" ] ], [ [ "### Caso 1.A.\n- u = 0.1\n- N = 6", "_____no_output_____" ] ], [ [ "# YOUR CODE HERE\nraise NotImplementedError()", "_____no_output_____" ] ], [ [ "### Caso 1.B.\n- u = 2.5\n- N = 6", "_____no_output_____" ] ], [ [ "# YOUR CODE HERE\nraise NotImplementedError()", "_____no_output_____" ] ], [ [ "### Caso 1.C.\n- u = 2.5\n- N = ?", "_____no_output_____" ] ], [ [ "# YOUR CODE HERE\nraise NotImplementedError()", "_____no_output_____" ] ], [ [ "## Upwind\n2. Realice la implementación usando el esquema **Upwind para el término advectivo** y haga las siguientes pruebas:\n\n 1. $u = 0.1$ [m/s], con $6$ nodos.<br> \n 2. $u = 2.5$ [m/s], con $6$ nodos.<br>\n 3. $u = 2.5$ [m/s], con $N = $ tal que el error sea menor a $0.1$.<br>\n \nEn todos los casos compare la solución numérica con la analítica calculando el error con la fórmula: $E = ||T_a - T_n||_\\infty$. 
Genere figuras similares a las siguientes:\n\n<table>\n <tr>\n <td><img src=\"caso1u.png\" width=\"300\"></td>\n <td><img src=\"caso2u.png\" width=\"300\"></td>\n </tr>\n</table>", "_____no_output_____" ] ], [ [ "def Laplaciano1D(par):\n \"\"\"\n Esta función calcula los coeficientes de la matriz de \n diferencias finitas.\n \n Paremeters\n ----------\n par: dict\n Diccionario que contiene todos los datos del problema.\n \n Returns\n ----------\n A : la matriz de la discretización.\n \"\"\"\n N = par['N'] \n h = par['h']\n alpha = par['alpha']\n cp = par['cp']\n rho = par['rho']\n u = par['u'] \n\n # YOUR CODE HERE\n raise NotImplementedError()\n \n a = b + c\n A = np.zeros((N,N))\n A[0,0] = a \n A[0,1] = -b\n for i in range(1,N-1):\n A[i,i] = a \n A[i,i+1] = -b \n A[i,i-1] = -c \n A[N-1,N-2] = -c\n A[N-1,N-1] = a\n \n return A", "_____no_output_____" ], [ "def RHS(par):\n \"\"\"\n Esta función calcula el lado derecho del sistema lineal.\n \n Paremeters\n ----------\n par: dict\n Diccionario que contiene todos los datos del problema.\n \n Returns\n ----------\n f : el vector del lado derecho del sistema.\n \"\"\"\n N = par['N'] \n h = par['h']\n alpha = par['alpha']\n cp = par['cp']\n rho = par['rho']\n u = par['u'] \n T0 = par['BC'][0]\n TL = par['BC'][1]\n \n f = np.zeros(N) \n\n # YOUR CODE HERE\n raise NotImplementedError()\n \n f[0] = c * T0 \n f[N-1] = b * TL\n \n return f", "_____no_output_____" ] ], [ [ "### Caso 2.A.\n- u = 0.1\n- N = 6", "_____no_output_____" ] ], [ [ "# YOUR CODE HERE\nraise NotImplementedError()", "_____no_output_____" ] ], [ [ "### Caso 2.B.\n- u = 2.5\n- N = 6", "_____no_output_____" ] ], [ [ "# YOUR CODE HERE\nraise NotImplementedError()", "_____no_output_____" ] ], [ [ "### Caso 2.C.\n- u = 2.5\n- N = ?", "_____no_output_____" ] ], [ [ "# YOUR CODE HERE\nraise NotImplementedError()", "_____no_output_____" ] ] ]
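For reference, a sketch (not the official exam solution) of the standard interior-node coefficients that the two `Laplaciano1D` scaffolds above expect, written for the node equation `a*T[i] - b*T[i+1] - c*T[i-1] = f[i]` with `a = b + c`. The function names and the `u > 0` restriction are assumptions for illustration:

```python
def central_coefficients(h, alpha, cp, rho, u):
    """Central differences for the advective term.

    This scheme oscillates when the cell Peclet number
    rho*cp*u*h/alpha exceeds 2, which is why case 1.B
    (u = 2.5 with only 6 nodes) misbehaves.
    """
    b = alpha / h**2 - cp * rho * u / (2 * h)  # east-neighbour coefficient
    c = alpha / h**2 + cp * rho * u / (2 * h)  # west-neighbour coefficient
    return b, c


def upwind_coefficients(h, alpha, cp, rho, u):
    """First-order upwind for the advective term, assuming u > 0.

    Monotone for any mesh size, but only first-order accurate,
    hence the looser error target (0.1) in part 2.
    """
    b = alpha / h**2                      # east-neighbour coefficient
    c = alpha / h**2 + cp * rho * u / h   # west-neighbour coefficient
    return b, c
```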
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cbf580a2d3ac76a9599c5a70cb9abfd2cb301f7f
143,013
ipynb
Jupyter Notebook
NeuropynamicsToolboxFirstDraft.ipynb
DiGyt/snippets
30375c2fb3db4bc01b710e22105a67716c98613b
[ "BSD-3-Clause" ]
null
null
null
NeuropynamicsToolboxFirstDraft.ipynb
DiGyt/snippets
30375c2fb3db4bc01b710e22105a67716c98613b
[ "BSD-3-Clause" ]
null
null
null
NeuropynamicsToolboxFirstDraft.ipynb
DiGyt/snippets
30375c2fb3db4bc01b710e22105a67716c98613b
[ "BSD-3-Clause" ]
null
null
null
216.358548
32,206
0.887122
[ [ [ "<a href=\"https://colab.research.google.com/github/DiGyt/snippets/blob/master/NeuropynamicsToolboxFirstDraft.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "BSD 3-Clause License\n\nCopyright (c) 27.07.2020, Dirk Gütlin\n\nAll rights reserved.\n\n", "_____no_output_____" ], [ "# *Simulate biological networks of neurons*\n\n---\n\nThis is a first and very crude idea to build a simple Python Toolbox that should be able to simulate:\n1. Models of Biological Neurons\n2. Models of Neuron Connections (Dendrites/Axons)\n3. Models of Biological Neural Networks, including Neurons and Neuron Connections\n\nThe Toolbox should be modular, easy to use, easily scalabe, and provide meaningful visualizations on all levels.\nThis notebook gives a first example for a possible general workflow of these models.", "_____no_output_____" ], [ "##### Imports\n\nFor now, we only rely on numpy.", "_____no_output_____" ] ], [ [ "import numpy as np", "_____no_output_____" ] ], [ [ "All other imports are only required for plotting.", "_____no_output_____" ] ], [ [ "!pip install mne\nimport mne\n\nimport networkx as nx\nimport matplotlib.pyplot as plt", "Requirement already satisfied: mne in /usr/local/lib/python3.6/dist-packages (0.20.7)\nRequirement already satisfied: numpy>=1.11.3 in /usr/local/lib/python3.6/dist-packages (from mne) (1.18.5)\nRequirement already satisfied: scipy>=0.17.1 in /usr/local/lib/python3.6/dist-packages (from mne) (1.4.1)\n" ] ], [ [ "### Define Neuron, Dendrite, and Network models", "_____no_output_____" ] ], [ [ "class IzhikevichNeuron():\n \"\"\"Implementation of an Izhikevich Neuron.\"\"\"\n\n def __init__(self, dt=0.5, Vmax=35, V0=-70, u0=-14):\n # Initialize starting parameters for our neuron\n self.dt = dt\n self.Vmax = Vmax\n self.V = V0\n self.u = u0\n self.I = 0\n\n def __call__(self, I, a=0.02, b=0.2, c=-65, d=8):\n \"\"\"Simulate one timestep of our Izhikevich Model.\"\"\"\n\n if self.V < self.Vmax: # build up spiking potential\n # calculate the membrane potential\n dv = (0.04 * self.V + 5) * self.V + 140 - self.u\n V = self.V + (dv + self.I) * self.dt\n # calculate the recovery variable\n du = a * (b * self.V - self.u)\n u = self.u + self.dt * du\n \n else: # spiking potential is reached\n V = c\n u = self.u + d\n\n # limit the spikes at Vmax\n V = self.Vmax if V > self.Vmax else V\n\n # assign the t-1 states of the model\n self.V = V\n self.u = u\n self.I = I\n return V\n\n\nclass Dendrite():\n \"\"\"A dendrite-axon model capable of storing multiple action potentials over a\n course of time steps.\"\"\"\n\n def __init__(self, weight=1, temp_delay=1):\n self.weight = weight\n self.temp_delay = temp_delay\n self.action_potentials = []\n\n def __call__(self, ap_input):\n \"\"\"Simulate one time step for this dendrite.\"\"\"\n\n # simulate the next timestep in the dendrite\n new_ap_state = []\n ap_output = 0\n for ap, t in self.action_potentials:\n # if the AP has travelled through the dendrite, return output\n if t == 0:\n ap_output += ap * self.weight\n # else countdown the timesteps for remaining APs in the dendrite\n else:\n new_ap_state.append((ap, t - 1))\n\n self.action_potentials = new_ap_state\n\n # enter a new AP into the dendrite\n if ap_input != 0:\n self.action_potentials.append((ap_input, self.temp_delay))\n\n return ap_output\n\n\nclass BNN():\n \"\"\"A biological neural network connecting multiple neuron models.\"\"\"\n\n def __init__(self, neurons, 
connections):\n self.neurons = neurons\n self.connections = connections\n self.neuron_states = np.zeros(len(neurons))\n\n def __call__(self, inputs=[0]):\n \"\"\"Simulates one timestep in our BNN, while allowing additional external\n input being passed as a list of max length = len(BNN.neurons), where\n one inputs[i] corresponds to an action potential entered into BNN.neurons[i]\n at this timestep.\"\"\"\n\n # add the external inputs to the propagated neuron inputs\n padded_inputs = np.pad(inputs, (0, len(self.neurons) - len(inputs)), 'constant')\n neuron_inputs = self.neuron_states + padded_inputs\n\n # process all the neurons\n #TODO: neuron outputs are atm represented as the deviation from their respective V0 value\n neuron_outputs = [neuron(i) + 70 for neuron, i in zip(self.neurons, neuron_inputs)]\n\n # update the future neuron inputs by propagating them through the connections\n neuron_states = np.zeros(len(self.neurons))\n for (afferent, efferent, connection) in self.connections:\n neuron_states[efferent] += connection(neuron_outputs[afferent])\n # we need to round in order to prevent rounding errors\n neuron_states = np.round(neuron_states, 9)\n self.neuron_states = neuron_states\n\n return neuron_outputs\n\n # TODO: The plotting function is really ugly and should be redone.\n def plot(self, **kwargs):\n \"\"\"A crude way of plotting the network, by transforming it to a networkX graph.\"\"\"\n\n graph = nx.MultiDiGraph()\n graph.add_nodes_from([0, len(self.neurons) - 1])\n graph.add_edges_from([(eff, aff, connection.temp_delay) for aff, eff, connection in self.connections])\n \n pos = nx.circular_layout(graph)\n nx.draw_networkx_nodes(graph, pos, **kwargs)\n ax = plt.gca()\n for e in graph.edges:\n ax.annotate(\"\",\n xy=pos[e[0]], xycoords='data',\n xytext=pos[e[1]], textcoords='data',\n arrowprops=dict(arrowstyle=\"->\", color=\"0.5\",\n shrinkA=10, shrinkB=10,\n patchA=None, patchB=None,\n connectionstyle=\"arc3,rad=rrr\".replace('rrr',str(0.05 + 0.1*e[2])),),)\n plt.axis('off')\n plt.show()\n\n \n\n\n", "_____no_output_____" ] ], [ [ "### Simulate Neurons and Dendrites\n\nStart with simulating an Izhikevich Neuron", "_____no_output_____" ] ], [ [ "# define the neuron\ndelta_time = 0.5 # step size in ms\nneuron = IzhikevichNeuron(dt=0.5)\n\n# define the simulation length (in timesteps of delta_time)\nsim_steps = 1000\ntimes = [t*delta_time for t in range(sim_steps)]\n\n# plot regular spiking\nplt.plot(times, [neuron(I=10) for t in times])\nplt.title(\"Regular Izhikevich Neuron\")\nplt.xlabel(\"Time in ms\")\nplt.ylabel(\"Voltage in mV\")\n\n# plot chattering neuron\nplt.figure()\nplt.plot(times, [neuron(I=10, c=-50, d=2) for t in times])\nplt.title(\"Chattering Izhikevich Neuron\")\nplt.xlabel(\"Time in ms\")\nplt.ylabel(\"Voltage in mV\")\n\n\n# create a single impulse response\ninputs = np.zeros(len(times))\ninputs[200:210] = 10\n\nplt.figure()\nplt.plot(times, [neuron(I=i) for i in inputs])\nplt.title(\"Regular Izhikevich Neuron under single pulse\")\nplt.xlabel(\"Time in ms\")\nplt.ylabel(\"Voltage in mV\")", "_____no_output_____" ] ], [ [ "Get an intuition for how APs travel along the Dendrite", "_____no_output_____" ] ], [ [ "dendrite = Dendrite(weight=1, temp_delay=5)\n\n# print out the APs travelling through our dendrite\nfor ap in [0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 2, 2, 3, 3, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]:\n output = dendrite(ap)\n print(output, dendrite.action_potentials)", "0 []\n0 []\n0 [(1, 5)]\n0 [(1, 4), (1, 5)]\n0 [(1, 3), (1, 
4), (1, 5)]\n0 [(1, 2), (1, 3), (1, 4), (1, 5)]\n0 [(1, 1), (1, 2), (1, 3), (1, 4)]\n0 [(1, 0), (1, 1), (1, 2), (1, 3)]\n1 [(1, 0), (1, 1), (1, 2)]\n1 [(1, 0), (1, 1)]\n1 [(1, 0), (1, 5)]\n1 [(1, 4)]\n0 [(1, 3), (1, 5)]\n0 [(1, 2), (1, 4)]\n0 [(1, 1), (1, 3)]\n0 [(1, 0), (1, 2)]\n1 [(1, 1)]\n0 [(1, 0)]\n1 [(1, 5)]\n0 [(1, 4), (1, 5)]\n0 [(1, 3), (1, 4), (2, 5)]\n0 [(1, 2), (1, 3), (2, 4), (2, 5)]\n0 [(1, 1), (1, 2), (2, 3), (2, 4), (3, 5)]\n0 [(1, 0), (1, 1), (2, 2), (2, 3), (3, 4), (3, 5)]\n1 [(1, 0), (2, 1), (2, 2), (3, 3), (3, 4), (2, 5)]\n1 [(2, 0), (2, 1), (3, 2), (3, 3), (2, 4), (1, 5)]\n2 [(2, 0), (3, 1), (3, 2), (2, 3), (1, 4)]\n2 [(3, 0), (3, 1), (2, 2), (1, 3)]\n3 [(3, 0), (2, 1), (1, 2)]\n3 [(2, 0), (1, 1)]\n2 [(1, 0)]\n1 []\n0 []\n0 []\n0 []\n0 []\n0 []\n" ] ], [ [ "### Simulate a full network\n\nFirst, we generate a Network Model and visualize it.", "_____no_output_____" ] ], [ [ "# define a network model, created from 5 connected Izhikevich Neurons\nbnn = BNN(neurons=[IzhikevichNeuron() for i in range(5)],\n connections=[(0, 1, Dendrite()),\n (0, 2, Dendrite(weight=0.5)),\n (0, 3, Dendrite(temp_delay=3)),\n (0, 4, Dendrite(temp_delay=5)),\n (1, 2, Dendrite(weight=0.8, temp_delay=4)),\n (2, 3, Dendrite(temp_delay=1)),\n (2, 3, Dendrite(temp_delay=4)),\n (3, 2, Dendrite(weight=0.3, temp_delay=3))])\n\nbnn.plot()", "_____no_output_____" ] ], [ [ "Run the network without any inputs for 1000 timesteps.\nOptimally, the neurons should show no outputs.", "_____no_output_____" ] ], [ [ "timesteps=1000\nstate_log = np.empty([len(bnn.neurons), timesteps])\nfor i in range(timesteps):\n neuron_states = bnn()\n state_log[:, i] = neuron_states", "_____no_output_____" ] ], [ [ "Plot the outputs of each neuron, using the MNE Toolbox for neurophysiological data analysis.", "_____no_output_____" ] ], [ [ "info = mne.create_info(ch_names=[\"Neuron0\", \"Neuron1\",\n \"Neuron2\", \"Neuron3\",\n \"Neuron4\"], sfreq=1000/0.5)\ndata = mne.io.RawArray(state_log, info)\nd = data.plot(scalings=dict(misc=1e2))", "Creating RawArray with float64 data, n_channels=5, n_times=1000\n Range : 0 ... 999 = 0.000 ... 0.499 secs\nReady.\n" ] ], [ [ "Simulate the data again, but inducing a short voltage burst of 10 mV into the first Neuron after 100 ms", "_____no_output_____" ] ], [ [ "# give input to the first neuron\nn_times = 2000\ninputs = np.zeros([n_times, 2])\ninputs[200:220] = 10\n\nstate_log = np.empty([len(bnn.neurons), n_times])\nfor ind, current in enumerate(inputs):\n neuron_states = bnn(current)\n state_log[:, ind] = neuron_states\n\n# plot it\ninfo = mne.create_info(ch_names=[\"Neuron0\", \"Neuron1\",\n \"Neuron2\", \"Neuron3\",\n \"Neuron4\"], sfreq=1000/0.5)\ndata = mne.io.RawArray(state_log, info)\nd = data.plot(scalings=dict(misc=1e2))", "Creating RawArray with float64 data, n_channels=5, n_times=2000\n Range : 0 ... 1999 = 0.000 ... 1.000 secs\nReady.\n" ] ] ]
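The `(a, b, c, d)` arguments of the Izhikevich model select the firing regime, so other canonical behaviours can be reproduced with the class defined above without modifying it. A short usage sketch for a fast-spiking interneuron; the parameter values follow Izhikevich (2003) and are illustrative:

```python
import matplotlib.pyplot as plt

# Fast-spiking interneuron: a = 0.1 makes the recovery variable u fast,
# so the neuron fires at high rates with little spike-frequency adaptation.
fs_neuron = IzhikevichNeuron(dt=0.5)
times = [t * 0.5 for t in range(1000)]
trace = [fs_neuron(I=10, a=0.1, b=0.2, c=-65, d=2) for _ in times]

plt.plot(times, trace)
plt.title("Fast-Spiking Izhikevich Neuron")
plt.xlabel("Time in ms")
plt.ylabel("Voltage in mV")
plt.show()
```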
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cbf59bd53411b2d804e0b6274417447aff2942ed
4,750
ipynb
Jupyter Notebook
notebooks/linear_models_ex_04.ipynb
mehrdad-dev/scikit-learn-mooc
9c03fb14784ab447a2477039c07f8e8a0d191742
[ "CC-BY-4.0" ]
null
null
null
notebooks/linear_models_ex_04.ipynb
mehrdad-dev/scikit-learn-mooc
9c03fb14784ab447a2477039c07f8e8a0d191742
[ "CC-BY-4.0" ]
null
null
null
notebooks/linear_models_ex_04.ipynb
mehrdad-dev/scikit-learn-mooc
9c03fb14784ab447a2477039c07f8e8a0d191742
[ "CC-BY-4.0" ]
null
null
null
24.611399
86
0.586105
[ [ [ "# 📝 Exercise 04\n\nIn the previous notebook, we saw the effect of applying some regularization\non the coefficient of a linear model.\n\nIn this exercise, we will study the advantage of using some regularization\nwhen dealing with correlated features.\n\nWe will first create a regression dataset. This dataset will contain 2,000\nsamples and 5 features from which only 2 features will be informative.", "_____no_output_____" ] ], [ [ "from sklearn.datasets import make_regression\n\ndata, target, coef = make_regression(\n n_samples=2_000, n_features=5, n_informative=2, shuffle=False,\n coef=True, random_state=0, noise=30,\n)", "_____no_output_____" ] ], [ [ "When creating the dataset, `make_regression` returns the true coefficient\nused to generate the dataset. Let's plot this information.", "_____no_output_____" ] ], [ [ "import pandas as pd\n\nfeature_names = [f\"Features {i}\" for i in range(data.shape[1])]\ncoef = pd.Series(coef, index=feature_names)\ncoef.plot.barh()\ncoef", "_____no_output_____" ] ], [ [ "Create a `LinearRegression` regressor and fit on the entire dataset and\ncheck the value of the coefficients. Are the coefficients of the linear\nregressor close to the coefficients used to generate the dataset?", "_____no_output_____" ] ], [ [ "from sklearn.linear_model import LinearRegression\n\nlinear_regression = LinearRegression()\nlinear_regression.fit(data, target)\nlinear_regression.coef_", "_____no_output_____" ], [ "feature_names = [f\"Features {i}\" for i in range(data.shape[1])]\ncoef = pd.Series(linear_regression.coef_, index=feature_names)\n_ = coef.plot.barh()", "_____no_output_____" ] ], [ [ "We see that the coefficients are close to the coefficients used to generate\nthe dataset. The dispersion is indeed cause by the noise injected during the\ndataset generation.", "_____no_output_____" ], [ "Now, create a new dataset that will be the same as `data` with 4 additional\ncolumns that will repeat twice features 0 and 1. This procedure will create\nperfectly correlated features.", "_____no_output_____" ] ], [ [ "# Write your code here.", "_____no_output_____" ] ], [ [ "Fit again the linear regressor on this new dataset and check the\ncoefficients. What do you observe?", "_____no_output_____" ] ], [ [ "# Write your code here.", "_____no_output_____" ] ], [ [ "Create a ridge regressor and fit on the same dataset. Check the coefficients.\nWhat do you observe?", "_____no_output_____" ] ], [ [ "# Write your code here.", "_____no_output_____" ] ], [ [ "Can you find the relationship between the ridge coefficients and the original\ncoefficients?", "_____no_output_____" ] ], [ [ "# Write your code here.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cbf5a061ef23042dec9d442e6b01a412c967a923
11,683
ipynb
Jupyter Notebook
src/documentation/Language_Module.ipynb
aadithpm/sos-docs
141831a152f3b4d5e30bc9c645baf4a9d3ab249b
[ "MIT" ]
null
null
null
src/documentation/Language_Module.ipynb
aadithpm/sos-docs
141831a152f3b4d5e30bc9c645baf4a9d3ab249b
[ "MIT" ]
null
null
null
src/documentation/Language_Module.ipynb
aadithpm/sos-docs
141831a152f3b4d5e30bc9c645baf4a9d3ab249b
[ "MIT" ]
null
null
null
49.92735
464
0.669177
[ [ [ "# Add a new language to SoS", "_____no_output_____" ], [ "It is relatively easy to define a new language module to allow SoS to exchange variables with a kernel. To make the extension available to other users, you will need to create a package with proper entry points. Please check documentation on [`Extending SoS`](Extending_SoS.html) for details.", "_____no_output_____" ], [ "SoS needs to know a few things before it can support a language properly,\n\n1. The Jupyter kernel this language uses to work with Jupyer, which is a `ir` kernel for language `R`.\n2. How to translate a Python object to a **similar** object in this language\n3. How to translate an object in this language to a **similar** object in Python.\n4. The color of the prompt of cells executed by this language.\n5. (Optional but recommend). Information of a running session.\n6. Optional options for interacting with the language on frontend.\n\nIt is important to understand that, **SoS does not tranfer any variables among kernels, it creates independent homonymous variables of similar types that are native to the destination language**. For example, for the following two variables\n\n```\na = 1\nb = c(1, 2)\n```\n\nin R, SoS execute the following statements to create variables `a` and `b` in Python\n\n```\na = 1\nb = [1, 2]\n```\nNote that `a` and `b` are of different types in Python although they are of the same type `numeric` in `R`.", "_____no_output_____" ], [ "## Define a new language Module", "_____no_output_____" ], [ "To support a new language, you will need to write a Python package that defines a class, say `mylanguage`, that should provide the following class attributes:\n\n1. `supported_kernels`: a dictionary of language and names of the kernels that the language supports, such as `{'R': ['ir']}`. If multiple kernels are supported, SoS will look for a kernel with matched name in the order that is specified (e.g. `{'JavaScript': ['ijavascript', 'inodejs']}`). Multiple languages can be specified if a language module supports multiple languages (e.g. `Matlab` and `Octave`).\n2. `background_color`: a name or `#XXXXXX` value for a color that will be used in the prompt area of cells that are executed by the subkernel. An empty string can be used for using default notebook color. If the language module defines multiple languages, a dictionary `{language: color}` can be used to specify different colors for supported languages.\n3. `cd_command`: A command to change current working directory, specified with `{dir}` intepolated with option of magic `%cd`. For example, the command for R is `'setwd({dir!r})'` where `!r` quotes the provided `dir`.\n4. `options`: A Python dictionary with options that will be passed to the frontend. Currently two options `variable_pattern` and `assignment_pattern` are supported. Both options should be regular expressions in JS style. \n * Option `variable_pattern` is used to identify if a statement is a simple variable (nothing else). If this option is defined and the input text (if executed at the side panel) matches the pattern, SoS will prepend `%preview` to the code. This option is useful only when `%preview var` displays more information than `var`.\n * Option `assignment_pattern` is used to identify if a statement is an assignment operation. If this option is defined and the input text matches the pattern, SoS will prepend `%preview var` to the code where `var` should be the first matched portion of the pattern (use `( )`). 
This mechanism allows SoS to automatically display result of an assignment when you step through the code.\n \nAn instance of the class would be initialized with the sos kernel and the name of the subkernel, which does not have to be one of the `supported_kernels` (could be self-defined) and should provide the following attributes and functions:\n\n1. `init_statements`: a statement that will be executed by the sub-kernel when the kernel starts. This statement usually defines functions to convert object to Python.\n2. `get_vars`: a Python function that transfer a Python variable to the subkernel.\n3. `put_vars`: a Python function that put one or more variables in the subkernel to SoS or another subkernel.\n4. `sessioninfo`: a Python function that returns information of the running kernel, usually including version of the language, the kernel, and currently used packages and their versions. For `R`, this means a call to `sessionInfo()` function. The return value of this function can be a string, a list of strings or `(key, value)` pairs, or a dictinary. The function will be called by the `%sessioninfo` magic of SoS.\n", "_____no_output_____" ], [ "## Obtain variable from SoS\n\nThe `get_vars` function should be defined as\n\n```\ndef get_vars(self, var_names)\n```\nwhere \n\n* `self` is the language instance with access to the SoS kernel, and\n* `var_names` are names in the sos dictionary.\n\nThis function is responsible for probing the type of Python variable and create a similar object in the subkernel.", "_____no_output_____" ], [ "For example, to create a Python object `b = [1, 2]` in `R` (magic `%get`), this function could\n\n1. Obtain a R expression to create this variable (e.g. `b <- c(1, 2)`)\n2. Execute the expression in the subkernel to create variable `b` in it.\n\nNote that the function `get_vars` can change the variable name because a valid variable name in Python might not be a valid variable name in another language. The function should give a warning if this happens.", "_____no_output_____" ], [ "## Send variables to other kernels\n\nThe `put_vars` function should be defined as\n\n```\ndef put_vars(self, var_names, to_kernel=None)\n```\nwhere\n\n1. `self` is the language instance with access to the SoS kernel\n2. `var_name` is a list of variables that should exist in the subkernel. Because a subkernel is responsible for sharing variables with names starting with `sos` to SoS automatically, this function should be called to pass these variables even when `var_names` is empty.\n3. `to_kernel` is the destination kernel to which the variables should be passed.\n\nDepending on destination kernel, this function can:\n\n* If the destination kernel is `sos`, the function should return a dictionary of variables that will be merged to the SoS dictionary.\n* If direct variable transfer is not supported by the language, the function can return a Python dictionary, in which case the language transfers the variables to SoS and let SoS pass along to the destination kernel.\n* If direct variable transfer is supported, the function should return a string. SoS will evaluate the string in the destination kernel to pass variables directly to the destination kernel.\n \nSo basically, a language can start with an implementation of `put_vars(to_kernel='sos')` and let SoS handle the rest. If need arises, it can\n\n* Implement variable exchanges between instances of the same language. 
This can be useful because there are usually lossness and more efficient methods in this case.\n* Put variable to another languages where direct varable transfer is much more efficient than transferring through SoS.", "_____no_output_____" ], [ "For example, to send a `R` object `b <- c(1, 2)` from subkernel `R` to `SoS` (magic `%put`), this function can\n\n1. Execute an statement in the subkernel to get the value(s) of variable(s) in some format, for example, a string `\"{'b': [1, 2]}\"`.\n2. Post-process these varibles to return a dictionary to SoS.\n\nThe [`R` sos extension](https://github.com/vatlab/SOS/blob/master/src/sos/R/kernel.py) provides a good example to get you started.", "_____no_output_____" ], [ "**NOTE**: Unlike other language extension mechanisms in which the python module can get hold of the \"engine\" of the interpreter (e.g. `saspy` and matlab's Python extension start the interpreter for direct communication) or have access to lower level API of the language (e.g. `rpy2`), SoS only have access to the interface of the language and perform all conversions by executing commands in the subkernels and intercepting their response. Consequently,\n\n1. Data exchange can be slower than other methods.\n2. Data exchange is less dependent on version of the interpreter.\n2. Data exchange can happen between a local and a remote kernel.\n\nAlso, although it can be more efficient to save large datasets to disk files and load in another kernel, this method does not work for kernels that do not share the same filesystem. We currently ignore this issue and assume all kernels have access to the same file system.", "_____no_output_____" ], [ "## Registering the new language module", "_____no_output_____" ], [ "To register additional language modules with SoS, you will need to add your modules to section `sos-language` and other relevant sections of entry points of `setup.py`. For example, you can create a package with the following entry_points to provide support for ruby.\n\n```\n entry_points='''\n[sos-language]\nruby = sos_ruby.kernel:sos_ruby\n\n[sos-targets]\nRuby_Library = sos_ruby.target:Ruby-Library\n'''\n```\n\nWith the installation of this package, `sos` would be able to obtain a class `sos_ruby` from module `sos_ruby.kernel`, and use it to work with the `ruby` language.", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
cbf5c271f626150ac10c4e6246e290e55ddae27a
5,942
ipynb
Jupyter Notebook
python-tuts/0-beginner/3-Numeric-Types/12 - Decimals - Performance Considerations.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
352dd6d9a785e22fde0ce53a6b0c2e56f4964950
[ "Apache-2.0" ]
3,266
2017-08-06T16:51:46.000Z
2022-03-30T07:34:24.000Z
python-tuts/0-beginner/3-Numeric-Types/12 - Decimals - Performance Considerations.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
352dd6d9a785e22fde0ce53a6b0c2e56f4964950
[ "Apache-2.0" ]
150
2017-08-28T14:59:36.000Z
2022-03-11T23:21:35.000Z
python-tuts/0-beginner/3-Numeric-Types/12 - Decimals - Performance Considerations.ipynb
AadityaGupta/Artificial-Intelligence-Deep-Learning-Machine-Learning-Tutorials
352dd6d9a785e22fde0ce53a6b0c2e56f4964950
[ "Apache-2.0" ]
1,449
2017-08-06T17:40:59.000Z
2022-03-31T12:03:24.000Z
19.106109
110
0.479468
[ [ [ "### Decimals: Performance Considerations", "_____no_output_____" ], [ "#### Memory Footprint", "_____no_output_____" ], [ "Decimals take up a lot more memory than floats.", "_____no_output_____" ] ], [ [ "import sys\nfrom decimal import Decimal", "_____no_output_____" ], [ "a = 3.1415\nb = Decimal('3.1415')", "_____no_output_____" ], [ "sys.getsizeof(a)", "_____no_output_____" ] ], [ [ "24 bytes are used to store the float 3.1415", "_____no_output_____" ] ], [ [ "sys.getsizeof(b)", "_____no_output_____" ] ], [ [ "104 bytes are used to store the Decimal 3.1415", "_____no_output_____" ], [ "#### Computational Performance", "_____no_output_____" ], [ "Decimal arithmetic is also much slower than float arithmetic (on a CPU, and even more so if using a GPU)", "_____no_output_____" ], [ "We can do some rough timings to illustrate this.", "_____no_output_____" ], [ "First we look at the performance difference creating floats vs decimals:", "_____no_output_____" ] ], [ [ "import time\nfrom decimal import Decimal\n\ndef run_float(n=1):\n for i in range(n):\n a = 3.1415\n \ndef run_decimal(n=1):\n for i in range(n):\n a = Decimal('3.1415')\n", "_____no_output_____" ] ], [ [ "Timing float and Decimal operations:", "_____no_output_____" ] ], [ [ "n = 10000000", "_____no_output_____" ], [ "start = time.perf_counter()\nrun_float(n)\nend = time.perf_counter()\nprint('float: ', end-start)\n\nstart = time.perf_counter()\nrun_decimal(n)\nend = time.perf_counter()\nprint('decimal: ', end-start)", "float: 0.21406484986433047\ndecimal: 2.1353148079910156\n" ] ], [ [ "We make a slight variant here to see how addition compares between the two types:", "_____no_output_____" ] ], [ [ "def run_float(n=1):\n a = 3.1415\n for i in range(n):\n a + a\n \ndef run_decimal(n=1):\n a = Decimal('3.1415')\n for i in range(n):\n a + a\n \nstart = time.perf_counter()\nrun_float(n)\nend = time.perf_counter()\nprint('float: ', end-start)\n\nstart = time.perf_counter()\nrun_decimal(n)\nend = time.perf_counter()\nprint('decimal: ', end-start)", "float: 0.1875864573764936\ndecimal: 0.3911394302055555\n" ] ], [ [ "How about square roots:\n\n(We drop the n count a bit)", "_____no_output_____" ] ], [ [ "n = 5000000\n\nimport math\n\ndef run_float(n=1):\n a = 3.1415\n for i in range(n):\n math.sqrt(a)\n \ndef run_decimal(n=1):\n a = Decimal('3.1415')\n for i in range(n):\n a.sqrt()\n \nstart = time.perf_counter()\nrun_float(n)\nend = time.perf_counter()\nprint('float: ', end-start)\n\nstart = time.perf_counter()\nrun_decimal(n)\nend = time.perf_counter()\nprint('decimal: ', end-start)", "float: 0.673833850211659\ndecimal: 14.73112183459776\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cbf5c55a8fa6bccb7bf693735f03429b142c006e
2,194
ipynb
Jupyter Notebook
Euler 072 - Counting fractions.ipynb
Radcliffe/project-euler
5eb0c56e2bd523f3dc5329adb2fbbaf657e7fa38
[ "MIT" ]
6
2016-05-11T18:55:35.000Z
2019-12-27T21:38:43.000Z
Euler 072 - Counting fractions.ipynb
Radcliffe/project-euler
5eb0c56e2bd523f3dc5329adb2fbbaf657e7fa38
[ "MIT" ]
null
null
null
Euler 072 - Counting fractions.ipynb
Radcliffe/project-euler
5eb0c56e2bd523f3dc5329adb2fbbaf657e7fa38
[ "MIT" ]
null
null
null
24.651685
165
0.476299
[ [ [ " \nEuler Problem 72\n================\n\nConsider the fraction, n/d, where n and d are positive integers. If n < d and HCF(n,d)=1, it is called a reduced proper fraction.\n\nIf we list the set of reduced proper fractions for d ≤ 8 in ascending order of size, we get:\n\n1/8, 1/7, 1/6, 1/5, 1/4, 2/7, 1/3, 3/8, 2/5, 3/7, 1/2, 4/7, 3/5, 5/8, 2/3, 5/7, 3/4, 4/5, 5/6, 6/7, 7/8\n\nIt can be seen that there are 21 elements in this set.\n\nHow many elements would be contained in the set of reduced proper fractions for d ≤ 1,000,000?", "_____no_output_____" ] ], [ [ "N = 1000001\nphi = [0] * N\nphi[1] = 1\nprimefactor = [0] * N\n\nfor p in range(2, N):\n if primefactor[p] == 0:\n for n in range(p, N, p):\n primefactor[n] = p\n \nfor n in range(2, N):\n q = primefactor[n]\n m = n // q\n power = 1\n while primefactor[m] == q:\n m //= q\n power *= q\n phi[n] = phi[m] * power * (q-1)\n \nprint(sum(phi))\n ", "303963552392\n" ] ], [ [ "**Explanation:** The number of reduced fractions with denominator $n$ is $\\phi(n)$ (Euler's totient function) so the answer is $\\sum_{k \\le 10^6} \\phi(k)$.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ] ]
cbf5c5d689b0de3f4628de88cabb312561c08f9f
7,179
ipynb
Jupyter Notebook
jupyter_book/book_template/content/01/3/Plotting_the_Classics.ipynb
mbbroberg/jupyter-book
c37959a8ae8923fafecb64c46e95c026b3739cac
[ "BSD-3-Clause" ]
1
2019-12-16T16:03:47.000Z
2019-12-16T16:03:47.000Z
jupyter_book/book_template/content/01/3/Plotting_the_Classics.ipynb
mbbroberg/jupyter-book
c37959a8ae8923fafecb64c46e95c026b3739cac
[ "BSD-3-Clause" ]
3
2020-04-21T17:06:36.000Z
2021-09-28T00:53:55.000Z
jupyter_book/book_template/content/01/3/Plotting_the_Classics.ipynb
agdiallo/agd-jupyter
005af5788e70ef2609c161b50cde11fa88f4b882
[ "BSD-3-Clause" ]
null
null
null
38.805405
573
0.562474
[ [ [ "from datascience import *\nfrom datascience.predicates import are\npath_data = '../../../data/'\nimport numpy as np\nimport matplotlib\nmatplotlib.use('Agg', warn=False)\n%matplotlib inline\nimport matplotlib.pyplot as plots\nplots.style.use('fivethirtyeight')\nimport warnings\nwarnings.simplefilter(action=\"ignore\", category=FutureWarning)\n\nfrom urllib.request import urlopen \nimport re\ndef read_url(url): \n return re.sub('\\\\s+', ' ', urlopen(url).read().decode())", "_____no_output_____" ] ], [ [ "# Plotting the classics", "_____no_output_____" ], [ "In this example, we will explore statistics for two classic novels: *The Adventures of Huckleberry Finn* by Mark Twain, and *Little Women* by Louisa May Alcott. The text of any book can be read by a computer at great speed. Books published before 1923 are currently in the *public domain*, meaning that everyone has the right to copy or use the text in any way. [Project Gutenberg](http://www.gutenberg.org/) is a website that publishes public domain books online. Using Python, we can load the text of these books directly from the web.\n\nThis example is meant to illustrate some of the broad themes of this text. Don't worry if the details of the program don't yet make sense. Instead, focus on interpreting the images generated below. Later sections of the text will describe most of the features of the Python programming language used below.\n\nFirst, we read the text of both books into lists of chapters, called `huck_finn_chapters` and `little_women_chapters`. In Python, a name cannot contain any spaces, and so we will often use an underscore `_` to stand in for a space. The `=` in the lines below give a name on the left to the result of some computation described on the right. A *uniform resource locator* or *URL* is an address on the Internet for some content; in this case, the text of a book. The `#` symbol starts a comment, which is ignored by the computer but helpful for people reading the code.", "_____no_output_____" ] ], [ [ "# Read two books, fast!\n\nhuck_finn_url = 'https://www.inferentialthinking.com/chapters/01/3/huck_finn.txt'\nhuck_finn_text = read_url(huck_finn_url)\nhuck_finn_chapters = huck_finn_text.split('CHAPTER ')[44:]\n\nlittle_women_url = 'https://www.inferentialthinking.com/chapters/01/3/little_women.txt'\nlittle_women_text = read_url(little_women_url)\nlittle_women_chapters = little_women_text.split('CHAPTER ')[1:]", "_____no_output_____" ] ], [ [ "While a computer cannot understand the text of a book, it can provide us with some insight into the structure of the text. The name `huck_finn_chapters` is currently bound to a list of all the chapters in the book. We can place them into a table to see how each chapter begins.", "_____no_output_____" ] ], [ [ "# Display the chapters of Huckleberry Finn in a table.\n\nTable().with_column('Chapters', huck_finn_chapters)", "_____no_output_____" ] ], [ [ "Each chapter begins with a chapter number in Roman numerals, followed by the first sentence of the chapter. Project Gutenberg has printed the first word of each chapter in upper case. ", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
cbf5c8e92012964e09b7f448cfd475deb1267a8b
604,869
ipynb
Jupyter Notebook
MA477 - Theory and Applications of Data Science/Lessons/Lesson 1 - General Overview/Lesson 1 -- General Overview.ipynb
jkstarling/MA477-copy
67c0d3da587f167d10f2a72700500408704360ad
[ "MIT" ]
null
null
null
MA477 - Theory and Applications of Data Science/Lessons/Lesson 1 - General Overview/Lesson 1 -- General Overview.ipynb
jkstarling/MA477-copy
67c0d3da587f167d10f2a72700500408704360ad
[ "MIT" ]
null
null
null
MA477 - Theory and Applications of Data Science/Lessons/Lesson 1 - General Overview/Lesson 1 -- General Overview.ipynb
jkstarling/MA477-copy
67c0d3da587f167d10f2a72700500408704360ad
[ "MIT" ]
2
2020-01-13T14:01:56.000Z
2020-11-10T15:16:03.000Z
606.689067
191,268
0.940066
[ [ [ "<h2> ====================================================</h2>\n <h1>MA477 - Theory and Applications of Data Science</h1> \n <h1>Lesson 1: General Overview</h1> \n \n <h4>Dr. Valmir Bucaj</h4>\n <br>\n United States Military Academy, West Point, AY20-2\n<h2>=====================================================</h2>", "_____no_output_____" ], [ "<h2> Lecture Outline</h2>\n<html>\n<ol>\n \n <li><b>Notation</b></li>\n <br>\n <li> <b>What is Machine/Statistical Learning?</b></li>\n <br>\n <li><b> Why's and How's of Estimating the Relationship Between <font color='red'> Predictors </font> and <font color='red'> Response</font></b></li>\n <br>\n <li> <b> Prediction Accuracy and Model Interpretability Trade-Off</b></li>\n <br>\n <li><b> Supervised vs. Unsupervised Models</b></li>\n <br>\n <li><b> Regression vs. Classification Models </b></li>\n <br>\n <li><b> Assesing Model Accuracy</b>\n <ol>\n <li> Mean Squared Error (MSE)</li>\n <li> Confusion Matrix</li>\n <li> ROC Curve </li>\n <li> Cross-Validation</li> \n </ol>\n </li>\n <br>\n<li><b> Bias-Variance Trade-Off</b></li>\n</ol>\n\n <hr>\n \n <hr>", "_____no_output_____" ], [ "<h2>Notation</h2>\n\n\n<ul>\n\n <li> $X$: predictors, features, independent variables</li>\n <li> $Y$: response, target, dependent variable</li>\n <li> $p$: number of predictors</li>\n <li> $n$: number of samples </li> \n\n</ul>\n\n\n<br>\n<h2>What is Machine/Statistical Learning?</h2>\n<br>\nWe will use the terms <i> Statistical Learning</i> and <i> Machine Learning</i> interchangeably.\n \n<ul> \n \n<li>Roughly spekaing, Machine Learning refers to a set of methods for estimating the systematic information that the <i> predictors/features</i>, denoted by $X$, provide about the <i> response</i>, denoted by $Y$.</li>\n \n<li> Equivalently: it is a set of approaches for estimating the relationship between the <i> predictor variables </i> and the <i> response variable</i></li>\n\n<br>\nSpecifically, suppose that we observe some quantitative response $Y$ and collect $p$ different features $X_1,\\dots, X_p$ that we believe to be related to the response $Y$. Letting $X=(X_1,\\dots, X_p)$ then we have \n \n$$Y=f(X)+\\epsilon$$\n \n where $f$ represents the relationship or systematic information that the predictors $X$ provide about the response $Y$, and $\\epsilon$ represents some random error term <b>independent</b> of $X-$ this stems from the fact that $Y$ may depend on other factors that are not among the $p$ features $X$. \n<br>\n\nSo, roughly speaking, Machine Learning refers to all the different methods of estimating this $f$.\n<br>\n \nWe will illustrate this with some examples below.\n\n", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom mpl_toolkits import mplot3d\nimport matplotlib.pyplot as plt\n%matplotlib inline", "_____no_output_____" ] ], [ [ "<h3> Mock Example 1</h3>\n\nLet\n\n<ul>\n <li>$Y$: be yearly income</li>\n <li> $X_1$: years of education post fith grade</li>\n <li>$X_2$: height of the person</li>\n </ul>\n\nSuppose we are interested in \n\n<ul>\n <li>Predicting $Y$ based on $X=(X_1,X_2)$ and\n <li> Understading how each of $X_1$ and $X_2$ is related to and affects $Y$. 
\n</ul>\n\n<b>Remark:</b> Don't worry about the code for now.\n", "_____no_output_____" ] ], [ [ "# x=np.linspace(4,17,40)\n# y=100/(1+np.exp(-x+10))+40\n# y_noise=np.random.normal(0,15,40)\n# x2=np.linspace(4,6.7,40)\n# np.random.shuffle(x2)\n# y_out=y+y_noise", "_____no_output_____" ], [ "# income=pd.DataFrame({\"X1\":x.round(3), 'X2':x2.round(3), 'Y':y_out})", "_____no_output_____" ], [ "#This is what the information about the first 10 people look like\nincome.head()", "_____no_output_____" ], [ "plt.figure(figsize=(10,6))\nplt.scatter(income['X1'],income['Y'],edgecolor='red',c='red',s=15)\nplt.plot(x,y, label='True f')\nplt.xlabel(\"Years of Education\",fontsize=14)\nplt.ylabel(\"Yearly Income\",fontsize=14)\nplt.legend(loc=2)\nplt.show()", "_____no_output_____" ] ], [ [ "Since, in this mock case, the <b>Income</b> is a simulated data set, we know precisely how years of education is related to the yearly income. In other words, we know exactly what the function $f$ is (the blue curve above). However, \nin practice $f$ is not known, and our goal will be to find a good estimate $\\widehat f$ of $f$.\n<br>\n\nAnother important question that one often wants to answer in practice is which features are most strongly related to the response and would they make a good predictor? Are there any features that seem not to carry any information about $Y$ at all? If so, which? etc. etc. etc.\n\nFor example, in our mock case, <b> height</b> seems not to carry any information regarding the persons yearly income(it would be weird if it did!). See the plot beow.", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(10,6))\nplt.scatter(income['X2'],y_out, c='red', s=15)\nplt.xlabel('Height',fontsize=14)\nplt.ylabel(\"Yearly Income\",fontsize=14)\nplt.show()", "_____no_output_____" ], [ "fig = plt.figure(figsize=(12,8))\nax = plt.axes(projection=\"3d\")\n\ndef z_function(x1, x2):\n return 10*x1+8*x2-100\n\nx1=np.linspace(5,17,40)\nx2=np.linspace(4,6,40)\n\nnp.random.shuffle(x2)\n\nX, Y = np.meshgrid(x1, x2)\nZ = z_function(X, Y)\n\nY_target=100/(1+np.exp(-x+10))+30+np.random.normal(0,20,40)+np.sqrt(x2)\n\n\nax.plot_wireframe(X,Y,Z)\n\nax.plot_wireframe(X, Y, Z, color='green')\nax.set_xlabel('Years of Education',fontsize=14)\nax.set_ylabel('Height',fontsize=14)\nax.set_zlabel('Yearly Income',fontsize=14)\n\nfor i in range(len(x1)):\n \n ax.plot([x1[i]],[x2[i]],[Y_target[i]],marker='o',color='r')\n\nax.set_xlabel('Years of Education',fontsize=14)\nax.set_ylabel('Height',fontsize=14)\nax.set_zlabel('Yearly Income',fontsize=14)\n\nax.view_init(10,245)\nplt.show()\n\n", "_____no_output_____" ] ], [ [ "<h2> Why do we want to estimate $f$ ?</h2>\n\n\n Typically there are two main reasons why it is of interest to estimate $f$: <b> prediction</b> and <b>inference</b>.\n \n<ul>\n \n <li><h3>Prediction</h3></li>\n \n In many situations we can get a hold of the features for a particular target, but obtaining the value of the target variable is difficult and often impossible.\n \nFor example, imagine you want to know whether a patient will have a severe adverse reaction to a particular drug. One, albeit very undesirable, way to figure that out is by administering the drug and observing the effect. However, if the patient has an adverse reaction which may cause damges, the hospital is liable for a lawsuit etc. So, you want to figure out a way to determine if the patient will have an adverse reaction to the drug based say on some blood characteristics, $X_1,\\dots, X_p$. 
These blook markers may be readily obtained in the lab!\n \n So, in this case we may predict $Y$ by using $$\\widehat Y=\\widehat f(X)$$\n where $\\widehat f$ is some estimate of $f$, and $\\widehat Y$ is the resulting prediction for $Y$.\n \n How accurate our estimate $\\widehat Y$ is depends on two factors:\n<ul>\n <li><b> Reducible Error</b></li>\n\n This error stems from the fact that our estimate $\\widehat f$ of $f$ may not be perfect. However,\n since we may potentially get a better estimate of $f$ via another method, this erros is called <b> reducible</b>, as it may be further reduced.\n \n<br>\n <li><b>Irreducible Error</b></li>\n \n This error stems from the fact that there may be other features, outside of $X=(X_1,\\dots, X_p)$ that we have not measured, but that may play an important role in predicting $Y$. In other words, even if we could find a perfect estimate $\\widehat f$ of $f$, that is $\\widehat Y=f(X)$, there will still be some inherent error in the model for the simple fact that the features $X_1,\\dots, X_p$, that we have measured, are just not sufficient for a perfect prediction of $Y$. \n \n</ul>\n \nTypically, when one is exclusively interested in prediction, then the specific form of $f$ is of little to no importance, and it is taken as a <b>black box</b>.\n\n<li><h3> Inference</h3></li>\n\n\n In practice we are often not so much interested in building the best prediction model, but rather in understanding specifically how $Y$ is affected as the features $X_1,\\dots, X_p$ change. \n \n In inference problems, the estimated $\\widehat f$ may no longer taken as a black box, but rather needs to be understood well.\n \nSome questions of interest that we would want to answer are as follows:\n<ul>\n<li>Which predictors are associated with the response?</li>\n \n<li>What is the relationship between each response and the predictor? Positive, negative, more complex?</li>\n<li>Is the relationship between predictors and response linear or more complex?</li>\n</ul>\n<br>\nDiscuss the <b> Income</b> and the <b> Drug Adverse Reaction</b> cases from this perspective.\n</ul>\n\n", "_____no_output_____" ], [ "<h2> How is $f$ Estimated?</h2>\n\n<br>\n\nTo estimate $f$ you need data...often a lot of data...that will train or teach our method how to estimate $f$. The data used to train our method is refered to as <b> training data</b>.\n\nFor example, if $x_i=(x_{i1},x_{i2},\\dots, x_{in})$ for $i=1,2,\\dots,n$ is the $i^{th}$ observation and $y_i$ the response associated with it, then the <b>training data </b> consists of \n\n$$\\big\\{(x_1,y_1),(x_2,y_2),\\dots,(x_n,y_n)\\big\\}.$$\n\nThere is an ocean of linear and non-linear methods of estimating $f$. 
Overall, they can be split into two categories: <br><b> parametric</b> and <b> non-parametric</b> methods.\n\n\n<ul> \n \n <li><h3> Parametric Methods</h3></li>\n<br>\n <b> Step 1:</b> Assume the form of $f$<br>\n \n For example, one of the simplest assumptions we can make is that $f$ is linear, that is:\n \n $$f(X)=\\beta_0+\\sum_{i=1}^n\\beta_iX_i$$\n \n <b> Step 2:</b> Estimate the coefficients $\\{\\beta_i\\}_{i=0}^n$\n \n Next, you need to use the training data and select a procedure to estimate the coefficients $\\beta_0,\\beta_1,\\dots, \\beta_n$; that is find $\\widehat \\beta_0,\\dots,\\widehat \\beta_n$ such that $$\\widehat Y=\\widehat \\beta_0+\\sum_{i=1}^n\\widehat \\beta_i X_i$$\n \nOne of the main <b>advantages</b> of parametric methods is that the problem is transformed from estimating an arbitrary and unknown $f$ to estimating a set of parameters, which in general is much easier to do and requires less data!\n\nOne of the main <b>disadvantages</b> of parametic methods is that the assumption that you make about the form of $f$ often may not closely match the true form of $f$, which may lead in poor estimates.\n\n<b> Examples of parametric methods include:</b>\n<ul>\n <li> Simple Linear Regression (Least Squares)</li>\n <li> Lasso & Ridge Regression</li>\n <li> Logistic Regression </li>\n <li> Neural Nets etc.</li>\n</ul>\n\n\n<li><h3> Non-Parametric Methods</h3></li>\n<br>\nNon-parametric methods do not make any assumptions on the form of $f$, but rather try to estimate it by trying to approximate as closely and as smoothly as possible the training data.\n\nOne of the main <b>advantages</b> of non-parametric approaches is that because they do not make any assumptions on the form of $f$ they can accomodate a wide range of possibilities, and as such stand a better chance of approaching the true form of $f$, and as such may have better prediction power.\n\nOne of the main </b>disadvantages</b> is that they typically require far more training data than parametric methods to successfully and correctly estimate $f$ and may be prone to overfitting.\n\n<b> Examples of non-parametric methods include:</b>\n<ul>\n <li> Decision Trees </li>\n <li> K-Nearest Neighbor </li>\n <li> Support Vector Machines etc.</li>\n</ul>\n \n\n</ul>\n\n<hr>\n<hr>", "_____no_output_____" ], [ "<h2> Prediction Accuracy vs. Model Interpretability</h2>\n\nAs a rule of thumb, the less flexible a model is the more interpretable it may be, and vice versa, the more flexible a model is the less interpretable it may be!\n\nFor a pictorial view of where some of the models fall see Fig 1. below.![flexibility_interpretability.png](attachment:flexibility_interpretability.png)\n\n<br>\n<hr>\n <b> Remark:</b> You may ignore this part for now, but if interested, below is the code I used to generate the graph above: ", "_____no_output_____" ] ], [ [ "models={'Lasso':(0.05,0.9), 'Ridge':(0.1,0.8), 'Least Squares':(0.2,0.7),'GANs':(0.45,0.55),\n 'Decision Trees':(0.5,0.5),'SVM':(0.8,0.2),'Boosting':(0.9,0.25), 'ANN':(0.9,0.1)}\n\nplt.figure(figsize=(10,6))\n\nfor item,val in zip(models.keys(),models.values()):\n plt.text(val[0],val[1],item, fontsize=14)\n\n \nplt.xticks([0.1,0.5,1], ('Low','Medium','High'))\nplt.yticks([0.1,0.5,1], ('Low','Medium','High'))\nplt.xlim(0,1.1)\nplt.ylim(0,1.1)\nplt.xlabel(\"Flexibility\",fontsize=14)\nplt.ylabel('Interpretability',fontsize=14)\n\nplt.text(0.1,-0.2,\" Fig 1. 
Trade-off between flexibility and interpretability \",fontsize=16)\nplt.show()", "_____no_output_____" ] ], [ [ "<h2> Supervised vs. Unsupervised Learning Methods</h2>\n\n<b>Supervised learning</b> describes the situations where for each observation of the predictor measurements $x_i,\\, i=1,\\dots,n$ there is a corresponding response measurement $y_i$. \n\nModels that fall under the <b> supervised learning</b> category try to relate the response to the predictors in an attempt to accurately predict the response for future, previously unseen, observations or better understand the relationship of the response to predictors.\n<br>\n\n<b>Unsupervised learning</b> describes the situations where we observe predictor measurements $x_i,\\, i=1,\\dots,n$ but there is <b>no</b> associated response $y_i.$\n\nSince it's not possible to make predictions without having an associated response variable, what sort of analysis are possible in this scenario?\n\nWe can investigate the relationship between the <b>observations</b> or the <b>features</b> themselves!\n\n\n<h3>Mock Example</h3>\n\nSuppose we suspect there are a few <i>unknown</i> subtypes of skin-cancer, and we have tasked a team of Data Scientists to try and confirm our suspiction. \n\nWe have collected the following measurements for each tissue sample from 150 different subjects: <b> mean radius, texture</b>, and <b> concavity</b>. There is no response/target variable here to supervise our anlaysis, so it is not possible to do any prediction analysis. So, what can we do?\n\nA sample of the data is given on the <b> cancer</b> dataset below.\n\n<font color='red' size='4'>Group Exercise</font>\n\nDiscuss the graphs below. Focus on what some of the important information we can extract from them and on their shortcomings.\n\n<b> Remark</b> For the sake of this exercise you may completely ignore the code below which I used to generate the synthetic dataset and the graphs.", "_____no_output_____" ] ], [ [ "from sklearn.datasets import make_blobs", "_____no_output_____" ], [ "# def create_datfarame(feat_names,n_feat,n_samp,centers,std):\n# X, y = make_blobs(n_samples=n_samp, centers=centers,cluster_std=std, n_features=n_feat,\n# random_state=0,center_box=(0,10))\n# cancer=pd.DataFrame()\n# for name, i in zip(feat_names,range(n_feat)):\n# cancer[name]=X[:,i]\n# return cancer,y", "_____no_output_____" ], [ "# feat_names=['texture','mean_radius','concavity']\n# cancer,y=create_datfarame(feat_names,n_feat=3,n_samp=150,std=0.65,centers=3)", "_____no_output_____" ], [ "cancer.head(10)", "_____no_output_____" ], [ "plt.figure(figsize=(10,6))\nplt.scatter(cancer['mean_radius'],cancer['concavity'], c=y)\n#plt.scatter(cancer['mean_radius'],cancer['concavity'],c=y)\nplt.xlabel('mean_radius',fontsize=13)\nplt.ylabel(\"concavity\",fontsize=14)\nplt.show()", "_____no_output_____" ], [ "plt.figure(figsize=(10,6))\nplt.scatter(cancer['mean_radius'],cancer['texture'], c=y)\nplt.xlabel('mean_radius',fontsize=13)\nplt.ylabel(\"texture\",fontsize=14)\nplt.show()", "_____no_output_____" ], [ "plt.figure(figsize=(10,6))\nplt.scatter(cancer['texture'],cancer['concavity'], c=y)\nplt.xlabel('texture',fontsize=13)\nplt.ylabel(\"concavity\",fontsize=14)\nplt.show()", "_____no_output_____" ] ], [ [ "<h2> Regression vs. Classification Problems</h2>\n\nBoth regression and classification problems fall under the supervised learning realm. 
\n\nGenerally, problems with a <i> quantitative</i> response variable are referred to as <b> regression problems</b> and those with a <i>qualitative</i> response variable are referred to as <b> classification problems</b>.\n\nExamples of:\n<ul>\n <li> <b>quantiative variables:</b> age, height, weight, average gpa, yearly income, blood pressure etc.</li>\n <li><b>qualitative variables:</b> gender, race, spam email (yes/no), credit card fraud (ye/no) etc.</li> \n</ul>", "_____no_output_____" ], [ "<h2> Assesing Model Accuracy</h2>\n\nBecause there is no one best model that works for all data sets, in practice, it is very important and often very challenging to select the best model for a given dataset. \n\nIn order to be able to select one model over another, it is crutial to have a quantiative way of measuring the models performance and quality of fit. In other words, for a given observation, we need a way to quantify how close the predicted response is to the true response.\n\n<h3>Regression Setting</h3>\n\nOne widely used measure in the regression setting is the <i> mean squared error</i> (MSE):\n\n$$MSE=\\frac{1}{n}\\sum_{i=1}^n\\left(y_i-\\widehat y_i\\right)^2$$\n\nwhere $y_i$ is the true response and $\\widehat y_i$ is the prediction that $\\widehat f$ gives for the the observation $x_i$;\nthat is $\\widehat y_i=\\widehat f(x_i)$.\n\nA small MSE indicated that the true and predicted response are close to each other, on the other hand, if some of the predicted responses are far away from the true ones, MSE will tend to be large. \n\n<h4> Training vs. Test MSE</h4>\n\nTraining MSE is measured using the training dataset, whereas test MSE is measured using observations which have not previously been seen by the model. \n\nIn practice, we care about the performance of our model on previously unseen data, hence <b> test MSE</b> is what should be used. \n\nSelecting a model based on <b> training MSE</b> can lead to extremely poor performance, as training MSE and test MSE may behave very differently. Specifically, a model with very low <b> training MSE</b> may have a very large <b>test MSE</b>(which is what we really care about).\n\nLet's discuss the graphs below which illustrate this phenomenon:\n\n![mse_test_training.PNG](attachment:mse_test_training.PNG)\n\n<br>\n\n<h3>Classification Setting</h3>\n\n<ul>\n <li> <b>Confustion Matrix</b></li>\n \n A confusion matrix is a simple and neat way of portraying how well our algorithm is doing at classifying the data.\n \n <b> Example:</b> Suppose we want to classify credit card transactions as <i> fradulent</i> or <i> normal(non-fradulent)</i> based on a certain number of features we have used to train our algorithm on. 
\n \nWe will designate <i> normal</i> as our positive class and <i> fradu</i> as the negative class.\n\n<b> Notation:</b>\n<ul>\n <li> <b>TP</b>= True Positive</li>\n <li><b>TN</b>=True Negative</li>\n <li><b>FP</b>=False Positive</li>\n <li><b>FN</b>=False Negative</li>\n</ul>\n \n ![confustion_matrix.PNG](attachment:confustion_matrix.PNG)\n \n \nDepnding on the situation and the type of problem, we may be in some or all of the following measures:\n\n<br>\n<ul>\n \n <li> <b> Accuracy Rate:</b> $$\\frac{TP+TN}{Total} \\text{ where }\\, Total=TP+TN+FP+FN$$</li>\n \n <li><b> Error Rate:</b> $$ 1-\\frac{TP+TN}{Total}$$</li>\n \n <li><b> True Positive Rate or Recall:</b> $$\\frac{TP}{TP+FN}$$</li>\n \n <li><b>False Positive Rate:</b> $$\\frac{FP}{FP+TN}$$</li>\n <br>\n <li><b> Precision (if the algorithm predicts Normal, how often is it correct?):</b> \n \n $$\\frac{TP}{TP+FP}$$</li>\n \n</ul>\n\n<hr>\n<br>\n<font color='red' size='4'>Group Discussion</font>: Considering the credit-card fraud example, which of these measured do you think would be of most interest, and why? Specifically, would <b> accuracy</b> be a good choice for measuring the models performance?\n<hr>\n</ul>", "_____no_output_____" ], [ "<h2> Bias-Variance Trade-Off</h2>\n\nIn every machine learning model there are always two competing properties that are at war with each other, namely <b> bias</b> and <b>variance</b> of the model.\n\nSuppose there is a relationship between the predictor $X$ and the response $Y$, $$y=f(x)+\\epsilon$$ and that using a machine learning model and some training set we estimate this relationship, that is $$\\widehat y=\\widehat f(x)$$\n\nNow, given a new observation $x_0$, the error we observe in our model for this observation is:$$\\widehat f(x_0)-f(x_0)-\\epsilon$$\n\nIt is very important to notice that this erro, among other things, also depends on the training set that was used to estimate $f$. A good and roboust model would give good predictions regardless of what training set was used to compute $\\widehat f$. \n\nHence, we can talk about the <i> average </i> error incurred due to possible different estimates of $f$ using different training sets. \n\nThat is, the average <i> test MSE</i> for a given new observation $x_0$ can always be decomposed into the following three components \n\n$$E\\left[\\left(\\widehat f(x_0)-f(x_0)-\\epsilon\\right)^2\\right]=\\left(Bias \\widehat f(x_0)\\right)^2+Var\\left(\\widehat f(x_0)\\right)+Var(\\epsilon)$$\n\nSo, a great model is one that has <b> low bias</b> and <b>low variance</b>.\n\nWhat do we exactly mean by <b> variance</b> and <b>bias</b> of a ML model?\n\n<ul>\n <li><b> Bias of a ML model </b> refers to the error that is introduced by approximating a complex real-life problem by a simpler model. For example, approximating a real-life situation by a linear regression introduces bias, as it assumes that there is a linear relationship between predictors and the response, which may not necessarily be the case</li>\n <li><b> Variance of a ML model </b> refers to the amount by which $\\widehat f$ would change if we estimated it using different training sets. </li>\n </ul>\n\nThe following picture gives a good pictorial representation of this dynamic.\n\n![bias_variance.PNG](attachment:bias_variance.PNG)\n\nAs a rule of thumb, the more complex and flexible a model is the higher the variance and the lower the bias and vice versa, the less flexible and simple a model the higher the bias and the lower the variance. 
This dynamic of bias-variance is the reason why <i> test MSE</i> is always U-shaped, as is illustrated in the graph below.\n\n![bias_variance2.PNG](attachment:bias_variance2.PNG)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ] ]
cbf5db63f438eec0791210f88d9611fd1e37682b
20,302
ipynb
Jupyter Notebook
.ipynb_checkpoints/clean_data-checkpoint.ipynb
nithiye/Project-Guru
55e7a0497e09c42d6fe3e1330570afd2f59560c2
[ "MIT" ]
null
null
null
.ipynb_checkpoints/clean_data-checkpoint.ipynb
nithiye/Project-Guru
55e7a0497e09c42d6fe3e1330570afd2f59560c2
[ "MIT" ]
null
null
null
.ipynb_checkpoints/clean_data-checkpoint.ipynb
nithiye/Project-Guru
55e7a0497e09c42d6fe3e1330570afd2f59560c2
[ "MIT" ]
null
null
null
31.185868
96
0.318146
[ [ [ "# Dependencies and Setup\nimport pandas as pd\nimport numpy as np\n", "_____no_output_____" ], [ "# File to read\norg_dest_csv = \"Resources/UN_MigrantStockByOriginAndDestination_2017.csv\"\ncountry_cat_csv = \"Resources/Region_country_catalog.csv\"\n\n# Read data into Pandas data frame\norg_dest_DF = pd.read_csv(org_dest_csv)\ncountry_cat_DF = pd.read_csv(country_cat_csv)", "_____no_output_____" ], [ "# we need countries of origin, country of destination, total numbers of migrants, years\norg_dest_DF = org_dest_DF.replace('..',0)", "_____no_output_____" ], [ "new_country_data = pd.merge(country_cat_DF,org_dest_DF, on=\"Country\")\n\nnew_country_data.loc[new_country_data[\"Country\"] == \"United States of America\"]", "_____no_output_____" ], [ "for idx, value in new_country_data.iterrows():\n destination = value.Country\n reg_dest = value.Region\n year = value.Year\n #print(destination)\n origin_ctrs = value[3:]\n #print(origin_ctrs)\n\nexample = origin_ctrs.to_frame()\nexample", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
cbf5de75d075d16005173efd141e45a43f2ac6cb
12,046
ipynb
Jupyter Notebook
Assignments/Titanic_Pyspark_ML.ipynb
Jeromeschmidt/DS-2.3-Data-Science-in-Production
fbfba72bd6b66b545d60b348f9c0a814a4e92657
[ "MIT" ]
null
null
null
Assignments/Titanic_Pyspark_ML.ipynb
Jeromeschmidt/DS-2.3-Data-Science-in-Production
fbfba72bd6b66b545d60b348f9c0a814a4e92657
[ "MIT" ]
4
2020-09-26T01:29:25.000Z
2021-08-25T16:13:51.000Z
Assignments/Titanic_Pyspark_ML.ipynb
Jeromeschmidt/DS-2.3-Data-Science-in-Production
fbfba72bd6b66b545d60b348f9c0a814a4e92657
[ "MIT" ]
null
null
null
24.684426
228
0.429769
[ [ [ "from pyspark import SparkContext\nsc = SparkContext()\nfrom pyspark.sql import SparkSession\n\nspark = SparkSession \\\n .builder \\\n .appName(\"Python Spark Titanic Final\") \\\n .config(\"spark.some.config.option\", \"some-value\") \\\n .getOrCreate()", "_____no_output_____" ], [ "df = spark.read.csv('data/Titanic.csv',header=True, inferSchema = True)", "_____no_output_____" ], [ "df = df.drop('_c0')", "_____no_output_____" ], [ "# Drop Columns with Null Values\ndf = df.na.drop()", "_____no_output_____" ], [ "df.show(5)", "+-----------+--------+------+--------------------+------+----+-----+-----+--------+-------+-----+--------+\n|PassengerId|Survived|Pclass| Name| Sex| Age|SibSp|Parch| Ticket| Fare|Cabin|Embarked|\n+-----------+--------+------+--------------------+------+----+-----+-----+--------+-------+-----+--------+\n| 2| 1| 1|Cumings, Mrs. Joh...|female|38.0| 1| 0|PC 17599|71.2833| C85| C|\n| 4| 1| 1|Futrelle, Mrs. Ja...|female|35.0| 1| 0| 113803| 53.1| C123| S|\n| 7| 0| 1|McCarthy, Mr. Tim...| male|54.0| 0| 0| 17463|51.8625| E46| S|\n| 11| 1| 3|Sandstrom, Miss. ...|female| 4.0| 1| 1| PP 9549| 16.7| G6| S|\n| 12| 1| 1|Bonnell, Miss. El...|female|58.0| 0| 0| 113783| 26.55| C103| S|\n+-----------+--------+------+--------------------+------+----+-----+-----+--------+-------+-----+--------+\nonly showing top 5 rows\n\n" ], [ "df.describe", "_____no_output_____" ], [ "df.dtypes", "_____no_output_____" ], [ "df.printSchema()", "root\n |-- PassengerId: integer (nullable = true)\n |-- Survived: integer (nullable = true)\n |-- Pclass: integer (nullable = true)\n |-- Name: string (nullable = true)\n |-- Sex: string (nullable = true)\n |-- Age: double (nullable = true)\n |-- SibSp: integer (nullable = true)\n |-- Parch: integer (nullable = true)\n |-- Ticket: string (nullable = true)\n |-- Fare: double (nullable = true)\n |-- Cabin: string (nullable = true)\n |-- Embarked: string (nullable = true)\n\n" ], [ "df = df.select('Survived', 'Sex', 'Embarked', 'Pclass', 'Age', 'SibSp', 'Parch', 'Fare')\ncols = df.columns", "_____no_output_____" ], [ "# Transform datatypes for training\nfrom pyspark.ml.feature import OneHotEncoder, StringIndexer, VectorAssembler\n\ncategoricalColumns = ['Sex', 'Embarked']\nstages = []\nfor categoricalCol in categoricalColumns:\n stringIndexer = StringIndexer(inputCol = categoricalCol, outputCol = categoricalCol + 'Index')\n encoder = OneHotEncoder(inputCols=[stringIndexer.getOutputCol()], outputCols=[categoricalCol + \"classVec\"])\n stages += [stringIndexer, encoder]\nlabel_stringIdx = StringIndexer(inputCol = 'Survived', outputCol = 'label')\nstages += [label_stringIdx]\nnumericCols = ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare']\nassemblerInputs = [c + \"classVec\" for c in categoricalColumns] + numericCols\nassembler = VectorAssembler(inputCols=assemblerInputs, outputCol=\"features\")\nstages += [assembler]", "_____no_output_____" ], [ "from pyspark.ml import Pipeline\npipeline = Pipeline(stages = stages)\npipelineModel = pipeline.fit(df)\ndf = pipelineModel.transform(df)\nselectedCols = ['label', 'features'] + cols\ndf = df.select(selectedCols)\ndf.printSchema()", "root\n |-- label: double (nullable = false)\n |-- features: vector (nullable = true)\n |-- Survived: integer (nullable = true)\n |-- Sex: string (nullable = true)\n |-- Embarked: string (nullable = true)\n |-- Pclass: integer (nullable = true)\n |-- Age: double (nullable = true)\n |-- SibSp: integer (nullable = true)\n |-- Parch: integer (nullable = true)\n |-- Fare: double (nullable = true)\n\n" 
], [ "df_RDD = df.rdd.map(list)", "_____no_output_____" ], [ "df_RDD.take(10)", "_____no_output_____" ], [ "train, test = df.randomSplit([0.7, 0.3])", "_____no_output_____" ], [ "# Trains model\nfrom pyspark.ml.classification import LogisticRegression\nlr = LogisticRegression(featuresCol = 'features', labelCol = 'label', maxIter=10)\nlrModel = lr.fit(train)", "_____no_output_____" ], [ "lrModel.numFeatures", "_____no_output_____" ], [ "lrModel.coefficients", "_____no_output_____" ], [ "lrModel.intercept", "_____no_output_____" ], [ "# Get accuracy of trained model\ntrainingSummary = lrModel.summary\nprint(trainingSummary.accuracy)", "0.7518248175182481\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbf5e6b93d3cddc25e20d02209a5229690c34b2a
622,878
ipynb
Jupyter Notebook
python/03-data-analysis/mlp-tensorflow-1.x/mlp-regression-tensorflow1.x-house.ipynb
igerardoh/house-price-prediction
b858cfa36ae9c27c411999310e629f1e6c2f86c6
[ "MIT" ]
2
2020-11-22T09:03:34.000Z
2021-05-06T01:14:20.000Z
python/03-data-analysis/mlp-tensorflow-1.x/mlp-regression-tensorflow1.x-house.ipynb
igerardoh/house-price-prediction
b858cfa36ae9c27c411999310e629f1e6c2f86c6
[ "MIT" ]
null
null
null
python/03-data-analysis/mlp-tensorflow-1.x/mlp-regression-tensorflow1.x-house.ipynb
igerardoh/house-price-prediction
b858cfa36ae9c27c411999310e629f1e6c2f86c6
[ "MIT" ]
1
2021-02-13T07:48:40.000Z
2021-02-13T07:48:40.000Z
499.100962
496,211
0.594815
[ [ [ "# House Price Prediction", "_____no_output_____" ], [ "<p><b>Status: <span style=color:orange;>In process</span></b></p>", "_____no_output_____" ], [ "##### LOAD THE FEATURE DATA", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\n\nX = pd.read_csv('../../../data/preprocessed_data/X.csv', sep=',')\n\nprint ('Feature data, shape:\\nX: {}'.format(X.shape))\nX.head()", "Feature data, shape:\nX: (506, 13)\n" ], [ "y = pd.read_csv('../../../data/preprocessed_data/y.csv', sep=',', header=None)\n\nprint ('Target data, shape:\\ny: {}'.format(y.shape))\ny.head()", "Target data, shape:\ny: (506, 1)\n" ] ], [ [ "##### SPLIT THE DATA", "_____no_output_____" ] ], [ [ "from sklearn.model_selection import train_test_split\n\n# set the seed for reproducibility\nnp.random.seed(127)\n\n# split the dataset into 2 training and 2 testing sets\nX_train, X_test, y_train, y_test = train_test_split(X.values, y.values, test_size=0.2, random_state=13)\n\nprint('Data shapes:\\n')\nprint('X_train : {}\\ny_train : {}\\n\\nX_test : {}\\ny_test : {}'.format(X_train.shape,\n y_train.shape,\n X_test.shape,\n y_test.shape))", "Data shapes:\n\nX_train : (404, 13)\ny_train : (404, 1)\n\nX_test : (102, 13)\ny_test : (102, 1)\n" ] ], [ [ "##### DEFINE NETWORK PARAMETERS", "_____no_output_____" ] ], [ [ "# define number of attributes\nn_features = X_train.shape[1]\nn_target = 1 # quantitative data\n\n# count number of samples in each set of data\nn_train = X_train.shape[0]\nn_test = X_test.shape[0]\n\n# define amount of neurons\nn_layer_in = n_features # 12 neurons in input layer\nn_layer_h1 = 5 # first hidden layer\nn_layer_h2 = 5 # second hidden layer\nn_layer_out = n_target # 1 neurons in output layer\n\nsigma_init = 0.01 # For randomized initialization", "_____no_output_____" ] ], [ [ "##### RESET TENSORFLOW GRAPH IF THERE IS ANY", "_____no_output_____" ] ], [ [ "import tensorflow as tf\n\n# this will set up a specific seed in order to control the output \n# and get more homogeneous results though every model variation\ndef reset_graph(seed=127):\n tf.reset_default_graph()\n tf.set_random_seed(seed)\n np.random.seed(seed)\n \nreset_graph()", "_____no_output_____" ] ], [ [ "##### MODEL ARCHITECTURE", "_____no_output_____" ] ], [ [ "# create symbolic variables\nX = tf.placeholder(tf.float32, [None, n_layer_in], name=\"input\")\nY = tf.placeholder(tf.float32, [None, n_layer_out], name=\"output\")\n\n# deploy the variables that will store the weights\nW = {\n 'W1': tf.Variable(tf.random_normal([n_layer_in, n_layer_h1], stddev = sigma_init), name='W1'),\n 'W2': tf.Variable(tf.random_normal([n_layer_h1, n_layer_h2], stddev = sigma_init), name='W2'),\n 'W3': tf.Variable(tf.random_normal([n_layer_h2, n_layer_out], stddev = sigma_init), name='W3')\n}\n\n# deploy the variables that will store the bias\nb = {\n 'b1': tf.Variable(tf.random_normal([n_layer_h1]), name='b1'),\n 'b2': tf.Variable(tf.random_normal([n_layer_h2]), name='b2'),\n 'b3': tf.Variable(tf.random_normal([n_layer_out]), name='b3')\n}\n\n# this will create the model architecture and output the result\ndef model_MLP(_X, _W, _b):\n with tf.name_scope('hidden_1'):\n layer_h1 = tf.nn.selu(tf.add(tf.matmul(_X,_W['W1']), _b['b1']))\n with tf.name_scope('hidden_2'):\n layer_h2 = tf.nn.selu(tf.add(tf.matmul(layer_h1,_W['W2']), _b['b2']))\n with tf.name_scope('layer_output'):\n layer_out = tf.add(tf.matmul(layer_h2,_W['W3']), _b['b3'])\n return layer_out # these are the predictions\n\nwith tf.name_scope(\"MLP\"):\n y_pred = model_MLP(X, W, b)", 
"_____no_output_____" ] ], [ [ "##### DEFINE LEARNING RATE", "_____no_output_____" ] ], [ [ "learning_rate = 0.4\n\n# CHOOSE A DECAYING METHOD IN HERE\nmodel_decay = 'none' # [exponential | inverse_time | natural_exponential | polynomial | none]\n\nglobal_step = tf.Variable(0, trainable=False)\ndecay_rate = 0.90\ndecay_step = 10000\n\nif model_decay == 'exponential':\n learning_rate = tf.train.exponential_decay(learning_rate, global_step, decay_step, decay_rate)\n\nelif model_decay == 'inverse_time':\n learning_rate = tf.train.inverse_time_decay(learning_rate, global_step, decay_step, decay_rate)\n \nelif model_decay == 'natural_exponential':\n learning_rate = tf.train.natural_exp_decay(learning_rate, global_step, decay_step, decay_rate)\n \nelif model_decay == 'polynomial':\n end_learning_rate = 0.001\n learning_rate = tf.train.polynomial_decay(learning_rate, global_step, decay_step, end_learning_rate, power=0.5)\n \nelse:\n decay_rate = 1.0\n learning_rate = tf.train.exponential_decay(learning_rate, global_step, decay_step, decay_rate)\n\nprint('Decaying Learning Rate : ', model_decay)", "Decaying Learning Rate : none\n" ] ], [ [ "##### DEFINE MODEL TRAINING AND MEASURE PERFORMANCE", "_____no_output_____" ] ], [ [ "with tf.name_scope(\"loss\"):\n loss = tf.square(Y - y_pred) # squared error\n #loss = tf.nn.softmax(logits=y_pred) # softmax\n #loss = tf.nn.log_softmax(logits=y_pred) # log-softmax\n #loss = tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y, logits=y_pred, dim=-1) # cross-entropy\n #loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=Y, logits=y_pred) # sigmoid-cross-entropy\n #loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=Y, logits=y_pred) # sparse-softmax-cross-entropy\n loss = tf.reduce_mean(loss, name='MSE')\n \nwith tf.name_scope(\"train\"):\n #optimizer = tf.train.GradientDescentOptimizer(learning_rate) # SGD\n #optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate,momentum=0.9) # MOMENTUM\n #optimizer = tf.train.AdagradOptimizer(learning_rate=learning_rate) # ADAGRAD\n optimizer = tf.train.AdadeltaOptimizer(learning_rate=learning_rate) # ADADELTA\n #optimizer = tf.train.RMSPropOptimizer(learning_rate=learning_rate, decay=1) # RMS\n training_op = optimizer.minimize(loss, global_step=global_step)\n\n# Create summaries \ntf.summary.scalar(\"loss\", loss)\ntf.summary.scalar(\"learn_rate\", learning_rate)\n\n# Merge all summaries into a single op to generate the summary data\nmerged_summary_op = tf.summary.merge_all()", "_____no_output_____" ] ], [ [ "##### DEFINE DIRECTORIES FOR RESULTS", "_____no_output_____" ] ], [ [ "import sys\nimport shutil\nfrom datetime import datetime\n\n# set up the directory to store the results for tensorboard\nnow = datetime.utcnow().strftime('%Y%m%d%H%M%S')\nroot_ckpoint = 'tf_checkpoints'\nroot_logdir = 'tf_logs'\nlogdir = '{}/run-{}/'.format(root_logdir, now) \n\n## Try to remove tree; if failed show an error using try...except on screen\ntry:\n shutil.rmtree(root_ckpoint)\nexcept OSError as e:\n print (\"Error: %s - %s.\" % (e.filename, e.strerror))", "_____no_output_____" ] ], [ [ "##### EXECUTE THE MODEL", "_____no_output_____" ] ], [ [ "from datetime import datetime\n\n# define some parameters\nn_epochs = 40\ndisplay_epoch = 2 # checkpoint will also be created based on this\nbatch_size = 10\nn_batches = int(n_train/batch_size)\n\n# this will help to restore the model to a specific epoch\nsaver = tf.train.Saver(tf.global_variables())\n\n# store the results through every epoch iteration\nmse_train_list = 
[]\nmse_test_list = []\nlearning_list = []\nprediction_results = []\n\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n \n # write logs for tensorboard\n summary_writer = tf.summary.FileWriter(logdir, graph=tf.get_default_graph())\n \n for epoch in range(n_epochs):\n for i in range(0, n_train, batch_size):\n # create batches\n X_batch = X_train[i:i+batch_size]\n y_batch = y_train[i:i+batch_size]\n \n # improve the model\n _, _summary = sess.run([training_op, merged_summary_op], feed_dict={X:X_batch, Y:y_batch})\n \n # Write logs at every iteration\n summary_writer.add_summary(_summary)\n\n # measure performance and display the results\n if (epoch+1) % display_epoch == 0:\n _mse_train = sess.run(loss, feed_dict={X: X_train, Y: y_train})\n _mse_test = sess.run(loss, feed_dict={X: X_test, Y: y_test})\n mse_train_list.append(_mse_train); mse_test_list.append(_mse_test)\n learning_list.append(sess.run(learning_rate))\n \n # Save model weights to disk for reproducibility\n saver = tf.train.Saver(max_to_keep=15)\n saver.save(sess, \"{}/epoch{:04}.ckpt\".format(root_ckpoint, (epoch+1)))\n \n print(\"Epoch: {:04}\\tTrainMSE: {:06.5f}\\tTestMSE: {:06.5f}, Learning: {:06.7f}\".format((epoch+1),\n _mse_train,\n _mse_test,\n learning_list[-1]))\n # store the predicted values (used by the plots below)\n prediction_results = sess.run(y_pred, feed_dict={X: X_test, Y: y_test})\n predictions = prediction_results # same forward pass; reused for the comparative table\n \n # output comparative table\n dataframe = pd.DataFrame(predictions, columns=['Prediction'])\n dataframe['Target'] = y_test\n dataframe['Difference'] = dataframe.Target - dataframe.Prediction\n print('\\nPrinting results :\\n\\n', dataframe)\n \n", "Epoch: 0002\tTrainMSE: 596.19885\tTestMSE: 545.42310, Learning: 0.4000000\nEpoch: 0004\tTrainMSE: 583.20251\tTestMSE: 533.55811, Learning: 0.4000000\nEpoch: 0006\tTrainMSE: 35.75017\tTestMSE: 34.80634, Learning: 0.4000000\nEpoch: 0008\tTrainMSE: 23.05773\tTestMSE: 22.79489, Learning: 0.4000000\nEpoch: 0010\tTrainMSE: 21.23712\tTestMSE: 20.91974, Learning: 0.4000000\nEpoch: 0012\tTrainMSE: 20.45389\tTestMSE: 19.68934, Learning: 0.4000000\nEpoch: 0014\tTrainMSE: 19.87004\tTestMSE: 18.63902, Learning: 0.4000000\nEpoch: 0016\tTrainMSE: 19.40698\tTestMSE: 17.78506, Learning: 0.4000000\nEpoch: 0018\tTrainMSE: 19.03131\tTestMSE: 17.12411, Learning: 0.4000000\nEpoch: 0020\tTrainMSE: 18.72885\tTestMSE: 16.63563, Learning: 0.4000000\nEpoch: 0022\tTrainMSE: 18.51796\tTestMSE: 16.38017, Learning: 0.4000000\nEpoch: 0024\tTrainMSE: 18.35551\tTestMSE: 16.17319, Learning: 0.4000000\nEpoch: 0026\tTrainMSE: 18.25398\tTestMSE: 16.04083, Learning: 0.4000000\nEpoch: 0028\tTrainMSE: 18.16143\tTestMSE: 15.96518, Learning: 0.4000000\nEpoch: 0030\tTrainMSE: 18.10532\tTestMSE: 15.88546, Learning: 0.4000000\nEpoch: 0032\tTrainMSE: 18.05683\tTestMSE: 15.84991, Learning: 0.4000000\nEpoch: 0034\tTrainMSE: 18.01700\tTestMSE: 15.82729, Learning: 0.4000000\nEpoch: 0036\tTrainMSE: 17.97848\tTestMSE: 15.82194, Learning: 0.4000000\nEpoch: 0038\tTrainMSE: 17.95045\tTestMSE: 15.80706, Learning: 0.4000000\nEpoch: 0040\tTrainMSE: 17.92572\tTestMSE: 15.79528, Learning: 0.4000000\n\nPrinting results :\n\n Prediction Target Difference\n0 9.193834 12.0 2.806166\n1 18.926088 15.2 -3.726088\n2 23.117142 21.0 -2.117142\n3 27.829626 24.0 -3.829626\n4 21.263201 19.4 -1.863201\n5 23.466669 22.2 -1.266669\n6 23.411528 23.3 -0.111528\n7 22.245726 15.6 -6.645726\n8 20.158220 20.8 0.641780\n9 7.969501 13.8 5.830499\n10 19.225187 19.6 0.374813\n11 19.696911 27.1 
7.403089\n12 36.687702 36.5 -0.187702\n13 9.784266 15.2 5.415734\n14 14.134178 11.7 -2.434178\n15 16.549633 14.1 -2.449633\n16 14.035741 17.2 3.164259\n17 20.723890 16.8 -3.923890\n18 32.069923 32.9 0.830077\n19 21.782356 21.4 -0.382356\n20 34.588329 32.4 -2.188329\n21 28.161507 23.5 -4.661507\n22 21.982279 20.4 -1.582279\n23 15.237765 13.1 -2.137765\n24 18.106474 12.6 -5.506474\n25 8.134708 10.4 2.265292\n26 35.699463 50.0 14.300537\n27 17.739424 23.1 5.360576\n28 13.625247 13.4 -0.225247\n29 21.156153 24.3 3.143847\n.. ... ... ...\n72 17.693649 19.5 1.806351\n73 21.418242 21.0 -0.418242\n74 27.526747 30.1 2.573253\n75 15.347914 18.4 3.052086\n76 34.018070 34.6 0.581930\n77 21.644808 20.1 -1.544808\n78 40.236862 43.5 3.263138\n79 23.894194 21.6 -2.294194\n80 19.206249 18.3 -0.906249\n81 20.132231 21.4 1.267769\n82 21.850767 18.9 -2.950767\n83 11.070068 13.4 2.329932\n84 30.408710 30.8 0.391290\n85 25.289640 25.0 -0.289640\n86 25.768282 25.2 -0.568282\n87 8.118471 8.8 0.681529\n88 30.570581 31.1 0.529419\n89 10.653531 13.4 2.746469\n90 40.412666 48.3 7.887334\n91 21.701283 17.8 -3.901283\n92 8.778216 5.6 -3.178216\n93 12.535055 12.7 0.164945\n94 17.012262 16.1 -0.912262\n95 18.989309 20.9 1.910691\n96 19.273836 19.9 0.626164\n97 14.782319 13.9 -0.882319\n98 25.127522 22.6 -2.527522\n99 20.834837 21.2 0.365163\n100 21.338478 21.2 -0.138478\n101 20.786850 22.9 2.113150\n\n[102 rows x 3 columns]\n" ] ], [ [ "##### VISUALIZE THE MODEL'S IMPROVEMENTS", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport matplotlib.patches as mpatches\n\n# set up legend\nblue_patch = mpatches.Patch(color='blue', label='Train MSE')\nred_patch = mpatches.Patch(color='red', label='Test MSE')\nplt.legend(handles=[blue_patch,red_patch])\nplt.grid()\n\n# plot the data\nplt.plot(mse_train_list, color='blue')\nplt.plot(mse_test_list, color='red')\n\nplt.xlabel('epochs (x{})'.format(display_epoch))\nplt.ylabel('MSE [minimize]');", "_____no_output_____" ] ], [ [ "##### LEARNING RATE EVOLUTION", "_____no_output_____" ] ], [ [ "or_patch = mpatches.Patch(color='orange', label='Learning rate')\nplt.legend(handles=[or_patch])\n\nplt.plot(learning_list, color='orange');\nplt.xlabel('epochs (x{})'.format(display_epoch))\nplt.ylabel('learning rate');", "_____no_output_____" ] ], [ [ "##### VISUALIZE THE RESULTS", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(15,10))\n\n# define legend\nblue_patch = mpatches.Patch(color='blue', label='Prediction')\nred_patch = mpatches.Patch(color='red', label='Expected Value')\ngreen_patch = mpatches.Patch(color='green', label='Abs Error')\nplt.legend(handles=[blue_patch,red_patch, green_patch])\n\n# plot data\nx_array = np.arange(len(prediction_results))\nplt.scatter(x_array, prediction_results, color='blue')\nplt.scatter(x_array, y_test, color='red')\n\nabs_error = abs(y_test-prediction_results)\nplt.plot(x_array, abs_error, color='green')\nplt.grid()\n\n# define legends\nplt.xlabel('index'.format(display_epoch))\nplt.ylabel('MEDV');", "_____no_output_____" ] ], [ [ "##### VISUALIZE TENSORBOARD", "_____no_output_____" ] ], [ [ "from IPython.display import clear_output, Image, display, HTML\n\n# CHECK IT ON TENSORBOARD TYPING THESE LINES IN THE COMMAND PROMPT:\n# tensorboard --logdir=/tmp/tf_logs\n\ndef strip_consts(graph_def, max_const_size=32):\n \"\"\"Strip large constant values from graph_def.\"\"\"\n strip_def = tf.GraphDef()\n for n0 in graph_def.node:\n n = strip_def.node.add() \n n.MergeFrom(n0)\n if n.op == 'Const':\n tensor = 
n.attr['value'].tensor\n size = len(tensor.tensor_content)\n if size > max_const_size:\n tensor.tensor_content = b\"<stripped %d bytes>\"%size\n return strip_def\n\ndef show_graph(graph_def, max_const_size=32):\n \"\"\"Visualize TensorFlow graph.\"\"\"\n if hasattr(graph_def, 'as_graph_def'):\n graph_def = graph_def.as_graph_def()\n strip_def = strip_consts(graph_def, max_const_size=max_const_size)\n code = \"\"\"\n <script>\n function load() {{\n document.getElementById(\"{id}\").pbtxt = {data};\n }}\n </script>\n <link rel=\"import\" href=\"https://tensorboard.appspot.com/tf-graph-basic.build.html\" onload=load()>\n <div style=\"height:600px\">\n <tf-graph-basic id=\"{id}\"></tf-graph-basic>\n </div>\n \"\"\".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))\n\n iframe = \"\"\"\n <iframe seamless style=\"width:1200px;height:620px;border:0\" srcdoc=\"{}\"></iframe>\n \"\"\".format(code.replace('\"', '&quot;'))\n display(HTML(iframe))\n \nshow_graph(tf.get_default_graph())", "_____no_output_____" ] ], [ [ "## ----- PREPARE THE MODEL FOR FUTURE RESTORES -----", "_____no_output_____", "##### SAVED VARIABLE LIST\nThis is the list of variables that were saved on every checkpoint after training.\n\n.data: Contains variable values\n\n.meta: Contains graph structure\n\n.index: Identifies checkpoints", "_____no_output_____" ] ], [ [ "for i, var in enumerate(saver._var_list):\n print('Var {}: {}'.format(i, var))", "Var 0: <tf.Variable 'W1:0' shape=(13, 5) dtype=float32_ref>\nVar 1: <tf.Variable 'W2:0' shape=(5, 5) dtype=float32_ref>\nVar 2: <tf.Variable 'W3:0' shape=(5, 1) dtype=float32_ref>\nVar 3: <tf.Variable 'b1:0' shape=(5,) dtype=float32_ref>\nVar 4: <tf.Variable 'b2:0' shape=(5,) dtype=float32_ref>\nVar 5: <tf.Variable 'b3:0' shape=(1,) dtype=float32_ref>\nVar 6: <tf.Variable 'Variable:0' shape=() dtype=int32_ref>\nVar 7: <tf.Variable 'W1/Adadelta:0' shape=(13, 5) dtype=float32_ref>\nVar 8: <tf.Variable 'W1/Adadelta_1:0' shape=(13, 5) dtype=float32_ref>\nVar 9: <tf.Variable 'W2/Adadelta:0' shape=(5, 5) dtype=float32_ref>\nVar 10: <tf.Variable 'W2/Adadelta_1:0' shape=(5, 5) dtype=float32_ref>\nVar 11: <tf.Variable 'W3/Adadelta:0' shape=(5, 1) dtype=float32_ref>\nVar 12: <tf.Variable 'W3/Adadelta_1:0' shape=(5, 1) dtype=float32_ref>\nVar 13: <tf.Variable 'b1/Adadelta:0' shape=(5,) dtype=float32_ref>\nVar 14: <tf.Variable 'b1/Adadelta_1:0' shape=(5,) dtype=float32_ref>\nVar 15: <tf.Variable 'b2/Adadelta:0' shape=(5,) dtype=float32_ref>\nVar 16: <tf.Variable 'b2/Adadelta_1:0' shape=(5,) dtype=float32_ref>\nVar 17: <tf.Variable 'b3/Adadelta:0' shape=(1,) dtype=float32_ref>\nVar 18: <tf.Variable 'b3/Adadelta_1:0' shape=(1,) dtype=float32_ref>\n" ] ], [ [ "##### RESTORE TO CHECKPOINT", "_____no_output_____" ] ], [ [ "# select the epoch to be restored\nepoch = 38\n\n# Running a new session\nprint('Restoring model to Epoch {}\\n'.format(epoch))\n\nwith tf.Session() as sess:\n # Restore variables from disk\n saver.restore(sess, '{}/epoch{:04}.ckpt'.format(root_ckpoint, epoch))\n\n print('\\nPrint expected values :')\n print(y_test)\n \n print('\\nPrint predicted values :')\n predictions = sess.run(y_pred, feed_dict={X: X_test})\n print(predictions)", "Restoring model to Epoch 38\n\nINFO:tensorflow:Restoring parameters from tf_checkpoints/epoch0038.ckpt\n\nPrint expected values :\n[[12. ]\n [15.2]\n [21. ]\n [24. 
]\n [19.4]\n [22.2]\n [23.3]\n [15.6]\n [20.8]\n [13.8]\n [19.6]\n [27.1]\n [36.5]\n [15.2]\n [11.7]\n [14.1]\n [17.2]\n [16.8]\n [32.9]\n [21.4]\n [32.4]\n [23.5]\n [20.4]\n [13.1]\n [12.6]\n [10.4]\n [50. ]\n [23.1]\n [13.4]\n [24.3]\n [25. ]\n [ 7.4]\n [ 7. ]\n [22. ]\n [15.3]\n [ 8.4]\n [16.4]\n [18.1]\n [43.8]\n [ 8.5]\n [18.6]\n [21.1]\n [50. ]\n [11.8]\n [17.4]\n [33.3]\n [14.8]\n [ 8.8]\n [26.6]\n [16.8]\n [30.1]\n [23.7]\n [50. ]\n [19.5]\n [16.1]\n [24.1]\n [20.4]\n [36.4]\n [41.3]\n [21.7]\n [21.7]\n [14. ]\n [21.7]\n [20.4]\n [20. ]\n [34.7]\n [24.5]\n [11.7]\n [14.3]\n [13.1]\n [17.4]\n [20.1]\n [19.5]\n [21. ]\n [30.1]\n [18.4]\n [34.6]\n [20.1]\n [43.5]\n [21.6]\n [18.3]\n [21.4]\n [18.9]\n [13.4]\n [30.8]\n [25. ]\n [25.2]\n [ 8.8]\n [31.1]\n [13.4]\n [48.3]\n [17.8]\n [ 5.6]\n [12.7]\n [16.1]\n [20.9]\n [19.9]\n [13.9]\n [22.6]\n [21.2]\n [21.2]\n [22.9]]\n\nPrint predicted values :\n[[ 9.186178 ]\n [18.921011 ]\n [23.104095 ]\n [27.853848 ]\n [21.293787 ]\n [23.486994 ]\n [23.447182 ]\n [22.228256 ]\n [20.14977 ]\n [ 7.9539094]\n [19.227303 ]\n [19.702423 ]\n [36.68626 ]\n [ 9.793034 ]\n [14.127254 ]\n [16.545424 ]\n [14.034699 ]\n [20.720425 ]\n [32.104176 ]\n [21.77384 ]\n [34.627274 ]\n [28.202139 ]\n [21.989223 ]\n [15.228736 ]\n [18.10283 ]\n [ 8.121423 ]\n [35.687187 ]\n [17.720476 ]\n [13.607772 ]\n [21.194271 ]\n [26.413193 ]\n [ 8.010059 ]\n [ 8.067318 ]\n [19.416222 ]\n [20.101469 ]\n [12.134657 ]\n [19.032537 ]\n [17.056435 ]\n [36.79366 ]\n [15.281774 ]\n [17.207537 ]\n [21.222105 ]\n [37.15053 ]\n [ 9.437347 ]\n [15.504986 ]\n [35.0353 ]\n [17.14711 ]\n [ 8.0007515]\n [24.181913 ]\n [20.708002 ]\n [34.360405 ]\n [10.794023 ]\n [41.080563 ]\n [16.591858 ]\n [21.85249 ]\n [22.844639 ]\n [20.627293 ]\n [33.5397 ]\n [32.679768 ]\n [23.271357 ]\n [21.060022 ]\n [12.197516 ]\n [20.517065 ]\n [20.704296 ]\n [22.068974 ]\n [30.24716 ]\n [28.273783 ]\n [15.8952675]\n [16.967617 ]\n [19.473757 ]\n [22.428452 ]\n [19.028177 ]\n [17.69126 ]\n [21.421364 ]\n [27.57344 ]\n [15.345668 ]\n [34.05561 ]\n [21.708036 ]\n [40.253757 ]\n [23.906248 ]\n [19.205004 ]\n [20.131704 ]\n [21.858553 ]\n [11.084193 ]\n [30.43242 ]\n [25.313902 ]\n [25.774843 ]\n [ 8.105241 ]\n [30.598467 ]\n [10.6537695]\n [40.38739 ]\n [21.698156 ]\n [ 8.771746 ]\n [12.518002 ]\n [17.034147 ]\n [19.007662 ]\n [19.263405 ]\n [14.782638 ]\n [25.136396 ]\n [20.85413 ]\n [21.349863 ]\n [20.814816 ]]\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cbf5ecd6659a240c9cd2ddcbfb012eb580321788
10,046
ipynb
Jupyter Notebook
Keras/.ipynb_checkpoints/keras_11-checkpoint.ipynb
mlwkshops/ML101
79eb48a4553fc6ac5efd29064946db65411e701e
[ "MIT" ]
null
null
null
Keras/.ipynb_checkpoints/keras_11-checkpoint.ipynb
mlwkshops/ML101
79eb48a4553fc6ac5efd29064946db65411e701e
[ "MIT" ]
null
null
null
Keras/.ipynb_checkpoints/keras_11-checkpoint.ipynb
mlwkshops/ML101
79eb48a4553fc6ac5efd29064946db65411e701e
[ "MIT" ]
null
null
null
49.487685
1,080
0.607107
[ [ [ "### Previous: <a href = \"keras_10.ipynb\">1.10 Activation function </a>", "_____no_output_____" ], [ "# <center> Keras </center>\n## <center>1.11 Units</center>", "_____no_output_____" ], [ "# Explanation", "_____no_output_____" ], [ "# Units\n\nunits: Positive integer, dimensionality of the output space.\n\nThe amount of \"neurons\", or \"cells\", or whatever the layer has inside it.\n\nThe \"units\" of each layer will define the output shape (the shape of the tensor that is produced by the layer and that will be the input of the next layer).\n", "_____no_output_____" ] ], [ [ "#previously done\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense, Dropout\nfrom keras.optimizers import SGD, Adam, Adamax\nfrom keras.utils import np_utils\nfrom keras.utils.vis_utils import model_to_dot\nfrom keras.datasets import mnist\nfrom keras.datasets import mnist\nfrom keras.utils import np_utils\n%matplotlib inline\nimport math\nimport random\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom IPython.display import SVG\n#Load MNIST\n(X_train, y_train), (X_test, y_test) = mnist.load_data()\n#Reshape\nX_train = X_train.reshape(60000, 784)\nX_test = X_test.reshape(10000, 784)\nX_train = X_train.astype('float32')\nX_test = X_test.astype('float32')\nX_train /= 255\nX_test /= 255\nY_train = np_utils.to_categorical(y_train, 10)\nY_test = np_utils.to_categorical(y_test, 10)\n#Split\nX_train = X_train[0:10000]\nX_test = X_test[0:1000]\nY_train = Y_train[0:10000]\nY_test = Y_test[0:1000]\n\ndef plot_training_history(history):\n plt.plot(history.history['acc'])\n plt.plot(history.history['val_acc'])\n plt.title('model accuracy')\n plt.ylabel('accuracy')\n plt.xlabel('epoch')\n plt.legend(['train', 'test'], loc='upper left')\n plt.show()\n #loss\n plt.plot(history.history['loss'])\n plt.plot(history.history['val_loss'])\n plt.title('model loss')\n plt.ylabel('loss')\n plt.xlabel('epoch')\n plt.legend(['train', 'test'], loc='upper left')\n plt.show()\n", "Using TensorFlow backend.\n" ] ], [ [ "# Example", "_____no_output_____" ] ], [ [ "model = Sequential()\nmodel.add(Dense(input_dim=28*28, units=500, activation='sigmoid'))\nmodel.add(Dense(units=500, activation='sigmoid'))\nmodel.add(Dense(units=10, activation='softmax'))\nBATCH_SIZE=100\nNP_EPOCHS = 3\n\nmodel.compile(loss='mse',\n optimizer=Adam(),\n metrics=['accuracy'])", "_____no_output_____" ], [ "history = model.fit(X_train, Y_train,\n batch_size=BATCH_SIZE, epochs=NP_EPOCHS,\n verbose=1, validation_data=(X_test, Y_test))\nplot_training_history(history)", "_____no_output_____" ] ], [ [ "# Task\nPlay around with the unit-size. What do you notice?", "_____no_output_____" ], [ "### Next: <a href = \"keras_12.ipynb\">1.12 Dropout </a>", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ] ]
cbf5feda67a13aa64aaaf49e43b4ac169cbd4907
18,551
ipynb
Jupyter Notebook
notebooks/kubeflow_pipelines/cicd/labs/kfp_cicd_vertex.ipynb
p-s-vishnu/asl-ml-immersion
6964da87b946214068f32c8a776e80ab5eef620c
[ "Apache-2.0" ]
null
null
null
notebooks/kubeflow_pipelines/cicd/labs/kfp_cicd_vertex.ipynb
p-s-vishnu/asl-ml-immersion
6964da87b946214068f32c8a776e80ab5eef620c
[ "Apache-2.0" ]
null
null
null
notebooks/kubeflow_pipelines/cicd/labs/kfp_cicd_vertex.ipynb
p-s-vishnu/asl-ml-immersion
6964da87b946214068f32c8a776e80ab5eef620c
[ "Apache-2.0" ]
null
null
null
35.538314
346
0.604604
[ [ [ "# CI/CD for a Kubeflow pipeline on Vertex AI", "_____no_output_____" ], [ "**Learning Objectives:**\n1. Learn how to create a custom Cloud Build builder to pilote Vertex AI Pipelines\n1. Learn how to write a Cloud Build config file to build and push all the artifacts for a KFP\n1. Learn how to setup a Cloud Build GitHub trigger a new run of the Kubeflow PIpeline", "_____no_output_____" ], [ "In this lab you will walk through authoring of a **Cloud Build** CI/CD workflow that automatically builds, deploys, and runs a Kubeflow pipeline on Vertex AI. You will also integrate your workflow with **GitHub** by setting up a trigger that starts the workflow when a new tag is applied to the **GitHub** repo hosting the pipeline's code.", "_____no_output_____" ], [ "## Configuring environment settings", "_____no_output_____" ] ], [ [ "PROJECT_ID = !(gcloud config get-value project)\nPROJECT_ID = PROJECT_ID[0]\nREGION = \"us-central1\"\nARTIFACT_STORE = f\"gs://{PROJECT_ID}-kfp-artifact-store\"", "_____no_output_____" ] ], [ [ "Let us make sure that the artifact store exists:", "_____no_output_____" ] ], [ [ "!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}", "gs://qwiklabs-gcp-04-853e5675f5e8-kfp-artifact-store/\n" ] ], [ [ "## Creating the KFP CLI builder for Vertex AI", "_____no_output_____" ], [ "### Exercise\n\nIn the cell below, write a docker file that\n* Uses `gcr.io/deeplearning-platform-release/base-cpu` as base image\n* Install the python packages `kfp` with version `1.6.6 ` and `google-cloud-aiplatform` with version `1.3.0`\n* Starts `/bin/bash` as entrypoint", "_____no_output_____" ] ], [ [ "%%writefile kfp-cli/Dockerfile\n\n# TODO\nFROM gcr.io/deeplearning-platform-release/base-cpu\nRUN pip install kfp==1.6.6 google-cloud-aiplatform==1.3.0\nENTRYPOINT [\"/bin/bash\"]", "Overwriting kfp-cli/Dockerfile\n" ] ], [ [ "### Build the image and push it to your project's **Container Registry**.", "_____no_output_____" ] ], [ [ "KFP_CLI_IMAGE_NAME = \"kfp-cli-vertex\"\nKFP_CLI_IMAGE_URI = f\"gcr.io/{PROJECT_ID}/{KFP_CLI_IMAGE_NAME}:latest\"\nKFP_CLI_IMAGE_URI", "_____no_output_____" ] ], [ [ "### Exercise\n\nIn the cell below, use `gcloud builds` to build the `kfp-cli-vertex` Docker image and push it to the project gcr.io registry.", "_____no_output_____" ] ], [ [ "!{KFP_CLI_IMAGE_URI}", "/bin/bash: gcr.io/qwiklabs-gcp-04-853e5675f5e8/kfp-cli-vertex:latest: No such file or directory\n" ], [ "# COMPLETE THE COMMAND\n# https://cloud.google.com/sdk/gcloud/reference/builds/submit\n!gcloud builds submit --async --timeout 15m --tag {KFP_CLI_IMAGE_URI} kfp-cli", "Creating temporary tarball archive of 1 file(s) totalling 142 bytes before compression.\nUploading tarball of [kfp-cli] to [gs://qwiklabs-gcp-04-853e5675f5e8_cloudbuild/source/1646386617.46244-ac76cb5a80954e03b0d5dcd5dd1b78d4.tgz]\nCreated [https://cloudbuild.googleapis.com/v1/projects/qwiklabs-gcp-04-853e5675f5e8/locations/global/builds/64c36658-34b1-4956-9c5a-2a761e728236].\nLogs are available at [https://console.cloud.google.com/cloud-build/builds/64c36658-34b1-4956-9c5a-2a761e728236?project=1076138843678].\nID CREATE_TIME DURATION SOURCE IMAGES STATUS\n64c36658-34b1-4956-9c5a-2a761e728236 2022-03-04T09:36:58+00:00 - gs://qwiklabs-gcp-04-853e5675f5e8_cloudbuild/source/1646386617.46244-ac76cb5a80954e03b0d5dcd5dd1b78d4.tgz - QUEUED\n" ] ], [ [ "## Understanding the **Cloud Build** workflow.\n\n### Exercise\n\nIn the cell below, you'll complete the `cloudbuild_vertex.yaml` file describing the CI/CD workflow 
and prescribing how environment-specific settings are abstracted using **Cloud Build** variables.\n\nThe CI/CD workflow automates the steps you walked through manually during `lab-02_vertex`:\n1. Builds the trainer image\n1. Compiles the pipeline\n1. Uploads and runs the pipeline in the Vertex AI Pipelines environment\n1. Pushes the trainer image to your project's **Container Registry**\n \n\nThe **Cloud Build** workflow configuration uses both standard and custom [Cloud Build builders](https://cloud.google.com/cloud-build/docs/cloud-builders). The custom builder encapsulates the **KFP CLI**. ", "_____no_output_____" ] ], [ [ "%%writefile cloudbuild_vertex.yaml\n# Copyright 2021 Google LLC\n\n# Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this\n# file except in compliance with the License. You may obtain a copy of the License at\n\n# https://www.apache.org/licenses/LICENSE-2.0\n \n# Unless required by applicable law or agreed to in writing, software \n# distributed under the License is distributed on an \"AS IS\"\n# BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either \n# express or implied. See the License for the specific language governing \n# permissions and limitations under the License.\n\nsteps:\n# Build the trainer image\n- name: 'gcr.io/cloud-builders/docker'\n id: 'Build the trainer image'\n args: ['build', '-t', 'gcr.io/$PROJECT_ID/trainer_image_covertype_vertex:latest', '.']\n dir: $_PIPELINE_FOLDER/trainer_image_vertex\n\n# Push the trainer image, to make it available in the compile step\n- name: 'gcr.io/cloud-builders/docker'\n id: 'Push the trainer image'\n args: ['push', 'gcr.io/$PROJECT_ID/trainer_image_covertype_vertex:latest']\n dir: $_PIPELINE_FOLDER/trainer_image_vertex\n\n# Compile the pipeline\n- name: 'gcr.io/$PROJECT_ID/kfp-cli-vertex'\n id: 'Compile the pipeline'\n args:\n - '-c'\n - |\n dsl-compile-v2 --py pipeline.py --output covertype_kfp_pipeline.json\n env:\n - 'PIPELINE_ROOT=gs://$PROJECT_ID-kfp-artifact-store/pipeline'\n - 'PROJECT_ID=$PROJECT_ID'\n - 'REGION=$_REGION'\n - 'SERVING_CONTAINER_IMAGE_URI=us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-20:latest'\n - 'TRAINING_CONTAINER_IMAGE_URI=gcr.io/$PROJECT_ID/trainer_image_covertype_vertex:latest'\n - 'TRAINING_FILE_PATH=gs://$PROJECT_ID-kfp-artifact-store/data/training/dataset.csv'\n - 'VALIDATION_FILE_PATH=gs://$PROJECT_ID-kfp-artifact-store/data/validation/dataset.csv'\n dir: pipeline_vertex\n \n# Run the pipeline\n- name: 'gcr.io/$PROJECT_ID/kfp-cli-vertex'\n args:\n - '-c'\n - |\n python kfp-cli_vertex/run_pipeline.py # TODO\n \n# Push the images to Container Registry\nimages: ['gcr.io/$PROJECT_ID/trainer_image_covertype_vertex:latest']\n\n# This is required since the pipeline run overflows the default timeout\ntimeout: 10800s\n", "Overwriting cloudbuild_vertex.yaml\n" ] ], [ [ "## Manually triggering CI/CD runs\n\nYou can manually trigger **Cloud Build** runs using the [gcloud builds submit command](https://cloud.google.com/sdk/gcloud/reference/builds/submit).", "_____no_output_____" ] ], [ [ "SUBSTITUTIONS = f\"_REGION={REGION},_PIPELINE_FOLDER=./\"\nSUBSTITUTIONS", "_____no_output_____" ], [ "!gcloud builds submit . --config cloudbuild_vertex.yaml --substitutions {SUBSTITUTIONS} --async", "Creating temporary tarball archive of 19 file(s) totalling 74.4 KiB before compression.\nUploading tarball of [.] 
to [gs://qwiklabs-gcp-04-853e5675f5e8_cloudbuild/source/1646387873.746347-cd4b828943c44063a86b35bab0c9eac1.tgz]\nCreated [https://cloudbuild.googleapis.com/v1/projects/qwiklabs-gcp-04-853e5675f5e8/locations/global/builds/d3aba731-9e62-48ab-921b-474ee184f3cd].\nLogs are available at [https://console.cloud.google.com/cloud-build/builds/d3aba731-9e62-48ab-921b-474ee184f3cd?project=1076138843678].\nID CREATE_TIME DURATION SOURCE IMAGES STATUS\nd3aba731-9e62-48ab-921b-474ee184f3cd 2022-03-04T09:57:54+00:00 - gs://qwiklabs-gcp-04-853e5675f5e8_cloudbuild/source/1646387873.746347-cd4b828943c44063a86b35bab0c9eac1.tgz - QUEUED\n" ] ], [ [ "**Note:** If you experience issues with CloudBuild being able to access Vertex AI, you may need to run the following commands in **CloudShell**:\n\n```\nPROJECT_ID=$(gcloud config get-value project)\nPROJECT_NUMBER=$(gcloud projects list --filter=\"name=$PROJECT_ID\" --format=\"value(PROJECT_NUMBER)\")\ngcloud projects add-iam-policy-binding $PROJECT_ID \\\n --member=\"serviceAccount:[email protected]\" \\\n --role=\"roles/aiplatform.user\"\ngcloud iam service-accounts add-iam-policy-binding \\\n [email protected] \\\n --member=\"serviceAccount:[email protected]\" \\\n --role=\"roles/iam.serviceAccountUser\"\n```", "_____no_output_____" ], [ "## Setting up GitHub integration\n\n## Exercise\n\nIn this exercise you integrate your CI/CD workflow with **GitHub**, using [Cloud Build GitHub App](https://github.com/marketplace/google-cloud-build). \nYou will set up a trigger that starts the CI/CD workflow when a new tag is applied to the **GitHub** repo managing the pipeline source code. You will use a fork of this repo as your source GitHub repository.", "_____no_output_____" ], [ "### Step 1: Create a fork of this repo\n[Follow the GitHub documentation](https://help.github.com/en/github/getting-started-with-github/fork-a-repo) to fork [this repo](https://github.com/GoogleCloudPlatform/asl-ml-immersion)", "_____no_output_____" ], [ "### Step 2: Create a **Cloud Build** trigger\n\nConnect the fork you created in the previous step to your Google Cloud project and create a trigger following the steps in the [Creating GitHub app trigger](https://cloud.google.com/cloud-build/docs/create-github-app-triggers) article. Use the following values on the **Edit trigger** form:\n\n|Field|Value|\n|-----|-----|\n|Name|[YOUR TRIGGER NAME]|\n|Description|[YOUR TRIGGER DESCRIPTION]|\n|Event| Tag|\n|Source| [YOUR FORK]|\n|Tag (regex)|.\\*|\n|Build Configuration|Cloud Build configuration file (yaml or json)|\n|Cloud Build configuration file location| ./notebooks/kubeflow_pipelines/cicd/solutions/cloudbuild_vertex.yaml|\n\n\nUse the following values for the substitution variables:\n\n|Variable|Value|\n|--------|-----|\n|_REGION|us-central1|\n|_PIPELINE_FOLDER|notebooks/kubeflow_pipelines/cicd/solutions", "_____no_output_____" ], [ "### Step 3: Trigger the build\n\nTo start an automated build [create a new release of the repo in GitHub](https://help.github.com/en/github/administering-a-repository/creating-releases). Alternatively, you can start the build by applying a tag using `git`. 
\n```\ngit tag [TAG NAME]\ngit push origin --tags\n```\n", "_____no_output_____" ] ], [ [ "!git add ", "On branch master\nYour branch is ahead of 'origin/master' by 1 commit.\n (use \"git push\" to publish your local commits)\n\nChanges not staged for commit:\n (use \"git add/rm <file>...\" to update what will be committed)\n (use \"git checkout -- <file>...\" to discard changes in working directory)\n\n\t\u001b[31mmodified: cloudbuild_vertex.yaml\u001b[m\n\t\u001b[31mmodified: kfp-cli/Dockerfile\u001b[m\n\t\u001b[31mmodified: kfp_cicd_vertex.ipynb\u001b[m\n\t\u001b[31mmodified: ../solutions/kfp_cicd_vertex.ipynb\u001b[m\n\t\u001b[31mdeleted: ../../pipelines/labs/kfp_pipeline.ipynb\u001b[m\n\t\u001b[31mmodified: ../../pipelines/labs/kfp_pipeline_vertex.ipynb\u001b[m\n\t\u001b[31mmodified: ../../pipelines/labs/kfp_pipeline_vertex_prebuilt.ipynb\u001b[m\n\t\u001b[31mmodified: ../../pipelines/labs/pipeline_vertex/pipeline.py\u001b[m\n\t\u001b[31mmodified: ../../pipelines/labs/pipeline_vertex/training_lightweight_component.py\u001b[m\n\t\u001b[31mmodified: ../../pipelines/labs/pipeline_vertex/tuning_lightweight_component.py\u001b[m\n\t\u001b[31mmodified: ../../pipelines/solutions/kfp_pipeline_vertex_prebuilt.ipynb\u001b[m\n\t\u001b[31mmodified: ../../walkthrough/labs/kfp_walkthrough_vertex.ipynb\u001b[m\n\nUntracked files:\n (use \"git add <file>...\" to include in what will be committed)\n\n\t\u001b[31m../../pipelines/labs/covertype_kfp_pipeline.json\u001b[m\n\t\u001b[31m../../pipelines/labs/pipeline_vertex/pipeline_prebuilt.py\u001b[m\n\t\u001b[31m../../pipelines/solutions/covertype_kfp_pipeline.json\u001b[m\n\t\u001b[31m../../walkthrough/labs/config.yaml\u001b[m\n\t\u001b[31m../../walkthrough/labs/training_app/\u001b[m\n\nno changes added to commit (use \"git add\" and/or \"git commit -a\")\n" ] ], [ [ "After running the command above, a build should have been automatically triggered, which you should able to inspect [here](https://console.cloud.google.com/cloud-build/builds).", "_____no_output_____" ], [ "Copyright 2021 Google LLC\n\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n\n https://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
cbf5ff6b2b349215b03cbb404749ca07721cf1d2
159,805
ipynb
Jupyter Notebook
courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb
Glairly/introduction_to_tensorflow
aa0a44d9c428a6eb86d1f79d73f54c0861b6358d
[ "Apache-2.0" ]
2
2022-01-06T11:52:57.000Z
2022-01-09T01:53:56.000Z
courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb
Glairly/introduction_to_tensorflow
aa0a44d9c428a6eb86d1f79d73f54c0861b6358d
[ "Apache-2.0" ]
null
null
null
courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb
Glairly/introduction_to_tensorflow
aa0a44d9c428a6eb86d1f79d73f54c0861b6358d
[ "Apache-2.0" ]
null
null
null
30.660975
522
0.452301
[ [ [ "# Word2Vec\n\n**Learning Objectives**\n\n1. Compile all steps into one function\n2. Prepare training data for Word2Vec\n3. Model and Training\n4. Embedding lookup and analysis\n\n\n\n\n## Introduction \nWord2Vec is not a singular algorithm, rather, it is a family of model architectures and optimizations that can be used to learn word embeddings from large datasets. Embeddings learned through Word2Vec have proven to be successful on a variety of downstream natural language processing tasks.\n\nNote: This notebook is based on [Efficient Estimation of Word Representations in Vector Space](https://arxiv.org/pdf/1301.3781.pdf) and\n[Distributed\nRepresentations of Words and Phrases and their Compositionality](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf). It is not an exact implementation of the papers. Rather, it is intended to illustrate the key ideas.\n\nThese papers proposed two methods for learning representations of words: \n\n* **Continuous Bag-of-Words Model** which predicts the middle word based on surrounding context words. The context consists of a few words before and after the current (middle) word. This architecture is called a bag-of-words model as the order of words in the context is not important.\n* **Continuous Skip-gram Model** which predict words within a certain range before and after the current word in the same sentence. A worked example of this is given below.\n\n\nYou'll use the skip-gram approach in this notebook. First, you'll explore skip-grams and other concepts using a single sentence for illustration. Next, you'll train your own Word2Vec model on a small dataset. This notebook also contains code to export the trained embeddings and visualize them in the [TensorFlow Embedding Projector](http://projector.tensorflow.org/).\n\n\nEach learning objective will correspond to a __#TODO__ in the [student lab notebook](../labs/word2vec.ipynb) -- try to complete that notebook first before reviewing this solution notebook.", "_____no_output_____" ], [ "## Skip-gram and Negative Sampling ", "_____no_output_____" ], [ "While a bag-of-words model predicts a word given the neighboring context, a skip-gram model predicts the context (or neighbors) of a word, given the word itself. The model is trained on skip-grams, which are n-grams that allow tokens to be skipped (see the diagram below for an example). The context of a word can be represented through a set of skip-gram pairs of `(target_word, context_word)` where `context_word` appears in the neighboring context of `target_word`. ", "_____no_output_____" ], [ "Consider the following sentence of 8 words.\n> The wide road shimmered in the hot sun. \n\nThe context words for each of the 8 words of this sentence are defined by a window size. The window size determines the span of words on either side of a `target_word` that can be considered `context word`. Take a look at this table of skip-grams for target words based on different window sizes.", "_____no_output_____" ], [ "Note: For this tutorial, a window size of *n* implies n words on each side with a total window span of 2*n+1 words across a word.", "_____no_output_____" ], [ "![word2vec_skipgrams](assets/word2vec_skipgram.png)", "_____no_output_____" ], [ "The training objective of the skip-gram model is to maximize the probability of predicting context words given the target word. For a sequence of words *w<sub>1</sub>, w<sub>2</sub>, ... 
w<sub>T</sub>*, the objective can be written as the average log probability", "_____no_output_____" ], [ "![word2vec_skipgram_objective](assets/word2vec_skipgram_objective.png)", "_____no_output_____" ], [ "where `c` is the size of the training context. The basic skip-gram formulation defines this probability using the softmax function.", "_____no_output_____" ], [ "![word2vec_full_softmax](assets/word2vec_full_softmax.png)", "_____no_output_____" ], [ "where *v* and *v<sup>'</sup>* are target and context vector representations of words and *W* is the vocabulary size. ", "_____no_output_____" ], [ "Computing the denominator of this formulation involves performing a full softmax over the entire vocabulary, which often contains a large number (10<sup>5</sup>-10<sup>7</sup>) of terms. ", "_____no_output_____" ], [ "The [Noise Contrastive Estimation](https://www.tensorflow.org/api_docs/python/tf/nn/nce_loss) loss function is an efficient approximation for a full softmax. With an objective to learn word embeddings instead of modelling the word distribution, the NCE loss can be [simplified](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) to use negative sampling. ", "_____no_output_____" ], [ "The simplified negative sampling objective for a target word is to distinguish the context word from *num_ns* negative samples drawn from a noise distribution *P<sub>n</sub>(w)* of words. More precisely, an efficient approximation of the full softmax over the vocabulary is, for a skip-gram pair, to pose the loss for a target word as a classification problem between the context word and *num_ns* negative samples. ", "_____no_output_____" ], [ "A negative sample is defined as a (target_word, context_word) pair such that the context_word does not appear in the `window_size` neighborhood of the target_word. For the example sentence, these are a few potential negative samples (when `window_size` is 2).\n\n```\n(hot, shimmered)\n(wide, hot)\n(wide, sun)\n```", "_____no_output_____" ], [ "In the next section, you'll generate skip-grams and negative samples for a single sentence. 
You'll also learn about subsampling techniques and train a classification model for positive and negative training examples later in the tutorial.", "_____no_output_____" ], [ "## Setup", "_____no_output_____" ] ], [ [ "# Use the chown command to change the ownership of repository to user.\n!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst", "_____no_output_____" ], [ "!pip install -q tqdm", "_____no_output_____" ], [ "# You can use any Python source file as a module by executing an import statement in some other Python source file.\n# The import statement combines two operations; it searches for the named module, then it binds the\n# results of that search to a name in the local scope.\nimport io\nimport itertools\nimport numpy as np\nimport os\nimport re\nimport string\nimport tensorflow as tf\nimport tqdm\n\nfrom tensorflow.keras import Model, Sequential\nfrom tensorflow.keras.layers import Activation, Dense, Dot, Embedding, Flatten, GlobalAveragePooling1D, Reshape\nfrom tensorflow.keras.layers.experimental.preprocessing import TextVectorization", "_____no_output_____" ] ], [ [ "Please check your tensorflow version using the cell below.", "_____no_output_____" ] ], [ [ "# Show the currently installed version of TensorFlow\nprint(\"TensorFlow version: \",tf.version.VERSION)", "TensorFlow version: 2.6.0\n" ], [ "SEED = 42 \nAUTOTUNE = tf.data.experimental.AUTOTUNE", "_____no_output_____" ] ], [ [ "### Vectorize an example sentence", "_____no_output_____" ], [ "Consider the following sentence: \n`The wide road shimmered in the hot sun.`\n\nTokenize the sentence:", "_____no_output_____" ] ], [ [ "sentence = \"The wide road shimmered in the hot sun\"\ntokens = list(sentence.lower().split())\nprint(len(tokens))", "8\n" ] ], [ [ "Create a vocabulary to save mappings from tokens to integer indices.", "_____no_output_____" ] ], [ [ "vocab, index = {}, 1 # start indexing from 1\nvocab['<pad>'] = 0 # add a padding token \nfor token in tokens:\n if token not in vocab: \n vocab[token] = index\n index += 1\nvocab_size = len(vocab)\nprint(vocab)", "{'<pad>': 0, 'the': 1, 'wide': 2, 'road': 3, 'shimmered': 4, 'in': 5, 'hot': 6, 'sun': 7}\n" ] ], [ [ "Create an inverse vocabulary to save mappings from integer indices to tokens.", "_____no_output_____" ] ], [ [ "inverse_vocab = {index: token for token, index in vocab.items()}\nprint(inverse_vocab)", "{0: '<pad>', 1: 'the', 2: 'wide', 3: 'road', 4: 'shimmered', 5: 'in', 6: 'hot', 7: 'sun'}\n" ] ], [ [ "Vectorize your sentence.\n", "_____no_output_____" ] ], [ [ "example_sequence = [vocab[word] for word in tokens]\nprint(example_sequence)", "[1, 2, 3, 4, 5, 1, 6, 7]\n" ] ], [ [ "### Generate skip-grams from one sentence", "_____no_output_____" ], [ "The `tf.keras.preprocessing.sequence` module provides useful functions that simplify data preparation for Word2Vec. You can use the `tf.keras.preprocessing.sequence.skipgrams` to generate skip-gram pairs from the `example_sequence` with a given `window_size` from tokens in the range `[0, vocab_size)`.\n\nNote: `negative_samples` is set to `0` here as batching negative samples generated by this function requires a bit of code. 
You will use another function to perform negative sampling in the next section.\n", "_____no_output_____" ] ], [ [ "window_size = 2\npositive_skip_grams, _ = tf.keras.preprocessing.sequence.skipgrams(\n example_sequence, \n vocabulary_size=vocab_size,\n window_size=window_size,\n negative_samples=0)\nprint(len(positive_skip_grams))", "26\n" ] ], [ [ "Take a look at a few positive skip-grams.", "_____no_output_____" ] ], [ [ "for target, context in positive_skip_grams[:5]:\n print(f\"({target}, {context}): ({inverse_vocab[target]}, {inverse_vocab[context]})\")", "(1, 3): (the, road)\n(4, 1): (shimmered, the)\n(5, 6): (in, hot)\n(4, 2): (shimmered, wide)\n(3, 2): (road, wide)\n" ] ], [ [ "### Negative sampling for one skip-gram ", "_____no_output_____" ], [ "The `skipgrams` function returns all positive skip-gram pairs by sliding over a given window span. To produce additional skip-gram pairs that would serve as negative samples for training, you need to sample random words from the vocabulary. Use the `tf.random.log_uniform_candidate_sampler` function to sample `num_ns` negative samples for a given target word in a window. You can call the function on one skip-gram's target word and pass the context word as the true class to exclude it from being sampled.\n", "_____no_output_____" ], [ "Key point: *num_ns* (number of negative samples per positive context word) between [5, 20] is [shown to work](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) best for smaller datasets, while *num_ns* between [2,5] suffices for larger datasets. ", "_____no_output_____" ] ], [ [ "# Get target and context words for one positive skip-gram.\ntarget_word, context_word = positive_skip_grams[0]\n\n# Set the number of negative samples per positive context. \nnum_ns = 4\n\ncontext_class = tf.reshape(tf.constant(context_word, dtype=\"int64\"), (1, 1))\nnegative_sampling_candidates, _, _ = tf.random.log_uniform_candidate_sampler(\n true_classes=context_class, # class that should be sampled as 'positive'\n num_true=1, # each positive skip-gram has 1 positive context class\n num_sampled=num_ns, # number of negative context words to sample\n unique=True, # all the negative samples should be unique\n range_max=vocab_size, # pick index of the samples from [0, vocab_size]\n seed=SEED, # seed for reproducibility\n name=\"negative_sampling\" # name of this operation\n)\nprint(negative_sampling_candidates)\nprint([inverse_vocab[index.numpy()] for index in negative_sampling_candidates])", "tf.Tensor([2 1 4 3], shape=(4,), dtype=int64)\n['wide', 'the', 'shimmered', 'road']\n" ] ], [ [ "### Construct one training example", "_____no_output_____" ], [ "For a given positive `(target_word, context_word)` skip-gram, you now also have `num_ns` negative sampled context words that do not appear in the window size neighborhood of `target_word`. Batch the `1` positive `context_word` and `num_ns` negative context words into one tensor. 
This produces a set of positive skip-grams (labelled as `1`) and negative samples (labelled as `0`) for each target word.", "_____no_output_____" ] ], [ [ "# Add a dimension so you can use concatenation (on the next step).\nnegative_sampling_candidates = tf.expand_dims(negative_sampling_candidates, 1)\n\n# Concat positive context word with negative sampled words.\ncontext = tf.concat([context_class, negative_sampling_candidates], 0)\n\n# Label first context word as 1 (positive) followed by num_ns 0s (negative).\nlabel = tf.constant([1] + [0]*num_ns, dtype=\"int64\") \n\n# Reshape target to shape (1,) and context and label to (num_ns+1,).\ntarget = tf.squeeze(target_word)\ncontext = tf.squeeze(context)\nlabel = tf.squeeze(label)", "_____no_output_____" ] ], [ [ "Take a look at the context and the corresponding labels for the target word from the skip-gram example above. ", "_____no_output_____" ] ], [ [ "print(f\"target_index : {target}\")\nprint(f\"target_word : {inverse_vocab[target_word]}\")\nprint(f\"context_indices : {context}\")\nprint(f\"context_words : {[inverse_vocab[c.numpy()] for c in context]}\")\nprint(f\"label : {label}\")", "target_index : 1\ntarget_word : the\ncontext_indices : [3 2 1 4 3]\ncontext_words : ['road', 'wide', 'the', 'shimmered', 'road']\nlabel : [1 0 0 0 0]\n" ] ], [ [ "A tuple of `(target, context, label)` tensors constitutes one training example for training your skip-gram negative sampling Word2Vec model. Notice that the target is of shape `(1,)` while the context and label are of shape `(1+num_ns,)`", "_____no_output_____" ] ], [ [ "print(f\"target :\", target)\nprint(f\"context :\", context )\nprint(f\"label :\", label )", "target : tf.Tensor(1, shape=(), dtype=int32)\ncontext : tf.Tensor([3 2 1 4 3], shape=(5,), dtype=int64)\nlabel : tf.Tensor([1 0 0 0 0], shape=(5,), dtype=int64)\n" ] ], [ [ "### Summary", "_____no_output_____" ], [ "This picture summarizes the procedure of generating a training example from a sentence. \n", "_____no_output_____" ], [ "![word2vec_negative_sampling](assets/word2vec_negative_sampling.png)", "_____no_output_____" ], [ "## Lab Task 1: Compile all steps into one function\n", "_____no_output_____" ], [ "### Skip-gram Sampling table ", "_____no_output_____" ], [ "A large dataset means a larger vocabulary with a higher number of frequent words such as stopwords. Training examples obtained from sampling commonly occurring words (such as `the`, `is`, `on`) don't add much useful information for the model to learn from. [Mikolov et al.](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) suggest subsampling of frequent words as a helpful practice to improve embedding quality. ", "_____no_output_____" ], [ "The `tf.keras.preprocessing.sequence.skipgrams` function accepts a sampling table argument to encode probabilities of sampling any token. You can use `tf.keras.preprocessing.sequence.make_sampling_table` to generate a word-frequency rank based probabilistic sampling table and pass it to the `skipgrams` function. Take a look at the sampling probabilities for a `vocab_size` of 10.", "_____no_output_____" ] ], [ [ "sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(size=10)\nprint(sampling_table)", "[0.00315225 0.00315225 0.00547597 0.00741556 0.00912817 0.01068435\n 0.01212381 0.01347162 0.01474487 0.0159558 ]\n" ] ], [ [ "`sampling_table[i]` denotes the probability of sampling the i-th most common word in a dataset. 
The function assumes a [Zipf's distribution](https://en.wikipedia.org/wiki/Zipf%27s_law) of the word frequencies for sampling.", "_____no_output_____" ], [ "Key point: The `tf.random.log_uniform_candidate_sampler` already assumes that the vocabulary frequency follows a log-uniform (Zipf's) distribution. Using these distribution weighted sampling also helps approximate the Noise Contrastive Estimation (NCE) loss with simpler loss functions for training a negative sampling objective.", "_____no_output_____" ], [ "### Generate training data", "_____no_output_____" ], [ "Compile all the steps described above into a function that can be called on a list of vectorized sentences obtained from any text dataset. Notice that the sampling table is built before sampling skip-gram word pairs. You will use this function in the later sections.", "_____no_output_____" ] ], [ [ "# Generates skip-gram pairs with negative sampling for a list of sequences\n# (int-encoded sentences) based on window size, number of negative samples\n# and vocabulary size.\ndef generate_training_data(sequences, window_size, num_ns, vocab_size, seed):\n # Elements of each training example are appended to these lists.\n targets, contexts, labels = [], [], []\n\n # Build the sampling table for vocab_size tokens.\n # TODO 1a\n sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(vocab_size)\n\n # Iterate over all sequences (sentences) in dataset.\n for sequence in tqdm.tqdm(sequences):\n\n # Generate positive skip-gram pairs for a sequence (sentence).\n positive_skip_grams, _ = tf.keras.preprocessing.sequence.skipgrams(\n sequence, \n vocabulary_size=vocab_size,\n sampling_table=sampling_table,\n window_size=window_size,\n negative_samples=0)\n \n # Iterate over each positive skip-gram pair to produce training examples \n # with positive context word and negative samples.\n # TODO 1b\n for target_word, context_word in positive_skip_grams:\n context_class = tf.expand_dims(\n tf.constant([context_word], dtype=\"int64\"), 1)\n negative_sampling_candidates, _, _ = tf.random.log_uniform_candidate_sampler(\n true_classes=context_class,\n num_true=1, \n num_sampled=num_ns, \n unique=True, \n range_max=vocab_size, \n seed=SEED, \n name=\"negative_sampling\")\n \n # Build context and label vectors (for one target word)\n negative_sampling_candidates = tf.expand_dims(\n negative_sampling_candidates, 1)\n\n context = tf.concat([context_class, negative_sampling_candidates], 0)\n label = tf.constant([1] + [0]*num_ns, dtype=\"int64\")\n\n # Append each element from the training example to global lists.\n targets.append(target_word)\n contexts.append(context)\n labels.append(label)\n\n return targets, contexts, labels", "_____no_output_____" ] ], [ [ "## Lab Task 2: Prepare training data for Word2Vec", "_____no_output_____" ], [ "With an understanding of how to work with one sentence for a skip-gram negative sampling based Word2Vec model, you can proceed to generate training examples from a larger list of sentences!", "_____no_output_____" ], [ "### Download text corpus\n", "_____no_output_____" ], [ "You will use a text file of Shakespeare's writing for this tutorial. 
Change the following line to run this code on your own data.", "_____no_output_____" ] ], [ [ "path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')", "Downloading data from https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt\n" ] ], [ [ "Read text from the file and take a look at the first few lines. ", "_____no_output_____" ] ], [ [ "with open(path_to_file) as f: \n lines = f.read().splitlines()\nfor line in lines[:20]:\n print(line)", "First Citizen:\nBefore we proceed any further, hear me speak.\n\nAll:\nSpeak, speak.\n\nFirst Citizen:\nYou are all resolved rather to die than to famish?\n\nAll:\nResolved. resolved.\n\nFirst Citizen:\nFirst, you know Caius Marcius is chief enemy to the people.\n\nAll:\nWe know't, we know't.\n\nFirst Citizen:\nLet us kill him, and we'll have corn at our own price.\n" ] ], [ [ "Use the non empty lines to construct a `tf.data.TextLineDataset` object for next steps.", "_____no_output_____" ] ], [ [ "# TODO 2a\ntext_ds = tf.data.TextLineDataset(path_to_file).filter(lambda x: tf.cast(tf.strings.length(x), bool))", "_____no_output_____" ] ], [ [ "### Vectorize sentences from the corpus", "_____no_output_____" ], [ "You can use the `TextVectorization` layer to vectorize sentences from the corpus. Learn more about using this layer in this [Text Classification](https://www.tensorflow.org/tutorials/keras/text_classification) tutorial. Notice from the first few sentences above that the text needs to be in one case and punctuation needs to be removed. To do this, define a `custom_standardization function` that can be used in the TextVectorization layer.", "_____no_output_____" ] ], [ [ "# We create a custom standardization function to lowercase the text and \n# remove punctuation.\ndef custom_standardization(input_data):\n lowercase = tf.strings.lower(input_data)\n return tf.strings.regex_replace(lowercase,\n '[%s]' % re.escape(string.punctuation), '')\n\n# Define the vocabulary size and number of words in a sequence.\nvocab_size = 4096\nsequence_length = 10\n\n# Use the text vectorization layer to normalize, split, and map strings to\n# integers. Set output_sequence_length length to pad all samples to same length.\nvectorize_layer = TextVectorization(\n standardize=custom_standardization,\n max_tokens=vocab_size,\n output_mode='int',\n output_sequence_length=sequence_length)", "_____no_output_____" ] ], [ [ "Call `adapt` on the text dataset to create vocabulary.\n", "_____no_output_____" ] ], [ [ "vectorize_layer.adapt(text_ds.batch(1024))", "_____no_output_____" ] ], [ [ "Once the state of the layer has been adapted to represent the text corpus, the vocabulary can be accessed with `get_vocabulary()`. This function returns a list of all vocabulary tokens sorted (descending) by their frequency. 
", "_____no_output_____" ] ], [ [ "# Save the created vocabulary for reference.\ninverse_vocab = vectorize_layer.get_vocabulary()\nprint(inverse_vocab[:20])", "['', '[UNK]', 'the', 'and', 'to', 'i', 'of', 'you', 'my', 'a', 'that', 'in', 'is', 'not', 'for', 'with', 'me', 'it', 'be', 'your']\n" ] ], [ [ "The vectorize_layer can now be used to generate vectors for each element in the `text_ds`.", "_____no_output_____" ] ], [ [ "def vectorize_text(text):\n text = tf.expand_dims(text, -1)\n return tf.squeeze(vectorize_layer(text))\n\n# Vectorize the data in text_ds.\ntext_vector_ds = text_ds.batch(1024).prefetch(AUTOTUNE).map(vectorize_layer).unbatch()", "_____no_output_____" ] ], [ [ "### Obtain sequences from the dataset", "_____no_output_____" ], [ "You now have a `tf.data.Dataset` of integer encoded sentences. To prepare the dataset for training a Word2Vec model, flatten the dataset into a list of sentence vector sequences. This step is required as you would iterate over each sentence in the dataset to produce positive and negative examples. \n\nNote: Since the `generate_training_data()` defined earlier uses non-TF python/numpy functions, you could also use a `tf.py_function` or `tf.numpy_function` with `tf.data.Dataset.map()`.", "_____no_output_____" ] ], [ [ "sequences = list(text_vector_ds.as_numpy_iterator())\nprint(len(sequences))", "32777\n" ] ], [ [ "Take a look at few examples from `sequences`.\n", "_____no_output_____" ] ], [ [ "for seq in sequences[:5]:\n print(f\"{seq} => {[inverse_vocab[i] for i in seq]}\")", "[ 89 270 0 0 0 0 0 0 0 0] => ['first', 'citizen', '', '', '', '', '', '', '', '']\n[138 36 982 144 673 125 16 106 0 0] => ['before', 'we', 'proceed', 'any', 'further', 'hear', 'me', 'speak', '', '']\n[34 0 0 0 0 0 0 0 0 0] => ['all', '', '', '', '', '', '', '', '', '']\n[106 106 0 0 0 0 0 0 0 0] => ['speak', 'speak', '', '', '', '', '', '', '', '']\n[ 89 270 0 0 0 0 0 0 0 0] => ['first', 'citizen', '', '', '', '', '', '', '', '']\n" ] ], [ [ "### Generate training examples from sequences", "_____no_output_____" ], [ "`sequences` is now a list of int encoded sentences. Just call the `generate_training_data()` function defined earlier to generate training examples for the Word2Vec model. To recap, the function iterates over each word from each sequence to collect positive and negative context words. Length of target, contexts and labels should be same, representing the total number of training examples.", "_____no_output_____" ] ], [ [ "targets, contexts, labels = generate_training_data(\n sequences=sequences, \n window_size=2, \n num_ns=4, \n vocab_size=vocab_size, \n seed=SEED)\nprint(len(targets), len(contexts), len(labels))", "\r 0%| | 0/32777 [00:00<?, ?it/s]" ] ], [ [ "### Configure the dataset for performance", "_____no_output_____" ], [ "To perform efficient batching for the potentially large number of training examples, use the `tf.data.Dataset` API. 
After this step, you would have a `tf.data.Dataset` object of `(target_word, context_word), (label)` elements to train your Word2Vec model!", "_____no_output_____" ] ], [ [ "BATCH_SIZE = 1024\nBUFFER_SIZE = 10000\ndataset = tf.data.Dataset.from_tensor_slices(((targets, contexts), labels))\ndataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)\nprint(dataset)", "<BatchDataset shapes: (((1024,), (1024, 5, 1)), (1024, 5)), types: ((tf.int32, tf.int64), tf.int64)>\n" ] ], [ [ "Add `cache()` and `prefetch()` to improve performance.", "_____no_output_____" ] ], [ [ "dataset = dataset.cache().prefetch(buffer_size=AUTOTUNE)\nprint(dataset)", "<PrefetchDataset shapes: (((1024,), (1024, 5, 1)), (1024, 5)), types: ((tf.int32, tf.int64), tf.int64)>\n" ] ], [ [ "## Lab Task 3: Model and Training", "_____no_output_____" ], [ "The Word2Vec model can be implemented as a classifier to distinguish between true context words from skip-grams and false context words obtained through negative sampling. You can perform a dot product between the embeddings of target and context words to obtain predictions for labels and compute loss against true labels in the dataset.", "_____no_output_____" ], [ "### Subclassed Word2Vec Model", "_____no_output_____" ], [ "Use the [Keras Subclassing API](https://www.tensorflow.org/guide/keras/custom_layers_and_models) to define your Word2Vec model with the following layers:\n\n\n* `target_embedding`: A `tf.keras.layers.Embedding` layer which looks up the embedding of a word when it appears as a target word. The number of parameters in this layer is `(vocab_size * embedding_dim)`.\n* `context_embedding`: Another `tf.keras.layers.Embedding` layer which looks up the embedding of a word when it appears as a context word. The number of parameters in this layer is the same as in `target_embedding`, i.e. `(vocab_size * embedding_dim)`.\n* `dots`: A `tf.keras.layers.Dot` layer that computes the dot product of target and context embeddings from a training pair.\n* `flatten`: A `tf.keras.layers.Flatten` layer to flatten the results of the `dots` layer into logits.\n\nWith the subclassed model, you can define the `call()` function that accepts `(target, context)` pairs which can then be passed into their corresponding embedding layer. Reshape the `context_embedding` to perform a dot product with `target_embedding` and return the flattened result.", "_____no_output_____" ], [ "Key point: The `target_embedding` and `context_embedding` layers can be shared as well. You could also use a concatenation of both embeddings as the final Word2Vec embedding.", "_____no_output_____" ] ], [ [ "class Word2Vec(Model):\n def __init__(self, vocab_size, embedding_dim):\n super(Word2Vec, self).__init__()\n self.target_embedding = Embedding(vocab_size, \n embedding_dim,\n input_length=1,\n name=\"w2v_embedding\", )\n self.context_embedding = Embedding(vocab_size, \n embedding_dim, \n input_length=num_ns+1)\n self.dots = Dot(axes=(3,2))\n self.flatten = Flatten()\n\n def call(self, pair):\n target, context = pair\n we = self.target_embedding(target)\n ce = self.context_embedding(context)\n dots = self.dots([ce, we])\n return self.flatten(dots)", "_____no_output_____" ] ], [ [ "### Define loss function and compile model\n", "_____no_output_____" ], [ "For simplicity, you can use `tf.keras.losses.CategoricalCrossentropy` as an alternative to the negative sampling loss. 
If you would like to write your own custom loss function, you can also do so as follows:\n\n``` python\ndef custom_loss(x_logit, y_true):\n return tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=y_true)\n```\n\nIt's time to build your model! Instantiate your Word2Vec class with an embedding dimension of 128 (you could experiment with different values). Compile the model with the `tf.keras.optimizers.Adam` optimizer. ", "_____no_output_____" ] ], [ [ "# TODO 3a\nembedding_dim = 128\nword2vec = Word2Vec(vocab_size, embedding_dim)\nword2vec.compile(optimizer='adam',\n loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),\n metrics=['accuracy'])", "_____no_output_____" ] ], [ [ "Also define a callback to log training statistics for tensorboard.", "_____no_output_____" ] ], [ [ "tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=\"logs\")", "_____no_output_____" ] ], [ [ "Train the model with `dataset` prepared above for some number of epochs.", "_____no_output_____" ] ], [ [ "word2vec.fit(dataset, epochs=20, callbacks=[tensorboard_callback])", "Epoch 1/20\n" ] ], [ [ "Tensorboard now shows the Word2Vec model's accuracy and loss.", "_____no_output_____" ] ], [ [ "!tensorboard --bind_all --port=8081 --logdir logs", "_____no_output_____" ] ], [ [ "Run the following command in **Cloud Shell:**\n\n<code>gcloud beta compute ssh --zone &lt;instance-zone&gt; &lt;notebook-instance-name&gt; --project &lt;project-id&gt; -- -L 8081:localhost:8081</code> \n\nMake sure to replace &lt;instance-zone&gt;, &lt;notebook-instance-name&gt; and &lt;project-id&gt;.\n\nIn Cloud Shell, click *Web Preview* > *Change Port* and insert port number *8081*. Click *Change and Preview* to open the TensorBoard.", "_____no_output_____" ], [ "![embeddings_classifier_accuracy.png](assets/embeddings_classifier_accuracy.png)", "_____no_output_____" ], [ "**To quit the TensorBoard, click Kernel > Interrupt kernel**.", "_____no_output_____" ], [ "## Lab Task 4: Embedding lookup and analysis", "_____no_output_____" ], [ "Obtain the weights from the model using `get_layer()` and `get_weights()`. The `get_vocabulary()` function provides the vocabulary to build a metadata file with one token per line. ", "_____no_output_____" ] ], [ [ "# TODO 4a\nweights = word2vec.get_layer('w2v_embedding').get_weights()[0]\nvocab = vectorize_layer.get_vocabulary()", "_____no_output_____" ] ], [ [ "Create and save the vectors and metadata file. 
", "_____no_output_____" ] ], [ [ "out_v = io.open('vectors.tsv', 'w', encoding='utf-8')\nout_m = io.open('metadata.tsv', 'w', encoding='utf-8')\n\nfor index, word in enumerate(vocab):\n if index == 0: continue # skip 0, it's padding.\n vec = weights[index] \n out_v.write('\\t'.join([str(x) for x in vec]) + \"\\n\")\n out_m.write(word + \"\\n\")\nout_v.close()\nout_m.close()", "_____no_output_____" ] ], [ [ "Download the `vectors.tsv` and `metadata.tsv` to analyze the obtained embeddings in the [Embedding Projector](https://projector.tensorflow.org/).", "_____no_output_____" ] ], [ [ "try:\n from google.colab import files\n files.download('vectors.tsv')\n files.download('metadata.tsv')\nexcept Exception as e:\n pass", "_____no_output_____" ] ], [ [ "## Next steps\n", "_____no_output_____" ], [ "This tutorial has shown you how to implement a skip-gram Word2Vec model with negative sampling from scratch and visualize the obtained word embeddings.\n\n* To learn more about word vectors and their mathematical representations, refer to these [notes](https://web.stanford.edu/class/cs224n/readings/cs224n-2019-notes01-wordvecs1.pdf).\n\n* To learn more about advanced text processing, read the [Transformer model for language understanding](https://www.tensorflow.org/tutorials/text/transformer) tutorial.\n\n* If you’re interested in pre-trained embedding models, you may also be interested in [Exploring the TF-Hub CORD-19 Swivel Embeddings](https://www.tensorflow.org/hub/tutorials/cord_19_embeddings_keras), or the [Multilingual Universal Sentence Encoder](https://www.tensorflow.org/hub/tutorials/cross_lingual_similarity_with_tf_hub_multilingual_universal_encoder)\n\n* You may also like to train the model on a new dataset (there are many available in [TensorFlow Datasets](https://www.tensorflow.org/datasets)).\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
cbf609101c3a2f7a2f3408b5f306e57678395deb
4,042
ipynb
Jupyter Notebook
Assignment2(day3).ipynb
pathansayama/LetsUpgrade-Python-B7-Sayama
0b9a52cf2559b43d42d656c1a329c378b4bbe34d
[ "Apache-2.0" ]
null
null
null
Assignment2(day3).ipynb
pathansayama/LetsUpgrade-Python-B7-Sayama
0b9a52cf2559b43d42d656c1a329c378b4bbe34d
[ "Apache-2.0" ]
null
null
null
Assignment2(day3).ipynb
pathansayama/LetsUpgrade-Python-B7-Sayama
0b9a52cf2559b43d42d656c1a329c378b4bbe34d
[ "Apache-2.0" ]
null
null
null
24.203593
257
0.371351
[ [ [ "<a href=\"https://colab.research.google.com/github/pathansayama/LetsUpgrade-Python-B7-Sayama/blob/master/Assignment2(day3).ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "Assignment 1 (Day 3)", "_____no_output_____" ] ], [ [ "l = 1\nu = 200\n \nfor num in range(l,u + 1): \n if num > 1: \n for i in range(2,num): \n if (num % i) == 0: \n break \n else: \n print(num) ", "2\n3\n5\n7\n11\n13\n17\n19\n23\n29\n31\n37\n41\n43\n47\n53\n59\n61\n67\n71\n73\n79\n83\n89\n97\n101\n103\n107\n109\n113\n127\n131\n137\n139\n149\n151\n157\n163\n167\n173\n179\n181\n191\n193\n197\n199\n" ] ], [ [ "Assignment 2 (Day 3)", "_____no_output_____" ] ], [ [ "altitude = int(input(\"Enter The Current Altitude Of the Plane = \"))\nprint(altitude)\n\nif altitude <= 1000 :\n print(altitude,\" is Safe to Land\")\n\nelif altitude > 1000 and altitude < 5000 :\n print(altitude,\" is not valid. Bring down to 1000\")\n\nelif altitude > 5000 :\n print(altitude,\" is not safe to Land, Turn around!\")\n\nelse :\n print(\"You are alreary Landed\")", "Enter The Current Altitude Of tne Plane = 7000\n7000\n7000 is not safe to Land, Turn around!\n" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cbf6094ea5893a3b0796d05d4aee44136d439c92
3,902
ipynb
Jupyter Notebook
Create Dataset.ipynb
rajansh87/Auto-Scrolling-by-Hand-Gestures-using-CNN
c63b4fd4da18322b81a15acc09f31aaf754be29f
[ "MIT" ]
1
2020-07-28T21:19:07.000Z
2020-07-28T21:19:07.000Z
Create Dataset.ipynb
rajansh87/Auto-Scrolling-by-Hand-Movement-using-CNN
c63b4fd4da18322b81a15acc09f31aaf754be29f
[ "MIT" ]
null
null
null
Create Dataset.ipynb
rajansh87/Auto-Scrolling-by-Hand-Movement-using-CNN
c63b4fd4da18322b81a15acc09f31aaf754be29f
[ "MIT" ]
null
null
null
28.691176
99
0.512558
[ [ [ "import cv2\n\n# Initialize Webcam\ncap = cv2.VideoCapture(0)\n\n#Load Haarcascade Frontal Face Classifier\nface_classifier = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')\n\n#Function returns cropped face\ndef face_extractor(photo):\n gray_photo = cv2.cvtColor(photo, cv2.COLOR_BGR2GRAY)\n faces = face_classifier.detectMultiScale(gray_photo)\n \n if faces is ():\n return None\n \n else:\n # Crop all faces found\n for (x,y,w,h) in faces:\n cropped_face = photo[y:y+h, x:x+w]\n \n return cropped_face\n\n\ncount = 0\n\n# Collect 100 samples of your face from webcam input\nwhile True:\n status,photo = cap.read()\n \n if face_extractor(photo) is not None:\n count += 1\n face = cv2.resize(face_extractor(photo), (200, 200))\n face = cv2.cvtColor(face, cv2.COLOR_BGR2GRAY)\n \n # Save file in specified directory with unique name\n file_name_path = 'Dataset/train/DOWN/image' + str(count) + '.jpg'\n cv2.imwrite(file_name_path, face)\n\n # Put count on images and display live count\n cv2.putText(face, str(count), (50, 50), cv2.FONT_HERSHEY_COMPLEX, 1, (0,255,0), 2)\n cv2.imshow('Face Cropper', face)\n \n else:\n pass\n\n if cv2.waitKey(1) == 13 or count == 100: #13 is the Enter Key for training\n #if cv2.waitKey(1) == 13 or count == 50: #13 is the Enter Key for testing\n break\n \ncap.release()\ncv2.destroyAllWindows() \nprint(\"Collecting Samples Complete\")", "Collecting Samples Complete\n" ], [ "# for making dataset of newperson just replace \"ansh\" from above path and give new name.\n# also create a folder in that path as of that name.", "_____no_output_____" ], [ "import cv2\n\n# Initialize Webcam\ncap = cv2.VideoCapture(0)\ncount=0\nwhile True:\n status,photo = cap.read()\n file_name_path = 'Dataset/train/up/image' + str(count) + '.jpg'\n cv2.imwrite(file_name_path, photo)\n\n \n cv2.putText(photo, str(count), (50, 50), cv2.FONT_HERSHEY_COMPLEX, 1, (0,255,0), 2)\n cv2.imshow('My view', photo)\n \n\n if cv2.waitKey(1) == 13 or count == 100: #13 is the Enter Key for training\n #if cv2.waitKey(1) == 13 or count == 50: #13 is the Enter Key for testing\n break\n count+=1\n \ncap.release()\ncv2.destroyAllWindows() \nprint(\"Collecting Samples Complete\")", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
cbf61782cc301dd32b0e9b33e030adfb574deb57
93,650
ipynb
Jupyter Notebook
nbs/40_tabular.core.ipynb
hussam789/fastai2
7eaa4a6a10a8836fbbb90360a7df92d170d1bba3
[ "Apache-2.0" ]
null
null
null
nbs/40_tabular.core.ipynb
hussam789/fastai2
7eaa4a6a10a8836fbbb90360a7df92d170d1bba3
[ "Apache-2.0" ]
null
null
null
nbs/40_tabular.core.ipynb
hussam789/fastai2
7eaa4a6a10a8836fbbb90360a7df92d170d1bba3
[ "Apache-2.0" ]
null
null
null
32.382434
196
0.418815
[ [ [ "#default_exp tabular.core", "_____no_output_____" ], [ "#export\nfrom fastai2.torch_basics import *\nfrom fastai2.data.all import *", "_____no_output_____" ], [ "from nbdev.showdoc import *", "_____no_output_____" ], [ "#export\npd.set_option('mode.chained_assignment','raise')", "_____no_output_____" ] ], [ [ "# Tabular core\n\n> Basic function to preprocess tabular data before assembling it in a `DataLoaders`.", "_____no_output_____" ], [ "## Initial preprocessing", "_____no_output_____" ] ], [ [ "#export\ndef make_date(df, date_field):\n \"Make sure `df[date_field]` is of the right date type.\"\n field_dtype = df[date_field].dtype\n if isinstance(field_dtype, pd.core.dtypes.dtypes.DatetimeTZDtype):\n field_dtype = np.datetime64\n if not np.issubdtype(field_dtype, np.datetime64):\n df[date_field] = pd.to_datetime(df[date_field], infer_datetime_format=True)", "_____no_output_____" ], [ "df = pd.DataFrame({'date': ['2019-12-04', '2019-11-29', '2019-11-15', '2019-10-24']})\nmake_date(df, 'date')\ntest_eq(df['date'].dtype, np.dtype('datetime64[ns]'))", "_____no_output_____" ], [ "#export\ndef add_datepart(df, field_name, prefix=None, drop=True, time=False):\n \"Helper function that adds columns relevant to a date in the column `field_name` of `df`.\"\n make_date(df, field_name)\n field = df[field_name]\n prefix = ifnone(prefix, re.sub('[Dd]ate$', '', field_name))\n attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear', 'Is_month_end', 'Is_month_start',\n 'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start']\n if time: attr = attr + ['Hour', 'Minute', 'Second']\n for n in attr: df[prefix + n] = getattr(field.dt, n.lower())\n df[prefix + 'Elapsed'] = field.astype(np.int64) // 10 ** 9\n if drop: df.drop(field_name, axis=1, inplace=True)\n return df", "_____no_output_____" ], [ "df = pd.DataFrame({'date': ['2019-12-04', '2019-11-29', '2019-11-15', '2019-10-24']})\ndf = add_datepart(df, 'date')\ntest_eq(df.columns, ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear', 'Is_month_end', 'Is_month_start', \n 'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start', 'Elapsed'])\ndf.head()", "_____no_output_____" ], [ "#export\ndef _get_elapsed(df,field_names, date_field, base_field, prefix):\n for f in field_names:\n day1 = np.timedelta64(1, 'D')\n last_date,last_base,res = np.datetime64(),None,[]\n for b,v,d in zip(df[base_field].values, df[f].values, df[date_field].values):\n if last_base is None or b != last_base:\n last_date,last_base = np.datetime64(),b\n if v: last_date = d\n res.append(((d-last_date).astype('timedelta64[D]') / day1))\n df[prefix + f] = res\n return df", "_____no_output_____" ], [ "#export\ndef add_elapsed_times(df, field_names, date_field, base_field):\n \"Add in `df` for each event in `field_names` the elapsed time according to `date_field` grouped by `base_field`\"\n field_names = list(L(field_names))\n #Make sure date_field is a date and base_field a bool\n df[field_names] = df[field_names].astype('bool')\n make_date(df, date_field)\n\n work_df = df[field_names + [date_field, base_field]]\n work_df = work_df.sort_values([base_field, date_field])\n work_df = _get_elapsed(work_df, field_names, date_field, base_field, 'After')\n work_df = work_df.sort_values([base_field, date_field], ascending=[True, False])\n work_df = _get_elapsed(work_df, field_names, date_field, base_field, 'Before')\n\n for a in ['After' + f for f in field_names] + ['Before' + f for f in field_names]:\n work_df[a] = work_df[a].fillna(0).astype(int)\n\n for 
a,s in zip([True, False], ['_bw', '_fw']):\n work_df = work_df.set_index(date_field)\n tmp = (work_df[[base_field] + field_names].sort_index(ascending=a)\n .groupby(base_field).rolling(7, min_periods=1).sum())\n tmp.drop(base_field,1,inplace=True)\n tmp.reset_index(inplace=True)\n work_df.reset_index(inplace=True)\n work_df = work_df.merge(tmp, 'left', [date_field, base_field], suffixes=['', s])\n work_df.drop(field_names,1,inplace=True)\n return df.merge(work_df, 'left', [date_field, base_field])", "_____no_output_____" ], [ "df = pd.DataFrame({'date': ['2019-12-04', '2019-11-29', '2019-11-15', '2019-10-24'], 'event': [False, True, False, True], 'base': [1,1,2,2]})\ndf = add_elapsed_times(df, ['event'], 'date', 'base')\ndf", "_____no_output_____" ], [ "#export\ndef cont_cat_split(df, max_card=20, dep_var=None):\n \"Helper function that returns column names of cont and cat variables from given `df`.\"\n cont_names, cat_names = [], []\n for label in df:\n if label == dep_var: continue\n if df[label].dtype == int and df[label].unique().shape[0] > max_card or df[label].dtype == float: cont_names.append(label)\n else: cat_names.append(label)\n return cont_names, cat_names", "_____no_output_____" ] ], [ [ "## Tabular -", "_____no_output_____" ] ], [ [ "#export\nclass _TabIloc:\n \"Get/set rows by iloc and cols by name\"\n def __init__(self,to): self.to = to\n def __getitem__(self, idxs):\n df = self.to.items\n if isinstance(idxs,tuple):\n rows,cols = idxs\n cols = df.columns.isin(cols) if is_listy(cols) else df.columns.get_loc(cols)\n else: rows,cols = idxs,slice(None)\n return self.to.new(df.iloc[rows, cols])", "_____no_output_____" ], [ "#export\nclass Tabular(CollBase, GetAttr, FilteredBase):\n \"A `DataFrame` wrapper that knows which cols are cont/cat/y, and returns rows in `__getitem__`\"\n _default,with_cont='procs',True\n def __init__(self, df, procs=None, cat_names=None, cont_names=None, y_names=None, block_y=CategoryBlock, splits=None,\n do_setup=True, device=None):\n if splits is None: splits=[range_of(df)]\n df = df.iloc[sum(splits, [])].copy()\n self.dataloaders = delegates(self._dl_type.__init__)(self.dataloaders)\n super().__init__(df)\n\n self.y_names,self.device = L(y_names),device\n if block_y is not None:\n if callable(block_y): block_y = block_y()\n procs = L(procs) + block_y.type_tfms\n self.cat_names,self.cont_names,self.procs = L(cat_names),L(cont_names),Pipeline(procs, as_item=True)\n self.split = len(splits[0])\n if do_setup: self.setup()\n\n def subset(self, i): return self.new(self.items[slice(0,self.split) if i==0 else slice(self.split,len(self))])\n def copy(self): self.items = self.items.copy(); return self\n def new(self, df): return type(self)(df, do_setup=False, block_y=None, **attrdict(self, 'procs','cat_names','cont_names','y_names', 'device'))\n def show(self, max_n=10, **kwargs): display_df(self.all_cols[:max_n])\n def setup(self): self.procs.setup(self)\n def process(self): self.procs(self)\n def loc(self): return self.items.loc\n def iloc(self): return _TabIloc(self)\n def targ(self): return self.items[self.y_names]\n def all_col_names (self): return self.cat_names + self.cont_names + self.y_names\n def n_subsets(self): return 2\n def new_empty(self): return self.new(pd.DataFrame({}, columns=self.items.columns))\n def to_device(self, d=None):\n self.device = d\n return self\n\nproperties(Tabular,'loc','iloc','targ','all_col_names','n_subsets')", "_____no_output_____" ], [ "#export\nclass TabularPandas(Tabular):\n def transform(self, cols, f): self[cols] = 
self[cols].transform(f)", "_____no_output_____" ], [ "#export\ndef _add_prop(cls, nm):\n @property\n def f(o): return o[list(getattr(o,nm+'_names'))]\n @f.setter\n def fset(o, v): o[getattr(o,nm+'_names')] = v\n setattr(cls, nm+'s', f)\n setattr(cls, nm+'s', fset)\n\n_add_prop(Tabular, 'cat')\n_add_prop(Tabular, 'cont')\n_add_prop(Tabular, 'y')\n_add_prop(Tabular, 'all_col')", "_____no_output_____" ], [ "df = pd.DataFrame({'a':[0,1,2,0,2], 'b':[0,0,0,0,1]})\nto = TabularPandas(df, cat_names='a')\nt = pickle.loads(pickle.dumps(to))\ntest_eq(t.items,to.items)\ntest_eq(to.all_cols,to[['a']])\nto.show() # only shows 'a' since that's the only col in `TabularPandas`", "_____no_output_____" ], [ "#export\nclass TabularProc(InplaceTransform):\n \"Base class to write a non-lazy tabular processor for dataframes\"\n def setup(self, items=None, train_setup=False): #TODO: properly deal with train_setup\n super().setup(getattr(items,'train',items), train_setup=False)\n # Procs are called as soon as data is available\n return self(items.items if isinstance(items,Datasets) else items)", "_____no_output_____" ], [ "#export\ndef _apply_cats (voc, add, c):\n if not is_categorical_dtype(c):\n return pd.Categorical(c, categories=voc[c.name][add:]).codes+add\n return c.cat.codes+add #if is_categorical_dtype(c) else c.map(voc[c.name].o2i)\ndef _decode_cats(voc, c): return c.map(dict(enumerate(voc[c.name].items)))", "_____no_output_____" ], [ "#export\nclass Categorify(TabularProc):\n \"Transform the categorical variables to that type.\"\n order = 1\n def setups(self, to):\n self.classes = {n:CategoryMap(to.iloc[:,n].items, add_na=(n in to.cat_names)) for n in to.cat_names}\n\n def encodes(self, to): to.transform(to.cat_names, partial(_apply_cats, self.classes, 1))\n def decodes(self, to): to.transform(to.cat_names, partial(_decode_cats, self.classes))\n def __getitem__(self,k): return self.classes[k]", "_____no_output_____" ], [ "#export\n@Categorize\ndef setups(self, to:Tabular):\n if len(to.y_names) > 0:\n self.vocab = CategoryMap(getattr(to, 'train', to).iloc[:,to.y_names[0]].items)\n self.c = len(self.vocab)\n return self(to)\n\n@Categorize\ndef encodes(self, to:Tabular):\n to.transform(to.y_names, partial(_apply_cats, {n: self.vocab for n in to.y_names}, 0))\n return to\n\n@Categorize\ndef decodes(self, to:Tabular):\n to.transform(to.y_names, partial(_decode_cats, {n: self.vocab for n in to.y_names}))\n return to", "_____no_output_____" ], [ "show_doc(Categorify, title_level=3)", "_____no_output_____" ], [ "df = pd.DataFrame({'a':[0,1,2,0,2]})\nto = TabularPandas(df, Categorify, 'a')\ncat = to.procs.categorify\ntest_eq(cat['a'], ['#na#',0,1,2])\ntest_eq(to['a'], [1,2,3,1,3])", "_____no_output_____" ], [ "df1 = pd.DataFrame({'a':[1,0,3,-1,2]})\nto1 = to.new(df1)\nto1.process()\n#Values that weren't in the training df are sent to 0 (na)\ntest_eq(to1['a'], [2,1,0,0,3])\nto2 = cat.decode(to1)\ntest_eq(to2['a'], [1,0,'#na#','#na#',2])", "_____no_output_____" ], [ "#test with splits\ncat = Categorify()\ndf = pd.DataFrame({'a':[0,1,2,3,2]})\nto = TabularPandas(df, cat, 'a', splits=[[0,1,2],[3,4]])\ntest_eq(cat['a'], ['#na#',0,1,2])\ntest_eq(to['a'], [1,2,3,0,3])", "_____no_output_____" ], [ "df = pd.DataFrame({'a':pd.Categorical(['M','H','L','M'], categories=['H','M','L'], ordered=True)})\nto = TabularPandas(df, Categorify, 'a')\ncat = to.procs.categorify\ntest_eq(cat['a'], ['#na#','H','M','L'])\ntest_eq(to.items.a, [2,1,3,2])\nto2 = cat.decode(to)\ntest_eq(to2['a'], ['M','H','L','M'])", "_____no_output_____" ], [ 
"#test with targets\ncat = Categorify()\ndf = pd.DataFrame({'a':[0,1,2,3,2], 'b': ['a', 'b', 'a', 'b', 'b']})\nto = TabularPandas(df, cat, 'a', splits=[[0,1,2],[3,4]], y_names='b')\ntest_eq(to.vocab, ['a', 'b'])\ntest_eq(to['b'], [0,1,0,1,1])\nto2 = to.procs.decode(to)\ntest_eq(to2['b'], ['a', 'b', 'a', 'b', 'b'])", "_____no_output_____" ], [ "#test with targets and train\ncat = Categorify()\ndf = pd.DataFrame({'a':[0,1,2,3,2], 'b': ['a', 'b', 'a', 'c', 'b']})\nto = TabularPandas(df, cat, 'a', splits=[[0,1,2],[3,4]], y_names='b')\ntest_eq(to.vocab, ['a', 'b'])", "_____no_output_____" ], [ "#export\nclass NormalizeTab(TabularProc):\n \"Normalize the continuous variables.\"\n order = 2\n def setups(self, dsets): self.means,self.stds = dsets.conts.mean(),dsets.conts.std(ddof=0)+1e-7\n def encodes(self, to): to.conts = (to.conts-self.means) / self.stds\n def decodes(self, to): to.conts = (to.conts*self.stds ) + self.means", "_____no_output_____" ], [ "#export\n@Normalize\ndef setups(self, to:Tabular):\n self.means,self.stds = getattr(to, 'train', to).conts.mean(),getattr(to, 'train', to).conts.std(ddof=0)+1e-7\n return self(to)\n\n@Normalize\ndef encodes(self, to:Tabular):\n to.conts = (to.conts-self.means) / self.stds\n return to\n\n@Normalize\ndef decodes(self, to:Tabular):\n to.conts = (to.conts*self.stds ) + self.means\n return to", "_____no_output_____" ], [ "norm = Normalize()\ndf = pd.DataFrame({'a':[0,1,2,3,4]})\nto = TabularPandas(df, norm, cont_names='a')\nx = np.array([0,1,2,3,4])\nm,s = x.mean(),x.std()\ntest_eq(norm.means['a'], m)\ntest_close(norm.stds['a'], s)\ntest_close(to['a'].values, (x-m)/s)", "_____no_output_____" ], [ "df1 = pd.DataFrame({'a':[5,6,7]})\nto1 = to.new(df1)\nto1.process()\ntest_close(to1['a'].values, (np.array([5,6,7])-m)/s)\nto2 = norm.decode(to1)\ntest_close(to2['a'].values, [5,6,7])", "_____no_output_____" ], [ "norm = Normalize()\ndf = pd.DataFrame({'a':[0,1,2,3,4]})\nto = TabularPandas(df, norm, cont_names='a', splits=[[0,1,2],[3,4]])\nx = np.array([0,1,2])\nm,s = x.mean(),x.std()\ntest_eq(norm.means['a'], m)\ntest_close(norm.stds['a'], s)\ntest_close(to['a'].values, (np.array([0,1,2,3,4])-m)/s)", "_____no_output_____" ], [ "#export\nclass FillStrategy:\n \"Namespace containing the various filling strategies.\"\n def median (c,fill): return c.median()\n def constant(c,fill): return fill\n def mode (c,fill): return c.dropna().value_counts().idxmax()", "_____no_output_____" ], [ "#export\nclass FillMissing(TabularProc):\n \"Fill the missing values in continuous columns.\"\n def __init__(self, fill_strategy=FillStrategy.median, add_col=True, fill_vals=None):\n if fill_vals is None: fill_vals = defaultdict(int)\n store_attr(self, 'fill_strategy,add_col,fill_vals')\n\n def setups(self, dsets):\n self.na_dict = {n:self.fill_strategy(dsets[n], self.fill_vals[n])\n for n in pd.isnull(dsets.conts).any().keys()}\n\n def encodes(self, to):\n missing = pd.isnull(to.conts)\n for n in missing.any().keys():\n assert n in self.na_dict, f\"nan values in `{n}` but not in setup training set\"\n to[n].fillna(self.na_dict[n], inplace=True)\n if self.add_col:\n to.loc[:,n+'_na'] = missing[n]\n if n+'_na' not in to.cat_names: to.cat_names.append(n+'_na')", "_____no_output_____" ], [ "show_doc(FillMissing, title_level=3)", "_____no_output_____" ], [ "fill1,fill2,fill3 = (FillMissing(fill_strategy=s) \n for s in [FillStrategy.median, FillStrategy.constant, FillStrategy.mode])\ndf = pd.DataFrame({'a':[0,1,np.nan,1,2,3,4]})\ndf1 = df.copy(); df2 = df.copy()\ntos = 
TabularPandas(df, fill1, cont_names='a'),TabularPandas(df1, fill2, cont_names='a'),TabularPandas(df2, fill3, cont_names='a')\ntest_eq(fill1.na_dict, {'a': 1.5})\ntest_eq(fill2.na_dict, {'a': 0})\ntest_eq(fill3.na_dict, {'a': 1.0})\n\nfor t in tos: test_eq(t.cat_names, ['a_na'])\n\nfor to_,v in zip(tos, [1.5, 0., 1.]):\n test_eq(to_['a'].values, np.array([0, 1, v, 1, 2, 3, 4]))\n test_eq(to_['a_na'].values, np.array([0, 0, 1, 0, 0, 0, 0]))", "_____no_output_____" ], [ "dfa = pd.DataFrame({'a':[np.nan,0,np.nan]})\ntos = [t.new(o) for t,o in zip(tos,(dfa,dfa.copy(),dfa.copy()))]\nfor t in tos: t.process()\nfor to_,v in zip(tos, [1.5, 0., 1.]):\n test_eq(to_['a'].values, np.array([v, 0, v]))\n test_eq(to_['a_na'].values, np.array([1, 0, 1]))", "_____no_output_____" ] ], [ [ "## TabularPandas Pipelines -", "_____no_output_____" ] ], [ [ "procs = [Normalize, Categorify, FillMissing, noop]\ndf = pd.DataFrame({'a':[0,1,2,1,1,2,0], 'b':[0,1,np.nan,1,2,3,4]})\nto = TabularPandas(df, procs, cat_names='a', cont_names='b')\n\n#Test setup and apply on df_main\ntest_eq(to.cat_names, ['a', 'b_na'])\ntest_eq(to['a'], [1,2,3,2,2,3,1])\ntest_eq(to['b_na'], [1,1,2,1,1,1,1])\nx = np.array([0,1,1.5,1,2,3,4])\nm,s = x.mean(),x.std()\ntest_close(to['b'].values, (x-m)/s)\ntest_eq(to.classes, {'a': ['#na#',0,1,2], 'b_na': ['#na#',False,True]})", "_____no_output_____" ], [ "#Test apply on y_names\ndf = pd.DataFrame({'a':[0,1,2,1,1,2,0], 'b':[0,1,np.nan,1,2,3,4], 'c': ['b','a','b','a','a','b','a']})\nto = TabularPandas(df, procs, 'a', 'b', y_names='c')\n\ntest_eq(to.cat_names, ['a', 'b_na'])\ntest_eq(to['a'], [1,2,3,2,2,3,1])\ntest_eq(to['b_na'], [1,1,2,1,1,1,1])\ntest_eq(to['c'], [1,0,1,0,0,1,0])\nx = np.array([0,1,1.5,1,2,3,4])\nm,s = x.mean(),x.std()\ntest_close(to['b'].values, (x-m)/s)\ntest_eq(to.classes, {'a': ['#na#',0,1,2], 'b_na': ['#na#',False,True]})\ntest_eq(to.vocab, ['a','b'])", "_____no_output_____" ], [ "df = pd.DataFrame({'a':[0,1,2,1,1,2,0], 'b':[0,1,np.nan,1,2,3,4], 'c': ['b','a','b','a','a','b','a']})\nto = TabularPandas(df, procs, 'a', 'b', y_names='c')\n\ntest_eq(to.cat_names, ['a', 'b_na'])\ntest_eq(to['a'], [1,2,3,2,2,3,1])\ntest_eq(df.a.dtype,int)\ntest_eq(to['b_na'], [1,1,2,1,1,1,1])\ntest_eq(to['c'], [1,0,1,0,0,1,0])", "_____no_output_____" ], [ "df = pd.DataFrame({'a':[0,1,2,1,1,2,0], 'b':[0,np.nan,1,1,2,3,4], 'c': ['b','a','b','a','a','b','a']})\nto = TabularPandas(df, procs, cat_names='a', cont_names='b', y_names='c', splits=[[0,1,4,6], [2,3,5]])\n\ntest_eq(to.cat_names, ['a', 'b_na'])\ntest_eq(to['a'], [1,2,2,1,0,2,0])\ntest_eq(df.a.dtype,int)\ntest_eq(to['b_na'], [1,2,1,1,1,1,1])\ntest_eq(to['c'], [1,0,0,0,1,0,1])", "_____no_output_____" ], [ "#export\ndef _maybe_expand(o): return o[:,None] if o.ndim==1 else o", "_____no_output_____" ], [ "#export\nclass ReadTabBatch(ItemTransform):\n def __init__(self, to): self.to = to\n\n def encodes(self, to):\n if not to.with_cont: res = tensor(to.cats).long(), tensor(to.targ)\n else: res = (tensor(to.cats).long(),tensor(to.conts).float(), tensor(to.targ))\n if to.device is not None: res = to_device(res, to.device)\n return res\n\n def decodes(self, o):\n o = [_maybe_expand(o_) for o_ in to_np(o) if o_.size != 0]\n vals = np.concatenate(o, axis=1)\n df = pd.DataFrame(vals, columns=self.to.all_col_names)\n to = self.to.new(df)\n to = self.to.procs.decode(to)\n return to", "_____no_output_____" ], [ "#export\n@typedispatch\ndef show_batch(x: Tabular, y, its, max_n=10, ctxs=None):\n x.show()", "_____no_output_____" ], [ "from 
torch.utils.data.dataloader import _MultiProcessingDataLoaderIter,_SingleProcessDataLoaderIter,_DatasetKind\n_loaders = (_MultiProcessingDataLoaderIter,_SingleProcessDataLoaderIter)", "_____no_output_____" ], [ "#export\n@delegates()\nclass TabDataLoader(TfmdDL):\n do_item = noops\n def __init__(self, dataset, bs=16, shuffle=False, after_batch=None, num_workers=0, **kwargs):\n if after_batch is None: after_batch = L(TransformBlock().batch_tfms)+ReadTabBatch(dataset)\n super().__init__(dataset, bs=bs, shuffle=shuffle, after_batch=after_batch, num_workers=num_workers, **kwargs)\n\n def create_batch(self, b): return self.dataset.iloc[b]\n\nTabularPandas._dl_type = TabDataLoader", "_____no_output_____" ] ], [ [ "## Integration example", "_____no_output_____" ] ], [ [ "path = untar_data(URLs.ADULT_SAMPLE)\ndf = pd.read_csv(path/'adult.csv')\ndf_main,df_test = df.iloc[:10000].copy(),df.iloc[10000:].copy()\ndf_main.head()", "_____no_output_____" ], [ "cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']\ncont_names = ['age', 'fnlwgt', 'education-num']\nprocs = [Categorify, FillMissing, Normalize]\nsplits = RandomSplitter()(range_of(df_main))", "_____no_output_____" ], [ "%time to = TabularPandas(df_main, procs, cat_names, cont_names, y_names=\"salary\", splits=splits)", "CPU times: user 166 ms, sys: 3.43 ms, total: 169 ms\nWall time: 168 ms\n" ], [ "dls = to.dataloaders()\ndls.valid.show_batch()", "_____no_output_____" ], [ "to_tst = to.new(df_test)\nto_tst.process()\nto_tst.all_cols.head()", "_____no_output_____" ] ], [ [ "## Other target types", "_____no_output_____" ], [ "### Multi-label categories", "_____no_output_____" ], [ "#### one-hot encoded label", "_____no_output_____" ] ], [ [ "def _mock_multi_label(df):\n sal,sex,white = [],[],[]\n for row in df.itertuples():\n sal.append(row.salary == '>=50k')\n sex.append(row.sex == ' Male')\n white.append(row.race == ' White')\n df['salary'] = np.array(sal)\n df['male'] = np.array(sex)\n df['white'] = np.array(white)\n return df", "_____no_output_____" ], [ "path = untar_data(URLs.ADULT_SAMPLE)\ndf = pd.read_csv(path/'adult.csv')\ndf_main,df_test = df.iloc[:10000].copy(),df.iloc[10000:].copy()\ndf_main = _mock_multi_label(df_main)", "_____no_output_____" ], [ "df_main.head()", "_____no_output_____" ], [ "#export\n@EncodedMultiCategorize\ndef encodes(self, to:Tabular): return to\n\n@EncodedMultiCategorize\ndef decodes(self, to:Tabular):\n to.transform(to.y_names, lambda c: c==1)\n return to", "_____no_output_____" ], [ "cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']\ncont_names = ['age', 'fnlwgt', 'education-num']\nprocs = [Categorify, FillMissing, Normalize]\nsplits = RandomSplitter()(range_of(df_main))\ny_names=[\"salary\", \"male\", \"white\"]", "_____no_output_____" ], [ "%time to = TabularPandas(df_main, procs, cat_names, cont_names, y_names=y_names, block_y=MultiCategoryBlock(encoded=True, vocab=y_names), splits=splits)", "CPU times: user 161 ms, sys: 0 ns, total: 161 ms\nWall time: 159 ms\n" ], [ "dls = to.dataloaders()\ndls.valid.show_batch()", "_____no_output_____" ] ], [ [ "#### Not one-hot encoded", "_____no_output_____" ] ], [ [ "def _mock_multi_label(df):\n targ = []\n for row in df.itertuples():\n labels = []\n if row.salary == '>=50k': labels.append('>50k')\n if row.sex == ' Male': labels.append('male')\n if row.race == ' White': labels.append('white')\n targ.append(' '.join(labels))\n df['target'] = np.array(targ)\n return df", 
"_____no_output_____" ], [ "path = untar_data(URLs.ADULT_SAMPLE)\ndf = pd.read_csv(path/'adult.csv')\ndf_main,df_test = df.iloc[:10000].copy(),df.iloc[10000:].copy()\ndf_main = _mock_multi_label(df_main)", "_____no_output_____" ], [ "df_main.head()", "_____no_output_____" ], [ "@MultiCategorize\ndef encodes(self, to:Tabular): \n #to.transform(to.y_names, partial(_apply_cats, {n: self.vocab for n in to.y_names}, 0))\n return to\n \n@MultiCategorize\ndef decodes(self, to:Tabular): \n #to.transform(to.y_names, partial(_decode_cats, {n: self.vocab for n in to.y_names}))\n return to", "_____no_output_____" ], [ "cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']\ncont_names = ['age', 'fnlwgt', 'education-num']\nprocs = [Categorify, FillMissing, Normalize]\nsplits = RandomSplitter()(range_of(df_main))", "_____no_output_____" ], [ "%time to = TabularPandas(df_main, procs, cat_names, cont_names, y_names=\"target\", block_y=MultiCategoryBlock(), splits=splits)", "CPU times: user 167 ms, sys: 0 ns, total: 167 ms\nWall time: 165 ms\n" ], [ "to.procs[2].vocab", "_____no_output_____" ] ], [ [ "### Regression", "_____no_output_____" ] ], [ [ "path = untar_data(URLs.ADULT_SAMPLE)\ndf = pd.read_csv(path/'adult.csv')\ndf_main,df_test = df.iloc[:10000].copy(),df.iloc[10000:].copy()\ndf_main = _mock_multi_label(df_main)", "_____no_output_____" ], [ "cat_names = ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race']\ncont_names = ['fnlwgt', 'education-num']\nprocs = [Categorify, FillMissing, Normalize]\nsplits = RandomSplitter()(range_of(df_main))", "_____no_output_____" ], [ "%time to = TabularPandas(df_main, procs, cat_names, cont_names, y_names='age', block_y=TransformBlock(), splits=splits)", "CPU times: user 140 ms, sys: 969 µs, total: 141 ms\nWall time: 139 ms\n" ], [ "to.procs[-1].means", "_____no_output_____" ], [ "dls = to.dataloaders()\ndls.valid.show_batch()", "_____no_output_____" ] ], [ [ "## Not being used now - for multi-modal", "_____no_output_____" ] ], [ [ "class TensorTabular(Tuple):\n def get_ctxs(self, max_n=10, **kwargs):\n n_samples = min(self[0].shape[0], max_n)\n df = pd.DataFrame(index = range(n_samples))\n return [df.iloc[i] for i in range(n_samples)]\n\n def display(self, ctxs): display_df(pd.DataFrame(ctxs))\n\nclass TabularLine(pd.Series):\n \"A line of a dataframe that knows how to show itself\"\n def show(self, ctx=None, **kwargs): return self if ctx is None else ctx.append(self)\n\nclass ReadTabLine(ItemTransform):\n def __init__(self, proc): self.proc = proc\n\n def encodes(self, row):\n cats,conts = (o.map(row.__getitem__) for o in (self.proc.cat_names,self.proc.cont_names))\n return TensorTabular(tensor(cats).long(),tensor(conts).float())\n\n def decodes(self, o):\n to = TabularPandas(o, self.proc.cat_names, self.proc.cont_names, self.proc.y_names)\n to = self.proc.decode(to)\n return TabularLine(pd.Series({c: v for v,c in zip(to.items[0]+to.items[1], self.proc.cat_names+self.proc.cont_names)}))\n\nclass ReadTabTarget(ItemTransform):\n def __init__(self, proc): self.proc = proc\n def encodes(self, row): return row[self.proc.y_names].astype(np.int64)\n def decodes(self, o): return Category(self.proc.classes[self.proc.y_names][o])", "_____no_output_____" ], [ "# tds = TfmdDS(to.items, tfms=[[ReadTabLine(proc)], ReadTabTarget(proc)])\n# enc = tds[1]\n# test_eq(enc[0][0], tensor([2,1]))\n# test_close(enc[0][1], tensor([-0.628828]))\n# test_eq(enc[1], 1)\n\n# dec = tds.decode(enc)\n# assert 
isinstance(dec[0], TabularLine)\n# test_close(dec[0], pd.Series({'a': 1, 'b_na': False, 'b': 1}))\n# test_eq(dec[1], 'a')\n\n# test_stdout(lambda: print(show_at(tds, 1)), \"\"\"a 1\n# b_na False\n# b 1\n# category a\n# dtype: object\"\"\")", "_____no_output_____" ] ], [ [ "## Export -", "_____no_output_____" ] ], [ [ "#hide\nfrom nbdev.export import notebook2script\nnotebook2script()", "Converted 00_torch_core.ipynb.\nConverted 01_layers.ipynb.\nConverted 02_data.load.ipynb.\nConverted 03_data.core.ipynb.\nConverted 04_data.external.ipynb.\nConverted 05_data.transforms.ipynb.\nConverted 06_data.block.ipynb.\nConverted 07_vision.core.ipynb.\nConverted 08_vision.data.ipynb.\nConverted 09_vision.augment.ipynb.\nConverted 09b_vision.utils.ipynb.\nConverted 09c_vision.widgets.ipynb.\nConverted 10_tutorial.pets.ipynb.\nConverted 11_vision.models.xresnet.ipynb.\nConverted 12_optimizer.ipynb.\nConverted 13_learner.ipynb.\nConverted 13a_metrics.ipynb.\nConverted 14_callback.schedule.ipynb.\nConverted 14a_callback.data.ipynb.\nConverted 15_callback.hook.ipynb.\nConverted 15a_vision.models.unet.ipynb.\nConverted 16_callback.progress.ipynb.\nConverted 17_callback.tracker.ipynb.\nConverted 18_callback.fp16.ipynb.\nConverted 19_callback.mixup.ipynb.\nConverted 20_interpret.ipynb.\nConverted 20a_distributed.ipynb.\nConverted 21_vision.learner.ipynb.\nConverted 22_tutorial.imagenette.ipynb.\nConverted 23_tutorial.transfer_learning.ipynb.\nConverted 24_vision.gan.ipynb.\nConverted 30_text.core.ipynb.\nConverted 31_text.data.ipynb.\nConverted 32_text.models.awdlstm.ipynb.\nConverted 33_text.models.core.ipynb.\nConverted 34_callback.rnn.ipynb.\nConverted 35_tutorial.wikitext.ipynb.\nConverted 36_text.models.qrnn.ipynb.\nConverted 37_text.learner.ipynb.\nConverted 38_tutorial.ulmfit.ipynb.\nConverted 40_tabular.core.ipynb.\nConverted 41_tabular.data.ipynb.\nConverted 42_tabular.learner.ipynb.\nConverted 43_tabular.model.ipynb.\nConverted 45_collab.ipynb.\nConverted 50_datablock_examples.ipynb.\nConverted 60_medical.imaging.ipynb.\nConverted 65_medical.text.ipynb.\nConverted 70_callback.wandb.ipynb.\nConverted 71_callback.tensorboard.ipynb.\nConverted 97_test_utils.ipynb.\nConverted index.ipynb.\nConverted migrating.ipynb.\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
cbf6228034807fd6e303cd04d389364927292247
5,793
ipynb
Jupyter Notebook
Codes/btree.ipynb
konantian/Data-Structure
8fda05e10c44719b4a88c3e1868829d797984754
[ "MIT" ]
null
null
null
Codes/btree.ipynb
konantian/Data-Structure
8fda05e10c44719b4a88c3e1868829d797984754
[ "MIT" ]
null
null
null
Codes/btree.ipynb
konantian/Data-Structure
8fda05e10c44719b4a88c3e1868829d797984754
[ "MIT" ]
null
null
null
16.742775
65
0.44502
[ [ [ "from BinaryTree import BinaryTree\ntree=BinaryTree()", "_____no_output_____" ], [ "for i in [21,29,35,26,37,28,40,43,48,52,55,58,63,65,70]:\n tree.insert(i)", "_____no_output_____" ], [ "tree.findPath(70)", "_____no_output_____" ], [ "tree.findAllpath()", "_____no_output_____" ], [ "tree.findDeepest()", "_____no_output_____" ], [ "tree.printLevelorder()", "21 29 35 26 37 28 40 43 48 52 55 58 63 65 70 " ], [ "tree.printInorder()", "43 26 48 29 52 37 55 21 58 28 63 35 65 40 70 " ], [ "tree.printPostorder()", "43 48 26 52 55 37 29 58 63 28 65 70 40 35 21 " ], [ "tree.find(70)", "_____no_output_____" ], [ "tree.isPerfect()", "_____no_output_____" ], [ "tree.isFull()", "_____no_output_____" ], [ "root = tree.getRoot()\ntree.isBalanced(root)", "_____no_output_____" ], [ "tree.countLeaf()", "_____no_output_____" ], [ "root = tree.getRoot()\ntree.getHeight(root)", "_____no_output_____" ], [ "tree.getRoot()", "_____no_output_____" ], [ "tree.getLeft(29)", "_____no_output_____" ], [ "tree.getRight(35)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbf63a456c36b36aaf786c031f80c8204854b05e
83,012
ipynb
Jupyter Notebook
Yield Curves!.ipynb
nick-kaufman/portfolio
7c3af540638e63835431ff5d57095ee783534915
[ "MIT" ]
null
null
null
Yield Curves!.ipynb
nick-kaufman/portfolio
7c3af540638e63835431ff5d57095ee783534915
[ "MIT" ]
8
2020-03-24T16:41:00.000Z
2022-03-11T23:39:36.000Z
Yield Curves!.ipynb
nick-kaufman/portfolio
7c3af540638e63835431ff5d57095ee783534915
[ "MIT" ]
null
null
null
336.080972
39,056
0.932407
[ [ [ "# This notebook is dedicated to the visualization of the Yield Curve.\n\n## What is the yield curve?\n\nThe yield curve shows the different yields, or interest rates, across different contract lengths at a snapshot in time (typically daily). The curves below are all based on data obtained from [FRED](https://research.stlouisfed.org/) - Federal Reserve Economic Data.\n\n## Why is the yield curve important?\n\nThe yield curve is a popular indicator among economists (as listeners of Planet Money's The Indicator will know) because it has correctly forecasted a recession in the United States without any false positives, since about 1970. This predictive power is triggered when the yield curve becomes \"inverted\" - meaning that short term interest rates outweigh long term interest rates.\n\n*NOTE*: This is slightly vague, but what is specifically meant here is the difference in ten year yield, when compared to three month yield.\n\n## Longer explanation of the yield curve\n\nThe current view of the yield curve is a relatively recent phenomenon, all things considered. It is only since after the Great Depression would we consider the positively sloping yield curves 'normal.' Prior to this, for most of the 19th and 20th centuries, the United States would have had a negatively sloping yield curve. This is caused because the growth the U.S. experienced was **deflationary**, as opposed to the inflationary growth the Fed targets currently. This is because in deflationary growth, current cash flows are less valuable than future cash flows.\n\nFor a more detailed explanation, see the [Wikipedia page](https://en.wikipedia.org/wiki/Yield_curve).", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\nfrom scipy.interpolate import CubicSpline\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "def read_in_data(filename, index_col=\"DATE\"):\n dataframe = pd.read_csv(filename, index_col=index_col)\n dataframe.drop(\"T10Y3M\", axis=1, inplace=True)\n return dataframe", "_____no_output_____" ], [ "def get_data_for_date(date_, df):\n try:\n datum = df.loc[date_]\n except KeyError:\n print(\"Sorry, no data is available for that day.\")\n return None, None\n x_arr = np.array([1.0/12, 3.0/12, 1.0, 2.0, 3.0, 5.0, 7.0, 10.0, 20.0, 30.0])\n y_arr = np.array(list(filter(lambda x: x is not None, [None if x == \".\" else float(x) for x in datum])))\n mask = [idx for idx, x in enumerate(y_arr) if x < np.inf]\n if not mask:\n print(\"Sorry, no data is available for that day.\")\n return None, None \n x = x_arr[mask]\n y = y_arr[mask]\n if len(x) != len(x_arr):\n print(\"WARNING: some data is missing for this date.\")\n if y_arr[-1] != y[-1]:\n print(\"WARNING: 30 year rate is not available for this date.\")\n return x, y", "_____no_output_____" ], [ "def plot_yield_curve(date_, dataframe):\n x, y = get_data_for_date(date_, dataframe)\n if x is None and y is None:\n return\n cs = CubicSpline(x, y)\n xx = np.linspace(0, 30, num=200)\n plt.plot(x, y, 'k.')\n plt.plot(xx, cs(xx), 'b--')\n plt.title(f\"Yield Curve for {date_}\")\n plt.xlabel(\"Borrowing Period (years)\")\n plt.ylabel(\"Interest Rates (percent)\")\n plt.show()", "_____no_output_____" ], [ "def plot_long_term_difference_measure(filename, index_col=\"DATE\"):\n df = pd.read_csv(filename, index_col=index_col)\n series = df[\"T10Y3M\"].copy()\n y_arr = np.array(list(filter(lambda x: x is not None, [None if x == \".\" else float(x) for x in series.tolist()])))\n mask = [idx for idx, x in enumerate(y_arr) if x < np.inf]\n y_arr = 
y_arr[mask]\n dates = np.array(series.index.tolist())\n dates_used = dates[mask]\n x_arr = np.arange(len(dates))[mask]\n plt.plot(x_arr, y_arr, 'k--')\n plt.plot(x_arr, [0]*len(x_arr), 'r')\n plt.xticks(x_arr[::365], dates_used[::365], rotation=90)\n plt.tight_layout()\n plt.show()\n return", "_____no_output_____" ], [ "dataframe = read_in_data(\"data/yield_curve_data_ordered.csv\")", "_____no_output_____" ] ], [ [ "#### Normal Yield Curve", "_____no_output_____" ] ], [ [ "plot_yield_curve(\"2019-01-07\", dataframe)", "_____no_output_____" ] ], [ [ "#### Inverted Yield Curve", "_____no_output_____" ] ], [ [ "plot_yield_curve(\"2006-12-05\", dataframe)", "_____no_output_____" ] ], [ [ "#### Trend Over Time\nAs indicated in the notes above, the main consideration we're looking at is 10 year vs 3 month yield. So instead of a snapshot, we can plot this over time. The red line is added to visualize periods of time in which the yield curve is inverted.", "_____no_output_____" ] ], [ [ "plot_long_term_difference_measure(\"data/yield_curve_data_ordered.csv\")", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cbf64c5411e24447b3bf2992a75d35a62dc36ef8
5,866
ipynb
Jupyter Notebook
main.ipynb
Karmantez/Dogs_and_Cats_Classifier
30fef1360d90502a69f7392f216fe38c65aabeb1
[ "MIT" ]
null
null
null
main.ipynb
Karmantez/Dogs_and_Cats_Classifier
30fef1360d90502a69f7392f216fe38c65aabeb1
[ "MIT" ]
null
null
null
main.ipynb
Karmantez/Dogs_and_Cats_Classifier
30fef1360d90502a69f7392f216fe38c65aabeb1
[ "MIT" ]
null
null
null
43.132353
1,460
0.599045
[ [ [ "# import testing\nimport resnet_model\nimport preprocessing\nimport importlib\nfrom torchvision import transforms\n\nimport torch\n\n# importlib.reload(testing)\nimportlib.reload(resnet_model)\nimportlib.reload(preprocessing)\n\nprint('Libraries are loaded successfully')", "Libraries are loaded successfully\n" ], [ "# Loading data with PyTorch Tensor\n# 1. Save all image sizes in the same size\n# 2. Normalize the dataset with the mean and standard deviation of the dataset\n# 3. Convert the image dataset to a PyTorch Tensor.\n\nimage_transform = transforms.Compose([\n transforms.Resize((224, 224)),\n transforms.ToTensor(),\n transforms.Normalize([0.485, 0.456, 0.406],\n [0.229, 0.224, 0.225])\n])\n\nbatch_train = preprocessing.batch_processing('/train', image_transform, 32, True, 3)\nbatch_valid = preprocessing.batch_processing('/valid', image_transform, 32, True, 3)", "_____no_output_____" ], [ "model_ft = resnet_model.set_model_config()\nsetted_model = resnet_model.activate_model(model_ft, {'train' : batch_train, 'valid' : batch_valid}, epochs=10)", "cuda available...\nEpoch 0/9\n----------\ntrain Loss: 0.0061 Acc: 0.9272\nvalid Loss: 0.0021 Acc: 0.9773\n\nEpoch 1/9\n----------\ntrain Loss: 0.0019 Acc: 0.9832\nvalid Loss: 0.0015 Acc: 0.9824\n\nEpoch 2/9\n----------\ntrain Loss: 0.0014 Acc: 0.9844\nvalid Loss: 0.0014 Acc: 0.9840\n\nEpoch 3/9\n----------\ntrain Loss: 0.0012 Acc: 0.9880\nvalid Loss: 0.0013 Acc: 0.9857\n\nEpoch 4/9\n----------\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
cbf67a7224b10a95f74304a099f24a4e8c186205
618,027
ipynb
Jupyter Notebook
analysis/simulation/synthetic_data_analysis_AISTATS_N10_agnostic.ipynb
shamindras/bttv-aistats2020
7a2d5136647519d2c4cc6b0735599abec9c2997a
[ "MIT" ]
1
2020-08-20T09:51:10.000Z
2020-08-20T09:51:10.000Z
analysis/simulation/synthetic_data_analysis_AISTATS_N10_agnostic.ipynb
shamindras/bttv-aistats2020
7a2d5136647519d2c4cc6b0735599abec9c2997a
[ "MIT" ]
8
2020-02-13T04:48:29.000Z
2020-02-20T05:33:49.000Z
analysis/simulation/synthetic_data_analysis_AISTATS_N10_agnostic.ipynb
shamindras/bttv-aistats2020
7a2d5136647519d2c4cc6b0735599abec9c2997a
[ "MIT" ]
1
2021-09-16T14:07:31.000Z
2021-09-16T14:07:31.000Z
409.289404
174,208
0.9336
[ [ [ "from scipy.sparse import diags\nimport random\nimport numpy as np\nimport scipy as sc\nimport pandas as pd\nimport csv\nimport scipy.linalg as spl\nimport matplotlib.pyplot as plt\nfrom matplotlib import rc\nrc('text', usetex=True)\nimport time\nimport sys\nsys.path.insert(0, '../../python/')\nfrom opt_utils import *\nfrom grad_utils import *\nfrom ks_utils import *\nfrom simulation_utils import *\nfrom cv_utils import *\n%matplotlib inline", "_____no_output_____" ] ], [ [ "# Generate synethic data", "_____no_output_____" ] ], [ [ "N = 10 # number of teams\nT = 10 # number of seasons/rounds/years\ntn = [1] * int(T * N * (N - 1)/2) # number of games between each pair of teams", "_____no_output_____" ] ], [ [ "### Gaussian Process", "_____no_output_____" ] ], [ [ "random.seed(0)\nnp.random.seed(0)\nP_list = make_prob_matrix(T,N,r = 1,alpha = 1,mu = [0,0.2])\ngame_matrix_list = get_game_matrix_list_from_P(tn,P_list)", "_____no_output_____" ], [ "data = game_matrix_list # shape: T*N*N", "_____no_output_____" ] ], [ [ "## Oracle estimator", "_____no_output_____" ] ], [ [ "# vanilla BT\nrandom.seed(0)\nnp.random.seed(0)\n_, beta_oracle = gd_bt(data = P_list)", "_____no_output_____" ], [ "latent = beta_oracle\nfor i in range(N):\n plt.plot(latent[:,i], label=\"team %d\"%i)\nplt.xlabel(\"season number\")\nplt.ylabel(\"latent parameter\")\n# plt.legend(loc='upper left', bbox_to_anchor=(1, 1.03, 1, 0))", "_____no_output_____" ] ], [ [ "## Kernel method", "_____no_output_____" ], [ "## $h = T^{-3/4}$", "_____no_output_____" ] ], [ [ "T**(-3/4)", "_____no_output_____" ], [ "T, N = data.shape[0:2]\nks_data = kernel_smooth(data,1/6 * T**(-1/5))", "_____no_output_____" ], [ "ks_data[1,:,:]", "_____no_output_____" ], [ "objective_pgd, beta_pgd = gd_bt(data = ks_data,verbose=True)", "initial objective value: 311.916231\n1-th GD, objective value: 294.623853\n2-th GD, objective value: 291.826408\n3-th GD, objective value: 291.236369\n4-th GD, objective value: 291.034981\n5-th GD, objective value: 291.031352\n6-th GD, objective value: 291.031146\n7-th GD, objective value: 291.031132\n8-th GD, objective value: 291.031131\n9-th GD, objective value: 291.031131\n10-th GD, objective value: 291.031131\n11-th GD, objective value: 291.031131\n12-th GD, objective value: 291.031131\n13-th GD, objective value: 291.031131\n14-th GD, objective value: 291.031131\nConverged!\n" ], [ "T, N = data.shape[0:2]\nbeta = beta_pgd.reshape((T,N))\nf = plt.figure(1, figsize = (9,5))\nax = plt.subplot(111)\nfor i in range(N):\n ax.plot(range(1,T + 1),beta[:,i],marker = '.',label = 'Team' + str(i),linewidth=1)\n\nbox = ax.get_position()\nax.set_position([box.x0, box.y0, box.width * 0.8, box.height])\n\n# Put a legend to the right of the current axis\n# ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))\n\nplt.show()\n# f.savefig(\"l2_sq_solution.pdf\", bbox_inches='tight')", "_____no_output_____" ] ], [ [ "## LOOCV", "_____no_output_____" ] ], [ [ "start_time = time.time()\n\nrandom.seed(0)\nnp.random.seed(0)\nh_list = np.linspace(0.3, 0.01, 10)\n# h_cv, nll_cv, beta_cv, prob_cv = cv_utils.loocv_ks(data, h_list, gd_bt, num_loocv = 200, return_prob = True, out = \"notebook\")\nh_cv, nll_cv, beta_cv, prob_cv = loocv_ks(data, h_list, gd_bt, num_loocv = 200, return_prob = True, out = \"notebook\")\nloo_nll_DBT, loo_prob_DBT = max(nll_cv), prob_cv[np.argmax(nll_cv)]\n\nprint(\"--- %s seconds ---\" % (time.time() - start_time))", "1-th cv done\n2-th cv done\n3-th cv done\n4-th cv done\n5-th cv done\n6-th cv done\n7-th cv done\n8-th 
cv done\n9-th cv done\n10-th cv done\n11-th cv done\n12-th cv done\n13-th cv done\n14-th cv done\n15-th cv done\n16-th cv done\n17-th cv done\n18-th cv done\n19-th cv done\n20-th cv done\n21-th cv done\n22-th cv done\n23-th cv done\n24-th cv done\n25-th cv done\n26-th cv done\n27-th cv done\n28-th cv done\n29-th cv done\n30-th cv done\n31-th cv done\n32-th cv done\n33-th cv done\n34-th cv done\n35-th cv done\n36-th cv done\n37-th cv done\n38-th cv done\n39-th cv done\n40-th cv done\n41-th cv done\n42-th cv done\n43-th cv done\n44-th cv done\n45-th cv done\n46-th cv done\n47-th cv done\n48-th cv done\n49-th cv done\n50-th cv done\n51-th cv done\n52-th cv done\n53-th cv done\n54-th cv done\n55-th cv done\n56-th cv done\n57-th cv done\n58-th cv done\n59-th cv done\n60-th cv done\n61-th cv done\n62-th cv done\n63-th cv done\n64-th cv done\n65-th cv done\n66-th cv done\n67-th cv done\n68-th cv done\n69-th cv done\n70-th cv done\n71-th cv done\n72-th cv done\n73-th cv done\n74-th cv done\n75-th cv done\n76-th cv done\n77-th cv done\n78-th cv done\n79-th cv done\n80-th cv done\n81-th cv done\n82-th cv done\n83-th cv done\n84-th cv done\n85-th cv done\n86-th cv done\n87-th cv done\n88-th cv done\n89-th cv done\n90-th cv done\n91-th cv done\n92-th cv done\n93-th cv done\n94-th cv done\n95-th cv done\n96-th cv done\n97-th cv done\n98-th cv done\n99-th cv done\n100-th cv done\n101-th cv done\n102-th cv done\n103-th cv done\n104-th cv done\n105-th cv done\n106-th cv done\n107-th cv done\n108-th cv done\n109-th cv done\n110-th cv done\n111-th cv done\n112-th cv done\n113-th cv done\n114-th cv done\n115-th cv done\n116-th cv done\n117-th cv done\n118-th cv done\n119-th cv done\n120-th cv done\n121-th cv done\n122-th cv done\n123-th cv done\n124-th cv done\n125-th cv done\n126-th cv done\n127-th cv done\n128-th cv done\n129-th cv done\n130-th cv done\n131-th cv done\n132-th cv done\n133-th cv done\n134-th cv done\n135-th cv done\n136-th cv done\n137-th cv done\n138-th cv done\n139-th cv done\n140-th cv done\n141-th cv done\n142-th cv done\n143-th cv done\n144-th cv done\n145-th cv done\n146-th cv done\n147-th cv done\n148-th cv done\n149-th cv done\n150-th cv done\n151-th cv done\n152-th cv done\n153-th cv done\n154-th cv done\n155-th cv done\n156-th cv done\n157-th cv done\n158-th cv done\n159-th cv done\n160-th cv done\n161-th cv done\n162-th cv done\n163-th cv done\n164-th cv done\n165-th cv done\n166-th cv done\n167-th cv done\n168-th cv done\n169-th cv done\n170-th cv done\n171-th cv done\n172-th cv done\n173-th cv done\n174-th cv done\n175-th cv done\n176-th cv done\n177-th cv done\n178-th cv done\n179-th cv done\n180-th cv done\n181-th cv done\n182-th cv done\n183-th cv done\n184-th cv done\n185-th cv done\n186-th cv done\n187-th cv done\n188-th cv done\n189-th cv done\n190-th cv done\n191-th cv done\n192-th cv done\n193-th cv done\n194-th cv done\n195-th cv done\n196-th cv done\n197-th cv done\n198-th cv done\n199-th cv done\n200-th cv done\n--- 65.90615391731262 seconds ---\n" ], [ "h_cv", "_____no_output_____" ], [ "f = plt.figure(1, figsize = (7,5))\nsize_ylabel = 20\nsize_xlabel = 30\nsize_tick = 20\n\nnll_cv = nll_cv\nplt.plot(h_list[::-1], nll_cv)\nplt.xlabel(r'$h$',fontsize = size_xlabel); plt.ylabel(r\"Averaged nll\",fontsize = size_ylabel)\nplt.tick_params(axis='both', which='major', labelsize=size_tick)\n\n# f.savefig(\"cv_curve.pdf\", bbox_inches='tight')", "_____no_output_____" ], [ "import time\nstart_time = time.time()\n\nrandom.seed(0)\nnp.random.seed(0)\nh = h_cv\nnll_DBT, 
beta_DBT, prob_DBT = loo_DBT(data, h, gd_bt, num_loo = 200, return_prob = True, out = \"notebook\")\n\nprint(\"--- %s seconds ---\" % (time.time() - start_time))", "1-th cv done\n2-th cv done\n3-th cv done\n4-th cv done\n5-th cv done\n6-th cv done\n7-th cv done\n8-th cv done\n9-th cv done\n10-th cv done\n11-th cv done\n12-th cv done\n13-th cv done\n14-th cv done\n15-th cv done\n16-th cv done\n17-th cv done\n18-th cv done\n19-th cv done\n20-th cv done\n21-th cv done\n22-th cv done\n23-th cv done\n24-th cv done\n25-th cv done\n26-th cv done\n27-th cv done\n28-th cv done\n29-th cv done\n30-th cv done\n31-th cv done\n32-th cv done\n33-th cv done\n34-th cv done\n35-th cv done\n36-th cv done\n37-th cv done\n38-th cv done\n39-th cv done\n40-th cv done\n41-th cv done\n42-th cv done\n43-th cv done\n44-th cv done\n45-th cv done\n46-th cv done\n47-th cv done\n48-th cv done\n49-th cv done\n50-th cv done\n51-th cv done\n52-th cv done\n53-th cv done\n54-th cv done\n55-th cv done\n56-th cv done\n57-th cv done\n58-th cv done\n59-th cv done\n60-th cv done\n61-th cv done\n62-th cv done\n63-th cv done\n64-th cv done\n65-th cv done\n66-th cv done\n67-th cv done\n68-th cv done\n69-th cv done\n70-th cv done\n71-th cv done\n72-th cv done\n73-th cv done\n74-th cv done\n75-th cv done\n76-th cv done\n77-th cv done\n78-th cv done\n79-th cv done\n80-th cv done\n81-th cv done\n82-th cv done\n83-th cv done\n84-th cv done\n85-th cv done\n86-th cv done\n87-th cv done\n88-th cv done\n89-th cv done\n90-th cv done\n91-th cv done\n92-th cv done\n93-th cv done\n94-th cv done\n95-th cv done\n96-th cv done\n97-th cv done\n98-th cv done\n99-th cv done\n100-th cv done\n101-th cv done\n102-th cv done\n103-th cv done\n104-th cv done\n105-th cv done\n106-th cv done\n107-th cv done\n108-th cv done\n109-th cv done\n110-th cv done\n111-th cv done\n112-th cv done\n113-th cv done\n114-th cv done\n115-th cv done\n116-th cv done\n117-th cv done\n118-th cv done\n119-th cv done\n120-th cv done\n121-th cv done\n122-th cv done\n123-th cv done\n124-th cv done\n125-th cv done\n126-th cv done\n127-th cv done\n128-th cv done\n129-th cv done\n130-th cv done\n131-th cv done\n132-th cv done\n133-th cv done\n134-th cv done\n135-th cv done\n136-th cv done\n137-th cv done\n138-th cv done\n139-th cv done\n140-th cv done\n141-th cv done\n142-th cv done\n143-th cv done\n144-th cv done\n145-th cv done\n146-th cv done\n147-th cv done\n148-th cv done\n149-th cv done\n150-th cv done\n151-th cv done\n152-th cv done\n153-th cv done\n154-th cv done\n155-th cv done\n156-th cv done\n157-th cv done\n158-th cv done\n159-th cv done\n160-th cv done\n161-th cv done\n162-th cv done\n163-th cv done\n164-th cv done\n165-th cv done\n166-th cv done\n167-th cv done\n168-th cv done\n169-th cv done\n170-th cv done\n171-th cv done\n172-th cv done\n173-th cv done\n174-th cv done\n175-th cv done\n176-th cv done\n177-th cv done\n178-th cv done\n179-th cv done\n180-th cv done\n181-th cv done\n182-th cv done\n183-th cv done\n184-th cv done\n185-th cv done\n186-th cv done\n187-th cv done\n188-th cv done\n189-th cv done\n190-th cv done\n191-th cv done\n192-th cv done\n193-th cv done\n194-th cv done\n195-th cv done\n196-th cv done\n197-th cv done\n198-th cv done\n199-th cv done\n200-th cv done\n--- 7.3762664794921875 seconds ---\n" ], [ "def get_winrate(data):\n T, N = data.shape[:2]\n winrate = np.sum(data, 2) / (np.sum(data,2) + np.sum(data,1))\n return winrate\n\ndef loo_winrate(data,num_loo = 200):\n indices = np.array(np.where(np.full(data.shape, True))).T\n cum_match = 
np.cumsum(data.flatten())\n    \n    loglikes_loo = 0\n    prob_loo = 0\n    for i in range(num_loo):\n        data_loo = data.copy()\n        rand_match = np.random.randint(np.sum(data))\n        rand_index = indices[np.min(np.where(cum_match >= rand_match)[0])]\n        data_loo[tuple(rand_index)] -= 1\n        \n        winrate_loo = get_winrate(data = data_loo)\n        prob_loo += 1 - winrate_loo[rand_index[0],rand_index[1]]\n\n    # note: loglikes_loo is never accumulated in the loop above, so the returned\n    # nll is always 0 (cf. the 0.00 LOO nll reported for winrate in the table below)\n    return (-loglikes_loo/num_loo, prob_loo/num_loo)\n# winrate\nrandom.seed(0)\nnp.random.seed(0)\nwinrate = get_winrate(data)\nloo_nll_wr, loo_prob_wr = loo_winrate(data)", "_____no_output_____" ], [ "loo_prob_wr", "_____no_output_____" ], [ "# vanilla BT\nimport time\nstart_time = time.time()\n\nrandom.seed(0)\nnp.random.seed(0)\nobjective_vanilla_bt, beta_vanilla_bt = gd_bt(data = data,verbose = True)\nloo_nll_vBT, loo_prob_vBT = loo_vBT(data,num_loo = 200)\n\nprint(\"--- %s seconds ---\" % (time.time() - start_time))", "initial objective value: 311.916231\n1-th GD, objective value: 269.104662\n2-th GD, objective value: 257.165696\n3-th GD, objective value: 256.492261\n4-th GD, objective value: 256.331496\n5-th GD, objective value: 256.283176\n6-th GD, objective value: 256.267146\n7-th GD, objective value: 256.261536\n8-th GD, objective value: 256.259508\n[... 9-th through 31-th GD omitted; the objective plateaus at 256.258303 ...]\n32-th GD, objective value: 256.258303\nConverged!\n--- 1.0601952075958252 seconds ---\n" ], [ "loo_nll_vBT", "_____no_output_____" ], [ "loo_prob_vBT", "_____no_output_____" ], [ "rank_dif_estimator = [0] * 3\nbeta_all = [winrate,beta_vanilla_bt,beta_cv]\nfor i in range(len(rank_dif_estimator)):\n    betai = beta_all[i]\n    rank_dif_estimator[i] = np.mean(av_dif_rank(beta_oracle,betai))\nrank_dif_estimator\n\ndf = pd.DataFrame({'estimator':['winrate','vanilla BT','DBT'],'average rank difference':rank_dif_estimator,\n                  'LOO Prob':[loo_prob_wr,loo_prob_vBT,loo_prob_DBT],\n                  'LOO nll':[loo_nll_wr,loo_nll_vBT,loo_nll_DBT]})", "_____no_output_____" ], [ "print(df.to_latex(index_names=True, escape=False, index=False, \n                  column_format='c|c|c|c|', float_format=\"{:0.2f}\".format,\n                  header=True, bold_rows=True))", "\\begin{tabular}{c|c|c|c|}\n\\toprule\n estimator &  average rank difference &  LOO Prob &  LOO nll \\\\\n\\midrule\n   winrate &                     2.98 &      0.49 &     0.00 \\\\\nvanilla BT &                     2.78 &      0.48 &     0.90 \\\\\n       DBT &                     2.60 &      0.48 &     0.90 \\\\\n\\bottomrule\n\\end{tabular}\n\n" ], [ "T, N = data.shape[0:2]\nf = plt.figure(1, figsize = (10,8))\n\nsize_ylabel = 20\nsize_xlabel = 15\nsize_title = 15\nsize_tick = 13\nsize_legend = 15.4\nfont_title = \"Times New Roman Bold\"\n\nrandom.seed(0)\nnp.random.seed(0)\ncolor_matrix = c=np.random.rand(N,3)\n\nbeta = beta_oracle.reshape((T,N))\nax = plt.subplot(221)\nfor i in 
range(N):\n ax.plot(range(1,T + 1),beta[:,i],c=color_matrix[i,:],marker = '.',label = 'Team' + str(i),linewidth=1)\n ax.tick_params(axis='both', which='major', labelsize=size_tick)\n plt.title(r\"True $\\beta^*$\",fontsize = size_title)\n plt.xlabel(r\"$t$\",fontsize = size_xlabel); plt.ylabel(r\"${\\beta}^*$\",fontsize = size_ylabel,rotation = \"horizontal\")\n bottom, top = plt.ylim()\n \nbeta = beta_cv.reshape((T,N))\nax = plt.subplot(222)\nfor i in range(N):\n ax.plot(range(1,T + 1),beta[:,i],c=color_matrix[i,:],marker = '.',label = 'Team' + str(i),linewidth=1)\n ax.tick_params(axis='both', which='major', labelsize=size_tick)\n plt.title(r\"Dynamic Bradley-Terry, Gaussian Kernel\",fontsize = size_title)\n plt.xlabel(r\"$t$\",fontsize = size_xlabel); plt.ylabel(r\"$\\hat{\\beta}$\",fontsize = size_ylabel,rotation = \"horizontal\")\n# plt.ylim((bottom, top))\n\n\nbeta = winrate.reshape((T,N))\nax = plt.subplot(223)\nfor i in range(N):\n ax.plot(range(1,T + 1),beta[:,i],c=color_matrix[i,:],marker = '.',label = 'Team' + str(i),linewidth=1)\n ax.tick_params(axis='both', which='major', labelsize=size_tick)\n plt.title(r\"Win Rate\",fontsize = size_title)\n plt.xlabel(\"t\",fontsize = size_xlabel); plt.ylabel(r\"Win Rate\",fontsize = 10,rotation = \"vertical\")\n\n\nax.legend(loc='lower left', fontsize = size_legend,labelspacing = 0.75,bbox_to_anchor=(-0.03,-0.6),ncol = 5)\n\n\nbeta = beta_vanilla_bt.reshape((T,N))\nax = plt.subplot(224)\nfor i in range(N):\n ax.plot(range(1,T + 1),beta[:,i],c=color_matrix[i,:],marker = '.',label = 'Team' + str(i),linewidth=1)\n ax.tick_params(axis='both', which='major', labelsize=size_tick)\n plt.title(r\"Original Bradley-Terry\",fontsize = size_title)\n plt.xlabel(r\"$t$\",fontsize = size_xlabel); plt.ylabel(r\"$\\hat{\\beta}$\",fontsize = size_ylabel,rotation = \"horizontal\")\n \n\nplt.subplots_adjust(hspace = 0.3)\nplt.show()\n# f.savefig(\"compare.pdf\", bbox_inches='tight')", "_____no_output_____" ] ], [ [ "## repeated experiment", "_____no_output_____" ] ], [ [ "import time\nstart_time = time.time()\n\nrandom.seed(0)\nnp.random.seed(0)\nB = 20\nloo_ks = 200\nloo = 200\nh_cv_list = []\nrank_diff_DBT_list, loo_nll_DBT_list, loo_prob_DBT_list = [], [], []\nrank_diff_wr_list, loo_nll_wr_list, loo_prob_wr_list = [], [], []\nrank_diff_vBT_list, loo_nll_vBT_list, loo_prob_vBT_list = [], [], []\n\nfor b in range(B):\n N = 10 # number of teams\n T = 10 # number of seasons/rounds/years\n tn = [1] * int(T * N * (N - 1)/2) # number of games between each pair of teams\n\n [alpha,r] = [1,1]\n ##### get beta here #####\n P_list = make_prob_matrix(T,N,r = 1,alpha = 1,mu = [0,0.2])\n P_winrate = P_list.sum(axis=2)\n \n game_matrix_list = get_game_matrix_list_from_P(tn,P_list)\n data = game_matrix_list # shape: T*N*N\n\n # true beta\n _, beta_oracle = gd_bt(data = P_list)\n\n # ks cv\n h_list = np.linspace(0.15, 0.01, 10)\n h_cv, nll_cv, beta_cv, prob_cv = loocv_ks(data, h_list, gd_bt, num_loocv = loo_ks, verbose = False,\n return_prob = True, out = \"notebook\")\n h_cv_list.append(h_cv)\n loo_nll_DBT_list.append(max(nll_cv)) \n loo_prob_DBT_list.append(prob_cv[np.argmax(nll_cv)])\n rank_diff_DBT_list.append(np.mean(av_dif_rank(beta_oracle,beta_cv)))\n# # fixed h\n# h_cv = 1/6 * T**(-1/5)\n# nll_cv, beta_cv, prob_cv = loo_DBT(data, h_cv, gd_bt, num_loo = 200, return_prob = True, out = \"notebook\")\n# h_cv_list.append(h_cv)\n# loo_nll_DBT_list.append(nll_cv) \n# loo_prob_DBT_list.append(prob_cv)\n# rank_diff_DBT_list.append(np.mean(av_dif_rank(beta_oracle,beta_cv)))\n 
\n winrate = get_winrate(data)\n loo_nll_wr, loo_prob_wr = loo_winrate(data,num_loo = loo)\n loo_nll_wr_list.append(loo_nll_wr)\n loo_prob_wr_list.append(loo_prob_wr)\n rank_diff_wr_list.append(np.mean(av_dif_rank(beta_oracle,winrate)))\n \n objective_vanilla_bt, beta_vBT = gd_bt(data = data)\n loo_nll_vBT, loo_prob_vBT = loo_vBT(data,num_loo = loo)\n loo_nll_vBT_list.append(loo_nll_vBT)\n loo_prob_vBT_list.append(loo_prob_vBT)\n rank_diff_vBT_list.append(np.mean(av_dif_rank(beta_oracle,beta_vBT)))\n \n print(str(b) + '-th repeat finished.')\n print(\"--- %s seconds ---\" % (time.time() - start_time))\n \n \nrank_dif_estimator = [np.mean(rank_diff_wr_list),\n np.mean(rank_diff_vBT_list),\n np.mean(rank_diff_DBT_list)]\nloo_prob_wr = np.mean(loo_prob_wr_list)\nloo_prob_DBT = np.mean(loo_prob_DBT_list)\nloo_prob_vBT = np.mean(loo_prob_vBT_list)\n\nloo_nll_wr = np.mean(loo_nll_wr_list)\nloo_nll_DBT = np.mean(loo_nll_DBT_list)\nloo_nll_vBT = np.mean(loo_nll_vBT_list)\n\ndf = pd.DataFrame({'estimator':['winrate','vanilla BT','DBT'],'average rank difference':rank_dif_estimator,\n 'LOO Prob':[loo_prob_wr,loo_prob_vBT,loo_prob_DBT],\n 'LOO nll':[loo_nll_wr,loo_nll_vBT,loo_nll_DBT]})\n\nprint(\"--- %s seconds ---\" % (time.time() - start_time))", "0-th repeat finished.\n--- 67.12571978569031 seconds ---\n1-th repeat finished.\n--- 597.455201625824 seconds ---\n2-th repeat finished.\n--- 660.9241359233856 seconds ---\n3-th repeat finished.\n--- 713.6041169166565 seconds ---\n4-th repeat finished.\n--- 769.607390165329 seconds ---\n5-th repeat finished.\n--- 1291.2428710460663 seconds ---\n6-th repeat finished.\n--- 1347.3113181591034 seconds ---\n7-th repeat finished.\n--- 1871.4890196323395 seconds ---\n8-th repeat finished.\n--- 1919.3885896205902 seconds ---\n9-th repeat finished.\n--- 2453.0996985435486 seconds ---\n10-th repeat finished.\n--- 2516.9276571273804 seconds ---\n11-th repeat finished.\n--- 2572.0560116767883 seconds ---\n12-th repeat finished.\n--- 2636.5123143196106 seconds ---\n13-th repeat finished.\n--- 2693.700991630554 seconds ---\n14-th repeat finished.\n--- 2751.1863493919373 seconds ---\n15-th repeat finished.\n--- 2811.14683508873 seconds ---\n16-th repeat finished.\n--- 2871.2653493881226 seconds ---\n17-th repeat finished.\n--- 3410.2227263450623 seconds ---\n18-th repeat finished.\n--- 3473.6225197315216 seconds ---\n19-th repeat finished.\n--- 3536.0268864631653 seconds ---\n--- 3536.028876543045 seconds ---\n" ], [ "print(df.to_latex(index_names=True, escape=False, index=False, \n column_format='c|c|c|c|', float_format=\"{:0.2f}\".format,\n header=True, bold_rows=True))", "\\begin{tabular}{c|c|c|c|}\n\\toprule\n estimator & average rank difference & LOO Prob & LOO nll \\\\\n\\midrule\n winrate & 2.45 & 0.49 & 0.00 \\\\\n vanilla BT & 2.44 & 0.49 & 0.93 \\\\\n DBT & 1.87 & 0.48 & 0.90 \\\\\n\\bottomrule\n\\end{tabular}\n\n" ], [ "T, N = data.shape[0:2]\nf = plt.figure(1, figsize = (10,8))\n\nsize_ylabel = 20\nsize_xlabel = 15\nsize_title = 15\nsize_tick = 13\nsize_legend = 15.4\nfont_title = \"Times New Roman Bold\"\n\nrandom.seed(0)\nnp.random.seed(0)\ncolor_matrix = c=np.random.rand(N,3)\n\nbeta = beta_oracle.reshape((T,N))\nax = plt.subplot(221)\nfor i in range(N):\n ax.plot(range(1,T + 1),beta[:,i],c=color_matrix[i,:],marker = '.',label = 'Team' + str(i),linewidth=1)\n ax.tick_params(axis='both', which='major', labelsize=size_tick)\n plt.title(r\"Oracle $\\beta^o$\",fontsize = size_title)\n plt.xlabel(r\"$T$\",fontsize = size_xlabel); 
plt.ylabel(r\"${\\beta}^o$\",fontsize = size_ylabel,rotation = \"horizontal\")\n# bottom, top = plt.ylim()\n \nbeta = beta_cv.reshape((T,N))\nax = plt.subplot(222)\nfor i in range(N):\n ax.plot(range(1,T + 1),beta[:,i],c=color_matrix[i,:],marker = '.',label = 'Team' + str(i),linewidth=1)\n ax.tick_params(axis='both', which='major', labelsize=size_tick)\n plt.title(r\"Dynamic Bradley-Terry, Gaussian Kernel\",fontsize = size_title)\n plt.xlabel(r\"$T$\",fontsize = size_xlabel); plt.ylabel(r\"$\\hat{\\beta}$\",fontsize = size_ylabel,rotation = \"horizontal\")\n# plt.ylim((bottom, top))\n\n\nbeta = winrate.reshape((T,N))\nax = plt.subplot(223)\nfor i in range(N):\n ax.plot(range(1,T + 1),beta[:,i],c=color_matrix[i,:],marker = '.',label = 'Team' + str(i),linewidth=1)\n ax.tick_params(axis='both', which='major', labelsize=size_tick)\n plt.title(r\"Win Rate\",fontsize = size_title)\n plt.xlabel(r\"$T$\",fontsize = size_xlabel); plt.ylabel(r\"Win Rate\",fontsize = 10,rotation = \"vertical\")\n\n\n# ax.legend(loc='lower left', fontsize = size_legend,labelspacing = 0.75,bbox_to_anchor=(-0.03,-0.6),ncol = 5)\n\n\nbeta = beta_vBT.reshape((T,N))\nax = plt.subplot(224)\nfor i in range(N):\n ax.plot(range(1,T + 1),beta[:,i],c=color_matrix[i,:],marker = '.',label = 'Team' + str(i),linewidth=1)\n ax.tick_params(axis='both', which='major', labelsize=size_tick)\n plt.title(r\"Vanilla Bradley-Terry\",fontsize = size_title)\n plt.xlabel(r\"$T$\",fontsize = size_xlabel); plt.ylabel(r\"$\\hat{\\beta}$\",fontsize = size_ylabel,rotation = \"horizontal\")\n \n\nplt.subplots_adjust(hspace = 0.3)\nplt.show()\nf.savefig(\"compare_beta_NT10_n1_ag.pdf\", bbox_inches='tight')", "_____no_output_____" ], [ "loo_prob_DBT_list", "_____no_output_____" ], [ "f = plt.figure(1, figsize = (16,8))\n\nsize_ylabel = 20\nsize_xlabel = 15\nsize_title = 15\nsize_tick = 13\nsize_legend = 15.4\nfont_title = \"Times New Roman Bold\"\n\nrandom.seed(0)\nnp.random.seed(0)\ncolor_list = ['red','blue','green']\nx_range = [i for i in range(B)]\n\nax = plt.subplot(311)\nax.plot(x_range,rank_diff_wr_list,c=color_list[0],marker = '.',label = 'win rate',linewidth=1)\nax.plot(x_range,rank_diff_vBT_list,c=color_list[1],marker = '.',label = 'vBT',linewidth=1)\nax.plot(x_range,rank_diff_DBT_list,c=color_list[2],marker = '.',label = 'DBT',linewidth=1)\n\nax.tick_params(axis='both', which='major', labelsize=size_tick)\nplt.title(r\"average rank difference over 20 repeats (agnostic.N,T=10,n=1)\",fontsize = size_title)\nplt.xlabel(r\"Repeat\",fontsize = size_xlabel); plt.ylabel(r\"ave. 
rank diff.\",fontsize = size_ylabel,rotation = \"vertical\")\nax.legend(loc='upper left', fontsize = size_legend,labelspacing = 0.75,ncol = 1)\n \nax = plt.subplot(312)\nax.plot(x_range,loo_nll_vBT_list,c=color_list[1],marker = '.',label = 'vBT',linewidth=1)\nax.plot(x_range,loo_nll_DBT_list,c=color_list[2],marker = '.',label = 'DBT',linewidth=1)\n\nax.tick_params(axis='both', which='major', labelsize=size_tick)\nplt.title(r\"LOO nll over 20 repeats (agnostic.N,T=10,n=1)\",fontsize = size_title)\nplt.xlabel(r\"Repeat\",fontsize = size_xlabel); plt.ylabel(r\"LOO nll\",fontsize = size_ylabel,rotation = \"vertical\")\nax.legend(loc='upper left', fontsize = size_legend,labelspacing = 0.75,ncol = 1)\n\nax = plt.subplot(313)\nax.plot(x_range,loo_prob_wr_list,c=color_list[0],marker = '.',label = 'win rate',linewidth=1)\nax.plot(x_range,loo_prob_vBT_list,c=color_list[1],marker = '.',label = 'vBT',linewidth=1)\nax.plot(x_range,loo_prob_DBT_list,c=color_list[2],marker = '.',label = 'DBT',linewidth=1)\n\nax.tick_params(axis='both', which='major', labelsize=size_tick)\nplt.title(r\"LOO prob over 20 repeats (agnostic.N,T=10,n=1)\",fontsize = size_title)\nplt.xlabel(r\"Repeat\",fontsize = size_xlabel); plt.ylabel(r\"LOO prob\",fontsize = size_ylabel,rotation = \"vertical\")\nax.legend(loc='upper left', fontsize = size_legend,labelspacing = 0.75,ncol = 1)\n\n\nplt.subplots_adjust(hspace = 0.6)\nplt.show()\nf.savefig(\"perform_NT10_n1_ag.pdf\", bbox_inches='tight')", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
cbf67ea71443059e40d8aa49aedc89b3082dd38d
221,487
ipynb
Jupyter Notebook
01-neural-networks/neural-network-training.ipynb
williamardianto/neural-network-from-scratch
9e01bcd2d6663a074e8152a83ffe671992b14a89
[ "MIT" ]
null
null
null
01-neural-networks/neural-network-training.ipynb
williamardianto/neural-network-from-scratch
9e01bcd2d6663a074e8152a83ffe671992b14a89
[ "MIT" ]
null
null
null
01-neural-networks/neural-network-training.ipynb
williamardianto/neural-network-from-scratch
9e01bcd2d6663a074e8152a83ffe671992b14a89
[ "MIT" ]
null
null
null
225.546843
74,236
0.902206
[ [ [ "# Neural Networks", "_____no_output_____" ], [ "<a id='Table_of_Content'></a>\n\n**[1. Neural Networks](#1.Neural_Networks)**\n * [1.1. Perceptron](#1.1.Perceptron)\n * [1.2. Sigmoid](#1.2.Sigmoid)\n \n**[2. Neural Networks Architecture](#2.Neural_Networks_Architecture)**\n\n**[3. Training Neural Network](#3.Training_Neural_Network)**\n * [3.1. Forward Propagation](#3.1.Forward_Propagation)\n * [3.2. Compute Error](#3.2.Compute_Error)\n * [3.3. Back Propagation](#3.3.Back_Propagation)\n * [3.4. Gradient Descent](#3.4.Gradient_Descent)\n * [3.5. Computational Graph](#3.5.Computational_Graph)\n * [3.6. Gradient_Checking](#3.6.Gradient_Checking)\n * [3.7. Parameter Update](#3.7.Parameter_Update)\n * [3.8. Learning Rate](#3.8.Learning_Rate)\n\n", "_____no_output_____" ], [ "<a id='1.Neural_Networks'></a>\n\n# 1. Neural Networks\n\n\n\nNeural networks (NN) are a broad family of algorithms that have formed the basis for the recent resurgence in the computational field called deep learning. Early work on neural networks actually began in the 1950s and 60s. And just recently, neural network has experienced a resurgence of interest, as deep learning has achieved impressive state-of-the-art results. \n\nNeural network is basically a mathematical model built from simple functions with changing parameters. Just like a biological neuron has dendrites to receive signals, a cell body to process them, and an axon to send signals out to other neurons, an artificial neuron has a number of input channels, a processing stage, and one output that can branch out to multiple other artificial neurons. Neurons are interconnected and pass message to each other.\n\nTo understand neural networks, Let's get started with **Perceptron**.\n\n<center><img src=\"images/neurons.png\" alt=\"neuron\" width=\"500px\"/></center>", "_____no_output_____" ], [ "<a id='1.1.Perceptron'></a>\n\n## 1.1. Perceptron\n\nA perceptron takes several binary inputs, $x_1,x_2,…,x_n$, and produces a single binary output.\n\n<center><img src=\"images/perceptron.png\" alt=\"neuron\" width=\"300px\"/></center>\n\nThe example above shows a perceptron taking three inputs $x_1, x_2, x_3$. Each input is given a $weight$ $W \\in \\mathbb{R}$ and it serves to express the importance of its corresponding input in the computation of output for that perceptron. The perceptron output, 0 or 1, is determined by the weighted sum $\\sum_i w_ix_i$ with respect to a $threshold$ value as follows:\n\n\\begin{equation}\n output = \\left\\{\n \\begin{array}{rl}\n 0 & \\text{if } \\sum_iw_ix_i \\leq \\text{threshold}\\\\\n 1 & \\text{if } \\sum_iw_ix_i > \\text{threshold}\n \\end{array} \\right.\n\\end{equation}\n\nThe weighted sum can be categorically defined as a dot product between $w$ and $x$ as follows: \n\n$$\\sum_i w_ix_i \\equiv w \\cdot x$$\n\nwhere $w$ and $x$ are vectors corresponding to weights and inputs respectively. Introducing a bias term $b \\equiv -threshold$ results in\n\n\\begin{equation}\n output = \\left\\{\n \\begin{array}{rl}\n 0 & \\text{if } w \\cdot x + b \\leq 0\\\\\n 1 & \\text{if } w \\cdot x + b > 0\n \\end{array} \\right.\n\\end{equation}\n\nYou can think of the $bias$ as a measure of how easy it is to get the perceptron to output 1. For a perceptron with a high positive $bias$, it is extremely easy for the perceptron to output 1. 
In contrast, if the $bias$ is a relatively large negative value, it is difficult for the perceptron to output 1.\n\n<center><img src=\"images/perceptron2.png\" alt=\"neuron\" width=\"300px\"/></center>\n\nA way to think about the perceptron is that it is a device that makes **decisions** by weighing up evidence. \n", "_____no_output_____" ] ], [ [ "import numpy as np\nX = np.array([0, 1, 1])\nW = np.array([5, 1, -3])\nb=5\n\ndef perceptron_neuron(X, W, b):\n    return int(X.dot(W)+b > 0)\n\nperceptron_neuron(X,W,b)", "_____no_output_____" ] ], [ [ "<a id='1.2.Sigmoid'></a>\n## 1.2. Sigmoid \n\nSmall changes to the $weights$ and $bias$ of any perceptron in a network can cause the output to flip from 0 to 1 or 1 to 0. This flip can cause the behaviour of the rest of the network to change in a complicated way.", "_____no_output_____" ] ], [ [ "x = np.array([100])\nb = np.array([9])\nw1 = np.array([-0.08])\nw2 = np.array([-0.09])\n\nprint(perceptron_neuron(x,w1,b))\nprint(perceptron_neuron(x,w2,b))\n", "1\n0\n" ] ], [ [ "The problem above can be overcome by using a Sigmoid neuron. It functions similarly to a Perceptron but is modified such that small changes in $weights$ and $bias$ cause only a small change in the output.\n\nAs with a Perceptron, a Sigmoid neuron also computes $w \\cdot x + b $, but now with the Sigmoid function being incorporated as follows:\n\n\\begin{equation}\n    z = w \\cdot x + b \\\\\n    \\sigma(z) = \\frac{1}{1+e^{-z}}\n\\end{equation}\n\n<center><img src=\"images/sigmoid_neuron.png\" alt=\"neuron\" width=\"500px\"/></center>\n\nA Sigmoid function produces output between 0 and 1, and the figure below shows the function. If $z$ is large and positive, the output of a sigmoid neuron approximates to 1, just as it would for a perceptron. Alternatively, if $z$ is highly negative, the output approximates to 0.\n\n<center><img src=\"images/sigmoid_shape.png\" alt=\"neuron\" width=\"400px\"/></center>", "_____no_output_____" ] ], [ [ "def sigmoid(x):\n    return 1/(1 + np.exp(-x))\n\ndef sigmoid_neuron(X, W, b):\n    z = X.dot(W)+b\n    return sigmoid(z)\n\nprint(sigmoid_neuron(x,w1,b))\nprint(sigmoid_neuron(x,w2,b))", "[0.73105858]\n[0.5]\n" ] ], [ [ "Click here to go back [Table of Content](#Table_of_Content).", "_____no_output_____" ], [ "<a id='2.Neural_Networks_Architecture'></a>\n \n# 2. Neural Networks Architecture\n\nA neural network can take many forms. A typical architecture consists of an input layer (leftmost), an output layer (rightmost), and a middle layer (hidden layer). Each layer can have multiple neurons, while the number of neurons in the output layer depends on the number of classes.\n\n<center><img src=\"images/neuralnetworks.png\" alt=\"neuron\" width=\"600px\"/></center>\n\n\nClick here to go back [Table of Content](#Table_of_Content).", "_____no_output_____" ] ], [ [ "# Create dataset\nfrom sklearn.datasets import make_moons, make_circles\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\nseed = 123\n\nnp.random.seed(seed)\nX, y = make_circles(n_samples=1000, factor=.5, noise=.1, random_state=seed)", "_____no_output_____" ], [ "colors = {0:'red', 1:'blue'}\ndf = pd.DataFrame(dict(x=X[:,0], y=X[:,1], label=y))\n\nfig, ax = plt.subplots()\ngrouped = df.groupby('label')\nfor key, group in grouped:\n    group.plot(ax=ax, kind='scatter', x='x', y='y', label=key, color=colors[key])\nplt.show()", "_____no_output_____" ] ], [ [ "<a id='3.Training_Neural_Network'></a>\n\n## 3. 
Training Neural Network\n\nPreviously we have learnt that $weights$ express the importance of variables, and $bias$ is a threshold to control the behaviour of neurons. So, how can we determine these $weights$ and $bias$?\n\nConsider these steps: \n1. Since we do not know the ideal $weights$ and $bias$, we initialize them using random numbers **(parameter initialization)**.\n2. Let the data flow through the network with these initialized $weights$ and $bias$ to get a predicted output. This process is known as **forward propagation**.\n3. Compare the predicted output with the actual output. An error is computed if there is a difference between them. A high error thus indicates that the current $weights$ and $bias$ do not give an accurate prediction. **(compute error)**\n4. To fix these $weights$ and $bias$, a backward computation is carried out by finding the partial derivative of the error with respect to each $weight$ and $bias$ and then updating their values accordingly. This process is known as **backpropagation**.\n5. Repeat steps (2) to (4) until the error is below a pre-defined threshold to obtain the optimized $weights$ and $bias$.\n", "_____no_output_____" ], [ "<a id='3.1.Forward_Propagation'></a>\n\n### 3.1. Forward Propagation\n\n<center><img src=\"images/network2.png\" alt=\"neuron\" width=\"400px\"/></center>\n\nThe model above has two neurons in the input layer, four neurons (also known as **activation units**) in the hidden layer, and one neuron in the output layer. \n\n$X = [x_1, x_2]$ is the input matrix\n\n$W^{j+1} =$ matrix of $weights$ controlling function mapping from layer $j$ to layer $j + 1$ \n\n\\begin{align}\n  W^{j+1} \\equiv \n\\begin{bmatrix}\nw_{11} & w_{12} \\\\\nw_{21} & w_{22} \\\\\nw_{31} & w_{32} \\\\\nw_{41} & w_{42} \\\\\n\\end{bmatrix}^{j+1}\n\\end{align}\nIn $W^{j+1}_{kl}$, $k$ is a node in layer $j+1$ and $l$ is a node in layer $j$\n\n$B^{j+1} = $ matrix of $bias$ controlling function mapping from layer $j$ to layer $j + 1$\n\n\\begin{align}\n  B^{j+1} \\equiv \n\\begin{bmatrix}\nb_{1} & b_{2} & b_{3} & b_{4} \n\\end{bmatrix}^{j+1}\n\\end{align}\n\nIn $B^{j+1}_k$, $k$ is a node in layer $j + 1$\n\nThe activation units can be labeled as $a_i^j =$ \"activation\" of unit $i$ in layer $j$. \n\nIf $j=0$, $a_i^j$ is equivalent to the input layer. 
\n\n\nFinally, the activations in layer 1 can be denoted as\n\n\\begin{align}\n    a_1^1 = \\sigma(W_{11}^1x_1+W_{12}^1x_2+B^1_{1}) \\\\\n    a_2^1 = \\sigma(W_{21}^1x_1+W_{22}^1x_2+B^1_{2}) \\\\\n    a_3^1 = \\sigma(W_{31}^1x_1+W_{32}^1x_2+B^1_{3}) \\\\\n    a_4^1 = \\sigma(W_{41}^1x_1+W_{42}^1x_2+B^1_{4})\n\\end{align}\n\nSimplified using vectorization,\n\n\\begin{align}\n    a^1 = \\sigma(X \\cdot W^{1T}+B^1) \\\\\n\\end{align}\n\n\\begin{align}\n    output = \\sigma( a^1 \\cdot W^{2T}+B^2) \\\\\n\\end{align}", "_____no_output_____" ], [ "Implement forward propagation and feed in the generated data\n\nTips:\n- Use the numpy function *random.randn()* to draw from a Gaussian distribution with mean 0 and variance 1.", "_____no_output_____" ] ], [ [ "#step 1: parameters initialization\ndef initialize_params():\n    params = {\n        'W1': np.random.randn(4,2),\n        'B1': np.random.randn(1,4),\n        'W2': np.random.randn(1,4),\n        'B2': np.random.randn(1,1),\n    }\n    return params", "_____no_output_____" ], [ "#step 2: forward propagation\nx_ = np.array([X[0]])\ny_ = np.array([y[0]])\n\nnp.random.seed(0)\nparams = initialize_params()\n\ndef forward(X, params):\n    a1 = sigmoid(X.dot(params['W1'].T)+params['B1'])\n    output = sigmoid(a1.dot(params['W2'].T)+params['B2'])\n    cache={'a1':a1, 'params': params}\n    return output, cache\n\noutput, cache = forward(x_, params)\nprint('Actual output: ', y_)\nprint('Predicted output: ',output)", "Actual output:  [0]\nPredicted output:  [[0.91620003]]\n" ] ], [ [ "<a id='3.2.Compute_Error'></a>\n\n### 3.2. Compute Error\n\nError is also known as Loss or Cost.\n\nA **loss function** is usually a function defined on a data point, prediction and label, and measures the penalty. \n\nA **cost function** is usually more general. It might be a sum of loss functions over your training set plus some model complexity penalty (regularization).\n\nTo compute the error, we should first define a $cost$ $function$. For simplicity, we will use **One Half Mean Squared Error** as our cost function. The equation is listed below:\n\n\\begin{equation}\n    MSE = \\frac{1}{2n} \\sum (\\hat y - y)^2\n\\end{equation}\n\nwhere $n$ is the number of training samples, $\\hat y$ is the predicted output, and $y$ is the actual output. A low cost is returned if the predicted output is close to the actual output, which indicates an accurate prediction. ", "_____no_output_____" ] ], [ [ "#step 3: cost function\ndef mse(yhat, y):\n    n = yhat.shape[0]\n    return (1/(2*n)) * np.sum(np.square(yhat-y))\n\nmse(output, y_)", "_____no_output_____" ] ], [ [ "<a id='3.3.Back_Propagation'></a>\n\n### 3.3. Back Propagation\n\nNow we know that to get a good prediction, the cost should be as small as possible. To minimize the cost, we have to tune the $weights$ and $bias$, but how can we do that? Do we go with random trial and error, or is there a better way to do it? Fortunately, there is a better way, and it is called **Gradient Descent**.\n\n", "_____no_output_____" ], [ "<a id='3.4.Gradient_Descent'></a>\n\n### 3.4. Gradient Descent\n\nGradient descent is an optimization algorithm that iteratively looks for optimal $weights$ and $bias$ so that the cost gets smaller, ideally approaching its minimum.\n\nIn the iterative process, the gradient (of the cost function with respect to $weights$ and $bias$) is computed. The gradient is the change in cost when $weights$ and $bias$ are changed. 
This helps us update $weights$ and $bias$ in the direction in which the cost is minimized.\n\nLet's recall the forward propagation equation:\n\n\\begin{align}\n    a^1 = \\sigma(X \\cdot W^{1T}+B^1) \\\\\n    output = \\sigma( a^1 \\cdot W^{2T}+B^2) \\\\\n    cost = \\frac{1}{2n} \\sum (output - y)^2\n\\end{align}\n\nArranging them into a single equation, the $cost$ $L$ can be defined as follows:\n\n\\begin{align}\n    L = \\frac{1}{2n} \\sum (\\sigma( \\sigma(X \\cdot W^{1T}+B^1) \\cdot W^{2T}+B^2) - y)^2\n\\end{align}\n\nFrom the equation, we want to find the gradient or derivative of $L$ with respect to $W^1, W^2, B^1, B^2$.\n\\begin{align}\n    \\frac{\\partial L}{\\partial W^1}, \\frac{\\partial L}{\\partial W^2}, \\frac{\\partial L}{\\partial B^1}, \\frac{\\partial L}{\\partial B^2}\n\\end{align}\n\nComputing the partial derivatives of $L$ with respect to the $weights$ and $bias$ can become very complex as the number of layers in the network grows. To make it simple, we can break the equation into smaller components and use the **chain rule** to derive the partial derivatives.", "_____no_output_____" ], [ "<a id='3.5.Computational_Graph'></a>\n\n### 3.5. Computational Graph\nEventually, we can think of the forward propagation equation as a computational graph.\n\n<center><img src=\"images/comp_graph.png\" alt=\"neuron\" width=\"500\"/></center>\n<center><img src=\"images/comp_graph2.png\" alt=\"neuron\" width=\"500px\"/></center>\n<center><img src=\"images/comp_graph3.png\" alt=\"neuron\" width=\"500px\"/></center>\n\n\n\n", "_____no_output_____" ], [ "#### Scalar Example\n\\begin{align}\n    a_1^1 = \\sigma(W_{11}^1x_1+W_{12}^1x_2+B^1_{1}) \\quad \\equiv \\quad \\frac{1}{1+e^{-(W_1x_1+W_2x_2+B_1)}}\n\\end{align}\n\n<center><img src=\"images/simple_comp_graph.png\" alt=\"neuron\" width=\"600\"/></center>\n\n#### Note:\n\\begin{align}\n    L \\quad &\\rightarrow \\quad \\frac{\\partial L}{\\partial L} = 1 \\\\\n    L = \\frac{1}{2n} \\sum (output - y)^2 \\quad &\\rightarrow \\quad \\frac{\\partial L}{\\partial output} = \\frac{1}{n} (output - y) \\\\\n    f(x) = e^x \\quad &\\rightarrow \\quad \\frac{\\partial f}{\\partial x} = e^x \\\\\n    f(x) = xy \\quad &\\rightarrow \\quad \\frac{\\partial f}{\\partial x} = y, \\quad \\frac{\\partial f}{\\partial y} = x \\\\\n    f(x) = 1/x \\quad &\\rightarrow \\quad \\frac{\\partial f}{\\partial x} = -1/x^2 \\\\\n    f(x) = x+c \\quad &\\rightarrow \\quad \\frac{\\partial f}{\\partial x} = 1 \\\\\n    \\sigma(x) = \\frac{1}{1+e^{-x}} \\quad &\\rightarrow \\quad \\frac{\\partial \\sigma}{\\partial x} = \\sigma(1-\\sigma)\n\\end{align}", "_____no_output_____" ] ], [ [ "print('X: ', X[0])\nprint('W11: ', params['W1'][0])\nprint('B11: ', params['B1'][0][0])\n\n# how does the computational graph look in our case?\n# calculate the forward and backward flows", "X:  [-0.08769568  1.08597835]\nW11:  [1.76405235 0.40015721]\nB11:  -0.10321885179355784\n" ] ], [ [ "\n\n#### A Vectorized Example\n\n\\begin{align}\n    a^1 = \\sigma(X \\cdot W^{1T}+B^1) \\\\\n\\end{align}\n\n<center><img src=\"images/vec_comp_graph.png\" alt=\"neuron\" width=\"700\"/></center>\n\n#### Note:\n\\begin{align}\n    q = X\\cdot(W^T) \\quad &\\rightarrow \\quad \\frac{\\partial f}{\\partial X} = \\frac{\\partial f}{\\partial q} \\cdot W , \\quad \\frac{\\partial f}{\\partial W} = X^T \\cdot \\frac{\\partial f}{\\partial q} \\\\\n    l = q+B \\quad &\\rightarrow \\quad \\frac{\\partial f}{\\partial B} = \\begin{bmatrix}1 & 1 \\end{bmatrix} \\cdot \\frac{\\partial f}{\\partial l} , \\quad \\frac{\\partial f}{\\partial q} = \\frac{\\partial 
f}{\\partial l}\n\\end{align}\n", "_____no_output_____" ] ], [ [ "def dmse(output, y):\n return (output - y)/output.shape[0]\n \ndef backward(X, output, y, cache):\n grads={}\n a1=cache['a1']\n params=cache['params']\n \n dloss = dmse(output, y)\n \n doutput = output*(1-output)*dloss\n \n #compute gradient of B2 and W2\n dW2 = a1.T.dot(doutput)\n dB2 = np.sum(doutput, axis=0, keepdims=True)\n \n dX2 = doutput.dot(params['W2'])\n da1 = a1*(1-a1)*dX2\n \n #compute gradient of B1 and W1\n dW1 = X.T.dot(da1)\n dB1 = np.sum(da1, axis=0, keepdims=True)\n \n grads['W1'] = dW1.T\n grads['W2'] = dW2.T\n grads['B1'] = dB1\n grads['B2'] = dB2\n \n return grads\n ", "_____no_output_____" ], [ "X_ = X[:3]\nY_ = y[:3].reshape(-1,1)\n\ndef step(X,y,params):\n\n output, cache = forward(X, params)\n\n cost = mse(output, y)\n\n grads = backward(X, output, y, cache)\n\n return (cost, grads)\n\nnp.random.seed(0)\nparams = initialize_params()\n\ncost, grads = step(X_, Y_, params)\n\nprint(cost)\nprint(grads)", "0.4184302966572367\n{'W1': array([[-0.00208634, 0.0076568 ],\n [-0.00053028, 0.00068688],\n [-0.0004186 , 0.00346495],\n [-0.00170289, 0.00300797]]), 'W2': array([[0.03212358, 0.05777044, 0.02222057, 0.05217427]]), 'B1': array([[0.01009638, 0.00114439, 0.00470392, 0.00425563]]), 'B2': array([[0.07033491]])}\n" ] ], [ [ "<a id='3.6.Gradient_Checking'></a>\n\n### 3.6. Gradient Checking\nWe can use Numerical Gradient to evaluate the gradient (Analytical gradient) that we have calculated.\n\n<center><img src=\"images/num_grad.png\" alt=\"neuron\" width=\"400\"/></center>\n\nConsider the image above, where the red line is our function, the blue line is the gradient derived from the point $x$, the green line is the approximated gradient from the point of $x$, and $h$ is the step size. It can then be shown that:\n\n\n$$ \\frac{\\partial f}{\\partial x} \\approx \\frac{Y_C-Y_B}{X_C-X_B} \\quad = \\quad \\frac{f(x+h) - f(x-h)}{(x+h)-(x-h)} \\quad = \\quad \\frac{f(x+h) - f(x-h)}{2h} $$\n\n", "_____no_output_____" ], [ "##### EXAMPLE\n", "_____no_output_____" ] ], [ [ "w1 = 3; x1 = 1; w2 = 2; x2 = -2; b1 = 2\n\nh = 1e-4\n\ndef f(w1, x1, w2, x2, b1):\n linear = (w1*x1)+(w2*x2)+b1\n return 1/(1+np.exp(-linear))\n\n\nnum_grad_x1 = (f(w1, x1+h, w2, x2, b1) - f(w1, x1-h, w2, x2, b1))/(2*h)\nprint(num_grad_x1)\n", "0.5898357981348745\n" ], [ "# vectorized gradient checking\ndef gradient_check(f, x, h=0.00001):\n grad = np.zeros_like(x)\n # iterate over all indexes in x\n it = np.nditer(x, flags=['multi_index'])\n while not it.finished:\n # evaluate function at x+h\n ix = it.multi_index\n oldval = x[ix]\n x[ix] = oldval + h # increment by h\n fxph = f(x) # evalute f(x + h)\n x[ix] = oldval - h\n fxnh = f(x) # evaluate f(x - h)\n x[ix] = oldval # restore\n\n # compute the partial derivative with centered formula\n grad[ix] = ((fxph - fxnh) / (2 * h)).sum() # the slope\n it.iternext() # step to next dimension\n\n return grad\n\ndef rel_error(x, y):\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))", "_____no_output_____" ], [ "for param_name in grads:\n f = lambda W: step(X_, Y_, params)[0]\n \n param_grad_num = gradient_check(f, params[param_name])\n print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))\n", "W1 max relative error: 3.437088e-09\nW2 max relative error: 5.328225e-11\nB1 max relative error: 2.182159e-09\nB2 max relative error: 3.440171e-11\n" ] ], [ [ "<a id='3.7.Parameter_Update'></a>\n\n### 3.7. 
Parameter Update\nAfter computing the gradients, the parameters are updated based on them; if a gradient is positive, the corresponding parameter decreases in value, and if it is negative, the parameter increases in value. In either case, the goal is to move toward the minimum of the cost.\n\n<a id='3.8.Learning_Rate'></a>\n\n### 3.8. Learning Rate\nThe size of each update is controlled by the learning rate, known as $\\alpha$. Hence, the gradient descent update equations are as follows:\n\n\\begin{align}\n    W^1 &= W^1 - \\alpha * \\frac{\\partial L}{\\partial W^1} \\\\\n    B^1 &= B^1 - \\alpha * \\frac{\\partial L}{\\partial B^1} \\\\\n    W^2 &= W^2 - \\alpha * \\frac{\\partial L}{\\partial W^2} \\\\\n    B^2 &= B^2 - \\alpha * \\frac{\\partial L}{\\partial B^2} \\\\\n\\end{align}\n", "_____no_output_____" ] ], [ [ "def update_parameter(params, grads, learning_rate):\n    params['W1'] += -learning_rate * grads['W1']\n    params['B1'] += -learning_rate * grads['B1']\n    params['W2'] += -learning_rate * grads['W2']\n    params['B2'] += -learning_rate * grads['B2']\n    ", "_____no_output_____" ], [ "params = initialize_params()\n\ndef train(X,y,learning_rate=0.1,num_iters=30000,batch_size=256):\n    num_train = X.shape[0]\n    costs = []\n    for it in range(num_iters):\n        random_indices = np.random.choice(num_train, batch_size)\n        X_batch = X[random_indices]\n        y_batch = y[random_indices]\n        \n        cost, grads = step(X_batch, y_batch, params)\n        costs.append(cost)\n        \n        # update parameters \n        update_parameter(params, grads, learning_rate)\n        \n    return costs\n\ncosts = train(X,y.reshape(-1,1))\nplt.plot(costs)", "_____no_output_____" ], [ "def predict(X, params):\n    W1 = params['W1']\n    B1 = params['B1']\n    W2 = params['W2']\n    B2 = params['B2']\n    \n    output, _ = forward(X, params)\n    return output", "_____no_output_____" ], [ "# test on training samples\ny_pred = []\nfor i in range(len(X)):\n    pred = np.squeeze(predict(X[i],params)).round()\n    y_pred.append(pred)\n    \nplt.scatter(X[:,0], X[:,1], c=y_pred, linewidths=0, s=20);", "_____no_output_____" ], [ "# test on new samples\nX_new, _ = make_circles(n_samples=1000, factor=.5, noise=.1)\ny_pred = []\n\nfor i in range(len(X_new)):\n    pred = np.squeeze(predict(X_new[i],params)).round()\n    y_pred.append(pred)\n    \nplt.scatter(X_new[:,0], X_new[:,1], c=y_pred, linewidths=0, s=20);", "_____no_output_____" ] ], [ [ "# References:\n\n- http://neuralnetworksanddeeplearning.com/\n\n- http://cs231n.github.io/optimization-2/\n\n- http://kineticmaths.com/index.php?title=Numerical_Differentiation\n\n- https://google-developers.appspot.com/machine-learning/crash-course/backprop-scroll/\n\nClick here to go back [Table of Content](#Table_of_Content).", "_____no_output_____" ] ] ]
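An exercise cell in section 3.5 of the notebook above prints X[0], W11 and B11 and asks how the computational graph looks in that case, and to calculate the forward and backward flows. A possible worked sketch of that exercise for the single hidden unit $a_1^1$, using the values printed by that cell and the local-derivative rules from the notes; taking the upstream gradient to be 1 is an assumption made purely for illustration.

```python
import numpy as np

# values printed by the exercise cell above
x1, x2 = -0.08769568, 1.08597835
w1, w2 = 1.76405235, 0.40015721
b1 = -0.10321885179355784

# forward flow: z = w1*x1 + w2*x2 + b1, then a = sigmoid(z)
z = w1 * x1 + w2 * x2 + b1
a = 1.0 / (1.0 + np.exp(-z))

# backward flow, applying the chain rule node by node (upstream gradient = 1)
da = 1.0
dz = a * (1 - a) * da         # sigmoid gate: dsigma/dz = sigma * (1 - sigma)
dw1, dx1 = x1 * dz, w1 * dz   # multiply gate w1*x1: each factor gets the other times dz
dw2, dx2 = x2 * dz, w2 * dz   # multiply gate w2*x2
db1 = dz                      # add gate passes the gradient through unchanged

print('a =', a)
print('dw1 =', dw1, 'dw2 =', dw2, 'db1 =', db1)
```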
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ] ]
cbf68884f6b2b92be6aabcfca265718bca25b044
13,425
ipynb
Jupyter Notebook
notebooks/Detailed.ipynb
TobiasHilt/scientific_research_scraper
5604bff912a1c9d033dd342944b186d08f27ac61
[ "MIT" ]
null
null
null
notebooks/Detailed.ipynb
TobiasHilt/scientific_research_scraper
5604bff912a1c9d033dd342944b186d08f27ac61
[ "MIT" ]
null
null
null
notebooks/Detailed.ipynb
TobiasHilt/scientific_research_scraper
5604bff912a1c9d033dd342944b186d08f27ac61
[ "MIT" ]
null
null
null
24.320652
283
0.524097
[ [ [ "# <div class=\"girk\">With this notebook you can search all available databases seperately. You can edit the search string for each database and download the results for each. If desired you can join all the results and download a combined excel at the bottom of the page</div>", "_____no_output_____" ], [ "# How-to:\n - Start Institutional VPN\n - Change variables: Location (where do you want to download to?), query and run the cell\n - Search the header for your desired database and run the cells below \n - For each database the query is set to the one declared at the top (default). If needed it can be changed for every database to whatever you want. Count-limit is also adjustable to your needs\n - Api-callable: Scopus, Science Direct and Arxiv\n - The rest is scraped via chromedriver and can take a few minutes, depending how many search results are found", "_____no_output_____" ] ], [ [ "import scraper_tobiashilt as scrape\nfrom datetime import datetime\n\n#Location where the results should be downloaded to\nLocation = '/Users/Tobias/Desktop/Python/'\nDate = datetime.today().strftime('%Y-%m-%d')\n\n\n# Search string\nquery = '(\"DLT\" OR \"Distributed Ledger\") AND (\"Circular Economy\" OR \"Sustainable supply chain\")'", "_____no_output_____" ] ], [ [ "# Scopus", "_____no_output_____" ] ], [ [ "# Api-Key --> https://dev.elsevier.com\n# max value for count: 6000 (Api-limit)\n\nkey = '...'\nquery_scopus = query\ncount = 50\n\ndf_Scopus = scrape.scrape_scopus(key, query_scopus, count)\ndf_Scopus\n", "_____no_output_____" ] ], [ [ "### Download", "_____no_output_____" ] ], [ [ "Excel_Scopus = Location + Date + \"_Scopus.xlsx\"\ndf_Scopus.to_excel(Excel_Scopus)", "_____no_output_____" ] ], [ [ "<font color=red>---------------------------------------------------------------------------------------------- </font>", "_____no_output_____" ], [ "# Science Direct", "_____no_output_____" ] ], [ [ "# Get your Api-Key --> https://dev.elsevier.com\n# Count kann auf max. 6000 gesetzt werden (API-Limit)\n\n\nkey = '...'\ninsttoken = '...'\nquery_sd = query\ncount = 50\n\ndf_sciencedirect = scrape.scrape_sd(key,insttoken, query_sd, count)\ndf_sciencedirect", "_____no_output_____" ] ], [ [ "### Download", "_____no_output_____" ] ], [ [ "Excel_ScienceDirect = Location + Date + \"_ScienceDirect.xlsx\"\ndf_sciencedirect.to_excel(Excel_ScienceDirect)", "_____no_output_____" ] ], [ [ "<font color=red>---------------------------------------------------------------------------------------------- </font>", "_____no_output_____" ], [ "# Arxiv", "_____no_output_____" ] ], [ [ "# Unser Searchstring muss angepasst werden, da keine Ergebnisse gefunden werden. 
Adjust the count as needed.\n\n#query_arxiv = query\nquery_arxiv = \"test\"\ncount = 10\n\ndf_arxiv = scrape.scrape_arxiv(query_arxiv, count)\ndf_arxiv", "_____no_output_____" ] ], [ [ "### Download", "_____no_output_____" ] ], [ [ "Excel_Arxiv = Location + Date + \"_Arxiv.xlsx\"\ndf_arxiv.to_excel(Excel_Arxiv)", "_____no_output_____" ] ], [ [ "<font color=red>---------------------------------------------------------------------------------------------- </font>", "_____no_output_____" ], [ "# Web of Science ", "_____no_output_____" ] ], [ [ "# Uses a webdriver, since there is no API and we do not want to get an IP block (automated requests are not allowed according to robots.txt)\n#--> go grab a coffee and let it run\n# count can be arbitrarily large\ncount = 5\n#query_wos = \"test\"\nquery_wos = query\n\ndf_wos = scrape.scrape_wos(query_wos, count)\ndf_wos", "_____no_output_____" ] ], [ [ "### Download", "_____no_output_____" ] ], [ [ "Excel_WoS = Location + Date + \"_WoS.xlsx\"\ndf_wos.to_excel(Excel_WoS)", "_____no_output_____" ] ], [ [ "<font color=red>---------------------------------------------------------------------------------------------- </font>", "_____no_output_____" ], [ "# ACM digital", "_____no_output_____" ] ], [ [ "# Uses a webdriver, since there is no API and we do not want to get an IP block (automated requests are not allowed according to robots.txt)\n#--> go grab a coffee and let it run\n# count can be arbitrarily large\ncount = 51\n#query_acm = \"test\"\nquery_acm = query\n\ndf_acm = scrape.scrape_acm(query_acm, count)\ndf_acm", "_____no_output_____" ] ], [ [ "### Download", "_____no_output_____" ] ], [ [ "Excel_acm = Location + Date + \"_ACM.xlsx\"\ndf_acm.to_excel(Excel_acm)", "_____no_output_____" ] ], [ [ "<font color=red>---------------------------------------------------------------------------------------------- </font>", "_____no_output_____" ], [ "# IEEE", "_____no_output_____" ] ], [ [ "# Uses a webdriver, since there is no API and we do not want to get an IP block (automated requests are not allowed according to robots.txt)\n#--> go grab a coffee and let it run\n# count can be arbitrarily large\ncount = 51\n#query_ieee = \"test\"\nquery_ieee = query\n\ndf_ieee = scrape.scrape_ieee(query_ieee, count)\ndf_ieee", "_____no_output_____" ], [ "Excel_ieee = Location + Date + \"_IEEE.xlsx\"\ndf_ieee.to_excel(Excel_ieee)", "_____no_output_____" ] ], [ [ "<font color=red>---------------------------------------------------------------------------------------------- </font>", "_____no_output_____" ], [ "# Emerald Insight", "_____no_output_____" ] ], [ [ "count = 50  # assumed placeholder value; adjust to your needs\nquery_emerald = query\n\ndf_emerald = scrape.scrape_emerald(query_emerald, count)\ndf_emerald", "_____no_output_____" ], [ "Excel_emerald = Location + Date + \"_Emerald.xlsx\"\ndf_emerald.to_excel(Excel_emerald)", "_____no_output_____" ] ], [ [ "# Combine and download all", "_____no_output_____" ], [ "### Combine your desired databases by including them in \"frames\"", "_____no_output_____" ] ], [ [ "import pandas as pd\n# default\n#frames = [df_Scopus, df_sciencedirect, df_arxiv, df_wos, df_acm, df_ieee, df_emerald]\nframes = [df_Scopus, df_sciencedirect, df_arxiv, df_wos]\n\nresult = pd.concat(frames, ignore_index=True)\npre = len(result)\n# keep rows without a DOI plus the first occurrence of every DOI\nresult = result[result['DOI'].isnull() | ~result[result['DOI'].notnull()].duplicated(subset='DOI',keep='first')]\nresult['Title'] = result['Title'].str.lower()\nresult.drop_duplicates(subset ='Title', keep = 'first', inplace = 
True)\nafter = len(result)\nprint('Removed', pre-after, 'duplicates! (based on DOI or title)')\nresult.reset_index(inplace = True, drop = True)\nresult['Cited_by'] = pd.to_numeric(result['Cited_by'])\nresult", "_____no_output_____" ], [ "detailed_combined = Location + Date + \"_detailed_combined.xlsx\"\nresult.to_excel(detailed_combined)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ] ]
cbf68ec1310aa0149683a3255cd5f6f1641fe02b
544,008
ipynb
Jupyter Notebook
xgboost.ipynb
aniketsharma00411/employee_future_prediction
a911e2d49b4ba1d956e1ade00de30253bcd5ed9b
[ "MIT" ]
null
null
null
xgboost.ipynb
aniketsharma00411/employee_future_prediction
a911e2d49b4ba1d956e1ade00de30253bcd5ed9b
[ "MIT" ]
null
null
null
xgboost.ipynb
aniketsharma00411/employee_future_prediction
a911e2d49b4ba1d956e1ade00de30253bcd5ed9b
[ "MIT" ]
null
null
null
99.708211
7,053
0.641382
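The row that follows this metadata grid-searches XGBoost hyperparameters by looping over itertools.product of a search-space dict. The same sweep pattern can be expressed with scikit-learn's ParameterGrid; below is a hedged sketch with a placeholder model and synthetic data, neither of which is the notebook's actual pipeline or dataset.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import ParameterGrid, train_test_split

# placeholder data and model standing in for the notebook's pipeline
X, y = make_classification(n_samples=500, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=42)

search_space = {'C': np.logspace(-2, 2, num=5), 'max_iter': [500]}

best_score, best_params = 0.0, {}
for params in ParameterGrid(search_space):   # yields the same dicts product(...) would build
    model = LogisticRegression(**params).fit(X_train, y_train)
    score = model.score(X_val, y_val)
    if score > best_score:
        best_score, best_params = score, params
print(best_score, best_params)
```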
[ [ [ "<a href=\"https://colab.research.google.com/github/aniketsharma00411/employee_future_prediction/blob/main/xgboost.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "Dataset link: https://www.kaggle.com/tejashvi14/employee-future-prediction", "_____no_output_____" ], [ "# Uploading dataset", "_____no_output_____" ] ], [ [ "from google.colab import files\n\nuploaded = files.upload()", "_____no_output_____" ] ], [ [ "# Initialization", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\n\nfrom itertools import product\n\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.preprocessing import OneHotEncoder\nfrom xgboost import XGBClassifier\n\nimport warnings\nwarnings.filterwarnings(\"ignore\")", "_____no_output_____" ], [ "df = pd.read_csv('Employee.csv')\n\nX = df.drop(['LeaveOrNot'], axis=1)\ny = df['LeaveOrNot']", "_____no_output_____" ] ], [ [ "# Preparing data", "_____no_output_____" ] ], [ [ "X_full_train, X_test, y_full_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)\nX_train, X_val, y_train, y_val = train_test_split(X_full_train, y_full_train, test_size=0.25, random_state=42)", "_____no_output_____" ], [ "numerical = ['Age']\ncategorical = ['Education', 'JoiningYear', 'City', 'PaymentTier', 'Gender', 'EverBenched', 'ExperienceInCurrentDomain']", "_____no_output_____" ] ], [ [ "# Creating a Pipeline", "_____no_output_____" ] ], [ [ "def create_new_pipeline(params):\n numerical_transformer = SimpleImputer(strategy='median')\n\n categorical_transformer = Pipeline(steps=[\n ('imputer', SimpleImputer(strategy='most_frequent')),\n ('encoding', OneHotEncoder(drop='first'))\n ])\n\n preprocessor = ColumnTransformer(\n transformers=[\n ('numerical', numerical_transformer, numerical),\n ('categorical', categorical_transformer, categorical)\n ])\n\n model = XGBClassifier(\n n_jobs=-1,\n random_state=42,\n **params\n )\n\n pipeline = Pipeline(\n steps=[\n ('preprocessing', preprocessor),\n ('model', model)\n ]\n )\n\n return pipeline", "_____no_output_____" ] ], [ [ "# Hyperparameter Tuning", "_____no_output_____" ] ], [ [ "search_space = {\n 'n_estimators': np.linspace(10, 700, num=7).astype('int'),\n 'max_depth': np.linspace(1, 10, num=5).astype('int'),\n 'learning_rate': np.logspace(-3, 1, num=9),\n 'reg_alpha': np.logspace(-1, 1, num=5),\n 'reg_lambda': np.logspace(-1, 1, num=5)\n}", "_____no_output_____" ], [ "max_score = 0\nbest_params = {}\n\nfor val in product(*search_space.values()):\n params = {}\n for i, param in enumerate(search_space.keys()):\n params[param] = val[i]\n print(params)\n\n pipeline = create_new_pipeline(params)\n\n pipeline.fit(X_train, y_train)\n\n score = pipeline.score(X_val, y_val)\n if score > max_score:\n max_score = score\n best_params = params\n print(f'Score: {score}\\tBest score: {max_score}')", "\u001b[1;30;43mStreaming output truncated to the last 5000 lines.\u001b[0m\n{'n_estimators': 470, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 7, 'learning_rate': 10.0, 
'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.6154672395273899\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.6842105263157895\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.6143931256713212\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.6369495166487648\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.3770139634801289\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.6638023630504833\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.3351235230934479\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.6938775510204082\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.31364124597207305\tBest score: 
0.8592910848549946\n{'n_estimators': 470, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.6509129967776585\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8399570354457573\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 
3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8238453276047261\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8238453276047261\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8238453276047261\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8238453276047261\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8152524167561761\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.8506981740064447\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8506981740064447\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8506981740064447\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8485499462943072\tBest score: 
0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.832438238453276\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.832438238453276\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8313641245972073\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8313641245972073\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8302900107411385\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8517722878625135\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.8506981740064447\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.01, 
'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8517722878625135\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8378088077336198\tBest score: 
0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8506981740064447\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8506981740064447\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8506981740064447\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.8506981740064447\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8517722878625135\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 10.0, 'reg_lambda': 
0.31622776601683794}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8335123523093448\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8367346938775511\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8356605800214822\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8335123523093448\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8399570354457573\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.832438238453276\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.832438238453276\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8335123523093448\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8367346938775511\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8356605800214822\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8506981740064447\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 470, 
'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.8517722878625135\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.8517722878625135\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8399570354457573\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8131041890440387\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8174006444683136\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8259935553168636\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8259935553168636\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8302900107411385\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.832438238453276\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8313641245972073\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8281417830290011\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8335123523093448\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8335123523093448\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 
0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8399570354457573\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8506981740064447\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8216970998925887\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8206229860365198\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8141783029001074\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8227712137486574\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8249194414607949\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.832438238453276\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 
0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8270676691729323\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8302900107411385\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8270676691729323\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8313641245972073\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8302900107411385\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8367346938775511\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8378088077336198\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8367346938775511\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8378088077336198\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8399570354457573\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8506981740064447\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8367346938775511\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8378088077336198\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.5972073039742213\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.6702470461868958\tBest score: 
0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.7357679914070892\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.7336197636949516\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.7593984962406015\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.5155746509129968\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.6616541353383458\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.7228786251342643\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8002148227712137\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.5714285714285714\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.6842105263157895\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.5832438238453276\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.6949516648764769\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.7303974221267454\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.6272824919441461\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.6004296455424275\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.723952738990333\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.6928034371643395\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 
0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.46509129967776586\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.5853920515574651\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.719656283566058\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.7819548872180451\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.6723952738990333\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.611170784103115\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.631578947368421\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.6219119226638024\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.31901181525241673\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.3673469387755102\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.6981740064446831\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.6756176154672395\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.7099892588614393\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.6788399570354458\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.3147153598281418\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.7218045112781954\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 10.0, 
'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.6476906552094522\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.6670247046186896\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.7089151450053706\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.6938775510204082\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.31364124597207305\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.6509129967776585\tBest score: 0.8592910848549946\n{'n_estimators': 470, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.7561761546723953\tBest score: 
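The trace is consistent with a plain nested-loop grid search that fits one model per configuration and prints the parameter dict, the test score, and the running best. Below is a minimal sketch of such a loop — the synthetic dataset, the XGBClassifier model, and the exact n_estimators/max_depth lists are assumptions; only the log-spaced learning-rate and regularization grids and the values 470, 585 and 1, 7, 10 are confirmed by the printed output.

```python
# Minimal sketch of a grid search that would print a trace like the one above.
# Assumptions: the dataset is a synthetic stand-in, the model is xgboost's
# XGBClassifier, and the score is plain accuracy on a held-out split.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

best_score = 0.0
for n_estimators in [470, 585]:                      # values seen in this part of the trace
    for max_depth in [1, 7, 10]:                     # values seen in this part of the trace
        for learning_rate in np.logspace(-3, 1, 9):  # 0.001 ... 10.0, matches the trace
            for reg_alpha in np.logspace(-1, 1, 5):  # 0.1 ... 10.0, matches the trace
                for reg_lambda in np.logspace(-1, 1, 5):
                    params = {'n_estimators': n_estimators,
                              'max_depth': max_depth,
                              'learning_rate': float(learning_rate),
                              'reg_alpha': float(reg_alpha),
                              'reg_lambda': float(reg_lambda)}
                    model = XGBClassifier(**params)
                    model.fit(X_train, y_train)
                    score = model.score(X_test, y_test)  # mean accuracy
                    best_score = max(best_score, score)
                    print(params)
                    print(f'Score: {score}\tBest score: {best_score}')
```

Each (n_estimators, max_depth) pair contributes 9 × 5 × 5 = 225 printed configurations, which matches the block structure of the trace; a full sweep like this is expensive, and scikit-learn's GridSearchCV with cross-validation would be the more robust alternative.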
[Grid-search trace, condensed — n_estimators = 585, max_depth = 1, same learning-rate and regularization grids. Depth-1 stumps are nearly insensitive to reg_alpha and reg_lambda: every configuration with learning_rate ≤ 0.00316 scores exactly 0.7561761546723953, and learning_rate ∈ [0.01, 1.0] plateaus at ≈ 0.79–0.82, peaking at 0.8195488721804511 (learning_rate = 1.0, reg_alpha = 0.31622776601683794). The running best score stays at 0.8592910848549946; the trace continues with further configurations.]
585, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8012889366272825\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8012889366272825\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8012889366272825\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8012889366272825\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.5241675617615468\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.6552094522019334\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.7261009667024705\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.7046186895810956\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.5542427497314716\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.6390977443609023\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.6433941997851772\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.6992481203007519\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.664876476906552\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 
0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.6390977443609023\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.6745435016111708\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.6509129967776585\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.6895810955961332\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.6176154672395274\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.6229860365198711\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 
0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8002148227712137\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8002148227712137\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8002148227712137\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.799140708915145\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.799140708915145\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.8002148227712137\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 
'reg_lambda': 0.31622776601683794}\nScore: 0.8002148227712137\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8002148227712137\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.799140708915145\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.799140708915145\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8002148227712137\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.799140708915145\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.799140708915145\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.799140708915145\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.799140708915145\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.001, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.799140708915145\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.001, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.799140708915145\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.001, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.799140708915145\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.001, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.799140708915145\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.001, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.001, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.001, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.001, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.001, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.001, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8227712137486574\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8227712137486574\tBest score: 
0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8227712137486574\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8216970998925887\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8216970998925887\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.8227712137486574\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8227712137486574\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8227712137486574\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8216970998925887\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8216970998925887\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8227712137486574\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8216970998925887\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8216970998925887\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8216970998925887\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8216970998925887\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8216970998925887\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8216970998925887\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.8216970998925887\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.8216970998925887\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8216970998925887\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 
0.1}\nScore: 0.8216970998925887\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8216970998925887\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8216970998925887\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8216970998925887\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8216970998925887\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.01, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.01, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.01, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.01, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8367346938775511\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.01, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8367346938775511\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.01, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.01, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.01, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.01, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.01, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8367346938775511\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.01, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.01, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.01, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8367346938775511\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.01, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8378088077336198\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.01, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8356605800214822\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.01, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8378088077336198\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 
'learning_rate': 0.01, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8378088077336198\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.01, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.8367346938775511\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.01, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.8367346938775511\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.01, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.832438238453276\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.01, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8335123523093448\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.01, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8335123523093448\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.01, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8335123523093448\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.01, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8335123523093448\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.01, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8335123523093448\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8378088077336198\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8399570354457573\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8356605800214822\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.03162277660168379, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8474758324382384\tBest score: 
0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.03162277660168379, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.03162277660168379, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.03162277660168379, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.03162277660168379, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8356605800214822\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.03162277660168379, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.03162277660168379, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.03162277660168379, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.03162277660168379, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.03162277660168379, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.03162277660168379, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8292158968850698\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.03162277660168379, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8281417830290011\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.03162277660168379, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8292158968850698\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.03162277660168379, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8302900107411385\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.03162277660168379, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8302900107411385\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.1, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.1, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.1, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8399570354457573\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.1, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.1, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.1, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 
0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.1, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.1, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.1, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.1, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.1, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.1, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.1, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.1, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.1, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8399570354457573\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.1, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.1, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.1, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.1, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.1, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.1, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8292158968850698\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.1, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8292158968850698\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.1, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8292158968850698\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.1, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8313641245972073\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.1, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8292158968850698\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8378088077336198\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 
'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8356605800214822\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8345864661654135\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.832438238453276\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8356605800214822\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8378088077336198\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8367346938775511\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 
0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.832438238453276\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8356605800214822\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8345864661654135\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8313641245972073\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8313641245972073\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8163265306122449\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8227712137486574\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8270676691729323\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8292158968850698\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.832438238453276\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.8313641245972073\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8270676691729323\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8281417830290011\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8281417830290011\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8378088077336198\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8367346938775511\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8431793770139635\tBest score: 
0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8399570354457573\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8292158968850698\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8270676691729323\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8281417830290011\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8313641245972073\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.6154672395273899\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.682062298603652\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.6831364124597207\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.5660580021482277\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.6047261009667024\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.6369495166487648\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 
0.715359828141783\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.5660580021482277\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.6079484425349087\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.7411385606874329\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.7755102040816326\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.5789473684210527\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.7056928034371643\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.6756176154672395\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.5929108485499462\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.6380236305048335\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.6090225563909775\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.706766917293233\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.6143931256713212\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.6831364124597207\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.7056928034371643\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 
'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946

[Grid-search log condensed for readability: several hundred further evaluations with n_estimators=585, max_depth in {3, 5, 7}, learning_rate in {0.001, 0.00316, 0.01, 0.0316, 0.1, 0.316, 1.0, 3.16, 10.0}, and reg_alpha, reg_lambda in {0.1, 0.316, 1.0, 3.16, 10.0}. The running best score stays at 0.8592910848549946 throughout this stretch. The highest score seen here is 0.8560687432867884, at {'n_estimators': 585, 'max_depth': 7, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 1.0}. Learning rates from 0.001 through 1.0 mostly score between 0.81 and 0.86, while 3.16 and 10.0 repeatedly collapse to the complementary scores 0.3125671321160043 and 0.6874328678839957 (they sum to exactly 1, consistent with the booster degenerating to predicting a single class).]
0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.8345864661654135\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.832438238453276\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.832438238453276\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8249194414607949\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8292158968850698\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8335123523093448\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8345864661654135\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8378088077336198\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 
0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.5853920515574651\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.6380236305048335\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.6337271750805585\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.7701396348012889\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.6949516648764769\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.6240601503759399\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.6949516648764769\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.6595059076262084\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.6949516648764769\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.6390977443609023\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.7046186895810956\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.7529538131041891\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.6627282491944146\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.6197636949516648\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.6433941997851772\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 
10.0}\nScore: 0.715359828141783\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.6562835660580022\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.5853920515574651\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.719656283566058\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.7851772287862513\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.6100966702470462\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.6842105263157895\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.7024704618689581\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.7056928034371643\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.6122448979591837\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.6369495166487648\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.7046186895810956\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.6638023630504833\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.6519871106337272\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 
'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.6197636949516648\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.7293233082706767\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.5123523093447906\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.6938775510204082\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.6509129967776585\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 7, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.5864661654135338\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 
0.8517722878625135\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8506981740064447\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8238453276047261\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8238453276047261\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8238453276047261\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8238453276047261\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.001, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8152524167561761\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.8485499462943072\tBest score: 
0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.832438238453276\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.832438238453276\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.832438238453276\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.832438238453276\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 10.0, 
'reg_lambda': 10.0}\nScore: 0.8345864661654135\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.8506981740064447\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8431793770139635\tBest score: 
0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.01, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8399570354457573\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8378088077336198\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8356605800214822\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8399570354457573\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 1.0, 
'reg_lambda': 10.0}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8506981740064447\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8506981740064447\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.8506981740064447\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.03162277660168379, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8259935553168636\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8345864661654135\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8356605800214822\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8313641245972073\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8378088077336198\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.8302900107411385\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8335123523093448\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8313641245972073\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8356605800214822\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 
'learning_rate': 0.1, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8378088077336198\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8506981740064447\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.8517722878625135\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.8517722878625135\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.1, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8399570354457573\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8131041890440387\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8174006444683136\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8206229860365198\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8216970998925887\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 
0.8302900107411385\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.832438238453276\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8313641245972073\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8281417830290011\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8345864661654135\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8345864661654135\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8399570354457573\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8506981740064447\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 
'reg_lambda': 3.1622776601683795}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8216970998925887\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8206229860365198\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8141783029001074\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8206229860365198\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8270676691729323\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.832438238453276\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8270676691729323\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8302900107411385\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8270676691729323\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8313641245972073\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8302900107411385\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8367346938775511\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8378088077336198\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8367346938775511\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8378088077336198\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8399570354457573\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8506981740064447\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 
585, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8367346938775511\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8378088077336198\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.5972073039742213\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.6691729323308271\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.7357679914070892\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.7336197636949516\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.7593984962406015\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.5155746509129968\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.6616541353383458\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.7132116004296455\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.706766917293233\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.5832438238453276\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.6949516648764769\tBest 
score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.6412459720730398\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.6004296455424275\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.723952738990333\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.7669172932330827\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.5853920515574651\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.719656283566058\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.7346938775510204\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.58968850698174\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.7035445757250268\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.6219119226638024\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.6906552094522019\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 
3.1622776601683795}\nScore: 0.6981740064446831\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.6799140708915145\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.7078410311493019\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.6476906552094522\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.6670247046186896\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.6938775510204082\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.6509129967776585\tBest score: 0.8592910848549946\n{'n_estimators': 585, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.5864661654135338\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.001, 
'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.001, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.7561761546723953\tBest score: 
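To make the provenance of this log concrete, here is a minimal sketch of the kind of exhaustive grid search that would print it. This is an assumption, not the notebook's original cell (the source cell is not visible in this record): the synthetic dataset, the train/validation split, the variable names (X_train, X_val, ...), and the use of accuracy as the score are all placeholders; only the parameter grid and the print format are read off the log itself.

```python
# Hypothetical reconstruction of the sweep that produced the log above.
# Assumptions (not from the source): the data, the split, and accuracy as the
# metric. The grid values and print format come from the log; reg_lambda varies
# fastest and n_estimators slowest, matching itertools.product order.
import itertools

import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Stand-in data with a roughly 69/31 class imbalance, echoing the
# constant-prediction scores (0.687..., 0.312...) seen at extreme learning rates.
X, y = make_classification(n_samples=1000, weights=[0.69], random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

grid = {
    'n_estimators': [585, 700],                       # values visible in this block
    'max_depth': [1, 7, 10],                          # likewise; the full grid may differ
    'learning_rate': np.logspace(-3, 1, 9).tolist(),  # 0.001 ... 10.0, half-decade steps
    'reg_alpha': np.logspace(-1, 1, 5).tolist(),      # 0.1 ... 10.0
    'reg_lambda': np.logspace(-1, 1, 5).tolist(),
}

best_score, best_params = -np.inf, None
# itertools.product iterates the last grid key fastest, as in the log.
# The full product is 2 * 3 * 9 * 5 * 5 = 1350 fits; shrink the grid to try it quickly.
for values in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    print(params)
    model = XGBClassifier(**params)
    model.fit(X_train, y_train)
    score = accuracy_score(y_val, model.predict(X_val))
    if score > best_score:
        best_score, best_params = score, params
    print(f'Score: {score}\tBest score: {best_score}')

print('Best parameters:', best_params, '\tBest score:', best_score)
```

In practice, sklearn's GridSearchCV (or a randomized search) would do the same job with cross-validation and less code; the verbose per-combination printout above suggests a hand-rolled loop like this one. The raw log now resumes mid-sweep: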
0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 
3.1622776601683795}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.01, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.01, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.01, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.01, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.01, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.01, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.01, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.01, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.01, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.01, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.01, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.01, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.01, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.01, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.7980665950590763\tBest score: 
0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.01, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.01, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.01, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.01, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.01, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.01, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.01, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.01, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.01, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.01, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.01, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.7980665950590763\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8109559613319012\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8109559613319012\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8109559613319012\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8109559613319012\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8098818474758325\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.8109559613319012\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8109559613319012\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8109559613319012\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 
0.8109559613319012\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8098818474758325\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.03162277660168379, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8109559613319012\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.03162277660168379, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8109559613319012\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.03162277660168379, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8109559613319012\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.03162277660168379, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8098818474758325\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.03162277660168379, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8098818474758325\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.03162277660168379, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8098818474758325\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.03162277660168379, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8098818474758325\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.03162277660168379, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.8088077336197637\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.03162277660168379, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.807733619763695\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.03162277660168379, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.807733619763695\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.03162277660168379, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8088077336197637\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.03162277660168379, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8088077336197637\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.03162277660168379, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8088077336197637\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.03162277660168379, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8088077336197637\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.03162277660168379, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.807733619763695\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.1, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8055853920515574\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.1, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8055853920515574\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.1, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8055853920515574\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 
0.1, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8055853920515574\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.1, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.1, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.1, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8055853920515574\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.1, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8055853920515574\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.1, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8055853920515574\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.1, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.807733619763695\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.1, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.1, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.1, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.1, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.807733619763695\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.1, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8055853920515574\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.1, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.1, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.1, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.1, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.8066595059076263\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.1, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8088077336197637\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.1, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8088077336197637\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.1, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8088077336197637\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.1, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8088077336197637\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.1, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8088077336197637\tBest score: 
0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.1, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8088077336197637\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8184747583243824\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8184747583243824\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8184747583243824\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8184747583243824\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8174006444683136\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.8163265306122449\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8163265306122449\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8163265306122449\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8163265306122449\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8163265306122449\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.31622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8163265306122449\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.31622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8163265306122449\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.31622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8163265306122449\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.31622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8163265306122449\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.31622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8141783029001074\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8055853920515574\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8055853920515574\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.8055853920515574\tBest score: 0.8592910848549946\n{'n_estimators': 700, 
'max_depth': 1, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.8045112781954887\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8055853920515574\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8088077336197637\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8088077336197637\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8088077336197637\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8088077336197637\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8088077336197637\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8174006444683136\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8174006444683136\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8174006444683136\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8174006444683136\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8174006444683136\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.8184747583243824\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8184747583243824\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8184747583243824\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8184747583243824\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8195488721804511\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8184747583243824\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8184747583243824\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8184747583243824\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 1.0, 
'reg_lambda': 3.1622776601683795}\nScore: 0.8174006444683136\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8174006444683136\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.80343716433942\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.80343716433942\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.80343716433942\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.80343716433942\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8045112781954887\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8012889366272825\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8012889366272825\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8012889366272825\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8012889366272825\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8012889366272825\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.5241675617615468\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.6552094522019334\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.7261009667024705\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.7024704618689581\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 
3.1622776601683795}\nScore: 0.5542427497314716\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.6390977443609023\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.640171858216971\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.6992481203007519\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.664876476906552\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.6390977443609023\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.6745435016111708\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.6509129967776585\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 
'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.6240601503759399\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.6476906552094522\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 1, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.6874328678839957\tBest 
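The cell that generated this log is not included in the excerpt, so the following is only a minimal sketch of a loop that would print output in exactly this format. The estimator (XGBClassifier), the synthetic data, and hold-out accuracy as the score are assumptions standing in for the original setup:

```python
# Illustrative sketch only: estimator, data, and metric are assumed,
# not taken from the original cell.
from itertools import product

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

# Log-spaced grids reproduce the values seen in the log:
# 0.31622776601683794 == 10**-0.5 and 3.1622776601683795 == 10**0.5.
grid = {
    "n_estimators": [585, 700],  # only the tail of this axis is visible in the excerpt
    "max_depth": [1, 3, 10],     # the depths that appear in this span
    "learning_rate": np.logspace(-3, 1, 9).tolist(),
    "reg_alpha": np.logspace(-1, 1, 5).tolist(),
    "reg_lambda": np.logspace(-1, 1, 5).tolist(),
}

best_score = -np.inf
for combo in product(*grid.values()):
    params = dict(zip(grid.keys(), combo))
    model = XGBClassifier(**params)
    model.fit(X_train, y_train)
    score = model.score(X_valid, y_valid)  # mean accuracy on the hold-out split
    best_score = max(best_score, score)
    print(params)
    print(f"Score: {score}\tBest score: {best_score}")
```

Exhaustive search over this grid is 2 × 3 × 9 × 5 × 5 = 1350 fits; the flat plateaus in the log suggest the two regularization axes could have been sampled much more coarsely.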
With max_depth=3 (same n_estimators=700 and the same learning-rate and regularization grids), the plateau lifts markedly:

* learning_rate=0.001: 0.7981–0.8002; regularization is again nearly irrelevant at this step size.
* learning_rate=0.0031622776601683794: 0.8217–0.8314.
* learning_rate=0.01: 0.8271–0.8443.
* learning_rate=0.03162277660168379: 0.8281–0.8485, e.g. reg_alpha=1.0, reg_lambda=0.31622776601683794 gives 0.8485499462943072.
* learning_rate=0.1: 0.8292–0.8464.
* learning_rate=0.31622776601683794: 0.8314–0.8475.

Each range is taken over the 25 (reg_alpha, reg_lambda) combinations at that learning rate; the sweep continues at larger learning rates after the aggregation sketch below.
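The ranges quoted in this summary are easier to verify if each result is kept as a record rather than only printed. A hedged sketch of that aggregation (the `results` records below are transcribed from the log, not taken from the original code):

```python
import pandas as pd

# A few records transcribed from the log; in practice each loop iteration
# would append {**params, "score": score} instead of (or besides) printing.
results = [
    {"max_depth": 1, "learning_rate": 0.001, "reg_alpha": 0.1, "reg_lambda": 0.1, "score": 0.7561761546723953},
    {"max_depth": 1, "learning_rate": 1.0, "reg_alpha": 0.31622776601683794, "reg_lambda": 10.0, "score": 0.8195488721804511},
    {"max_depth": 3, "learning_rate": 0.03162277660168379, "reg_alpha": 1.0, "reg_lambda": 0.31622776601683794, "score": 0.8485499462943072},
    {"max_depth": 3, "learning_rate": 1.0, "reg_alpha": 1.0, "reg_lambda": 0.31622776601683794, "score": 0.849624060150376},
]

df = pd.DataFrame(results)
# Median accuracy per (max_depth, learning_rate) cell, regularization marginalized:
print(df.pivot_table(index="max_depth", columns="learning_rate",
                     values="score", aggfunc="median").round(3))
print(df.loc[df["score"].idxmax()])  # strongest combination recorded
```

A pivot like this makes the depth × learning-rate interaction visible at a glance, which the line-by-line log obscures.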
'reg_lambda': 3.1622776601683795}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8356605800214822\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8378088077336198\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8367346938775511\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.832438238453276\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8356605800214822\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8345864661654135\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8313641245972073\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8313641245972073\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8216970998925887\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8249194414607949\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8292158968850698\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8281417830290011\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8302900107411385\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.8313641245972073\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8270676691729323\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8281417830290011\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 1.0, 
'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8238453276047261\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8367346938775511\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8367346938775511\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8399570354457573\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8292158968850698\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8270676691729323\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8281417830290011\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8313641245972073\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.6917293233082706\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.6852846401718582\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.6154672395273899\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.682062298603652\tBest score: 
0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.6831364124597207\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.6476906552094522\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.4822771213748657\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.5660580021482277\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.6047261009667024\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.7830290010741139\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.5660580021482277\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.6079484425349087\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.7411385606874329\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.6519871106337272\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.664876476906552\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.5789473684210527\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.7056928034371643\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.7862513426423201\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.5929108485499462\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.6380236305048335\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.6090225563909775\tBest score: 
0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.706766917293233\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.7561761546723953\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.7142857142857143\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.6831364124597207\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.7056928034371643\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.7013963480128894\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.6627282491944146\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.5306122448979592\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 
3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.6745435016111708\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.7540279269602578\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.5370569280343717\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.32438238453276047\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 3, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.6380236305048335\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8292158968850698\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8292158968850698\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8259935553168636\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8259935553168636\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.001, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8313641245972073\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.8292158968850698\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8302900107411385\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8270676691729323\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8259935553168636\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.001, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8313641245972073\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8292158968850698\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8281417830290011\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8281417830290011\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8259935553168636\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 
'learning_rate': 0.001, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8302900107411385\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.001, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8302900107411385\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.001, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8302900107411385\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.001, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.8302900107411385\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.001, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.8302900107411385\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.001, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8302900107411385\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.001, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8238453276047261\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.001, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8238453276047261\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.001, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8238453276047261\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.001, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8238453276047261\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.001, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8238453276047261\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8378088077336198\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8378088077336198\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8378088077336198\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.8378088077336198\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8378088077336198\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8399570354457573\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8421052631578947\tBest score: 
0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8399570354457573\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8356605800214822\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.8399570354457573\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8345864661654135\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8345864661654135\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8345864661654135\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8335123523093448\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.0031622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8345864661654135\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.01, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.01, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.01, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 
'learning_rate': 0.01, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.01, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.01, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.01, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.01, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.01, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.01, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8399570354457573\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.01, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.01, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8399570354457573\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.01, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.01, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.01, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8378088077336198\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.01, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.01, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.01, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.01, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.01, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.01, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.01, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.01, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8399570354457573\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.01, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.841031149301826\tBest score: 
0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.01, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8399570354457573\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8506981740064447\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.03162277660168379, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8506981740064447\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.03162277660168379, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.03162277660168379, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.03162277660168379, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.03162277660168379, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.03162277660168379, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.03162277660168379, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.03162277660168379, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.03162277660168379, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 700, 
'max_depth': 5, 'learning_rate': 0.03162277660168379, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.03162277660168379, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.03162277660168379, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.03162277660168379, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.03162277660168379, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.03162277660168379, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.03162277660168379, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.1, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8399570354457573\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.1, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8356605800214822\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.1, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8367346938775511\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.1, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.1, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8506981740064447\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.1, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.1, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.1, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8399570354457573\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.1, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.1, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.1, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8517722878625135\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.1, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.1, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.1, 'reg_alpha': 1.0, 
'reg_lambda': 3.1622776601683795}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.1, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.1, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.1, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.1, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.1, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.1, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.1, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.1, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.1, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.1, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8431793770139635\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.1, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8249194414607949\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8259935553168636\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8302900107411385\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8292158968850698\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.8345864661654135\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8356605800214822\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.31622776601683794, 
'reg_lambda': 3.1622776601683795}\nScore: 0.8367346938775511\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.31622776601683794, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8378088077336198\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.31622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.849624060150376\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.31622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.31622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.31622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8517722878625135\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.31622776601683794, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8474758324382384\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.8485499462943072\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.31622776601683794, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8464017185821697\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8442534908700322\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 0.31622776601683794, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.8109559613319012\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.8163265306122449\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.8249194414607949\tBest score: 
0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.8238453276047261\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 1.0, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.8259935553168636\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.832438238453276\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.8281417830290011\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.8302900107411385\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.8259935553168636\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 1.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.8292158968850698\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.8345864661654135\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8378088077336198\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.8345864661654135\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 1.0, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.8356605800214822\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.8453276047261009\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 1.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.8421052631578947\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.8399570354457573\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.8399570354457573\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 
3.1622776601683795}\nScore: 0.841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 1.0, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.8388829215896885\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.6691729323308271\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.677765843179377\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.7830290010741139\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.5864661654135338\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.6573576799140709\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.7078410311493019\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.6713211600429646\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.7121374865735768\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.7024704618689581\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.715359828141783\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.6949516648764769\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.602577873254565\tBest score: 
0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.7529538131041891\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.7841031149301826\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.640171858216971\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.7325456498388829\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.6390977443609023\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 0.31622776601683794}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.6433941997851772\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.5359828141783028\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.6702470461868958\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.6305048335123523\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.6079484425349087\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.6799140708915145\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.7003222341568206\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 5, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.6348012889366272\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 
5, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.5800214822771214\tBest score: 0.8592910848549946

[Grid-search log condensed for readability. The sweep continues over learning_rate in logspace(-3, 1, 9) = {0.001, ..., 10.0} and reg_alpha, reg_lambda in logspace(-1, 1, 5) = {0.1, ..., 10.0}, with n_estimators fixed at 700; a sketch of the loop that emits these lines follows below. No combination in this stretch improved the running best score of 0.8592910848549946.]

* max_depth=5, learning_rate=10.0 (remaining reg_alpha/reg_lambda pairs): scores ~ 0.58-0.72.
* max_depth=7, learning_rate <= 1.0 (all 25 reg_alpha/reg_lambda pairs per rate): scores ~ 0.817-0.855, peaking at 0.8549946294307197 (learning_rate=0.001, reg_alpha=0.1, reg_lambda=1.0).
* max_depth=7, learning_rate = 3.1622776601683795: scores collapse to ~ 0.31-0.78.
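The log format above (a printed parameter dict followed by `Score: ...\tBest score: ...`) pins down the shape of the search loop. Below is a minimal, hypothetical reconstruction, not the notebook's original cell: it assumes an XGBoost classifier scored by accuracy on a held-out split, and uses synthetic stand-in data since the notebook's dataset is not shown here.

```python
from itertools import product

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in data; the original notebook's dataset is not shown here.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

grid = {
    'n_estimators': [700],                            # fixed throughout this stretch of the log
    'max_depth': [5, 7, 10],
    'learning_rate': np.logspace(-3, 1, 9).tolist(),  # 0.001 ... 10.0, matching the logged values
    'reg_alpha': np.logspace(-1, 1, 5).tolist(),      # 0.1 ... 10.0
    'reg_lambda': np.logspace(-1, 1, 5).tolist(),
}

best_score = 0.0
for values in product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    model = XGBClassifier(**params)
    model.fit(X_train, y_train)
    score = model.score(X_val, y_val)                 # mean accuracy on the held-out split
    best_score = max(best_score, score)
    print(params)
    print(f"Score: {score}\tBest score: {best_score}")
```

Note that `np.logspace(-3, 1, 9)` reproduces exactly the learning-rate values seen in the log (0.001, 0.0031622776601683794, ..., 10.0), which is why those particular floats appear.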
[Condensed, continuing the summary above:]

* max_depth=7, learning_rate = 10.0 (remaining pairs): scores ~ 0.31-0.75.
* max_depth=10, learning_rate in {0.001, 0.0031622776601683794, 0.01}: scores ~ 0.824-0.852, peaking at 0.8517722878625135 (learning_rate=0.01). The ranges quoted in these bullets were extracted with a small log parser like the one sketched after this list.
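Since every record in this output is a dict literal plus a tab-separated score line, the dump can be folded back into a table for analysis. This is a hedged helper, not from the original notebook; `gridsearch.log` is a hypothetical file name standing in for wherever the output was captured.

```python
# Parse "{params}\nScore: s\tBest score: b" records into a DataFrame and aggregate.
import ast
import re

import pandas as pd

def parse_grid_log(text: str) -> pd.DataFrame:
    rows = []
    pattern = re.compile(r"(\{[^}]*\})\nScore: ([\d.]+)\tBest score: ([\d.]+)")
    for params, score, best in pattern.findall(text):
        row = ast.literal_eval(params)   # the dict literal printed by the loop
        row['score'] = float(score)
        row['best_score'] = float(best)
        rows.append(row)
    return pd.DataFrame(rows)

# Example usage: min/max validation score for each (max_depth, learning_rate) pair.
# df = parse_grid_log(open('gridsearch.log').read())
# print(df.groupby(['max_depth', 'learning_rate'])['score'].agg(['min', 'max']))
```

Grouping on `max_depth` and `learning_rate` collapses the 25 reg_alpha/reg_lambda runs per pair into a single row, which is what makes the repetitive log legible.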
[Condensed:]

* max_depth=10, learning_rate in {0.03162277660168379, 0.1, 0.31622776601683794, 1.0}: scores ~ 0.813-0.852.
* max_depth=10, learning_rate = 3.1622776601683795 (first reg_alpha/reg_lambda pairs): scores fall to ~ 0.52-0.76.

[The raw log resumes:]\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 
3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.6616541353383458\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.7475832438238453\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.807733619763695\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.5832438238453276\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.6949516648764769\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.7948442534908701\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.5488721804511278\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.6004296455424275\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.723952738990333\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.6938775510204082\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.4747583243823845\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.5853920515574651\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.719656283566058\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 3.1622776601683795, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.7819548872180451\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 
0.31622776601683794}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 1.0}\nScore: 0.5993555316863588\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 3.1622776601683795}\nScore: 0.6466165413533834\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.1, 'reg_lambda': 10.0}\nScore: 0.6219119226638024\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.1}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 0.31622776601683794}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 1.0}\nScore: 0.6702470461868958\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 3.1622776601683795}\nScore: 0.6981740064446831\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 0.31622776601683794, 'reg_lambda': 10.0}\nScore: 0.6756176154672395\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 1.0}\nScore: 0.3125671321160043\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.6788399570354458\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 1.0, 'reg_lambda': 10.0}\nScore: 0.6519871106337272\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 0.31622776601683794}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 1.0}\nScore: 0.6476906552094522\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 3.1622776601683795}\nScore: 0.6670247046186896\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 3.1622776601683795, 'reg_lambda': 10.0}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 0.1}\nScore: 0.6874328678839957\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 0.31622776601683794}\nScore: 0.6938775510204082\tBest score: 
0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 1.0}\nScore: 0.6895810955961332\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 3.1622776601683795}\nScore: 0.6509129967776585\tBest score: 0.8592910848549946\n{'n_estimators': 700, 'max_depth': 10, 'learning_rate': 10.0, 'reg_alpha': 10.0, 'reg_lambda': 10.0}\nScore: 0.5864661654135338\tBest score: 0.8592910848549946\n" ], [ "best_params", "_____no_output_____" ], [ "max_score", "_____no_output_____" ] ], [ [ "# Training", "_____no_output_____" ] ], [ [ "pipeline = create_new_pipeline(best_params)", "_____no_output_____" ], [ "pipeline.fit(X_full_train, y_full_train)", "_____no_output_____" ] ], [ [ "# Validation", "_____no_output_____" ] ], [ [ "pipeline.score(X_full_train, y_full_train)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
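The record above logs an exhaustive sweep over `n_estimators`, `max_depth`, `learning_rate`, `reg_alpha`, and `reg_lambda`; the recurring values 0.1, 0.31622776601683794, 1.0, 3.1622776601683795, and 10.0 are a log-spaced grid, i.e. `10 ** np.linspace(-1, 1, 5)`. A minimal sketch of that kind of manual search follows; `create_new_pipeline` mirrors the factory the record's final cells call but is hypothetical here, and the train/validation splits (`X_train`, `y_train`, `X_val`, `y_val`) are assumed to exist.

```python
# Sketch of the manual grid search whose log appears above.
# Assumptions: create_new_pipeline(params) builds an estimator/pipeline from a
# parameter dict, and X_train/y_train/X_val/y_val are pre-made splits.
import numpy as np
from sklearn.model_selection import ParameterGrid

log_grid = list(10.0 ** np.linspace(-1, 1, 5))  # 0.1, 0.316..., 1.0, 3.16..., 10.0

param_grid = {
    'n_estimators': [700],      # one of the values seen in the log
    'max_depth': [10],
    'learning_rate': log_grid,
    'reg_alpha': log_grid,
    'reg_lambda': log_grid,
}

best_params, max_score = None, float('-inf')
for params in ParameterGrid(param_grid):
    pipeline = create_new_pipeline(params)   # hypothetical factory
    pipeline.fit(X_train, y_train)
    score = pipeline.score(X_val, y_val)     # held-out score, as in the log
    if score > max_score:
        best_params, max_score = params, score
    print(params)
    print(f'Score: {score}\tBest score: {max_score}')
```

After the sweep, the record refits on the full training data with `create_new_pipeline(best_params)` followed by `pipeline.fit(X_full_train, y_full_train)`; with cross-validation, scikit-learn's `GridSearchCV` would replace the explicit loop.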
cbf693df2e750aede755698f59f37ed499fc1d31
23,621
ipynb
Jupyter Notebook
Google Drive/Learning/Python/General Python Tutorial/Training/2_WorkingWithTextData.ipynb
mobiusworkspace/mobiuswebsite
73eef1bd4fc07ea318aad431de09eac10fc4da3a
[ "CC-BY-3.0" ]
null
null
null
Google Drive/Learning/Python/General Python Tutorial/Training/2_WorkingWithTextData.ipynb
mobiusworkspace/mobiuswebsite
73eef1bd4fc07ea318aad431de09eac10fc4da3a
[ "CC-BY-3.0" ]
null
null
null
Google Drive/Learning/Python/General Python Tutorial/Training/2_WorkingWithTextData.ipynb
mobiusworkspace/mobiuswebsite
73eef1bd4fc07ea318aad431de09eac10fc4da3a
[ "CC-BY-3.0" ]
null
null
null
39.765993
943
0.504255
[ [ [ "message = \"\"\"Hi,\nThis is a multi-line string\"\"\"\n\nprint(message)", " Hi,\nThis is a multi-line string\n" ], [ "message = 'Hello world'\nprint(len(message))\nprint(message[:5])\nprint(message[6:11])", "11\nHello\nworld\n" ], [ "#Lower case\nprint(message.lower())\n#Upper case\nprint(message.upper())\n#Count occurrences of o\nprint(message.count('o'))\n#Find the index value\nprint(message.find('world'))\n", "hello world\nHELLO WORLD\n2\n6\n" ], [ "#Replace\nmessageReplace = (message.replace('world', 'Dotun'))\nprint(message)\nprint(messageReplace)", "Hello world\nHello Dotun\n" ], [ "#Placeholder\ngreeting = 'Hello'\nname = 'Dotun'\n\nmessage = '{}, {}. Welcome!'.format(greeting, name)\nprint(message)\nmessage = f'{greeting.lower()}, {name.upper()}. Welcome!'\nprint(message)\n\n", "Hello, Dotun. Welcome!\nhello, DOTUN. Welcome!\n" ], [ "# To get all the attributes of a variable\nprint(dir(name))\nprint(help(str))\n", "['__add__', '__class__', '__contains__', '__delattr__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__getnewargs__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__iter__', '__le__', '__len__', '__lt__', '__mod__', '__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__rmod__', '__rmul__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'capitalize', 'casefold', 'center', 'count', 'encode', 'endswith', 'expandtabs', 'find', 'format', 'format_map', 'index', 'isalnum', 'isalpha', 'isascii', 'isdecimal', 'isdigit', 'isidentifier', 'islower', 'isnumeric', 'isprintable', 'isspace', 'istitle', 'isupper', 'join', 'ljust', 'lower', 'lstrip', 'maketrans', 'partition', 'replace', 'rfind', 'rindex', 'rjust', 'rpartition', 'rsplit', 'rstrip', 'split', 'splitlines', 'startswith', 'strip', 'swapcase', 'title', 'translate', 'upper', 'zfill']\nHelp on class str in module builtins:\n\nclass str(object)\n | str(object='') -> str\n | str(bytes_or_buffer[, encoding[, errors]]) -> str\n | \n | Create a new string object from the given object. 
If encoding or\n | errors is specified, then the object must expose a data buffer\n | that will be decoded using the given encoding and error handler.\n | Otherwise, returns the result of object.__str__() (if defined)\n | or repr(object).\n | encoding defaults to sys.getdefaultencoding().\n | errors defaults to 'strict'.\n | \n | Methods defined here:\n | \n | __add__(self, value, /)\n | Return self+value.\n | \n | __contains__(self, key, /)\n | Return key in self.\n | \n | __eq__(self, value, /)\n | Return self==value.\n | \n | __format__(self, format_spec, /)\n | Return a formatted version of the string as described by format_spec.\n | \n | __ge__(self, value, /)\n | Return self>=value.\n | \n | __getattribute__(self, name, /)\n | Return getattr(self, name).\n | \n | __getitem__(self, key, /)\n | Return self[key].\n | \n | __getnewargs__(...)\n | \n | __gt__(self, value, /)\n | Return self>value.\n | \n | __hash__(self, /)\n | Return hash(self).\n | \n | __iter__(self, /)\n | Implement iter(self).\n | \n | __le__(self, value, /)\n | Return self<=value.\n | \n | __len__(self, /)\n | Return len(self).\n | \n | __lt__(self, value, /)\n | Return self<value.\n | \n | __mod__(self, value, /)\n | Return self%value.\n | \n | __mul__(self, value, /)\n | Return self*value.\n | \n | __ne__(self, value, /)\n | Return self!=value.\n | \n | __repr__(self, /)\n | Return repr(self).\n | \n | __rmod__(self, value, /)\n | Return value%self.\n | \n | __rmul__(self, value, /)\n | Return value*self.\n | \n | __sizeof__(self, /)\n | Return the size of the string in memory, in bytes.\n | \n | __str__(self, /)\n | Return str(self).\n | \n | capitalize(self, /)\n | Return a capitalized version of the string.\n | \n | More specifically, make the first character have upper case and the rest lower\n | case.\n | \n | casefold(self, /)\n | Return a version of the string suitable for caseless comparisons.\n | \n | center(self, width, fillchar=' ', /)\n | Return a centered string of length width.\n | \n | Padding is done using the specified fill character (default is a space).\n | \n | count(...)\n | S.count(sub[, start[, end]]) -> int\n | \n | Return the number of non-overlapping occurrences of substring sub in\n | string S[start:end]. Optional arguments start and end are\n | interpreted as in slice notation.\n | \n | encode(self, /, encoding='utf-8', errors='strict')\n | Encode the string using the codec registered for encoding.\n | \n | encoding\n | The encoding in which to encode the string.\n | errors\n | The error handling scheme to use for encoding errors.\n | The default is 'strict' meaning that encoding errors raise a\n | UnicodeEncodeError. Other possible values are 'ignore', 'replace' and\n | 'xmlcharrefreplace' as well as any other name registered with\n | codecs.register_error that can handle UnicodeEncodeErrors.\n | \n | endswith(...)\n | S.endswith(suffix[, start[, end]]) -> bool\n | \n | Return True if S ends with the specified suffix, False otherwise.\n | With optional start, test S beginning at that position.\n | With optional end, stop comparing S at that position.\n | suffix can also be a tuple of strings to try.\n | \n | expandtabs(self, /, tabsize=8)\n | Return a copy where all tab characters are expanded using spaces.\n | \n | If tabsize is not given, a tab size of 8 characters is assumed.\n | \n | find(...)\n | S.find(sub[, start[, end]]) -> int\n | \n | Return the lowest index in S where substring sub is found,\n | such that sub is contained within S[start:end]. 
Optional\n | arguments start and end are interpreted as in slice notation.\n | \n | Return -1 on failure.\n | \n | format(...)\n | S.format(*args, **kwargs) -> str\n | \n | Return a formatted version of S, using substitutions from args and kwargs.\n | The substitutions are identified by braces ('{' and '}').\n | \n | format_map(...)\n | S.format_map(mapping) -> str\n | \n | Return a formatted version of S, using substitutions from mapping.\n | The substitutions are identified by braces ('{' and '}').\n | \n | index(...)\n | S.index(sub[, start[, end]]) -> int\n | \n | Return the lowest index in S where substring sub is found, \n | such that sub is contained within S[start:end]. Optional\n | arguments start and end are interpreted as in slice notation.\n | \n | Raises ValueError when the substring is not found.\n | \n | isalnum(self, /)\n | Return True if the string is an alpha-numeric string, False otherwise.\n | \n | A string is alpha-numeric if all characters in the string are alpha-numeric and\n | there is at least one character in the string.\n | \n | isalpha(self, /)\n | Return True if the string is an alphabetic string, False otherwise.\n | \n | A string is alphabetic if all characters in the string are alphabetic and there\n | is at least one character in the string.\n | \n | isascii(self, /)\n | Return True if all characters in the string are ASCII, False otherwise.\n | \n | ASCII characters have code points in the range U+0000-U+007F.\n | Empty string is ASCII too.\n | \n | isdecimal(self, /)\n | Return True if the string is a decimal string, False otherwise.\n | \n | A string is a decimal string if all characters in the string are decimal and\n | there is at least one character in the string.\n | \n | isdigit(self, /)\n | Return True if the string is a digit string, False otherwise.\n | \n | A string is a digit string if all characters in the string are digits and there\n | is at least one character in the string.\n | \n | isidentifier(self, /)\n | Return True if the string is a valid Python identifier, False otherwise.\n | \n | Use keyword.iskeyword() to test for reserved identifiers such as \"def\" and\n | \"class\".\n | \n | islower(self, /)\n | Return True if the string is a lowercase string, False otherwise.\n | \n | A string is lowercase if all cased characters in the string are lowercase and\n | there is at least one cased character in the string.\n | \n | isnumeric(self, /)\n | Return True if the string is a numeric string, False otherwise.\n | \n | A string is numeric if all characters in the string are numeric and there is at\n | least one character in the string.\n | \n | isprintable(self, /)\n | Return True if the string is printable, False otherwise.\n | \n | A string is printable if all of its characters are considered printable in\n | repr() or if it is empty.\n | \n | isspace(self, /)\n | Return True if the string is a whitespace string, False otherwise.\n | \n | A string is whitespace if all characters in the string are whitespace and there\n | is at least one character in the string.\n | \n | istitle(self, /)\n | Return True if the string is a title-cased string, False otherwise.\n | \n | In a title-cased string, upper- and title-case characters may only\n | follow uncased characters and lowercase characters only cased ones.\n | \n | isupper(self, /)\n | Return True if the string is an uppercase string, False otherwise.\n | \n | A string is uppercase if all cased characters in the string are uppercase and\n | there is at least one cased character in the 
string.\n | \n | join(self, iterable, /)\n | Concatenate any number of strings.\n | \n | The string whose method is called is inserted in between each given string.\n | The result is returned as a new string.\n | \n | Example: '.'.join(['ab', 'pq', 'rs']) -> 'ab.pq.rs'\n | \n | ljust(self, width, fillchar=' ', /)\n | Return a left-justified string of length width.\n | \n | Padding is done using the specified fill character (default is a space).\n | \n | lower(self, /)\n | Return a copy of the string converted to lowercase.\n | \n | lstrip(self, chars=None, /)\n | Return a copy of the string with leading whitespace removed.\n | \n | If chars is given and not None, remove characters in chars instead.\n | \n | partition(self, sep, /)\n | Partition the string into three parts using the given separator.\n | \n | This will search for the separator in the string. If the separator is found,\n | returns a 3-tuple containing the part before the separator, the separator\n | itself, and the part after it.\n | \n | If the separator is not found, returns a 3-tuple containing the original string\n | and two empty strings.\n | \n | replace(self, old, new, count=-1, /)\n | Return a copy with all occurrences of substring old replaced by new.\n | \n | count\n | Maximum number of occurrences to replace.\n | -1 (the default value) means replace all occurrences.\n | \n | If the optional argument count is given, only the first count occurrences are\n | replaced.\n | \n | rfind(...)\n | S.rfind(sub[, start[, end]]) -> int\n | \n | Return the highest index in S where substring sub is found,\n | such that sub is contained within S[start:end]. Optional\n | arguments start and end are interpreted as in slice notation.\n | \n | Return -1 on failure.\n | \n | rindex(...)\n | S.rindex(sub[, start[, end]]) -> int\n | \n | Return the highest index in S where substring sub is found,\n | such that sub is contained within S[start:end]. Optional\n | arguments start and end are interpreted as in slice notation.\n | \n | Raises ValueError when the substring is not found.\n | \n | rjust(self, width, fillchar=' ', /)\n | Return a right-justified string of length width.\n | \n | Padding is done using the specified fill character (default is a space).\n | \n | rpartition(self, sep, /)\n | Partition the string into three parts using the given separator.\n | \n | This will search for the separator in the string, starting at the end. 
If\n | the separator is found, returns a 3-tuple containing the part before the\n | separator, the separator itself, and the part after it.\n | \n | If the separator is not found, returns a 3-tuple containing two empty strings\n | and the original string.\n | \n | rsplit(self, /, sep=None, maxsplit=-1)\n | Return a list of the words in the string, using sep as the delimiter string.\n | \n | sep\n | The delimiter according which to split the string.\n | None (the default value) means split according to any whitespace,\n | and discard empty strings from the result.\n | maxsplit\n | Maximum number of splits to do.\n | -1 (the default value) means no limit.\n | \n | Splits are done starting at the end of the string and working to the front.\n | \n | rstrip(self, chars=None, /)\n | Return a copy of the string with trailing whitespace removed.\n | \n | If chars is given and not None, remove characters in chars instead.\n | \n | split(self, /, sep=None, maxsplit=-1)\n | Return a list of the words in the string, using sep as the delimiter string.\n | \n | sep\n | The delimiter according which to split the string.\n | None (the default value) means split according to any whitespace,\n | and discard empty strings from the result.\n | maxsplit\n | Maximum number of splits to do.\n | -1 (the default value) means no limit.\n | \n | splitlines(self, /, keepends=False)\n | Return a list of the lines in the string, breaking at line boundaries.\n | \n | Line breaks are not included in the resulting list unless keepends is given and\n | true.\n | \n | startswith(...)\n | S.startswith(prefix[, start[, end]]) -> bool\n | \n | Return True if S starts with the specified prefix, False otherwise.\n | With optional start, test S beginning at that position.\n | With optional end, stop comparing S at that position.\n | prefix can also be a tuple of strings to try.\n | \n | strip(self, chars=None, /)\n | Return a copy of the string with leading and trailing whitespace remove.\n | \n | If chars is given and not None, remove characters in chars instead.\n | \n | swapcase(self, /)\n | Convert uppercase characters to lowercase and lowercase characters to uppercase.\n | \n | title(self, /)\n | Return a version of the string where each word is titlecased.\n | \n | More specifically, words start with uppercased characters and all remaining\n | cased characters have lower case.\n | \n | translate(self, table, /)\n | Replace each character in the string using the given translation table.\n | \n | table\n | Translation table, which must be a mapping of Unicode ordinals to\n | Unicode ordinals, strings, or None.\n | \n | The table must implement lookup/indexing via __getitem__, for instance a\n | dictionary or list. If this operation raises LookupError, the character is\n | left untouched. Characters mapped to None are deleted.\n | \n | upper(self, /)\n | Return a copy of the string converted to uppercase.\n | \n | zfill(self, width, /)\n | Pad a numeric string with zeros on the left, to fill a field of the given width.\n | \n | The string is never truncated.\n | \n | ----------------------------------------------------------------------\n | Static methods defined here:\n | \n | __new__(*args, **kwargs) from builtins.type\n | Create and return a new object. 
See help(type) for accurate signature.\n | \n | maketrans(x, y=None, z=None, /)\n | Return a translation table usable for str.translate().\n | \n | If there is only one argument, it must be a dictionary mapping Unicode\n | ordinals (integers) or characters to Unicode ordinals, strings or None.\n | Character keys will be then converted to ordinals.\n | If there are two arguments, they must be strings of equal length, and\n | in the resulting dictionary, each character in x will be mapped to the\n | character at the same position in y. If there is a third argument, it\n | must be a string, whose characters will be mapped to None in the result.\n\nNone\nHelp on method_descriptor:\n\nlower(self, /)\n Return a copy of the string converted to lowercase.\n\nNone\n" ], [ "print(help(str.lower))", "Help on method_descriptor:\n\nlower(self, /)\n Return a copy of the string converted to lowercase.\n\nNone\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
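The tutorial cells in the record above walk through slicing, `lower`/`upper`/`count`/`find`, `replace`, and `str.format` versus f-strings. One point those cells demonstrate only implicitly is worth making explicit: Python strings are immutable, so every one of those methods returns a new string. The self-contained sketch below restates that with assertions.

```python
# Strings are immutable: methods like lower() and replace() return new
# strings and never modify the original.
message = 'Hello world'

assert message.lower() == 'hello world'
assert message.replace('world', 'Dotun') == 'Hello Dotun'
assert message == 'Hello world'            # original is unchanged

# find() returns -1 for a missing substring; index() raises ValueError.
assert message.find('xyz') == -1
try:
    message.index('xyz')
except ValueError:
    print('index() raises when the substring is absent')

# f-strings and str.format produce the same text; f-strings also allow
# arbitrary expressions such as method calls inline.
greeting, name = 'Hello', 'Dotun'
assert f'{greeting}, {name}. Welcome!' == '{}, {}. Welcome!'.format(greeting, name)
```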
cbf6a37c6d5fba76e57cbbdf5dda8db1f0c58bc3
14,259
ipynb
Jupyter Notebook
appendix/continuous_time_inference (Burgers)/Untitled.ipynb
jaydm26/PINNs
bde421afb7b280595beebb46ee0d2ab1a4ccf85a
[ "MIT" ]
null
null
null
appendix/continuous_time_inference (Burgers)/Untitled.ipynb
jaydm26/PINNs
bde421afb7b280595beebb46ee0d2ab1a4ccf85a
[ "MIT" ]
null
null
null
appendix/continuous_time_inference (Burgers)/Untitled.ipynb
jaydm26/PINNs
bde421afb7b280595beebb46ee0d2ab1a4ccf85a
[ "MIT" ]
null
null
null
36.844961
478
0.537205
[ [ [ "\"\"\"\n@author: Jay Mehta\n\nBased on the work of Maziar Raissi\n\"\"\"\n\n\nimport sys\n# Include the path that contains a number of files that have txt files containing solutions to the Burgers' problem.\nsys.path.insert(0,'../../Utilities/')\n\n\n# Import required modules\nimport torch\nimport torch.nn as nn\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport scipy.io\nfrom scipy.interpolate import griddata\nfrom pyDOE import lhs\nfrom mpl_toolkits.mplot3d import Axes3D\nimport time\nimport matplotlib.gridspec as gridspec\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\n\n\nnp.random.seed(1234)\ntorch.manual_seed(1234)", "_____no_output_____" ], [ "class PhysicsInformedNN:\n    # Initialize the class\n    \"\"\"\n    This class defines the Physics Informed Neural Network. The class is first initialized by the __init__ function. Additional functions related to the class are also defined subsequently.\n    \"\"\"\n\n    def __init__(self, X_u, u, X_f, layers, lb, ub, nu, epochs):\n\n        # Defining the lower and upper bound of the domain.\n        self.lb = lb\n        self.ub = ub\n\n        # Define the initial conditions for X and t\n        self.x_u = X_u[:,0:1]\n        self.t_u = X_u[:,1:2]\n\n        self.x_u_tf = self.x_u\n        self.t_u_tf = self.t_u\n\n        # Define the final conditions for X and t\n        self.x_f = X_f[:,0:1]\n        self.t_f = X_f[:,1:2]\n\n        self.x_f_tf = self.x_f\n        self.t_f_tf = self.t_f\n\n        # Declaring the field for the variable to be solved for\n        self.u = u\n        self.u_tf = u\n\n        # Declaring the number of layers in the Neural Network\n        self.layers = layers\n        # Defining the diffusion constant in the problem (?)\n        self.nu = nu\n\n        # Create the structure of the neural network here, or build a function below to build the architecture and send the model here.\n\n        self.model = self.neural_net(layers)\n\n        # Define the initialize_NN function to obtain the initial weights and biases for the network.\n        self.model.apply(self.initialize_NN)\n\n        # Select the optimization method for the network. Currently, it is just a placeholder.\n\n        self.optimizer = torch.optim.SGD(self.model.parameters(), lr = 0.01)\n\n        for epoch in range(0,epochs):\n            u_pred = self.net_u(torch.from_numpy(self.x_u_tf), torch.from_numpy(self.t_u_tf))\n            f_pred = self.net_f(self.x_f_tf, self.t_f_tf)\n            loss = self.calc_loss(u_pred, self.u_tf, f_pred)\n            self.optimizer.zero_grad()\n            loss.backward()\n            self.optimizer.step()\n\n        # train(model,epochs,self.x_u_tf,self.t_u_tf,self.x_f_tf,self.t_f_tf,self.u_tf)\n\n    def neural_net(self, layers):\n        \"\"\"\n        A function to build the neural network of the required size using the weights and biases provided. Instead of doing this, can we use a simple constructor method and initialize them post the construction? That would be sensible and faster.\n        \"\"\"\n        model = nn.Sequential()\n        for l in range(0, len(layers) - 1):\n            model.add_module(\"layer_\"+str(l), nn.Linear(layers[l],layers[l+1], bias=True))\n            model.add_module(\"tanh_\"+str(l), nn.Tanh())\n\n        return model\n\n\n    def initialize_NN(self, m):\n        \"\"\"\n        Initialize the neural network with the required layers, the weights and the biases. 
The input \"layers\" is an array that contains the number of nodes (neurons) in each layer.\n        \"\"\"\n\n        if type(m) == nn.Linear:\n            nn.init.xavier_uniform_(m.weight)\n            # print(m.weight)\n\n\n    def net_u(self, x, t):\n        \"\"\"\n        Forward pass through the network to obtain the U field.\n        \"\"\"\n\n        u = self.model(torch.cat((x,t),1).float())\n        return u\n\n\n    def net_f(self, x, t):\n        u = self.net_u(x, t)\n        u_x = torch.autograd.grad(u, x, grad_outputs = torch.tensor([[1.0],[1.0]]), create_graph = True)\n        u_xx = torch.autograd.grad(u_x, x, grad_outputs = torch.tensor([[1.0],[1.0]]), create_graph = True)\n        u = self.net_u(x, t)\n        u_t = torch.autograd.grad(u, t, grad_outputs = torch.tensor([[1.0],[1.0]]), create_graph = True)\n\n        f = u_t[0] + u * u_x[0] - self.nu * u_xx[0]\n\n        return f\n\n    def calc_loss(self, u_pred, u_tf, f_pred):\n        losses = torch.mean(torch.mul(u_pred - u_tf, u_pred - u_tf)) + torch.mean(torch.mul(f_pred, f_pred))\n\n        return losses\n\n    def train(self, model, epochs, x_u_tf, t_u_tf, x_f_tf, t_f_tf, u_tf):\n\n        for epoch in range(0,epochs):\n            # Now, one can perform a forward pass through the network to predict the value of u and f for various locations of x and at various times t. The methods to call here are net_u and net_f.\n\n            # Here it is crucial to remember to provide x and t as columns and not as rows. Concatenation in the prediction step will fail otherwise.\n\n            u_pred = self.net_u(x_u_tf, t_u_tf)\n            f_pred = self.net_f(x_f_tf, t_f_tf)\n\n            # Now, we can define the loss of the network. The loss here is broken into two components: one is the loss due to miscalculating the predicted value of u, the other is for not satisfying the physical governing equation in f which must be equal to 0 at all times and all locations (strong form).\n\n            loss = self.calc_loss(u_pred, u_tf, f_pred)\n\n            # Reset accumulated gradients, then calculate new ones with the backward() method.\n\n            self.optimizer.zero_grad()\n            loss.backward() # Here, a tensor may need to be passed so that the gradients can be calculated.\n\n            # Optimize the parameters through the optimization step and the learning rate.\n            self.optimizer.step()\n\n        # Repeat the prediction, calculation of losses, and optimization a number of times to optimize the network.\n\n\n\n# layers = [2, 20, 20, 20, 20, 20, 20, 20, 20, 1]\n# model = neural_net(layers)\n# model.apply(initialize_NN)\n# model(torch.tensor([1.1,0.5])) # This is how you feed the network forward. 
Use model(x) where x has two inputs for the location and time.\n# x = torch.tensor([[1.1],[1.2]],requires_grad = True)\n# t = torch.tensor([[0.5],[0.5]],requires_grad = True)\n\n# u = model(torch.cat((x,t),1))\n# # print(torch.cat((x,t),1))\n# u.backward(torch.tensor([[1.0],[1.0]]))\n# # print(x.grad.data)\n# u_x = torch.autograd.grad(u,t,grad_outputs = torch.tensor([[1.0],[1.0]]),create_graph = True)\n\n# y = torch.tensor([1.],requires_grad = True)\n# x = torch.tensor([10.],requires_grad = True)\n# y2 = torch.cat((x,y))\n# print(y2)\n# A = torch.tensor([[2.,3.],[4.,5.]],requires_grad = True)\n# loss = (torch.mul(y2,y2)).sum()\n# print(torch.autograd.grad(loss,x))\n# print(torch.autograd.grad(loss,t))\n\n\n# u = net_u(model, x, t)\n# print(u)\n# u_x = torch.autograd.grad(u, x, create_graph = True)\n# u_xx = torch.autograd.grad(u, x, create_graph = True)\n# u = net_u(model, x, t)\n# u_t = torch.autograd.grad(u,t)", "_____no_output_____" ], [ "nu = 0.01/np.pi\nnoise = 0.0\n\nN_u = 100\nN_f = 10000\n\n# Layer Map\n\nlayers = [2, 20, 20, 20, 20, 20, 20, 20, 20, 1]\n\ndata = scipy.io.loadmat('../../appendix/Data/burgers_shock.mat')\n\nt = data['t'].flatten()[:,None]\nx = data['x'].flatten()[:,None]\nExact = np.real(data['usol']).T\n\nX, T = np.meshgrid(x,t)\nX_star = np.hstack((X.flatten()[:,None],T.flatten()[:,None]))\nu_star = Exact.flatten()[:,None]\n\n# Doman bounds\nlb = X_star.min(0)\nub = X_star.max(0)\n\nxx1 = np.hstack((X[0:1,:].T, T[0:1,:].T))\nuu1 = Exact[0:1,:].T\nxx2 = np.hstack((X[:,0:1], T[:,0:1]))\nuu2 = Exact[:,0:1]\nxx3 = np.hstack((X[:,-1:], T[:,-1:]))\nuu3 = Exact[:,-1:]\n\nX_u_train = np.vstack([xx1, xx2, xx3])\nX_f_train = lb + (ub-lb)*lhs(2, N_f)\nX_f_train = np.vstack((X_f_train, X_u_train))\nu_train = np.vstack([uu1, uu2, uu3])\n\nidx = np.random.choice(X_u_train.shape[0], N_u, replace=False)\nX_u_train = X_u_train[idx, :]\nu_train = u_train[idx,:]\n\n# model = PhysicsInformedNN(X_u_train,u_train,X_f_train,layers,lb,ub,nu,5)\nX_u_train = torch.from_numpy(X_u_train)\nX_u_train.requires_grad = True\nu_train = torch.from_numpy(u_train)\nu_train.requires_grad = True", "_____no_output_____" ], [ "x_u = X_u_train[:,0:1]\nt_u = X_u_train[:,1:2]\nmodel = nn.Sequential()\nfor l in range(0, len(layers) - 1):\n model.add_module(\"layer_\"+str(l), nn.Linear(layers[l],layers[l+1], bias=True))\n model.add_module(\"tanh_\"+str(l), nn.Tanh())\n \noptimizer = torch.optim.LBFGS(model.parameters(), lr = 0.01)\n", "_____no_output_____" ], [ "losses = []\noptimizer = torch.optim.SGD(model.parameters(), lr = 0.001)\nfor epoch in range(0,1000):\n\n u_pred = model(torch.cat((x_u,t_u),1).float())\n u_x = torch.autograd.grad(u_pred,x_u,grad_outputs = torch.ones([len(x_u),1],dtype = torch.float),create_graph=True)\n u_xx = torch.autograd.grad(u_x,x_u,grad_outputs = torch.ones([len(x_u),1],dtype = torch.float),create_graph=True)\n u_t = torch.autograd.grad(u_pred,t_u,grad_outputs = torch.ones([len(t_u),1],dtype = torch.float),create_graph=True)\n f = u_t[0] + u_pred * u_x[0] - nu * u_xx[0]\n\n loss = torch.mean(torch.mul(u_pred - u_train, u_pred - u_train)) + torch.mean(torch.mul(f,f))\n losses.append(loss.detach().numpy())\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()", "_____no_output_____" ], [ "epo = np.array([i for i in range(0,1000)])", "_____no_output_____" ], [ "plt.plot(epo,losses)", "_____no_output_____" ], [ "np.linalg.norm(u_star - model(torch.tensor(X_star).float()).detach().numpy(),2)/np.linalg.norm(u_star,2)", "_____no_output_____" ], [ "type(optimizer) == 
torch.optim.lbfgs.LBFGS", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
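The `net_f` method in the record above builds the Burgers residual f = u_t + u·u_x − ν·u_xx by differentiating the network output with `torch.autograd.grad`. The self-contained sketch below isolates that pattern, using `torch.ones_like(u)` instead of the record's hard-coded `torch.tensor([[1.0],[1.0]])` so it works for any batch size; the two-layer MLP here is illustrative, not the notebook's eight hidden layers of width 20.

```python
# Sketch of the PINN residual computation: derivatives of u(x, t) via
# torch.autograd.grad, with create_graph=True so the residual itself
# remains differentiable for the subsequent backward pass.
import numpy as np
import torch
import torch.nn as nn

nu = 0.01 / np.pi
model = nn.Sequential(nn.Linear(2, 20), nn.Tanh(), nn.Linear(20, 1))

x = torch.rand(8, 1, requires_grad=True)   # collocation points
t = torch.rand(8, 1, requires_grad=True)

u = model(torch.cat((x, t), dim=1))
ones = torch.ones_like(u)                  # generalizes [[1.0], [1.0]]

u_x,  = torch.autograd.grad(u,   x, grad_outputs=ones, create_graph=True)
u_xx, = torch.autograd.grad(u_x, x, grad_outputs=ones, create_graph=True)
u_t,  = torch.autograd.grad(u,   t, grad_outputs=ones, create_graph=True)

f = u_t + u * u_x - nu * u_xx              # Burgers residual, driven to zero
loss = torch.mean(f ** 2)                  # physics term of the PINN loss
loss.backward()                            # flows into the model parameters
```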
cbf6a80065e4d058aaf1649ae9b7b49a6d0e62a1
23,544
ipynb
Jupyter Notebook
docs/intro.ipynb
tuulos/docs
6f477bb1a064cd64c6876c935abac128c4df12b3
[ "Apache-2.0" ]
null
null
null
docs/intro.ipynb
tuulos/docs
6f477bb1a064cd64c6876c935abac128c4df12b3
[ "Apache-2.0" ]
null
null
null
docs/intro.ipynb
tuulos/docs
6f477bb1a064cd64c6876c935abac128c4df12b3
[ "Apache-2.0" ]
null
null
null
41.744681
462
0.636213
[ [ [ "---\nslug: /\ntitle: Authoring Guide\n---", "_____no_output_____" ] ], [ [ "# Beautiful Technical Documentation with nbdoc and Docusaurus\n\n> [nbdoc](https://github.com/outerbounds/nbdoc) is a lightweight version of [nbdev](https://github.com/fastai/nbdev) that allows you to create rich, testable content with notebooks. [Docusaurus](https://docusaurus.io/) is a beautiful static site generator for code documentation and blogging. This project brings all of these together to give you a powerful documentation system.", "_____no_output_____" ], [ "## Background: Literate Documentation", "_____no_output_____" ], [ "Writing technical documentation can be an arduous process. Many projects have no documentation at all, and if they do they are often stale or out of date. Our goal is to make writing documentation as easy as possible by providing the following:\n\n\n- [x] **Authoring experience that encourages the creation** of quality documentation: (1) Write & run code in-situ - avoid copy & pasting code (2) WYSIWYG or low-latency hot-reload experience so you can get immediate feedback on how your docs will look.\n- [x] **Testing**: Automated testing of code snippets\n- [x] **Unified Search**: Unified search across API docs, tutorials, how-to guides, user guides with a great user interface that helps users understand the source of each\n- [x] Allow you to **highlight specific lines of code.**\n- [x] Allow authors to **hide** cell inputs, outputs or both\n- [ ] **Entity Linking**: Detect entities like modules and class names in backticks and automatically link those to the appropriate documentation.\n- [ ] **Inline and side-by-side explanations** (pop-ups, two-column view, etc)\n- [ ] Allow reader to **collapse/show** code and output\n\n \nThe unchecked items are a work in progress. There are some tools that offer some of these features such as [Jupyter Book](https://jupyterbook.org/intro.html) and [Quarto](https://quarto.org/), but we wanted more flexibility with regards to the static site generator and desired additional features not available on those platforms.", "_____no_output_____" ], [ "## Setup", "_____no_output_____" ], [ "\n1. First, create an isolated python environment using your favorite tool such as `conda`, `pipenv`, `poetry` etc. Then, from the root of this repo run this command in the terminal:\n\n ```sh\n make install\n ```\n\n2. Then you need to open 3 different terminal windows (I recommend using split panes), and run the following commands in three separate windows:\n\n _Note: we tried to use docker-compose but had trouble getting some things to run on Apple Silicon, so this will have to do for now._\n\n Start the docs server:\n \n ```shell\n make docs\n ```\n\n Watch for changes to notebooks:\n \n ```sh\n make watch\n ```\n\n Start Jupyter Lab:\n \n ```sh\n make nb\n ```\n\n3. Open a browser window for the docs [http://localhost:3000/](http://localhost:3000/). In my experience, you may have to hard-refresh the first time you make a change, but hot-reloading generally works.", "_____no_output_____" ], [ "## Authoring In Notebooks\n\n**For this tutorial to make the most sense, you should view this notebook and the rendered doc side-by-side. This page is called \"Authoring Guide\" and is the default page at the root when you start the site.** This tutorial assumes you have some familiarity with static site generators; if you do not, please visit the [Docusaurus docs](https://docusaurus.io/docs).\n\n### Create Pages With Notebooks\n\nYou can create a notebook in any directory. 
When you do this, an associated markdown file is automatically generated with the same name in the same location. For example `intro.ipynb` generates `intro.md`. For pages that are created with a notebook, you should always edit them in a notebook. The markdown that is generated can be useful for debugging, but should not be directly edited; a warning message is present in auto-generated markdown files.\n\nHowever, using notebooks in the first place is optional. You can create Markdown files as you normally would to create pages. We recommend using notebooks whenever possible, as you can embed arbitrary Markdown in notebooks, and also use `raw cells` for things like front matter or MDX components. \n\n### Front Matter & MDX\n\nThe first cell of your notebook should be a `raw` cell with the appropriate front matter. For example, this notebook has the following front matter:\n\n```\n---\nslug: /\ntitle: Authoring Guide\n---\n```\n\n### Python Scripts In Docs\n\nIf you use the `%%writefile` magic, the magic command will get stripped from the cell, and the cell will be annotated with the appropriate filename as a title to denote that the cell block is referencing a script. Furthermore, any outputs are removed when you use this magic command.", "_____no_output_____" ] ], [ [ "%%writefile myflow.py\nfrom metaflow import FlowSpec, step\n\n\nclass MyFlow(FlowSpec):\n\n @step\n def start(self):\n self.some_data = ['some', 'data']\n self.next(self.middle)\n \n\n @step\n def middle(self):\n self.next(self.end)\n\n @step\n def end(self):\n pass\n\n\nif __name__ == '__main__':\n MyFlow()", "Overwriting myflow.py\n" ] ], [ [ "### Running Shell Commands\n\nYou can use the `!` magic to run shell commands. When you do this, the cell is marked with the appropriate language automatically. 
For Metaflow output, the preamble of the logs is automatically removed.", "_____no_output_____" ] ], [ [ "!python myflow.py run", "\u001b[35m\u001b[1mMetaflow 2.5.1.post3+git3b98a67\u001b[0m\u001b[35m\u001b[22m executing \u001b[0m\u001b[31m\u001b[1mMyFlow\u001b[0m\u001b[35m\u001b[22m\u001b[0m\u001b[35m\u001b[22m for \u001b[0m\u001b[31m\u001b[1muser:hamel\u001b[0m\u001b[35m\u001b[22m\u001b[K\u001b[0m\u001b[35m\u001b[22m\u001b[0m\n\u001b[35m\u001b[22mValidating your flow...\u001b[K\u001b[0m\u001b[35m\u001b[22m\u001b[0m\n\u001b[32m\u001b[1m The graph looks good!\u001b[K\u001b[0m\u001b[32m\u001b[1m\u001b[0m\n\u001b[35m\u001b[22mRunning pylint...\u001b[K\u001b[0m\u001b[35m\u001b[22m\u001b[0m\n\u001b[32m\u001b[1m Pylint is happy!\u001b[K\u001b[0m\u001b[32m\u001b[1m\u001b[0m\n\u001b[35m2022-02-16 22:57:03.655 \u001b[0m\u001b[1mWorkflow starting (run-id 1645081023652490):\u001b[0m\n\u001b[35m2022-02-16 22:57:03.665 \u001b[0m\u001b[32m[1645081023652490/start/1 (pid 81206)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n\u001b[35m2022-02-16 22:57:04.456 \u001b[0m\u001b[32m[1645081023652490/start/1 (pid 81206)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n\u001b[35m2022-02-16 22:57:04.471 \u001b[0m\u001b[32m[1645081023652490/middle/2 (pid 81211)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n\u001b[35m2022-02-16 22:57:05.287 \u001b[0m\u001b[32m[1645081023652490/middle/2 (pid 81211)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n\u001b[35m2022-02-16 22:57:05.299 \u001b[0m\u001b[32m[1645081023652490/end/3 (pid 81226)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n\u001b[35m2022-02-16 22:57:06.099 \u001b[0m\u001b[32m[1645081023652490/end/3 (pid 81226)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n\u001b[35m2022-02-16 22:57:06.100 \u001b[0m\u001b[1mDone!\u001b[0m\n\u001b[0m" ] ], [ [ "You may wish to only show logs from particular steps when executing a Flow. 
You can accomplish this by using the `#cell_meta:show_steps=<step_name>` comment:", "_____no_output_____" ] ], [ [ "#cell_meta:show_steps=start\n!python myflow.py run", "\u001b[35m\u001b[1mMetaflow 2.5.1.post3+git3b98a67\u001b[0m\u001b[35m\u001b[22m executing \u001b[0m\u001b[31m\u001b[1mMyFlow\u001b[0m\u001b[35m\u001b[22m\u001b[0m\u001b[35m\u001b[22m for \u001b[0m\u001b[31m\u001b[1muser:hamel\u001b[0m\u001b[35m\u001b[22m\u001b[K\u001b[0m\u001b[35m\u001b[22m\u001b[0m\n\u001b[35m\u001b[22mValidating your flow...\u001b[K\u001b[0m\u001b[35m\u001b[22m\u001b[0m\n\u001b[32m\u001b[1m The graph looks good!\u001b[K\u001b[0m\u001b[32m\u001b[1m\u001b[0m\n\u001b[35m\u001b[22mRunning pylint...\u001b[K\u001b[0m\u001b[35m\u001b[22m\u001b[0m\n\u001b[32m\u001b[1m Pylint is happy!\u001b[K\u001b[0m\u001b[32m\u001b[1m\u001b[0m\n\u001b[35m2022-02-16 22:57:07.529 \u001b[0m\u001b[1mWorkflow starting (run-id 1645081027525826):\u001b[0m\n\u001b[35m2022-02-16 22:57:07.536 \u001b[0m\u001b[32m[1645081027525826/start/1 (pid 81238)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n\u001b[35m2022-02-16 22:57:08.359 \u001b[0m\u001b[32m[1645081027525826/start/1 (pid 81238)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n\u001b[35m2022-02-16 22:57:08.369 \u001b[0m\u001b[32m[1645081027525826/middle/2 (pid 81243)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n\u001b[35m2022-02-16 22:57:09.185 \u001b[0m\u001b[32m[1645081027525826/middle/2 (pid 81243)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n\u001b[35m2022-02-16 22:57:09.194 \u001b[0m\u001b[32m[1645081027525826/end/3 (pid 81248)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n\u001b[35m2022-02-16 22:57:10.020 \u001b[0m\u001b[32m[1645081027525826/end/3 (pid 81248)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n\u001b[35m2022-02-16 22:57:10.021 \u001b[0m\u001b[1mDone!\u001b[0m\n\u001b[0m" ] ], [ [ "You can show multiple steps by separating step names with commas:", "_____no_output_____" ] ], [ [ "#cell_meta:show_steps=start,end\n!python myflow.py run", "\u001b[35m\u001b[1mMetaflow 2.5.1.post3+git3b98a67\u001b[0m\u001b[35m\u001b[22m executing \u001b[0m\u001b[31m\u001b[1mMyFlow\u001b[0m\u001b[35m\u001b[22m\u001b[0m\u001b[35m\u001b[22m for \u001b[0m\u001b[31m\u001b[1muser:hamel\u001b[0m\u001b[35m\u001b[22m\u001b[K\u001b[0m\u001b[35m\u001b[22m\u001b[0m\n\u001b[35m\u001b[22mValidating your flow...\u001b[K\u001b[0m\u001b[35m\u001b[22m\u001b[0m\n\u001b[32m\u001b[1m The graph looks good!\u001b[K\u001b[0m\u001b[32m\u001b[1m\u001b[0m\n\u001b[35m\u001b[22mRunning pylint...\u001b[K\u001b[0m\u001b[35m\u001b[22m\u001b[0m\n\u001b[32m\u001b[1m Pylint is happy!\u001b[K\u001b[0m\u001b[32m\u001b[1m\u001b[0m\n\u001b[35m2022-02-16 22:57:11.456 \u001b[0m\u001b[1mWorkflow starting (run-id 1645081031453011):\u001b[0m\n\u001b[35m2022-02-16 22:57:11.464 \u001b[0m\u001b[32m[1645081031453011/start/1 (pid 81258)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n\u001b[35m2022-02-16 22:57:12.281 \u001b[0m\u001b[32m[1645081031453011/start/1 (pid 81258)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n\u001b[35m2022-02-16 22:57:12.289 \u001b[0m\u001b[32m[1645081031453011/middle/2 (pid 81263)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n\u001b[35m2022-02-16 22:57:13.089 \u001b[0m\u001b[32m[1645081031453011/middle/2 (pid 81263)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n\u001b[35m2022-02-16 22:57:13.098 \u001b[0m\u001b[32m[1645081031453011/end/3 (pid 81268)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n\u001b[35m2022-02-16 22:57:13.929 
\u001b[0m\u001b[32m[1645081031453011/end/3 (pid 81268)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n\u001b[35m2022-02-16 22:57:13.930 \u001b[0m\u001b[1mDone!\u001b[0m\n\u001b[0m" ] ], [ [ "### Writing Interactive Code & Toggling Visibility\n\nIt can be useful to write interactive code in notebooks as well. If you want to interact with a Flow, we recommend using the `--run-id-file <filename>` flag.\n\nNote we are hiding both the input and output of the cell below (because it is a bit repetitive in this case) with the `#cell_meta:tag=remove_cell` comment:", "_____no_output_____" ] ], [ [ "#cell_meta:tag=remove_cell\n!python myflow.py run --run-id-file run_id.txt", "\u001b[35m\u001b[1mMetaflow 2.5.1.post3+git3b98a67\u001b[0m\u001b[35m\u001b[22m executing \u001b[0m\u001b[31m\u001b[1mMyFlow\u001b[0m\u001b[35m\u001b[22m\u001b[0m\u001b[35m\u001b[22m for \u001b[0m\u001b[31m\u001b[1muser:hamel\u001b[0m\u001b[35m\u001b[22m\u001b[K\u001b[0m\u001b[35m\u001b[22m\u001b[0m\n\u001b[35m\u001b[22mValidating your flow...\u001b[K\u001b[0m\u001b[35m\u001b[22m\u001b[0m\n\u001b[32m\u001b[1m The graph looks good!\u001b[K\u001b[0m\u001b[32m\u001b[1m\u001b[0m\n\u001b[35m\u001b[22mRunning pylint...\u001b[K\u001b[0m\u001b[35m\u001b[22m\u001b[0m\n\u001b[32m\u001b[1m Pylint is happy!\u001b[K\u001b[0m\u001b[32m\u001b[1m\u001b[0m\n\u001b[35m2022-02-16 22:57:15.452 \u001b[0m\u001b[1mWorkflow starting (run-id 1645081035448485):\u001b[0m\n\u001b[35m2022-02-16 22:57:15.464 \u001b[0m\u001b[32m[1645081035448485/start/1 (pid 81281)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n\u001b[35m2022-02-16 22:57:16.279 \u001b[0m\u001b[32m[1645081035448485/start/1 (pid 81281)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n\u001b[35m2022-02-16 22:57:16.289 \u001b[0m\u001b[32m[1645081035448485/middle/2 (pid 81286)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n\u001b[35m2022-02-16 22:57:17.156 \u001b[0m\u001b[32m[1645081035448485/middle/2 (pid 81286)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n\u001b[35m2022-02-16 22:57:17.167 \u001b[0m\u001b[32m[1645081035448485/end/3 (pid 81291)] \u001b[0m\u001b[1mTask is starting.\u001b[0m\n\u001b[35m2022-02-16 22:57:17.986 \u001b[0m\u001b[32m[1645081035448485/end/3 (pid 81291)] \u001b[0m\u001b[1mTask finished successfully.\u001b[0m\n\u001b[35m2022-02-16 22:57:17.987 \u001b[0m\u001b[1mDone!\u001b[0m\n\u001b[0m" ] ], [ [ "Now, you can write and run your code as normal and this will show up in the docs:", "_____no_output_____" ] ], [ [ "run_id = !cat run_id.txt\nfrom metaflow import Run\nrun = Run(f'MyFlow/{run_id[0]}')\n\nrun.data.some_data", "_____no_output_____" ] ], [ [ "It is often smart to run tests in your docs. To do this, simply add assert statements. These will get tested automatically when we run the test suite.", "_____no_output_____" ] ], [ [ "assert run.data.some_data == ['some', 'data']\nassert run.successful", "_____no_output_____" ] ], [ [ "But what if you only want to show the cell input, but not the output? Perhaps the output is too long and not necessary. 
You can do this with the `#cell_meta:tag=remove_output` comment.", "_____no_output_____" ] ], [ [ " #cell_meta:tag=remove_output\nprint(''.join(['This output would be really annoying if shown in the docs\\n'] * 10))", "This output would be really annoying if shown in the docs\nThis output would be really annoying if shown in the docs\nThis output would be really annoying if shown in the docs\nThis output would be really annoying if shown in the docs\nThis output would be really annoying if shown in the docs\nThis output would be really annoying if shown in the docs\nThis output would be really annoying if shown in the docs\nThis output would be really annoying if shown in the docs\nThis output would be really annoying if shown in the docs\nThis output would be really annoying if shown in the docs\n\n" ] ], [ [ "You may want to just show the output and not the input. You can do that with the `#cell_meta:tag=remove_input` comment:", "_____no_output_____" ] ], [ [ "#cell_meta:tag=remove_input\nprint(''.join(['You can only see the output, but not the code that created me\\n'] * 3))", "You can only see the output, but not the code that created me\nYou can only see the output, but not the code that created me\nYou can only see the output, but not the code that created me\n\n" ] ], [ [ "## Running Tests", "_____no_output_____" ], [ "To test the notebooks, run `make test`. This will execute all notebooks in parallel and report any errors found.", "_____no_output_____" ], [ "### Skipping Tests In Cells", "_____no_output_____" ], [ "If you want to skip certain cells from running in tests because they take a really long time, you can place the comment `#notest` at the top of the cell.", "_____no_output_____" ], [ "## Docusaurus\n\nAll Docusaurus features will work because notebook markdown just becomes regular markdown. Furthermore, special things such as MDX, JSX, or Front Matter can be created with a raw cell. For more information, visit [the Docusaurus docs](https://docusaurus.io/docs).\n\n### Docusaurus Installation\n\n```\n$ yarn\n```\n\n### Docusaurus Local Development\n\n```\n$ yarn start\n```\n\nThis command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.\n\n### Docusaurus Build\n\n```\n$ yarn build\n```\n\nThis command generates static content into the `build` directory and can be served using any static content hosting service.\n\n### Docusaurus Deployment\n\nUsing SSH:\n\n```\n$ USE_SSH=true yarn deploy\n```\n\nNot using SSH:\n\n```\n$ GIT_USER=<Your GitHub username> yarn deploy\n```\n\nIf you are using GitHub pages for hosting, this command is a convenient way to build the website and push to the `gh-pages` branch.", "_____no_output_____" ] ] ]
[ "raw", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "raw" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
cbf6abdb045336f37fb1e00d7ee3c7f88f33f796
6,415
ipynb
Jupyter Notebook
examples/.ipynb_checkpoints/main_tinfocnf_unsup_beta_250-checkpoint.ipynb
minhtannguyen/ffjord
f3418249eaa4647f4339aea8d814cf2ce33be141
[ "MIT" ]
null
null
null
examples/.ipynb_checkpoints/main_tinfocnf_unsup_beta_250-checkpoint.ipynb
minhtannguyen/ffjord
f3418249eaa4647f4339aea8d814cf2ce33be141
[ "MIT" ]
null
null
null
examples/.ipynb_checkpoints/main_tinfocnf_unsup_beta_250-checkpoint.ipynb
minhtannguyen/ffjord
f3418249eaa4647f4339aea8d814cf2ce33be141
[ "MIT" ]
null
null
null
43.938356
461
0.673266
[ [ [ "import os\nos.environ['CUDA_VISIBLE_DEVICES']='0,1,2,3,4,5,6,7'", "_____no_output_____" ], [ "%run -p ../beta_tinfocnf.py --softcond False --sup False --cond False --latent_dim 4 --noise_std 0.3 --a_range \"1.0, .08\" --b_range \"0.25, .03\" --noise_std_test 0.3 --a_range_test \"1.0, .08\" --b_range_test \"0.25, .03\" --adjoint False --visualize True --niters 20000 --nsample 200 --lr 0.001 --beta 250.0 --save vis_sup --savedir ./results_ticnf_nstd_0_3_a_1_0_08_b_0_25_03_nstdt_0_3_at_1_0_08_bt_0_25_03_200_20K_lr_0_001_beta_250_unsup --gpu 1 \n#", "Saved ground truth spiral at ./results_ticnf_nstd_0_3_a_1_0_08_b_0_25_03_nstdt_0_3_at_1_0_08_bt_0_25_03_200_20K_lr_0_001_beta_5_unsup/vis_sup_ground_truth_0_train.png\nSaved ground truth spiral at ./results_ticnf_nstd_0_3_a_1_0_08_b_0_25_03_nstdt_0_3_at_1_0_08_bt_0_25_03_200_20K_lr_0_001_beta_5_unsup/vis_sup_ground_truth_1_train.png\nSaved ground truth spiral at ./results_ticnf_nstd_0_3_a_1_0_08_b_0_25_03_nstdt_0_3_at_1_0_08_bt_0_25_03_200_20K_lr_0_001_beta_5_unsup/vis_sup_ground_truth_2_train.png\nSaved ground truth spiral at ./results_ticnf_nstd_0_3_a_1_0_08_b_0_25_03_nstdt_0_3_at_1_0_08_bt_0_25_03_200_20K_lr_0_001_beta_5_unsup/vis_sup_ground_truth_0_test.png\nSaved ground truth spiral at ./results_ticnf_nstd_0_3_a_1_0_08_b_0_25_03_nstdt_0_3_at_1_0_08_bt_0_25_03_200_20K_lr_0_001_beta_5_unsup/vis_sup_ground_truth_1_test.png\nSaved ground truth spiral at ./results_ticnf_nstd_0_3_a_1_0_08_b_0_25_03_nstdt_0_3_at_1_0_08_bt_0_25_03_200_20K_lr_0_001_beta_5_unsup/vis_sup_ground_truth_2_test.png\nIter: 1, running avg elbo: -21213.2461\nIter: 2, running avg elbo: -21212.0930\nStored ckpt at ./results_ticnf_nstd_0_3_a_1_0_08_b_0_25_03_nstdt_0_3_at_1_0_08_bt_0_25_03_200_20K_lr_0_001_beta_5_unsup/ckpt.pth\nTraining complete after 2 iters.\nSaved visualization figure at ./results_ticnf_nstd_0_3_a_1_0_08_b_0_25_03_nstdt_0_3_at_1_0_08_bt_0_25_03_200_20K_lr_0_001_beta_5_unsup/vis_sup_train_0.png\nSaved visualization figure at ./results_ticnf_nstd_0_3_a_1_0_08_b_0_25_03_nstdt_0_3_at_1_0_08_bt_0_25_03_200_20K_lr_0_001_beta_5_unsup/vis_sup_train_1.png\nSaved visualization figure at ./results_ticnf_nstd_0_3_a_1_0_08_b_0_25_03_nstdt_0_3_at_1_0_08_bt_0_25_03_200_20K_lr_0_001_beta_5_unsup/vis_sup_train_2.png\n" ], [ "# %run -p ../latent_ode_tinfocnf.py --sup True --noise_std 0.3 --a 0. --b 0.3 --noise_std_test 0.3 --a_test 0. --b_test 0.4 --adjoint False --visualize True --niters 2000 --lr 0.01 --save vis_sup --savedir ./results_sup_nstd_0_3_a_0_b_0_3_nstdt_0_3_at_0_bt_0_4 --gpu 0 \n# #", "_____no_output_____" ], [ "# %run -p ../latent_ode_tinfocnf.py --sup True --noise_std 0.3 --a 0. --b 0.3 --noise_std_test 0.3 --a_test 0. --b_test 0.5 --adjoint False --visualize True --niters 2000 --lr 0.01 --save vis_sup --savedir ./results_sup_nstd_0_3_a_0_b_0_3_nstdt_0_3_at_0_bt_0_5 --gpu 0 \n# #", "_____no_output_____" ], [ "# %run -p ../latent_ode_tinfocnf.py --sup True --noise_std 0.3 --a 0. --b 0.3 --noise_std_test 0.3 --a_test 1. --b_test 0.3 --adjoint False --visualize True --niters 2000 --lr 0.01 --save vis_sup --savedir ./results_sup_nstd_0_3_a_0_b_0_3_nstdt_0_3_at_1_bt_0_3 --gpu 0 \n# #", "_____no_output_____" ], [ "# %run -p ../latent_ode_tinfocnf.py --sup True --noise_std 0.3 --a 0. --b 0.3 --noise_std_test 0.3 --a_test 1. 
--b_test 0.4 --adjoint False --visualize True --niters 2000 --lr 0.01 --save vis_sup --savedir ./results_sup_nstd_0_3_a_0_b_0_3_nstdt_0_3_at_1_bt_0_4 --gpu 0 \n# #", "_____no_output_____" ], [ "# %run -p ../latent_ode_tinfocnf.py --sup True --noise_std 0.3 --a 0. --b 0.3 --noise_std_test 0.3 --a_test 1. --b_test 0.5 --adjoint False --visualize True --niters 2000 --lr 0.01 --save vis_sup --savedir ./results_sup_nstd_0_3_a_0_b_0_3_nstdt_0_3_at_1_bt_0_5 --gpu 0 \n# #", "_____no_output_____" ], [ "# %run -p ../latent_ode_tinfocnf.py --sup True --noise_std 0.3 --a 0. --b 0.3 --noise_std_test 0.3 --a_test 2. --b_test 0.3 --adjoint False --visualize True --niters 2000 --lr 0.01 --save vis_sup --savedir ./results_sup_nstd_0_3_a_0_b_0_3_nstdt_0_3_at_2_bt_0_3 --gpu 0 \n# #", "_____no_output_____" ], [ "# %run -p ../latent_ode_tinfocnf.py --sup True --noise_std 0.3 --a 0. --b 0.3 --noise_std_test 0.3 --a_test 2. --b_test 0.4 --adjoint False --visualize True --niters 2000 --lr 0.01 --save vis_sup --savedir ./results_sup_nstd_0_3_a_0_b_0_3_nstdt_0_3_at_2_bt_0_4 --gpu 0 \n# #", "_____no_output_____" ], [ "# %run -p ../latent_ode_tinfocnf.py --sup True --noise_std 0.3 --a 0. --b 0.3 --noise_std_test 0.3 --a_test 2. --b_test 0.5 --adjoint False --visualize True --niters 2000 --lr 0.01 --save vis_sup --savedir ./results_sup_nstd_0_3_a_0_b_0_3_nstdt_0_3_at_2_bt_0_5 --gpu 0 \n# #", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbf6b1aa9734cd7ff5d995375bdb64c0edac9f9c
11,037
ipynb
Jupyter Notebook
Python Absolute Beginner/Module_3_1_Absolute_Beginner.ipynb
kmusgro1/pythonteachingcode
4bba1888bd89cf94acaac37a1c4174d143d6f717
[ "MIT" ]
null
null
null
Python Absolute Beginner/Module_3_1_Absolute_Beginner.ipynb
kmusgro1/pythonteachingcode
4bba1888bd89cf94acaac37a1c4174d143d6f717
[ "MIT" ]
null
null
null
Python Absolute Beginner/Module_3_1_Absolute_Beginner.ipynb
kmusgro1/pythonteachingcode
4bba1888bd89cf94acaac37a1c4174d143d6f717
[ "MIT" ]
null
null
null
27.661654
643
0.530941
[ [ [ "# 1-4.1 Intro Python\n## Conditionals \n- **`if`, `else`, `pass`**\n - **Conditionals using Boolean String Methods**\n - Comparison operators\n - String comparisons\n\n----- \n\n><font size=\"5\" color=\"#00A0B2\" face=\"verdana\"> <B>Student will be able to</B></font> \n- **control code flow with `if`... `else` conditional logic** \n - **using Boolean string methods (`.isupper(), .isalpha(), startswith()...`)** \n - using comparison (`>, <, >=, <=, ==, !=`) \n - using Strings in comparisons ", "_____no_output_____" ], [ "# &nbsp;\n<font size=\"6\" color=\"#00A0B2\" face=\"verdana\"> <B>Concepts</B></font>\n## Conditionals use `True` or `False`\n - **`if`**\n - **`else`**\n - **`pass`** \n \n[![view video](https://iajupyterprodblobs.blob.core.windows.net/imagecontainer/common/play_video.png)]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{\"src\":\"http://jupyternootbookwams.streaming.mediaservices.windows.net/c53fdb30-b2b0-4183-9686-64b0e5b46dd2/Unit1_Section4.1-Conditionals.ism/manifest\",\"type\":\"application/vnd.ms-sstr+xml\"}],[{\"src\":\"http://jupyternootbookwams.streaming.mediaservices.windows.net/c53fdb30-b2b0-4183-9686-64b0e5b46dd2/Unit1_Section4.1-Conditonals.vtt\",\"srclang\":\"en\",\"kind\":\"subtitles\",\"label\":\"english\"}])", "_____no_output_____" ], [ "# &nbsp;\n<font size=\"6\" color=\"#00A0B2\" face=\"verdana\"> <B>Examples</B></font>", "_____no_output_____" ] ], [ [ "if True:\n print(\"True means do something\")\nelse:\n print(\"Not True means do something else\")", "True means do something\n" ], [ "hot_tea = True\n\nif hot_tea:\n print(\"enjoy some hot tea!\")\nelse:\n print(\"enjoy some tea, and perhaps try hot tea next time.\")", "enjoy some hot tea!\n" ], [ "someone_i_know = False\nif someone_i_know:\n print(\"how have you been?\")\nelse:\n # use pass if there is no need to execute code \n pass", "_____no_output_____" ], [ "# changed the value of someone_i_know\nsomeone_i_know = True\nif someone_i_know:\n print(\"how have you been?\")\nelse:\n pass", "how have you been?\n" ] ], [ [ "# &nbsp;\n<font size=\"6\" color=\"#B24C00\" face=\"verdana\"> <B>Task 1</B></font>\n\n## Conditionals\n### Using Boolean with &nbsp; `if, else`\n\n- **Give a weather report using `if, else`**", "_____no_output_____" ] ], [ [ "sunny_today = True\n# [ ] test if it is sunny_today and give proper responses using if and else\nif sunny_today:\n print(\"Put on sunscreen!\")\nelse:\n print(\"Take an umbrella\")\n", "Put on sunscreen!\n" ], [ "sunny_today = False\n# [ ] use code you created above and test sunny_today = False\n\nif sunny_today:\n print(\"Put on sunscreen!\")\nelse:\n print(\"Take an umbrella\")", "Take an umbrella\n" ] ], [ [ "# &nbsp;\n<font size=\"6\" color=\"#00A0B2\" face=\"verdana\"> <B>Concepts</B></font>\n## Conditionals: Boolean String test methods with `if`\n[![view video](https://iajupyterprodblobs.blob.core.windows.net/imagecontainer/common/play_video.png)]( http://edxinteractivepage.blob.core.windows.net/edxpages/f7cff1a7-5601-48a1-95a6-fd1fdfabd20e.html?details=[{\"src\":\"http://jupyternootbookwams.streaming.mediaservices.windows.net/caa56256-733a-4172-96f7-9ecfc12d49d0/Unit1_Section4.1-conditionals-bool.ism/manifest\",\"type\":\"application/vnd.ms-sstr+xml\"}],[{\"src\":\"http://jupyternootbookwams.streaming.mediaservices.windows.net/caa56256-733a-4172-96f7-9ecfc12d49d0/Unit1_Section4.1-conditonals-bool.vtt\",\"srclang\":\"en\",\"kind\":\"subtitles\",\"label\":\"english\"}])\n```python\nif 
student_name.isalpha():\n``` \n- **`.isalnum()`**\n- **`.istitle()`**\n- **`.isdigit()`**\n- **`.islower()`**\n- **`.startswith()`**\n", "_____no_output_____" ], [ "### &nbsp;\n<font size=\"6\" color=\"#00A0B2\" face=\"verdana\"> <B>Examples</B></font>", "_____no_output_____" ] ], [ [ "# review code and run cell\nfavorite_book = input(\"Enter the title of a favorite book: \")\n\nif favorite_book.istitle():\n print(favorite_book, \"- nice capitalization in that title!\")\nelse:\n print(favorite_book, \"- consider capitalization throughout for book titles.\")", "Enter the title of a favorite book: cat in the hat\ncat in the hat - consider capitalization throughout for book titles.\n" ], [ "# review code and run cell\na_number = input(\"enter a positive integer number: \")\n\nif a_number.isdigit():\n print(a_number, \"is a positive integer\")\nelse:\n print(a_number, \"is not a positive integer\")\n \n# another if\nif a_number.isalpha():\n print(a_number, \"is more like a word\")\nelse:\n pass", "enter a positive integer number: 1.1\n1.1 is not a positive integer\n" ], [ "# review code and run cell\nvehicle_type = input('\"enter a type of vehicle that starts with \"P\": ')\n\nif vehicle_type.upper().startswith(\"P\"):\n print(vehicle_type, 'starts with \"P\"')\nelse:\n print(vehicle_type, 'does not start with \"P\"')", "\"enter a type of vehicle that starts with \"P\": p\np starts with \"P\"\n" ] ], [ [ "# &nbsp;\n<font size=\"6\" color=\"#B24C00\" face=\"verdana\"> <B>Task 2: multi-part</B></font>\n\n## Evaluating Boolean Conditionals \n### create evaluations for `.islower()`\n- print output describing **if** each of the 2 strings is all lower or not\n", "_____no_output_____" ] ], [ [ "test_string_1 = \"welcome\"\ntest_string_2 = \"I have $3\"\n# [ ] use if, else to test for islower() for the 2 strings\nif test_string_1.islower():\n print(test_string_1, \"is lower case.\")\nelse:\n print(test_string_1,\"is not lower case.\")\n \nif test_string_2.islower():\n print(test_string_2, \"is lower case.\")\nelse:\n print(test_string_2,\"is not lower case.\")", "welcome is lower case.\nI have $3 is not lower case.\n" ] ], [ [ "<font size=\"3\" color=\"#B24C00\" face=\"verdana\"> <B>Task 2 continued.. </B></font>\n### create a functions using `startswith('w')`\n- w_start_test() tests if starts with \"w\" \n**function should have a parameter for `test_string` and print the test result**", "_____no_output_____" ] ], [ [ "test_string_1 = \"welcome\"\ntest_string_2 = \"I have $3\"\ntest_string_3 = \"With a function it's efficient to repeat code\"\n# [ ] create a function w_start_test() use if & else to test with startswith('w')\ndef w_start_test(test_string):\n if test_string.startswith(\"w\"):\n print(test_string,\"starts with w\")\n else:\n print(test_string,\"does not start with w\")\n return\n# [ ] Test the 3 string variables provided by calling w_start_test()\nw_start_test(test_string_3)\n", "With a function it's efficient to repeat code does not start with w\n" ] ], [ [ "[Terms of use](http://go.microsoft.com/fwlink/?LinkID=206977) &nbsp; [Privacy & cookies](https://go.microsoft.com/fwlink/?LinkId=521839) &nbsp; © 2017 Microsoft", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
cbf6b44bc9bf3c5adfbc5ed42ba110ac7bb2f0ec
164,062
ipynb
Jupyter Notebook
hownoisy/MIL_Audio_Event_Detection.ipynb
mthaak/hownoisy
e346abd9594c7e27fce0a646b4776128991bb77f
[ "MIT" ]
2
2018-04-16T13:40:54.000Z
2018-04-16T13:41:00.000Z
hownoisy/MIL_Audio_Event_Detection.ipynb
mthaak/hownoisy
e346abd9594c7e27fce0a646b4776128991bb77f
[ "MIT" ]
null
null
null
hownoisy/MIL_Audio_Event_Detection.ipynb
mthaak/hownoisy
e346abd9594c7e27fce0a646b4776128991bb77f
[ "MIT" ]
null
null
null
180.884234
29,236
0.87729
[ [ [ "import glob\nimport os\nimport librosa\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline", "_____no_output_____" ], [ "def gen_mfcc_fn(fn, mfcc_window_size, mfcc_stride_size):\n \n X, sample_rate = librosa.load(fn, sr=None, mono=True)\n \n if sample_rate != 44100:\n return\n \n mfcc = librosa.feature.mfcc(X, sample_rate, \n n_fft=int(mfcc_window_size * sample_rate), \n hop_length=int(mfcc_stride_size * sample_rate))\n return mfcc.T", "_____no_output_____" ], [ "def generate_mfccs_for_gmm(parent_dir, \n sub_dirs, \n file_ext='*.wav', \n mfcc_window_size=0.02, mfcc_stride_size=0.01):\n \n mfccs = np.empty((0, 20))\n \n for label, sub_dir in enumerate(sub_dirs):\n for fn in glob.glob(os.path.join(parent_dir, sub_dir, file_ext)):\n \n mfcc = gen_mfcc_fn(fn, mfcc_window_size, mfcc_stride_size)\n if mfcc is None:\n continue\n \n mfccs = np.vstack([mfccs, mfcc])\n \n return mfccs", "_____no_output_____" ], [ "parent_dir = './UrbanSound8K/audio/'\ntr_sub_dirs = ['fold%d'% d for d in range(1, 2)]\n\nmfccs_for_gmm = generate_mfccs_for_gmm(parent_dir, tr_sub_dirs)\nprint(mfccs_for_gmm.shape)", "(132147, 20)\n" ], [ "from sklearn.mixture import GaussianMixture\n\ngmm = GaussianMixture(n_components=64, verbose=10)\ngmm.fit(mfccs_for_gmm)", "Initialization 0\n Iteration 0\t time lapse 11.38020s\t ll change inf\n Iteration 10\t time lapse 36.51747s\t ll change 0.08164\n Iteration 20\t time lapse 35.59874s\t ll change 0.00713\n Iteration 30\t time lapse 38.07266s\t ll change 0.00539\n Iteration 40\t time lapse 38.19159s\t ll change 0.00250\n Iteration 50\t time lapse 35.89631s\t ll change 0.00089\nInitialization converged: True\t time lapse 195.65732s\t ll -69.63772\n" ], [ "print(mfccs_for_gmm[0].shape)\ny = gmm.predict_proba(mfccs_for_gmm[:1])\nprint(y[0].shape)", "(20,)\n(64,)\n" ], [ "import pickle\n\npickle.dump(gmm, open('gaussian_mixture_model.pkl', 'wb'))", "_____no_output_____" ], [ "gmm_bak = pickle.load(open('gaussian_mixture_model.pkl', 'rb'))", "_____no_output_____" ], [ "gmm_bak", "_____no_output_____" ], [ "sound_class_table = {\n 'air_conditioner' : 0,\n 'car_horn' : 1,\n 'children_playing' : 2,\n 'dog_bark' : 3,\n 'drilling' : 4,\n 'engine_idling' : 5,\n 'gun_shot' : 6,\n 'jackhammer' : 7,\n 'siren' : 8,\n 'street_music' : 9\n}\n\ndef segment_window(audio_len, segment_len, segment_stride):\n \n start = 0\n while start < audio_len:\n yield start, start + segment_len\n start += segment_stride\n \ndef generate_labels(fn, target_class):\n\n return 1 if int(fn.split('-')[-3]) == sound_class_table[target_class] \\\n else -1\n\ndef generate_F_features(parent_dir, \n sub_dirs,\n num_segment_needed,\n target_class,\n file_ext='*.wav', \n mfcc_window_size=0.02, \n mfcc_stride_size=0.01):\n \n F_features, labels = np.empty((0, 64)), np.array([])\n\n for label, sub_dir in enumerate(sub_dirs):\n for fn in glob.glob(os.path.join(parent_dir, sub_dir, file_ext)):\n \n X, sample_rate = librosa.load(fn, sr=None, mono=True)\n if sample_rate != 44100:\n continue\n\n segment_len = int(sample_rate * 0.1)\n segment_stride = int(sample_rate * 0.05)\n \n# file_F_features = np.empty((0, 64))\n for start, end in segment_window(X.size, segment_len, segment_stride):\n segment_mfccs = librosa.feature.mfcc(X[start:end], sample_rate, \n n_fft=int(mfcc_window_size * sample_rate), \n hop_length=int(mfcc_stride_size * sample_rate))\n \n segment_F_features = np.sum(gmm.predict_proba(segment_mfccs.T), axis=0) \\\n / (segment_mfccs.shape[1])\n \n F_features = np.vstack([F_features, 
segment_F_features])\n \n labels = np.append(labels, generate_labels(fn, target_class))\n \n if labels.shape[0] >= num_segment_needed:\n return np.array(F_features), np.array(labels, dtype=np.int)\n# F_features.append(file_F_features)\n \n print(\"Finished!\")\n return np.array(F_features), np.array(labels, dtype=np.int)", "_____no_output_____" ], [ "def extract_test_fn_labels(fn, duration, target_class):\n\n label_file_path = fn.replace('wav', 'txt')\n\n with open(label_file_path) as fd:\n lines = fd.readlines()\n time_sections_with_label = list(map(lambda x: (float(x[0]), float(x[1]), x[2]), map(lambda x : x.split(), lines)))\n \n time_intervals = np.arange(0.0, duration, 0.05)\n labels = np.zeros((time_intervals.shape[0]), dtype=np.int)\n\n for idx, t in enumerate(time_intervals):\n \n labels[idx] = -1\n \n for time_section in time_sections_with_label:\n if t < time_section[0] or t > time_section[1]:\n continue\n \n if time_section[2] == target_class:\n labels[idx] = 1\n break\n\n return labels\n\ndef gen_test_fn_features(fn):\n \n X, sample_rate = librosa.load(fn, sr=None, mono=True)\n if sample_rate != 44100:\n return X, sample_rate, None\n \n segment_len = int(sample_rate * 0.1)\n segment_stride = int(sample_rate * 0.05)\n \n print(fn)\n file_F_features = np.empty((0, 64))\n for start, end in segment_window(X.size, segment_len, segment_stride):\n segment_mfccs = librosa.feature.mfcc(X[start:end], sample_rate, \n n_fft=int(0.02 * sample_rate), \n hop_length=int(0.01 * sample_rate))\n\n segment_F_features = np.sum(gmm.predict_proba(segment_mfccs.T), axis=0) \\\n / (segment_mfccs.shape[1])\n\n file_F_features = np.vstack([file_F_features, segment_F_features])\n \n return X, sample_rate, file_F_features\n \ndef gen_testing_data_for_svm(target_class, parent_dir = '.', \n sub_dirs = ['soundscapes_5_events_sub'],\n file_ext='*.wav'):\n \n F_features, labels = [], []\n \n for label, sub_dir in enumerate(sub_dirs):\n for fn in glob.glob(os.path.join(parent_dir, sub_dir, file_ext)):\n \n X, sample_rate, file_F_features = gen_test_fn_features(fn)\n if file_F_features is None:\n continue\n \n fn_labels = extract_test_fn_labels(fn, X.size/sample_rate, target_class)\n labels.append(fn_labels) \n F_features.append(file_F_features)\n \n print(\"Finished!\")\n return F_features, labels\n\n# def gen_testing_data_for_svm(target_class, parent_dir = '.', \n# sub_dirs = ['soundscapes_5_events_sub'],\n# file_ext='*.wav'):\n \n# F_features, labels = [], []\n# fns = []\n# for label, sub_dir in enumerate(sub_dirs):\n# for fn in glob.glob(os.path.join(parent_dir, sub_dir, file_ext)):\n \n# X, sample_rate, file_F_features = gen_test_fn_features(fn)\n# if file_F_features is None:\n# continue\n# fns.append(fn)\n# print(fn)\n# # fn_labels = extract_test_fn_labels(fn, X.size/sample_rate, target_class)\n# # labels.append(fn_labels) \n# F_features.append(file_F_features)\n \n# print(\"Finished!\")\n# return F_features, fns", "_____no_output_____" ], [ "def gen_training_data_for_svm(num_target_class_segment, target_class):\n \n parent_dir = './UrbanSound8K/ByClass'\n \n F_features_target_class, labels_target_class = generate_F_features(parent_dir,\n [target_class],\n num_target_class_segment,\n target_class)\n \n F_features_non_target_class = np.empty((0, 64))\n labels_non_target_class = np.array([])\n\n for k, _ in sound_class_table.items():\n if k == target_class:\n continue\n\n tmp_F_features, tmp_labels = generate_F_features(parent_dir, \n [k],\n int(num_target_class_segment/9),\n target_class)\n \n 
F_features_non_target_class = np.vstack([F_features_non_target_class, tmp_F_features])\n labels_non_target_class = np.append(labels_non_target_class, tmp_labels)\n \n return np.vstack([F_features_non_target_class, F_features_target_class]), \\\n np.append(labels_non_target_class, labels_target_class)", "_____no_output_____" ], [ "X_all, y_all = gen_training_data_for_svm(1800, target_class='air_conditioner')", "_____no_output_____" ], [ "print(X_all.shape)\nprint(y_all.shape)", "(3600, 64)\n(3600,)\n" ], [ "from sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(\n X_all, y_all, stratify=y_all, train_size=0.85)\n\nprint(X_train.shape)\nprint(y_train.shape)\nprint(X_test.shape)\nprint(y_test.shape)", "(3060, 64)\n(3060,)\n(540, 64)\n(540,)\n" ], [ "from sklearn.svm import SVC\n\nclf = SVC(kernel='rbf', C=100, gamma=10, probability=True)\nclf.fit(X_train, y_train)\n\nprint(\"Training set score: {:.3f}\".format(clf.score(X_train, y_train)))\nprint(\"Test set score: {:.3f}\".format(clf.score(X_test, y_test)))", "Training set score: 0.910\nTest set score: 0.894\n" ], [ "from sklearn.metrics import confusion_matrix\nprint(clf.classes_)\nconfusion_matrix(y_test, clf.predict(X_test))", "[-1. 1.]\n" ], [ "import pickle\npickle.dump(clf, open('./sound_detectors/air_conditioner_detector.pkl', 'wb'))", "_____no_output_____" ], [ "F_features_test, labels_test = gen_testing_data_for_svm(target_class='air_conditioner', \n parent_dir='./soundscapes', \n sub_dirs=['air_conditioner'])", "./soundscapes/air_conditioner/soundscape_56.wav\n./soundscapes/air_conditioner/soundscape_42.wav\n./soundscapes/air_conditioner/soundscape_95.wav\n./soundscapes/air_conditioner/soundscape_81.wav\n./soundscapes/air_conditioner/soundscape_80.wav\n./soundscapes/air_conditioner/soundscape_94.wav\n./soundscapes/air_conditioner/soundscape_43.wav\n./soundscapes/air_conditioner/soundscape_57.wav\n./soundscapes/air_conditioner/soundscape_41.wav\n./soundscapes/air_conditioner/soundscape_55.wav\n./soundscapes/air_conditioner/soundscape_69.wav\n./soundscapes/air_conditioner/soundscape_82.wav\n./soundscapes/air_conditioner/soundscape_96.wav\n./soundscapes/air_conditioner/soundscape_97.wav\n./soundscapes/air_conditioner/soundscape_83.wav\n./soundscapes/air_conditioner/soundscape_68.wav\n./soundscapes/air_conditioner/soundscape_54.wav\n./soundscapes/air_conditioner/soundscape_40.wav\n./soundscapes/air_conditioner/soundscape_78.wav\n./soundscapes/air_conditioner/soundscape_44.wav\n./soundscapes/air_conditioner/soundscape_50.wav\n./soundscapes/air_conditioner/soundscape_87.wav\n./soundscapes/air_conditioner/soundscape_93.wav\n./soundscapes/air_conditioner/soundscape_92.wav\n./soundscapes/air_conditioner/soundscape_86.wav\n./soundscapes/air_conditioner/soundscape_51.wav\n./soundscapes/air_conditioner/soundscape_45.wav\n./soundscapes/air_conditioner/soundscape_79.wav\n./soundscapes/air_conditioner/soundscape_53.wav\n./soundscapes/air_conditioner/soundscape_47.wav\n./soundscapes/air_conditioner/soundscape_90.wav\n./soundscapes/air_conditioner/soundscape_84.wav\n./soundscapes/air_conditioner/soundscape_85.wav\n./soundscapes/air_conditioner/soundscape_91.wav\n./soundscapes/air_conditioner/soundscape_46.wav\n./soundscapes/air_conditioner/soundscape_52.wav\n./soundscapes/air_conditioner/soundscape_6.wav\n./soundscapes/air_conditioner/soundscape_35.wav\n./soundscapes/air_conditioner/soundscape_21.wav\n./soundscapes/air_conditioner/soundscape_20.wav\n./soundscapes/air_conditioner/soundscape_34.wav\n.
/soundscapes/air_conditioner/soundscape_7.wav\n./soundscapes/air_conditioner/soundscape_5.wav\n./soundscapes/air_conditioner/soundscape_22.wav\n./soundscapes/air_conditioner/soundscape_36.wav\n./soundscapes/air_conditioner/soundscape_37.wav\n./soundscapes/air_conditioner/soundscape_23.wav\n./soundscapes/air_conditioner/soundscape_4.wav\n./soundscapes/air_conditioner/soundscape_0.wav\n./soundscapes/air_conditioner/soundscape_27.wav\n./soundscapes/air_conditioner/soundscape_33.wav\n./soundscapes/air_conditioner/soundscape_32.wav\n./soundscapes/air_conditioner/soundscape_26.wav\n./soundscapes/air_conditioner/soundscape_1.wav\n./soundscapes/air_conditioner/soundscape_3.wav\n./soundscapes/air_conditioner/soundscape_18.wav\n./soundscapes/air_conditioner/soundscape_30.wav\n./soundscapes/air_conditioner/soundscape_24.wav\n./soundscapes/air_conditioner/soundscape_25.wav\n./soundscapes/air_conditioner/soundscape_31.wav\n./soundscapes/air_conditioner/soundscape_19.wav\n./soundscapes/air_conditioner/soundscape_2.wav\n./soundscapes/air_conditioner/soundscape_14.wav\n./soundscapes/air_conditioner/soundscape_28.wav\n./soundscapes/air_conditioner/soundscape_29.wav\n./soundscapes/air_conditioner/soundscape_15.wav\n./soundscapes/air_conditioner/soundscape_17.wav\n./soundscapes/air_conditioner/soundscape_16.wav\n./soundscapes/air_conditioner/soundscape_9.wav\n./soundscapes/air_conditioner/soundscape_12.wav\n./soundscapes/air_conditioner/soundscape_13.wav\n./soundscapes/air_conditioner/soundscape_8.wav\n./soundscapes/air_conditioner/soundscape_39.wav\n./soundscapes/air_conditioner/soundscape_11.wav\n./soundscapes/air_conditioner/soundscape_10.wav\n./soundscapes/air_conditioner/soundscape_38.wav\n./soundscapes/air_conditioner/soundscape_77.wav\n./soundscapes/air_conditioner/soundscape_63.wav\n./soundscapes/air_conditioner/soundscape_88.wav\n./soundscapes/air_conditioner/soundscape_89.wav\n./soundscapes/air_conditioner/soundscape_62.wav\n./soundscapes/air_conditioner/soundscape_76.wav\n./soundscapes/air_conditioner/soundscape_60.wav\n./soundscapes/air_conditioner/soundscape_74.wav\n./soundscapes/air_conditioner/soundscape_48.wav\n./soundscapes/air_conditioner/soundscape_49.wav\n./soundscapes/air_conditioner/soundscape_75.wav\n./soundscapes/air_conditioner/soundscape_61.wav\n./soundscapes/air_conditioner/soundscape_59.wav\n./soundscapes/air_conditioner/soundscape_65.wav\n./soundscapes/air_conditioner/soundscape_71.wav\n./soundscapes/air_conditioner/soundscape_70.wav\n./soundscapes/air_conditioner/soundscape_64.wav\n./soundscapes/air_conditioner/soundscape_58.wav\n./soundscapes/air_conditioner/soundscape_72.wav\n./soundscapes/air_conditioner/soundscape_66.wav\n./soundscapes/air_conditioner/soundscape_99.wav\n./soundscapes/air_conditioner/soundscape_98.wav\n./soundscapes/air_conditioner/soundscape_67.wav\n./soundscapes/air_conditioner/soundscape_73.wav\nFinished!\n" ], [ "np.savetxt(\"./sound_detector_test_data/siren_test_features.csv\", np.array(F_features_test), delimiter=\",\")\nnp.savetxt(\"./sound_detector_test_data/siren_test_labels.csv\", np.array(labels_test), delimiter=\",\")", "_____no_output_____" ], [ "print(np.array(F_features_test).shape)\nprint(np.array(labels_test).shape)\n\nfrom sklearn.metrics import recall_score, precision_score, f1_score, accuracy_score\n\nrecall_scores = []\nprecision_scores = []\nf1_scores = []\naccuracy_scores = []\n\nfor x, y in zip(F_features_test, labels_test):\n preds = clf.predict(x)\n recall_scores.append(recall_score(y, preds))\n 
precision_scores.append(precision_score(y, preds))\n f1_scores.append(f1_score(y, preds))\n accuracy_scores.append(accuracy_score(y, preds))", "(100,)\n(100,)\n" ], [ "plt.plot(recall_scores)\nplt.show()\nplt.plot(precision_scores)\nplt.show()\nplt.plot(f1_scores)\nplt.show()\nplt.plot(accuracy_scores)\nplt.show()", "_____no_output_____" ], [ "# 0.696400072903\n# 0.697101319222\n# 0.669004671073\n# 0.8357\n\nprint(np.mean(recall_scores))\nprint(np.mean(precision_scores))\nprint(np.mean(f1_scores))\nprint(np.mean(accuracy_scores))", "0.255838121178\n0.412409033263\n0.273474194091\n0.689105798005\n" ], [ "print(len(F_features_test))\nprint(len(fns))\npreds = list(map(lambda d: clf.predict(d), F_features_test))\nprecisions = list(map(lambda d: d.tolist().count(1)/len(d), preds))\nprint(len(precisions))\nplt.plot(precisions)\n\nfs = list(filter(lambda p: p[0]>=.9, zip(precisions, fns)))\nprint(len(fs))\nprint(fs)", "100\n675\n100\n53\n[(0.915, './UrbanSound8K/ByClass/air_conditioner/75743-0-0-6.wav'), (0.9075, './UrbanSound8K/ByClass/air_conditioner/189982-0-0-4.wav'), (0.97, './UrbanSound8K/ByClass/air_conditioner/146845-0-0-21.wav'), (0.9525, './UrbanSound8K/ByClass/air_conditioner/146709-0-0-65.wav'), (0.9325, './UrbanSound8K/ByClass/air_conditioner/134717-0-0-26.wav'), (0.9925, './UrbanSound8K/ByClass/air_conditioner/147926-0-0-44.wav'), (0.905, './UrbanSound8K/ByClass/air_conditioner/75743-0-0-11.wav'), (0.925, './UrbanSound8K/ByClass/air_conditioner/189989-0-0-0.wav'), (0.945, './UrbanSound8K/ByClass/air_conditioner/55018-0-0-106.wav'), (0.9325, './UrbanSound8K/ByClass/air_conditioner/74507-0-0-13.wav'), (0.925, './UrbanSound8K/ByClass/air_conditioner/83502-0-0-10.wav'), (0.9125, './UrbanSound8K/ByClass/air_conditioner/13230-0-0-10.wav'), (0.9675, './UrbanSound8K/ByClass/air_conditioner/189989-0-0-1.wav'), (0.915, './UrbanSound8K/ByClass/air_conditioner/75743-0-0-10.wav'), (0.965, './UrbanSound8K/ByClass/air_conditioner/146714-0-0-21.wav'), (0.94, './UrbanSound8K/ByClass/air_conditioner/134717-0-0-27.wav'), (0.9825, './UrbanSound8K/ByClass/air_conditioner/75743-0-0-7.wav'), (0.9675, './UrbanSound8K/ByClass/air_conditioner/184805-0-0-34.wav'), (0.9175, './UrbanSound8K/ByClass/air_conditioner/55018-0-0-92.wav'), (0.93, './UrbanSound8K/ByClass/air_conditioner/189982-0-0-7.wav'), (0.91, './UrbanSound8K/ByClass/air_conditioner/134717-0-0-25.wav'), (0.9475, './UrbanSound8K/ByClass/air_conditioner/13230-0-0-12.wav'), (0.965, './UrbanSound8K/ByClass/air_conditioner/83502-0-0-12.wav'), (0.955, './UrbanSound8K/ByClass/air_conditioner/30204-0-0-3.wav'), (0.9425, './UrbanSound8K/ByClass/air_conditioner/83502-0-0-13.wav'), (0.9425, './UrbanSound8K/ByClass/air_conditioner/30204-0-0-2.wav'), (0.9325, './UrbanSound8K/ByClass/air_conditioner/74677-0-0-28.wav'), (0.9425, './UrbanSound8K/ByClass/air_conditioner/74677-0-0-14.wav'), (0.9675, './UrbanSound8K/ByClass/air_conditioner/146714-0-0-22.wav'), (0.905, './UrbanSound8K/ByClass/air_conditioner/57320-0-0-32.wav'), (0.935, './UrbanSound8K/ByClass/air_conditioner/146690-0-0-35.wav'), (0.9975, './UrbanSound8K/ByClass/air_conditioner/134717-0-0-18.wav'), (0.9025, './UrbanSound8K/ByClass/air_conditioner/170245-0-0-0.wav'), (0.905, './UrbanSound8K/ByClass/air_conditioner/162103-0-0-8.wav'), (0.925, './UrbanSound8K/ByClass/air_conditioner/189982-0-0-6.wav'), (0.9375, './UrbanSound8K/ByClass/air_conditioner/55018-0-0-87.wav'), (0.9225, './UrbanSound8K/ByClass/air_conditioner/55018-0-0-93.wav'), (0.9475, 
'./UrbanSound8K/ByClass/air_conditioner/184805-0-0-27.wav'), (0.9475, './UrbanSound8K/ByClass/air_conditioner/134717-0-0-20.wav'), (0.95, './UrbanSound8K/ByClass/air_conditioner/57320-0-0-22.wav'), (0.9875, './UrbanSound8K/ByClass/air_conditioner/146845-0-0-33.wav'), (0.94, './UrbanSound8K/ByClass/air_conditioner/146845-0-0-27.wav'), (0.95, './UrbanSound8K/ByClass/air_conditioner/147926-0-0-42.wav'), (0.94, './UrbanSound8K/ByClass/air_conditioner/75743-0-0-17.wav'), (0.9975, './UrbanSound8K/ByClass/air_conditioner/73524-0-0-126.wav'), (0.9175, './UrbanSound8K/ByClass/air_conditioner/13230-0-0-17.wav'), (0.9625, './UrbanSound8K/ByClass/air_conditioner/74677-0-0-38.wav'), (0.9275, './UrbanSound8K/ByClass/air_conditioner/30204-0-0-6.wav'), (0.955, './UrbanSound8K/ByClass/air_conditioner/74507-0-0-28.wav'), (0.94, './UrbanSound8K/ByClass/air_conditioner/74507-0-0-14.wav'), (0.955, './UrbanSound8K/ByClass/air_conditioner/54383-0-0-9.wav'), (0.97, './UrbanSound8K/ByClass/air_conditioner/56385-0-0-1.wav'), (0.92, './UrbanSound8K/ByClass/air_conditioner/13230-0-0-16.wav')]\n" ], [ "import os\n\nfor f in fs:\n# os.system('cp %s ./hownoisy/data/ByClass/air_conditioner' % (f[1]))", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbf6d062777e2a7e85f700a61892523a236dc10e
51,914
ipynb
Jupyter Notebook
04-Data-Cleaning/01-Missing-Values.ipynb
meghaks123/Business-Analytics
e66636247187d7516542bff0581e19900f7b9bd5
[ "MIT" ]
34
2019-03-11T21:59:08.000Z
2022-03-31T09:46:02.000Z
04-Data-Cleaning/01-Missing-Values.ipynb
meghaks123/Business-Analytics
e66636247187d7516542bff0581e19900f7b9bd5
[ "MIT" ]
null
null
null
04-Data-Cleaning/01-Missing-Values.ipynb
meghaks123/Business-Analytics
e66636247187d7516542bff0581e19900f7b9bd5
[ "MIT" ]
37
2019-08-09T12:33:41.000Z
2022-03-22T00:07:14.000Z
30.77297
993
0.412394
[ [ [ "# Handling Missing Data", "_____no_output_____" ], [ "The difference between data found in many tutorials and data in the real world is that real-world data is rarely clean and homogeneous.\nIn particular, many interesting datasets will have some amount of data missing.\nTo make matters even more complicated, different data sources may indicate missing data in different ways.\n\nIn this section, we will discuss some general considerations for missing data, discuss how Pandas chooses to represent it, and demonstrate some built-in Pandas tools for handling missing data in Python.\nWe'll refer to missing data in general as the following values: \n* *null* \n* *NaN*\n* *NA*", "_____no_output_____" ], [ "## Trade-Offs in Missing Data Conventions\n\nThere are a number of schemes that have been developed to indicate the presence of missing data in a table or DataFrame.\nGenerally, they revolve around one of two strategies: using a *mask* that globally indicates missing values, or choosing a *sentinel value* that indicates a missing entry.\n\nIn the masking approach, the mask might be an entirely separate Boolean array, or it may involve appropriation of one bit in the data representation to locally indicate the null status of a value.\n\nIn the sentinel approach, the sentinel value could be some data-specific convention, such as indicating a missing integer value with -9999 or some rare bit pattern, or it could be a more global convention, such as indicating a missing floating-point value with NaN (Not a Number), a special value which is part of the IEEE floating-point specification.\n\nNone of these approaches is without trade-offs: use of a separate mask array requires allocation of an additional Boolean array, which adds overhead in both storage and computation. A sentinel value reduces the range of valid values that can be represented, and may require extra (often non-optimized) logic in CPU and GPU arithmetic. Common special values like NaN are not available for all data types.\n\nAs in most cases where no universally optimal choice exists, different languages and systems use different conventions.\nFor example, the R language uses reserved bit patterns within each data type as sentinel values indicating missing data, while the SciDB system uses an extra byte attached to every cell which indicates a NA state.", "_____no_output_____" ], [ "## Missing Data in Pandas\n\nThe way in which Pandas handles missing values is constrained by its reliance on the NumPy package, which does not have a built-in notion of NA values for non-floating-point data types.\n\nPandas could have followed R's lead in specifying bit patterns for each individual data type to indicate nullness, but this approach turns out to be rather unwieldy.\nWhile R contains four basic data types, NumPy supports *far* more than this: for example, while R has a single integer type, NumPy supports *fourteen* basic integer types once you account for available precisions, signedness, and endianness of the encoding.\nReserving a specific bit pattern in all available NumPy types would lead to an unwieldy amount of overhead in special-casing various operations for various types, likely even requiring a new fork of the NumPy package. 
Further, for the smaller data types (such as 8-bit integers), sacrificing a bit to use as a mask will significantly reduce the range of values it can represent.\n\nNumPy does have support for masked arrays – that is, arrays that have a separate Boolean mask array attached for marking data as \"good\" or \"bad.\"\nPandas could have derived from this, but the overhead in both storage, computation, and code maintenance makes that an unattractive choice.\n\nWith these constraints in mind, Pandas chose to use sentinels for missing data, and further chose to use two already-existing Python null values: the special floating-point ``NaN`` value, and the Python ``None`` object.\nThis choice has some side effects, as we will see, but in practice ends up being a good compromise in most cases of interest.", "_____no_output_____" ], [ "### ``None``: Pythonic missing data\n\nThe first sentinel value used by Pandas is ``None``, a Python singleton object that is often used for missing data in Python code.\nBecause it is a Python object, ``None`` cannot be used in any arbitrary NumPy/Pandas array, but only in arrays with data type ``'object'`` (i.e., arrays of Python objects):", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd", "_____no_output_____" ], [ "vals1 = np.array([1, None, 3, 4])\nvals1", "_____no_output_____" ] ], [ [ "This ``dtype=object`` means that the best common type representation NumPy could infer for the contents of the array is that they are Python objects.\nWhile this kind of object array is useful for some purposes, any operations on the data will be done at the Python level, with much more overhead than the typically fast operations seen for arrays with native types:", "_____no_output_____" ] ], [ [ "for dtype in ['object', 'int']:\n print(\"dtype =\", dtype)\n %timeit np.arange(1E6, dtype=dtype).sum()\n print()", "dtype = object\n10 loops, best of 3: 67.3 ms per loop\n\ndtype = int\n100 loops, best of 3: 2.05 ms per loop\n\n" ] ], [ [ "The use of Python objects in an array also means that if you perform aggregations like ``sum()`` or ``min()`` across an array with a ``None`` value, you will generally get an error:", "_____no_output_____" ] ], [ [ "vals1.sum()", "_____no_output_____" ] ], [ [ "This reflects the fact that addition between an integer and ``None`` is undefined.", "_____no_output_____" ], [ "### ``NaN``: Missing numerical data\n\nThe other missing data representation, ``NaN`` (acronym for *Not a Number*), is different; it is a special floating-point value recognized by all systems that use the standard IEEE floating-point representation:", "_____no_output_____" ] ], [ [ "vals2 = np.array([1, np.nan, 3, 4]) \nvals2.dtype", "_____no_output_____" ] ], [ [ "Notice that NumPy chose a native floating-point type for this array: this means that unlike the object array from before, this array supports fast operations pushed into compiled code.\nYou should be aware that ``NaN`` is a bit like a data virus–it infects any other object it touches.\nRegardless of the operation, the result of arithmetic with ``NaN`` will be another ``NaN``:", "_____no_output_____" ] ], [ [ "1 + np.nan", "_____no_output_____" ], [ "0 * np.nan", "_____no_output_____" ] ], [ [ "Note that this means that aggregates over the values are well defined (i.e., they don't result in an error) but not always useful:", "_____no_output_____" ] ], [ [ "vals2.sum(), vals2.min(), vals2.max()", "_____no_output_____" ] ], [ [ "NumPy does provide some special aggregations that will ignore these 
missing values:", "_____no_output_____" ] ], [ [ "np.nansum(vals2), np.nanmin(vals2), np.nanmax(vals2)", "_____no_output_____" ] ], [ [ "Keep in mind that ``NaN`` is specifically a floating-point value; there is no equivalent NaN value for integers, strings, or other types.", "_____no_output_____" ], [ "### NaN and None in Pandas\n\n``NaN`` and ``None`` both have their place, and Pandas is built to handle the two of them nearly interchangeably, converting between them where appropriate:", "_____no_output_____" ] ], [ [ "pd.Series([1, np.nan, 2, None])", "_____no_output_____" ] ], [ [ "For types that don't have an available sentinel value, Pandas automatically type-casts when NA values are present.\nFor example, if we set a value in an integer array to ``np.nan``, it will automatically be upcast to a floating-point type to accommodate the NA:", "_____no_output_____" ] ], [ [ "x = pd.Series(range(2), dtype=int)\nx", "_____no_output_____" ], [ "x[0] = None\nx", "_____no_output_____" ] ], [ [ "Notice that in addition to casting the integer array to floating point, Pandas automatically converts the ``None`` to a ``NaN`` value.\n(Be aware that there is a proposal to add a native integer NA to Pandas in the future; as of this writing, it has not been included).\n\nWhile this type of magic may feel a bit hackish compared to the more unified approach to NA values in domain-specific languages like R, the Pandas sentinel/casting approach works quite well in practice and in my experience only rarely causes issues.\n\nThe following table lists the upcasting conventions in Pandas when NA values are introduced:\n\n|Typeclass | Conversion When Storing NAs | NA Sentinel Value |\n|--------------|-----------------------------|------------------------|\n| ``floating`` | No change | ``np.nan`` |\n| ``object`` | No change | ``None`` or ``np.nan`` |\n| ``integer`` | Cast to ``float64`` | ``np.nan`` |\n| ``boolean`` | Cast to ``object`` | ``None`` or ``np.nan`` |\n\nKeep in mind that in Pandas, string data is always stored with an ``object`` dtype.", "_____no_output_____" ], [ "## Operating on Null Values\n\nAs we have seen, Pandas treats ``None`` and ``NaN`` as essentially interchangeable for indicating missing or null values.\nTo facilitate this convention, there are several useful methods for detecting, removing, and replacing null values in Pandas data structures.\nThey are:\n\n- ``isnull()``: Generate a boolean mask indicating missing values\n- ``notnull()``: Opposite of ``isnull()``\n- ``dropna()``: Return a filtered version of the data\n- ``fillna()``: Return a copy of the data with missing values filled or imputed\n\nWe will conclude this section with a brief exploration and demonstration of these routines.", "_____no_output_____" ], [ "### Detecting null values\nPandas data structures have two useful methods for detecting null data: ``isnull()`` and ``notnull()``.\nEither one will return a Boolean mask over the data. 
For example:", "_____no_output_____" ] ], [ [ "data = pd.Series([1, np.nan, 'hello', None])", "_____no_output_____" ], [ "data.isnull()", "_____no_output_____" ] ], [ [ "Boolean masks can be used directly as a ``Series`` or ``DataFrame`` index:", "_____no_output_____" ] ], [ [ "data[data.notnull()]", "_____no_output_____" ] ], [ [ "The ``isnull()`` and ``notnull()`` methods produce similar Boolean results for ``DataFrame``s.", "_____no_output_____" ], [ "### Dropping null values\n\nIn addition to the masking used before, there are the convenience methods, ``dropna()``\n(which removes NA values) and ``fillna()`` (which fills in NA values). For a ``Series``,\nthe result is straightforward:", "_____no_output_____" ] ], [ [ "data.dropna()", "_____no_output_____" ] ], [ [ "Notice that the default behavior of `dropna()` is to leave the original DataFrame/Series untouched. In order to drop NAs from the original source one could use the argument `inplace=True`. This has to be used with caution.", "_____no_output_____" ] ], [ [ "data", "_____no_output_____" ], [ "data.dropna(inplace=True)", "_____no_output_____" ], [ "data", "_____no_output_____" ] ], [ [ "For a ``DataFrame``, there are more options.\nConsider the following ``DataFrame``:", "_____no_output_____" ] ], [ [ "df = pd.DataFrame([[1, np.nan, 2],\n [2, 3, 5],\n [np.nan, 4, 6]])\ndf", "_____no_output_____" ] ], [ [ "We cannot drop single values from a ``DataFrame``; we can only drop full rows or full columns.\nDepending on the application, you might want one or the other, so ``dropna()`` gives a number of options for a ``DataFrame``.\n\nBy default, ``dropna()`` will drop all rows in which *any* null value is present:", "_____no_output_____" ] ], [ [ "df.dropna()", "_____no_output_____" ] ], [ [ "Alternatively, you can drop NA values along a different axis; ``axis=1`` drops all columns containing a null value:", "_____no_output_____" ] ], [ [ "df.dropna(axis='columns')", "_____no_output_____" ] ], [ [ "But this drops some good data as well; you might rather be interested in dropping rows or columns with *all* NA values, or a majority of NA values.\nThis can be specified through the ``how`` or ``thresh`` parameters, which allow fine control of the number of nulls to allow through.\n\nThe default is ``how='any'``, such that any row or column (depending on the ``axis`` keyword) containing a null value will be dropped.\nYou can also specify ``how='all'``, which will only drop rows/columns that are *all* null values:", "_____no_output_____" ] ], [ [ "df[3] = np.nan\ndf", "_____no_output_____" ], [ "df.dropna(axis='columns', how='all')", "_____no_output_____" ] ], [ [ "For finer-grained control, the ``thresh`` parameter lets you specify a minimum number of non-null values for the row/column to be kept:", "_____no_output_____" ] ], [ [ "df.dropna(axis='rows', thresh=3)", "_____no_output_____" ] ], [ [ "Here the first and last row have been dropped, because they contain only two non-null values.", "_____no_output_____" ], [ "### Filling null values\n\nSometimes rather than dropping NA values, you'd rather replace them with a valid value.\nThis value might be a single number like zero, or it might be some sort of imputation or interpolation from the good values.\nYou could do this in-place using the ``isnull()`` method as a mask, but because it is such a common operation Pandas provides the ``fillna()`` method, which returns a copy of the array with the null values replaced.\n\nConsider the following ``Series``:", "_____no_output_____" ] ], [ [ 
"data = pd.Series([1, np.nan, 2, None, 3], index=list('abcde'))\ndata", "_____no_output_____" ] ], [ [ "We can fill NA entries with a single value, such as zero:", "_____no_output_____" ] ], [ [ "data.fillna(0)", "_____no_output_____" ] ], [ [ "We can specify a forward-fill to propagate the previous value forward:", "_____no_output_____" ] ], [ [ "# forward-fill\ndata.fillna(method='ffill')", "_____no_output_____" ] ], [ [ "Or we can specify a back-fill to propagate the next values backward:", "_____no_output_____" ] ], [ [ "# back-fill\ndata.fillna(method='bfill')", "_____no_output_____" ] ], [ [ "For ``DataFrame``s, the options are similar, but we can also specify an ``axis`` along which the fills take place:", "_____no_output_____" ] ], [ [ "df", "_____no_output_____" ], [ "df.fillna(method='ffill', axis=1)", "_____no_output_____" ] ], [ [ "Notice that if a previous value is not available during a forward fill, the NA value remains.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
cbf6d9ea4685728a23ba47b3c428d56821217676
4,037
ipynb
Jupyter Notebook
easy-scraping-tutorial/notebook/1-1-urllib.ipynb
ilray88/tutorials
0182234002233179ab5c9a0d4ba230e5aa5f170e
[ "MIT" ]
null
null
null
easy-scraping-tutorial/notebook/1-1-urllib.ipynb
ilray88/tutorials
0182234002233179ab5c9a0d4ba230e5aa5f170e
[ "MIT" ]
null
null
null
easy-scraping-tutorial/notebook/1-1-urllib.ipynb
ilray88/tutorials
0182234002233179ab5c9a0d4ba230e5aa5f170e
[ "MIT" ]
null
null
null
24.615854
211
0.528115
[ [ [ "# 用 Python 登录网页\n好了, 对网页结构和 HTML 有了一些基本认识以后, 我们就能用 Python 来爬取这个网页的一些基本信息. 首先要做的, 是使用 Python 来登录这个网页, 并打印出这个网页 HTML 的 source code. 注意, 因为网页中存在中文, 为了正常显示中文, read() 完以后, 我们要对读出来的文字进行转换, decode() 成可以正常显示中文的形式.\nprint 出来就是下面这样啦. 这就证明了我们能够成功读取这个网页的所有信息了. 但我们还没有对网页的信息进行汇总和利用. 我们发现, 想要提取一些形式的信息, 合理的利用 tag 的名字十分重要.", "_____no_output_____" ] ], [ [ "from urllib.request import urlopen\n\n# if has Chinese, apply decode()\nhtml = urlopen(\"https://mofanpy.com/static/scraping/basic-structure.html\").read().decode('utf-8')\nprint(html)\n", "<!DOCTYPE html>\n<html lang=\"cn\">\n<head>\n\t<meta charset=\"UTF-8\">\n\t<title>Scraping tutorial 1 | 莫烦Python</title>\n\t<link rel=\"icon\" href=\"{{ static_url }}/js/description/tab_icon.png\">\n</head>\n<body>\n\t<h1>爬虫测试1</h1>\n\t<p>\n\t\t这是一个在 <a href=\"/\">莫烦Python</a>\n\t\t<a href=\"/tutorials/data-manipulation/scraping/\">爬虫教程</a> 中的简单测试.\n\t</p>\n\n</body>\n</html>\n" ] ], [ [ "# 匹配网页内容\n所以这里我们使用 Python 的正则表达式 RegEx 进行匹配文字, 筛选信息的工作. 我有一个很不错的正则表达式的教程, 如果是初级的网页匹配, 我们使用正则完全就可以了, 高级一点或者比较繁琐的匹配, 我还是推荐使用 BeautifulSoup. 不急不急, 我知道你想偷懒, 我之后马上就会教 beautiful soup 了. 但是现在我们还是使用正则来做几个简单的例子, 让你熟悉一下套路.\n\n如果我们想用代码找到这个网页的 title, 我们就能这样写. 选好要使用的 tag 名称 <title>. 使用正则匹配.", "_____no_output_____" ] ], [ [ "import re\nres = re.findall(r\"<title>(.+?)</title>\", html)\nprint(\"\\nPage title is: \", res[0])", "\nPage title is: Scraping tutorial 1 | 莫烦Python\n" ] ], [ [ "如果想要找到中间的那个段落 <p>, 我们使用下面方法, 因为这个段落在 HTML 中还夹杂着 tab, new line, 所以我们给一个 flags=re.DOTALL 来对这些 tab, new line 不敏感.", "_____no_output_____" ] ], [ [ "res = re.findall(r\"<p>(.*?)</p>\", html, flags=re.DOTALL) # re.DOTALL if multi line\nprint(\"\\nPage paragraph is: \", res[0])", "\nPage paragraph is: \n\t\t这是一个在 <a href=\"/\">莫烦Python</a>\n\t\t<a href=\"/tutorials/data-manipulation/scraping/\">爬虫教程</a> 中的简单测试.\n\t\n" ] ], [ [ "最后一个练习是找一找所有的链接, 这个比较有用, 有时候你想找到网页里的链接, 然后下载一些内容到电脑里, 就靠这样的途径了.", "_____no_output_____" ] ], [ [ "res = re.findall(r'href=\"(.*?)\"', html)\nprint(\"\\nAll links: \", res)", "\nAll links: ['{{ static_url }}/js/description/tab_icon.png', '/', '/tutorials/data-manipulation/scraping/']\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cbf6da997d3f80b02bb68362f0c7c2672a804862
236,699
ipynb
Jupyter Notebook
Model backlog/Train/4-tweet-train-distilbert-lower-softmax.ipynb
dimitreOliveira/Tweet-Sentiment-Extraction
0a775abe9a92c4bc2db957519c523be7655df8d8
[ "MIT" ]
11
2020-06-17T07:30:20.000Z
2022-03-25T16:56:01.000Z
Model backlog/Train/4-tweet-train-distilbert-lower-softmax.ipynb
dimitreOliveira/Tweet-Sentiment-Extraction
0a775abe9a92c4bc2db957519c523be7655df8d8
[ "MIT" ]
null
null
null
Model backlog/Train/4-tweet-train-distilbert-lower-softmax.ipynb
dimitreOliveira/Tweet-Sentiment-Extraction
0a775abe9a92c4bc2db957519c523be7655df8d8
[ "MIT" ]
null
null
null
225.21313
194,452
0.879657
[ [ [ "## Dependencies", "_____no_output_____" ] ], [ [ "import os, random, warnings\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nfrom matplotlib import pyplot as plt\nfrom sklearn.model_selection import train_test_split\nfrom transformers import TFDistilBertModel\nfrom tokenizers import BertWordPieceTokenizer\nimport tensorflow as tf\nfrom tensorflow.keras.models import Model\nfrom tensorflow.keras import optimizers, metrics, losses\nfrom tensorflow.keras.callbacks import EarlyStopping\nfrom tensorflow.keras.layers import Dense, Input, Dropout, GlobalAveragePooling1D, Concatenate\n\ndef seed_everything(seed=0):\n np.random.seed(seed)\n tf.random.set_seed(seed)\n os.environ['PYTHONHASHSEED'] = str(seed)\n os.environ['TF_DETERMINISTIC_OPS'] = '1'\n\nSEED = 0\nseed_everything(SEED)\nwarnings.filterwarnings(\"ignore\")", "_____no_output_____" ], [ "# Auxiliary functions\ndef plot_metrics(history, metric_list):\n fig, axes = plt.subplots(len(metric_list), 1, sharex='col', figsize=(20, len(metric_list) * 5))\n axes = axes.flatten()\n \n for index, metric in enumerate(metric_list):\n axes[index].plot(history[metric], label='Train %s' % metric)\n axes[index].plot(history['val_%s' % metric], label='Validation %s' % metric)\n axes[index].legend(loc='best', fontsize=16)\n axes[index].set_title(metric)\n\n plt.xlabel('Epochs', fontsize=16)\n sns.despine()\n plt.show()\n\ndef jaccard(str1, str2): \n a = set(str1.lower().split()) \n b = set(str2.lower().split())\n c = a.intersection(b)\n return float(len(c)) / (len(a) + len(b) - len(c))\n\ndef evaluate_model(train_set, validation_set):\n train_set['jaccard'] = train_set.apply(lambda x: jaccard(x['selected_text'], x['prediction']), axis=1)\n validation_set['jaccard'] = validation_set.apply(lambda x: jaccard(x['selected_text'], x['prediction']), axis=1)\n\n print('Train set Jaccard: %.3f' % train_set['jaccard'].mean())\n print('Validation set Jaccard: %.3f' % validation_set['jaccard'].mean())\n\n print('\\nMetric by sentiment')\n for sentiment in train_df['sentiment'].unique():\n print('\\nSentiment == %s' % sentiment)\n print('Train set Jaccard: %.3f' % train_set[train_set['sentiment'] == sentiment]['jaccard'].mean())\n print('Validation set Jaccard: %.3f' % validation_set[validation_set['sentiment'] == sentiment]['jaccard'].mean())\n \n# Transformer inputs\ndef get_start_end(text, selected_text, offsets, max_seq_len):\n # find the intersection between text and selected text\n idx_start, idx_end = None, None\n for index in (i for i, c in enumerate(text) if c == selected_text[0]):\n if text[index:index + len(selected_text)] == selected_text:\n idx_start = index\n idx_end = index + len(selected_text)\n break\n \n intersection = [0] * len(text)\n if idx_start != None and idx_end != None:\n for char_idx in range(idx_start, idx_end):\n intersection[char_idx] = 1\n \n \n targets = np.zeros(len(offsets))\n for i, (o1, o2) in enumerate(offsets):\n if sum(intersection[o1:o2]) > 0:\n targets[i] = 1\n \n # OHE targets\n target_start = np.zeros(len(offsets))\n target_end = np.zeros(len(offsets))\n targets_nonzero = np.nonzero(targets)[0]\n if len(targets_nonzero) > 0: \n target_start[targets_nonzero[0]] = 1\n target_end[targets_nonzero[-1]] = 1\n \n return target_start, target_end\n\ndef preprocess(text, selected_text, context, tokenizer, max_seq_len):\n context_encoded = tokenizer.encode(context)\n context_encoded = context_encoded.ids[1:-1]\n \n encoded = tokenizer.encode(text)\n encoded.pad(max_seq_len)\n encoded.truncate(max_seq_len)\n 
input_ids = encoded.ids\n offsets = encoded.offsets\n attention_mask = encoded.attention_mask\n token_type_ids = ([0] * 3) + ([1] * (max_seq_len - 3))\n \n input_ids = [101] + context_encoded + [102] + input_ids\n # update input ids and attentions masks size\n input_ids = input_ids[:-3]\n attention_mask = [1] * 3 + attention_mask[:-3]\n \n target_start, target_end = get_start_end(text, selected_text, offsets, max_seq_len)\n \n x = [np.asarray(input_ids, dtype=np.int32), \n np.asarray(attention_mask, dtype=np.int32), \n np.asarray(token_type_ids, dtype=np.int32)]\n \n y = [np.asarray(target_start, dtype=np.int32), \n np.asarray(target_end, dtype=np.int32)]\n \n \n return (x, y)\n\ndef get_data(df, tokenizer, MAX_LEN):\n x_input_ids = []\n x_attention_masks = []\n x_token_type_ids = []\n y_start = []\n y_end = []\n for row in df.itertuples(): \n x, y = preprocess(getattr(row, \"text\"), getattr(row, \"selected_text\"), getattr(row, \"sentiment\"), tokenizer, MAX_LEN)\n x_input_ids.append(x[0])\n x_attention_masks.append(x[1])\n x_token_type_ids.append(x[2])\n\n y_start.append(y[0])\n y_end.append(y[1])\n\n x_train = [np.asarray(x_input_ids), np.asarray(x_attention_masks), np.asarray(x_token_type_ids)]\n y_train = [np.asarray(y_start), np.asarray(y_end)]\n return x_train, y_train\n\ndef decode(pred_start, pred_end, text, tokenizer):\n offset = tokenizer.encode(text).offsets\n \n if pred_end >= len(offset):\n pred_end = len(offset)-1\n \n decoded_text = \"\"\n for i in range(pred_start, pred_end+1):\n decoded_text += text[offset[i][0]:offset[i][1]]\n if (i+1) < len(offset) and offset[i][1] < offset[i+1][0]:\n decoded_text += \" \"\n return decoded_text", "_____no_output_____" ] ], [ [ "# Load data", "_____no_output_____" ] ], [ [ "train_df = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/train.csv')\n\nprint('Train samples: %s' % len(train_df))\ndisplay(train_df.head())", "Train samples: 27486\n" ] ], [ [ "# Pre process", "_____no_output_____" ] ], [ [ "train_df['text'].fillna('', inplace=True)\ntrain_df['selected_text'].fillna('', inplace=True)\ntrain_df[\"text\"] = train_df[\"text\"].apply(lambda x: x.lower())\ntrain_df[\"selected_text\"] = train_df[\"selected_text\"].apply(lambda x: x.lower())\n\ntrain_df['text'] = train_df['text'].astype(str)\ntrain_df['selected_text'] = train_df['selected_text'].astype(str)", "_____no_output_____" ] ], [ [ "# Model parameters", "_____no_output_____" ] ], [ [ "MAX_LEN = 128\nBATCH_SIZE = 64\nEPOCHS = 10\nLEARNING_RATE = 1e-5\nES_PATIENCE = 2\n\nbase_path = '/kaggle/input/qa-transformers/distilbert/'\nbase_model_path = base_path + 'distilbert-base-uncased-distilled-squad-tf_model.h5'\nconfig_path = base_path + 'distilbert-base-uncased-distilled-squad-config.json'\nvocab_path = base_path + 'bert-large-uncased-vocab.txt'\nmodel_path = 'model.h5'", "_____no_output_____" ] ], [ [ "# Tokenizer", "_____no_output_____" ] ], [ [ "tokenizer = BertWordPieceTokenizer(vocab_path, lowercase=True)\ntokenizer.save('./')", "_____no_output_____" ] ], [ [ "# Train/validation split", "_____no_output_____" ] ], [ [ "train, validation = train_test_split(train_df, test_size=0.2, random_state=SEED)\n\nx_train, y_train = get_data(train, tokenizer, MAX_LEN)\nx_valid, y_valid = get_data(validation, tokenizer, MAX_LEN)\n\nprint('Train set size: %s' % len(x_train[0]))\nprint('Validation set size: %s' % len(x_valid[0]))", "Train set size: 21988\nValidation set size: 5498\n" ] ], [ [ "# Model", "_____no_output_____" ] ], [ [ "def model_fn():\n input_ids = Input(shape=(MAX_LEN,), 
dtype=tf.int32, name='input_ids')\n attention_mask = Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')\n token_type_ids = Input(shape=(MAX_LEN,), dtype=tf.int32, name='token_type_ids')\n \n base_model = TFDistilBertModel.from_pretrained(base_model_path, config=config_path, name=\"base_model\")\n sequence_output = base_model({'input_ids': input_ids, 'attention_mask': attention_mask, 'token_type_ids': token_type_ids})\n last_state = sequence_output[0]\n \n x = GlobalAveragePooling1D()(last_state)\n \n y_start = Dense(MAX_LEN, activation='softmax', name='y_start')(x)\n y_end = Dense(MAX_LEN, activation='softmax', name='y_end')(x)\n \n model = Model(inputs=[input_ids, attention_mask, token_type_ids], outputs=[y_start, y_end])\n model.compile(optimizers.Adam(lr=LEARNING_RATE), \n loss=losses.CategoricalCrossentropy(), \n metrics=[metrics.CategoricalAccuracy()])\n \n return model\n\nmodel = model_fn()\nmodel.summary()", "Model: \"model\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\nattention_mask (InputLayer) [(None, 128)] 0 \n__________________________________________________________________________________________________\ninput_ids (InputLayer) [(None, 128)] 0 \n__________________________________________________________________________________________________\ntoken_type_ids (InputLayer) [(None, 128)] 0 \n__________________________________________________________________________________________________\nbase_model (TFDistilBertModel) ((None, 128, 768),) 66362880 attention_mask[0][0] \n input_ids[0][0] \n token_type_ids[0][0] \n__________________________________________________________________________________________________\nglobal_average_pooling1d (Globa (None, 768) 0 base_model[0][0] \n__________________________________________________________________________________________________\ny_start (Dense) (None, 128) 98432 global_average_pooling1d[0][0] \n__________________________________________________________________________________________________\ny_end (Dense) (None, 128) 98432 global_average_pooling1d[0][0] \n==================================================================================================\nTotal params: 66,559,744\nTrainable params: 66,559,744\nNon-trainable params: 0\n__________________________________________________________________________________________________\n" ] ], [ [ "# Train", "_____no_output_____" ] ], [ [ "es = EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, \n restore_best_weights=True, verbose=1)\n\nhistory = model.fit(x_train, y_train,\n validation_data=(x_valid, y_valid),\n callbacks=[es],\n epochs=EPOCHS, \n verbose=1).history\n\nmodel.save_weights(model_path)", "Train on 21988 samples, validate on 5498 samples\nEpoch 1/10\n21988/21988 [==============================] - 191s 9ms/sample - loss: 5.1535 - y_start_loss: 1.9052 - y_end_loss: 3.2469 - y_start_categorical_accuracy: 0.5726 - y_end_categorical_accuracy: 0.0992 - val_loss: 4.1617 - val_y_start_loss: 1.5961 - val_y_end_loss: 2.5661 - val_y_start_categorical_accuracy: 0.5842 - val_y_end_categorical_accuracy: 0.2041\nEpoch 2/10\n21988/21988 [==============================] - 180s 8ms/sample - loss: 3.7698 - y_start_loss: 1.4792 - y_end_loss: 2.2897 - y_start_categorical_accuracy: 0.5860 - y_end_categorical_accuracy: 0.2772 - val_loss: 3.3757 - val_y_start_loss: 
1.3588 - val_y_end_loss: 2.0169 - val_y_start_categorical_accuracy: 0.5960 - val_y_end_categorical_accuracy: 0.3561\nEpoch 3/10\n21988/21988 [==============================] - 180s 8ms/sample - loss: 3.1644 - y_start_loss: 1.2970 - y_end_loss: 1.8709 - y_start_categorical_accuracy: 0.6065 - y_end_categorical_accuracy: 0.4098 - val_loss: 3.0652 - val_y_start_loss: 1.2803 - val_y_end_loss: 1.7848 - val_y_start_categorical_accuracy: 0.6091 - val_y_end_categorical_accuracy: 0.4625\nEpoch 4/10\n21988/21988 [==============================] - 180s 8ms/sample - loss: 2.8156 - y_start_loss: 1.1914 - y_end_loss: 1.6238 - y_start_categorical_accuracy: 0.6281 - y_end_categorical_accuracy: 0.4933 - val_loss: 2.8718 - val_y_start_loss: 1.2456 - val_y_end_loss: 1.6262 - val_y_start_categorical_accuracy: 0.6215 - val_y_end_categorical_accuracy: 0.5193\nEpoch 5/10\n21988/21988 [==============================] - 181s 8ms/sample - loss: 2.5601 - y_start_loss: 1.1162 - y_end_loss: 1.4427 - y_start_categorical_accuracy: 0.6462 - y_end_categorical_accuracy: 0.5560 - val_loss: 2.7840 - val_y_start_loss: 1.2276 - val_y_end_loss: 1.5563 - val_y_start_categorical_accuracy: 0.6195 - val_y_end_categorical_accuracy: 0.5427\nEpoch 6/10\n21988/21988 [==============================] - 182s 8ms/sample - loss: 2.3455 - y_start_loss: 1.0480 - y_end_loss: 1.2972 - y_start_categorical_accuracy: 0.6654 - y_end_categorical_accuracy: 0.6021 - val_loss: 2.7500 - val_y_start_loss: 1.2438 - val_y_end_loss: 1.5062 - val_y_start_categorical_accuracy: 0.6210 - val_y_end_categorical_accuracy: 0.5668\nEpoch 7/10\n21988/21988 [==============================] - 181s 8ms/sample - loss: 2.1574 - y_start_loss: 0.9817 - y_end_loss: 1.1756 - y_start_categorical_accuracy: 0.6848 - y_end_categorical_accuracy: 0.6363 - val_loss: 2.6972 - val_y_start_loss: 1.2339 - val_y_end_loss: 1.4632 - val_y_start_categorical_accuracy: 0.6319 - val_y_end_categorical_accuracy: 0.5820\nEpoch 8/10\n21988/21988 [==============================] - 181s 8ms/sample - loss: 1.9727 - y_start_loss: 0.9186 - y_end_loss: 1.0547 - y_start_categorical_accuracy: 0.7043 - y_end_categorical_accuracy: 0.6795 - val_loss: 2.6834 - val_y_start_loss: 1.2383 - val_y_end_loss: 1.4451 - val_y_start_categorical_accuracy: 0.6280 - val_y_end_categorical_accuracy: 0.6002\nEpoch 9/10\n21988/21988 [==============================] - 180s 8ms/sample - loss: 1.7998 - y_start_loss: 0.8499 - y_end_loss: 0.9511 - y_start_categorical_accuracy: 0.7273 - y_end_categorical_accuracy: 0.7087 - val_loss: 2.7384 - val_y_start_loss: 1.2677 - val_y_end_loss: 1.4710 - val_y_start_categorical_accuracy: 0.6290 - val_y_end_categorical_accuracy: 0.5964\nEpoch 10/10\n21984/21988 [============================>.] 
- ETA: 0s - loss: 1.6432 - y_start_loss: 0.7867 - y_end_loss: 0.8565 - y_start_categorical_accuracy: 0.7461 - y_end_categorical_accuracy: 0.7396Restoring model weights from the end of the best epoch.\n21988/21988 [==============================] - 181s 8ms/sample - loss: 1.6435 - y_start_loss: 0.7873 - y_end_loss: 0.8586 - y_start_categorical_accuracy: 0.7460 - y_end_categorical_accuracy: 0.7395 - val_loss: 2.7478 - val_y_start_loss: 1.2802 - val_y_end_loss: 1.4675 - val_y_start_categorical_accuracy: 0.6328 - val_y_end_categorical_accuracy: 0.6069\nEpoch 00010: early stopping\n" ] ], [ [ "# Model loss graph", "_____no_output_____" ] ], [ [ "sns.set(style=\"whitegrid\")\nplot_metrics(history, metric_list=['loss', 'y_start_loss', 'y_end_loss', 'y_start_categorical_accuracy', 'y_end_categorical_accuracy'])", "_____no_output_____" ] ], [ [ "# Model evaluation", "_____no_output_____" ] ], [ [ "train_preds = model.predict(x_train)\nvalid_preds = model.predict(x_valid)\n\ntrain['start'] = train_preds[0].argmax(axis=-1)\ntrain['end'] = train_preds[1].argmax(axis=-1)\ntrain['prediction'] = train.apply(lambda x: decode(x['start'], x['end'], x['text'], tokenizer), axis=1)\ntrain[\"prediction\"] = train[\"prediction\"].apply(lambda x: '.' if x.strip() == '' else x)\n\nvalidation['start'] = valid_preds[0].argmax(axis=-1)\nvalidation['end'] = valid_preds[1].argmax(axis=-1)\nvalidation['prediction'] = validation.apply(lambda x: decode(x['start'], x['end'], x['text'], tokenizer), axis=1)\nvalidation[\"prediction\"] = validation[\"prediction\"].apply(lambda x: '.' if x.strip() == '' else x)\n\nevaluate_model(train, validation)", "Train set Jaccard: 0.771\nValidation set Jaccard: 0.643\n\nMetric by sentiment\n\nSentiment == neutral\nTrain set Jaccard: 0.973\nValidation set Jaccard: 0.963\n\nSentiment == positive\nTrain set Jaccard: 0.631\nValidation set Jaccard: 0.429\n\nSentiment == negative\nTrain set Jaccard: 0.640\nValidation set Jaccard: 0.406\n" ] ], [ [ "# Visualize predictions", "_____no_output_____" ] ], [ [ "print('Train set')\ndisplay(train.head(10))\n\nprint('Validation set')\ndisplay(validation.head(10))", "Train set\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cbf6df35cc55476d456ab3ee8a73a390a0c331ab
12,459
ipynb
Jupyter Notebook
notebooks/5_Model_Selection_and_Evaluation.ipynb
KIT-IPF/hyperspectral-regression
5d77cbb31b3a0627d6e1365679a34d49c4c4fe15
[ "BSD-3-Clause" ]
14
2019-10-17T21:55:15.000Z
2022-03-08T08:34:33.000Z
notebooks/5_Model_Selection_and_Evaluation.ipynb
KIT-IPF/hyperspectral-regression
5d77cbb31b3a0627d6e1365679a34d49c4c4fe15
[ "BSD-3-Clause" ]
null
null
null
notebooks/5_Model_Selection_and_Evaluation.ipynb
KIT-IPF/hyperspectral-regression
5d77cbb31b3a0627d6e1365679a34d49c4c4fe15
[ "BSD-3-Clause" ]
4
2020-05-22T06:04:15.000Z
2021-09-21T12:13:07.000Z
28.445205
366
0.542339
[ [ [ "<div class=\"alert alert-block alert-info\">\nSection of the book chapter: <b>5.3 Model Selection, Optimization and Evaluation</b>\n</div>\n\n# 5. Model Selection and Evaluation\n\n**Table of Contents**\n\n* [5.1 Hyperparameter Optimization](#5.1-Hyperparameter-Optimization)\n* [5.2 Model Evaluation](#5.2-Model-Evaluation)\n\n**Learnings:**\n\n- how to optimize machine learning (ML) models with grid search, random search and Bayesian optimization,\n- how to evaluate ML models.\n\n\n\n### Packages", "_____no_output_____" ] ], [ [ "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\n# ignore warnings\nimport warnings\nwarnings.filterwarnings('ignore')\n\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nfrom sklearn.ensemble import RandomForestRegressor\n\nimport utils", "_____no_output_____" ] ], [ [ "### Read in Data\n\n**Dataset:** Felix M. Riese and Sina Keller, \"Hyperspectral benchmark dataset on soil moisture\", Dataset, Zenodo, 2018. [DOI:10.5281/zenodo.1227836](http://doi.org/10.5281/zenodo.1227836) and [GitHub](https://github.com/felixriese/hyperspectral-soilmoisture-dataset)\n\n**Introducing paper:** Felix M. Riese and Sina Keller, “Introducing a Framework of Self-Organizing Maps for Regression of Soil Moisture with Hyperspectral Data,” in IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 2018, pp. 6151-6154. [DOI:10.1109/IGARSS.2018.8517812](https://doi.org/10.1109/IGARSS.2018.8517812)", "_____no_output_____" ] ], [ [ "X_train, X_test, y_train, y_test = utils.get_xy_split()\n\nprint(X_train.shape, X_test.shape, y_train.shape, y_test.shape)", "_____no_output_____" ] ], [ [ "### Fix Random State", "_____no_output_____" ] ], [ [ "np.random.seed(42)", "_____no_output_____" ] ], [ [ "***\n\n## 5.1 Hyperparameter Optimization\n\nContent:\n\n- [5.1.1 Grid Search](#5.1.1-Grid-Search)\n- [5.1.2 Randomized Search](#5.1.2-Randomized-Search)\n- [5.1.3 Bayesian Optimization](#5.1.3-Bayesian-Optimization)\n\n### 5.1.1 Grid Search", "_____no_output_____" ] ], [ [ "# NBVAL_IGNORE_OUTPUT\n\nfrom sklearn.svm import SVR\nfrom sklearn.model_selection import GridSearchCV\n\n# example mode: support vector regressor\nmodel = SVR(kernel=\"rbf\")\n\n# define parameter grid to be tested\nparams = {\n \"C\": np.logspace(-4, 4, 9),\n \"gamma\": np.logspace(-4, 4, 9)}\n\n\n# set up grid search and run it on the data\ngs = GridSearchCV(model, params)\n%timeit gs.fit(X_train, y_train)\nprint(\"R2 score = {0:.2f} %\".format(gs.score(X_test, y_test)*100))", "_____no_output_____" ] ], [ [ "### 5.1.2 Randomized Search", "_____no_output_____" ] ], [ [ "# NBVAL_IGNORE_OUTPUT\n\nfrom sklearn.svm import SVR\nfrom sklearn.model_selection import RandomizedSearchCV\n\n# example mode: support vector regressor\nmodel = SVR(kernel=\"rbf\")\n\n# define parameter grid to be tested\nparams = {\n \"C\": np.logspace(-4, 4, 9),\n \"gamma\": np.logspace(-4, 4, 9)}\n\n# set up grid search and run it on the data\ngsr = RandomizedSearchCV(model, params, n_iter=15, refit=True)\n%timeit gsr.fit(X_train, y_train)\nprint(\"R2 score = {0:.2f} %\".format(gsr.score(X_test, y_test)*100))", "_____no_output_____" ] ], [ [ "### 5.1.3 Bayesian Optimization\n\nImplementation: [github.com/fmfn/BayesianOptimization](https://github.com/fmfn/BayesianOptimization)", "_____no_output_____" ] ], [ [ "# NBVAL_IGNORE_OUTPUT\n\nfrom sklearn.svm import SVR\nfrom bayes_opt import BayesianOptimization\n\n# define 
function to be optimized\ndef opt_func(C, gamma):\n model = SVR(C=C, gamma=gamma)\n return model.fit(X_train, y_train).score(X_test, y_test)\n\n# set bounded region of parameter space\npbounds = {'C': (1e-5, 1e4), 'gamma': (1e-5, 1e4)}\n\n# define optimizer\noptimizer = BayesianOptimization(\n f=opt_func,\n pbounds=pbounds,\n random_state=1)\n\n# optimize\n%time optimizer.maximize(init_points=2, n_iter=15)\nprint(\"R2 score = {0:.2f} %\".format(optimizer.max[\"target\"]*100))", "_____no_output_____" ] ], [ [ "***\n\n## 5.2 Model Evaluation\n\nContent:\n\n- [5.2.1 Generate Exemplary Data](#5.2.1-Generate-Exemplary-Data)\n- [5.2.2 Plot the Data](#5.2.2-Plot-the-Data)\n- [5.2.3 Evaluation Metrics](#5.2.3-Evaluation-Metrics)", "_____no_output_____" ] ], [ [ "import sklearn.metrics as me", "_____no_output_____" ] ], [ [ "### 5.2.1 Generate Exemplary Data", "_____no_output_____" ] ], [ [ "### generate example data\nnp.random.seed(1)\n\n# define x grid\nx_grid = np.linspace(0, 10, 11)\ny_model = x_grid*0.5\n\n# define first dataset without outlier\ny1 = np.array([y + np.random.normal(scale=0.2) for y in y_model])\n\n# define second dataset with outlier\ny2 = np.copy(y1)\ny2[9] = 0.5\n\n# define third dataset with higher variance\ny3 = np.array([y + np.random.normal(scale=1.0) for y in y_model])", "_____no_output_____" ] ], [ [ "### 5.2.2 Plot the Data", "_____no_output_____" ] ], [ [ "# plot example data\nfig, (ax1, ax2, ax3) = plt.subplots(1,3, figsize=(12,4))\nfontsize = 18\ntitleweight = \"bold\"\ntitlepad = 10\n\nscatter_label = \"Data\"\nscatter_alpha = 0.7\nscatter_s = 100\nax1.scatter(x_grid, y1, label=scatter_label, alpha=scatter_alpha, s=scatter_s)\nax1.set_title(\"(a) Low var.\", fontsize=fontsize, fontweight=titleweight, pad=titlepad)\n\nax2.scatter(x_grid, y2, label=scatter_label, alpha=scatter_alpha, s=scatter_s)\nax2.set_title(\"(b) Low var. 
+ outlier\", fontsize=fontsize, fontweight=titleweight, pad=titlepad)\n\nax3.scatter(x_grid, y3, label=scatter_label, alpha=scatter_alpha, s=scatter_s)\nax3.set_title(\"(c) Higher var.\", fontsize=fontsize, fontweight=titleweight, pad=titlepad)\n\nfor i, ax in enumerate([ax1, ax2, ax3]):\n i += 1\n \n # red line\n ax.plot(x_grid, y_model, label=\"Model\", c=\"tab:red\", linestyle=\"dashed\", linewidth=4, alpha=scatter_alpha)\n \n # x-axis cosmetics\n ax.set_xlabel(\"x in a.u.\", fontsize=fontsize)\n for tick in ax.xaxis.get_major_ticks():\n tick.label.set_fontsize(fontsize) \n \n # y-axis cosmetics\n if i != 1:\n ax.set_yticklabels([])\n else:\n ax.set_ylabel(\"y in a.u.\", fontsize=fontsize, rotation=90)\n for tick in ax.yaxis.get_major_ticks():\n tick.label.set_fontsize(fontsize) \n ax.set_xlim(-0.5, 10.5)\n ax.set_ylim(-0.5, 6.5)\n # ax.set_title(\"Example \"+str(i), fontsize=fontsize)\n if i == 2:\n ax.legend(loc=2, fontsize=fontsize*1.0, frameon=True)\n\nplt.tight_layout()\nplt.savefig(\"plots/metrics_plot.pdf\", bbox_inches=\"tight\")", "_____no_output_____" ] ], [ [ "### 5.2.3 Evaluation Metrics", "_____no_output_____" ] ], [ [ "# calculating the metrics\nfor i, y in enumerate([y1, y2, y3]):\n print(\"Example\", i+1)\n print(\"- MAE = {:.2f}\".format(me.mean_absolute_error(y_model, y)))\n print(\"- MSE = {:.2f}\".format(me.mean_squared_error(y_model, y)))\n print(\"- RMSE = {:.2f}\".format(np.sqrt(me.mean_squared_error(y_model, y))))\n print(\"- R2 = {:.2f}%\".format(me.r2_score(y_model, y)*100))\n print(\"-\"*20)", "_____no_output_____" ] ], [ [ "# print out for LaTeX table\n\ndescriptions = {\n 1: \"Low variance\",\n 2: \"Low variance and one outlier\",\n 3: \"Higher variance\",}\nbold = [[False, False, False, False], [False, True, True, True], [True, False, False, False]]\ndef make_bold(is_bold):\n if is_bold:\n return \"\\\\bfseries\"\n return \"\"\n\nfor i, y in enumerate([y1, y2, y3]):\n print(\"{description} & {bold1} {mae:.2f} & {bold2} {mse:.2f} & {bold3} {rmse:.2f} & {bold4} {r2:.2f} \\\\\\\\\".format(\n description=descriptions[i+1],\n mae=me.mean_absolute_error(y_model, y),\n mse=me.mean_squared_error(y_model, y),\n rmse=np.sqrt(me.mean_squared_error(y_model, y)),\n r2=me.r2_score(y_model, y)*100,\n bold1=make_bold(bold[i][0]),\n bold2=make_bold(bold[i][1]),\n bold3=make_bold(bold[i][2]),\n bold4=make_bold(bold[i][3]),))", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "raw" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "raw" ] ]
cbf6eb8718510483ce4fc113c318f5f57dc1491a
6,480
ipynb
Jupyter Notebook
notebooks/ 9. Multiple dispatch.ipynb
elson1996/datascience
5dbe0ad35a780edd3d95f8b5e57ce6d5a92ff203
[ "MIT" ]
null
null
null
notebooks/ 9. Multiple dispatch.ipynb
elson1996/datascience
5dbe0ad35a780edd3d95f8b5e57ce6d5a92ff203
[ "MIT" ]
null
null
null
notebooks/ 9. Multiple dispatch.ipynb
elson1996/datascience
5dbe0ad35a780edd3d95f8b5e57ce6d5a92ff203
[ "MIT" ]
null
null
null
19.459459
168
0.510185
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
cbf6f9441a1ba094442c30a1d1c21b49b275cbdd
113,178
ipynb
Jupyter Notebook
Notebooks/Uncategorized/Untitled1.ipynb
nealcaren/Text-Mining-with-Python-Notebooks
aebcd5b0ae515dbb34e9c4de34aa6dc265517638
[ "MIT" ]
2
2020-05-15T17:02:48.000Z
2021-12-14T23:22:38.000Z
Notebooks/Uncategorized/Untitled1.ipynb
nealcaren/Text-Mining-with-Python-Notebooks
aebcd5b0ae515dbb34e9c4de34aa6dc265517638
[ "MIT" ]
null
null
null
Notebooks/Uncategorized/Untitled1.ipynb
nealcaren/Text-Mining-with-Python-Notebooks
aebcd5b0ae515dbb34e9c4de34aa6dc265517638
[ "MIT" ]
null
null
null
33.946611
216
0.498092
[ [ [ "asdfasdf", "_____no_output_____" ] ], [ [ "from jupyterthemes import get_themes\nimport jupyterthemes as jt\nfrom jupyterthemes.stylefx import set_nb_theme", "_____no_output_____" ], [ "# uncomment and execute line to try a new theme\n#set_nb_theme('onedork')\n#set_nb_theme('chesterish')\nset_nb_theme('grade3')\n#set_nb_theme('oceans16')\n#set_nb_theme('solarizedl')\n# set_nb_theme('solarizedd')\n#set_nb_theme('monokai')", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ] ]
cbf6fd4694d12a3853ab6b1c2a832ef436a1be19
1,981
ipynb
Jupyter Notebook
docs/auto_examples/plot_sgd.ipynb
RandallBalestriero/TheanoXLA
d8778c2eb3254b478cef4f45d934bf921e695619
[ "Apache-2.0" ]
67
2020-02-21T21:26:46.000Z
2020-06-14T14:25:42.000Z
docs/auto_examples/plot_sgd.ipynb
RandallBalestriero/TheanoXLA
d8778c2eb3254b478cef4f45d934bf921e695619
[ "Apache-2.0" ]
8
2020-02-22T14:45:56.000Z
2020-06-07T16:56:47.000Z
docs/auto_examples/plot_sgd.ipynb
RandallBalestriero/TheanoXLA
d8778c2eb3254b478cef4f45d934bf921e695619
[ "Apache-2.0" ]
4
2020-02-21T17:34:46.000Z
2020-05-30T08:30:14.000Z
36.685185
814
0.546694
[ [ [ "%matplotlib inline", "_____no_output_____" ] ], [ [ "\nBasic gradient descent (and reset)\n==================================\n\ndemonstration on how to compute a gradient and apply a basic gradient update\nrule to minimize some loss function\n", "_____no_output_____" ] ], [ [ "import symjax\nimport symjax.tensor as T\nimport matplotlib.pyplot as plt\n\n# GRADIENT DESCENT\nz = T.Variable(3.0, dtype=\"float32\")\nloss = (z - 1) ** 2\ng_z = symjax.gradients(loss, [z])[0]\nsymjax.current_graph().add_updates({z: z - 0.1 * g_z})\n\ntrain = symjax.function(outputs=[loss, z], updates=symjax.get_updates())\n\nlosses = list()\nvalues = list()\nfor i in range(200):\n if (i + 1) % 50 == 0:\n symjax.reset_variables(\"*\")\n a, b = train()\n losses.append(a)\n values.append(b)\n\nplt.figure()\n\nplt.subplot(121)\nplt.plot(losses, \"-x\")\nplt.ylabel(\"loss\")\nplt.xlabel(\"number of gradient updates\")\n\nplt.subplot(122)\nplt.plot(values, \"-x\")\nplt.axhline(1, c=\"red\")\nplt.ylabel(\"value\")\nplt.xlabel(\"number of gradient updates\")\n\nplt.tight_layout()", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ] ]
cbf702d3add3851f8ce5638a5dc8ae9444ea946f
148,166
ipynb
Jupyter Notebook
week3/parvmaheshwari2002/Q2 - 1/Attempt1_filesubmission_cubic-quintic-spirals.ipynb
naveenmoto/lablet102
24de9daa4ae75cbde93567a3239ede43c735cf03
[ "MIT" ]
1
2021-07-09T16:48:44.000Z
2021-07-09T16:48:44.000Z
week3/parvmaheshwari2002/Q2 - 1/Attempt1_filesubmission_cubic-quintic-spirals.ipynb
naveenmoto/lablet102
24de9daa4ae75cbde93567a3239ede43c735cf03
[ "MIT" ]
null
null
null
week3/parvmaheshwari2002/Q2 - 1/Attempt1_filesubmission_cubic-quintic-spirals.ipynb
naveenmoto/lablet102
24de9daa4ae75cbde93567a3239ede43c735cf03
[ "MIT" ]
null
null
null
148,166
148,166
0.941748
[ [ [ "import numpy as np\nimport matplotlib.pyplot as plt\nfrom sympy import Symbol, integrate\n%matplotlib notebook", "_____no_output_____" ] ], [ [ "### Smooth local paths\nWe will use cubic spirals to generate smooth local paths. Without loss of generality, as $\\theta$ smoothly changes from 0 to 1, we impose a condition on the curvature as follows\n\n$\\kappa = f'(\\theta) = K(\\theta(1-\\theta))^n $\n\nThis ensures curvature vanishes at the beginning and end of the path. Integrating, the yaw changes as\n$\\theta = \\int_0^x f'(\\theta)d\\theta$\n\nWith $n = 1$ we get a cubic spiral, $n=2$ we get a quintic spiral and so on. Let us use the sympy package to find the family of spirals\n\n1. Declare $x$ a Symbol\n\n2. You want to find Integral of $f'(x)$\n\n3. You can choose $K$ so that all coefficients are integers\n\nVerify if $\\theta(0) = 0$ and $\\theta(1) = 1$", "_____no_output_____" ] ], [ [ "K = 30#choose for cubic/quintic\nn = 2#choose for cubic/ quintic\nx = Symbol('x')#declare as Symbol\nprint(integrate(K*(x*(1-x))**n, x)) # complete the expression", "6*x**5 - 15*x**4 + 10*x**3\n" ], [ "#write function to compute a cubic spiral\n#input/ output can be any theta\ndef cubic_spiral(theta_i, theta_f, n=10):\n x = np.linspace(0, 1, num=n)\n theta = (-2*x**3 + 3*x**2) * (theta_f-theta_i) + theta_i\n return theta\n # pass \n\ndef quintic_spiral(theta_i, theta_f, n=10):\n x = np.linspace(0, 1, num=n)\n theta = (6*x**5 - 15*x**4 + 10*x**3)* (theta_f-theta_i) + theta_i\n return theta\n # pass\ndef circular_spiral(theta_i, theta_f, n=10):\n x = np.linspace(0, 1, num=n)\n theta = x* (theta_f-theta_i) + theta_i\n return theta", "_____no_output_____" ] ], [ [ "### Plotting\nPlot cubic, quintic spirals along with how $\\theta$ will change when moving in a circular arc. Remember circular arc is when $\\omega $ is constant\n", "_____no_output_____" ] ], [ [ "theta_i = 1.57\ntheta_f = 0\nn = 10\nx = np.linspace(0, 1, num=n)\nplt.figure()\nplt.plot(x,circular_spiral(theta_i, theta_f, n),label='Circular')\nplt.plot(x,cubic_spiral(theta_i, theta_f, n), label='Cubic')\nplt.plot(x,quintic_spiral(theta_i, theta_f, n), label='Quintic')\n\nplt.grid()\nplt.legend()", "_____no_output_____" ] ], [ [ "## Trajectory\n\nUsing the spirals, convert them to trajectories $\\{(x_i,y_i,\\theta_i)\\}$. Remember the unicycle model \n\n$dx = v\\cos \\theta dt$\n\n$dy = v\\sin \\theta dt$\n\n$\\theta$ is given by the spiral functions you just wrote. Use cumsum() in numpy to calculate {}\n\nWhat happens when you change $v$?", "_____no_output_____" ] ], [ [ "v = 1\ndt = 0.1\ntheta_i = 1.57\ntheta_f = 0\nn = 100\ntheta_cubic = cubic_spiral(theta_i, theta_f, n)\ntheta_quintic = quintic_spiral(theta_i, theta_f, int(n+(23/1000)*n))\ntheta_circular = circular_spiral(theta_i, theta_f, int(n-(48/1000)*n))\n# print(theta)\ndef trajectory(v,dt,theta):\n dx = v*np.cos(theta) *dt\n dy = v*np.sin(theta) *dt\n # print(dx)\n x = np.cumsum(dx)\n y = np.cumsum(dy)\n return x,y\n\n# plot trajectories for circular/ cubic/ quintic\nplt.figure()\nplt.plot(*trajectory(v,dt,theta_circular), label='Circular')\nplt.plot(*trajectory(v,dt,theta_cubic), label='Cubic')\nplt.plot(*trajectory(v,dt,theta_quintic), label='Quintic')\n\n\nplt.grid()\nplt.legend()", "_____no_output_____" ] ], [ [ "## Symmetric poses\n\nWe have been doing only examples with $|\\theta_i - \\theta_f| = \\pi/2$. \n\nWhat about other orientation changes? Given below is an array of terminal angles (they are in degrees!). 
Start from 0 deg and plot the family of trajectories", "_____no_output_____" ] ], [ [ "dt = 0.1\nthetas = [15, 30, 45, 60, 90, 120, 150, 180] #convert to radians\nplt.figure()\nfor tf in thetas:\n t = cubic_spiral(0, np.deg2rad(tf),50)\n x = np.cumsum(np.cos(t)*dt)\n y = np.cumsum(np.sin(t)*dt)\n plt.plot(x, y, label=f'0 to {tf} degree')\nplt.grid()\nplt.legend()\n# On the same plot, move from 180 to 180 - theta\n#thetas = \nplt.figure()\nfor tf in thetas:\n t = cubic_spiral(np.pi, np.pi-np.deg2rad(tf),50)\n x = np.cumsum(np.cos(t)*dt)\n y = np.cumsum(np.sin(t)*dt)\n plt.plot(x, y, label=f'180 to {180-tf} degree')\n\n\nplt.grid()\nplt.legend()", "_____no_output_____" ] ], [ [ "Modify your code to print the following for the positive terminal angles $\\{\\theta_f\\}$\n1. Final x, y position in corresponding trajectory: $x_f, y_f$ \n2. $\\frac{y_f}{x_f}$ and $\\tan \\frac{\\theta_f}{2}$\n\nWhat do you notice? \nWhat happens when $v$ is doubled?", "_____no_output_____" ] ], [ [ "dt = 0.1\nthetas = [15, 30, 45, 60, 90, 120, 150, 180] #convert to radians\n# plt.figure()\nfor tf in thetas:\n t = cubic_spiral(0, np.deg2rad(tf),50)\n x = np.cumsum(np.cos(t)*dt)\n y = np.cumsum(np.sin(t)*dt)\n print(f'tf: {tf} x_f : {x[-1]} y_f: {y[-1]} y_f/x_f : {y[-1]/x[-1]} tan (theta_f/2) : {np.tan(np.deg2rad(tf)/2)}')\n", "tf: 15 x_f : 4.936181599941893 y_f: 0.6498606361772978 y_f/x_f : 0.13165249758739583 tan (theta_f/2) : 0.13165249758739583\ntf: 30 x_f : 4.747888365557456 y_f: 1.2721928533042435 y_f/x_f : 0.2679491924311227 tan (theta_f/2) : 0.2679491924311227\ntf: 45 x_f : 4.444428497864582 y_f: 1.8409425608129912 y_f/x_f : 0.41421356237309487 tan (theta_f/2) : 0.41421356237309503\ntf: 60 x_f : 4.040733895009051 y_f: 2.3329188020071205 y_f/x_f : 0.5773502691896257 tan (theta_f/2) : 0.5773502691896257\ntf: 90 x_f : 3.0152040529843056 y_f: 3.0152040529843065 y_f/x_f : 1.0000000000000002 tan (theta_f/2) : 0.9999999999999999\ntf: 120 x_f : 1.8653713069408235 y_f: 3.230917878602665 y_f/x_f : 1.7320508075688772 tan (theta_f/2) : 1.7320508075688767\ntf: 150 x_f : 0.8004297415440109 y_f: 2.9872444633314705 y_f/x_f : 3.732050807568873 tan (theta_f/2) : 3.7320508075688776\ntf: 180 x_f : 5.551115123125783e-17 y_f: 2.3817721504025933 y_f/x_f : 4.290619267613818e+16 tan (theta_f/2) : 1.633123935319537e+16\n" ] ], [ [ "These are called *symmetric poses*. With this spiral-fitting approach, only symmetric poses can be reached. \n\nIn order to move between any 2 arbitrary poses, you will have to find an intermediate pose that is pair-wise symmetric to the start and the end pose. \n\nWhat should be the intermediate pose? There are infinite possibilities. We would have to formulate it as an optimization problem. As they say, that has to be left for another time!", "_____no_output_____" ] ], [ [ "", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cbf705daa58153bdccf84d22f3645be4dca5e2d3
212,942
ipynb
Jupyter Notebook
04_add_cancer_types/plot_add_cancer_results.ipynb
greenelab/pancancer-evaluation
ae5b8150f69d7e629db2e3c54526ed5d00d73395
[ "BSD-3-Clause" ]
3
2020-10-07T18:07:48.000Z
2021-06-04T16:58:53.000Z
04_add_cancer_types/plot_add_cancer_results.ipynb
jjc2718/pancancer-evaluation
3908b2960638e150defd649127c8e7cc190dc6bf
[ "BSD-3-Clause" ]
31
2020-08-03T12:57:54.000Z
2021-01-19T20:23:56.000Z
04_add_cancer_types/plot_add_cancer_results.ipynb
jjc2718/pancancer-evaluation
3908b2960638e150defd649127c8e7cc190dc6bf
[ "BSD-3-Clause" ]
1
2020-07-29T20:09:42.000Z
2020-07-29T20:09:42.000Z
189.787879
54,088
0.870383
[ [ [ "## Add cancer analysis\n\nAnalysis of results from `run_add_cancer_classification.py`.\n\nWe hypothesized that adding cancers in a principled way (e.g. by similarity to the target cancer) would lead to improved performance relative to both a single-cancer model (using only the target cancer type), and a pan-cancer model using all cancer types without regard for similarity to the target cancer.\n\nScript parameters:\n* RESULTS_DIR: directory to read experiment results from\n* IDENTIFIER: {gene}\\_{cancer_type} target identifier to plot results for", "_____no_output_____" ] ], [ [ "import os\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nimport pancancer_evaluation.config as cfg\nimport pancancer_evaluation.utilities.analysis_utilities as au", "_____no_output_____" ], [ "RESULTS_DIR = os.path.join(cfg.repo_root, 'add_cancer_results', 'add_cancer')", "_____no_output_____" ] ], [ [ "### Load data", "_____no_output_____" ] ], [ [ "add_cancer_df = au.load_add_cancer_results(RESULTS_DIR, load_cancer_types=True)\nprint(add_cancer_df.shape)\nadd_cancer_df.sort_values(by=['gene', 'holdout_cancer_type']).head()", "(10272, 12)\n" ], [ "# load data from previous single-cancer and pan-cancer experiments\n# this is to put the add cancer results in the context of our previous results\npancancer_dir = os.path.join(cfg.results_dir, 'pancancer')\npancancer_dir2 = os.path.join(cfg.results_dir, 'vogelstein_s1_results', 'pancancer')\nsingle_cancer_dir = os.path.join(cfg.results_dir, 'single_cancer')\nsingle_cancer_dir2 = os.path.join(cfg.results_dir, 'vogelstein_s1_results', 'single_cancer')", "_____no_output_____" ], [ "single_cancer_df1 = au.load_prediction_results(single_cancer_dir, 'single_cancer')\nsingle_cancer_df2 = au.load_prediction_results(single_cancer_dir2, 'single_cancer')\nsingle_cancer_df = pd.concat((single_cancer_df1, single_cancer_df2))\nprint(single_cancer_df.shape)\nsingle_cancer_df.head()", "(20772, 10)\n" ], [ "pancancer_df1 = au.load_prediction_results(pancancer_dir, 'pancancer')\npancancer_df2 = au.load_prediction_results(pancancer_dir2, 'pancancer')\npancancer_df = pd.concat((pancancer_df1, pancancer_df2))\nprint(pancancer_df.shape)\npancancer_df.head()", "(20784, 10)\n" ], [ "single_cancer_comparison_df = au.compare_results(single_cancer_df,\n identifier='identifier',\n metric='aupr',\n correction=True,\n correction_alpha=0.001,\n verbose=False)\npancancer_comparison_df = au.compare_results(pancancer_df,\n identifier='identifier',\n metric='aupr',\n correction=True,\n correction_alpha=0.001,\n verbose=False)\nexperiment_comparison_df = au.compare_results(single_cancer_df,\n pancancer_df=pancancer_df,\n identifier='identifier',\n metric='aupr',\n correction=True,\n correction_alpha=0.05,\n verbose=False)\nexperiment_comparison_df.sort_values(by='corr_pval').head(n=10)", "_____no_output_____" ] ], [ [ "### Plot change in performance as cancers are added", "_____no_output_____" ] ], [ [ "IDENTIFIER = 'BRAF_COAD'\n# IDENTIFIER = 'EGFR_ESCA'\n# IDENTIFIER = 'EGFR_LGG'\n# IDENTIFIER = 'KRAS_CESC'\n# IDENTIFIER = 'PIK3CA_ESCA'\n# IDENTIFIER = 'PIK3CA_STAD'\n# IDENTIFIER = 'PTEN_COAD'\n# IDENTIFIER = 'PTEN_BLCA'\n# IDENTIFIER = 'TP53_OV'\n# IDENTIFIER = 'NF1_GBM'\n\nGENE = IDENTIFIER.split('_')[0]", "_____no_output_____" ], [ "gene_df = add_cancer_df[(add_cancer_df.gene == GENE) &\n (add_cancer_df.data_type == 'test') &\n (add_cancer_df.signal == 'signal')].copy()\n\n# make seaborn treat x axis as 
categorical\ngene_df.num_train_cancer_types = gene_df.num_train_cancer_types.astype(str)\ngene_df.loc[(gene_df.num_train_cancer_types == '-1'), 'num_train_cancer_types'] = 'all'\n\nsns.set({'figure.figsize': (14, 6)})\nsns.pointplot(data=gene_df, x='num_train_cancer_types', y='aupr', hue='identifier',\n order=['0', '1', '2', '4', 'all'])\nplt.legend(bbox_to_anchor=(1.15, 0.5), loc='center right', borderaxespad=0., title='Cancer type')\nplt.title('Adding cancer types by confusion matrix similarity, {} mutation prediction'.format(GENE), size=13)\nplt.xlabel('Number of added cancer types', size=13)\nplt.ylabel('AUPR', size=13)", "_____no_output_____" ], [ "id_df = add_cancer_df[(add_cancer_df.identifier == IDENTIFIER) &\n (add_cancer_df.data_type == 'test') &\n (add_cancer_df.signal == 'signal')].copy()\n\n# make seaborn treat x axis as categorical\nid_df.num_train_cancer_types = id_df.num_train_cancer_types.astype(str)\nid_df.loc[(id_df.num_train_cancer_types == '-1'), 'num_train_cancer_types'] = 'all'\n\nsns.set({'figure.figsize': (14, 6)})\ncat_order = ['0', '1', '2', '4', 'all']\nsns.pointplot(data=id_df, x='num_train_cancer_types', y='aupr', hue='identifier',\n order=cat_order)\nplt.legend([],[], frameon=False)\nplt.title('Adding cancer types by confusion matrix similarity, {} mutation prediction'.format(IDENTIFIER),\n size=13)\nplt.xlabel('Number of added cancer types', size=13)\nplt.ylabel('AUPR', size=13)\n\n# annotate points with cancer types\ndef label_points(x, y, cancer_types, gene, ax):\n a = pd.DataFrame({'x': x, 'y': y, 'cancer_types': cancer_types})\n for i, point in a.iterrows():\n if gene in ['TP53', 'PIK3CA'] and point['x'] == 4:\n ax.text(point['x']+0.05,\n point['y']+0.005,\n str(point['cancer_types'].replace(' ', '\\n')),\n bbox=dict(facecolor='none', edgecolor='black', boxstyle='round'),\n ha='left', va='center')\n else:\n ax.text(point['x']+0.05,\n point['y']+0.005,\n str(point['cancer_types'].replace(' ', '\\n')),\n bbox=dict(facecolor='none', edgecolor='black', boxstyle='round'))\n\ncat_to_loc = {c: i for i, c in enumerate(cat_order)}\ngroup_id_df = (\n id_df.groupby(['num_train_cancer_types', 'train_cancer_types'])\n .mean()\n .reset_index()\n)\nlabel_points([cat_to_loc[c] for c in group_id_df.num_train_cancer_types],\n group_id_df.aupr,\n group_id_df.train_cancer_types,\n GENE,\n plt.gca())", "_____no_output_____" ] ], [ [ "### Plot gene/cancer type \"best model\" performance vs. 
single/pan-cancer models", "_____no_output_____" ] ], [ [ "id_df = add_cancer_df[(add_cancer_df.identifier == IDENTIFIER) &\n (add_cancer_df.data_type == 'test')].copy()\n\nbest_num = (\n id_df[id_df.signal == 'signal']\n .groupby('num_train_cancer_types')\n .mean()\n .reset_index()\n .sort_values(by='aupr', ascending=False)\n .iloc[0, 0]\n)\nprint(best_num)\nbest_id_df = (\n id_df.loc[id_df.num_train_cancer_types == best_num, :]\n .drop(columns=['num_train_cancer_types', 'how_to_add', 'train_cancer_types'])\n)\nbest_id_df['train_set'] = 'best_add'\nsc_id_df = (\n id_df.loc[id_df.num_train_cancer_types == 1, :]\n .drop(columns=['num_train_cancer_types', 'how_to_add', 'train_cancer_types'])\n)\nsc_id_df['train_set'] = 'single_cancer'\npc_id_df = (\n id_df.loc[id_df.num_train_cancer_types == -1, :]\n .drop(columns=['num_train_cancer_types', 'how_to_add', 'train_cancer_types'])\n)\npc_id_df['train_set'] = 'pancancer'\nall_id_df = pd.concat((sc_id_df, best_id_df, pc_id_df), sort=False)\nall_id_df.head()", "2\n" ], [ "sns.set()\nsns.boxplot(data=all_id_df, x='train_set', y='aupr', hue='signal', hue_order=['signal', 'shuffled'])\nplt.title('{}, single/best/pancancer predictors'.format(IDENTIFIER))\nplt.xlabel('Training data')\nplt.ylabel('AUPR')\nplt.legend(title='Signal')", "_____no_output_____" ], [ "print('Single cancer significance: {}'.format(\n single_cancer_comparison_df.loc[single_cancer_comparison_df.identifier == IDENTIFIER, 'reject_null'].values[0]\n))\nprint('Pan-cancer significance: {}'.format(\n pancancer_comparison_df.loc[pancancer_comparison_df.identifier == IDENTIFIER, 'reject_null'].values[0]\n))", "Single cancer significance: False\nPan-cancer significance: False\n" ], [ "# Q2: where is this example in the single vs. pan-cancer volcano plot?\n# see pancancer only experiments for an example of this sort of thing", "_____no_output_____" ], [ "experiment_comparison_df['nlog10_p'] = -np.log(experiment_comparison_df.corr_pval)\n\nsns.set({'figure.figsize': (8, 6)})\nsns.scatterplot(data=experiment_comparison_df, x='delta_mean', y='nlog10_p',\n hue='reject_null', alpha=0.3)\nplt.xlabel('AUPRC(pancancer) - AUPRC(single cancer)')\nplt.ylabel(r'$-\\log_{10}($adjusted p-value$)$')\nplt.title('Highlight {} in pancancer vs. single-cancer comparison'.format(IDENTIFIER))\n\ndef highlight_id(x, y, val, ax, id_to_plot):\n a = pd.concat({'x': x, 'y': y, 'val': val}, axis=1)\n for i, point in a.iterrows():\n if point['val'] == id_to_plot:\n ax.scatter(point['x'], point['y'], color='red', marker='+', s=100)\n \nhighlight_id(experiment_comparison_df.delta_mean,\n experiment_comparison_df.nlog10_p,\n experiment_comparison_df.identifier,\n plt.gca(),\n IDENTIFIER)", "_____no_output_____" ] ], [ [ "Overall, these results weren't quite as convincing as we were expecting. Although there are a few gene/cancer type combinations where there is a clear improvement when one or two relevant cancer types are added, overall there isn't much change in many cases (see first line plots of multiple cancer types).\n\nBiologically speaking, this isn't too surprising for a few reasons:\n\n* Some genes aren’t drivers in certain cancer types\n* Some genes have very cancer-specific effects\n* Some genes (e.g. 
TP53) have very well-preserved effects across all cancers\n\nWe think there could be room for improvement as far as cancer type selection (some of the cancers chosen don't make a ton of sense), but overall we're a bit skeptical that this approach will lead to models that generalize better than a single-cancer model in most cases.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ] ]
cbf70c2b1e9fca72227dfeb874ca5efb9caeb263
15,121
ipynb
Jupyter Notebook
proof_of_work/multiagent/turn_based/v6/match.ipynb
michaelneuder/parkes_lab_fa19
18d9f564e0df9c17ac5d54619ed869d778d4f6a4
[ "MIT" ]
null
null
null
proof_of_work/multiagent/turn_based/v6/match.ipynb
michaelneuder/parkes_lab_fa19
18d9f564e0df9c17ac5d54619ed869d778d4f6a4
[ "MIT" ]
null
null
null
proof_of_work/multiagent/turn_based/v6/match.ipynb
michaelneuder/parkes_lab_fa19
18d9f564e0df9c17ac5d54619ed869d778d4f6a4
[ "MIT" ]
null
null
null
30.363454
111
0.408769
[ [ [ "import environmentv6 as e\nimport mdptoolbox\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport progressbar as pb\nimport scipy.sparse as ss\nimport seaborn as sns\nimport warnings\nwarnings.filterwarnings('ignore', category=ss.SparseEfficiencyWarning)", "_____no_output_____" ], [ "# params\nalpha = 0.4\ngamma = 0.5\nT = 8\nepsilon = 10e-5\n\n# game\naction_count = 4\nadopt = 0; override = 1; mine = 2; match = 3\n\n# fork params\nfork_count = 3\nirrelevant = 0; relevant = 1; active = 2;\n\nstate_count = (T+1) * (T+1) * 3\n\n# mapping utils\nstate_mapping = {}\nstates = []\ncount = 0\nfor a in range(T+1):\n for h in range(T+1):\n for fork in range(fork_count):\n state_mapping[(a, h, fork)] = count\n states.append((a, h, fork))\n count += 1\n\n# initialize matrices\ntransitions = []; rewards = []\nfor _ in range(action_count):\n transitions.append(ss.csr_matrix(np.zeros(shape=(state_count, state_count))))\n rewards.append(ss.csr_matrix(np.zeros(shape=(state_count, state_count))))", "_____no_output_____" ], [ "mining_cost = 0.8\n\n# populate matrices\nfor state_index in range(state_count):\n a, h, fork = states[state_index]\n\n # adopt\n transitions[adopt][state_index, state_mapping[0, 0, irrelevant]] = 1\n\n # override\n if a > h:\n transitions[override][state_index, state_mapping[a-h-1, 0, irrelevant]] = 1\n rewards[override][state_index, state_mapping[a-h-1, 0, irrelevant]] = h + 1\n else:\n transitions[override][state_index, 0] = 1\n rewards[override][state_index, 0] = -10000\n\n # mine \n if (fork != active) and (a < T) and (h < T):\n transitions[mine][state_index, state_mapping[a+1, h, irrelevant]] = alpha\n transitions[mine][state_index, state_mapping[a, h+1, relevant]] = (1 - alpha) \n rewards[mine][state_index, state_mapping[a+1, h, irrelevant]] = -1 * alpha * mining_cost\n rewards[mine][state_index, state_mapping[a, h+1, relevant]] = -1 * alpha * mining_cost \n elif (fork == active) and (a > h) and (h > 0) and (a < T) and (h < T):\n transitions[mine][state_index, state_mapping[a+1, h, active]] = alpha\n transitions[mine][state_index, state_mapping[a-h, 1, relevant]] = (1 - alpha) * gamma\n transitions[mine][state_index, state_mapping[a, h+1, relevant]] = (1 - alpha) * (1 - gamma)\n rewards[mine][state_index, state_mapping[a+1, h, active]] = -1 * alpha * mining_cost\n rewards[mine][state_index, state_mapping[a-h, 1, relevant]] = h - alpha * mining_cost\n rewards[mine][state_index, state_mapping[a, h+1, relevant]] = -1 * alpha * mining_cost\n else:\n transitions[mine][state_index, 0] = 1\n rewards[mine][state_index, 0] = -10000\n \n # match \n if (fork == relevant) and (a >= h) and (h > 0) and (a < T) and (h < T):\n transitions[match][state_index, state_mapping[a+1, h, active]] = alpha\n transitions[match][state_index, state_mapping[a-h, 1, relevant]] = (1 - alpha) * gamma\n transitions[match][state_index, state_mapping[a, h+1, relevant]] = (1 - alpha) * (1 - gamma)\n rewards[match][state_index, state_mapping[a+1, h, active]] = -1 * alpha * mining_cost\n rewards[match][state_index, state_mapping[a-h, 1, relevant]] = h - alpha * mining_cost\n rewards[match][state_index, state_mapping[a, h+1, relevant]] = -1 * alpha * mining_cost\n else:\n transitions[match][state_index, 0] = 1\n rewards[match][state_index, 0] = -10000", "_____no_output_____" ], [ "rvi = mdptoolbox.mdp.RelativeValueIteration(transitions, rewards, epsilon/8)\nrvi.run()\npolicy = rvi.policy\nprocessPolicy(policy)", "wwa & aaa & aaa & aaa & aaa & aaa & aaa & aaa & aaa & \\\\ \nooo & wma & wwa & aaa & aaa & 
aaa & aaa & aaa & aaa & \\\\ \nwwo & ooo & wma & wwa & aaa & aaa & aaa & aaa & aaa & \\\\ \nwwo & wmw & ooo & wma & wwa & aaa & aaa & aaa & aaa & \\\\ \nwwo & wmw & wmw & ooo & wma & wwa & aaa & aaa & aaa & \\\\ \nwwo & wmw & wmw & wmw & ooo & wma & wwa & aaa & aaa & \\\\ \nwwo & wmw & wmw & wmw & wmw & ooo & wma & wwa & aaa & \\\\ \nwwo & wmw & wmw & wmw & wmw & www & ooo & wma & aaa & \\\\ \nooo & ooo & ooo & ooo & ooo & ooo & ooo & ooo & aaa & \\\\ \n\n" ], [ "np.reshape(policy, (9,9,3))", "_____no_output_____" ], [ "def processPolicy(policy):\n results = ''\n for a in range(9):\n for h in range(9):\n for fork in range(3):\n state_index = state_mapping[(a, h, fork)]\n action = policy[state_index]\n if action == 0:\n results += 'a'\n elif action == 1:\n results += 'o'\n elif action == 2:\n results += 'w'\n elif action == 3:\n results += 'm'\n else:\n print('here')\n results += ' & '\n results += '\\\\\\\\ \\n'\n print(results)", "_____no_output_____" ], [ "sm1_policy = np.asarray([\n[2, 0, 9, 9, 9, 9, 9, 9, 9],\n[2, 0, 9, 9, 9, 9, 9, 9, 9],\n[2, 1, 0, 9, 9, 9, 9, 9, 9], \n[2, 2, 1, 0, 9, 9, 9, 9, 9],\n[2, 2, 2, 1, 0, 9, 9, 9, 9],\n[2, 2, 2, 2, 1, 0, 9, 9, 9],\n[2, 2, 2, 2, 2, 1, 0, 9, 9],\n[2, 2, 2, 2, 2, 2, 1, 0, 9],\n[1, 1, 1, 1, 1, 1, 1, 1, 0]\n])\n\nhonest_policy = np.asarray([\n[2, 0, 9, 9, 9, 9, 9, 9, 9],\n[1, 9, 9, 9, 9, 9, 9, 9, 9],\n[9, 9, 9, 9, 9, 9, 9, 9, 9], \n[9, 9, 9, 9, 9, 9, 9, 9, 9],\n[9, 9, 9, 9, 9, 9, 9, 9, 9],\n[9, 9, 9, 9, 9, 9, 9, 9, 9],\n[9, 9, 9, 9, 9, 9, 9, 9, 9],\n[9, 9, 9, 9, 9, 9, 9, 9, 9],\n[9, 9, 9, 9, 9, 9, 9, 9, 9]\n])\n\nopt_policy = np.reshape(policy, (9,9))", "_____no_output_____" ], [ "def get_opt_policy(alpha, T, mining_cost):\n for state_index in range(state_count):\n a, h = states[state_index]\n\n # adopt transitions\n transitions[adopt][state_index, state_mapping[0, 0]] = 1\n\n # override\n if a > h:\n transitions[override][state_index, state_mapping[a-h-1, 0]] = 1\n rewards[override][state_index, state_mapping[a-h-1, 0]] = h + 1\n else:\n transitions[override][state_index, 0] = 1\n rewards[override][state_index, 0] = -10000\n\n # mine transitions\n if (a < T) and (h < T):\n transitions[mine][state_index, state_mapping[a+1, h]] = alpha\n transitions[mine][state_index, state_mapping[a, h+1]] = (1 - alpha) \n rewards[mine][state_index, state_mapping[a+1, h]] = -1 * alpha * mining_cost\n rewards[mine][state_index, state_mapping[a, h+1]] = -1 * alpha * mining_cost \n else:\n transitions[mine][state_index, 0] = 1\n rewards[mine][state_index, 0] = -10000\n \n rvi = mdptoolbox.mdp.RelativeValueIteration(transitions, rewards, epsilon/8)\n rvi.run()\n return np.reshape(rvi.policy, (T+1, T+1))", "_____no_output_____" ], [ "get_opt_policy(alpha=0.4, T=8, mining_cost=0.5)", "_____no_output_____" ], [ "# simulation\nlength = int(1e6)\nalpha = 0.4\nT = 8\nmining_cost = 0.5\nenv = e.Environment(alpha, T, mining_cost)\n\n# simulation\nbar = pb.ProgressBar()\n_ = env.reset()\ncurrent_reward = 0\nfor _ in bar(range(length)):\n a, h = env.current_state\n action = opt_policy[(a,h)]\n _, reward = env.takeAction(action)\n current_reward += reward", "100% (1000000 of 1000000) |##############| Elapsed Time: 0:00:28 Time: 0:00:28\n" ], [ "# opt\nprint(current_reward, current_reward / length)", "101211.00000130651 0.10121100000130652\n" ], [ "# sm1\nprint(current_reward, current_reward / length)", "54266.60000058278 0.05426660000058278\n" ], [ "# honest\nprint(current_reward, current_reward / length)", "100698.00000089758 0.10069800000089758\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbf7101a89a01b4fc53d6db0cdefc157e43ed1b2
16,488
ipynb
Jupyter Notebook
research/notebook.ipynb
ilya-panov/neo-titanic-ml
ce26d0d4bc2ff5398c2b4da1d646a86554353b68
[ "MIT" ]
null
null
null
research/notebook.ipynb
ilya-panov/neo-titanic-ml
ce26d0d4bc2ff5398c2b4da1d646a86554353b68
[ "MIT" ]
null
null
null
research/notebook.ipynb
ilya-panov/neo-titanic-ml
ce26d0d4bc2ff5398c2b4da1d646a86554353b68
[ "MIT" ]
1
2021-09-28T00:45:40.000Z
2021-09-28T00:45:40.000Z
30.589981
2,259
0.470281
[ [ [ "import json\n\nimport pandas as pd\nimport numpy as np\n\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import LabelEncoder, StandardScaler\nfrom sklearn.svm import SVC\nfrom sklearn.metrics import accuracy_score\n\npd.options.mode.chained_assignment = None", "_____no_output_____" ], [ "SRC_TRAIN = \"../../../data/src/train.csv\"\nTRAIN = \"../../../data/train.csv\"\nTEST = \"../../../data/test.csv\"\nCONFIG = \"../../../data/conf.json\"", "_____no_output_____" ] ], [ [ "# Чтение данных", "_____no_output_____" ] ], [ [ "df = pd.read_csv(SRC_TRAIN)", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ] ], [ [ "Избавимся от столбцов, которые для выбранного решения \"кажутся\" бесполезными", "_____no_output_____" ] ], [ [ "df = df.drop(\"PassengerId\", axis=1)\ndf = df.drop(\"Name\", axis=1)\ndf = df.drop(\"Ticket\", axis=1)\ndf = df.drop(\"Cabin\", axis=1)", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ] ], [ [ "Случайное разделение исходных данных на train/test датасеты в пропорции 0,7 / 0,3", "_____no_output_____" ] ], [ [ "train, test = train_test_split(df, test_size=0.3)", "_____no_output_____" ] ], [ [ "Сохранение полученных данных для использования в обучении NeoML", "_____no_output_____" ] ], [ [ "train.to_csv(TRAIN, sep=\"|\")\ntest.to_csv(TEST, sep=\"|\")", "_____no_output_____" ], [ "train.count()", "_____no_output_____" ] ], [ [ "Неодинаковое кол-во значений в некоторых столбцах говорит о том, что данные нужно 'полечить'", "_____no_output_____" ] ], [ [ "datasets = [train, test]", "_____no_output_____" ] ], [ [ "# Подготовка данных\n\n## Age\n\nДля исправления пропусков в возрасте будем выбирать случайное значение из интервала (mean - sigma; mean + sigma)", "_____no_output_____" ] ], [ [ "age_mean = train[\"Age\"].mean()\nage_std = train[\"Age\"].std()\n\nprint(\"age-mean='{0}' age-std='{1}'\".format(age_mean, age_std))", "age-mean='29.71656378600823' age-std='14.578734474903417'\n" ], [ "for ds in datasets:\n age_null_count = ds[\"Age\"].isnull().sum()\n rand_age = np.random.randint(age_mean - age_std, age_mean + age_std, size=age_null_count)\n age_slice = ds[\"Age\"].copy()\n age_slice[ np.isnan(age_slice) ] = rand_age\n ds[\"Age\"] = age_slice\n ds[\"Age\"] = ds[\"Age\"].astype(int)", "_____no_output_____" ] ], [ [ "## Embarked\n\nБудем считать, что отсутствующее значение равно S (Southampton)", "_____no_output_____" ] ], [ [ "for ds in datasets:\n ds[\"Embarked\"] = ds[\"Embarked\"].fillna(\"S\")", "_____no_output_____" ], [ "train.count()", "_____no_output_____" ] ], [ [ "## Encodes\n\nСтроковые данные пола и места посадки кодируются с помощью LabelEncoder", "_____no_output_____" ] ], [ [ "sex_encoder = LabelEncoder()\nsex_encoder.fit(train[\"Sex\"])\n\ntrain[\"Sex\"] = sex_encoder.transform(train[\"Sex\"])\ntest[\"Sex\"] = sex_encoder.transform(test[\"Sex\"])", "_____no_output_____" ], [ "embarked_encode = LabelEncoder()\nembarked_encode.fit(train[\"Embarked\"])\n\ntrain[\"Embarked\"] = embarked_encode.transform(train[\"Embarked\"])\ntest[\"Embarked\"] = embarked_encode.transform(test[\"Embarked\"])", "_____no_output_____" ] ], [ [ "## X/Y\n\nИз исходных данных подготовим сэмплы для обучения классификатора", "_____no_output_____" ] ], [ [ "X_train = train.drop(\"Survived\", axis=1)\nY_train = train[\"Survived\"]\n\nX_test = test.drop(\"Survived\", axis=1)\nY_test = test[\"Survived\"]", "_____no_output_____" ] ], [ [ "## Scale\n\nВектора признаков необъодимо скалировать, что бы значения некоторых фич \"не 
давили\" своим весом", "_____no_output_____" ] ], [ [ "scaler = StandardScaler()\nscaler.fit(X_train)\n\nX_train = scaler.transform(X_train)\nX_test = scaler.transform(X_test)", "_____no_output_____" ] ], [ [ "# Обучение", "_____no_output_____" ], [ "## Train\n\nДля классификатора выберем модель на основе SVM", "_____no_output_____" ] ], [ [ "classifier = SVC(kernel = 'rbf', random_state = 0)\nclassifier.fit(X_train, Y_train)", "_____no_output_____" ] ], [ [ "## Test", "_____no_output_____" ] ], [ [ "Y_train_predict = classifier.predict(X_train)\nacc = accuracy_score(Y_train, Y_train_predict)\n\nprint(\"Train Acc = {:.2f}\".format(acc * 100))", "Train Acc = 83.79\n" ], [ "Y_test_predict = classifier.predict(X_test)\nacc = accuracy_score(Y_test, Y_test_predict)\n\nprint(\"Test Acc = {:.2f}\".format(acc * 100))", "Test Acc = 84.33\n" ] ], [ [ "Запомним значения точности, чтобы сверить с решением на NeoML", "_____no_output_____" ], [ "# Сохранение конфига\n\nДля воспроизведения функционала подготовки данных сохраняем параметры в json-конфиг", "_____no_output_____" ] ], [ [ "# Количество полей + 1 (столбец с индексом)\nfields_count = len(train.columns) + 1\n\nconfig = {\n 'expected-fields': fields_count,\n 'min-age': int(age_mean - age_std),\n 'max-age': int(age_mean + age_std),\n 'sex-labels': list(sex_encoder.classes_),\n 'embarked-labels': list(embarked_encode.classes_),\n 'scaler-mean': list(scaler.mean_),\n 'scaler-std': list(scaler.scale_)\n}\n\nwith open(CONFIG, 'w') as fp:\n json.dump(config, fp)", "_____no_output_____" ] ], [ [ "# NeoML incoming\n\nИтак, после исследования имеется в наличии:\n - процедура подготовки данных и конфиг, описывающий параметры;\n - модель, решающая задачу.\n\nНеобходимо воспроизвести решение с использованием NeoML.\nЭто позволит создать SDK, которое можно интегрировать в произвольное приложение\n\nЧто ж, приступим...", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ] ]
cbf71391a018c2cb314f71e6af80ca5b904f7df4
171,177
ipynb
Jupyter Notebook
scripts/rnn/lstm_sin-wave_2.ipynb
tayutaedomo/tensorflow-sandbox
3af51b73c76d48fffea63c9d33c5e54f6bcf0eca
[ "MIT" ]
null
null
null
scripts/rnn/lstm_sin-wave_2.ipynb
tayutaedomo/tensorflow-sandbox
3af51b73c76d48fffea63c9d33c5e54f6bcf0eca
[ "MIT" ]
8
2020-07-20T10:13:01.000Z
2022-03-12T00:33:29.000Z
scripts/rnn/lstm_sin-wave_2.ipynb
tayutaedomo/tensorflow-sandbox
3af51b73c76d48fffea63c9d33c5e54f6bcf0eca
[ "MIT" ]
null
null
null
144.088384
44,096
0.836684
[ [ [ "Reference: https://qiita.com/sasayabaku/items/b7872a3b8acc7d6261bf \n \nLSTMの学習を倍の長さでしてみる", "_____no_output_____" ] ], [ [ "from tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Activation\nfrom tensorflow.keras.layers import LSTM\nfrom tensorflow.keras.optimizers import Adam\nfrom tensorflow.keras.callbacks import EarlyStopping\nimport numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "def sin(x, T=100):\n return np.sin(2.0 * np.pi * x / T)", "_____no_output_____" ], [ "# sin波にノイズを付与する\ndef toy_problem(T=100, ampl=0.05):\n x = np.arange(0, 2 * T + 1)\n noise = ampl * np.random.uniform(low=-1.0, high=1.0, size=len(x))\n return sin(x) + noise", "_____no_output_____" ], [ "f = toy_problem()", "_____no_output_____" ], [ "def make_dataset(low_data, n_prev=100, maxlen=25):\n data, target = [], []\n\n for i in range(len(low_data)-maxlen):\n data.append(low_data[i:i + maxlen])\n target.append(low_data[i + maxlen])\n\n re_data = np.array(data).reshape(len(data), maxlen, 1)\n re_target = np.array(target).reshape(len(data), 1)\n\n return re_data, re_target", "_____no_output_____" ], [ "# 300周期 (601サンプル)にデータに拡張\nf = toy_problem(T=300)\n\n# 50サンプルごとに分割\ng, h = make_dataset(f, maxlen=50)", "_____no_output_____" ], [ "print(g.shape)\ng", "(551, 50, 1)\n" ], [ "print(h.shape)\nh", "(551, 1)\n" ], [ "#h.reshape(h.shape[0])\n#[i for i in range(h.shape[0])]\n\nplt.scatter([i for i in range(h.shape[0])], h.reshape(h.shape[0]))", "_____no_output_____" ], [ "plt.scatter([i for i in range(len(f))], f)", "_____no_output_____" ] ], [ [ "### モデル構築", "_____no_output_____" ] ], [ [ "# 1つの学習データのStep数(今回は25)\nlength_of_sequence = g.shape[1] \nin_out_neurons = 1\nn_hidden = 300", "_____no_output_____" ], [ "model = Sequential()\n\nmodel.add(LSTM(n_hidden, batch_input_shape=(None, length_of_sequence, in_out_neurons), return_sequences=False))\nmodel.add(Dense(in_out_neurons))\nmodel.add(Activation(\"linear\"))\n\noptimizer = Adam(lr=0.001)\n\nmodel.compile(loss=\"mean_squared_error\", optimizer=optimizer)", "_____no_output_____" ], [ "early_stopping = EarlyStopping(monitor='val_loss', mode='auto', patience=20)\n\nhist = model.fit(g, h,\n batch_size=300,\n epochs=100,\n validation_split=0.1,\n callbacks=[early_stopping])", "Epoch 1/100\n2/2 [==============================] - 1s 264ms/step - loss: 0.4259 - val_loss: 0.1490\nEpoch 2/100\n2/2 [==============================] - 0s 119ms/step - loss: 0.1526 - val_loss: 0.1119\nEpoch 3/100\n2/2 [==============================] - 0s 115ms/step - loss: 0.0835 - val_loss: 0.0365\nEpoch 4/100\n2/2 [==============================] - 0s 116ms/step - loss: 0.0296 - val_loss: 0.0247\nEpoch 5/100\n2/2 [==============================] - 0s 116ms/step - loss: 0.0281 - val_loss: 0.0146\nEpoch 6/100\n2/2 [==============================] - 0s 118ms/step - loss: 0.0155 - val_loss: 0.0017\nEpoch 7/100\n2/2 [==============================] - 0s 117ms/step - loss: 0.0032 - val_loss: 0.0110\nEpoch 8/100\n2/2 [==============================] - 0s 117ms/step - loss: 0.0102 - val_loss: 0.0031\nEpoch 9/100\n2/2 [==============================] - 0s 115ms/step - loss: 0.0044 - val_loss: 0.0106\nEpoch 10/100\n2/2 [==============================] - 0s 117ms/step - loss: 0.0082 - val_loss: 0.0046\nEpoch 11/100\n2/2 [==============================] - 0s 117ms/step - loss: 0.0032 - val_loss: 0.0021\nEpoch 12/100\n2/2 [==============================] - 0s 118ms/step - loss: 0.0024 - val_loss: 0.0047\nEpoch 13/100\n2/2 
[==============================] - 0s 115ms/step - loss: 0.0037 - val_loss: 0.0022\nEpoch 14/100\n2/2 [==============================] - 0s 119ms/step - loss: 0.0020 - val_loss: 0.0029\nEpoch 15/100\n2/2 [==============================] - 0s 126ms/step - loss: 0.0028 - val_loss: 0.0038\nEpoch 16/100\n2/2 [==============================] - 0s 126ms/step - loss: 0.0032 - val_loss: 0.0025\nEpoch 17/100\n2/2 [==============================] - 0s 134ms/step - loss: 0.0021 - val_loss: 0.0018\nEpoch 18/100\n2/2 [==============================] - 0s 133ms/step - loss: 0.0018 - val_loss: 0.0023\nEpoch 19/100\n2/2 [==============================] - 0s 133ms/step - loss: 0.0021 - val_loss: 0.0018\nEpoch 20/100\n2/2 [==============================] - 0s 140ms/step - loss: 0.0015 - val_loss: 0.0014\nEpoch 21/100\n2/2 [==============================] - 0s 131ms/step - loss: 0.0014 - val_loss: 0.0018\nEpoch 22/100\n2/2 [==============================] - 0s 130ms/step - loss: 0.0016 - val_loss: 0.0016\nEpoch 23/100\n2/2 [==============================] - 0s 125ms/step - loss: 0.0014 - val_loss: 0.0013\nEpoch 24/100\n2/2 [==============================] - 0s 131ms/step - loss: 0.0014 - val_loss: 0.0015\nEpoch 25/100\n2/2 [==============================] - 0s 125ms/step - loss: 0.0015 - val_loss: 0.0014\nEpoch 26/100\n2/2 [==============================] - 0s 127ms/step - loss: 0.0013 - val_loss: 0.0013\nEpoch 27/100\n2/2 [==============================] - 0s 126ms/step - loss: 0.0013 - val_loss: 0.0013\nEpoch 28/100\n2/2 [==============================] - 0s 128ms/step - loss: 0.0013 - val_loss: 0.0012\nEpoch 29/100\n2/2 [==============================] - 0s 126ms/step - loss: 0.0012 - val_loss: 0.0012\nEpoch 30/100\n2/2 [==============================] - 0s 130ms/step - loss: 0.0013 - val_loss: 0.0012\nEpoch 31/100\n2/2 [==============================] - 0s 131ms/step - loss: 0.0013 - val_loss: 0.0012\nEpoch 32/100\n2/2 [==============================] - 0s 132ms/step - loss: 0.0012 - val_loss: 0.0012\nEpoch 33/100\n2/2 [==============================] - 0s 131ms/step - loss: 0.0013 - val_loss: 0.0012\nEpoch 34/100\n2/2 [==============================] - 0s 143ms/step - loss: 0.0013 - val_loss: 0.0012\nEpoch 35/100\n2/2 [==============================] - 0s 175ms/step - loss: 0.0012 - val_loss: 0.0012\nEpoch 36/100\n2/2 [==============================] - 0s 126ms/step - loss: 0.0013 - val_loss: 0.0012\nEpoch 37/100\n2/2 [==============================] - 0s 124ms/step - loss: 0.0012 - val_loss: 0.0012\nEpoch 38/100\n2/2 [==============================] - 0s 127ms/step - loss: 0.0012 - val_loss: 0.0013\nEpoch 39/100\n2/2 [==============================] - 0s 124ms/step - loss: 0.0012 - val_loss: 0.0012\nEpoch 40/100\n2/2 [==============================] - 0s 131ms/step - loss: 0.0012 - val_loss: 0.0012\nEpoch 41/100\n2/2 [==============================] - 0s 131ms/step - loss: 0.0012 - val_loss: 0.0012\nEpoch 42/100\n2/2 [==============================] - 0s 135ms/step - loss: 0.0012 - val_loss: 0.0012\nEpoch 43/100\n2/2 [==============================] - 0s 129ms/step - loss: 0.0012 - val_loss: 0.0012\nEpoch 44/100\n2/2 [==============================] - 0s 158ms/step - loss: 0.0012 - val_loss: 0.0012\nEpoch 45/100\n2/2 [==============================] - 0s 134ms/step - loss: 0.0012 - val_loss: 0.0012\nEpoch 46/100\n2/2 [==============================] - 0s 134ms/step - loss: 0.0012 - val_loss: 0.0012\nEpoch 47/100\n2/2 [==============================] - 0s 135ms/step - loss: 0.0012 - val_loss: 
0.0012\nEpoch 48/100\n2/2 [==============================] - 0s 130ms/step - loss: 0.0012 - val_loss: 0.0012\nEpoch 49/100\n2/2 [==============================] - 0s 125ms/step - loss: 0.0012 - val_loss: 0.0012\n" ], [ "import pandas as pd\n\nresults = pd.DataFrame(hist.history)\nresults[['loss', 'val_loss']].plot()", "_____no_output_____" ] ], [ [ "### Prediction", "_____no_output_____" ] ], [ [ "predicted = model.predict(g)", "_____no_output_____" ], [ "plt.figure()\n# offset the predictions by maxlen (50): each prediction targets the point 50 steps after its window start\nplt.plot(range(50,len(predicted)+50), predicted, color=\"r\", label=\"predict_data\")\nplt.plot(range(0, len(f)), f, color=\"b\", label=\"row_data\")\nplt.legend()\nplt.show()", "_____no_output_____" ] ], [ [ "### Predicting the future", "_____no_output_____" ] ], [ [ "future_test = g[-1].T", "_____no_output_____" ], [ "print(future_test.shape)\nfuture_test", "(1, 50)\n" ], [ "# time length of one training sample -> 50\ntime_length = future_test.shape[1]\n\n# variable to accumulate the future predictions\nfuture_result = np.empty((1))\n\n# predict the future\nfor step2 in range(400):\n test_data = np.reshape(future_test, (1, time_length, 1))\n batch_predict = model.predict(test_data)\n\n future_test = np.delete(future_test, 0)\n future_test = np.append(future_test, batch_predict)\n\n future_result = np.append(future_result, batch_predict)", "_____no_output_____" ], [ "#future_result", "_____no_output_____" ], [ "# plot the sine wave\nplt.figure()\nplt.plot(range(50,len(predicted)+50), predicted, color=\"r\", label=\"predict_data\")\nplt.plot(range(0, len(f)), f, color=\"b\", label=\"row_data\")\nplt.plot(range(0+len(f), len(future_result)+len(f)), future_result, color=\"g\", label=\"future_predict\")\nplt.legend()\nplt.show()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
cbf71ec372d23eb260df811fc6016867023bf700
69,960
ipynb
Jupyter Notebook
References/D.Scraping Basics.ipynb
rprasai/datascience
8f9422a648121e2cf1f0d855defc0f70b2c5b7ec
[ "CC-BY-4.0" ]
3
2019-03-22T04:15:45.000Z
2019-06-29T09:27:06.000Z
References/D.Scraping Basics.ipynb
rprasai/datascience
8f9422a648121e2cf1f0d855defc0f70b2c5b7ec
[ "CC-BY-4.0" ]
null
null
null
References/D.Scraping Basics.ipynb
rprasai/datascience
8f9422a648121e2cf1f0d855defc0f70b2c5b7ec
[ "CC-BY-4.0" ]
2
2019-05-21T04:26:25.000Z
2019-05-21T04:27:15.000Z
66.692088
17,306
0.559648
[ [ [ "# Collecting data from internet\n---", "_____no_output_____" ], [ "Web scraping consists of \n\n- obtaining raw data from source ( requests )\n- filtering out useful data from garbage ( beautifulsoup4 )\n- saving into appropriate format ( csv/json )\n\n`scrapy` is scrapping framework which extends above libraries", "_____no_output_____" ] ], [ [ "# requests is http library for python\n# requests makes http easier than builtin urllib\nimport requests", "_____no_output_____" ], [ "url = 'http://www.mfd.gov.np'", "_____no_output_____" ], [ "req = requests.get(url)", "_____no_output_____" ], [ "req", "_____no_output_____" ], [ "dir(req)", "_____no_output_____" ], [ "req.headers", "_____no_output_____" ], [ "req.status_code", "_____no_output_____" ], [ "req.text", "_____no_output_____" ], [ "from bs4 import BeautifulSoup", "_____no_output_____" ], [ "soup = BeautifulSoup(req.text, \"html.parser\")", "_____no_output_____" ] ], [ [ "*you can use `lxml` instead of `html.parser` which is must faster for large html content*", "_____no_output_____" ] ], [ [ "soup", "_____no_output_____" ], [ "type(soup)", "_____no_output_____" ], [ "dir(soup)", "_____no_output_____" ], [ "tables = soup.find_all('table')", "_____no_output_____" ], [ "tables", "_____no_output_____" ], [ "len(tables)", "_____no_output_____" ], [ "tables[0]", "_____no_output_____" ], [ "tables[0].find_all('tr')", "_____no_output_____" ], [ "cities = []\nheaders = []\nfor row in tables[0].find_all('tr'):\n ths = row.find_all('th')\n if ths:\n headers = [th.text.strip() for th in ths]\n else:\n tds = row.find_all('td')\n data = {}\n if tds and len(tds) >= 4:\n data[headers[0]] = tds[0].text.strip()\n data[headers[1]] = tds[1].text.strip()\n data[headers[2]] = tds[2].text.strip()\n data[headers[3]] = tds[3].text.strip()\n cities.append(data)\nprint(cities)", "[{'Minimum Temp.(°C)': '18.1', '24 hrs Rainfall(mm)': '13.0', 'Maximum Temp.(°C)': '19.4', 'Station': 'Dadeldhura'}, {'Minimum Temp.(°C)': '25.2', '24 hrs Rainfall(mm)': '9.6', 'Maximum Temp.(°C)': '28.0', 'Station': 'Dipayal'}, {'Minimum Temp.(°C)': '24.0', '24 hrs Rainfall(mm)': '88.3', 'Maximum Temp.(°C)': '28.7', 'Station': 'Dhangadi'}, {'Minimum Temp.(°C)': '23.4', '24 hrs Rainfall(mm)': '29.3', 'Maximum Temp.(°C)': '27.2', 'Station': 'Birendranagar'}, {'Minimum Temp.(°C)': '25.5', '24 hrs Rainfall(mm)': '21.4', 'Maximum Temp.(°C)': '29.7', 'Station': 'Nepalgunj'}, {'Minimum Temp.(°C)': '16.5', '24 hrs Rainfall(mm)': '6.2', 'Maximum Temp.(°C)': '23.7', 'Station': 'Jumla'}, {'Minimum Temp.(°C)': '24.5', '24 hrs Rainfall(mm)': '5.4', 'Maximum Temp.(°C)': '29.2', 'Station': 'Dang'}, {'Minimum Temp.(°C)': '22.4', '24 hrs Rainfall(mm)': '24.5', 'Maximum Temp.(°C)': '25.7', 'Station': 'Pokhara'}, {'Minimum Temp.(°C)': '27.5', '24 hrs Rainfall(mm)': '73.5', 'Maximum Temp.(°C)': '27.6', 'Station': 'Bhairahawa'}, {'Minimum Temp.(°C)': '27.0', '24 hrs Rainfall(mm)': '29.2', 'Maximum Temp.(°C)': '31.8', 'Station': 'Simara'}, {'Minimum Temp.(°C)': '20.2', '24 hrs Rainfall(mm)': '39.7', 'Maximum Temp.(°C)': '26.5', 'Station': 'Kathmandu'}, {'Minimum Temp.(°C)': '18.0', '24 hrs Rainfall(mm)': '21.4', 'Maximum Temp.(°C)': '24.5', 'Station': 'Okhaldhunga'}, {'Minimum Temp.(°C)': '18.2', '24 hrs Rainfall(mm)': '2.0', 'Maximum Temp.(°C)': '23.5', 'Station': 'Taplejung'}, {'Minimum Temp.(°C)': '20.5', '24 hrs Rainfall(mm)': '47.4', 'Maximum Temp.(°C)': '26.0', 'Station': 'Dhankuta'}, {'Minimum Temp.(°C)': '25.2', '24 hrs Rainfall(mm)': '62.9', 'Maximum Temp.(°C)': '26.3', 'Station': 
'Biratnagar'}, {'Minimum Temp.(°C)': '13.4', '24 hrs Rainfall(mm)': '7.0*', 'Maximum Temp.(°C)': '21.5', 'Station': 'Jomsom'}, {'Minimum Temp.(°C)': '22.9', '24 hrs Rainfall(mm)': '109.6*', 'Maximum Temp.(°C)': '28.5', 'Station': 'Dharan'}, {'Minimum Temp.(°C)': '18.0', '24 hrs Rainfall(mm)': '32.0*', 'Maximum Temp.(°C)': '21.0', 'Station': 'Lumle'}, {'Minimum Temp.(°C)': '27.0', '24 hrs Rainfall(mm)': '62.8*', 'Maximum Temp.(°C)': '29.0', 'Station': 'Jankapur'}, {}]\n" ] ], [ [ "**Alternative Method**", "_____no_output_____" ], [ "*in case of multiple tables within webpage, we can use css selectors*", "_____no_output_____" ] ], [ [ "div = soup.find('div', attrs={'class': 'weather-data-table'})", "_____no_output_____" ], [ "div", "_____no_output_____" ], [ "table = div.find('table')", "_____no_output_____" ], [ "# first_table = tables[0]", "_____no_output_____" ], [ "table.find_all('th', attrs={'class': 'center'})", "_____no_output_____" ], [ "data_set = []\nfor tr in table.find_all('tr'):\n _data = {}\n tds = tr.find_all('td')\n if tds and len(tds) > 3:\n # _data['Station'] = t\n # print(tds)\n _data['Station'] = tds[0].string\n _data['Maximum'] = tds[1].string\n _data['Minimum'] = tds[2].string\n _data['Rainfall'] = tds[3].string\n data_set.append(_data)\nprint(data_set)", "[{'Rainfall': '13.0', 'Maximum': '19.4', 'Station': 'Dadeldhura', 'Minimum': '18.1'}, {'Rainfall': '9.6', 'Maximum': '28.0', 'Station': 'Dipayal', 'Minimum': '25.2'}, {'Rainfall': '88.3', 'Maximum': '28.7', 'Station': 'Dhangadi', 'Minimum': '24.0'}, {'Rainfall': '29.3', 'Maximum': '27.2', 'Station': 'Birendranagar', 'Minimum': '23.4'}, {'Rainfall': '21.4', 'Maximum': '29.7', 'Station': 'Nepalgunj', 'Minimum': '25.5'}, {'Rainfall': '6.2', 'Maximum': '23.7', 'Station': 'Jumla', 'Minimum': '16.5'}, {'Rainfall': '5.4', 'Maximum': '29.2', 'Station': 'Dang', 'Minimum': '24.5'}, {'Rainfall': '24.5', 'Maximum': '25.7', 'Station': 'Pokhara', 'Minimum': '22.4'}, {'Rainfall': '73.5', 'Maximum': '27.6', 'Station': 'Bhairahawa', 'Minimum': '27.5'}, {'Rainfall': '29.2', 'Maximum': '31.8', 'Station': 'Simara', 'Minimum': '27.0'}, {'Rainfall': '39.7', 'Maximum': '26.5', 'Station': 'Kathmandu', 'Minimum': '20.2'}, {'Rainfall': '21.4', 'Maximum': '24.5', 'Station': 'Okhaldhunga', 'Minimum': '18.0'}, {'Rainfall': '2.0', 'Maximum': '23.5', 'Station': 'Taplejung', 'Minimum': '18.2'}, {'Rainfall': '47.4', 'Maximum': '26.0', 'Station': 'Dhankuta', 'Minimum': '20.5'}, {'Rainfall': '62.9', 'Maximum': '26.3', 'Station': 'Biratnagar', 'Minimum': '25.2'}, {'Rainfall': '7.0*', 'Maximum': '21.5', 'Station': 'Jomsom', 'Minimum': '13.4'}, {'Rainfall': '109.6*', 'Maximum': '28.5', 'Station': 'Dharan', 'Minimum': '22.9'}, {'Rainfall': '32.0*', 'Maximum': '21.0', 'Station': 'Lumle', 'Minimum': '18.0'}, {'Rainfall': '62.8*', 'Maximum': '29.0', 'Station': 'Jankapur', 'Minimum': '27.0'}]\n" ], [ "data_set[0].keys()", "_____no_output_____" ] ], [ [ "*writing to csv file*", "_____no_output_____" ] ], [ [ "import csv", "_____no_output_____" ], [ "with open('dataset.csv', 'w') as csvfile:\n csvdoc = csv.DictWriter(csvfile, \n fieldnames=data_set[0].keys())\n csvdoc.writeheader()\n csvdoc.writerows(data_set)", "_____no_output_____" ], [ "data_set[0]", "_____no_output_____" ], [ "data_set[0].keys()", "_____no_output_____" ] ], [ [ "*json output*", "_____no_output_____" ] ], [ [ "import json", "_____no_output_____" ], [ "json.dump(data_set, open('dataset.json', 'w'))", "_____no_output_____" ], [ "json.dumps(data_set)", "_____no_output_____" ] ], [ [ "**Practice 
** *Obtain some data from any of the available websites*", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ] ]
cbf72a299ee4d04f719b341d44ea41077b277d9f
12,174
ipynb
Jupyter Notebook
Code-for-teaching.ipynb
NaserNikandish/test
fa88bc61e5fb350b1fe1762fb135d0f94eabddd8
[ "MIT" ]
null
null
null
Code-for-teaching.ipynb
NaserNikandish/test
fa88bc61e5fb350b1fe1762fb135d0f94eabddd8
[ "MIT" ]
null
null
null
Code-for-teaching.ipynb
NaserNikandish/test
fa88bc61e5fb350b1fe1762fb135d0f94eabddd8
[ "MIT" ]
null
null
null
19.957377
249
0.39724
[ [ [ "## numpy codes", "_____no_output_____" ] ], [ [ "import numpy as np\na = np.array([[60,70,80],[30,40,50]]) \nb = np.array([[95,85,75],[35,45,55]]) \nc = np.array([[10,20,25],[99,98,97]])\nnp.hstack((a,b)) ", "_____no_output_____" ], [ "%load_ext lab_black\nimport numpy as np\n\na = np.array([[60, 70, 80], [30, 40, 50]])\nb = np.array([[95, 85, 75], [35, 45, 55]])\nnp.hstack((a, b, a))", "_____no_output_____" ], [ "import pandas as pd\ntest=pd.Series(63.4, [4,11.2,5,20])\ntest\n", "_____no_output_____" ], [ "test[11.2]\n", "_____no_output_____" ], [ "test=pd.Series((30,40,50))\ntest[1]", "_____no_output_____" ], [ "test = pd.Series(12, ['a'])\ntest", "_____no_output_____" ], [ "import pandas as pd\ntest=pd.Series(63.4, [4,11.2,5,3])\ntest, test[4]", "_____no_output_____" ], [ "test", "_____no_output_____" ], [ "import pandas as pd\ntest = pd.Series([12, 15, 20], ['a', 'b', 'c'])\ntest, test[0]", "_____no_output_____" ], [ "test = pd.Series([62, 25, 10, 81, 75, 80, 93, 100, 34])\ntest.describe()", "_____no_output_____" ], [ "!jupyter kernelspec list", "Available kernels:\n python3 C:\\Users\\nnikand1\\Anaconda3\\share\\jupyter\\kernels\\python3\n" ], [ "grade = float(input(\"Hi\"))\nif grade >= 90:\n print(\"A\")\nelif grade >= 80:\n print(\"B\")\nelif grade >= 70:\n print(\"C\")\nelif grade >= 60:\n print(\"D\")\nelse:\n print(\"F\")", "Hi 85\n" ], [ "# %load_ext lab_black\nkeep_going = \"yes\"\nwhile keep_going == \"yes\":\n a = float(input(\"enter a valid number\"))\n b = float(input(\"enter another valid number\"))\n sum = a + b\n print(\"sum of\", a, \"and\", b, \"is\", sum)\n keep_going = input(\"Do you want to calculate sum of another two numbers?\").lower()", "enter a valid number 2\nenter another valid number 3\n" ], [ "i = 2\nwhile i:\n print(i)\n ", "_____no_output_____" ], [ "print(10, 20, 30, ',', sep = ' , ')\n", "10 , 20 , 30 , ,\n" ], [ "for item in \"Python 3.8\":\n print(item)", "P\ny\nt\nh\no\nn\n \n3\n.\n8\n" ], [ "for item in \"Python 3.8\":\n print(item, end = \" \")", "P y t h o n 3 . 8 " ], [ "import time\n\ndef count_items(items):\n print('Counting ', end='', flush=True)\n num = 0\n for item in items:\n num += 1\n time.sleep(1)\n print('.', end='', flush=True)\n\n print(f'\\nThere were {num} items')\n \nitems = [2,3, 4,5]\ncount_items(items)", "Counting ....\nThere were 4 items\n" ], [ "i=10\ni++", "_____no_output_____" ], [ "for i in range(10):\n for j in range(15):\n print('*', end = ' ')\n print('')", "* * * * * * * * * * * * * * * \n* * * * * * * * * * * * * * * \n* * * * * * * * * * * * * * * \n* * * * * * * * * * * * * * * \n* * * * * * * * * * * * * * * \n* * * * * * * * * * * * * * * \n* * * * * * * * * * * * * * * \n* * * * * * * * * * * * * * * \n* * * * * * * * * * * * * * * \n* * * * * * * * * * * * * * * \n" ], [ "import pandas as pd\ntest = pd.Series([62, 25, 10, 81, 75, 80, 93, 100, 34])", "_____no_output_____" ], [ "test", "_____no_output_____" ], [ "pd.set_option('precision', 2)", "_____no_output_____" ], [ "test.describe()", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbf748da69d7df9f26920595f4b87012ec2c54ed
66,380
ipynb
Jupyter Notebook
4_Death_row/Exploratory_analysis.ipynb
amycurneen/metis
f73ec8d0812e3159e74a1f831f45a6fcd4d6e15c
[ "Apache-2.0" ]
null
null
null
4_Death_row/Exploratory_analysis.ipynb
amycurneen/metis
f73ec8d0812e3159e74a1f831f45a6fcd4d6e15c
[ "Apache-2.0" ]
null
null
null
4_Death_row/Exploratory_analysis.ipynb
amycurneen/metis
f73ec8d0812e3159e74a1f831f45a6fcd4d6e15c
[ "Apache-2.0" ]
null
null
null
75.862857
20,040
0.792648
[ [ [ "# Exploring Texas Execution Data", "_____no_output_____" ], [ "# Setup", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt \nimport matplotlib\nimport seaborn as sns\nimport pandas as pd", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\nfrom sklearn.linear_model import LinearRegression", "_____no_output_____" ], [ "from sklearn.preprocessing import PolynomialFeatures\nfrom sklearn.pipeline import make_pipeline", "_____no_output_____" ], [ "# Python 2 & 3 Compatibility\nfrom __future__ import print_function, division\n\n# Necessary imports\nimport pandas as pd\nimport numpy as np\nimport statsmodels.api as sm\nimport statsmodels.formula.api as smf\nimport patsy\n\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.linear_model import RidgeCV\n%matplotlib inline", "/anaconda3/lib/python3.6/site-packages/statsmodels/compat/pandas.py:56: FutureWarning: The pandas.core.datetools module is deprecated and will be removed in a future version. Please use the pandas.tseries module instead.\n from pandas.core import datetools\n" ], [ "import pyLDAvis", "_____no_output_____" ] ], [ [ "# Import", "_____no_output_____" ] ], [ [ "df = pd.read_csv(\"data/Death_Row_Data.csv\", encoding = \"latin1\")", "_____no_output_____" ], [ "print(len(df))\ndf.sample(5)", "549\n" ] ], [ [ "# Look at year vs count", "_____no_output_____" ] ], [ [ "years = []\nfor i in range(len(df.Date)):\n a = df.Date[i][-4:]\n years.append(a)", "_____no_output_____" ], [ "df[\"year\"] = years", "_____no_output_____" ], [ "df_count = df.groupby(['year']).count()", "_____no_output_____" ], [ "df_counts.head()", "_____no_output_____" ], [ "df_count.reset_index(inplace=True)", "_____no_output_____" ], [ "df_counts = df_count[['year','Last Statement']]", "_____no_output_____" ], [ "df_counts.rename(columns={'Last Statement': 'count'}, inplace=True)", "/anaconda3/lib/python3.6/site-packages/pandas/core/frame.py:3027: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n return super(DataFrame, self).rename(**kwargs)\n" ], [ "df_counts.to_csv('count_by_year.csv')", "_____no_output_____" ], [ "df_counts[['year']] = df_counts[['year']].astype(int)", "/anaconda3/lib/python3.6/site-packages/pandas/core/frame.py:2540: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n self[k1] = value[k2]\n" ], [ "lr1 = LinearRegression()\n\nX = df_counts[['year']]\ny = df_counts['count']\n\nlr1.fit(X,y)\n\nlr1.score(X,y)", "_____no_output_____" ], [ "degree = 3\n\nest = make_pipeline(PolynomialFeatures(degree), LinearRegression())\n\nest.fit(X, y)", "_____no_output_____" ], [ "fig,ax = plt.subplots(1,1);\nax.scatter(X, y,label='ground truth')\nax.plot(X, est.predict(X), color='red',label='degree=%d' % degree)\nax.set_ylabel('y')\nax.set_xlabel('x')\nax.legend(loc='upper right',frameon=True)", "_____no_output_____" ] ], [ [ "# Sentiment of last words", "_____no_output_____" ] ], [ [ "df.head()\ndf = df.rename(columns={\"Last Statement\": \"last_statement\"})", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "from textblob import TextBlob", 
"_____no_output_____" ], [ "polarity = []\nsubjectivity = []\n\nfor i in range(len(df.last_statement)):\n state = str(df.last_statement[i])\n a = list(TextBlob(state).sentiment)\n polarity.append(a[0])\n subjectivity.append(a[1])", "_____no_output_____" ], [ "len(subjectivity)", "_____no_output_____" ], [ "df['polarity'] = polarity\ndf['subjectivity'] = subjectivity", "_____no_output_____" ], [ "fig,ax = plt.subplots(1,1);\nax.scatter(subjectivity,polarity)\n\nax.set_xlim(-1,1)\nax.set_ylim(-1,1)", "_____no_output_____" ] ], [ [ "# Race", "_____no_output_____" ] ], [ [ "for i in range(0,len(df)):\n race = df[\"Race\"].iloc[i]\n if race == \"White \":\n df[\"Race\"].iloc[i] = \"White\"\n elif race == \"Hispanic \":\n df[\"Race\"].iloc[i] = \"Hispanic\"\n elif race == \"Histpanic\":\n df[\"Race\"].iloc[i] = \"Hispanic\"\n else:\n pass\n \nraces = df[\"Race\"].unique()\nprint (races)", "['Hispanic' 'Black' 'White' 'Other']\n" ], [ "sns.factorplot('Race',data=df,kind='count')\nplt.title(\"Number of offenders by race\")\nplt.show()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
cbf74dcfc90ba61ba3c6acca57c921fe094a8220
39,348
ipynb
Jupyter Notebook
GA.ipynb
shaimaa-elbaklish/funcMinimization
4df1fc12019d50e46b1ceee437d28082ed1148f8
[ "MIT" ]
null
null
null
GA.ipynb
shaimaa-elbaklish/funcMinimization
4df1fc12019d50e46b1ceee437d28082ed1148f8
[ "MIT" ]
null
null
null
GA.ipynb
shaimaa-elbaklish/funcMinimization
4df1fc12019d50e46b1ceee437d28082ed1148f8
[ "MIT" ]
null
null
null
28.575163
247
0.375241
[ [ [ "\"\"\"\nAuthor: Shaimaa K. El-Baklish\nThis file is under MIT License.\nLink: https://github.com/shaimaa-elbaklish/funcMinimization/blob/main/LICENSE.md\n\"\"\"", "_____no_output_____" ], [ "import numpy as np\nimport plotly.graph_objects as go", "_____no_output_____" ] ], [ [ "## Benchmark Multimodal Functions Available\n| Function | Dimension | Bounds | Optimal Function Value |\n| -------- | --------- | ------ | ---------------------- |\n| $$ f_{1} = 4x_1^2 - 2.1x_1^4 + \\frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4 x_2^4 $$ | 2 | [-5, 5] | -1.0316 |\n| $$ f_{2} = (x_2 - \\frac{5.1}{4\\pi^2}x_1^2 + \\frac{5}{\\pi}x_1 -6)^2 +10(1 - \\frac{1}{8\\pi})\\cos{x_1} + 10 $$ | 2 | [-5, 5] | 0.398 |\n| $$ f_{3} = -\\sum_{i=1}^{4} c_i exp(-\\sum_{j=1}^{3} a_{ij}(x_j - p_{ij})^2) $$ | 3 | [1, 3] | -3.86 |\n| $$ f_{4} = -\\sum_{i=1}^{4} c_i exp(-\\sum_{j=1}^{6} a_{ij}(x_j - p_{ij})^2) $$ | 6 | [0, 1] | -3.32 |\n| $$ f_{5} = -\\sum_{i=1}^{7} [(X - a_i)(X - a_i)^T + c_i]^{-1} $$ | 4 | [0, 10] | -10.4028 |", "_____no_output_____" ] ], [ [ "class Function:\n def __init__(self, x = None, n = 2, lb = np.array([-5, -5]), ub = np.array([5, 5])):\n self.n_x = n\n if x is not None:\n assert(x.shape[0] == self.n_x)\n self.fvalue = self.getFValue(x)\n self.x = x\n self.fvalue = None\n assert(lb.shape[0] == self.n_x)\n self.lb = lb\n assert(ub.shape[0] == self.n_x)\n self.ub = ub\n self.benchmark_selected = None\n \n def setBenchmarkFunction(self, f_name = \"f1\"):\n benchmarks = {\n \"f1\": [2, np.array([-5, -5]), np.array([5, 5])],\n \"f2\": [2, np.array([-5, -5]), np.array([5, 5])],\n \"f3\": [3, 1*np.ones(shape=(3,)), 3*np.ones(shape=(3,))],\n \"f4\": [6, np.zeros(shape=(6,)), 1*np.ones(shape=(6,))],\n \"f5\": [4, np.zeros(shape=(4,)), 10*np.ones(shape=(4,))]\n }\n self.benchmark_selected = f_name\n [self.n_x, self.lb, self.ub] = benchmarks.get(f_name, benchmarks.get(\"f1\"))\n \n def isFeasible(self, x):\n return np.all(x >= self.lb) and np.all(x <= self.ub)\n \n def getFValue(self, x):\n if self.benchmark_selected is None:\n func_value = 4*x[0]**2 - 2.1*x[0]**4 + (x[0]**6)/3 + x[0]*x[1] - 4*x[1]**2 + 4*x[1]**4\n return func_value\n benchmarks_coeffs = {\n \"f3\": {\"a\": np.array([[3, 10, 30], [0.1, 10, 35], [3, 10, 30], [0.1, 10, 35]]),\n \"c\": np.array([1, 1.2, 3, 3.2]),\n \"p\": np.array([[0.3689, 0.117, 0.2673], [0.4699, 0.4387, 0.747], [0.1091, 0.8732, 0.5547], [0.03815, 0.5743, 0.8828]])},\n \"f4\": {\"a\": np.array([[10, 3, 17, 3.5, 1.7, 8], [0.05, 10, 17, 0.1, 8, 14], [3, 3.5, 1.7, 10, 17, 8], [17, 8, 0.05, 10, 0.1, 14]]),\n \"c\": np.array([1, 1.2, 3, 3.2]),\n \"p\": np.array([[0.1312, 0.1696, 0.5569, 0.0124, 0.8283, 0.5886], [0.2329, 0.4135, 0.8307, 0.3736, 0.1004, 0.9991], [0.2348, 0.1415, 0.3522, 0.2883, 0.3047, 0.6650], [0.4047, 0.8828, 0.8732, 0.5743, 0.1091, 0.0381]])},\n \"f5\": {\"a\": np.array([[4, 4, 4, 4], [1, 1, 1, 1], [8, 8, 8, 8], [6, 6, 6, 6], [3, 7, 3, 7], [2, 9, 2, 9], [5, 5, 3, 3], [8, 1, 8, 1], [6, 2, 6, 2], [7, 3.6, 7, 3.6]]),\n \"c\": np.array([0.1, 0.2, 0.2, 0.4, 0.4, 0.6, 0.3, 0.7, 0.5, 0.5])}\n }\n benchmarks = {\n \"f1\": lambda z: 4*z[0]**2 - 2.1*z[0]**4 + (z[0]**6)/3 + z[0]*z[1] - 4*z[1]**2 + 4*z[1]**4,\n \"f2\": lambda z: (z[1] - (5.1/(4*np.pi**2))*z[0]**2 + (5/np.pi)*z[0] -6)**2 + 10*(1 - (1/(8*np.pi)))*np.cos(z[0]) + 10,\n \"f3\": lambda z: -np.sum(benchmarks_coeffs[\"f3\"][\"c\"] * np.exp(-np.sum(list(map(lambda ai, pi: ai*(z - pi)**2, benchmarks_coeffs[\"f3\"][\"a\"], benchmarks_coeffs[\"f3\"][\"p\"])), axis=1))),\n \"f4\": lambda z: 
-np.sum(benchmarks_coeffs[\"f4\"][\"c\"] * np.exp(-np.sum(list(map(lambda ai, pi: ai*(z - pi)**2, benchmarks_coeffs[\"f4\"][\"a\"], benchmarks_coeffs[\"f4\"][\"p\"])), axis=1))),\n \"f5\": lambda z: -np.sum(list(map(lambda ai, ci: 1/((z - ai) @ (z - ai).T + ci), benchmarks_coeffs[\"f5\"][\"a\"], benchmarks_coeffs[\"f5\"][\"c\"])))\n }\n func_value = benchmarks.get(self.benchmark_selected)(x)\n return func_value\n \n def initRandomSoln(self):\n self.x = np.random.rand(self.n_x) * (self.ub - self.lb) + self.lb\n assert(self.isFeasible(self.x))\n self.fvalue = self.getFValue(self.x)\n \n def getNeighbourSoln(self):\n r = np.random.rand(self.n_x)\n x_new = self.x + r * (self.ub - self.x) + (1 - r) * (self.lb - self.x)\n assert(self.isFeasible(x_new))\n return x_new", "_____no_output_____" ], [ "class GeneticAlgorithm:\n def __init__(self, problem, n_pop = 50, max_iter = 100, p_elite = 0.1, p_crossover = 0.8, p_mutation = 0.1, \n parents_selection = \"Random\", tournament_size = 5, mutation_selection = \"Worst\", survivors_selection = \"Fitness\"):\n self.problem = problem\n self.n_pop = n_pop\n self.max_iter = max_iter\n self.p_elite = p_elite\n self.p_crossover = p_crossover\n self.p_mutation = p_mutation\n self.parents_selection = parents_selection\n self.tournament_size = tournament_size if tournament_size < n_pop else n_pop\n self.mutation_selection = mutation_selection\n self.survivors_selection = survivors_selection\n self.gen_sols = None\n self.gen_fvalues = None\n self.gen_ages = None\n self.best_sols = None\n self.best_fvalues = None\n \n def initRandomPopulation(self):\n self.gen_sols = []\n self.gen_fvalues = []\n self.gen_ages = []\n self.best_sols = []\n self.best_fvalues = []\n for _ in range(self.n_pop):\n self.problem.initRandomSoln()\n new_sol = self.problem.x\n new_fvalue = self.problem.fvalue\n self.gen_sols.append(new_sol)\n self.gen_fvalues.append(new_fvalue)\n self.gen_ages.append(0)\n if len(self.best_sols) == 0:\n self.best_sols.append(new_sol)\n self.best_fvalues.append(new_fvalue)\n elif (new_fvalue < self.best_fvalues[0]):\n self.best_sols[0], self.best_fvalues[0] = new_sol, new_fvalue\n \n def selectParents(self, numParents, criteria):\n gen_probs = 1 / (1 + np.square(self.gen_fvalues))\n gen_probs = gen_probs / sum(gen_probs)\n lambda_rank = 1.5 # (between 1 and 2) offspring created by best individual\n gen_ranks = list(map(lambda i: np.argwhere(np.argsort(self.gen_fvalues) == i)[0,0], np.arange(self.n_pop)))\n gen_ranks = ((2-lambda_rank) + np.divide(gen_ranks, self.n_pop-1)*(2*lambda_rank-2)) / self.n_pop\n selection_criteria = {\n \"Random\": lambda n: np.random.choice(self.n_pop, size=(n,), replace=False),\n \"RouletteWheel\": lambda n: np.random.choice(self.n_pop, size=(n,), replace=True, p=gen_probs),\n \"SUS\": lambda n: np.random.choice(self.n_pop, size=(n,), replace=False, p=gen_probs),\n \"Rank\": lambda n: np.random.choice(self.n_pop, size=(n,), replace=False, p=gen_ranks),\n \"Tournament\": lambda n: np.array([np.amin(list(map(lambda i: [self.gen_fvalues[i], i], \n np.random.choice(self.n_pop, size=(self.tournament_size,), replace=False))), \n axis=0)[1] for _ in range(n)], dtype=int),\n \"Worst\": lambda n: np.argsort(self.gen_fvalues)[self.n_pop-n:]\n }\n parents_idx = selection_criteria.get(criteria, selection_criteria[\"Random\"])(numParents)\n return parents_idx\n\n def crossover(self, p1_idx, p2_idx):\n # Whole Arithmetic Combination\n alpha = np.random.rand() * (0.9 - 0.7) + 0.7\n child1 = alpha * self.gen_sols[p1_idx] + (1 - alpha) * 
self.gen_sols[p2_idx]\n child2 = (1 - alpha) * self.gen_sols[p1_idx] + alpha * self.gen_sols[p2_idx]\n return child1, child2\n\n def mutation(self, p_idx):\n # Random noise\n r = np.random.rand(self.problem.n_x)\n child = self.gen_sols[p_idx] + r * (self.problem.ub - self.gen_sols[p_idx]) + (1 - r) * (self.problem.lb - self.gen_sols[p_idx])\n return child\n \n def selectSurvivors(self, numSurvivors, criteria):\n selection_criteria = {\n \"Age\": lambda n: np.argsort(self.gen_ages)[:n],\n \"Fitness\": lambda n: np.argsort(self.gen_fvalues)[:n]\n }\n survivors_idx = selection_criteria.get(criteria, selection_criteria[\"Fitness\"])(numSurvivors)\n return survivors_idx\n\n def perform_algorithm(self):\n self.initRandomPopulation()\n print(\"Best Initial Solution \", self.best_fvalues[0])\n n_crossovers = int(np.ceil(self.p_crossover * self.n_pop / 2))\n n_mutations = int(self.p_mutation * self.n_pop)\n n_elite = int(self.p_elite * self.n_pop)\n n_survivors = self.n_pop - int(self.p_crossover*self.n_pop) - n_mutations - n_elite\n for _ in range(self.max_iter):\n # Crossover and Parents Selection\n parents_idx = self.selectParents(numParents=n_crossovers*2, criteria=self.parents_selection)\n new_gen_sols = []\n new_gen_fvalues = []\n new_gen_ages = []\n for i in range(0, n_crossovers*2, 2):\n [ch1, ch2] = self.crossover(parents_idx[i], parents_idx[i+1])\n new_gen_sols.append(ch1)\n new_gen_fvalues.append(self.problem.getFValue(ch1))\n new_gen_ages.append(0)\n if len(new_gen_sols) == int(self.p_crossover * self.n_pop):\n break\n new_gen_sols.append(ch2)\n new_gen_fvalues.append(self.problem.getFValue(ch2))\n new_gen_ages.append(0)\n # Mutation and Parents Selection\n parents_idx = self.selectParents(numParents=n_mutations, criteria=self.mutation_selection)\n for i in range(n_mutations):\n ch = self.mutation(parents_idx[i])\n new_gen_sols.append(ch)\n new_gen_fvalues.append(self.problem.getFValue(ch))\n new_gen_ages.append(0)\n # Elite Members\n elite_idx = self.selectSurvivors(numSurvivors=n_elite, criteria=\"Fitness\")\n for i in range(n_elite):\n new_gen_sols.append(self.gen_sols[elite_idx[i]])\n new_gen_fvalues.append(self.gen_fvalues[elite_idx[i]])\n new_gen_ages.append(self.gen_ages[elite_idx[i]]+1)\n # Survivors (if any)\n survivors_idx = self.selectSurvivors(numSurvivors=n_survivors, criteria=self.survivors_selection)\n for i in range(n_survivors):\n new_gen_sols.append(self.gen_sols[survivors_idx[i]])\n new_gen_fvalues.append(self.gen_fvalues[survivors_idx[i]])\n new_gen_ages.append(self.gen_ages[survivors_idx[i]]+1)\n assert(len(new_gen_sols) == self.n_pop)\n assert(len(new_gen_fvalues) == self.n_pop)\n assert(len(new_gen_ages) == self.n_pop)\n # New generation becomes current one\n self.gen_sols = new_gen_sols\n self.gen_fvalues = new_gen_fvalues\n self.gen_ages = new_gen_ages\n # update best solution reached so far\n best_idx = np.argmin(self.gen_fvalues)\n if self.gen_fvalues[best_idx] < self.best_fvalues[-1]:\n self.best_sols.append(self.gen_sols[best_idx])\n self.best_fvalues.append(self.gen_fvalues[best_idx])\n else:\n self.best_sols.append(self.best_sols[-1])\n self.best_fvalues.append(self.best_fvalues[-1])\n \n def visualize(self):\n # convergence plot\n fig1 = go.Figure(data=go.Scatter(x=np.arange(0, self.max_iter), y=self.best_fvalues, mode=\"lines\"))\n fig1.update_layout(\n title=\"Convergence Plot\",\n xaxis_title=\"Iteration Number\",\n yaxis_title=\"Fitness Value of Best So Far\"\n )\n fig1.show()\n pass\n", "_____no_output_____" ], [ "problem = 
Function()\nproblem.setBenchmarkFunction(f_name=\"f2\")\nGA = GeneticAlgorithm(problem, n_pop = 50, max_iter=100, p_elite=0.1, p_crossover=0.7, p_mutation=0.1, \n parents_selection=\"SUS\", tournament_size = 20, mutation_selection = \"Worst\", survivors_selection = \"Age\")\nGA.perform_algorithm()\nprint(GA.best_sols[-1])\nprint(GA.best_fvalues[-1])\nGA.visualize()", "Best Initial Solution 2.149705174772726\n[3.14249292 2.27470172]\n0.39789141182892784\n" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
cbf75738d7a54c90d51c1f544db59316d794f7f6
2,432
ipynb
Jupyter Notebook
soda/SODA_CLUSTER_SETUP.ipynb
anilkulkarni87/databricks_notebooks
5d5d1cffccb23b8e72cec0cf19e2695a846fab82
[ "Apache-2.0" ]
1
2021-12-30T11:03:24.000Z
2021-12-30T11:03:24.000Z
soda/SODA_CLUSTER_SETUP.ipynb
anilkulkarni87/databricks_notebooks
5d5d1cffccb23b8e72cec0cf19e2695a846fab82
[ "Apache-2.0" ]
null
null
null
soda/SODA_CLUSTER_SETUP.ipynb
anilkulkarni87/databricks_notebooks
5d5d1cffccb23b8e72cec0cf19e2695a846fab82
[ "Apache-2.0" ]
null
null
null
1,216
2,431
0.707237
[ [ [ "%sh\npip list | egrep 'thrift-sasl|sasl'\npip install --upgrade thrift\ndpkg -l | egrep 'thrift_sasl|libsasl2-dev|gcc|python-dev'\nsudo apt-get -y install unixodbc-dev libsasl2-dev gcc python-dev", "_____no_output_____" ], [ "#Installing required libraries. Idea is to explore both Great expectations and Soda. Influxdb is to publish metrics as time series data.\n%pip install soda-spark\n# To explore Great expectations and compare it with Soda\n#%pip install great-expectations\n# To explore publishing metrics to Influxdb directly from Databricks\n%pip install influxdb-client", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code" ] ]
cbf75d7d9449dc89705f18b0d3b40169f81efdfa
10,336
ipynb
Jupyter Notebook
NYT/GPT/vectorize-gpt2-cutoff.ipynb
kristjanr/ut-mit-news-classify
d85e32256f36d4a22d727e678adfaa7e0a4a3108
[ "MIT" ]
null
null
null
NYT/GPT/vectorize-gpt2-cutoff.ipynb
kristjanr/ut-mit-news-classify
d85e32256f36d4a22d727e678adfaa7e0a4a3108
[ "MIT" ]
null
null
null
NYT/GPT/vectorize-gpt2-cutoff.ipynb
kristjanr/ut-mit-news-classify
d85e32256f36d4a22d727e678adfaa7e0a4a3108
[ "MIT" ]
1
2021-06-21T09:18:59.000Z
2021-06-21T09:18:59.000Z
31.705521
409
0.563951
[ [ [ "import os\nimport sys\nmodule_path = \"/gpfs/space/home/roosild/ut-mit-news-classify/NYT/\"\nif module_path not in sys.path:\n sys.path.append(module_path)\nfrom torch.utils.data import DataLoader\nimport torch\nimport os\nfrom tqdm.auto import tqdm\nfrom utils import print_f\n\n%load_ext autoreload\n%autoreload 2\n\nprint_f('All imports seem good!')\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\nprint_f('Using device:', device)", "All imports seem good!\nUsing device: cuda\n" ], [ "from transformers import GPT2Model\nfrom utils import GPTVectorizedDataset\n\nMODEL = 'gpt2'\nbatch_size = 8\nchunk_size = 200_000\n\n\n\nif cutoff_end_chars:\n ending = 'min500_cutoff_replace'\nelse:\n ending = 'min500_complete'\n\nos.makedirs('tokenized', exist_ok=True)\n\ntokenized_train_path = f'/gpfs/space/projects/stud_nlp_share/cutoff/GPT/tokenized/train_{size_str}_{ending}_chunk{NR}of{TOTAL_NR_OF_CHUNKS}.pt'\ntokenized_test_path = f'/gpfs/space/projects/stud_nlp_share/cutoff/GPT/tokenized/test_{size_str}_{ending}_chunk{NR}of{TOTAL_NR_OF_CHUNKS}.pt'\n\nos.makedirs('vectorized', exist_ok=True)\n\nvectorized_train_path = f'/gpfs/space/projects/stud_nlp_share/cutoff/GPT/vectorized/train_150k_min500_complete.pt'\nvectorized_test_path = f'/gpfs/space/projects/stud_nlp_share/cutoff/GPT/vectorized/test_150k_min500_complete.pt'\n\nprint_f('Loading NYT dataset...')\n\ntrain_dataset = torch.load(tokenized_train_path)\ntest_dataset = torch.load(tokenized_test_path)", "Loading NYT dataset...\n" ], [ "# start actual vectorization with GPT2\nruns = [(train_dataset, vectorized_train_path), (test_dataset, vectorized_test_path)]\n\nprint_f('Loading model...')\nmodel = GPT2Model.from_pretrained(MODEL)\n\n# resize model embedding to match new tokenizer\nmodel.resize_token_embeddings(len(test_dataset.tokenizer))\n\n# fix model padding token id\nmodel.config.pad_token_id = model.config.eos_token_id\n\n# Load model to defined device.\nmodel.to(device)\n\nfor dataset, output_path in runs:\n\n total_chunks = len(dataset) // chunk_size + 1\n print_f('total chunks', total_chunks)\n\n # skip already embedded articles\n skip_n_articles = 0\n chunk_paths = sorted([chunk_path for chunk_path in os.listdir('.') if f'{output_path}_chunk' in chunk_path])\n\n print_f('chunks', chunk_paths)\n\n if len(chunk_paths) > 0:\n for i, chunk_path in enumerate(chunk_paths):\n chunk = torch.load(chunk_path)\n\n skip_n_articles += len(chunk)\n print_f(f'Chunk at \"{chunk_path}\" has {len(chunk)} articles.')\n\n del chunk\n gc.collect()\n\n print_f('skip:', skip_n_articles)\n\n if skip_n_articles >= len(dataset):\n print_f('Looks like the dataset if fully embedded already. 
Skipping this dataset...')\n continue\n\n print_f('dataset original', len(dataset))\n\n dataset.input_ids = dataset.input_ids[skip_n_articles:]\n dataset.attention_mask = dataset.attention_mask[skip_n_articles:]\n dataset.labels = dataset.labels[skip_n_articles:]\n\n print_f('dataset after skipping', len(dataset))\n\n iterator = DataLoader(dataset, batch_size=batch_size)\n print_f('Vectorizing dataset for ', output_path)\n\n X_train = []\n y_train = []\n chunk_id = len(chunk_paths) + 1\n\n print_f('Starting at chunk id', chunk_id)\n\n for i, batch in enumerate(tqdm(iterator)):\n inputs, attention_mask, labels = batch\n\n real_batch_size = inputs.shape[0]\n\n inputs = inputs.to(device)\n attention_mask = attention_mask.to(device)\n labels = torch.tensor(labels).to(device)\n\n with torch.no_grad():\n output = model(input_ids=inputs, attention_mask=attention_mask)\n\n output = output[0]\n\n # indices of last non-padded elements in each sequence\n # adopted from https://github.com/huggingface/transformers/blob/master/src/transformers/models/gpt2/modeling_gpt2.py#L1290-L1302\n last_non_padded_ids = torch.ne(inputs, test_dataset.tokenizer.pad_token_id).sum(-1) - 1\n\n embeddings = output[range(real_batch_size), last_non_padded_ids, :]\n\n X_train += embeddings.detach().cpu()\n y_train += labels.detach().cpu()\n\n if len(X_train) >= chunk_size:\n # save each chunk to its own file so the resume logic above can find it\n chunk_path = f'{output_path}_chunk{chunk_id}'\n print_f('Saving chunk:', chunk_path)\n saved_dataset = GPTVectorizedDataset(torch.stack(X_train), torch.stack(y_train))\n torch.save(saved_dataset, chunk_path, pickle_protocol=4)\n X_train = []\n y_train = []\n chunk_id += 1\n\n # take care of what's left after loop\n if len(X_train) > 0:\n chunk_path = f'{output_path}_chunk{chunk_id}'\n print_f('Saving chunk:', chunk_path)\n saved_dataset = GPTVectorizedDataset(torch.stack(X_train), torch.stack(y_train))\n torch.save(saved_dataset, chunk_path, pickle_protocol=4)\n\nprint_f('All done!')", "Loading model...\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
cbf76e6650caa8a6e668854864efab1e46a47c01
32,548
ipynb
Jupyter Notebook
Face+Recognition+for+the+Happy+House+-+v3.ipynb
roverman/nn_specialization_homework
43f397721b2ca457ee1e2493e5a5fb80f2bd0553
[ "MIT" ]
null
null
null
Face+Recognition+for+the+Happy+House+-+v3.ipynb
roverman/nn_specialization_homework
43f397721b2ca457ee1e2493e5a5fb80f2bd0553
[ "MIT" ]
null
null
null
Face+Recognition+for+the+Happy+House+-+v3.ipynb
roverman/nn_specialization_homework
43f397721b2ca457ee1e2493e5a5fb80f2bd0553
[ "MIT" ]
null
null
null
41.674776
515
0.5989
[ [ [ "# Face Recognition for the Happy House\n\nWelcome to the first assignment of week 4! Here you will build a face recognition system. Many of the ideas presented here are from [FaceNet](https://arxiv.org/pdf/1503.03832.pdf). In lecture, we also talked about [DeepFace](https://research.fb.com/wp-content/uploads/2016/11/deepface-closing-the-gap-to-human-level-performance-in-face-verification.pdf). \n\nFace recognition problems commonly fall into two categories: \n\n- **Face Verification** - \"is this the claimed person?\". For example, at some airports, you can pass through customs by letting a system scan your passport and then verifying that you (the person carrying the passport) are the correct person. A mobile phone that unlocks using your face is also using face verification. This is a 1:1 matching problem. \n- **Face Recognition** - \"who is this person?\". For example, the video lecture showed a face recognition video (https://www.youtube.com/watch?v=wr4rx0Spihs) of Baidu employees entering the office without needing to otherwise identify themselves. This is a 1:K matching problem. \n\nFaceNet learns a neural network that encodes a face image into a vector of 128 numbers. By comparing two such vectors, you can then determine if two pictures are of the same person.\n \n**In this assignment, you will:**\n- Implement the triplet loss function\n- Use a pretrained model to map face images into 128-dimensional encodings\n- Use these encodings to perform face verification and face recognition\n\nIn this exercise, we will be using a pre-trained model which represents ConvNet activations using a \"channels first\" convention, as opposed to the \"channels last\" convention used in lecture and previous programming assignments. In other words, a batch of images will be of shape $(m, n_C, n_H, n_W)$ instead of $(m, n_H, n_W, n_C)$. Both of these conventions have a reasonable amount of traction among open-source implementations; there isn't a uniform standard yet within the deep learning community. \n\nLet's load the required packages. \n", "_____no_output_____" ] ], [ [ "from keras.models import Sequential\nfrom keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate\nfrom keras.models import Model\nfrom keras.layers.normalization import BatchNormalization\nfrom keras.layers.pooling import MaxPooling2D, AveragePooling2D\nfrom keras.layers.merge import Concatenate\nfrom keras.layers.core import Lambda, Flatten, Dense\nfrom keras.initializers import glorot_uniform\nfrom keras.engine.topology import Layer\nfrom keras import backend as K\nK.set_image_data_format('channels_first')\nimport cv2\nimport os\nimport numpy as np\nfrom numpy import genfromtxt\nimport pandas as pd\nimport tensorflow as tf\nfrom fr_utils import *\nfrom inception_blocks_v2 import *\n\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n\nnp.set_printoptions(threshold=np.nan)", "The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n" ] ], [ [ "## 0 - Naive Face Verification\n\nIn Face Verification, you're given two images and you have to tell if they are of the same person. The simplest way to do this is to compare the two images pixel-by-pixel. If the distance between the raw images are less than a chosen threshold, it may be the same person! 
\n\n<img src=\"images/pixel_comparison.png\" style=\"width:380px;height:150px;\">\n<caption><center> <u> <font color='purple'> **Figure 1** </u></center></caption>", "_____no_output_____" ], [ "Of course, this algorithm performs really poorly, since the pixel values change dramatically due to variations in lighting, orientation of the person's face, even minor changes in head position, and so on. \n\nYou'll see that rather than using the raw image, you can learn an encoding $f(img)$ so that element-wise comparisons of this encoding gives more accurate judgements as to whether two pictures are of the same person.", "_____no_output_____" ], [ "## 1 - Encoding face images into a 128-dimensional vector \n\n### 1.1 - Using an ConvNet to compute encodings\n\nThe FaceNet model takes a lot of data and a long time to train. So following common practice in applied deep learning settings, let's just load weights that someone else has already trained. The network architecture follows the Inception model from [Szegedy *et al.*](https://arxiv.org/abs/1409.4842). We have provided an inception network implementation. You can look in the file `inception_blocks.py` to see how it is implemented (do so by going to \"File->Open...\" at the top of the Jupyter notebook). \n", "_____no_output_____" ], [ "The key things you need to know are:\n\n- This network uses 96x96 dimensional RGB images as its input. Specifically, inputs a face image (or batch of $m$ face images) as a tensor of shape $(m, n_C, n_H, n_W) = (m, 3, 96, 96)$ \n- It outputs a matrix of shape $(m, 128)$ that encodes each input face image into a 128-dimensional vector\n\nRun the cell below to create the model for face images.", "_____no_output_____" ] ], [ [ "FRmodel = faceRecoModel(input_shape=(3, 96, 96))", "_____no_output_____" ], [ "print(\"Total Params:\", FRmodel.count_params())", "Total Params: 3743280\n" ] ], [ [ "** Expected Output **\n<table>\n<center>\nTotal Params: 3743280\n</center>\n</table>\n", "_____no_output_____" ], [ "By using a 128-neuron fully connected layer as its last layer, the model ensures that the output is an encoding vector of size 128. You then use the encodings the compare two face images as follows:\n\n<img src=\"images/distance_kiank.png\" style=\"width:680px;height:250px;\">\n<caption><center> <u> <font color='purple'> **Figure 2**: <br> </u> <font color='purple'> By computing a distance between two encodings and thresholding, you can determine if the two pictures represent the same person</center></caption>\n\nSo, an encoding is a good one if: \n- The encodings of two images of the same person are quite similar to each other \n- The encodings of two images of different persons are very different\n\nThe triplet loss function formalizes this, and tries to \"push\" the encodings of two images of the same person (Anchor and Positive) closer together, while \"pulling\" the encodings of two images of different persons (Anchor, Negative) further apart. 
\n\n<img src=\"images/triplet_comparison.png\" style=\"width:280px;height:150px;\">\n<br>\n<caption><center> <u> <font color='purple'> **Figure 3**: <br> </u> <font color='purple'> In the next part, we will call the pictures from left to right: Anchor (A), Positive (P), Negative (N) </center></caption>", "_____no_output_____" ], [ "\n\n### 1.2 - The Triplet Loss\n\nFor an image $x$, we denote its encoding $f(x)$, where $f$ is the function computed by the neural network.\n\n<img src=\"images/f_x.png\" style=\"width:380px;height:150px;\">\n\n<!--\nWe will also add a normalization step at the end of our model so that $\\mid \\mid f(x) \\mid \\mid_2 = 1$ (means the vector of encoding should be of norm 1).\n!-->\n\nTraining will use triplets of images $(A, P, N)$: \n\n- A is an \"Anchor\" image--a picture of a person. \n- P is a \"Positive\" image--a picture of the same person as the Anchor image.\n- N is a \"Negative\" image--a picture of a different person than the Anchor image.\n\nThese triplets are picked from our training dataset. We will write $(A^{(i)}, P^{(i)}, N^{(i)})$ to denote the $i$-th training example. \n\nYou'd like to make sure that an image $A^{(i)}$ of an individual is closer to the Positive $P^{(i)}$ than to the Negative image $N^{(i)}$) by at least a margin $\\alpha$:\n\n$$\\mid \\mid f(A^{(i)}) - f(P^{(i)}) \\mid \\mid_2^2 + \\alpha < \\mid \\mid f(A^{(i)}) - f(N^{(i)}) \\mid \\mid_2^2$$\n\nYou would thus like to minimize the following \"triplet cost\":\n\n$$\\mathcal{J} = \\sum^{m}_{i=1} \\large[ \\small \\underbrace{\\mid \\mid f(A^{(i)}) - f(P^{(i)}) \\mid \\mid_2^2}_\\text{(1)} - \\underbrace{\\mid \\mid f(A^{(i)}) - f(N^{(i)}) \\mid \\mid_2^2}_\\text{(2)} + \\alpha \\large ] \\small_+ \\tag{3}$$\n\nHere, we are using the notation \"$[z]_+$\" to denote $max(z,0)$. \n\nNotes:\n- The term (1) is the squared distance between the anchor \"A\" and the positive \"P\" for a given triplet; you want this to be small. \n- The term (2) is the squared distance between the anchor \"A\" and the negative \"N\" for a given triplet, you want this to be relatively large, so it thus makes sense to have a minus sign preceding it. \n- $\\alpha$ is called the margin. It is a hyperparameter that you should pick manually. We will use $\\alpha = 0.2$. \n\nMost implementations also normalize the encoding vectors to have norm equal one (i.e., $\\mid \\mid f(img)\\mid \\mid_2$=1); you won't have to worry about that here.\n\n**Exercise**: Implement the triplet loss as defined by formula (3). Here are the 4 steps:\n1. Compute the distance between the encodings of \"anchor\" and \"positive\": $\\mid \\mid f(A^{(i)}) - f(P^{(i)}) \\mid \\mid_2^2$\n2. Compute the distance between the encodings of \"anchor\" and \"negative\": $\\mid \\mid f(A^{(i)}) - f(N^{(i)}) \\mid \\mid_2^2$\n3. Compute the formula per training example: $ \\mid \\mid f(A^{(i)}) - f(P^{(i)}) \\mid - \\mid \\mid f(A^{(i)}) - f(N^{(i)}) \\mid \\mid_2^2 + \\alpha$\n3. 
Compute the full formula by taking the max with zero and summing over the training examples:\n$$\\mathcal{J} = \\sum^{m}_{i=1} \\large[ \\small \\mid \\mid f(A^{(i)}) - f(P^{(i)}) \\mid \\mid_2^2 - \\mid \\mid f(A^{(i)}) - f(N^{(i)}) \\mid \\mid_2^2+ \\alpha \\large ] \\small_+ \\tag{3}$$\n\nUseful functions: `tf.reduce_sum()`, `tf.square()`, `tf.subtract()`, `tf.add()`, `tf.maximum()`.\nFor steps 1 and 2, you will need to sum over the entries of $\\mid \\mid f(A^{(i)}) - f(P^{(i)}) \\mid \\mid_2^2$ and $\\mid \\mid f(A^{(i)}) - f(N^{(i)}) \\mid \\mid_2^2$ while for step 4 you will need to sum over the training examples.", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: triplet_loss\n\ndef triplet_loss(y_true, y_pred, alpha = 0.2):\n \"\"\"\n Implementation of the triplet loss as defined by formula (3)\n \n Arguments:\n y_true -- true labels, required when you define a loss in Keras, you don't need it in this function.\n y_pred -- python list containing three objects:\n anchor -- the encodings for the anchor images, of shape (None, 128)\n positive -- the encodings for the positive images, of shape (None, 128)\n negative -- the encodings for the negative images, of shape (None, 128)\n \n Returns:\n loss -- real number, value of the loss\n \"\"\"\n \n anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]\n \n ### START CODE HERE ### (≈ 4 lines)\n # Step 1: Compute the (encoding) distance between the anchor and the positive, you will need to sum over axis=-1\n pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), axis=-1)\n # Step 2: Compute the (encoding) distance between the anchor and the negative, you will need to sum over axis=-1\n neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), axis=-1)\n # Step 3: subtract the two previous distances and add alpha.\n basic_loss = tf.add(tf.subtract(pos_dist, neg_dist), alpha)\n # Step 4: Take the maximum of basic_loss and 0.0. Sum over the training examples.\n loss = tf.reduce_sum(tf.maximum(basic_loss, 0.0))\n ### END CODE HERE ###\n \n return loss", "_____no_output_____" ], [ "with tf.Session() as test:\n tf.set_random_seed(1)\n y_true = (None, None, None)\n y_pred = (tf.random_normal([3, 128], mean=6, stddev=0.1, seed = 1),\n tf.random_normal([3, 128], mean=1, stddev=1, seed = 1),\n tf.random_normal([3, 128], mean=3, stddev=4, seed = 1))\n loss = triplet_loss(y_true, y_pred)\n \n print(\"loss = \" + str(loss.eval()))", "loss = 528.143\n" ] ], [ [ "**Expected Output**:\n\n<table>\n <tr>\n <td>\n **loss**\n </td>\n <td>\n 528.143\n </td>\n </tr>\n\n</table>", "_____no_output_____" ], [ "## 2 - Loading the trained model\n\nFaceNet is trained by minimizing the triplet loss. But since training requires a lot of data and a lot of computation, we won't train it from scratch here. Instead, we load a previously trained model. Load a model using the following cell; this might take a couple of minutes to run. ", "_____no_output_____" ] ], [ [ "FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy'])\nload_weights_from_FaceNet(FRmodel)", "_____no_output_____" ] ], [ [ "Here're some examples of distances between the encodings between three individuals:\n\n<img src=\"images/distance_matrix.png\" style=\"width:380px;height:200px;\">\n<br>\n<caption><center> <u> <font color='purple'> **Figure 4**:</u> <br> <font color='purple'> Example of distance outputs between three individuals' encodings</center></caption>\n\nLet's now use this model to perform face verification and face recognition! 
", "_____no_output_____" ], [ "## 3 - Applying the model", "_____no_output_____" ], [ "Back to the Happy House! Residents are living blissfully since you implemented happiness recognition for the house in an earlier assignment. \n\nHowever, several issues keep coming up: The Happy House became so happy that every happy person in the neighborhood is coming to hang out in your living room. It is getting really crowded, which is having a negative impact on the residents of the house. All these random happy people are also eating all your food. \n\nSo, you decide to change the door entry policy, and not just let random happy people enter anymore, even if they are happy! Instead, you'd like to build a **Face verification** system so as to only let people from a specified list come in. To get admitted, each person has to swipe an ID card (identification card) to identify themselves at the door. The face recognition system then checks that they are who they claim to be. ", "_____no_output_____" ], [ "### 3.1 - Face Verification\n\nLet's build a database containing one encoding vector for each person allowed to enter the happy house. To generate the encoding we use `img_to_encoding(image_path, model)` which basically runs the forward propagation of the model on the specified image. \n\nRun the following code to build the database (represented as a python dictionary). This database maps each person's name to a 128-dimensional encoding of their face.", "_____no_output_____" ] ], [ [ "database = {}\ndatabase[\"danielle\"] = img_to_encoding(\"images/danielle.png\", FRmodel)\ndatabase[\"younes\"] = img_to_encoding(\"images/younes.jpg\", FRmodel)\ndatabase[\"tian\"] = img_to_encoding(\"images/tian.jpg\", FRmodel)\ndatabase[\"andrew\"] = img_to_encoding(\"images/andrew.jpg\", FRmodel)\ndatabase[\"kian\"] = img_to_encoding(\"images/kian.jpg\", FRmodel)\ndatabase[\"dan\"] = img_to_encoding(\"images/dan.jpg\", FRmodel)\ndatabase[\"sebastiano\"] = img_to_encoding(\"images/sebastiano.jpg\", FRmodel)\ndatabase[\"bertrand\"] = img_to_encoding(\"images/bertrand.jpg\", FRmodel)\ndatabase[\"kevin\"] = img_to_encoding(\"images/kevin.jpg\", FRmodel)\ndatabase[\"felix\"] = img_to_encoding(\"images/felix.jpg\", FRmodel)\ndatabase[\"benoit\"] = img_to_encoding(\"images/benoit.jpg\", FRmodel)\ndatabase[\"arnaud\"] = img_to_encoding(\"images/arnaud.jpg\", FRmodel)", "_____no_output_____" ] ], [ [ "Now, when someone shows up at your front door and swipes their ID card (thus giving you their name), you can look up their encoding in the database, and use it to check if the person standing at the front door matches the name on the ID.\n\n**Exercise**: Implement the verify() function which checks if the front-door camera picture (`image_path`) is actually the person called \"identity\". You will have to go through the following steps:\n1. Compute the encoding of the image from image_path\n2. Compute the distance about this encoding and the encoding of the identity image stored in the database\n3. Open the door if the distance is less than 0.7, else do not open.\n\nAs presented above, you should use the L2 distance (np.linalg.norm). (Note: In this implementation, compare the L2 distance, not the square of the L2 distance, to the threshold 0.7.) 
", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: verify\n\ndef verify(image_path, identity, database, model):\n \"\"\"\n Function that verifies if the person on the \"image_path\" image is \"identity\".\n \n Arguments:\n image_path -- path to an image\n identity -- string, name of the person you'd like to verify the identity. Has to be a resident of the Happy house.\n database -- python dictionary mapping names of allowed people's names (strings) to their encodings (vectors).\n model -- your Inception model instance in Keras\n \n Returns:\n dist -- distance between the image_path and the image of \"identity\" in the database.\n door_open -- True, if the door should open. False otherwise.\n \"\"\"\n \n ### START CODE HERE ###\n \n # Step 1: Compute the encoding for the image. Use img_to_encoding() see example above. (≈ 1 line)\n encoding = img_to_encoding(image_path, model)\n \n # Step 2: Compute distance with identity's image (≈ 1 line)\n dist = np.linalg.norm(encoding - database.get(identity))\n \n # Step 3: Open the door if dist < 0.7, else don't open (≈ 3 lines)\n if dist < 0.7:\n print(\"It's \" + str(identity) + \", welcome home!\")\n door_open = True\n else:\n print(\"It's not \" + str(identity) + \", please go away\")\n door_open = False\n \n ### END CODE HERE ###\n \n return dist, door_open", "_____no_output_____" ] ], [ [ "Younes is trying to enter the Happy House and the camera takes a picture of him (\"images/camera_0.jpg\"). Let's run your verification algorithm on this picture:\n\n<img src=\"images/camera_0.jpg\" style=\"width:100px;height:100px;\">", "_____no_output_____" ] ], [ [ "verify(\"images/camera_0.jpg\", \"younes\", database, FRmodel)", "It's younes, welcome home!\n" ] ], [ [ "**Expected Output**:\n\n<table>\n <tr>\n <td>\n **It's younes, welcome home!**\n </td>\n <td>\n (0.65939283, True)\n </td>\n </tr>\n\n</table>", "_____no_output_____" ], [ "Benoit, who broke the aquarium last weekend, has been banned from the house and removed from the database. He stole Kian's ID card and came back to the house to try to present himself as Kian. The front-door camera took a picture of Benoit (\"images/camera_2.jpg). Let's run the verification algorithm to check if benoit can enter.\n<img src=\"images/camera_2.jpg\" style=\"width:100px;height:100px;\">", "_____no_output_____" ] ], [ [ "verify(\"images/camera_2.jpg\", \"kian\", database, FRmodel)", "It's not kian, please go away\n" ] ], [ [ "**Expected Output**:\n\n<table>\n <tr>\n <td>\n **It's not kian, please go away**\n </td>\n <td>\n (0.86224014, False)\n </td>\n </tr>\n\n</table>", "_____no_output_____" ], [ "### 3.2 - Face Recognition\n\nYour face verification system is mostly working well. But since Kian got his ID card stolen, when he came back to the house that evening he couldn't get in! \n\nTo reduce such shenanigans, you'd like to change your face verification system to a face recognition system. This way, no one has to carry an ID card anymore. An authorized person can just walk up to the house, and the front door will unlock for them! \n\nYou'll implement a face recognition system that takes as input an image, and figures out if it is one of the authorized persons (and if so, who). Unlike the previous face verification system, we will no longer get a person's name as another input. \n\n**Exercise**: Implement `who_is_it()`. You will have to go through the following steps:\n1. Compute the target encoding of the image from image_path\n2. 
Find the encoding from the database that has the smallest distance to the target encoding. \n - Initialize the `min_dist` variable to a large enough number (100). It will help you keep track of what is the closest encoding to the input's encoding.\n - Loop over the database dictionary's names and encodings. To loop use `for (name, db_enc) in database.items()`.\n - Compute L2 distance between the target \"encoding\" and the current \"encoding\" from the database.\n - If this distance is less than the min_dist, then set min_dist to dist, and identity to name.", "_____no_output_____" ] ], [ [ "# GRADED FUNCTION: who_is_it\n\ndef who_is_it(image_path, database, model):\n \"\"\"\n Implements face recognition for the happy house by finding who is the person on the image_path image.\n \n Arguments:\n image_path -- path to an image\n database -- database containing image encodings along with the name of the person on the image\n model -- your Inception model instance in Keras\n \n Returns:\n min_dist -- the minimum distance between image_path encoding and the encodings from the database\n identity -- string, the name prediction for the person on image_path\n \"\"\"\n \n ### START CODE HERE ### \n \n ## Step 1: Compute the target \"encoding\" for the image. Use img_to_encoding() see example above. ## (≈ 1 line)\n encoding = img_to_encoding(image_path, model)\n \n ## Step 2: Find the closest encoding ##\n \n # Initialize \"min_dist\" to a large value, say 100 (≈1 line)\n min_dist = 100\n \n # Loop over the database dictionary's names and encodings.\n for (name, db_enc) in database.items():\n \n # Compute L2 distance between the target \"encoding\" and the current \"emb\" from the database. (≈ 1 line)\n dist = np.linalg.norm(encoding - db_enc)\n\n # If this distance is less than the min_dist, then set min_dist to dist, and identity to name. (≈ 3 lines)\n if dist < min_dist:\n min_dist = dist\n identity = name\n\n ### END CODE HERE ###\n \n if min_dist > 0.7:\n print(\"Not in the database.\")\n else:\n print (\"it's \" + str(identity) + \", the distance is \" + str(min_dist))\n \n return min_dist, identity", "_____no_output_____" ] ], [ [ "Younes is at the front-door and the camera takes a picture of him (\"images/camera_0.jpg\"). Let's see if your who_is_it() algorithm identifies Younes. ", "_____no_output_____" ] ], [ [ "who_is_it(\"images/camera_0.jpg\", database, FRmodel)", "it's younes, the distance is 0.659393\n" ] ], [ [ "**Expected Output**:\n\n<table>\n <tr>\n <td>\n **it's younes, the distance is 0.659393**\n </td>\n <td>\n (0.65939283, 'younes')\n </td>\n </tr>\n\n</table>", "_____no_output_____" ], [ "You can change \"`camera_0.jpg`\" (picture of younes) to \"`camera_1.jpg`\" (picture of bertrand) and see the result.", "_____no_output_____" ], [ "Your Happy House is running well. It only lets in authorized persons, and people don't need to carry an ID card around anymore! \n\nYou've now seen how a state-of-the-art face recognition system works.\n\nAlthough we won't implement it here, here're some ways to further improve the algorithm:\n- Put more images of each person (under different lighting conditions, taken on different days, etc.) into the database. Then given a new image, compare the new face to multiple pictures of the person. This would increase accuracy.\n- Crop the images to just contain the face, and less of the \"border\" region around the face. 
This preprocessing removes some of the irrelevant pixels around the face, and also makes the algorithm more robust.\n", "_____no_output_____" ], [ "<font color='blue'>\n**What you should remember**:\n- Face verification solves an easier 1:1 matching problem; face recognition addresses a harder 1:K matching problem. \n- The triplet loss is an effective loss function for training a neural network to learn an encoding of a face image.\n- The same encoding can be used for verification and recognition. Measuring distances between two images' encodings allows you to determine whether they are pictures of the same person. ", "_____no_output_____" ], [ "Congrats on finishing this assignment! \n", "_____no_output_____" ], [ "### References:\n\n- Florian Schroff, Dmitry Kalenichenko, James Philbin (2015). [FaceNet: A Unified Embedding for Face Recognition and Clustering](https://arxiv.org/pdf/1503.03832.pdf)\n- Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, Lior Wolf (2014). [DeepFace: Closing the gap to human-level performance in face verification](https://research.fb.com/wp-content/uploads/2016/11/deepface-closing-the-gap-to-human-level-performance-in-face-verification.pdf) \n- The pretrained model we use is inspired by Victor Sy Wang's implementation and was loaded using his code: https://github.com/iwantooxxoox/Keras-OpenFace.\n- Our implementation also took a lot of inspiration from the official FaceNet github repository: https://github.com/davidsandberg/facenet \n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
cbf782e7f3b78c93bd508720e46ed9b09ae94ea1
472,913
ipynb
Jupyter Notebook
modern_5_tidy.ipynb
ocowchun/effective-pandas
2779436a9f33c2c985910226fdf5e8812dd760e8
[ "CC-BY-4.0" ]
1
2020-02-12T18:28:40.000Z
2020-02-12T18:28:40.000Z
modern_5_tidy.ipynb
ocowchun/effective-pandas
2779436a9f33c2c985910226fdf5e8812dd760e8
[ "CC-BY-4.0" ]
null
null
null
modern_5_tidy.ipynb
ocowchun/effective-pandas
2779436a9f33c2c985910226fdf5e8812dd760e8
[ "CC-BY-4.0" ]
1
2020-02-12T18:28:52.000Z
2020-02-12T18:28:52.000Z
236.693193
108,192
0.876612
[ [ [ "# Reshaping & Tidy Data\n\n> Structuring datasets to facilitate analysis [(Wickham 2014)](http://www.jstatsoft.org/v59/i10/paper)\n\nSo, you've sat down to analyze a new dataset.\nWhat do you do first?\n\nIn episode 11 of [Not So Standard Deviations](https://www.patreon.com/NSSDeviations?ty=h), Hilary and Roger discussed their typical approaches.\nI'm with Hilary on this one, you should make sure your data is tidy.\nBefore you do any plots, filtering, transformations, summary statistics, regressions...\nWithout a tidy dataset, you'll be fighting your tools to get the result you need.\nWith a tidy dataset, it's relatively easy to do all of those.\n\nHadley Wickham kindly summarized tidiness as a dataset where\n\n1. Each variable forms a column\n2. Each observation forms a row\n3. Each type of observational unit forms a table\n\nAnd today we'll only concern ourselves with the first two.\nAs quoted at the top, this really is about facilitating analysis: going as quickly as possible from question to answer.", "_____no_output_____" ] ], [ [ "%matplotlib inline\n\nimport os\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\nif int(os.environ.get(\"MODERN_PANDAS_EPUB\", 0)):\n import prep # noqa\n\npd.options.display.max_rows = 10\nsns.set(style='ticks', context='talk')", "_____no_output_____" ] ], [ [ "## NBA Data\n\n[This](http://stackoverflow.com/questions/22695680/python-pandas-timedelta-specific-rows) StackOverflow question asked about calculating the number of days of rest NBA teams have between games.\nThe answer would have been difficult to compute with the raw data.\nAfter transforming the dataset to be tidy, we're able to quickly get the answer.\n\nWe'll grab some NBA game data from basketball-reference.com using pandas' `read_html` function, which returns a list of DataFrames.", "_____no_output_____" ] ], [ [ "fp = 'data/nba.csv'\n\nif not os.path.exists(fp):\n tables = pd.read_html(\"http://www.basketball-reference.com/leagues/NBA_2016_games.html\")\n games = tables[0]\n games.to_csv(fp)\nelse:\n games = pd.read_csv(fp, index_col=0)\ngames.head()", "_____no_output_____" ] ], [ [ "Side note: pandas' `read_html` is pretty good. On simple websites it almost always works.\nIt provides a couple parameters for controlling what gets selected from the webpage if the defaults fail.\nI'll always use it first, before moving on to [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/) or [lxml](http://lxml.de/) if the page is more complicated.\n\nAs you can see, we have a bit of general munging to do before tidying.\nEach month slips in an extra row of mostly NaNs, the column names aren't too useful, and we have some dtypes to fix up.", "_____no_output_____" ] ], [ [ "column_names = {'Date': 'date', 'Start (ET)': 'start',\n 'Unnamed: 2': 'box', 'Visitor/Neutral': 'away_team', \n 'PTS': 'away_points', 'Home/Neutral': 'home_team',\n 'PTS.1': 'home_points', 'Unnamed: 7': 'n_ot'}\n\ngames = (games.rename(columns=column_names)\n .dropna(thresh=4)\n [['date', 'away_team', 'away_points', 'home_team', 'home_points']]\n .assign(date=lambda x: pd.to_datetime(x['date'], format='%a, %b %d, %Y'))\n .set_index('date', append=True)\n .rename_axis([\"game_id\", \"date\"])\n .sort_index())\ngames.head()", "_____no_output_____" ] ], [ [ "A quick aside on that last block.\n\n- `dropna` has a `thresh` argument. A row is kept only if it has at least `thresh` non-missing values, so mostly-empty rows are dropped. 
We used it to remove the \"Month headers\" that slipped into the table.\n- `assign` can take a callable. This lets us refer to the DataFrame in the previous step of the chain. Otherwise we would have to assign `temp_df = games.dropna()...` And then do the `pd.to_datetime` on that.\n- `set_index` has an `append` keyword. We keep the original index around since it will be our unique identifier per game.\n- We use `.rename_axis` to set the index names (this behavior is new in pandas 0.18; before `.rename_axis` only took a mapping for changing labels).", "_____no_output_____" ], [ "The Question:\n> **How many days of rest did each team get between each game?**\n\nWhether or not your dataset is tidy depends on your question. Given our question, what is an observation?\n\nIn this case, an observation is a `(team, game)` pair, which we don't have yet. Rather, we have two observations per row, one for home and one for away. We'll fix that with `pd.melt`.\n\n`pd.melt` works by taking observations that are spread across columns (`away_team`, `home_team`), and melting them down into one column with multiple rows. However, we don't want to lose the metadata (like `game_id` and `date`) that is shared between the observations. By including those columns as `id_vars`, the values will be repeated as many times as needed to stay with their observations.", "_____no_output_____" ] ], [ [ "tidy = pd.melt(games.reset_index(),\n id_vars=['game_id', 'date'], value_vars=['away_team', 'home_team'],\n value_name='team')\ntidy.head()", "_____no_output_____" ] ], [ [ "The DataFrame `tidy` meets our rules for tidiness: each variable is in a column, and each observation (`team`, `date` pair) is on its own row.\nNow the translation from question (\"How many days of rest between games\") to operation (\"date of today's game - date of previous game - 1\") is direct:", "_____no_output_____" ] ], [ [ "# For each team... get number of days between games\ntidy.groupby('team')['date'].diff().dt.days - 1", "_____no_output_____" ] ], [ [ "That's the essence of tidy data, the reason why it's worth considering what shape your data should be in.\nIt's about setting yourself up for success so that the answers naturally flow from the data (just kidding, it's usually still difficult. 
But hopefully less so).\n\nLet's assign that back into our DataFrame", "_____no_output_____" ] ], [ [ "tidy['rest'] = tidy.sort_values('date').groupby('team').date.diff().dt.days - 1\ntidy.dropna().head()", "_____no_output_____" ] ], [ [ "To show the inverse of `melt`, let's take `rest` values we just calculated and place them back in the original DataFrame with a `pivot_table`.", "_____no_output_____" ] ], [ [ "by_game = (pd.pivot_table(tidy, values='rest',\n index=['game_id', 'date'],\n columns='variable')\n .rename(columns={'away_team': 'away_rest',\n 'home_team': 'home_rest'}))\ndf = pd.concat([games, by_game], axis=1)\ndf.dropna().head()", "_____no_output_____" ] ], [ [ "One somewhat subtle point: an \"observation\" depends on the question being asked.\nSo really, we have two tidy datasets, `tidy` for answering team-level questions, and `df` for answering game-level questions.\n\nOne potentially interesting question is \"what was each team's average days of rest, at home and on the road?\" With a tidy dataset (the DataFrame `tidy`, since it's team-level), `seaborn` makes this easy (more on seaborn in a future post):", "_____no_output_____" ] ], [ [ "sns.set(style='ticks', context='paper')", "_____no_output_____" ], [ "g = sns.FacetGrid(tidy, col='team', col_wrap=6, hue='team', size=2)\ng.map(sns.barplot, 'variable', 'rest');", "_____no_output_____" ] ], [ [ "An example of a game-level statistic is the distribution of rest differences in games:", "_____no_output_____" ] ], [ [ "df['home_win'] = df['home_points'] > df['away_points']\ndf['rest_spread'] = df['home_rest'] - df['away_rest']\ndf.dropna().head()", "_____no_output_____" ], [ "delta = (by_game.home_rest - by_game.away_rest).dropna().astype(int)\nax = (delta.value_counts()\n .reindex(np.arange(delta.min(), delta.max() + 1), fill_value=0)\n .sort_index()\n .plot(kind='bar', color='k', width=.9, rot=0, figsize=(12, 6))\n)\nsns.despine()\nax.set(xlabel='Difference in Rest (Home - Away)', ylabel='Games');", "_____no_output_____" ] ], [ [ "Or the win percent by rest difference", "_____no_output_____" ] ], [ [ "fig, ax = plt.subplots(figsize=(12, 6))\nsns.barplot(x='rest_spread', y='home_win', data=df.query('-3 <= rest_spread <= 3'),\n color='#4c72b0', ax=ax)\nsns.despine()", "_____no_output_____" ] ], [ [ "## Stack / Unstack\n\nPandas has two useful methods for quickly converting from wide to long format (`stack`) and long to wide (`unstack`).", "_____no_output_____" ] ], [ [ "rest = (tidy.groupby(['date', 'variable'])\n .rest.mean()\n .dropna())\nrest.head()", "_____no_output_____" ] ], [ [ "`rest` is in a \"long\" form since we have a single column of data, with multiple \"columns\" of metadata (in the MultiIndex). We use `.unstack` to move from long to wide.", "_____no_output_____" ] ], [ [ "rest.unstack().head()", "_____no_output_____" ] ], [ [ "`unstack` moves a level of a MultiIndex (innermost by default) up to the columns.\n`stack` is the inverse.", "_____no_output_____" ] ], [ [ "rest.unstack().stack()", "_____no_output_____" ] ], [ [ "With `.unstack` you can move between those APIs that expect their data in long format and those APIs that work with wide-format data. 
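\n\nA toy round trip makes the two directions concrete (the tiny `df_wide` frame here is a made-up illustration, separate from the NBA data):\n\n```python\nimport pandas as pd\n\ndf_wide = pd.DataFrame({'away': [3, 4], 'home': [1, 2]},\n                       index=pd.Index(['d1', 'd2'], name='date'))\nlong_form = df_wide.stack()       # Series with a (date, column) MultiIndex\nwide_again = long_form.unstack()  # innermost index level moves back to columns\n```\n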
For example, `DataFrame.plot()` works with wide-form data, one line per column.", "_____no_output_____" ] ], [ [ "with sns.color_palette() as pal:\n b, g = pal.as_hex()[:2]\n\nax=(rest.unstack()\n .query('away_team < 7')\n .rolling(7)\n .mean()\n .plot(figsize=(12, 6), linewidth=3, legend=False))\nax.set(ylabel='Rest (7 day MA)')\nax.annotate(\"Home\", (rest.index[-1][0], 1.02), color=g, size=14)\nax.annotate(\"Away\", (rest.index[-1][0], 0.82), color=b, size=14)\nsns.despine()", "_____no_output_____" ] ], [ [ "The most convenient form will depend on exactly what you're doing.\nWhen interacting with databases you'll often deal with long form data.\nPandas' `DataFrame.plot` often expects wide-form data, while `seaborn` often expects long-form data. Regressions will expect wide-form data. Either way, it's good to be comfortable with `stack` and `unstack` (and MultiIndexes) to quickly move between the two.", "_____no_output_____" ], [ "## Mini Project: Home Court Advantage?\n\nWe've gone to all that work tidying our dataset, let's put it to use.\nWhat's the effect (in terms of probability to win) of being\nthe home team?", "_____no_output_____" ], [ "### Step 1: Create an outcome variable\n\nWe need to create an indicator for whether the home team won.\nAdd it as a column called `home_win` in `games`.", "_____no_output_____" ] ], [ [ "df['home_win'] = df.home_points > df.away_points", "_____no_output_____" ] ], [ [ "### Step 2: Find the win percent for each team\n\nIn the 10-minute literature review I did on the topic, it seems like people include a team-strength variable in their regressions.\nI suppose that makes sense; if stronger teams happened to play against weaker teams at home more often than away, it'd look like the home-effect is stronger than it actually is.\nWe'll do a terrible job of controlling for team strength by calculating each team's win percent and using that as a predictor.\nIt'd be better to use some kind of independent measure of team strength, but this will do for now.\n\nWe'll use a similar `melt` operation as earlier, only now with the `home_win` variable we just created.", "_____no_output_____" ] ], [ [ "wins = (\n pd.melt(df.reset_index(),\n id_vars=['game_id', 'date', 'home_win'],\n value_name='team', var_name='is_home',\n value_vars=['home_team', 'away_team'])\n .assign(win=lambda x: x.home_win == (x.is_home == 'home_team'))\n .groupby(['team', 'is_home'])\n .win\n .agg({'n_wins': 'sum', 'n_games': 'count', 'win_pct': 'mean'})\n)\nwins.head()", "_____no_output_____" ] ], [ [ "Pause for visualization, because why not", "_____no_output_____" ] ], [ [ "g = sns.FacetGrid(wins.reset_index(), hue='team', size=7, aspect=.5, palette=['k'])\ng.map(sns.pointplot, 'is_home', 'win_pct').set(ylim=(0, 1));", "_____no_output_____" ] ], [ [ "(It'd be great if there was a library built on top of matplotlib that auto-labeled each point decently well. Apparently this is a difficult problem to do in general).", "_____no_output_____" ] ], [ [ "g = sns.FacetGrid(wins.reset_index(), col='team', hue='team', col_wrap=5, size=2)\ng.map(sns.pointplot, 'is_home', 'win_pct')", "_____no_output_____" ] ], [ [ "Those two graphs show that most teams have a higher win-percent at home than away. 
So we can continue to investigate.\nLet's aggregate over home / away to get an overall win percent per team.", "_____no_output_____" ] ], [ [ "win_percent = (\n # Use sum(games) / sum(games) instead of mean\n # since I don't know if teams play the same\n # number of games at home as away\n wins.groupby(level='team', as_index=True)\n .apply(lambda x: x.n_wins.sum() / x.n_games.sum())\n)\nwin_percent.head()", "_____no_output_____" ], [ "win_percent.sort_values().plot.barh(figsize=(6, 12), width=.85, color='k')\nplt.tight_layout()\nsns.despine()\nplt.xlabel(\"Win Percent\")", "_____no_output_____" ] ], [ [ "Is there a relationship between overall team strength and their home-court advantage?", "_____no_output_____" ] ], [ [ "plt.figure(figsize=(8, 5))\n(wins.win_pct\n .unstack()\n .assign(**{'Home Win % - Away %': lambda x: x.home_team - x.away_team,\n 'Overall %': lambda x: (x.home_team + x.away_team) / 2})\n .pipe((sns.regplot, 'data'), x='Overall %', y='Home Win % - Away %')\n)\nsns.despine()\nplt.tight_layout()", "_____no_output_____" ] ], [ [ "Let's get the team strength back into `df`.\nYou could use `pd.merge`, but I prefer `.map` when joining a `Series`.", "_____no_output_____" ] ], [ [ "df = df.assign(away_strength=df['away_team'].map(win_percent),\n home_strength=df['home_team'].map(win_percent),\n point_diff=df['home_points'] - df['away_points'],\n rest_diff=df['home_rest'] - df['away_rest'])\ndf.head()", "_____no_output_____" ], [ "import statsmodels.formula.api as sm\n\ndf['home_win'] = df.home_win.astype(int) # for statsmodels", "_____no_output_____" ], [ "mod = sm.logit('home_win ~ home_strength + away_strength + home_rest + away_rest', df)\nres = mod.fit()\nres.summary()", "Optimization terminated successfully.\n Current function value: 0.552792\n Iterations 6\n" ] ], [ [ "The strength variables both have large coefficients (really we should be using some independent measure of team strength here, `win_percent` is showing up on the left and right side of the equation). The rest variables don't seem to matter as much.\n\nWith `.assign` we can quickly explore variations in formula.", "_____no_output_____" ] ], [ [ "(sm.Logit.from_formula('home_win ~ strength_diff + rest_spread',\n df.assign(strength_diff=df.home_strength - df.away_strength))\n .fit().summary())", "Optimization terminated successfully.\n Current function value: 0.553499\n Iterations 6\n" ], [ "mod = sm.Logit.from_formula('home_win ~ home_rest + away_rest', df)\nres = mod.fit()\nres.summary()", "Optimization terminated successfully.\n Current function value: 0.676549\n Iterations 4\n" ] ], [ [ "Overall not seeing too much support for rest mattering, but we got to see some more tidy data.\n\nThat's it for today.\nNext time we'll look at data visualization.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
cbf788e491cdc14e1e95d3c457056b1e5db3f2ba
18,641
ipynb
Jupyter Notebook
notebooks/TractableBufferStock-Interactive-Solutions.ipynb
victojensen/TITLARK
4c646b8c29f430f7c854ca85fb104a26c781bccc
[ "Apache-2.0" ]
8
2018-12-16T02:06:12.000Z
2021-05-06T15:40:21.000Z
notebooks/TractableBufferStock-Interactive-Solutions.ipynb
victojensen/TITLARK
4c646b8c29f430f7c854ca85fb104a26c781bccc
[ "Apache-2.0" ]
5
2019-07-24T07:31:08.000Z
2020-02-13T23:07:20.000Z
notebooks/TractableBufferStock-Interactive-Solutions.ipynb
victojensen/TITLARK
4c646b8c29f430f7c854ca85fb104a26c781bccc
[ "Apache-2.0" ]
29
2018-06-28T06:33:24.000Z
2022-01-27T18:55:13.000Z
39.661702
607
0.582533
[ [ [ "# The Tractable Buffer Stock Model\n\n<p style=\"text-align: center;\"><small><small><small>Generator: BufferStockTheory-make/notebooks_byname</small></small></small></p>", "_____no_output_____" ], [ "The [TractableBufferStock](http://www.econ2.jhu.edu/people/ccarroll/public/LectureNotes/Consumption/TractableBufferStock/) model is a (relatively) simple framework that captures all of the qualitative, and many of the quantitative features of optimal consumption in the presence of labor income uncertainty. ", "_____no_output_____" ] ], [ [ "# This cell has a bit of (uninteresting) initial setup.\n\nimport matplotlib.pyplot as plt\n\nimport numpy as np\nimport HARK \nfrom time import clock\nfrom copy import deepcopy\nmystr = lambda number : \"{:.3f}\".format(number)\n\nfrom ipywidgets import interact, interactive, fixed, interact_manual\nimport ipywidgets as widgets\nfrom HARK.utilities import plotFuncs", "_____no_output_____" ], [ "# Import the model from the toolkit\nfrom HARK.ConsumptionSaving.TractableBufferStockModel import TractableConsumerType", "_____no_output_____" ] ], [ [ "The key assumption behind the model's tractability is that there is only a single, stark form of uncertainty: So long as an employed consumer remains employed, that consumer's labor income $P$ will rise at a constant rate $\Gamma$:\n\begin{align}\nP_{t+1} &= \Gamma P_{t}\n\end{align}\n\nBut, between any period and the next, there is constant hazard $p$ that the consumer will transition to the \"unemployed\" state. Unemployment is irreversible, like retirement or disability. When unemployed, the consumer receives a fixed amount of income (for simplicity, zero). (See the [linked handout](http://www.econ2.jhu.edu/people/ccarroll/public/LectureNotes/Consumption/TractableBufferStock/) for details of the model).\n\nDefining $G$ as the growth rate of aggregate wages/productivity, we assume that idiosyncratic wages grow by $\Gamma = G/(1-\mho)$ where $(1-\mho)^{-1}$ is the growth rate of idiosyncratic productivity ('on-the-job learning', say). (This assumption about the relation between idiosyncratic income growth and idiosyncratic risk means that an increase in $\mho$ is a mean-preserving spread in human wealth; again see [the lecture notes](http://www.econ2.jhu.edu/people/ccarroll/public/LectureNotes/Consumption/TractableBufferStock/)).\n\nUnder CRRA utility $u(C) = \frac{C^{1-\rho}}{1-\rho}$, the problem can be normalized by $P$. Using lower case for normalized variables (e.g., $c = C/P$), the normalized problem can be expressed by the Bellman equation:\n\n\begin{eqnarray*}\nv_t({m}_t) &=& \max_{{c}_t} ~ U({c}_t) + \beta \Gamma^{1-\rho} \overbrace{\mathbb{E}[v_{t+1}^{\bullet}]}^{=p v_{t+1}^{u}+(1-p)v_{t+1}^{e}} \\\n& s.t. & \\\n{m}_{t+1} &=& (m_{t}-c_{t})\mathcal{R} + \mathbb{1}_{t+1},\n\end{eqnarray*}\nwhere $\mathcal{R} = R/\Gamma$, and $\mathbb{1}_{t+1} = 1$ if the consumer is employed (and zero if unemployed).\n\nUnder plausible parameter values the model has a target level of $\check{m} = M/P$ (market resources to permanent income) with an analytical solution that exhibits plausible relationships among all of the parameters. 
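\n\nAs a quick numeric check of the definitions above (using the calibration that appears in the next code cell; this snippet is an added illustration, not part of the original notebook):\n\n```python\n# Evaluate Gamma = G/(1 - mho) for the calibration used below.\nmho = 0.00625   # UnempPrb\nG = 1.0025      # PermGroFac\nR = 1.01        # Rfree\nbeta = 0.975    # DiscFac\nrho = 2.5       # CRRA\nGamma = G / (1 - mho)  # compensated idiosyncratic growth factor, ~1.0088\npatience = (R * beta * (1 - mho)) ** (1 / rho) / Gamma  # ~0.983 < 1, so a target m exists\n```\n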
\n\nDefining $\\gamma = \\log \\Gamma$ and $r = \\log R$, the handout shows that an approximation of the target is given by the formula:\n\n\\begin{align}\n\\check{m} & = 1 + \\left(\\frac{1}{(\\gamma-r)+(1+(\\gamma/\\mho)(1-(\\gamma/\\mho)(\\rho-1)/2))}\\right)\n\\end{align}\n", "_____no_output_____" ] ], [ [ "# Define a parameter dictionary and representation of the agents for the tractable buffer stock model\nTBS_dictionary = {'UnempPrb' : .00625, # Prob of becoming unemployed; working life of 1/UnempProb = 160 qtrs\n 'DiscFac' : 0.975, # Intertemporal discount factor\n 'Rfree' : 1.01, # Risk-free interest factor on assets\n 'PermGroFac' : 1.0025, # Permanent income growth factor (uncompensated)\n 'CRRA' : 2.5} # Coefficient of relative risk aversion\nMyTBStype = TractableConsumerType(**TBS_dictionary)", "_____no_output_____" ] ], [ [ "## Target Wealth\n\nWhether the model exhibits a \"target\" or \"stable\" level of the wealth-to-permanent-income ratio for employed consumers depends on whether the 'Growth Impatience Condition' (the GIC) holds:\n\n\\begin{align}\\label{eq:GIC}\n \\left(\\frac{(R \\beta (1-\\mho))^{1/\\rho}}{\\Gamma}\\right) & < 1\n\\\\ \\left(\\frac{(R \\beta (1-\\mho))^{1/\\rho}}{G (1-\\mho)}\\right) &< 1\n\\\\ \\left(\\frac{(R \\beta)^{1/\\rho}}{G} (1-\\mho)^{-\\rho}\\right) &< 1\n\\end{align}\nand recall (from [PerfForesightCRRA](http://econ.jhu.edu/people/ccarroll/public/lecturenotes/consumption/PerfForesightCRRA/)) that the perfect foresight 'Growth Impatience Factor' is \n\\begin{align}\\label{eq:PFGIC}\n\\left(\\frac{(R \\beta)^{1/\\rho}}{G}\\right) &< 1\n\\end{align}\nso since $\\mho > 0$, uncertainty makes it harder to be 'impatient.' To understand this, think of someone who, in the perfect foresight model, was 'poised': Exactly on the knife edge between patience and impatience. Now add a precautionary saving motive; that person will now (to some degree) be pushed off the knife edge in the direction of 'patience.' So, in the presence of uncertainty, the conditions on parameters other than $\\mho$ must be stronger in order to guarantee 'impatience' in the sense of wanting to spend enough for your wealth to decline _despite_ the extra precautionary motive.", "_____no_output_____" ] ], [ [ "# Define a function that plots the employed consumption function and sustainable consumption function \n# for given parameter values\n\ndef makeTBSplot(DiscFac,CRRA,Rfree,PermGroFac,UnempPrb,mMax,mMin,cMin,cMax,plot_emp,plot_ret,plot_mSS,show_targ):\n MyTBStype.DiscFac = DiscFac\n MyTBStype.CRRA = CRRA\n MyTBStype.Rfree = Rfree\n MyTBStype.PermGroFac = PermGroFac\n MyTBStype.UnempPrb = UnempPrb\n \n try:\n MyTBStype.solve()\n except:\n print('Unable to solve; parameter values may be too close to their limiting values') \n \n plt.xlabel('Market resources ${m}_t$')\n plt.ylabel('Consumption ${c}_t$')\n plt.ylim([cMin,cMax])\n plt.xlim([mMin,mMax])\n \n m = np.linspace(mMin,mMax,num=100,endpoint=True)\n if plot_emp:\n c = MyTBStype.solution[0].cFunc(m)\n c[m==0.] 
= 0.\n plt.plot(m,c,'-b')\n \n if plot_mSS:\n plt.plot([mMin,mMax],[(MyTBStype.PermGroFacCmp/MyTBStype.Rfree + mMin*(1.0-MyTBStype.PermGroFacCmp/MyTBStype.Rfree)),(MyTBStype.PermGroFacCmp/MyTBStype.Rfree + mMax*(1.0-MyTBStype.PermGroFacCmp/MyTBStype.Rfree))],'--k')\n \n if plot_ret:\n c = MyTBStype.solution[0].cFunc_U(m)\n plt.plot(m,c,'-g')\n \n if show_targ:\n mTarg = MyTBStype.mTarg\n cTarg = MyTBStype.cTarg\n targ_label = r'$\\left(\\frac{1}{(\\gamma-r)+(1+(\\gamma/\\mho)(1-(\\gamma/\\mho)(\\rho-1)/2))}\\right) $' #+ mystr(mTarg) + '\\n$\\check{c}^* = $ ' + mystr(cTarg)\n plt.annotate(targ_label,xy=(0.0,0.0),xytext=(0.2,0.1),textcoords='axes fraction',fontsize=18)\n plt.plot(mTarg,cTarg,'ro')\n plt.annotate('↙️ m target',(mTarg,cTarg),xytext=(0.25,0.2),ha='left',textcoords='offset points')\n \n plt.show()\n return None\n\n# Define widgets to control various aspects of the plot\n\n# Define a slider for the discount factor\nDiscFac_widget = widgets.FloatSlider(\n min=0.9,\n max=0.99,\n step=0.0002,\n value=TBS_dictionary['DiscFac'], # Default value\n continuous_update=False,\n readout_format='.4f',\n description='$\\\\beta$')\n\n# Define a slider for relative risk aversion\nCRRA_widget = widgets.FloatSlider(\n min=1.0,\n max=5.0,\n step=0.01,\n value=TBS_dictionary['CRRA'], # Default value\n continuous_update=False,\n readout_format='.2f',\n description='$\\\\rho$')\n\n# Define a slider for the interest factor\nRfree_widget = widgets.FloatSlider(\n min=1.01,\n max=1.04,\n step=0.0001,\n value=TBS_dictionary['Rfree'], # Default value\n continuous_update=False,\n readout_format='.4f',\n description='$R$')\n\n\n# Define a slider for permanent income growth\nPermGroFac_widget = widgets.FloatSlider(\n min=1.00,\n max=1.015,\n step=0.0002,\n value=TBS_dictionary['PermGroFac'], # Default value\n continuous_update=False,\n readout_format='.4f',\n description='$G$')\n\n# Define a slider for unemployment (or retirement) probability\nUnempPrb_widget = widgets.FloatSlider(\n min=0.000001,\n max=TBS_dictionary['UnempPrb']*2, # Go up to twice the default value\n step=0.00001,\n value=TBS_dictionary['UnempPrb'],\n continuous_update=False,\n readout_format='.5f',\n description='$\\\\mho$')\n\n# Define a text box for the lower bound of {m}_t\nmMin_widget = widgets.FloatText(\n value=0.0,\n step=0.1,\n description='$m$ min',\n disabled=False)\n\n# Define a text box for the upper bound of {m}_t\nmMax_widget = widgets.FloatText(\n value=50.0,\n step=0.1,\n description='$m$ max',\n disabled=False)\n\n# Define a text box for the lower bound of {c}_t\ncMin_widget = widgets.FloatText(\n value=0.0,\n step=0.1,\n description='$c$ min',\n disabled=False)\n\n# Define a text box for the upper bound of {c}_t\ncMax_widget = widgets.FloatText(\n value=1.5,\n step=0.1,\n description='$c$ max',\n disabled=False)\n\n# Define a check box for whether to plot the employed consumption function\nplot_emp_widget = widgets.Checkbox(\n value=True,\n description='Plot employed $c$ function',\n disabled=False)\n\n# Define a check box for whether to plot the retired consumption function\nplot_ret_widget = widgets.Checkbox(\n value=False,\n description='Plot retired $c$ function',\n disabled=False)\n\n# Define a check box for whether to plot the sustainable consumption line\nplot_mSS_widget = widgets.Checkbox(\n value=True,\n description='Plot sustainable $c$ line',\n disabled=False)\n\n# Define a check box for whether to show the target annotation\nshow_targ_widget = widgets.Checkbox(\n value=True,\n description = 'Show target 
$(m,c)$',\n disabled = False)\n\n# Make an interactive plot of the tractable buffer stock solution\n\n# To make some of the widgets not appear, replace X_widget with fixed(desired_fixed_value) in the arguments below.\ninteract(makeTBSplot,\n DiscFac = DiscFac_widget,\n CRRA = CRRA_widget,\n Rfree = Rfree_widget,\n PermGroFac = PermGroFac_widget,\n UnempPrb = UnempPrb_widget,\n mMin = mMin_widget,\n mMax = mMax_widget,\n cMin = cMin_widget,\n cMax = cMax_widget,\n show_targ = show_targ_widget,\n plot_emp = plot_emp_widget,\n plot_ret = plot_ret_widget,\n plot_mSS = plot_mSS_widget,\n );\n\n", "_____no_output_____" ] ], [ [ "# PROBLEM\n\nYour task is to make a simplified slider that involves only $\\beta$. \n\nFirst, create a variable `betaMax` equal to the value of $\\beta$ at which the Growth Impatience Factor is exactly equal to 1 (that is, the consumer is exactly on the border between patience and impatience). (Hint: The formula for this is [here](http://www.econ2.jhu.edu/people/ccarroll/public/LectureNotes/Consumption/TractableBufferStock/#GIFMax)).\n\nNext, create a slider/'widget' like the one above, but where all variables except $\\beta$ are set to their default values, and the slider takes $\\beta$ from 0.05 below its default value up to `betaMax - 0.01`. (The numerical solution algorithm becomes unstable when the GIC is too close to being violated, so you don't want to go all the way up to `betaMax.`)\n\nExplain the logic of the result that you see.\n\n(Hint: You do not need to copy and paste (then edit) the entire contents of the cell that creates the widgets above; you only need to modify the `DiscFac_widget`)", "_____no_output_____" ] ], [ [ "# Define a slider for the discount factor\n\nmy_rho = TBS_dictionary['CRRA'];\nmy_R = TBS_dictionary['Rfree'];\nmy_upsidedownOmega = TBS_dictionary['UnempPrb']; # didn't have time to figure out the right value\nmy_Gamma = TBS_dictionary['PermGroFac']/(1-my_upsidedownOmega);\n\nbetaMax = (my_Gamma**my_rho)/(my_R*(1-my_upsidedownOmega));\n\nDiscFac_widget = widgets.FloatSlider(\n min=TBS_dictionary['DiscFac']-0.05,\n max=betaMax-0.01,\n step=0.0002,\n value=TBS_dictionary['DiscFac'], # Default value\n continuous_update=False,\n readout_format='.4f',\n description='$\\\\beta$')\n\ninteract(makeTBSplot,\n DiscFac = DiscFac_widget,\n CRRA = fixed(TBS_dictionary['CRRA']),\n Rfree = fixed(TBS_dictionary['Rfree']),\n PermGroFac = fixed(TBS_dictionary['PermGroFac']),\n UnempPrb = fixed(TBS_dictionary['UnempPrb']),\n mMin = mMin_widget,\n mMax = mMax_widget,\n cMin = cMin_widget,\n cMax = cMax_widget,\n show_targ = show_targ_widget,\n plot_emp = plot_emp_widget,\n plot_ret = plot_ret_widget,\n plot_mSS = plot_mSS_widget,\n );", "_____no_output_____" ] ], [ [ "# The target level of market resources increases with patience, and so does consumption, because patience is rewarded by the returns on savings.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
cbf78a5011185572d828b7404713b47fe5fac912
4,046
ipynb
Jupyter Notebook
code/phlebotominae.ipynb
kibet-gilbert/co1_metaanalysis
1089cc03bc4dbabab543a8dadf49130d8e399665
[ "CC-BY-3.0" ]
1
2021-01-01T05:57:08.000Z
2021-01-01T05:57:08.000Z
code/phlebotominae.ipynb
kibet-gilbert/co1_metaanalysis
1089cc03bc4dbabab543a8dadf49130d8e399665
[ "CC-BY-3.0" ]
null
null
null
code/phlebotominae.ipynb
kibet-gilbert/co1_metaanalysis
1089cc03bc4dbabab543a8dadf49130d8e399665
[ "CC-BY-3.0" ]
1
2021-01-01T06:15:56.000Z
2021-01-01T06:15:56.000Z
32.111111
420
0.631735
[ [ [ "# **Phlebotominae workflow**\n\n[Akhoundi et al. (2016)](https://doi.org/10.1371/journal.pntd.0004349)\nPhlebotominae is a subfamily within the order Diptera, suborder Nematocera, family Psychodidae. Phlebotomine sandflies are well-known vectors of Leishmania, and the subfamily has over 800 recognized species that mainly inhabit tropical and subtropical regions: \n1. Old World: \n(I) The Palaearctic region - Iran, Pakistan, the former U.S.S.R., France, Turkey, Morocco, Yemen, Spain, Tunisia, Afghanistan, Saudi Arabia, Iraq, Algeria, Egypt, Greece, China, Jordan. Over 200 species in genus Phlebotomus subgenera (Adlerius, Anaphlebotomus, Euphlebotomus, Idiophlebotomus, Larroussius, Paraphlebotomus, Phlebotomus, Synphlebotomus and Transphlebotomus), and the Chinius and Sergentomyia genera. \n(II) The Afrotropical region - Genus Phlebotomus (subgenera of Anaphlebotomus, Larroussius, Paraphlebotomus, Phlebotomus, Spelaeophlebotomus, and Synphlebotomus) and genus Sergentomyia. Notably, Phlebotomus species are stated to be absent in western Afrotropical regions of Gabon, Sudan, Central African Republic, Ethiopia, Southern Africa \n(III) The Malagasy region - Madagascar and nearby Indian Ocean islands\n(IV)\n(V)\n2. New World:\n", "_____no_output_____" ], [ "## **DATA Retrieval**\nData used in the analysis that follows was retrieved from three key sources: \n1. **The Barcode of Life Data Systems (BOLD)**\n2. **The Global Biodiversity Information Facility (GBIF)**\n3. **The International Nucleotide Sequence Database Collaborative (INSDC) through GenBank** \n\nTo ensure that all the data that potentially belongs to the subfamily Phlebotominae was indeed downloaded, we downloaded all the data from the family Psychodidae and, in the downstream analysis, extracted only data from the subfamily Phlebotominae lineages. \nThe retrieval procedure was as shown below:\n1. **The Barcode of Life Data Systems (BOLD)**", "_____no_output_____" ] ], [ [ "%%bash\npwd\n. process_all_input_files.sh\n#bolddata_retrival -a <<EOF\n#psychodidae\n#EOF\ncd ../data/input/bold_data/psychodidae/\n#boldxml2tsv psychodidae.xml\ncp psychodidae.tsv ../../psychodidae\n#boldtsv_cleanup ../../psychodidae/psychodidae_sequences/psychodidae.tsv\n#", "/home/kibet/bioinformatics/github/co1_metaanalysis/code\n" ] ], [ [ "2. **The Global Biodiversity Information Facility (GBIF)**", "_____no_output_____" ] ], [ [ "%%bash\ncd ../data/input/gbif/psychodidae-9164/\n", "/home/kibet/bioinformatics/github/co1_metaanalysis/code\n" ] ], [ [ "3. **The International Nucleotide Sequence Database Collaborative (INSDC) through GenBank**", "_____no_output_____" ] ], [ [ "%%bash", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
cbf798a7cdcf366696d41faf5b742e891b05e2d7
9,173
ipynb
Jupyter Notebook
lab0_student.ipynb
au9ustine/org.edx.cs100
b514dd10efac0cfb2bd09e7f9dcbb128596dd494
[ "MIT" ]
null
null
null
lab0_student.ipynb
au9ustine/org.edx.cs100
b514dd10efac0cfb2bd09e7f9dcbb128596dd494
[ "MIT" ]
null
null
null
lab0_student.ipynb
au9ustine/org.edx.cs100
b514dd10efac0cfb2bd09e7f9dcbb128596dd494
[ "MIT" ]
null
null
null
31.307167
611
0.563829
[ [ [ "#![Spark Logo](http://spark-mooc.github.io/web-assets/images/ta_Spark-logo-small.png) + ![Python Logo](http://spark-mooc.github.io/web-assets/images/python-logo-master-v3-TM-flattened_small.png)\n# **First Notebook: Virtual machine test and assignment submission**\n#### This notebook will test that the virtual machine (VM) is functioning properly and will show you how to submit an assignment to the autograder. To move through the notebook just run each of the cells. You will not need to solve any problems to complete this lab. You can run a cell by pressing \"shift-enter\", which will compute the current cell and advance to the next cell, or by clicking in a cell and pressing \"control-enter\", which will compute the current cell and remain in that cell. At the end of the notebook you will export / download the notebook and submit it to the autograder.\n#### ** This notebook covers: **\n#### *Part 1:* Test Spark functionality\n#### *Part 2:* Check class testing library\n#### *Part 3:* Check plotting\n#### *Part 4:* Check MathJax formulas\n#### *Part 5:* Export / download and submit", "_____no_output_____" ], [ "### ** Part 1: Test Spark functionality **", "_____no_output_____" ], [ "#### ** (1a) Parallelize, filter, and reduce **", "_____no_output_____" ] ], [ [ "# Check that Spark is working\nlargeRange = sc.parallelize(xrange(100000))\nreduceTest = largeRange.reduce(lambda a, b: a + b)\nfilterReduceTest = largeRange.filter(lambda x: x % 7 == 0).sum()\n\nprint reduceTest\nprint filterReduceTest\n\n# If the Spark jobs don't work properly these will raise an AssertionError\nassert reduceTest == 4999950000\nassert filterReduceTest == 714264285", "4999950000\n714264285\n" ] ], [ [ "#### ** (1b) Loading a text file **", "_____no_output_____" ] ], [ [ "# Check loading data with sc.textFile\nimport os.path\nbaseDir = os.path.join('data')\ninputPath = os.path.join('cs100', 'lab1', 'shakespeare.txt')\nfileName = os.path.join(baseDir, inputPath)\n\nrawData = sc.textFile(fileName)\nshakespeareCount = rawData.count()\n\nprint shakespeareCount\n\n# If the text file didn't load properly an AssertionError will be raised\nassert shakespeareCount == 122395", "122395\n" ] ], [ [ "### ** Part 2: Check class testing library **", "_____no_output_____" ], [ "#### ** (2a) Compare with hash **", "_____no_output_____" ] ], [ [ "# TEST Compare with hash (2a)\n# Check our testing library/package\n# This should print '1 test passed.' on two lines\nfrom test_helper import Test\n\ntwelve = 12\nTest.assertEquals(twelve, 12, 'twelve should equal 12')\nTest.assertEqualsHashed(twelve, '7b52009b64fd0a2a49e6d8a939753077792b0554',\n 'twelve, once hashed, should equal the hashed value of 12')", "1 test passed.\n1 test passed.\n" ] ], [ [ "#### ** (2b) Compare lists **", "_____no_output_____" ] ], [ [ "# TEST Compare lists (2b)\n# This should print '1 test passed.'\nunsortedList = [(5, 'b'), (5, 'a'), (4, 'c'), (3, 'a')]\nTest.assertEquals(sorted(unsortedList), [(3, 'a'), (4, 'c'), (5, 'a'), (5, 'b')],\n 'unsortedList does not sort properly')", "1 test passed.\n" ] ], [ [ "### ** Part 3: Check plotting **", "_____no_output_____" ], [ "#### ** (3a) Our first plot **\n#### After executing the code cell below, you should see a plot with 50 blue circles. 
The circles should start at the bottom left and end at the top right.", "_____no_output_____" ] ], [ [ "# Check matplotlib plotting\nimport matplotlib.pyplot as plt\nimport matplotlib.cm as cm\nfrom math import log\n\n# function for generating plot layout\ndef preparePlot(xticks, yticks, figsize=(10.5, 6), hideLabels=False, gridColor='#999999', gridWidth=1.0):\n plt.close()\n fig, ax = plt.subplots(figsize=figsize, facecolor='white', edgecolor='white')\n ax.axes.tick_params(labelcolor='#999999', labelsize='10')\n for axis, ticks in [(ax.get_xaxis(), xticks), (ax.get_yaxis(), yticks)]:\n axis.set_ticks_position('none')\n axis.set_ticks(ticks)\n axis.label.set_color('#999999')\n if hideLabels: axis.set_ticklabels([])\n plt.grid(color=gridColor, linewidth=gridWidth, linestyle='-')\n map(lambda position: ax.spines[position].set_visible(False), ['bottom', 'top', 'left', 'right'])\n return fig, ax\n\n# generate layout and plot data\nx = range(1, 50)\ny = [log(x1 ** 2) for x1 in x]\nfig, ax = preparePlot(range(5, 60, 10), range(0, 12, 1))\nplt.scatter(x, y, s=14**2, c='#d6ebf2', edgecolors='#8cbfd0', alpha=0.75)\nax.set_xlabel(r'$range(1, 50)$'), ax.set_ylabel(r'$\\log_e(x^2)$')\npass", "_____no_output_____" ] ], [ [ "### ** Part 4: Check MathJax Formulas **", "_____no_output_____" ], [ "#### ** (4a) Gradient descent formula **\n#### You should see a formula on the line below this one: $$ \\scriptsize \\mathbf{w}_{i+1} = \\mathbf{w}_i - \\alpha_i \\sum_j (\\mathbf{w}_i^\\top\\mathbf{x}_j - y_j) \\mathbf{x}_j \\,.$$\n \n#### This formula is included inline with the text and is $ \\scriptsize (\\mathbf{w}^\\top \\mathbf{x} - y) \\mathbf{x} $.", "_____no_output_____" ], [ "#### ** (4b) Log loss formula **\n#### This formula shows log loss for single point. Log loss is defined as: $$ \\begin{align} \\scriptsize \\ell_{log}(p, y) = \\begin{cases} -\\log (p) & \\text{if } y = 1 \\\\\\ -\\log(1-p) & \\text{if } y = 0 \\end{cases} \\end{align} $$", "_____no_output_____" ], [ "### ** Part 5: Export / download and submit **", "_____no_output_____" ], [ "#### ** (5a) Time to submit **", "_____no_output_____" ], [ "#### You have completed the lab. To submit the lab for grading you will need to download it from your IPython Notebook environment. You can do this by clicking on \"File\", then hovering your mouse over \"Download as\", and then clicking on \"Python (.py)\". This will export your IPython Notebook as a .py file to your computer.\n#### To upload this file to the course autograder, go to the edX website and find the page for submitting this assignment. Click \"Choose file\", then navigate to and click on the downloaded .py file. Now click the \"Open\" button and then the \"Check\" button. Your submission will be graded shortly and will be available on the page where you submitted. Note that when submission volumes are high, it may take as long as an hour to receive results.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
cbf7ba89695614274eb7c95a9f17e54662929f84
22,077
ipynb
Jupyter Notebook
a01_PySpark/f01_Pyspark_Solution_to_SQL57ProblemsBook/extra/qn56.ipynb
mindis/Big_Data_Analysis
b4aec3a0e285de5cac02ad7390712635a73a24db
[ "Apache-2.0" ]
null
null
null
a01_PySpark/f01_Pyspark_Solution_to_SQL57ProblemsBook/extra/qn56.ipynb
mindis/Big_Data_Analysis
b4aec3a0e285de5cac02ad7390712635a73a24db
[ "Apache-2.0" ]
null
null
null
a01_PySpark/f01_Pyspark_Solution_to_SQL57ProblemsBook/extra/qn56.ipynb
mindis/Big_Data_Analysis
b4aec3a0e285de5cac02ad7390712635a73a24db
[ "Apache-2.0" ]
1
2021-06-22T10:18:14.000Z
2021-06-22T10:18:14.000Z
34.603448
150
0.285637
[ [ [ "import numpy as np\nimport pandas as pd\n\n\ndf = pd.DataFrame({'orderid': [10315, 10318, 10321, 10473, 10621, 10253, 10541, 10645],\n 'customerid': ['ISLAT', 'ISLAT', 'ISLAT', 'ISLAT', 'ISLAT', 'HANAR', 'HANAR', 'HANAR'],\n 'orderdate': ['1996-09-26', '1996-10-01', '1996-10-03', '1997-03-13', '1997-08-05', '1996-07-10', '1997-05-19', '1997-08-26']})\n\n\ndf['orderdate'] = pd.to_datetime(df['orderdate'])\ndf", "_____no_output_____" ], [ "df['daysbetween'] = df.sort_values(['customerid', 'orderdate'])['orderdate'].diff().dt.days\ndf", "_____no_output_____" ], [ "df.groupby('customerid').shift(-1)\n ", "_____no_output_____" ], [ "df_final = df.join(df.groupby('customerid').shift(-1), \n lsuffix='_initial', rsuffix='_next')\n\ndf_final", "_____no_output_____" ], [ "\ndf_final.drop('daysbetween_initial', axis=1)\\\n .query('daysbetween_next <= 5 and daysbetween_next >=0')\n\n", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code" ] ]
cbf7ca85240f650506d659dfba0a697a307e6ac2
9,506
ipynb
Jupyter Notebook
p3_collab-compet/Tennis.ipynb
royveshovda/deep-reinforcement-learning
64ba7ef5ab44f095b7e8b29f6c4ff1585025981a
[ "MIT" ]
null
null
null
p3_collab-compet/Tennis.ipynb
royveshovda/deep-reinforcement-learning
64ba7ef5ab44f095b7e8b29f6c4ff1585025981a
[ "MIT" ]
null
null
null
p3_collab-compet/Tennis.ipynb
royveshovda/deep-reinforcement-learning
64ba7ef5ab44f095b7e8b29f6c4ff1585025981a
[ "MIT" ]
null
null
null
38.176707
317
0.573638
[ [ [ "# Collaboration and Competition\n\n---\n\nIn this notebook, you will learn how to use the Unity ML-Agents environment for the third project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program.\n\n### 1. Start the Environment\n\nWe begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).", "_____no_output_____" ] ], [ [ "from unityagents import UnityEnvironment\nimport numpy as np", "_____no_output_____" ] ], [ [ "Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.\n\n- **Mac**: `\"path/to/Tennis.app\"`\n- **Windows** (x86): `\"path/to/Tennis_Windows_x86/Tennis.exe\"`\n- **Windows** (x86_64): `\"path/to/Tennis_Windows_x86_64/Tennis.exe\"`\n- **Linux** (x86): `\"path/to/Tennis_Linux/Tennis.x86\"`\n- **Linux** (x86_64): `\"path/to/Tennis_Linux/Tennis.x86_64\"`\n- **Linux** (x86, headless): `\"path/to/Tennis_Linux_NoVis/Tennis.x86\"`\n- **Linux** (x86_64, headless): `\"path/to/Tennis_Linux_NoVis/Tennis.x86_64\"`\n\nFor instance, if you are using a Mac, then you downloaded `Tennis.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:\n```\nenv = UnityEnvironment(file_name=\"Tennis.app\")\n```", "_____no_output_____" ] ], [ [ "env = UnityEnvironment(file_name=\"Tennis_Linux/Tennis.x86_64\")", "INFO:unityagents:\n'Academy' started successfully!\nUnity Academy name: Academy\n Number of Brains: 1\n Number of External Brains : 1\n Lesson number : 0\n Reset Parameters :\n\t\t\nUnity brain name: TennisBrain\n Number of Visual Observations (per agent): 0\n Vector Observation space type: continuous\n Vector Observation space size (per agent): 8\n Number of stacked Vector Observation: 3\n Vector Action space type: continuous\n Vector Action space size (per agent): 2\n Vector Action descriptions: , \n" ] ], [ [ "Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.", "_____no_output_____" ] ], [ [ "# get the default brain\nbrain_name = env.brain_names[0]\nbrain = env.brains[brain_name]", "_____no_output_____" ] ], [ [ "### 2. Examine the State and Action Spaces\n\nIn this environment, two agents control rackets to bounce a ball over a net. If an agent hits the ball over the net, it receives a reward of +0.1. If an agent lets a ball hit the ground or hits the ball out of bounds, it receives a reward of -0.01. Thus, the goal of each agent is to keep the ball in play.\n\nThe observation space consists of 8 variables corresponding to the position and velocity of the ball and racket. Two continuous actions are available, corresponding to movement toward (or away from) the net, and jumping. 
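\n\nTo make the action space concrete: a joint action is a `(num_agents, action_size) = (2, 2)` array with entries clipped to `[-1, 1]` (a small added illustration; the random-agent loop later in this notebook builds exactly this kind of array):\n\n```python\nimport numpy as np\n\nnum_agents, action_size = 2, 2  # two rackets, two continuous controls each\nactions = np.clip(np.random.randn(num_agents, action_size), -1, 1)  # one row per agent\n```\n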
\n\nRun the code cell below to print some information about the environment.", "_____no_output_____" ] ], [ [ "# reset the environment\nenv_info = env.reset(train_mode=True)[brain_name]\n\n# number of agents \nnum_agents = len(env_info.agents)\nprint('Number of agents:', num_agents)\n\n# size of each action\naction_size = brain.vector_action_space_size\nprint('Size of each action:', action_size)\n\n# examine the state space \nstates = env_info.vector_observations\nstate_size = states.shape[1]\nprint('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))\nprint('The state for the first agent looks like:', states[0])", "Number of agents: 2\nSize of each action: 2\nThere are 2 agents. Each observes a state with length: 24\nThe state for the first agent looks like: [ 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. 0. 0.\n 0. 0. 0. 0. -6.65278625 -1.5\n -0. 0. 6.83172083 6. -0. 0. ]\n" ] ], [ [ "### 3. Take Random Actions in the Environment\n\nIn the next code cell, you will learn how to use the Python API to control the agents and receive feedback from the environment.\n\nOnce this cell is executed, you will watch the agents' performance as they select actions at random at each time step. A window should pop up that allows you to observe the agents.\n\nOf course, as part of the project, you'll have to change the code so that the agents are able to use their experiences to gradually choose better actions when interacting with the environment!", "_____no_output_____" ] ], [ [ "for i in range(1, 6): # play game for 5 episodes\n env_info = env.reset(train_mode=False)[brain_name] # reset the environment \n states = env_info.vector_observations # get the current state (for each agent)\n scores = np.zeros(num_agents) # initialize the score (for each agent)\n while True:\n actions = np.random.randn(num_agents, action_size) # select an action (for each agent)\n actions = np.clip(actions, -1, 1) # all actions between -1 and 1\n env_info = env.step(actions)[brain_name] # send all actions to the environment\n next_states = env_info.vector_observations # get next state (for each agent)\n rewards = env_info.rewards # get reward (for each agent)\n dones = env_info.local_done # see if episode finished\n scores += env_info.rewards # update the score (for each agent)\n states = next_states # roll over states to next time step\n if np.any(dones): # exit loop if episode finished\n break\n print('Score (max over agents) from episode {}: {}'.format(i, np.max(scores)))", "Score (max over agents) from episode 1: 0.09000000171363354\nScore (max over agents) from episode 2: 0.09000000171363354\nScore (max over agents) from episode 3: 0.10000000149011612\nScore (max over agents) from episode 4: 0.0\nScore (max over agents) from episode 5: 0.0\n" ] ], [ [ "When finished, you can close the environment.", "_____no_output_____" ] ], [ [ "env.close()", "_____no_output_____" ] ], [ [ "### 4. It's Your Turn!\n\nNow it's your turn to train your own agent to solve the environment! When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:\n```python\nenv_info = env.reset(train_mode=True)[brain_name]\n```", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
cbf7d13499da5c7b1bc807e86507ea1e6b0efa33
3,273
ipynb
Jupyter Notebook
notebook/prime_factorization.ipynb
puyopop/python-snippets
9d70aa3b2a867dd22f5a5e6178a5c0c5081add73
[ "MIT" ]
174
2018-05-30T21:14:50.000Z
2022-03-25T07:59:37.000Z
notebook/prime_factorization.ipynb
puyopop/python-snippets
9d70aa3b2a867dd22f5a5e6178a5c0c5081add73
[ "MIT" ]
5
2019-08-10T03:22:02.000Z
2021-07-12T20:31:17.000Z
notebook/prime_factorization.ipynb
puyopop/python-snippets
9d70aa3b2a867dd22f5a5e6178a5c0c5081add73
[ "MIT" ]
53
2018-04-27T05:26:35.000Z
2022-03-25T07:59:37.000Z
16.784615
54
0.411549
[ [ [ "import collections", "_____no_output_____" ], [ "def prime_factorize(n):\n a = []\n while n % 2 == 0:\n a.append(2)\n n //= 2\n f = 3\n while f * f <= n:\n if n % f == 0:\n a.append(f)\n n //= f\n else:\n f += 2\n if n != 1:\n a.append(n)\n return a", "_____no_output_____" ], [ "print(prime_factorize(1))", "[]\n" ], [ "print(prime_factorize(36))", "[2, 2, 3, 3]\n" ], [ "print(prime_factorize(840))", "[2, 2, 2, 3, 5, 7]\n" ], [ "c = collections.Counter(prime_factorize(840))\nprint(c)", "Counter({2: 3, 3: 1, 5: 1, 7: 1})\n" ], [ "print(c.keys())", "dict_keys([2, 3, 5, 7])\n" ], [ "print(c.values())", "dict_values([3, 1, 1, 1])\n" ], [ "print(c.items())", "dict_items([(2, 3), (3, 1), (5, 1), (7, 1)])\n" ], [ "print(list(c.keys()))", "[2, 3, 5, 7]\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbf7dce395563029d71e97ed89a0747e9189edb2
334,161
ipynb
Jupyter Notebook
notebooks/debug_icassp_convnet.ipynb
BirdVox/bv_context_adaptation
9bd446326ac927d72f7c333eac07ee0490fc3127
[ "MIT" ]
5
2018-10-17T21:17:26.000Z
2019-06-14T01:48:29.000Z
notebooks/debug_icassp_convnet.ipynb
BirdVox/bv_context_adaptation
9bd446326ac927d72f7c333eac07ee0490fc3127
[ "MIT" ]
null
null
null
notebooks/debug_icassp_convnet.ipynb
BirdVox/bv_context_adaptation
9bd446326ac927d72f7c333eac07ee0490fc3127
[ "MIT" ]
3
2018-12-22T00:04:43.000Z
2021-06-09T20:02:28.000Z
745.895089
81,210
0.524541
[ [ [ "import h5py\nimport keras\nimport numpy as np\nimport os\nimport random\nimport sys\nimport tensorflow as tf\n\nsys.path.append(\"../src\")\nimport localmodule\n\n\n# Define constants.\ndataset_name = localmodule.get_dataset_name()\nmodels_dir = localmodule.get_models_dir()\nunits = localmodule.get_units()\nn_input_hops = 104\nn_filters = [24, 48, 48]\nkernel_size = [5, 5]\npool_size = [2, 4]\nn_hidden_units = 64\n\n\n# Define and compile Keras model.\n# NB: the original implementation of Justin Salamon in ICASSP 2017 relies on\n# glorot_uniform initialization for all layers, and the optimizer is a\n# stochastic gradient descent (SGD) with a fixed learning rate of 0.1.\n# Instead, we use a he_uniform initialization for the layers followed\n# by rectified linear units (see He ICCV 2015), and replace the SGD by\n# the Adam adaptive stochastic optimizer (see Kingma ICLR 2014).\nmodel = keras.models.Sequential()\n\n# Layer 1\nbn = keras.layers.normalization.BatchNormalization(\n input_shape=(128, n_input_hops, 1))\nmodel.add(bn)\nconv1 = keras.layers.Convolution2D(n_filters[0], kernel_size,\n padding=\"same\", kernel_initializer=\"he_normal\", activation=\"relu\")\nmodel.add(conv1)\npool1 = keras.layers.MaxPooling2D(pool_size=pool_size)\nmodel.add(pool1)\n\n# Layer 2\nconv2 = keras.layers.Convolution2D(n_filters[1], kernel_size,\n padding=\"same\", kernel_initializer=\"he_normal\", activation=\"relu\")\nmodel.add(conv2)\npool2 = keras.layers.MaxPooling2D(pool_size=pool_size)\nmodel.add(pool2)\n\n# Layer 3\nconv3 = keras.layers.Convolution2D(n_filters[2], kernel_size,\n padding=\"same\", kernel_initializer=\"he_normal\", activation=\"relu\")\nmodel.add(conv3)\n\n# Layer 4\ndrop1 = keras.layers.Dropout(0.5)\nmodel.add(drop1)\nflatten = keras.layers.Flatten()\nmodel.add(flatten)\ndense1 = keras.layers.Dense(n_hidden_units,\n kernel_initializer=\"he_normal\", activation=\"relu\",\n kernel_regularizer=keras.regularizers.l2(0.01))\nmodel.add(dense1)\n\n# Layer 5\n# We put a single output instead of 43 in the original paper, because this\n# is binary classification instead of multilabel classification.\ndrop2 = keras.layers.Dropout(0.5)\nmodel.add(drop2)\ndense2 = keras.layers.Dense(1,\n kernel_initializer=\"normal\", activation=\"sigmoid\",\n kernel_regularizer=keras.regularizers.l2(0.0002))\nmodel.add(dense2)\n\n\n# Compile model, print model summary.\nmetrics = [\"accuracy\"]\n#model.compile(loss=\"binary_crossentropy\", optimizer=\"sgd\", metrics=metrics)\n#model.compile(loss=\"mse\", optimizer=\"adam\", metrics=metrics)\nmodel.compile(loss=\"mse\", optimizer=\"sgd\", metrics=metrics)\n#model.summary()\n\n\n# Train model.\nfold_units = [\"unit01\"]\naugs = [\"original\"]\naug_dict = localmodule.get_augmentations()\ndata_dir = localmodule.get_data_dir()\ndataset_name = localmodule.get_dataset_name()\nlogmelspec_name = \"_\".join([dataset_name, \"logmelspec\"])\nlogmelspec_dir = os.path.join(data_dir, logmelspec_name)\noriginal_dir = os.path.join(logmelspec_dir, \"original\")\nn_hops = 104\n\nXs = []\nys = []\nfor unit_str in units[:2]:\n unit_name = \"_\".join([dataset_name, \"original\", unit_str])\n unit_path = os.path.join(original_dir, unit_name + \".hdf5\")\n lms_container = h5py.File(unit_path)\n lms_group = lms_container[\"logmelspec\"]\n keys = list(lms_group.keys())\n\n for key in keys:\n X = lms_group[key]\n X_width = X.shape[1]\n first_col = int((X_width-n_hops) / 2)\n last_col = int((X_width+n_hops) / 2)\n X = X[:, first_col:last_col]\n X = np.array(X)[np.newaxis, :, :, 
np.newaxis]\n        Xs.append(X)\n        ys.append(np.float32(key.split(\"_\")[3]))\n    X = np.concatenate(Xs, axis=0)\n    y = np.array(ys)", "_____no_output_____" ], [ "X.shape", "_____no_output_____" ], [ "# MSE, ADAM\nmodel.fit(X[:,:,:,:], y[:], epochs=1, verbose=True)\nprint(model.evaluate(X, y))", "Epoch 1/1\n5852/5852 [==============================] - 278s - loss: 0.7086 - acc: 0.5050\n5852/5852 [==============================] - 72s\n[0.66134808402475398, 0.500000000015278]\n" ], [ "# MSE, SGD\nmodel.fit(X[:,:,:,:], y[:], epochs=1, verbose=True)\nprint(model.evaluate(X, y))", "Epoch 1/1\n15312/15312 [==============================] - 729s - loss: 1.3000 - acc: 0.5426
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b
\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b
\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b
\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b
\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b
\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b
\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b
\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b
\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b
\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b
\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b
\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\n 7264/15312 [=============>................] - ETA: 100s\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b
\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b
\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b
\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
```python
# MSE, SGD
model.fit(X[:,:,:,:], y[:], epochs=1, verbose=True)
print(model.evaluate(X, y))
```

Output:

```
Epoch 1/1
5852/5852 [==============================] - 277s - loss: 1.1353 - acc: 0.7239
```
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b
\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b
\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\n5852/5852 [==============================] - 72s 
\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b
\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b
\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\n[1.007095343758615, 0.89405331512122466]\n" ], [ "# BCE, SGD\nmodel.fit(X[:,:,:,:], y[:], epochs=1, verbose=True)\nprint(model.evaluate(X, y))", "Epoch 1/1\n5852/5852 [==============================] - 278s - loss: 8.5270 - acc: 0.5041 \b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b
\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\n5852/5852 [==============================] - 72s 
\n[8.9090316202733426, 0.500000000015278]\n" ], [ "# BCE, ADAM\nmodel.fit(X[:,:,:,:], y[:], epochs=1, verbose=True)\nprint(model.evaluate(X, y))", "Epoch 1/1\n5852/5852 [==============================] - 279s - loss: 1.2048 - acc: 0.5039 
\n5852/5852 [==============================] - 72s 
\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b
\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b
\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\
b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\n[0.76619562830569721, 0.7893028025829214]\n" ], [ "m = keras.models.Sequential()\nm.add(keras.layers.Dense(1, input_shape=(1,)))\nX = np.array([[0.0], [1.0]])\ny = np.array([0.0, 1.0])\nm.compile(optimizer=\"sgd\", loss=\"binary_crossentropy\")\nprint(m.layers[0].get_weights())\nm.fit(X, y, epochs=500, verbose=False)\nprint(m.predict(X))\nprint(m.layers[0].get_weights())", "[array([[-0.25976813]], dtype=float32), array([ 0.], dtype=float32)]\n[[ 0. 
]\n [-0.25976813]]\n[array([[-0.25976813]], dtype=float32), array([ 0.], dtype=float32)]\n" ], [ "import numpy as np\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout\n\n# Generate dummy data\nneg_X = np.random.randn(500, 2) + np.array([-2.0, 1.0])\npos_X = np.random.randn(500, 2) + np.array([1.0, -2.0])\nX = np.concatenate((neg_X, pos_X), axis=0)\nneg_Y = np.zeros((500,))\npos_Y = np.ones((500,))\nY = np.concatenate((neg_Y, pos_Y), axis=0)\n\nmodel = Sequential()\nmodel.add(Dense(10, input_dim=2, activation='relu'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(10, activation='relu'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(1, activation='sigmoid'))\n\nmodel.compile(loss='binary_crossentropy',\n optimizer='rmsprop',\n metrics=['accuracy'])\n\nprint(model.layers[0].get_weights())\nmodel.fit(X, Y, epochs=20, batch_size=100, verbose=False)\nprint(model.layers[0].get_weights())", "[array([[-0.11947656, 0.61539656, 0.35237628, -0.42045242, -0.10635978,\n -0.70158517, -0.39677969, -0.5584265 , -0.31452945, -0.64936763],\n [-0.31949806, -0.60264212, -0.04683053, 0.12245125, -0.41256908,\n -0.11353838, -0.41940328, -0.68450022, -0.0062207 , -0.06409407]], dtype=float32), array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)]\n[array([[-0.21787041, 0.59552658, 0.28620219, -0.59701407, -0.26289412,\n -0.8668977 , -0.57356513, -0.64596599, -0.13628362, -0.47922936],\n [-0.21896617, -0.63976008, 0.02422422, 0.29651019, -0.24379101,\n 0.06879032, -0.23495869, -0.5479548 , -0.19217376, -0.21270375]], dtype=float32), array([-0.02156365, 0.04450258, -0.07965007, 0.16395904, -0.15797964,\n 0.05222137, -0.1010966 , -0.0896877 , -0.0907311 , -0.16821335], dtype=float32)]\n" ], [ "from matplotlib import pyplot as plt\n%matplotlib inline\nplt.figure()\nplt.plot(neg_X[:, 0], neg_X[:, 1], '+');\nplt.plot(pos_X[:, 0], pos_X[:, 1], '+');", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbf7edf33a4d027c4a748cb10f3e92707c2ef81f
155,208
ipynb
Jupyter Notebook
Tutorial/Plotting/Tutorial_Plotting_tweets_mentioning.ipynb
haller218/AnacondaEstudo
fcf1cfd1e87e731408c8dda8356d73320426b238
[ "MIT" ]
null
null
null
Tutorial/Plotting/Tutorial_Plotting_tweets_mentioning.ipynb
haller218/AnacondaEstudo
fcf1cfd1e87e731408c8dda8356d73320426b238
[ "MIT" ]
null
null
null
Tutorial/Plotting/Tutorial_Plotting_tweets_mentioning.ipynb
haller218/AnacondaEstudo
fcf1cfd1e87e731408c8dda8356d73320426b238
[ "MIT" ]
null
null
null
171.500552
33,092
0.887209
[ [ [ "# Data Row\n\n## Analisys of Tweets of Trump, Clinton, and Sander\n\n#### Ref:\n", "_____no_output_____" ], [ "## https://www.dataquest.io/blog/matplotlib-tutorial/", "_____no_output_____" ], [ "--------------------------------------------", "_____no_output_____" ], [ "## Exploring tweets with Pandas", "_____no_output_____" ] ], [ [ "import pandas as pd", "_____no_output_____" ], [ "tweets = pd.read_csv(\"tweets.csv\")", "_____no_output_____" ], [ "tweets.head()", "_____no_output_____" ] ], [ [ "------------------------------------------------", "_____no_output_____" ], [ "## Generating a candidates column", "_____no_output_____" ] ], [ [ "def get_candidate(row):\n candidates = []\n text = row[\"text\"].lower()\n if \"clinton\" in text or \"hillary\" in text:\n candidates.append(\"clinton\")\n if \"trump\" in text or \"donald\" in text:\n candidates.append(\"trump\")\n if \"sanders\" in text or \"bernie\" in text:\n candidates.append(\"sanders\")\n return \",\".join(candidates)\n\ntweets[\"candidate\"] = tweets.apply(get_candidate,axis=1)", "_____no_output_____" ] ], [ [ "# Importing matplotlib", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib inline", "_____no_output_____" ] ], [ [ "### Making a bar plot", "_____no_output_____" ] ], [ [ "counts = tweets[\"candidate\"].value_counts()\nplt.bar(range(len(counts)), counts)\nplt.show()", "_____no_output_____" ], [ "print(counts)", "trump 119998\nclinton,trump 30521\n 25429\nsanders 25351\nclinton 22746\nclinton,sanders 6044\nclinton,trump,sanders 4219\ntrump,sanders 3172\nName: candidate, dtype: int64\n" ] ], [ [ "--------------------------------------------", "_____no_output_____" ], [ "## Customizing plots", "_____no_output_____" ] ], [ [ "from datetime import datetime", "_____no_output_____" ], [ "tweets[\"created\"] = pd.to_datetime(tweets[\"created\"])\ntweets[\"user_created\"] = pd.to_datetime(tweets[\"user_created\"])\n\ntweets[\"user_age\"] = tweets[\"user_created\"].apply(lambda x: (datetime.now() - x).total_seconds() / 3600 / 24 / 365)\nplt.hist(tweets[\"user_age\"])", "_____no_output_____" ], [ "plt.show()", "_____no_output_____" ] ], [ [ "### Adding labels", "_____no_output_____" ] ], [ [ "plt.hist(tweets[\"user_age\"])\nplt.title(\"Tweets mentioning candidates\")\nplt.xlabel(\"Twitter account age in years\")\nplt.ylabel(\"# of tweets\")\nplt.show()", "_____no_output_____" ] ], [ [ "### Making a stacked histogram", "_____no_output_____" ] ], [ [ "cl_tweets = tweets[\"user_age\"][tweets[\"candidate\"] == \"clinton\"]\nsa_tweets = tweets[\"user_age\"][tweets[\"candidate\"] == \"sanders\"]\ntr_tweets = tweets[\"user_age\"][tweets[\"candidate\"] == \"trump\"]\nplt.hist([\n cl_tweets, \n sa_tweets, \n tr_tweets\n ], \n stacked=True, \n label=[\"clinton\", \"sanders\", \"trump\"]\n)\nplt.legend()\nplt.title(\"Tweets mentioning each candidate\")\nplt.xlabel(\"Twitter account age in years\")\nplt.ylabel(\"# of tweets\")\nplt.show()", "_____no_output_____" ] ], [ [ "### Annotating the histogram", "_____no_output_____" ] ], [ [ "plt.hist([\n cl_tweets, \n sa_tweets, \n tr_tweets\n ], \n stacked=True, \n label=[\"clinton\", \"sanders\", \"trump\"]\n)\nplt.legend()\nplt.title(\"Tweets mentioning each candidate\")\nplt.xlabel(\"Twitter account age in years\")\nplt.ylabel(\"# of tweets\")\nplt.annotate('More Trump tweets', xy=(1, 35000), xytext=(2, 35000),\n arrowprops=dict(facecolor='black'))\nplt.show()", "_____no_output_____" ] ], [ [ "----------------------------------------------", 
"_____no_output_____" ], [ "# Extracting colors", "_____no_output_____" ] ], [ [ "import matplotlib.colors as colors\n\ntweets[\"red\"] = tweets[\"user_bg_color\"].apply(lambda x: colors.hex2color('#{0}'.format(x))[0])\ntweets[\"blue\"] = tweets[\"user_bg_color\"].apply(lambda x: colors.hex2color('#{0}'.format(x))[2])\n", "_____no_output_____" ] ], [ [ "### Creating the plot", "_____no_output_____" ] ], [ [ "fig, axes = plt.subplots(nrows=2, ncols=2)\nax0, ax1, ax2, ax3 = axes.flat\n\nax0.hist(tweets[\"red\"])\nax0.set_title('Red in backgrounds')\n\nax1.hist(tweets[\"red\"][tweets[\"candidate\"] == \"trump\"].values)\nax1.set_title('Red in Trump tweeters')\n\nax2.hist(tweets[\"blue\"])\nax2.set_title('Blue in backgrounds')\n\nax3.hist(tweets[\"blue\"][tweets[\"candidate\"] == \"trump\"].values)\nax3.set_title('Blue in Trump tweeters')\n\nplt.tight_layout()\nplt.show()", "_____no_output_____" ] ], [ [ "### Removing common background colors", "_____no_output_____" ] ], [ [ "tweets[\"user_bg_color\"].value_counts()", "_____no_output_____" ], [ "tc = tweets[~tweets[\"user_bg_color\"].isin([\"C0DEED\", \"000000\", \"F5F8FA\"])]\n\ndef create_plot(data):\n fig, axes = plt.subplots(nrows=2, ncols=2)\n ax0, ax1, ax2, ax3 = axes.flat\n\n ax0.hist(data[\"red\"])\n ax0.set_title('Red in backgrounds')\n\n ax1.hist(data[\"red\"][data[\"candidate\"] == \"trump\"].values)\n ax1.set_title('Red in Trump tweets')\n\n ax2.hist(data[\"blue\"])\n ax2.set_title('Blue in backgrounds')\n\n ax3.hist(data[\"blue\"][data[\"candidate\"] == \"trump\"].values)\n ax3.set_title('Blue in Trump tweeters')\n\n plt.tight_layout()\n plt.show()\n\ncreate_plot(tc)", "_____no_output_____" ] ], [ [ "## Plotting sentiment", "_____no_output_____" ] ], [ [ "gr = tweets.groupby(\"candidate\").agg([np.mean, np.std])\n\nfig, axes = plt.subplots(nrows=2, ncols=1, figsize=(7, 7))\nax0, ax1 = axes.flat\n\nstd = gr[\"polarity\"][\"std\"].iloc[1:]\nmean = gr[\"polarity\"][\"mean\"].iloc[1:]\nax0.bar(range(len(std)), std)\nax0.set_xticklabels(std.index, rotation=45)\nax0.set_title('Standard deviation of tweet sentiment')\n\nax1.bar(range(len(mean)), mean)\nax1.set_xticklabels(mean.index, rotation=45)\nax1.set_title('Mean tweet sentiment')\n\nplt.tight_layout()\nplt.show()", "_____no_output_____" ] ], [ [ "---------------------------------------------------", "_____no_output_____" ], [ "# Generating a side by side bar plot", "_____no_output_____" ], [ "#### Generating tweet lengths", "_____no_output_____" ] ], [ [ "def tweet_lengths(text):\n if len(text) < 100:\n return \"short\"\n elif 100 <= len(text) <= 135:\n return \"medium\"\n else:\n return \"long\"\n\ntweets[\"tweet_length\"] = tweets[\"text\"].apply(tweet_lengths)\n\ntl = {}\nfor candidate in [\"clinton\", \"sanders\", \"trump\"]:\n tl[candidate] = tweets[\"tweet_length\"][tweets[\"candidate\"] == candidate].value_counts()\n", "_____no_output_____" ] ], [ [ "#### Plotting", "_____no_output_____" ] ], [ [ "fig, ax = plt.subplots()\nwidth = .5\nx = np.array(range(0, 6, 2))\nax.bar(x, tl[\"clinton\"], width, color='g')\nax.bar(x + width, tl[\"sanders\"], width, color='b')\nax.bar(x + (width * 2), tl[\"trump\"], width, color='r')\n\nax.set_ylabel('# of tweets')\nax.set_title('Number of Tweets per candidate by length')\nax.set_xticks(x + (width * 1.5))\nax.set_xticklabels(('long', 'medium', 'short'))\nax.set_xlabel('Tweet length')\nplt.show()", "_____no_output_____" ] ], [ [ "## Next steps", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
cbf7f2cb10a6e108189c569ae5f4751fb373851f
15,920
ipynb
Jupyter Notebook
tutorial1_classical_reg_methods_TV_TGV.ipynb
ckolbPTB/TES_21_22_Tutorials
764cd34e7248830e2c53688fd0a4882ead8d3860
[ "Apache-2.0" ]
2
2021-09-08T11:31:07.000Z
2021-09-08T11:45:45.000Z
tutorial1_classical_reg_methods_TV_TGV.ipynb
MATHplus-Young-Academy/TES_21_22_Tutorials
3d8b12f40cf90f8471e94ef02160523857ded2ba
[ "Apache-2.0" ]
null
null
null
tutorial1_classical_reg_methods_TV_TGV.ipynb
MATHplus-Young-Academy/TES_21_22_Tutorials
3d8b12f40cf90f8471e94ef02160523857ded2ba
[ "Apache-2.0" ]
4
2021-11-02T17:16:06.000Z
2022-01-24T18:39:08.000Z
28.84058
371
0.56407
[ [ [ "For all numerical experiments, we will be using the Chambolle-Pock primal-dual algorithm - details can be found on:\n1. [A First-order Primal-dual Algorithm for Convex Problems with Applications to Imaging](https://link.springer.com/article/10.1007/s10851-010-0251-1), A. Chambolle, T. Pock, Journal of Mathematical Imaging and Vision (2011). [PDF](https://hal.archives-ouvertes.fr/hal-00490826/document)\n2. [Recovering Piecewise Smooth Multichannel Images by Minimization of Convex Functionals with Total Generalized Variation Penalty](https://link.springer.com/chapter/10.1007/978-3-642-54774-4_3), K. Bredies, Efficient algorithms for global optimization methods in computer vision (2014). [PDF](https://imsc.uni-graz.at/mobis/publications/SFB-Report-2012-006.pdf)\n3. [Second Order Total Generalized Variation (TGV) for MRI](https://onlinelibrary.wiley.com/doi/full/10.1002/mrm.22595), F. Knoll, K. Bredies, T. Pock, R. Stollberger (2010). [PDF](https://onlinelibrary.wiley.com/doi/epdf/10.1002/mrm.22595)\n\nIn order to compute the spatia dependent regularization weights we follow:\n\n4. [Dualization and Automatic Distributed Parameter Selection of Total Generalized Variation via Bilevel Optimization](https://arxiv.org/pdf/2002.05614.pdf), M. Hintermüller, K. Papafitsoros, C.N. Rautenberg, H. Sun, arXiv preprint, (2020)", "_____no_output_____" ], [ "# Huber Total Variation Denoising", "_____no_output_____" ], [ "We are solving the discretized version of the following minimization problem\n\\begin{equation}\\label{L2-TV}\n\\min_{u} \\int_{\\Omega} (u-f)^{2}dx + \\alpha \\int_{\\Omega} \\varphi_{\\gamma}(\\nabla u)dx\n\\end{equation}\nwere $\\phi_{\\gamma}:\\mathbb{R}^{d}\\to \\mathbb{R}^{+}$ with \n\\begin{equation}\n\\phi_{\\gamma}(v)=\n\\begin{cases}\n|v|-\\frac{1}{2}\\gamma & \\text{ if } |v|\\ge \\gamma,\\\\\n\\frac{1}{2\\gamma}|v(x)|^{2}& \\text{ if } |v|< \\gamma.\\\\\n\\end{cases}\n\\end{equation}\n\n\n\n\n\n", "_____no_output_____" ], [ "## Import data...", "_____no_output_____" ] ], [ [ "import scipy.io as sio\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nmat_contents = sio.loadmat('tutorial1_classical_reg_methods/parrot')\nclean=mat_contents['parrot']\nf=mat_contents['parrot_noisy_01']\n\nplt.figure(figsize = (7,7)) \nimgplot2 = plt.imshow(clean)\nimgplot2.set_cmap('gray')\n\nplt.figure(figsize = (7,7)) \nimgplot2 = plt.imshow(f)\nimgplot2.set_cmap('gray')\n\nfrom tutorial1_classical_reg_methods.Tutorial_Codes import psnr, reproject, dxm, dym, dxp, dyp, function_TGV_denoising_CP, P_a_Huber, function_HuberTV_denoising_CP\n", "_____no_output_____" ] ], [ [ "## Task 1", "_____no_output_____" ], [ "Choose different values for $\\alpha$ and $\\gamma$ and interprent your results:\n- Fix $\\gamma$ small, e.g. $\\gamma=0.01$ and play with the values of $\\alpha$. What do you observe for large $\\alpha$? What for small?\n- Fix $\\alpha$ and play with the values of $\\gamma$. What do you observe for large $\\gamma$? 
What for small?\n", "_____no_output_____" ] ], [ [ "alpha=0.085\ngamma=0.001\nuTV = function_HuberTV_denoising_CP(f,clean, alpha, gamma,1000)", "_____no_output_____" ], [ "uTikhonov = function_HuberTV_denoising_CP(f,clean, 5, 2,1000)", "_____no_output_____" ] ], [ [ "# Total Generalized Variation Denoising", "_____no_output_____" ], [ "We are solving the discretized version of the following minimization problem\n\\begin{equation}\\label{L2-TGV}\n\\min_{u} \\int_{\\Omega} (u-f)^{2}dx + TGV_{\\alpha,\\beta}(u)\n\\end{equation}\n\nwhere \n\\begin{equation}\nTGV_{\\alpha,\\beta}(u)=\\min_{w} \\alpha \\int_{\\Omega} |\\nabla u-w|dx + \\beta \\int_{\\Omega} |Ew|dx\n\\end{equation}", "_____no_output_____" ], [ "## Task 2a", "_____no_output_____" ], [ "Choose different values for $\\alpha$ and $\\beta$ and solve the TGV denoising minimization problem.\n\n- What happens for small $\\alpha$ and large $\\beta$?\n- What happens for large $\\alpha$ and small $\\beta$?\n- What happens where both parameters are small/large?\n- Try to find the combination of parameters that gives the highest PSNR value.", "_____no_output_____" ] ], [ [ "#alpha=0.085\n#beta=0.15\n\nalpha=0.085\nbeta=0.15\n\nuTGV = function_TGV_denoising_CP(f,clean, alpha, beta, 500)", "_____no_output_____" ] ], [ [ "## Task 2b", "_____no_output_____" ], [ "Import the following spatial dependent regularization weights, which are taken from this work:\n\n- [Dualization and Automatic Distributed Parameter Selection of Total Generalized Variation via Bilevel Optimization](https://arxiv.org/pdf/2002.05614.pdf), M. Hintermüller, K. Papafitsoros, C.N. Rautenberg, H. Sun, arXiv preprint, (2020)", "_____no_output_____" ] ], [ [ "weight_contents = sio.loadmat('tutorial1_classical_reg_methods/spatial_dependent_weights')\nalpha_spatial=weight_contents['TGV_alpha_spatial']\nbeta_spatial=weight_contents['TGV_beta_spatial']\n\n#plt.figure(figsize = (7,7)) \n#imgplot2 = plt.imshow(alpha_spatial)\n#imgplot2.set_cmap('gray')\n\n#plt.figure(figsize = (7,7)) \n#imgplot2 = plt.imshow(beta_spatial)\n#imgplot2.set_cmap('gray')\n\nfrom mpl_toolkits.mplot3d import Axes3D\n\n(n,m)=alpha_spatial.shape\nx=range(n)\ny=range(m)\nX, Y = np.meshgrid(x, y) \nhalpha = plt.figure(figsize = (7,7))\nh_alpha = halpha.add_subplot(111, projection='3d')\nh_alpha.plot_surface(X, Y, alpha_spatial)\n\nhbeta = plt.figure(figsize = (7,7))\nh_beta = hbeta.add_subplot(111, projection='3d')\nh_beta.plot_surface(X, Y, beta_spatial)\n\nhclean = plt.figure(figsize = (7,7))\nh_clean = hclean.add_subplot(111, projection='3d')\nh_clean.plot_surface(X, Y, clean)", "_____no_output_____" ] ], [ [ "And run again the algorithm with this weight:", "_____no_output_____" ] ], [ [ "uTGVspatial = function_TGV_denoising_CP(f,clean, alpha_spatial, beta_spatial, 500)", "_____no_output_____" ] ], [ [ "Now you can see all the reconstructions together:\n", "_____no_output_____" ] ], [ [ "plt.rcParams['figure.figsize'] = np.array([4, 3])*3\nplt.rcParams['figure.dpi'] = 120\nfig, axs = plt.subplots(ncols=3, nrows=2)\n\n\n# remove ticks from plot\nfor ax in axs.flat:\n ax.set(xticks=[], yticks=[])\n\naxs[0,0].imshow(clean, cmap='gray')\naxs[0,0].set(xlabel='Clean') \n \naxs[0,1].imshow(f, cmap='gray')\naxs[0,1].set(xlabel='Noisy, PSNR = ' + str(np.around(psnr(f, clean),decimals=2)))\n\naxs[0,2].imshow(uTikhonov, cmap='gray')\naxs[0,2].set(xlabel='Tikhonov, PSNR = ' + str(np.around(psnr(uTikhonov, clean),decimals=2)))\n\n\naxs[1,0].imshow(uTV, cmap='gray')\naxs[1,0].set(xlabel='TV, PSNR = ' + 
str(np.around(psnr(uTV, clean),decimals=2)))\n\n\naxs[1,1].imshow(uTGV, cmap='gray')\naxs[1,1].set(xlabel = 'TGV, PSNR = ' + str(np.around(psnr(uTGV, clean),decimals=2)))\n\naxs[1,2].imshow(uTGVspatial, cmap='gray')\naxs[1,2].set(xlabel = 'TGV spatial, PSNR = ' + str(np.around(psnr(uTGVspatial, clean),decimals=2)))", "_____no_output_____" ] ], [ [ "# TV and TGV MRI reconstruction", "_____no_output_____" ], [ "Here we will be solving the discretized version of the following minimization problem\n\\begin{equation}\n\\min_{u} \\int_{\\Omega} (S \\circ F u-g)^{2}dx + \\alpha TV(u)\n\\end{equation}\nand \n\\begin{equation}\n\\min_{u} \\int_{\\Omega} (S \\circ F u-g)^{2}dx + TGV_{\\alpha,\\beta}(u)\n\\end{equation}\n\nThe code for the examples below was kindly provided by Clemens Sirotenko.", "_____no_output_____" ], [ "## Import data", "_____no_output_____" ] ], [ [ "from tutorial1_classical_reg_methods.Tutorial_Codes import normalize, subsampling, subsampling_transposed, compute_differential_operators, function_TV_MRI_CP, function_TGV_MRI_CP \nfrom scipy import sparse\nimport scipy.sparse.linalg\n\nimage=np.load('tutorial1_classical_reg_methods/img_example.npy')\nimage=np.abs(image[:,:,3])\nimage = normalize(image)\n\nplt.figure(figsize = (7,7)) \nimgplot2 = plt.imshow(image)\nimgplot2.set_cmap('gray')\n\n", "_____no_output_____" ] ], [ [ "## Simulate noisy data and subsampled data", "_____no_output_____" ], [ "Create noisy data $ S \\circ F x + \\varepsilon = y^{\\delta}$ where $x$ is th clean image and $ \\varepsilon \\sim \\mathcal{N}(0,\\sigma^2)$ normal distributed centered complex noise", "_____no_output_____" ] ], [ [ "mask = np.ones(np.shape(image))\nmask[:,1:-1:3] = 0\n\nFx = np.fft.fft2(image,norm='ortho') #ortho means that the fft2 is unitary\n(M,N) = image.shape\nrate = 0.039 ##noise rate\nnoise = np.random.randn(M,N) + (1j)*np.random.randn(M,N) #cmplx noise\ndistorted_full = Fx + rate*noise \ndistorted = subsampling(distorted_full, mask)\nzero_filling = np.real(np.fft.ifft2(subsampling_transposed(distorted, mask), norm = 'ortho'))\n\nplt.figure(figsize = (7,7)) \nimgplot2 = plt.imshow(mask)\nimgplot2.set_cmap('gray')\n\nplt.figure(figsize = (7,7)) \nimgplot2 = plt.imshow(zero_filling)\nimgplot2.set_cmap('gray')", "_____no_output_____" ] ], [ [ "## TV MRI reconstruction", "_____no_output_____" ] ], [ [ "x_0 = zero_filling\ndata = distorted\nalpha = 0.025\ntau = 1/np.sqrt(12)\nsigma = tau\nh = 1 \nmax_it = 3000\ntol = 1e-4 # algorithm stops if |x_k - x_{k+1}| < tol\nx_TV = function_TV_MRI_CP(data,image,mask,x_0,tau,sigma,h,max_it,tol,alpha)", "_____no_output_____" ], [ "plt.figure(figsize = (7,7)) \nimgplot2 = plt.imshow(x_TV)\nimgplot2.set_cmap('gray')", "_____no_output_____" ] ], [ [ "## TGV MRI reconstruction", "_____no_output_____" ] ], [ [ "alpha = 0.02\nbeta = 0.035\n\nx_0 = zero_filling\ndata = distorted\ntau = 1/np.sqrt(12)\nsigma = tau\nlambda_prox = 1\nh = 1 \ntol = 1e-4\nmax_it = 2500\nx_TGV = function_TGV_MRI_CP(data,image, mask,x_0,tau,sigma,lambda_prox,h,max_it,tol,beta,alpha)", "_____no_output_____" ], [ "plt.figure(figsize = (7,7)) \nimgplot2 = plt.imshow(x_TGV)\nimgplot2.set_cmap('gray')", "_____no_output_____" ] ], [ [ "Now you can see all the reconstructions together:", "_____no_output_____" ] ], [ [ "plt.rcParams['figure.figsize'] = np.array([2, 2])*5\nplt.rcParams['figure.dpi'] = 120\nfig, axs = plt.subplots(ncols=2, nrows=2)\n\n\n# remove ticks from plot\nfor ax in axs.flat:\n ax.set(xticks=[], yticks=[])\n\naxs[0,0].imshow(normalize(image), 
cmap='gray')\naxs[0,0].set(xlabel='Clean Image')\n\naxs[1,0].imshow(normalize(x_TV), cmap='gray')\naxs[1,0].set(xlabel='TV Reconstruction, PSNR = ' + str(np.around(psnr(x_TV, image),decimals=2)))\n\naxs[0,1].imshow(normalize(x_0), cmap='gray')\naxs[0,1].set(xlabel = 'Zero Filling Solution , PSNR = ' + str(np.around(psnr(x_0, image),decimals=2)))\n\naxs[1,1].imshow(normalize(x_TGV), cmap='gray')\naxs[1,1].set(xlabel='TGV Reconstruction , PSNR = ' + str(np.around(psnr(x_TGV, image),decimals=2)))\n\n\n\n\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
cbf800e4db480c6a817f167159e42df92c854e46
357,024
ipynb
Jupyter Notebook
data/.ipynb_checkpoints/data_test-checkpoint.ipynb
zero-or-one/URP-Summer-Fall-2021
042fc18ba7db13754264e66de4330d6f7376b9c3
[ "MIT" ]
null
null
null
data/.ipynb_checkpoints/data_test-checkpoint.ipynb
zero-or-one/URP-Summer-Fall-2021
042fc18ba7db13754264e66de4330d6f7376b9c3
[ "MIT" ]
null
null
null
data/.ipynb_checkpoints/data_test-checkpoint.ipynb
zero-or-one/URP-Summer-Fall-2021
042fc18ba7db13754264e66de4330d6f7376b9c3
[ "MIT" ]
null
null
null
222.168015
62,508
0.919787
[ [ [ "'''Visualize all datasets and methods'''\nimport sys\nimport os\nfrom data import get_dataset, dummy_clusters, dummy_half_doughnuts, dummy_linear_points, dummy_polynomial_points, prepare_csv\nfrom data_utils import show_random, AddNoise, remove_random, remove_class, combine_datasets\nfrom matplotlib import pyplot as plt\nimport numpy as np\n%matplotlib inline", "_____no_output_____" ], [ "train, val, test = get_dataset(\"mnist\")\nshow_random(test, 5)", "Dataset sizes: \t train: 50 \t val: 10 \t test: 10\nBatch size: \t 16\ntorch.Size([1, 32, 32])\nLabel: tensor(3)\n" ], [ "# normalised, so not viewed properly\ntrain, val, test = get_dataset(\"cifar10\")\nshow_random(test, 5)", "Files already downloaded and verified\n" ], [ "train, val, test = get_dataset(\"cifar100\")\nshow_random(test, 5)", "Files already downloaded and verified\n" ], [ "train, val, test = get_dataset(\"fashion-mnist\")\nshow_random(test, 5)\n", "Dataset sizes: \t train: 50 \t val: 10 \t test: 10\nBatch size: \t 16\ntorch.Size([1, 32, 32])\nLabel: tensor(3)\n" ], [ "train, val, test = get_dataset(\"csv\")\nfor data, label in train:\n print(\"data\", data[5])\n print(\"label\", label[5])\n break", "data tensor([ 7.0000, 0.5500, 0.1300, 2.2000, 0.0750, 15.0000, 35.0000, 0.9959,\n 3.3600, 0.5900, 9.7000], dtype=torch.float64)\nlabel tensor([3.], dtype=torch.float64)\n" ], [ "(x1, y1), (x2, y2) = dummy_clusters()\nplt.scatter(x1, y1)\nplt.scatter(x2, y2)", "_____no_output_____" ], [ "(x1, y1), (x2, y2) = dummy_half_doughnuts(500, 500)\nplt.scatter(x1, y1)\nplt.scatter(x2, y2)", "_____no_output_____" ], [ "X, Y = dummy_linear_points((100, 1))\n\nfig = plt.figure(figsize=(8, 6))\nax = fig.add_subplot(111, projection='3d')\nxs = X[:, 0]\nys = X[:, 1]\nzs = Y\nax.scatter(xs, ys, zs, s=50, alpha=0.6, edgecolors='w')\n\nax.set_zlabel('Y')", "_____no_output_____" ], [ "X, Y, deg = dummy_polynomial_points((100, 1), 3)\n\nfig = plt.figure(figsize=(8, 6))\nplt.scatter(X, Y)\n#ax = fig.add_subplot(111, projection='3d')\n#xs = X[:]\n#ys = Y\n#ax.scatter(xs, ys, s=50, alpha=0.6, edgecolors='w')\n\n#ax.set_zlabel('Y')", "_____no_output_____" ], [ "# noise into data\nnoise = AddNoise(mean=0, std=0.2)\n\ntrain, val, test = get_dataset(\"mnist\")\n\nfor test_images, test_labels in train:\n img = test_images[0]\n img = noise.encodes(img)\n print(img.size())\n img = img.view(32, 32, -1)\n plt.axis('off')\n plt.imshow(img)\n plt.show() \n break", "Dataset sizes: \t train: 50 \t val: 10 \t test: 10\nBatch size: \t 16\ntorch.Size([1, 32, 32])\n" ], [ "noisy_img = noise.generate()\nnoisy_img = noisy_img.view(32, 32, -1)\nplt.axis('off')\nplt.imshow(noisy_img)\nplt.show()", "_____no_output_____" ], [ "train, val, test = get_dataset(\"mnist\")\nds = remove_random(train, 5)\n\nprint(\"Initial length: \", len(train))\nprint(\"After separating:\", len(ds))\nshow_random(test, 5)", "Dataset sizes: \t train: 50 \t val: 10 \t test: 10\nBatch size: \t 16\nInitial length: 4\nAfter separating: 1\ntorch.Size([1, 32, 32])\nLabel: tensor(2)\n" ], [ "forget, retain = remove_class(train, [1])\nfor img, lab in forget:\n print(\"lab: \", lab)\nprint(\"--FORGET--\")\nshow_random(forget, 5)\nprint(\"--RETAIN--\")\nshow_random(retain, 5)", "lab: tensor([1, 1, 1])\n--FORGET--\ntorch.Size([1, 32, 32])\nLabel: tensor(1)\n" ], [ "noise = AddNoise(mean=0, std=0.2)\ntrain, val, test = get_dataset(\"mnist\")\nnval = noise.encode_data(val)\nntest = noise.encode_data(test)\nshow_random(nval, 5)", "Dataset sizes: \t train: 50 \t val: 10 \t test: 10\nBatch size: \t 
16\ntorch.Size([1, 32, 32])\nLabel: tensor(7)\n" ], [ "combine = combine_datasets(ntest, nval)\nprint(\"ntest: \", len(ntest))\nprint(\"nval: \", len(nval))\nprint(\"combine: \", len(combine))", "ntest: 1\nnval: 1\ncombine: 2\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
cbf802079a8c7f60bbbd3c1afe1d98f231777d71
348,852
ipynb
Jupyter Notebook
notebooks/Figure. X Inactivation.ipynb
frazer-lab/cardips-ipsc-eqtl
dc97710dfbb5ed96935fa187d90c0d529ce3a216
[ "MIT" ]
7
2016-07-14T23:09:35.000Z
2019-07-12T20:38:44.000Z
notebooks/Figure. X Inactivation.ipynb
frazer-lab/cardips-ipsc-eqtl
dc97710dfbb5ed96935fa187d90c0d529ce3a216
[ "MIT" ]
null
null
null
notebooks/Figure. X Inactivation.ipynb
frazer-lab/cardips-ipsc-eqtl
dc97710dfbb5ed96935fa187d90c0d529ce3a216
[ "MIT" ]
5
2017-04-11T20:01:55.000Z
2021-04-30T07:41:38.000Z
341.676787
186,959
0.917719
[ [ [ "# Figure. X Inactivation", "_____no_output_____" ] ], [ [ "import cPickle\nimport datetime\nimport glob\nimport os\nimport random\nimport re\nimport subprocess\n\nimport cdpybio as cpb\nimport matplotlib as mpl\nimport matplotlib.gridspec as gridspec\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport pybedtools as pbt\nimport scipy.stats as stats \nimport seaborn as sns\nimport statsmodels.api as sm\nimport statsmodels.formula.api as smf\nimport statsmodels as sms\n\nimport cardipspy as cpy\nimport ciepy\n\n%matplotlib inline\n%load_ext rpy2.ipython\n\nimport socket\nif socket.gethostname() == 'fl-hn1' or socket.gethostname() == 'fl-hn2':\n pbt.set_tempdir('/frazer01/home/cdeboever/tmp')\n \noutdir = os.path.join(ciepy.root, 'output',\n 'figure_x_inactivation')\ncpy.makedir(outdir)\n\nprivate_outdir = os.path.join(ciepy.root, 'private_output',\n 'figure_x_inactivation')\ncpy.makedir(private_outdir)", "_____no_output_____" ], [ "plt.rcParams['font.sans-serif'] = ['Arial']\nplt.rcParams['font.size'] = 8", "_____no_output_____" ], [ "fn = os.path.join(ciepy.root, 'output', 'input_data', 'rsem_tpm.tsv')\ntpm = pd.read_table(fn, index_col=0)\nfn = os.path.join(ciepy.root, 'output', 'input_data', 'rnaseq_metadata.tsv')\nrna_meta = pd.read_table(fn, index_col=0)\nfn = os.path.join(ciepy.root, 'output', 'input_data', 'subject_metadata.tsv')\nsubject_meta = pd.read_table(fn, index_col=0)\nfn = os.path.join(ciepy.root, 'output', 'input_data', 'wgs_metadata.tsv')\nwgs_meta = pd.read_table(fn, index_col=0)\n\ngene_info = pd.read_table(cpy.gencode_gene_info, index_col=0)\n\ngenes = pbt.BedTool(cpy.gencode_gene_bed)\n\nfn = os.path.join(ciepy.root, 'output', 'input_data', 'cnvs.tsv')\ncnvs = pd.read_table(fn, index_col=0)\n\nfn = os.path.join(ciepy.root, 'output', 'x_inactivation', 'x_ase_exp.tsv')\nx_exp = pd.read_table(fn, index_col=0)", "_____no_output_____" ], [ "fn = os.path.join(ciepy.root, 'output', 'x_inactivation', 'expression_densities.tsv')\npdfs = pd.read_table(fn, index_col=0)\npdfs.columns = ['No ASE', 'ASE']", "_____no_output_____" ], [ "fn = os.path.join(ciepy.root, 'output', 'input_data', \n 'mbased_major_allele_freq.tsv')\nmaj_af = pd.read_table(fn, index_col=0)\n\nfn = os.path.join(ciepy.root, 'output', 'input_data', \n 'mbased_p_val_ase.tsv')\nase_pval = pd.read_table(fn, index_col=0)\n\nlocus_p = pd.Panel({'major_allele_freq':maj_af, 'p_val_ase':ase_pval})\nlocus_p = locus_p.swapaxes(0, 2)\n\nsnv_fns = glob.glob(os.path.join(ciepy.root, 'private_output', 'input_data', 'mbased_snv',\n '*_snv.tsv'))\ncount_fns = glob.glob(os.path.join(ciepy.root, 'private_output', 'input_data', 'allele_counts',\n '*mbased_input.tsv'))\n\nsnv_res = {}\nfor fn in snv_fns:\n snv_res[os.path.split(fn)[1].split('_')[0]] = pd.read_table(fn, index_col=0)\n \ncount_res = {}\nfor fn in count_fns:\n count_res[os.path.split(fn)[1].split('_')[0]] = pd.read_table(fn, index_col=0)\n\nsnv_p = pd.Panel(snv_res)", "_____no_output_____" ], [ "# We'll keep female subjects with no CNVs on the X chromosome.\nsf = subject_meta[subject_meta.sex == 'F']\nmeta = sf.merge(rna_meta, left_index=True, right_on='subject_id')\ns = set(meta.subject_id) & set(cnvs.ix[cnvs.chr == 'chrX', 'subject_id'])\nmeta = meta[meta.subject_id.apply(lambda x: x not in s)]\n\nmeta = meta.ix[[x for x in snv_p.items if x in meta.index]]\n\nsnv_p = snv_p.ix[meta.index]", "_____no_output_____" ], [ "snv_p = snv_p.ix[meta.index]\nlocus_p = locus_p.ix[meta.index]", "_____no_output_____" ], [ "# Filter and take 
log.\ntpm_f = tpm[meta[meta.sex == 'F'].index]\ntpm_f = tpm_f[(tpm_f != 0).sum(axis=1) > 0]\nlog_tpm = np.log10(tpm_f + 1)\n# Mean center.\nlog_tpm_c = (log_tpm.T - log_tpm.mean(axis=1)).T\n# Variance normalize.\nlog_tpm_n = (log_tpm_c.T / log_tpm_c.std(axis=1)).T", "_____no_output_____" ], [ "single = locus_p.ix['071ca248-bcb1-484d-bff2-3aefc84f8688', :, :].dropna()\nx_single = single[gene_info.ix[single.index, 'chrom'] == 'chrX']\nnotx_single = single[gene_info.ix[single.index, 'chrom'] != 'chrX']", "_____no_output_____" ], [ "t = locus_p.ix[:, :, 'major_allele_freq']\nx_all = locus_p.ix[:, set(t.index) & set(gene_info[gene_info.chrom == 'chrX'].index), :]\nnotx_all = locus_p.ix[:, set(t.index) & set(gene_info[gene_info.chrom != 'chrX'].index), :]", "_____no_output_____" ], [ "genes_to_plot = ['XIST', 'TSIX']\nt = pd.Series(gene_info.index, index=gene_info.gene_name)\n \nexp = log_tpm_n.ix[t[genes_to_plot]].T\nexp.columns = genes_to_plot\nexp = exp.ix[x_all.items].sort_values(by='XIST', ascending=False)", "_____no_output_____" ], [ "sns.set_style('white')", "_____no_output_____" ] ], [ [ "## Paper", "_____no_output_____" ] ], [ [ "n = x_exp.shape[0]\nprint('Plotting mean expression for {} X chromosome genes.'.format(n))", "Plotting mean expression for 120 X chromosome genes.\n" ], [ "n = sum(x_exp.mean_sig_exp < x_exp.mean_not_sig_exp)\nprint('{} of {} ({:.2f}%) genes had higher expression for samples without ASE.'.format(\n n, x_exp.shape[0], n / float(x_exp.shape[0]) * 100))", "93 of 120 (77.50%) genes had higher expression for samples without ASE.\n" ], [ "fig = plt.figure(figsize=(6.85, 9), dpi=300)\n\ngs = gridspec.GridSpec(1, 1)\nax = fig.add_subplot(gs[0, 0])\nax.text(0, 1, 'Figure 6',\n size=16, va='top', )\nciepy.clean_axis(ax)\nax.set_xticks([])\nax.set_yticks([])\ngs.tight_layout(fig, rect=[0, 0.90, 0.5, 1])\n\ngs = gridspec.GridSpec(1, 1)\nax = fig.add_subplot(gs[0, 0])\nax.scatter(x_exp.mean_sig_exp, x_exp.mean_not_sig_exp, alpha=0.4, color='grey', s=10)\nax.set_ylabel('Mean expression,\\nno ASE', fontsize=8)\nax.set_xlabel('Mean expression, ASE', fontsize=8)\nxmin,xmax = ax.get_xlim()\nymin,ymax = ax.get_ylim()\nplt.plot([min(xmin, ymin), max(xmax, ymax)], [min(xmin, ymin), max(xmax, ymax)], color='black', ls='--')\nax.set_xlim(-1, 1.75)\nax.set_ylim(-1, 1.75)\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nfor l in ax.get_xticklines() + ax.get_yticklines(): \n l.set_markersize(0)\nfor t in ax.get_xticklabels() + ax.get_yticklabels():\n t.set_fontsize(8)\ngs.tight_layout(fig, rect=[0.02, 0.79, 0.32, 0.95])\n\ngs = gridspec.GridSpec(1, 1)\nax = fig.add_subplot(gs[0, 0])\nax.hist(x_single.major_allele_freq, bins=np.arange(0.5, 1.05, 0.05), color='grey')\nax.set_xlim(0.5, 1)\nax.set_ylabel('Number of genes', fontsize=8)\nax.set_xlabel('Allelic imbalance fraction', fontsize=8)\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nax.set_yticks(np.arange(0, 20, 4))\nfor l in ax.get_xticklines() + ax.get_yticklines(): \n l.set_markersize(0)\nfor t in ax.get_xticklabels() + ax.get_yticklabels():\n t.set_fontsize(8)\ngs.tight_layout(fig, rect=[0.33, 0.79, 0.66, 0.95])\n\ngs = gridspec.GridSpec(1, 1)\nax = fig.add_subplot(gs[0, 0])\nax.hist(notx_single.major_allele_freq, bins=np.arange(0.5, 1.05, 0.05), color='grey')\nax.set_xlim(0.5, 1)\nax.set_ylabel('Number of genes', fontsize=8)\nax.set_xlabel('Allelic imbalance fraction', fontsize=8)\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nfor l in 
ax.get_xticklines() + ax.get_yticklines(): \n l.set_markersize(0)\nax.yaxis.set_major_formatter(ciepy.comma_format)\nfor t in ax.get_xticklabels() + ax.get_yticklabels():\n t.set_fontsize(8)\ngs.tight_layout(fig, rect=[0.66, 0.79, 1, 0.95])\n\ngs = gridspec.GridSpec(1, 4, width_ratios=[0.5, 1.2, 3, 3])\n\nax = fig.add_subplot(gs[0, 0])\npassage_im = ax.imshow(np.array([meta.ix[exp.index, 'passage'].values]).T,\n aspect='auto', interpolation='nearest',\n cmap=sns.palettes.cubehelix_palette(light=.95, as_cmap=True))\nciepy.clean_axis(ax)\nax.set_xlabel('Passage', fontsize=8)\n\nax = fig.add_subplot(gs[0, 1])\n\n# Make norm.\nvmin = np.floor(exp.min().min())\nvmax = np.ceil(exp.max().max())\nvmax = max([vmax, abs(vmin)])\nvmin = vmax * -1\nexp_norm = mpl.colors.Normalize(vmin, vmax)\n\nexp_im = ax.imshow(exp, aspect='auto', interpolation='nearest',\n norm=exp_norm, cmap=plt.get_cmap('RdBu_r'))\nciepy.clean_axis(ax)\nax.set_xticks([0, 1])\nax.set_xticklabels(exp.columns, fontsize=8)\nfor t in ax.get_xticklabels():\n t.set_fontstyle('italic') \n #t.set_rotation(30)\nfor l in ax.get_xticklines() + ax.get_yticklines(): \n l.set_markersize(0)\n \npercent_norm = mpl.colors.Normalize(0, 1)\n\nax = fig.add_subplot(gs[0, 2])\nr = x_all.ix[:, :, 'major_allele_freq'].apply(lambda z: pd.cut(z[z.isnull() == False], \n bins=np.arange(0.5, 1.05, 0.05)))\nr = r.apply(lambda z: z.value_counts())\nr = (r.T / r.max(axis=1)).T\nx_ase_im = ax.imshow(r.ix[exp.index], aspect='auto', interpolation='nearest',\n norm=percent_norm, cmap=sns.palettes.cubehelix_palette(start=0, rot=-0.5, as_cmap=True))\nciepy.clean_axis(ax)\nxmin,xmax = ax.get_xlim()\nax.set_xticks(np.arange(xmin, xmax + 1, 2))\nax.set_xticklabels(np.arange(0.5, 1.05, 0.1), fontsize=8)#, rotation=30)\nfor l in ax.get_xticklines() + ax.get_yticklines(): \n l.set_markersize(0)\nax.set_xlabel('Allelic imbalance fraction', fontsize=8)\n \nax = fig.add_subplot(gs[0, 3])\nr = notx_all.ix[:, :, 'major_allele_freq'].apply(lambda z: pd.cut(z[z.isnull() == False], \n bins=np.arange(0.5, 1.05, 0.05)))\nr = r.apply(lambda z: z.value_counts())\nr = (r.T / r.max(axis=1)).T\nnot_x_ase_im = ax.imshow(r.ix[exp.index], aspect='auto', interpolation='nearest',\n norm=percent_norm, cmap=sns.palettes.cubehelix_palette(start=0, rot=-0.5, as_cmap=True))\nciepy.clean_axis(ax)\nxmin,xmax = ax.get_xlim()\nax.set_xticks(np.arange(xmin, xmax + 1, 2))\nax.set_xticklabels(np.arange(0.5, 1.05, 0.1), fontsize=8)#, rotation=30)\nfor l in ax.get_xticklines() + ax.get_yticklines(): \n l.set_markersize(0)\nax.set_xlabel('Allelic imbalance fraction', fontsize=8)\n\ngs.tight_layout(fig, rect=[0, 0.45, 0.8, 0.8])\n\ngs = gridspec.GridSpec(2, 2)\n\n# Plot colormap for gene expression.\nax = fig.add_subplot(gs[0:2, 0])\ncb = plt.colorbar(mappable=exp_im, cax=ax)\ncb.solids.set_edgecolor(\"face\")\ncb.outline.set_linewidth(0)\nfor l in ax.get_yticklines():\n l.set_markersize(0)\ncb.set_label('$\\log$ TPM $z$-score', fontsize=8)\nfor t in ax.get_xticklabels() + ax.get_yticklabels():\n t.set_fontsize(8)\n\n# Plot colormap for passage number.\nax = fig.add_subplot(gs[0, 1])\ncb = plt.colorbar(mappable=passage_im, cax=ax)\ncb.solids.set_edgecolor(\"face\")\ncb.outline.set_linewidth(0)\nfor l in ax.get_yticklines():\n l.set_markersize(0)\ncb.set_label('Passage number', fontsize=8)\ncb.set_ticks(np.arange(12, 32, 4))\nfor t in ax.get_xticklabels() + ax.get_yticklabels():\n t.set_fontsize(8)\n\n# Plot colormap for ASE.\nax = fig.add_subplot(gs[1, 1])\ncb = plt.colorbar(mappable=x_ase_im, 
cax=ax)\ncb.solids.set_edgecolor(\"face\")\ncb.outline.set_linewidth(0)\nfor l in ax.get_yticklines():\n l.set_markersize(0)\ncb.set_label('Fraction of genes', fontsize=8)\ncb.set_ticks(np.arange(0, 1.2, 0.2))\nfor t in ax.get_xticklabels() + ax.get_yticklabels():\n t.set_fontsize(8)\n\ngs.tight_layout(fig, rect=[0.8, 0.45, 1, 0.8])\n\nt = fig.text(0.005, 0.93, 'A', weight='bold', \n size=12)\nt = fig.text(0.315, 0.93, 'B', weight='bold', \n size=12)\nt = fig.text(0.645, 0.93, 'C', weight='bold', \n size=12)\nt = fig.text(0.005, 0.79, 'D', weight='bold', \n size=12)\nt = fig.text(0.005, 0.44, 'E', weight='bold', \n size=12)\nt = fig.text(0.005, 0.22, 'F', weight='bold', \n size=12)\n\nplt.savefig(os.path.join(outdir, 'x_inactivation_skeleton.pdf'))", "/frazer01/home/cdeboever/software/anaconda/envs/cie/lib/python2.7/site-packages/matplotlib/gridspec.py:302: UserWarning: This figure includes Axes that are not compatible with tight_layout, so its results might be incorrect.\n warnings.warn(\"This figure includes Axes that are not \"\n" ], [ "%%R\n\nsuppressPackageStartupMessages(library(Gviz))", "_____no_output_____" ], [ "t = x_all.ix[:, :, 'major_allele_freq']\nr = gene_info.ix[t.index, ['start', 'end']]", "_____no_output_____" ], [ "%%R -i t,r\n\nideoTrack <- IdeogramTrack(genome = \"hg19\", chromosome = \"chrX\", fontsize=8, fontsize.legend=8,\n fontcolor='black', cex=1, cex.id=1, cex.axis=1, cex.title=1)\n\nmafTrack <- DataTrack(range=r, data=t, genome=\"hg19\", type=c(\"p\"), alpha=0.5, lwd=8,\n span=0.05, chromosome=\"chrX\", name=\"Allelic imbalance fraction\", fontsize=8,\n fontcolor.legend='black', col.axis='black', col.title='black',\n background.title='transparent', cex=1, cex.id=1, cex.axis=1, cex.title=1,\n fontface=1, fontface.title=1, alpha.title=1)", "_____no_output_____" ], [ "fn = os.path.join(outdir, 'p_maf.pdf')", "_____no_output_____" ], [ "%%R -i fn\n\npdf(fn, 6.85, 2)\nplotTracks(c(ideoTrack, mafTrack), from=0, to=58100000, col.title='black')\ndev.off()", "_____no_output_____" ], [ "fn = os.path.join(outdir, 'q_maf.pdf')", "_____no_output_____" ], [ "%%R -i fn\n\npdf(fn, 6.85, 2)\nplotTracks(c(ideoTrack, mafTrack), from=63000000, to=155270560)\ndev.off()", "_____no_output_____" ], [ "%%R -i fn\n\nplotTracks(c(ideoTrack, mafTrack), from=63000000, to=155270560)", "_____no_output_____" ] ], [ [ "## Presentation", "_____no_output_____" ] ], [ [ "# Set fontsize\nfs = 10\n\nfig = plt.figure(figsize=(6.85, 5), dpi=300)\n\ngs = gridspec.GridSpec(1, 1)\nax = fig.add_subplot(gs[0, 0])\nax.scatter(x_exp.mean_sig_exp, x_exp.mean_not_sig_exp, alpha=0.4, color='grey', s=10)\nax.set_ylabel('Mean expression,\\nno ASE', fontsize=fs)\nax.set_xlabel('Mean expression, ASE', fontsize=fs)\nxmin,xmax = ax.get_xlim()\nymin,ymax = ax.get_ylim()\nplt.plot([min(xmin, ymin), max(xmax, ymax)], [min(xmin, ymin), max(xmax, ymax)], color='black', ls='--')\nax.set_xlim(-1, 1.75)\nax.set_ylim(-1, 1.75)\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nfor l in ax.get_xticklines() + ax.get_yticklines(): \n l.set_markersize(0)\nfor t in ax.get_xticklabels() + ax.get_yticklabels():\n t.set_fontsize(fs)\ngs.tight_layout(fig, rect=[0.02, 0.62, 0.32, 0.95])\n\ngs = gridspec.GridSpec(1, 1)\nax = fig.add_subplot(gs[0, 0])\nax.hist(x_single.major_allele_freq, bins=np.arange(0.5, 1.05, 0.05), color='grey')\nax.set_xlim(0.5, 1)\nax.set_ylabel('Number of genes', fontsize=fs)\nax.set_xlabel('Allelic imbalance fraction', 
fontsize=fs)\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nax.set_yticks(np.arange(0, 20, 4))\nfor l in ax.get_xticklines() + ax.get_yticklines(): \n l.set_markersize(0)\nfor t in ax.get_xticklabels() + ax.get_yticklabels():\n t.set_fontsize(fs)\ngs.tight_layout(fig, rect=[0.33, 0.62, 0.66, 0.95])\n\ngs = gridspec.GridSpec(1, 1)\nax = fig.add_subplot(gs[0, 0])\nax.hist(notx_single.major_allele_freq, bins=np.arange(0.5, 1.05, 0.05), color='grey')\nax.set_xlim(0.5, 1)\nax.set_ylabel('Number of genes', fontsize=fs)\nax.set_xlabel('Allelic imbalance fraction', fontsize=fs)\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nfor l in ax.get_xticklines() + ax.get_yticklines(): \n l.set_markersize(0)\nax.yaxis.set_major_formatter(ciepy.comma_format)\nfor t in ax.get_xticklabels() + ax.get_yticklabels():\n t.set_fontsize(fs)\ngs.tight_layout(fig, rect=[0.66, 0.62, 1, 0.95])\n\n#gs.tight_layout(fig, rect=[0, 0.62, 1, 1.0])\n\n# t = fig.text(0.005, 0.88, 'A', weight='bold', \n# size=12)\n# t = fig.text(0.315, 0.88, 'B', weight='bold', \n# size=12)\n# t = fig.text(0.675, 0.88, 'C', weight='bold', \n# size=12)\n\ngs = gridspec.GridSpec(1, 4, width_ratios=[0.5, 1.2, 3, 3])\n\nax = fig.add_subplot(gs[0, 0])\npassage_im = ax.imshow(np.array([meta.ix[exp.index, 'passage'].values]).T,\n aspect='auto', interpolation='nearest',\n cmap=sns.palettes.cubehelix_palette(light=.95, as_cmap=True))\nciepy.clean_axis(ax)\nax.set_xlabel('Passage')\n\nax = fig.add_subplot(gs[0, 1])\n\n# Make norm.\nvmin = np.floor(exp.min().min())\nvmax = np.ceil(exp.max().max())\nvmax = max([vmax, abs(vmin)])\nvmin = vmax * -1\nexp_norm = mpl.colors.Normalize(vmin, vmax)\n\nexp_im = ax.imshow(exp, aspect='auto', interpolation='nearest',\n norm=exp_norm, cmap=plt.get_cmap('RdBu_r'))\nciepy.clean_axis(ax)\nax.set_xticks([0, 1])\nax.set_xticklabels(exp.columns, fontsize=fs)\nfor t in ax.get_xticklabels():\n t.set_fontstyle('italic') \n t.set_rotation(30)\nfor l in ax.get_xticklines() + ax.get_yticklines(): \n l.set_markersize(0)\n \npercent_norm = mpl.colors.Normalize(0, 1)\n\nax = fig.add_subplot(gs[0, 2])\nr = x_all.ix[:, :, 'major_allele_freq'].apply(lambda z: pd.cut(z[z.isnull() == False], \n bins=np.arange(0.5, 1.05, 0.05)))\nr = r.apply(lambda z: z.value_counts())\nr = (r.T / r.max(axis=1)).T\nx_ase_im = ax.imshow(r.ix[exp.index], aspect='auto', interpolation='nearest',\n norm=percent_norm, cmap=sns.palettes.cubehelix_palette(start=0, rot=-0.5, as_cmap=True))\nciepy.clean_axis(ax)\nxmin,xmax = ax.get_xlim()\nax.set_xticks(np.arange(xmin, xmax + 1, 2))\nax.set_xticklabels(np.arange(0.5, 1.05, 0.1), fontsize=fs)#, rotation=30)\nfor l in ax.get_xticklines() + ax.get_yticklines(): \n l.set_markersize(0)\nax.set_xlabel('Allelic imbalance fraction', fontsize=fs)\nax.set_title('X Chromosome')\n \nax = fig.add_subplot(gs[0, 3])\nr = notx_all.ix[:, :, 'major_allele_freq'].apply(lambda z: pd.cut(z[z.isnull() == False], \n bins=np.arange(0.5, 1.05, 0.05)))\nr = r.apply(lambda z: z.value_counts())\nr = (r.T / r.max(axis=1)).T\nnot_x_ase_im = ax.imshow(r.ix[exp.index], aspect='auto', interpolation='nearest',\n norm=percent_norm, cmap=sns.palettes.cubehelix_palette(start=0, rot=-0.5, as_cmap=True))\nciepy.clean_axis(ax)\nxmin,xmax = ax.get_xlim()\nax.set_xticks(np.arange(xmin, xmax + 1, 2))\nax.set_xticklabels(np.arange(0.5, 1.05, 0.1), fontsize=fs)#, rotation=30)\nfor l in ax.get_xticklines() + ax.get_yticklines(): \n l.set_markersize(0)\nax.set_xlabel('Allelic imbalance fraction', 
fontsize=fs)\nax.set_title('Autosomes')\n \n# t = fig.text(0.005, 0.615, 'D', weight='bold', \n# size=12)\n\ngs.tight_layout(fig, rect=[0, 0, 0.75, 0.62])\n\ngs = gridspec.GridSpec(2, 2)\n\n# Plot colormap for gene expression.\nax = fig.add_subplot(gs[0:2, 0])\ncb = plt.colorbar(mappable=exp_im, cax=ax)\ncb.solids.set_edgecolor(\"face\")\ncb.outline.set_linewidth(0)\nfor l in ax.get_yticklines():\n l.set_markersize(0)\ncb.set_label('$\\log$ TPM $z$-score', fontsize=fs)\nfor t in ax.get_xticklabels() + ax.get_yticklabels():\n t.set_fontsize(fs)\n\n# Plot colormap for passage number.\nax = fig.add_subplot(gs[0, 1])\ncb = plt.colorbar(mappable=passage_im, cax=ax)\ncb.solids.set_edgecolor(\"face\")\ncb.outline.set_linewidth(0)\nfor l in ax.get_yticklines():\n l.set_markersize(0)\ncb.set_label('Passage number', fontsize=fs)\ncb.set_ticks(np.arange(12, 32, 4))\nfor t in ax.get_xticklabels() + ax.get_yticklabels():\n t.set_fontsize(fs)\n\n# Plot colormap for ASE.\nax = fig.add_subplot(gs[1, 1])\ncb = plt.colorbar(mappable=x_ase_im, cax=ax)\ncb.solids.set_edgecolor(\"face\")\ncb.outline.set_linewidth(0)\nfor l in ax.get_yticklines():\n l.set_markersize(0)\ncb.set_label('Fraction of genes', fontsize=fs)\ncb.set_ticks(np.arange(0, 1.2, 0.2))\nfor t in ax.get_xticklabels() + ax.get_yticklabels():\n t.set_fontsize(fs)\n\ngs.tight_layout(fig, rect=[0.75, 0, 1, 0.62])\n\nplt.savefig(os.path.join(outdir, 'x_inactivation_hists_heatmaps_presentation.pdf'))", "_____no_output_____" ], [ "fig, axs = plt.subplots(1, 2, figsize=(6, 2.4), dpi=300)\n\nax = axs[1]\nax.hist(x_single.major_allele_freq, bins=np.arange(0.5, 1.05, 0.05), color='grey')\nax.set_xlim(0.5, 1)\nax.set_ylabel('Number of genes', fontsize=fs)\nax.set_xlabel('Allelic imbalance fraction', fontsize=fs)\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nax.set_yticks(np.arange(0, 20, 4))\nfor l in ax.get_xticklines() + ax.get_yticklines(): \n l.set_markersize(0)\nfor t in ax.get_xticklabels() + ax.get_yticklabels():\n t.set_fontsize(fs)\nax.set_title('X Chromosome', fontsize=fs)\n\nax = axs[0]\nax.hist(notx_single.major_allele_freq, bins=np.arange(0.5, 1.05, 0.05), color='grey')\nax.set_xlim(0.5, 1)\nax.set_ylabel('Number of genes', fontsize=fs)\nax.set_xlabel('Allelic imbalance fraction', fontsize=fs)\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nfor l in ax.get_xticklines() + ax.get_yticklines(): \n l.set_markersize(0)\nax.yaxis.set_major_formatter(ciepy.comma_format)\nfor t in ax.get_xticklabels() + ax.get_yticklabels():\n t.set_fontsize(fs)\nax.set_title('Autosomes', fontsize=fs)\n \nfig.tight_layout()\n\nplt.savefig(os.path.join(outdir, 'mhf_hists_presentation.pdf'))", "_____no_output_____" ], [ "t = x_all.ix[:, :, 'major_allele_freq']\nr = gene_info.ix[t.index, ['start', 'end']]", "_____no_output_____" ], [ "%%R -i t,r\n\nideoTrack <- IdeogramTrack(genome = \"hg19\", chromosome = \"chrX\", fontsize=16, fontsize.legend=16,\n fontcolor='black', cex=1, cex.id=1, cex.axis=1, cex.title=1)\n\nmafTrack <- DataTrack(range=r, data=t, genome=\"hg19\", type=c(\"smooth\", \"p\"), alpha=0.75, lwd=8,\n span=0.05, chromosome=\"chrX\", name=\"Allelic imbalance fraction\", fontsize=12,\n fontcolor.legend='black', col.axis='black', col.title='black',\n background.title='transparent', cex=1, cex.id=1, cex.axis=1, cex.title=1,\n fontface=1, fontface.title=1, alpha.title=1)", "_____no_output_____" ], [ "fn = os.path.join(outdir, 'p_maf_presentation.pdf')", "_____no_output_____" ], [ "%%R -i fn\n\npdf(fn, 
10, 3)\nplotTracks(c(ideoTrack, mafTrack), from=0, to=58100000, col.title='black')\ndev.off()", "_____no_output_____" ], [ "fn = os.path.join(outdir, 'q_maf_presentation.pdf')", "_____no_output_____" ], [ "%%R -i fn\n\npdf(fn, 10, 3)\nplotTracks(c(ideoTrack, mafTrack), from=63000000, to=155270560)\ndev.off()", "_____no_output_____" ] ] ]
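The expression matrix in this notebook is built by log-transforming TPM values for the female samples, then mean-centering and variance-normalizing each gene. A minimal reusable sketch of that normalization, assuming a genes-by-samples TPM DataFrame (the function and argument names are illustrative, not from the notebook):

```python
import numpy as np
import pandas as pd

def log_tpm_zscores(tpm: pd.DataFrame, samples) -> pd.DataFrame:
    """Per-gene z-scores of log10(TPM + 1) over the selected samples."""
    sub = tpm[samples]
    sub = sub[(sub != 0).sum(axis=1) > 0]        # drop genes never expressed
    log_tpm = np.log10(sub + 1)
    centered = log_tpm.sub(log_tpm.mean(axis=1), axis=0)   # mean center
    return centered.div(log_tpm.std(axis=1), axis=0)       # variance normalize
```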
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
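The heatmap panels above bin each sample's per-gene allelic imbalance fractions with `pd.cut` and scale each row by its maximum before plotting. A sketch of that step factored into a helper, assuming a samples-by-genes DataFrame of major-allele fractions (the helper itself is an addition):

```python
import numpy as np
import pandas as pd

def binned_fraction_matrix(maf: pd.DataFrame,
                           bins=np.arange(0.5, 1.05, 0.05)) -> pd.DataFrame:
    """Row-normalized histogram matrix of major-allele fractions.

    maf: samples x genes DataFrame of fractions in [0.5, 1]; NaN allowed.
    """
    counts = maf.apply(
        lambda row: pd.cut(row.dropna(), bins=bins).value_counts(sort=False),
        axis=1,
    )
    return counts.div(counts.max(axis=1), axis=0)   # scale each row to max 1
```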
cbf80ac3c264c10bfe048a64c8be98e79ba3869e
988,476
ipynb
Jupyter Notebook
Ansys/Jupyter/Supp Fig 1.ipynb
TBBL-UTHealth/DISC1
81d74199673b3a776cc0e107db3bdf2efd1e19a0
[ "MIT" ]
null
null
null
Ansys/Jupyter/Supp Fig 1.ipynb
TBBL-UTHealth/DISC1
81d74199673b3a776cc0e107db3bdf2efd1e19a0
[ "MIT" ]
null
null
null
Ansys/Jupyter/Supp Fig 1.ipynb
TBBL-UTHealth/DISC1
81d74199673b3a776cc0e107db3bdf2efd1e19a0
[ "MIT" ]
null
null
null
4,412.839286
576,852
0.962155
[ [ [ "# Sensing Local Field Potentials with a Directional and Scalable Depth Array: the DISC electrode array\n## Supp Figure 1\n- Associated data: Supp fig 1 data\n- Link: (nature excel link)\n\n## Description:\n#### This module does the following:\n1. Reads .csv data from ANSYS\n2. Calculate F/B ratios for each orientation\n3. Plot results", "_____no_output_____" ], [ "# Settings", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport matplotlib.pyplot as plt\ndata=pd.read_csv('/home/jovyan/ansys_data/supp-fig-1-final.csv')", "_____no_output_____" ] ], [ [ "# Process data", "_____no_output_____" ] ], [ [ "gap = data.gap\n\nf_b_ratio_monopole = data.v_front_monopole / data.v_back_monopole\nf_b_ratio = data.v_front / data.v_back\nf_b_ratio_orth = data.v_front_orth / data.v_back_orth\nf_b_ratio_large = data.v_front_large / data.v_back_large\nf_b_ratio_wire_monopole = data.v_front_monopole_wire / data.v_back_monopole_wire\nf_b_ratio_wire = data.v_front_wire / data.v_back_wire\nf_b_ratio_wire_orth = data.v_front_wire_orth / data.v_back_wire_orth\nf_b_ratio_wire_large = data.v_front_wire_large / data.v_back_wire_large", "_____no_output_____" ] ], [ [ "# Plot Voltage Ratio", "_____no_output_____" ] ], [ [ "# First, clear old plot if one exists\nplt.clf()\n# Now, create figure & add plots to it\nplt.figure(figsize=[10, 5], dpi=500)\nplt.plot(gap,f_b_ratio_monopole, color='blue', linestyle='dashed')\nplt.plot(gap, f_b_ratio, color='blue')\nplt.plot(gap,f_b_ratio_orth, color='blue', linestyle='dotted')\nplt.plot(gap, f_b_ratio_large, linestyle='dashdot', color='blue')\nplt.plot(gap,f_b_ratio_wire_monopole, color='orange', linestyle='dashed')\nplt.plot(gap, f_b_ratio_wire, color='orange')\nplt.plot(gap,f_b_ratio_wire_orth, color='orange', linestyle='dotted')\nplt.plot(gap, f_b_ratio_wire_large, linestyle='dashdot', color='orange')\nplt.xscale(\"linear\")\nplt.xlabel(\"gap [mm]\")\nplt.yscale(\"log\")\nplt.ylabel(\"Voltage Ratio\")\nplt.legend([\"DISC-monopole\", \"DISC-dipole\", \"DISC-orth\", \"DISC-lg\", \"MW-monopole\", \"MW-dipole\", \"MW-orth\", \"MW-lg\"])\nplt.savefig('/home/jovyan/ansys_data/images/supp-fig-2.eps', format='eps', dpi=500)\nplt.show()", "The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\n" ] ], [ [ "# Plot front electrode voltage", "_____no_output_____" ] ], [ [ "plt.figure(figsize=[10, 5], dpi=500)\nplt.plot(gap, data.v_front_monopole, color='blue', linestyle='dashed')\nplt.plot(gap, data.v_front, color='blue')\nplt.plot(gap, data.v_front_orth, color='blue', linestyle='dotted')\nplt.plot(gap, data.v_front_large, linestyle='dashdot', color='blue')\nplt.plot(gap, data.v_front_monopole_wire, color='orange', linestyle='dashed')\nplt.plot(gap, data.v_front_wire, color='orange')\nplt.plot(gap, data.v_front_wire_orth, color='orange', linestyle='dotted')\nplt.plot(gap, data.v_front_wire_large, linestyle='dashdot', color='orange')\n\nplt.legend([\"DISC-monopole\", \"DISC-dipole\", \"DISC-orth\", \"DISC-lg\", \"MW-monopole\", \"MW-dipole\", \"MW-orth\", \"MW-lg\"])\n\nplt.xscale(\"linear\")\nplt.xlabel(\"gap [mm]\")\nplt.yscale(\"log\")\nplt.ylabel(\"Voltage [uV]\")\nplt.savefig('/home/jovyan/ansys_data/images/supp-fig-2-voltage.eps', format='eps', dpi=500)\nplt.show()", "The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.\nThe PostScript backend does not 
support transparency; partially transparent artists will be rendered opaque.\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
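The eight F/B ratio series in this notebook are built by hand from paired `v_front*`/`v_back*` columns. A sketch of the same computation done generically over the CSV's column names (the prefix convention is assumed from the columns shown above):

```python
import pandas as pd

def front_back_ratios(data: pd.DataFrame,
                      front_prefix: str = 'v_front',
                      back_prefix: str = 'v_back') -> pd.DataFrame:
    """Front/back voltage ratio for every matching column pair."""
    ratios = {}
    for col in data.columns:
        if col.startswith(front_prefix):
            suffix = col[len(front_prefix):]
            back_col = back_prefix + suffix
            if back_col in data.columns:
                ratios['f_b_ratio' + suffix] = data[col] / data[back_col]
    return pd.DataFrame(ratios)
```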
cbf818ecd35a58e3a78e7951b2f6c6e958755fac
1,025
ipynb
Jupyter Notebook
notebooks/5.2.1 No-Ack Consumer.ipynb
wangyonghong/RabbitMQ-in-Depth
56a35c6359d500b7597daf1bb2185b4c451a572c
[ "BSD-3-Clause" ]
111
2015-01-06T20:26:31.000Z
2022-03-14T13:17:12.000Z
notebooks/5.2.1 No-Ack Consumer.ipynb
wangyonghong/RabbitMQ-in-Depth
56a35c6359d500b7597daf1bb2185b4c451a572c
[ "BSD-3-Clause" ]
4
2018-06-15T20:35:36.000Z
2021-01-13T16:03:40.000Z
notebooks/5.2.1 No-Ack Consumer.ipynb
wangyonghong/RabbitMQ-in-Depth
56a35c6359d500b7597daf1bb2185b4c451a572c
[ "BSD-3-Clause" ]
43
2015-04-18T13:44:01.000Z
2022-03-14T13:17:13.000Z
19.711538
68
0.534634
[ [ [ "import rabbitpy", "_____no_output_____" ], [ "with rabbitpy.Connection() as connection:\n with connection.channel() as channel:\n queue = rabbitpy.Queue(channel, 'test-messages')\n for message in queue.consume_messages(no_ack=True):\n message.pprint()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code" ] ]
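With `no_ack=True`, RabbitMQ considers each message delivered the moment it is sent, so anything in flight is lost if the consumer dies. For comparison, a sketch of the explicitly-acknowledged variant, assuming the same 'test-messages' queue as the notebook:

```python
import rabbitpy

with rabbitpy.Connection() as connection:
    with connection.channel() as channel:
        queue = rabbitpy.Queue(channel, 'test-messages')
        # Without no_ack, the broker holds each message until ack();
        # a consumer crash before the ack triggers redelivery.
        for message in queue.consume_messages():
            message.pprint()
            message.ack()
```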
cbf819de55dddbecad1d1c03e6e05502e98e2a9a
15,826
ipynb
Jupyter Notebook
Azure CLI/Tutorial_with_Jupyter_Notebook.ipynb
Azure/ADLAwithR-GettingStarted
9e549520d8396093c57ee0852109f03b2b0ad1b9
[ "MIT" ]
11
2017-10-12T17:06:25.000Z
2020-05-20T12:38:48.000Z
Azure CLI/Tutorial_with_Jupyter_Notebook.ipynb
Azure/ADLAwithR-GettingStarted
9e549520d8396093c57ee0852109f03b2b0ad1b9
[ "MIT" ]
4
2017-10-11T21:04:25.000Z
2018-11-20T15:22:10.000Z
Azure CLI/Tutorial_with_Jupyter_Notebook.ipynb
Azure/ADLAwithR-GettingStarted
9e549520d8396093c57ee0852109f03b2b0ad1b9
[ "MIT" ]
25
2017-08-31T17:33:52.000Z
2022-01-17T11:37:14.000Z
25.44373
244
0.571086
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
cbf8209b818389a917ccfbd9cb28a5c9505f5c4f
305,921
ipynb
Jupyter Notebook
Image Classifier Project.ipynb
roctubre/aipnd-project
84764ab552e070f9502f0af6e4a91535312a956f
[ "MIT" ]
null
null
null
Image Classifier Project.ipynb
roctubre/aipnd-project
84764ab552e070f9502f0af6e4a91535312a956f
[ "MIT" ]
null
null
null
Image Classifier Project.ipynb
roctubre/aipnd-project
84764ab552e070f9502f0af6e4a91535312a956f
[ "MIT" ]
null
null
null
385.290932
136,672
0.927249
[ [ [ "# Developing an AI application\n\nGoing forward, AI algorithms will be incorporated into more and more everyday applications. For example, you might want to include an image classifier in a smart phone app. To do this, you'd use a deep learning model trained on hundreds of thousands of images as part of the overall application architecture. A large part of software development in the future will be using these types of models as common parts of applications. \n\nIn this project, you'll train an image classifier to recognize different species of flowers. You can imagine using something like this in a phone app that tells you the name of the flower your camera is looking at. In practice you'd train this classifier, then export it for use in your application. We'll be using [this dataset](http://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html) of 102 flower categories, you can see a few examples below. \n\n<img src='assets/Flowers.png' width=500px>\n\nThe project is broken down into multiple steps:\n\n* Load and preprocess the image dataset\n* Train the image classifier on your dataset\n* Use the trained classifier to predict image content\n\nWe'll lead you through each part which you'll implement in Python.\n\nWhen you've completed this project, you'll have an application that can be trained on any set of labeled images. Here your network will be learning about flowers and end up as a command line application. But, what you do with your new skills depends on your imagination and effort in building a dataset. For example, imagine an app where you take a picture of a car, it tells you what the make and model is, then looks up information about it. Go build your own dataset and make something new.\n\nFirst up is importing the packages you'll need. It's good practice to keep all the imports at the beginning of your code. As you work through this notebook and find you need to import a package, make sure to add the import up here.", "_____no_output_____" ] ], [ [ "# Imports here\nimport json\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom collections import OrderedDict\nfrom PIL import Image\n\nimport torch\nimport torch.nn.functional as F\nfrom torch.autograd import Variable\nfrom torch import nn\nfrom torch import optim\nfrom torchvision import datasets, transforms, models", "_____no_output_____" ] ], [ [ "## Load the data\n\nHere you'll use `torchvision` to load the data ([documentation](http://pytorch.org/docs/0.3.0/torchvision/index.html)). The data should be included alongside this notebook, otherwise you can [download it here](https://s3.amazonaws.com/content.udacity-data.com/nd089/flower_data.tar.gz). The dataset is split into three parts, training, validation, and testing. For the training, you'll want to apply transformations such as random scaling, cropping, and flipping. This will help the network generalize leading to better performance. You'll also need to make sure the input data is resized to 224x224 pixels as required by the pre-trained networks.\n\nThe validation and testing sets are used to measure the model's performance on data it hasn't seen yet. For this you don't want any scaling or rotation transformations, but you'll need to resize then crop the images to the appropriate size.\n\nThe pre-trained networks you'll use were trained on the ImageNet dataset where each color channel was normalized separately. For all three sets you'll need to normalize the means and standard deviations of the images to what the network expects. 
For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`, calculated from the ImageNet images. These values will shift each color channel to be centered at 0 and range from -1 to 1.\n ", "_____no_output_____" ] ], [ [ "# Define image paths\ndata_dir = 'flowers'\ntrain_dir = data_dir + '/train'\nvalid_dir = data_dir + '/valid'\ntest_dir = data_dir + '/test'", "_____no_output_____" ], [ "# Define your transforms for the training, validation, and testing sets\ndata_transforms = transforms.Compose([transforms.RandomRotation(180),\n transforms.RandomResizedCrop(224),\n transforms.RandomHorizontalFlip(),\n transforms.ToTensor(),\n transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),\n ])\n\n# Load the datasets with ImageFolder\ntrain_data = datasets.ImageFolder(train_dir, transform=data_transforms)\nvalid_data = datasets.ImageFolder(valid_dir, transform=data_transforms)\ntest_data = datasets.ImageFolder(test_dir, transform=data_transforms)\n\n# Using the image datasets and the trainforms, define the dataloaders\ntrainloader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True)\nvalidloader = torch.utils.data.DataLoader(valid_data, batch_size=32, shuffle=True)\ntestloader = torch.utils.data.DataLoader(test_data, batch_size=16, shuffle=True)", "_____no_output_____" ] ], [ [ "### Label mapping\n\nYou'll also need to load in a mapping from category label to category name. You can find this in the file `cat_to_name.json`. It's a JSON object which you can read in with the [`json` module](https://docs.python.org/2/library/json.html). This will give you a dictionary mapping the integer encoded categories to the actual names of the flowers.", "_____no_output_____" ] ], [ [ "# Load class name mapping\nwith open('cat_to_name.json', 'r') as f:\n cat_to_name = json.load(f)", "_____no_output_____" ] ], [ [ "# Building and training the classifier\n\nNow that the data is ready, it's time to build and train the classifier. As usual, you should use one of the pretrained models from `torchvision.models` to get the image features. Build and train a new feed-forward classifier using those features.\n\nWe're going to leave this part up to you. If you want to talk through it with someone, chat with your fellow students! You can also ask questions on the forums or join the instructors in office hours.\n\nRefer to [the rubric](https://review.udacity.com/#!/rubrics/1663/view) for guidance on successfully completing this section. Things you'll need to do:\n\n* Load a [pre-trained network](http://pytorch.org/docs/master/torchvision/models.html) (If you need a starting point, the VGG networks work great and are straightforward to use)\n* Define a new, untrained feed-forward network as a classifier, using ReLU activations and dropout\n* Train the classifier layers using backpropagation using the pre-trained network to get the features\n* Track the loss and accuracy on the validation set to determine the best hyperparameters\n\nWe've left a cell open for you below, but use as many as you need. Our advice is to break the problem up into smaller parts you can run separately. Check that each part is doing what you expect, then move on to the next. You'll likely find that as you work through each part, you'll need to go back and modify your previous code. This is totally normal!\n\nWhen training make sure you're updating only the weights of the feed-forward network. You should be able to get the validation accuracy above 70% if you build everything right. 
Make sure to try different hyperparameters (learning rate, units in the classifier, epochs, etc) to find the best model. Save those hyperparameters to use as default values in the next part of the project.", "_____no_output_____" ] ], [ [ "# Load pre-trained network\nmodel = models.densenet161(pretrained=True)\n\n# Freeze parameters\nfor param in model.parameters():\n param.requires_grad = False\n", "C:\\Users\\Ryan\\AppData\\Local\\conda\\conda\\envs\\py36\\lib\\site-packages\\torchvision\\models\\densenet.py:212: UserWarning: nn.init.kaiming_normal is now deprecated in favor of nn.init.kaiming_normal_.\n nn.init.kaiming_normal(m.weight.data)\n" ], [ "# Create feed forward network and set as classifier\nclassifier = nn.Sequential(nn.Linear(2208, 1024),\n nn.ReLU(),\n nn.Dropout(p=0.5),\n nn.Linear(1024, len(cat_to_name)),\n nn.LogSoftmax(dim=1))\n\nmodel.classifier = classifier", "_____no_output_____" ], [ "# Function for the validation and test pass\ndef validation(model, testloader, criterion):\n # Use GPU for tests and validation, else CPU\n device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n model.to(device)\n \n loss = 0\n accuracy = 0\n \n for images, labels in testloader:\n \n images = images.to(device)\n labels = labels.to(device)\n\n output = model.forward(images)\n loss += criterion(output, labels).item()\n\n ps = torch.exp(output)\n equality = (labels.data == ps.max(dim=1)[1])\n accuracy += equality.type(torch.FloatTensor).mean()\n \n return loss, accuracy", "_____no_output_____" ], [ "# Hyperparameters\ncriterion = nn.NLLLoss()\noptimizer = optim.Adam(model.classifier.parameters(), lr=0.001)\nepochs = 5", "_____no_output_____" ], [ "# Use GPU for tests and validation, else CPU\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\nmodel.to(device)\n\nprint(\"Start Training...\")\nprint(\"Device:\", device)\n\n# Go through epochs\nprint_every=20\nsteps = 0\n\nfor e in range(epochs):\n model.train()\n running_loss = 0\n \n # Train network\n for images, labels in trainloader:\n steps += 1\n \n images = images.to(device)\n labels = labels.to(device)\n \n optimizer.zero_grad()\n \n # Forward and backward passes\n outputs = model.forward(images)\n loss = criterion(outputs, labels)\n loss.backward()\n optimizer.step()\n \n running_loss += loss.item()\n \n # Print losses and accuracy after every interval\n if steps % print_every == 0:\n # Validation pass\n model.eval()\n with torch.no_grad():\n vloss, vaccuracy = validation(model, validloader, criterion)\n\n # Print progress\n print(\"Epoch: {}/{} |\".format(e+1, epochs),\n \"Training Loss: {:.3f} |\".format(running_loss/print_every),\n \"Validation Loss: {:.3f} |\".format(vloss/len(validloader)),\n \"Validation Accuracy: {:.1f}%\".format(100*vaccuracy/len(validloader)))\n \n running_loss = 0\n model.train()", "Start Training...\nDevice: cuda\nEpoch: 1/5 | Training Loss: 4.467 | Validation Loss: 4.127 | Validation Accuracy: 13.0%\nEpoch: 1/5 | Training Loss: 3.887 | Validation Loss: 3.428 | Validation Accuracy: 35.0%\nEpoch: 1/5 | Training Loss: 3.308 | Validation Loss: 2.774 | Validation Accuracy: 47.2%\nEpoch: 1/5 | Training Loss: 2.751 | Validation Loss: 2.223 | Validation Accuracy: 55.1%\nEpoch: 1/5 | Training Loss: 2.318 | Validation Loss: 1.856 | Validation Accuracy: 58.9%\nEpoch: 2/5 | Training Loss: 1.622 | Validation Loss: 1.542 | Validation Accuracy: 66.4%\nEpoch: 2/5 | Training Loss: 1.635 | Validation Loss: 1.356 | Validation Accuracy: 70.9%\nEpoch: 2/5 | Training Loss: 1.614 | 
Validation Loss: 1.195 | Validation Accuracy: 74.1%\nEpoch: 2/5 | Training Loss: 1.430 | Validation Loss: 1.109 | Validation Accuracy: 76.2%\nEpoch: 2/5 | Training Loss: 1.371 | Validation Loss: 0.993 | Validation Accuracy: 78.5%\nEpoch: 3/5 | Training Loss: 0.821 | Validation Loss: 0.920 | Validation Accuracy: 79.5%\nEpoch: 3/5 | Training Loss: 1.177 | Validation Loss: 0.840 | Validation Accuracy: 82.3%\nEpoch: 3/5 | Training Loss: 1.100 | Validation Loss: 0.826 | Validation Accuracy: 80.3%\nEpoch: 3/5 | Training Loss: 0.996 | Validation Loss: 0.786 | Validation Accuracy: 81.8%\nEpoch: 3/5 | Training Loss: 0.960 | Validation Loss: 0.740 | Validation Accuracy: 81.6%\nEpoch: 4/5 | Training Loss: 0.511 | Validation Loss: 0.707 | Validation Accuracy: 82.5%\nEpoch: 4/5 | Training Loss: 0.829 | Validation Loss: 0.759 | Validation Accuracy: 80.0%\nEpoch: 4/5 | Training Loss: 0.856 | Validation Loss: 0.658 | Validation Accuracy: 84.5%\nEpoch: 4/5 | Training Loss: 0.870 | Validation Loss: 0.629 | Validation Accuracy: 84.1%\nEpoch: 4/5 | Training Loss: 0.831 | Validation Loss: 0.608 | Validation Accuracy: 84.6%\nEpoch: 5/5 | Training Loss: 0.291 | Validation Loss: 0.614 | Validation Accuracy: 85.0%\nEpoch: 5/5 | Training Loss: 0.797 | Validation Loss: 0.622 | Validation Accuracy: 85.4%\nEpoch: 5/5 | Training Loss: 0.774 | Validation Loss: 0.561 | Validation Accuracy: 85.3%\nEpoch: 5/5 | Training Loss: 0.741 | Validation Loss: 0.635 | Validation Accuracy: 84.1%\nEpoch: 5/5 | Training Loss: 0.774 | Validation Loss: 0.538 | Validation Accuracy: 86.6%\n" ] ], [ [ "## Testing your network\n\nIt's good practice to test your trained network on test data, images the network has never seen either in training or validation. This will give you a good estimate for the model's performance on completely new images. Run the test images through the network and measure the accuracy, the same way you did validation. You should be able to reach around 70% accuracy on the test set if the model has been trained well.", "_____no_output_____" ] ], [ [ "# Do validation on test set\nmodel.eval()\nwith torch.no_grad():\n _, accuracy = validation(model, testloader, criterion)\n\nprint(\"Accuracy of the network on the test set: {:.2f}%\".format(100*accuracy/len(testloader)))", "Accuracy of the network on the test set: 87.14%\n" ] ], [ [ "## Save the checkpoint\n\nNow that your network is trained, save the model so you can load it later for making predictions. You probably want to save other things such as the mapping of classes to indices which you get from one of the image datasets: `image_datasets['train'].class_to_idx`. You can attach this to the model as an attribute which makes inference easier later on.\n\n```model.class_to_idx = image_datasets['train'].class_to_idx```\n\nRemember that you'll want to completely rebuild the model later so you can use it for inference. Make sure to include any information you need in the checkpoint. If you want to load the model and keep training, you'll want to save the number of epochs as well as the optimizer state, `optimizer.state_dict`. 
You'll likely want to use this trained model in the next part of the project, so best to save it now.", "_____no_output_____" ] ], [ [ "# Attach mapping and hyper parameters to model\nmodel.class_to_idx = train_data.class_to_idx\nmodel.epochs = epochs\nmodel.optimizer = optimizer", "_____no_output_____" ], [ "def save_checkpoint(model):\n \"\"\" Save model as a checkpoint along with associated parameters \"\"\"\n \n checkpoint = {\n \"feature_arch\": \"densenet161\",\n \"output_size\": model.classifier[-2].out_features,\n \"hidden_layers\": [1024],\n \"epochs\": model.epochs,\n \"optimizer\": model.optimizer,\n \"class_to_idx\": model.class_to_idx,\n \"state_dict\": model.state_dict()\n }\n torch.save(checkpoint, \"checkpoint.pth\")", "_____no_output_____" ], [ "# Save checkpoint\nsave_checkpoint(model)", "_____no_output_____" ] ], [ [ "## Loading the checkpoint\n\nAt this point it's good to write a function that can load a checkpoint and rebuild the model. That way you can come back to this project and keep working on it without having to retrain the network.", "_____no_output_____" ] ], [ [ "# Function that loads a checkpoint and rebuilds the model\ndef load_checkpoint(file_path):\n \"\"\" Rebuild model based on a checkpoint and returns it \"\"\"\n \n checkpoint = torch.load(file_path)\n \n # Load pretrained feature network\n model = models.__dict__[checkpoint[\"feature_arch\"]](pretrained=True)\n \n # Freeze parameters\n for param in model.parameters():\n param.requires_grad = False\n \n # Add the first layer, input size depends on feature architecture\n classifier = nn.ModuleList([\n nn.Linear(model.classifier.in_features, checkpoint[\"hidden_layers\"][0]),\n nn.ReLU(),\n nn.Dropout(), \n ])\n \n layer_sizes = zip(checkpoint[\"hidden_layers\"][:-1], \n checkpoint[\"hidden_layers\"][1:])\n for h1, h2 in layer_sizes:\n classifier.extend([\n nn.Linear(h1, h2),\n nn.ReLU(),\n nn.Dropout(),\n ])\n \n # Add output layer\n classifier.extend([nn.Linear(checkpoint[\"hidden_layers\"][-1], \n checkpoint[\"output_size\"]), \n nn.LogSoftmax(dim=1)])\n\n # Replace classifier\n model.classifier = nn.Sequential(*classifier)\n \n # Set state dict\n model.load_state_dict(checkpoint[\"state_dict\"])\n \n # Append parameters\n model.epochs = checkpoint[\"epochs\"]\n model.optimizer = checkpoint[\"optimizer\"]\n model.class_to_idx = checkpoint[\"class_to_idx\"]\n \n return model", "_____no_output_____" ], [ "# Load checkpoint\nmodel = load_checkpoint('checkpoint.pth')", "C:\\Users\\Ryan\\AppData\\Local\\conda\\conda\\envs\\py36\\lib\\site-packages\\torchvision\\models\\densenet.py:212: UserWarning: nn.init.kaiming_normal is now deprecated in favor of nn.init.kaiming_normal_.\n nn.init.kaiming_normal(m.weight.data)\n" ] ], [ [ "# Inference for classification\n\nNow you'll write a function to use a trained network for inference. That is, you'll pass an image into the network and predict the class of the flower in the image. Write a function called `predict` that takes an image and a model, then returns the top $K$ most likely classes along with the probabilities. It should look like \n\n```python\nprobs, classes = predict(image_path, model)\nprint(probs)\nprint(classes)\n> [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339]\n> ['70', '3', '45', '62', '55']\n```\n\nFirst you'll need to handle processing the input image such that it can be used in your network. 
\n\n## Image Preprocessing\n\nYou'll want to use `PIL` to load the image ([documentation](https://pillow.readthedocs.io/en/latest/reference/Image.html)). It's best to write a function that preprocesses the image so it can be used as input for the model. This function should process the images in the same manner used for training. \n\nFirst, resize the images where the shortest side is 256 pixels, keeping the aspect ratio. This can be done with the [`thumbnail`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#PIL.Image.Image.thumbnail) or [`resize`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.html#PIL.Image.Image.thumbnail) methods. Then you'll need to crop out the center 224x224 portion of the image.\n\nColor channels of images are typically encoded as integers 0-255, but the model expected floats 0-1. You'll need to convert the values. It's easiest with a Numpy array, which you can get from a PIL image like so `np_image = np.array(pil_image)`.\n\nAs before, the network expects the images to be normalized in a specific way. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`. You'll want to subtract the means from each color channel, then divide by the standard deviation. \n\nAnd finally, PyTorch expects the color channel to be the first dimension but it's the third dimension in the PIL image and Numpy array. You can reorder dimensions using [`ndarray.transpose`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ndarray.transpose.html). The color channel needs to be first and retain the order of the other two dimensions.", "_____no_output_____" ] ], [ [ "def process_image(image):\n ''' Scales, crops, and normalizes a PIL image for a PyTorch model,\n returns an tensor\n '''\n\n # Resize image\n if image.size[0] < image.size[1]:\n image.thumbnail((256, image.size[1]))\n else:\n image.thumbnail((image.size[0], 256))\n \n # Crop image\n image = image.crop((\n image.size[0] / 2 - 112,\n image.size[1] / 2 - 112,\n image.size[0] / 2 + 112,\n image.size[1] / 2 + 112\n ))\n \n # Convert image to numpy array and convert color channel values\n np_image = np.array(image) / 256.\n \n # Normalize image array\n np_image[:,:,0] = (np_image[:,:,0] - 0.485) / 0.229\n np_image[:,:,1] = (np_image[:,:,1] - 0.456) / 0.224\n np_image[:,:,2] = (np_image[:,:,2] - 0.406) / 0.225\n \n # Transpose image array\n np_image = np_image.transpose(2,0,1)\n \n # Return image array as a tensor\n return torch.from_numpy(np_image).float()", "_____no_output_____" ] ], [ [ "To check your work, the function below converts a PyTorch tensor and displays it in the notebook. 
If your `process_image` function works, running the output through this function should return the original image (except for the cropped out portions).", "_____no_output_____" ] ], [ [ "def imshow(image, ax=None, title=None):\n \"\"\"Imshow for Tensor.\"\"\"\n if ax is None:\n fig, ax = plt.subplots()\n \n # PyTorch tensors assume the color channel is the first dimension\n # but matplotlib assumes is the third dimension\n image = image.numpy().transpose((1, 2, 0))\n \n # Undo preprocessing\n mean = np.array([0.485, 0.456, 0.406])\n std = np.array([0.229, 0.224, 0.225])\n image = std * image + mean\n \n # Image needs to be clipped between 0 and 1 or it looks like noise when displayed\n image = np.clip(image, 0, 1)\n \n ax.imshow(image)\n \n return ax", "_____no_output_____" ], [ "# Test\nimage = Image.open(\"flowers/test/1/image_06743.jpg\")\nimshow(process_image(image))", "_____no_output_____" ] ], [ [ "## Class Prediction\n\nOnce you can get images in the correct format, it's time to write a function for making predictions with your model. A common practice is to predict the top 5 or so (usually called top-$K$) most probable classes. You'll want to calculate the class probabilities then find the $K$ largest values.\n\nTo get the top $K$ largest values in a tensor use [`x.topk(k)`](http://pytorch.org/docs/master/torch.html#torch.topk). This method returns both the highest `k` probabilities and the indices of those probabilities corresponding to the classes. You need to convert from these indices to the actual class labels using `class_to_idx` which hopefully you added to the model or from an `ImageFolder` you used to load the data ([see here](#Save-the-checkpoint)). Make sure to invert the dictionary so you get a mapping from index to class as well.\n\nAgain, this method should take a path to an image and a model checkpoint, then return the probabilities and classes.\n\n```python\nprobs, classes = predict(image_path, model)\nprint(probs)\nprint(classes)\n> [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339]\n> ['70', '3', '45', '62', '55']\n```", "_____no_output_____" ] ], [ [ "def predict(image_path, model, topk=5):\n ''' Predict the class (or classes) of an image using a trained deep learning model.\n '''\n \n # Check for CUDA\n device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n model.to(device)\n\n # Load and process image\n image = process_image(Image.open(image_path))\n image.unsqueeze_(0)\n image = image.to(device)\n \n # Predict class\n model.eval()\n with torch.no_grad():\n output = model(image)\n\n # Get topk probabilities and classes\n prediction = F.softmax(output.data, dim=1).topk(topk)\n probs = prediction[0].data.cpu().numpy().squeeze()\n classes = prediction[1].data.cpu().numpy().squeeze()\n \n # Get actual class labels\n inverted_dict = dict([[model.class_to_idx[k], k] for k in model.class_to_idx])\n classes = [inverted_dict[k] for k in classes]\n\n \n return probs, classes", "_____no_output_____" ], [ "# Test\nprobs, classes = predict(\"flowers/test/1/image_06743.jpg\", model)\nprint(probs)\nprint(classes)", "[0.82428044 0.09797678 0.04228763 0.01622757 0.01566004]\n['1', '86', '83', '76', '51']\n" ] ], [ [ "## Sanity Checking\n\nNow that you can use a trained model for predictions, check to make sure it makes sense. Even if the testing accuracy is high, it's always good to check that there aren't obvious bugs. Use `matplotlib` to plot the probabilities for the top 5 classes as a bar graph, along with the input image. 
It should look like this:\n\n<img src='assets/inference_example.png' width=300px>\n\nYou can convert from the class integer encoding to actual flower names with the `cat_to_name.json` file (should have been loaded earlier in the notebook). To show a PyTorch tensor as an image, use the `imshow` function defined above.", "_____no_output_____" ] ], [ [ "def plot_prediction(image_path, true_class_id, topk=5):\n ''' Plot input image and the top k predictions'''\n \n # Get image and prediction\n image = Image.open(image_path)\n probs, classes = predict(image_path, model, topk)\n\n # Create plot grid\n fig, (ax1, ax2) = plt.subplots(figsize=(6,9), ncols=1, nrows=2)\n\n # Prepare plot for top predicted flower\n ax1.set_title(cat_to_name[true_class_id])\n ax1.imshow(image)\n ax1.axis('off')\n\n # Prepare barchart of topk classes\n ax2.barh(range(topk), probs)\n ax2.set_yticks(range(topk))\n ax2.set_yticklabels([cat_to_name[x] for x in classes])\n ax2.invert_yaxis()\n", "_____no_output_____" ], [ "# Test\nplot_prediction(\"flowers/test/1/image_06743.jpg\", \"1\", 5)", "_____no_output_____" ] ] ]
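The preprocessing this notebook hand-rolls in `process_image` can also be expressed with torchvision's own transforms; this sketch (an alternative, not the notebook's method) makes the 0–255 to 0–1 scaling explicit, since `ToTensor` divides by 255 — the range the ImageNet channel statistics assume:

```python
from PIL import Image
import torch
from torchvision import transforms

# Resize shortest side to 256, center-crop 224, scale to [0, 1],
# then normalize with the ImageNet channel statistics.
eval_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])

def process_image_tv(image: Image.Image) -> torch.Tensor:
    return eval_transform(image)
```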
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
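A small follow-up to the notebook's `predict` that would also slot into `plot_prediction`: pairing the top-k probabilities with readable flower names. The helper is an addition, built only from objects the notebook already defines:

```python
def topk_to_names(probs, classes, cat_to_name):
    """Return (flower name, probability) pairs for a top-k prediction."""
    return [(cat_to_name[c], float(p)) for c, p in zip(classes, probs)]

# Example usage: topk_to_names(*predict(image_path, model), cat_to_name)
```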
cbf825707296295923624a6750c8fbd6da5b83a5
2,591
ipynb
Jupyter Notebook
pyPhysics2.ipynb
xnorkl/Notebook
2f956d73dc35a20ab166e9b2c732ab8f18df9b63
[ "MIT" ]
null
null
null
pyPhysics2.ipynb
xnorkl/Notebook
2f956d73dc35a20ab166e9b2c732ab8f18df9b63
[ "MIT" ]
null
null
null
pyPhysics2.ipynb
xnorkl/Notebook
2f956d73dc35a20ab166e9b2c732ab8f18df9b63
[ "MIT" ]
null
null
null
18.775362
55
0.427248
[ [ [ "# controlif.py\n# created by: Thomas Gordon\n# date: 1-28-19\n\nx = int (input(\"please enter an integer: \"))\n\nif x < 0:\n    print ('Negative')\n\nelif x == 0:\n    print ('Zero')\n\nelif x == 1:\n    print ('Positive')\n\nelse:\n    x = 10\n    print ('Changed to 10')", "please enter an integer: -1\nNegative\n" ], [ "# controlif.py\n# created by: Thomas Gordon\n# date: 1-28-19\n\nx = int (input(\"please enter an integer: \"))\n\nif x < 0:\n    print ('Negative')\n\nelif x == 0:\n    print ('Zero')\n\nelif x == 1:\n    print ('Positive')\n\nelse:\n    x = 10\n    print ('Changed to 10')", "please enter an integer: 0\nZero\n" ], [ "# controlif.py\n# created by: Thomas Gordon\n# date: 1-28-19\n\nx = int (input(\"please enter an integer: \"))\n\nif x < 0:\n    print ('Negative')\n\nelif x == 0:\n    print ('Zero')\n\nelif x == 1:\n    print ('Positive')\n\nelse:\n    x = 10\n    print ('Changed to 10')", "please enter an integer: 1\nPositive\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code" ] ]
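The three runs above exercise the Negative/Zero/Positive branches with hand-typed input. A sketch of the same exercise hardened against non-integer input — the re-prompt loop is an assumption about intended behavior, not part of the original script:

```python
# controlif.py with basic input validation: re-prompt on non-integer input
while True:
    try:
        x = int(input("please enter an integer: "))
        break
    except ValueError:
        print("That was not an integer, try again.")

if x < 0:
    print('Negative')
elif x == 0:
    print('Zero')
elif x == 1:
    print('Positive')
else:
    x = 10
    print('Changed to 10')
```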
cbf829ca48b8ea92ad68631acbf67d15f59614e5
21,603
ipynb
Jupyter Notebook
houseprice/notebooks_source/05 - Model selection.ipynb
lucabasa/kaggle_competitions
15296375dc303218093aa576533fb809a4540bb8
[ "Apache-2.0" ]
1
2021-01-31T19:33:30.000Z
2021-01-31T19:33:30.000Z
houseprice/notebooks_source/05 - Model selection.ipynb
lucabasa/kaggle_competitions
15296375dc303218093aa576533fb809a4540bb8
[ "Apache-2.0" ]
4
2021-08-23T21:00:16.000Z
2021-08-23T21:07:45.000Z
houseprice/notebooks_source/05 - Model selection.ipynb
lucabasa/kaggle_competitions
15296375dc303218093aa576533fb809a4540bb8
[ "Apache-2.0" ]
null
null
null
30.469676
135
0.445771
[ [ [ "This notebook wants to make use of the evaluation techniques previously developed to select the best algorithms for this problem.", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport numpy as np\n\nimport tubesml as tml\n\nfrom sklearn.model_selection import KFold\n\nfrom sklearn.pipeline import Pipeline\n\nfrom sklearn.linear_model import Lasso, Ridge, SGDRegressor\nfrom sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor\nfrom sklearn.svm import SVR\nfrom sklearn.neighbors import KNeighborsRegressor\nfrom sklearn.neural_network import MLPRegressor\nimport xgboost as xgb\nimport lightgbm as lgb\n\nfrom sklearn.metrics import mean_squared_error, mean_absolute_error\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\n\nimport sys\nsys.path.append(\"..\")\nfrom source.clean import general_cleaner\nfrom source.transf_category import recode_cat, make_ordinal\nfrom source.transf_numeric import tr_numeric\nimport source.transf_univ as dfp\nimport source.utility as ut\nimport source.report as rp\n\nimport warnings\n\nwarnings.filterwarnings(\"ignore\", \n message=\"The dummies in this set do not match the ones in the train set, we corrected the issue.\")\n\npd.set_option('max_columns', 500)", "_____no_output_____" ] ], [ [ "# Data preparation\n\nGet the data ready to flow into the pipeline", "_____no_output_____" ] ], [ [ "df_train = pd.read_csv('../data/train.csv')\ndf_test = pd.read_csv('../data/test.csv')\n\ndf_train['Target'] = np.log1p(df_train.SalePrice)\n\ndf_train = df_train[df_train.GrLivArea < 4500].copy().reset_index()\n\ndel df_train['SalePrice']\n\ntrain_set, test_set = ut.make_test(df_train, \n test_size=0.2, random_state=654, \n strat_feat='Neighborhood')\n\ny = train_set['Target'].copy()\ndel train_set['Target']\n\ny_test = test_set['Target']\ndel test_set['Target']", "_____no_output_____" ] ], [ [ "## Building the pipeline\n\nThis was introduced in another notebook and imported above", "_____no_output_____" ] ], [ [ "numeric_pipe = Pipeline([('fs', tml.DtypeSel(dtype='numeric')),\n ('imputer', tml.DfImputer(strategy='median')),\n ('transf', tr_numeric())])\n\n\ncat_pipe = Pipeline([('fs', tml.DtypeSel(dtype='category')),\n ('imputer', tml.DfImputer(strategy='most_frequent')), \n ('ord', make_ordinal(['BsmtQual', 'KitchenQual',\n 'ExterQual', 'HeatingQC'])), \n ('recode', recode_cat()), \n ('dummies', tml.Dummify(drop_first=True))])\n\n\nprocessing_pipe = tml.FeatureUnionDf(transformer_list=[('cat_pipe', cat_pipe),\n ('num_pipe', numeric_pipe)])\n", "_____no_output_____" ] ], [ [ "## Evaluation method\n\nWe have seen how it works in the previous notebook, we have thus imported the necessary functions above.", "_____no_output_____" ] ], [ [ "models = [('lasso', Lasso(alpha=0.01)), ('ridge', Ridge()), ('sgd', SGDRegressor()), \n ('forest', RandomForestRegressor(n_estimators=200)), ('xtree', ExtraTreesRegressor(n_estimators=200)), \n ('svr', SVR()), \n ('kneig', KNeighborsRegressor()),\n ('xgb', xgb.XGBRegressor(n_estimators=200, objective='reg:squarederror')), \n ('lgb', lgb.LGBMRegressor(n_estimators=200))]", "_____no_output_____" ], [ "mod_name = []\nrmse_train = []\nrmse_test = []\nmae_train = []\nmae_test = []\n\nfolds = KFold(5, shuffle=True, random_state=541)\n\nfor model in models:\n \n train = train_set.copy()\n test = test_set.copy()\n print(model[0])\n mod_name.append(model[0])\n \n pipe = [('gen_cl', general_cleaner()),\n ('processing', processing_pipe),\n ('scl', dfp.df_scaler())] + [model]\n \n model_pipe = 
Pipeline(pipe)\n \n inf_preds = tml.cv_score(data=train, target=y, cv=folds, estimator=model_pipe)\n \n model_pipe.fit(train, y)\n \n preds = model_pipe.predict(test)\n \n rp.plot_predictions(test, y_test, preds, savename=model[0]+'_preds.png')\n rp.plot_predictions(train, y, inf_preds, savename=model[0]+'_inf_preds.png')\n \n rmse_train.append(mean_squared_error(y, inf_preds))\n rmse_test.append(mean_squared_error(y_test, preds))\n mae_train.append(mean_absolute_error(np.expm1(y), np.expm1(inf_preds)))\n mae_test.append(mean_absolute_error(np.expm1(y_test), np.expm1(preds)))\n \n print(f'\\tTrain set RMSE: {round(np.sqrt(mean_squared_error(y, inf_preds)), 4)}')\n print(f'\\tTrain set MAE: {round(mean_absolute_error(np.expm1(y), np.expm1(inf_preds)), 2)}')\n print(f'\\tTest set RMSE: {round(np.sqrt(mean_squared_error(y_test, preds)), 4)}')\n print(f'\\tTrain set MAE: {round(mean_absolute_error(np.expm1(y_test), np.expm1(preds)), 2)}')\n \n print('_'*40)\n print('\\n')\n \nresults = pd.DataFrame({'model_name': mod_name, \n 'rmse_train': rmse_train, 'rmse_test': rmse_test, \n 'mae_train': mae_train, 'mae_test': mae_test})\n\nresults", "lasso\n\tTrain set RMSE: 0.1217\n\tTrain set MAE: 15576.36\n\tTest set RMSE: 0.1332\n\tTrain set MAE: 15983.01\n________________________________________\n\n\nridge\n\tTrain set RMSE: 0.1181\n\tTrain set MAE: 14713.42\n\tTest set RMSE: 0.1347\n\tTrain set MAE: 15546.02\n________________________________________\n\n\nsgd\n\tTrain set RMSE: 0.1234\n\tTrain set MAE: 15626.24\n\tTest set RMSE: 0.1396\n\tTrain set MAE: 16580.93\n________________________________________\n\n\nforest\n\tTrain set RMSE: 0.141\n\tTrain set MAE: 17817.3\n\tTest set RMSE: 0.1539\n\tTrain set MAE: 18149.16\n________________________________________\n\n\nxtree\n\tTrain set RMSE: 0.1351\n\tTrain set MAE: 17392.39\n\tTest set RMSE: 0.1511\n\tTrain set MAE: 17459.41\n________________________________________\n\n\nsvr\n\tTrain set RMSE: 0.1535\n\tTrain set MAE: 19051.97\n\tTest set RMSE: 0.1556\n\tTrain set MAE: 17500.13\n________________________________________\n\n\nkneig\n\tTrain set RMSE: 0.1781\n\tTrain set MAE: 23218.1\n\tTest set RMSE: 0.1765\n\tTrain set MAE: 21696.33\n________________________________________\n\n\nxgb\n\tTrain set RMSE: 0.1387\n\tTrain set MAE: 17368.85\n\tTest set RMSE: 0.1519\n\tTrain set MAE: 17741.96\n________________________________________\n\n\nlgb\n\tTrain set RMSE: 0.1302\n\tTrain set MAE: 16642.75\n\tTest set RMSE: 0.1441\n\tTrain set MAE: 15966.02\n________________________________________\n\n\n" ], [ "results.sort_values(by='rmse_train').head(2)", "_____no_output_____" ], [ "results.sort_values(by='rmse_test').head(2)", "_____no_output_____" ], [ "results.sort_values(by='mae_train').head(2)", "_____no_output_____" ], [ "results.sort_values(by='mae_test').head(2)", "_____no_output_____" ] ] ]
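The four `sort_values` calls above rank the models one metric at a time. A sketch of a single combined view — mean rank across all four columns — is an addition, but it uses only the `results` frame the loop already builds:

```python
# Rank each model per metric (1 = lowest error), then average the ranks
# so one sorted table summarizes the whole comparison.
metric_cols = ['rmse_train', 'rmse_test', 'mae_train', 'mae_test']
ranks = results.set_index('model_name')[metric_cols].rank()
mean_rank = ranks.mean(axis=1).sort_values().rename('mean_rank')
print(mean_rank)
```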
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
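Once a model family is chosen, the fitted pipeline can be persisted so the preprocessing travels with the estimator. A sketch with joblib — an added dependency the notebook does not import, and the file name is illustrative:

```python
import joblib

# Persist the last fitted pipeline (cleaning + processing + estimator),
# so inference elsewhere only needs raw feature rows.
joblib.dump(model_pipe, 'best_model_pipe.joblib')
loaded = joblib.load('best_model_pipe.joblib')
preds = loaded.predict(test_set.copy())
```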