hexsha (string, 40) | size (int64, 6 – 14.9M) | ext (1 class) | lang (1 class) | max_stars_repo_path (string, 6 – 260) | max_stars_repo_name (string, 6 – 119) | max_stars_repo_head_hexsha (string, 40 – 41) | max_stars_repo_licenses (list) | max_stars_count (int64, 1 – 191k, nullable) | max_stars_repo_stars_event_min_datetime (string, 24, nullable) | max_stars_repo_stars_event_max_datetime (string, 24, nullable) | max_issues_repo_path (string, 6 – 260) | max_issues_repo_name (string, 6 – 119) | max_issues_repo_head_hexsha (string, 40 – 41) | max_issues_repo_licenses (list) | max_issues_count (int64, 1 – 67k, nullable) | max_issues_repo_issues_event_min_datetime (string, 24, nullable) | max_issues_repo_issues_event_max_datetime (string, 24, nullable) | max_forks_repo_path (string, 6 – 260) | max_forks_repo_name (string, 6 – 119) | max_forks_repo_head_hexsha (string, 40 – 41) | max_forks_repo_licenses (list) | max_forks_count (int64, 1 – 105k, nullable) | max_forks_repo_forks_event_min_datetime (string, 24, nullable) | max_forks_repo_forks_event_max_datetime (string, 24, nullable) | avg_line_length (float64, 2 – 1.04M) | max_line_length (int64, 2 – 11.2M) | alphanum_fraction (float64, 0 – 1) | cells (list) | cell_types (list) | cell_type_groups (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4aaff9fa14961dedf0acffe1f4b3e1cc48c94ff7 | 38,530 | ipynb | Jupyter Notebook |
projects/david/lab/test_ac_infeasible_start.ipynb | chengsoonong/mclass-sky | 98219221c233fa490e78246eda1ead05c6cf7c17 | ["BSD-3-Clause"] | 9 | 2016-06-01T12:09:47.000Z | 2021-01-16T05:28:01.000Z |
projects/david/lab/test_ac_infeasible_start.ipynb | alasdairtran/mclearn | 98219221c233fa490e78246eda1ead05c6cf7c17 | ["BSD-3-Clause"] | 165 | 2015-01-28T10:37:34.000Z | 2017-10-23T06:55:13.000Z |
projects/david/lab/test_ac_infeasible_start.ipynb | alasdairtran/mclearn | 98219221c233fa490e78246eda1ead05c6cf7c17 | ["BSD-3-Clause"] | 9 | 2015-01-24T16:27:54.000Z | 2020-09-01T08:54:31.000Z |
38.110781 | 467 | 0.346146 |
[
[
[
"# Analytic center computation using an infeasible-start Newton method",
"_____no_output_____"
],
[
"# The set-up",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport accpm\nfrom IPython.display import display\n%load_ext autoreload\n%autoreload 1\n%aimport accpm",
"_____no_output_____"
]
],
[
[
"$\\DeclareMathOperator{\\domain}{dom}\n\\newcommand{\\transpose}{\\text{T}}\n\\newcommand{\\vec}[1]{\\begin{pmatrix}#1\\end{pmatrix}}$",
"_____no_output_____"
],
[
"# Theory",
"_____no_output_____"
],
[
"To test the $\\texttt{analytic_center}$ function we consider the following example. Suppose we want to find the analytic center $x_{ac} \\in \\mathbb{R}^2$ of the inequalities $x_1 \\leq c_1, x_1 \\geq 0, x_2 \\leq c_2, x_2 \\geq 0$. This is a rectange with dimensions $c_1 \\times c_2$ centered at at $(\\frac{c_1}{2}, \\frac{c_2}{2})$ so we should have $x_{ac} = (\\frac{c_1}{2}, \\frac{c_2}{2})$. Now, $x_{ac}$ is the solution of the minimization problem \n\\begin{equation*}\n \\min_{\\domain \\phi} \\phi(x) = - \\sum_{i=1}^{4}{\\log{(b_i - a_i^\\transpose x)}}\n\\end{equation*}\nwhere \n\\begin{equation*}\n \\domain \\phi = \\{x \\;|\\; a_i^\\transpose x < b_i, i = 1, 2, 3, 4\\}\n\\end{equation*}\nwith\n\\begin{align*}\n &a_1 = \\begin{bmatrix}1\\\\0\\end{bmatrix}, &&b_1 = c_1, \\\\\n &a_2 = \\begin{bmatrix}-1\\\\0\\end{bmatrix}, &&b_2 = 0, \\\\\n &a_3 = \\begin{bmatrix}0\\\\1\\end{bmatrix}, &&b_3 = c_2, \\\\\n &a_4 = \\begin{bmatrix}0\\\\-1\\end{bmatrix}, &&b_4 = 0. \n\\end{align*}\nSo we solve\n\\begin{align*}\n &\\phantom{iff}\\nabla \\phi(x) = \\sum_{i=1}^{4\n } \\frac{1}{b_i - a_i^\\transpose x}a_i = 0 \\\\\n &\\iff \\frac{1}{c_1-x_1}\\begin{bmatrix}1\\\\0\\end{bmatrix} + \\frac{1}{x_1}\\begin{bmatrix}-1\\\\0\\end{bmatrix} + \\frac{1}{c_2-x_2}\\begin{bmatrix}0\\\\1\\end{bmatrix} + \\frac{1}{x_2}\\begin{bmatrix}0\\\\-1\\end{bmatrix} = 0 \\\\\n &\\iff \\frac{1}{c_1-x_1} - \\frac{1}{x_1} = 0, \\frac{1}{c_2-x_2} - \\frac{1}{x_2} = 0 \\\\\n &\\iff x_1 = \\frac{c_1}{2}, x_2 = \\frac{c_2}{2},\n\\end{align*}\nas expected. ",
"_____no_output_____"
],
[
"# Testing",
"_____no_output_____"
],
[
"We test $\\texttt{analytic_center}$ for varying values of $c_1, c_2$ and algorithm parameters $\\texttt{alpha, beta}$:",
"_____no_output_____"
]
],
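As a standalone sanity check of the derivation above: the box barrier is separable, so the analytic center can be found with plain per-coordinate Newton steps. This is an illustrative sketch, not the notebook's `accpm.analytic_center` implementation, and `newton_box_center` is a hypothetical helper name:

```python
def newton_box_center(c, tol=1e-12, max_iter=100):
    """Analytic center of {x : 0 <= x_i <= c_i} by minimizing the separable
    log barrier phi(x) = -sum(log(c_i - x_i) + log(x_i)) coordinate-wise."""
    x = [ci / 3.0 for ci in c]  # any strictly feasible starting point works
    for _ in range(max_iter):
        converged = True
        for i, ci in enumerate(c):
            gradient = 1.0 / (ci - x[i]) - 1.0 / x[i]
            hessian = 1.0 / (ci - x[i]) ** 2 + 1.0 / x[i] ** 2
            step = gradient / hessian
            x[i] -= step  # pure Newton step; stays feasible for this barrier
            if abs(step) > tol:
                converged = False
        if converged:
            break
    return x
```

Each coordinate converges to $c_i/2$, matching the hand computation above.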
[
[
"def get_results(A, test_input, alpha, beta, tol=10e-8):\n expected = []\n actual = []\n result = []\n for (c1, c2) in test_input:\n b = np.array([c1, 0, c2, 0])\n ac_expected = np.asarray((c1/2, c2/2))\n ac_actual = accpm.analytic_center(A, b, alpha = alpha, beta = beta)\n expected.append(ac_expected)\n actual.append(ac_actual)\n # if np.array_equal(ac_expected, ac_actual):\n if np.linalg.norm(ac_expected - ac_actual) <= tol: \n result.append(True)\n else:\n result.append(False)\n results = pd.DataFrame([test_input, expected, actual, result])\n results = results.transpose()\n results.columns = ['test_input', 'expected', 'actual', 'result']\n print('alpha =', alpha, 'beta =', beta)\n display(results) ",
"_____no_output_____"
]
],
[
[
"Here we have results for squares of varying sizes and for varying values of $\texttt{alpha}$ and $\texttt{beta}$. In general, the algorithm performs worse on large starting polyhedra than on small ones. This seems acceptable given that we are most concerned with smaller polyhedra.",
"_____no_output_____"
]
],
[
[
"A = np.array([[1, 0],[-1,0],[0,1],[0,-1]])\ntest_input = [(1, 1), (5, 5), (20, 20), (10e2, 10e2), (10e4, 10e4),\n              (10e6, 10e6), (10e8, 10e8), (10e10, 10e10), \n              (0.5, 0.5), (0.1, 0.1), (0.01, 0.01), \n              (0.005, 0.005), (0.001, 0.001), (0.0005, 0.0005), (0.0001, 0.0001),\n              (0.00005, 0.00005), (0.00001, 0.00001)] ",
"_____no_output_____"
],
[
"get_results(A, test_input, alpha=0.01, beta=0.7)",
"alpha = 0.01 beta = 0.7\n"
],
[
"get_results(A, test_input, alpha=0.01, beta=0.99)",
"**** Cholesky factorization FAILED or INACCURATE ****\n**** Cholesky factorization FAILED or INACCURATE ****\n**** Cholesky factorization FAILED or INACCURATE ****\n**** Cholesky factorization FAILED or INACCURATE ****\nalpha = 0.01 beta = 0.99\n"
],
[
"get_results(A, test_input, alpha=0.49, beta=0.7)",
"alpha = 0.49 beta = 0.7\n"
],
[
"get_results(A, test_input, alpha=0.25, beta=0.7)",
"alpha = 0.25 beta = 0.7\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
4ab00cf34d08dcae9d43c6b5035b9f08c86f7902 | 663,349 | ipynb | Jupyter Notebook |
examples/aoc2018/fluentpy.ipynb | dwt/fluent | b259db541aa97d6538e20319a44498269afaee59 | ["0BSD"] | 46 | 2018-04-15T02:07:52.000Z | 2022-03-23T02:48:24.000Z |
examples/aoc2018/fluentpy.ipynb | dwt/fluent | b259db541aa97d6538e20319a44498269afaee59 | ["0BSD"] | 13 | 2018-04-16T07:31:32.000Z | 2021-12-07T08:27:22.000Z |
examples/aoc2018/fluentpy.ipynb | dwt/fluent | b259db541aa97d6538e20319a44498269afaee59 | ["0BSD"] | 3 | 2018-04-15T02:08:06.000Z | 2021-12-06T18:36:25.000Z |
114.825861 | 8,836 | 0.094314 |
[
[
[
"import fluentpy as _",
"_____no_output_____"
]
],
[
[
"These are solutions for the Advent of Code puzzles of 2018, in the hope that they might show the reader how to use the fluentpy api to solve problems.\n\nSee https://adventofcode.com/2018/ for the problems.\n\nThe goal of this is not to produce minimal code or necessarily to be as clear as possible, but to showcase as many of the features of fluentpy as possible. Pull requests to use more of fluentpy are welcome!\n\nI do hope however that you find the solutions relatively succinct as your understanding of how fluentpy works grows.",
"_____no_output_____"
],
[
"# Day 1\n\nhttps://adventofcode.com/2018/day/1",
"_____no_output_____"
]
],
[
[
"_(open('input/day1.txt')).read().replace('\\n','').call(eval)._",
"_____no_output_____"
],
[
"day1_input = (\n _(open('input/day1.txt'))\n .readlines()\n .imap(eval)\n ._\n)\n\nseen = set()\ndef havent_seen(number):\n if number in seen:\n return False\n seen.add(number)\n return True\n\n\n(\n _(day1_input)\n .icycle()\n .iaccumulate()\n .idropwhile(havent_seen)\n .get(0)\n ._\n)",
"_____no_output_____"
]
],
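For comparison, the same day-1 part-2 idea (cycle through the changes, accumulate running totals, stop at the first total seen twice) in plain itertools, without fluentpy. The sample frequency changes below are from the puzzle description, not the notebook's input file:

```python
from itertools import accumulate, cycle

def first_repeated_frequency(changes):
    """First running total that appears twice while cycling the change list.
    Loops forever if no total ever repeats, like the fluentpy pipeline."""
    seen = {0}
    for total in accumulate(cycle(changes)):
        if total in seen:
            return total
        seen.add(total)
```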
[
[
"# Day 2\n\nhttps://adventofcode.com/2018/day/2",
"_____no_output_____"
]
],
[
[
"day2 = open('input/day2.txt').readlines()",
"_____no_output_____"
],
[
"def has_two_or_three(code):\n counts = _.lib.collections.Counter(code).values()\n return 2 in counts, 3 in counts\n\ntwos, threes = _(day2).map(has_two_or_three).star_call(zip).to(tuple)\n\nsum(twos) * sum(threes)",
"_____no_output_____"
],
[
"def is_different_by_only_one_char(codes):\n # REFACT consider how to more effectively vectorize this function\n # i.e. map ord, elementwise minus, count non zeroes == 1\n code1, code2 = codes\n diff_count = 0\n for index, char in enumerate(code1):\n if char != code2[index]:\n diff_count += 1\n return 1 == diff_count\n\n(\n _(day2)\n .icombinations(r=2)\n .ifilter(is_different_by_only_one_char)\n .get(0)\n .star_call(zip)\n .filter(lambda pair: pair[0] == pair[1])\n .star_call(zip)\n .get(0)\n .join('')\n ._\n)",
"_____no_output_____"
]
],
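The same day-2 logic without fluentpy, for reference; `checksum` and `common_letters_of_close_pair` are hypothetical helper names, and the sample box ids in the tests come from the puzzle description:

```python
from collections import Counter
from itertools import combinations

def checksum(box_ids):
    """Part 1: (# ids with a doubled letter) * (# ids with a tripled letter)."""
    twos = threes = 0
    for box_id in box_ids:
        counts = Counter(box_id).values()
        twos += 2 in counts
        threes += 3 in counts
    return twos * threes

def common_letters_of_close_pair(box_ids):
    """Part 2: shared letters of the two ids differing in exactly one position."""
    for a, b in combinations(box_ids, 2):
        if sum(x != y for x, y in zip(a, b)) == 1:
            return ''.join(x for x, y in zip(a, b) if x == y)
```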
[
[
"# Day 3\n\nhttps://adventofcode.com/2018/day/3",
"_____no_output_____"
]
],
[
[
"line_regex = r'#(\d+) @ (\d+),(\d+): (\d+)x(\d+)'\nclass Entry(_.lib.collections.namedtuple('Entry', ['id', 'left', 'top', 'width', 'height'])._):\n \n def coordinates(self):\n return _.lib.itertools.product._(\n range(self.left, self.left + self.width),\n range(self.top, self.top + self.height)\n )\n\ndef parse_day3(line):\n return _(line).match(line_regex).groups().map(int).star_call(Entry)._\n\nday3 = _(open('input/day3.txt')).read().splitlines().map(parse_day3)._",
"_____no_output_____"
],
[
"plane = dict()\nfor claim in day3:\n for coordinate in claim.coordinates():\n plane[coordinate] = plane.get(coordinate, 0) + 1\n_(plane).values().filter(_.each != 1).len()._",
"_____no_output_____"
],
[
"for claim in day3:\n if _(claim.coordinates()).imap(lambda each: plane[each] == 1).all()._:\n print(claim.id)",
"445\n"
]
],
[
[
"# Day 4\n\nhttps://adventofcode.com/2018/day/4",
"_____no_output_____"
]
],
[
[
"day4_lines = _(open('input/day4.txt')).read().splitlines().sort().self._",
"_____no_output_____"
],
[
"class Sleep(_.lib.collections.namedtuple('Sleep', ['duty_start', 'sleep_start', 'sleep_end'])._):\n \n def minutes(self):\n return (self.sleep_end - self.sleep_start).seconds // 60\n\nclass Guard:\n def __init__(self, guard_id, sleeps=None):\n self.id = guard_id\n self.sleeps = sleeps or list()\n \n def minutes_asleep(self):\n return _(self.sleeps).map(_.each.minutes()._).sum()._\n \n def minutes_and_sleep_counts(self):\n distribution = dict()\n for sleep in self.sleeps:\n # problematic if the hour wraps, but it never does, see check below\n for minute in range(sleep.sleep_start.minute, sleep.sleep_end.minute):\n distribution[minute] = distribution.get(minute, 0) + 1\n\n return _(distribution).items().sorted(key=_.each[1]._, reverse=True)._\n \n def minute_most_asleep(self):\n return _(self.minutes_and_sleep_counts()).get(0, tuple()).get(0, 0)._\n \n def number_of_most_sleeps(self):\n return _(self.minutes_and_sleep_counts()).get(0, tuple()).get(1, 0)._",
"_____no_output_____"
],
[
"guards = dict()\ncurrent_guard = current_duty_start = current_sleep_start = None\n\nfor line in day4_lines:\n time = _.lib.datetime.datetime.fromisoformat(line[1:17])._\n if 'Guard' in line:\n guard_id = _(line[18:]).match(r'.*?(\\d+).*?').group(1).call(int)._\n current_guard = guards.setdefault(guard_id, Guard(guard_id))\n current_duty_start = time\n if 'falls asleep' in line:\n current_sleep_start = time\n if 'wakes up' in line:\n current_guard.sleeps.append(Sleep(current_duty_start, current_sleep_start, time))",
"_____no_output_____"
],
[
"# confirm that we don't really have to do real date calculations but can just work with simplified values\nfor guard in guards.values():\n for sleep in guard.sleeps:\n assert sleep.sleep_start.minute < sleep.sleep_end.minute\n assert sleep.sleep_start.hour == 0\n assert sleep.sleep_end.hour == 0",
"_____no_output_____"
],
[
"guard = (\n _(guards)\n .values()\n .sorted(key=Guard.minutes_asleep, reverse=True)\n .get(0)\n ._\n)\nguard.id * guard.minute_most_asleep()",
"_____no_output_____"
],
[
"guard = (\n _(guards)\n .values()\n .sorted(key=Guard.number_of_most_sleeps, reverse=True)\n .get(0)\n ._\n)\nguard.id * guard.minute_most_asleep()",
"_____no_output_____"
]
],
[
[
"# Day 5\n\nhttps://adventofcode.com/2018/day/5",
"_____no_output_____"
]
],
[
[
"day5 = _(open('input/day5.txt')).read().strip()._",
"_____no_output_____"
],
[
"def is_reacting(a_polymer, an_index):\n if an_index+2 > len(a_polymer):\n return False\n first, second = a_polymer[an_index:an_index+2]\n return first.swapcase() == second\n\ndef reduce(a_polymer):\n for index in range(len(a_polymer) - 2, -1, -1):\n if is_reacting(a_polymer, index):\n a_polymer = a_polymer[:index] + a_polymer[index+2:]\n return a_polymer\n\ndef fully_reduce(a_polymer):\n last_polymer = current_polymer = a_polymer\n while True:\n last_polymer, current_polymer = current_polymer, reduce(current_polymer)\n if last_polymer == current_polymer:\n break\n return current_polymer",
"_____no_output_____"
],
[
"len(fully_reduce(day5))",
"_____no_output_____"
],
[
"alphabet = _(range(26)).map(_.each + ord('a')).map(chr)._\nshortest_length = float('inf')\nfor char in alphabet:\n polymer = day5.replace(char, '').replace(char.swapcase(), '')\n length = len(fully_reduce(polymer))\n if length < shortest_length:\n shortest_length = length\n\nshortest_length",
"_____no_output_____"
]
],
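The `reduce`/`fully_reduce` pair above rescans the whole polymer on every pass; a single forward pass with a stack reaches the same fixed point in O(n). A sketch (`fully_reduce_stack` is a hypothetical name; the sample polymer in the test is from the puzzle description):

```python
def fully_reduce_stack(polymer):
    """One-pass polymer reduction: equivalent to repeatedly deleting adjacent
    case-swapped pairs until none remain, but linear time via a stack."""
    stack = []
    for unit in polymer:
        if stack and stack[-1] == unit.swapcase():
            stack.pop()  # the adjacent pair reacts and annihilates
        else:
            stack.append(unit)
    return ''.join(stack)
```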
[
[
"# Day 6\n\nhttps://adventofcode.com/2018/day/6",
"_____no_output_____"
]
],
[
[
"Point = _.lib.collections.namedtuple('Point', ['x', 'y'])._\nday6_coordinates = (\n _(open('input/day6.txt'))\n .read()\n .splitlines()\n .map(lambda each: _(each).split(', ').map(int).star_call(Point)._)\n ._\n)",
"_____no_output_____"
],
[
"def manhatten_distance(first, second):\n return abs(first.x - second.x) + abs(first.y - second.y)\n\ndef nearest_two_points_and_distances(a_point):\n return (\n _(day6_coordinates)\n .imap(lambda each: (each, manhatten_distance(each, a_point)))\n .sorted(key=_.each[1]._)\n .slice(2)\n ._\n )\n\ndef has_nearest_point(a_point):\n (nearest_point, nearest_distance), (second_point, second_distance) \\\n = nearest_two_points_and_distances(a_point)\n return nearest_distance < second_distance\n\ndef nearest_point(a_point):\n return nearest_two_points_and_distances(a_point)[0][0]\n\ndef plane_extent():\n all_x, all_y = _(day6_coordinates).imap(lambda each: (each.x, each.y)).star_call(zip).to(tuple)\n min_x, min_y = min(all_x) - 1, min(all_y) - 1\n max_x, max_y = max(all_x) + 2, max(all_y) + 2\n return (\n (min_x, min_y),\n (max_x, max_y)\n )\n\ndef compute_bounding_box():\n (min_x, min_y), (max_x, max_y) = plane_extent()\n return _.lib.itertools.chain(\n (Point(x, min_y) for x in range(min_x, max_x)),\n (Point(x, max_y) for x in range(min_x, max_x)),\n (Point(min_x, y) for y in range(min_y, max_y)),\n (Point(max_x, y) for y in range(min_y, max_y)),\n ).to(tuple)\nbounding_box = compute_bounding_box()\n\ndef internal_points():\n # no point on bounding box is nearest to it\n external_points = _(bounding_box).map(nearest_point).to(set)\n return set(day6_coordinates) - external_points\n\ndef points_by_number_of_nearest_points():\n plane = dict()\n (min_x, min_y), (max_x, max_y) = plane_extent()\n for x in range(min_x, max_x):\n for y in range(min_y, max_y):\n point = Point(x,y)\n if has_nearest_point(point):\n plane[point] = nearest_point(point)\n \n plane_points = _(plane).values().to(tuple)\n counts = dict()\n for point in internal_points():\n counts[point] = plane_points.count(point)\n return counts",
"_____no_output_____"
],
[
"points = points_by_number_of_nearest_points()\n_(points).items().sorted(key=_.each[1]._, reverse=True).get(0)._",
"_____no_output_____"
],
[
"def total_distance(a_point):\n return (\n _(day6_coordinates)\n .imap(lambda each: manhatten_distance(a_point, each))\n .sum()\n ._\n )\n\ndef number_of_points_with_total_distance_less(a_limit):\n plane = dict()\n (min_x, min_y), (max_x, max_y) = plane_extent()\n for x in range(min_x, max_x):\n for y in range(min_y, max_y):\n point = Point(x,y)\n plane[point] = total_distance(point)\n \n return (\n _(plane)\n .values()\n .ifilter(_.each < a_limit)\n .len()\n ._\n )",
"_____no_output_____"
],
[
"number_of_points_with_total_distance_less(10000)",
"_____no_output_____"
]
],
[
[
"# Day 7\n\nhttps://adventofcode.com/2018/day/7",
"_____no_output_____"
]
],
[
[
"import fluentpy as _",
"_____no_output_____"
],
[
"day7_input = (\n _(open('input/day7.txt'))\n .read()\n .findall(r'Step (\\w) must be finished before step (\\w) can begin.', flags=_.lib.re.M._)\n ._\n)",
"_____no_output_____"
],
[
"def execute_in_order(dependencies):\n prerequisites = dict()\n _(dependencies).each(lambda each: prerequisites.setdefault(each[1], []).append(each[0]))\n all_jobs = _(dependencies).flatten().call(set)._\n ready_jobs = all_jobs - prerequisites.keys()\n done_jobs = []\n \n while 0 != len(ready_jobs):\n current_knot = _(ready_jobs).sorted()[0]._\n ready_jobs.discard(current_knot)\n done_jobs.append(current_knot)\n for knot in all_jobs.difference(done_jobs):\n if set(done_jobs).issuperset(prerequisites.get(knot, [])):\n ready_jobs.add(knot)\n\n\n return _(done_jobs).join('')._\nexecute_in_order(day7_input)",
"_____no_output_____"
],
[
"def cached_property(cache_instance_variable_name):\n def outer_wrapper(a_method):\n @property\n @_.lib.functools.wraps._(a_method)\n def wrapper(self):\n if not hasattr(self, cache_instance_variable_name):\n setattr(self, cache_instance_variable_name, a_method(self))\n return getattr(self, cache_instance_variable_name)\n return wrapper\n return outer_wrapper\n \nclass Jobs:\n def __init__(self, dependencies, delays):\n self.dependencies = dependencies\n self.delays = delays\n \n self._ready = self.all.difference(self.prerequisites.keys())\n self._done = []\n self._in_progress = set()\n \n @cached_property('_prerequisites')\n def prerequisites(self):\n prerequisites = dict()\n for prerequisite, job in self.dependencies:\n prerequisites.setdefault(job, []).append(prerequisite)\n return prerequisites\n \n @cached_property('_all')\n def all(self):\n return _(self.dependencies).flatten().call(set)._\n \n def can_start(self, a_job):\n return set(self._done).issuperset(self.prerequisites.get(a_job, []))\n \n def has_ready_jobs(self):\n return 0 != len(self._ready)\n \n def get_ready_job(self):\n assert self.has_ready_jobs()\n \n current_job = _(self._ready).sorted()[0]._\n self._ready.remove(current_job)\n self._in_progress.add(current_job)\n \n return current_job, self.delays[current_job]\n \n def set_job_done(self, a_job):\n assert a_job in self._in_progress\n \n self._done.append(a_job)\n self._in_progress.remove(a_job)\n \n for job in self.unstarted():\n if self.can_start(job):\n self._ready.add(job)\n \n def unstarted(self):\n return self.all.difference(self._in_progress.union(self._done))\n \n def is_done(self):\n return set(self._done) == self.all\n \n def __repr__(self):\n return f'<Jobs(in_progress={self._in_progress}, done={self._done})>'\n\n@_.lib.dataclasses.dataclass._\nclass Worker:\n id: int\n delay: int\n current_job: str\n \n jobs: Jobs\n \n def work_a_second(self):\n self.delay -= 1\n \n if self.delay <= 0:\n self.finish_job_if_working()\n self.accept_job_if_available()\n \n def finish_job_if_working(self):\n if self.current_job is None:\n return\n \n self.jobs.set_job_done(self.current_job)\n self.current_job = None\n \n def accept_job_if_available(self):\n if not self.jobs.has_ready_jobs():\n return\n \n self.current_job, self.delay = self.jobs.get_ready_job()\n\ndef execute_in_parallel(dependencies, delays, number_of_workers):\n jobs = Jobs(dependencies, delays)\n workers = _(range(number_of_workers)).map(_(Worker).curry(\n id=_,\n delay=0, current_job=None, jobs=jobs,\n )._)._\n \n seconds = -1\n while not jobs.is_done():\n seconds += 1\n _(workers).each(_.each.work_a_second()._)\n\n return seconds\n",
"_____no_output_____"
],
[
"test_input = (('C', 'A'), ('C', 'F'), ('A', 'B'), ('A', 'D'), ('B', 'E'), ('D', 'E'), ('F', 'E'))\ntest_delays = _(range(1,27)).map(lambda each: (chr(ord('A') + each - 1), each)).call(dict)._\nexecute_in_parallel(test_input, test_delays, 2)",
"_____no_output_____"
],
[
"day7_delays = _(range(1,27)).map(lambda each: (chr(ord('A') + each - 1), 60 + each)).call(dict)._\n\nassert 1107 == execute_in_parallel(day7_input, day7_delays, 5)\nexecute_in_parallel(day7_input, day7_delays, 5)",
"_____no_output_____"
]
],
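`execute_in_order` re-sorts the ready set on every round; the same lexicographically-smallest topological order can be produced with Kahn's algorithm and a heap. A plain-Python sketch (`execute_in_order_heap` is a hypothetical name; the test dependencies are the puzzle's worked example, also used for `execute_in_parallel` above):

```python
import heapq

def execute_in_order_heap(dependencies):
    """Kahn's algorithm with a min-heap of ready jobs, so the alphabetically
    first available job is always executed next."""
    prereq_count = {}   # job -> number of unfinished prerequisites
    dependents = {}     # job -> jobs unlocked when it finishes
    jobs = set()
    for before, after in dependencies:
        jobs.update((before, after))
        prereq_count[after] = prereq_count.get(after, 0) + 1
        dependents.setdefault(before, []).append(after)
    ready = [job for job in jobs if job not in prereq_count]
    heapq.heapify(ready)
    order = []
    while ready:
        job = heapq.heappop(ready)
        order.append(job)
        for unlocked in dependents.get(job, ()):
            prereq_count[unlocked] -= 1
            if prereq_count[unlocked] == 0:
                heapq.heappush(ready, unlocked)
    return ''.join(order)
```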
[
[
"# Day 8\n\nhttps://adventofcode.com/2018/day/8",
"_____no_output_____"
]
],
[
[
"import fluentpy as _",
"_____no_output_____"
],
[
"@_.lib.dataclasses.dataclass._\nclass Node:\n children: tuple\n metadata: tuple\n \n @classmethod\n def parse(cls, number_iterator):\n child_count = next(number_iterator)\n metadata_count = next(number_iterator)\n return cls(\n children=_(range(child_count)).map(lambda ignored: Node.parse(number_iterator))._,\n metadata=_(range(metadata_count)).map(lambda ignored: next(number_iterator))._,\n )\n \n def sum_all_metadata(self):\n return sum(self.metadata) + _(self.children).imap(_.each.sum_all_metadata()._).sum()._\n \n def value(self):\n if 0 == len(self.children):\n return sum(self.metadata)\n \n return (\n _(self.metadata)\n .imap(_.each - 1) # convert to indexes\n .ifilter(_.each >= 0)\n .ifilter(_.each < len(self.children))\n .imap(self.children.__getitem__)\n .imap(Node.value)\n .sum()\n ._\n )",
"_____no_output_____"
],
[
"test_input = (2,3,0,3,10,11,12,1,1,0,1,99,2,1,1,2)\ntest_node = Node.parse(iter(test_input))\n\nassert 138 == test_node.sum_all_metadata()\nassert 66 == test_node.value()",
"_____no_output_____"
],
[
"day8_input = _(open('input/day8.txt')).read().split(' ').map(int)._\nnode = Node.parse(iter(day8_input))\nnode.sum_all_metadata()",
"_____no_output_____"
]
],
[
[
"# Day 9\n\nhttps://adventofcode.com/2018/day/9",
"_____no_output_____"
]
],
[
[
"class Marble:\n \n def __init__(self, value):\n self.value = value\n self.prev = self.next = self\n \n def insert_after(self, a_marble):\n a_marble.next = self.next\n a_marble.prev = self\n \n a_marble.next.prev = a_marble.prev.next = a_marble\n \n def remove(self):\n self.prev.next = self.next\n self.next.prev = self.prev\n return self\n\nclass Circle:\n \n def __init__(self):\n self.current = None\n \n def play_marble(self, marble):\n if self.current is None:\n self.current = marble\n return 0 # normal insert, no points, only happens once at the beginning\n elif marble.value % 23 == 0:\n removed = self.current.prev.prev.prev.prev.prev.prev.prev.remove()\n self.current = removed.next\n return marble.value + removed.value\n else:\n self.current.next.insert_after(marble)\n self.current = marble\n return 0 # normal insert, no points\n \ndef marble_game(player_count, marbles):\n player_scores = [0] * player_count\n circle = Circle()\n for marble_value in range(marbles + 1):\n player_scores[marble_value % player_count] += circle.play_marble(Marble(marble_value))\n return max(player_scores)",
"_____no_output_____"
],
[
"assert 8317 == marble_game(player_count=10, marbles=1618)\nassert 146373 == marble_game(player_count=13, marbles=7999)\nassert 2764 == marble_game(player_count=17, marbles=1104)\nassert 54718 == marble_game(player_count=21, marbles=6111)\nassert 37305 == marble_game(player_count=30, marbles=5807)",
"_____no_output_____"
],
[
"marble_game(player_count=455, marbles=71223)",
"_____no_output_____"
],
[
"marble_game(player_count=455, marbles=71223*100)",
"_____no_output_____"
]
],
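The doubly linked `Marble`/`Circle` structure above can also be expressed with `collections.deque`, keeping the current marble at the right end and rotating. This is a common alternative formulation, not the notebook's code; `marble_game_deque` is a hypothetical name:

```python
from collections import deque

def marble_game_deque(player_count, marbles):
    """Same game as Circle above: rightmost deque element is the current marble."""
    scores = [0] * player_count
    circle = deque([0])
    for marble in range(1, marbles + 1):
        if marble % 23 == 0:
            circle.rotate(7)  # marble 7 counter-clockwise becomes rightmost
            scores[marble % player_count] += marble + circle.pop()
            circle.rotate(-1)  # marble after the removed one becomes current
        else:
            circle.rotate(-1)  # skip one marble clockwise, then insert
            circle.append(marble)
    return max(scores)
```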
[
[
"# Day 10\n\nhttps://adventofcode.com/2018/day/10",
"_____no_output_____"
]
],
[
[
"@_.lib.dataclasses.dataclass._\nclass Particle:\n x: int\n y: int\n delta_x: int\n delta_y: int",
"_____no_output_____"
],
[
"day10_input = (\n _(open('input/day10.txt'))\n .read()\n .findall(r'position=<\\s?(-?\\d+),\\s+(-?\\d+)> velocity=<\\s*(-?\\d+),\\s+(-?\\d+)>')\n .map(lambda each: _(each).map(int)._)\n .call(list)\n ._\n)\n",
"_____no_output_____"
],
[
"%matplotlib inline",
"_____no_output_____"
],
[
"def evolve(particles):\n particles.x += particles.delta_x\n particles.y += particles.delta_y\n\ndef devolve(particles):\n particles.x -= particles.delta_x\n particles.y -= particles.delta_y\n\ndef show(particles):\n particles.y *= -1\n particles.plot(x='x', y='y', kind='scatter', s=1)\n particles.y *= -1\n\nlast_width = last_height = float('inf')\ndef particles_are_shrinking(particles):\n global last_width, last_height\n current_width = particles.x.max() - particles.x.min()\n current_height = particles.y.max() - particles.y.min()\n is_shrinking = current_width < last_width and current_height < last_height\n last_width, last_height = current_width, current_height\n return is_shrinking",
"_____no_output_____"
],
[
"particles = _.lib.pandas.DataFrame.from_records(\n data=day10_input, \n columns=['x', 'y', 'delta_x', 'delta_y']\n)._",
"_____no_output_____"
],
[
"last_width = last_height = float('inf')\nseconds = 0\nwhile particles_are_shrinking(particles):\n evolve(particles)\n seconds += 1\ndevolve(particles)\nshow(particles)\nseconds - 1",
"_____no_output_____"
]
],
[
[
"# Day 11\n\nhttps://adventofcode.com/2018/day/11",
"_____no_output_____"
]
],
[
[
"import fluentpy as _\nfrom pyexpect import expect",
"_____no_output_____"
],
[
"def power_level(x, y, grid_serial):\n rack_id = x + 10\n power_level = rack_id * y\n power_level += grid_serial\n power_level *= rack_id\n power_level //= 100\n power_level %= 10\n return power_level - 5",
"_____no_output_____"
],
[
"assert 4 == power_level(x=3, y=5, grid_serial=8)\nassert -5 == power_level(122, 79, grid_serial=57)\nassert 0 == power_level(217, 196, grid_serial=39)\nassert 4 == power_level(101, 153, grid_serial=71)",
"_____no_output_____"
],
[
"def power_levels(grid_serial):\n return (\n _(range(1, 301))\n .product(repeat=2)\n .star_map(_(power_level).curry(x=_, y=_, grid_serial=grid_serial)._)\n .to(_.lib.numpy.array)\n ._\n .reshape(300, -1)\n .T\n )",
"_____no_output_____"
],
[
"def compute_max_power(matrix, subset_size):\n expect(matrix.shape[0]) == matrix.shape[1]\n expect(subset_size) <= matrix.shape[0]\n expect(subset_size) > 0\n \n # +1 because 300 matrix by 300 subset should produce one value\n width = matrix.shape[0] - subset_size + 1\n height = matrix.shape[1] - subset_size + 1\n output = _.lib.numpy.zeros((width, height))._\n for x in range(width):\n for y in range(height):\n output[x,y] = matrix[y:y+subset_size, x:x+subset_size].sum()\n return output\n\ndef coordinates_with_max_power(matrix, subset_size=3):\n output = compute_max_power(matrix, subset_size=subset_size)\n np = _.lib.numpy._\n index = np.unravel_index(np.argmax(output), output.shape)\n return (\n _(index).map(_.each + 1)._, # turn back into coordinates\n np.amax(output)\n )",
"_____no_output_____"
],
[
"result = coordinates_with_max_power(power_levels(18))\nassert ((33, 45), 29) == result, result\nresult = coordinates_with_max_power(power_levels(42))\nassert ((21, 61), 30) == result, result",
"_____no_output_____"
],
[
"coordinates_with_max_power(power_levels(5034))",
"_____no_output_____"
],
[
"def find_best_subset(matrix):\n best_max_power = best_subset_size = float('-inf')\n best_coordinates = None\n for subset_size in range(1, matrix.shape[0] + 1):\n coordinates, max_power = coordinates_with_max_power(matrix, subset_size=subset_size)\n if max_power > best_max_power:\n best_max_power = max_power\n best_subset_size = subset_size\n best_coordinates = coordinates\n return (\n best_coordinates,\n best_subset_size,\n best_max_power,\n )",
"_____no_output_____"
],
[
"result = coordinates_with_max_power(power_levels(18), subset_size=16)\nexpect(result) == ((90, 269), 113) \nresult = coordinates_with_max_power(power_levels(42), subset_size=12)\nexpect(result) == ((232, 251), 119)",
"_____no_output_____"
],
[
"result = find_best_subset(power_levels(18))\nexpect(result) == ((90, 269), 16, 113)",
"_____no_output_____"
],
[
"find_best_subset(power_levels(5034))",
"_____no_output_____"
]
],
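`compute_max_power` above recomputes every window sum from scratch, which is O(n² s²) per subset size; a summed-area table yields all window sums of a given size in O(n²) total. A sketch under the same (x, y) indexing convention as `compute_max_power` (`compute_max_power_sat` is a hypothetical name):

```python
import numpy as np

def compute_max_power_sat(matrix, subset_size):
    """All subset_size x subset_size window sums via a summed-area table.
    Like compute_max_power above, output[x, y] sums the window whose
    top-left cell is at row y, column x of the input matrix."""
    # s[a, b] = sum of matrix[:a, :b]; padded with a zero row/column
    s = np.zeros((matrix.shape[0] + 1, matrix.shape[1] + 1))
    s[1:, 1:] = matrix.cumsum(axis=0).cumsum(axis=1)
    k = subset_size
    # inclusion-exclusion over the four corners of each window
    window = s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]
    return window.T  # transpose to the (x, y) indexing used above
```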
[
[
"# Day 12\n\nhttps://adventofcode.com/2018/day/12",
"_____no_output_____"
]
],
[
[
"import fluentpy as _",
"_____no_output_____"
],
[
"def parse(a_string):\n is_flower = _.each == '#'\n initial_state = (\n _(a_string)\n .match(r'initial state:\s*([#.]+)')\n .group(1)\n .map(is_flower)\n .enumerate()\n .call(dict)\n ._\n )\n\n patterns = dict(\n _(a_string)\n .findall('([#.]{5})\s=>\s([#.])')\n .map(lambda each: (_(each[0]).map(is_flower)._, is_flower(each[1])))\n ._\n )\n return initial_state, patterns\n\ndef print_state(generation, state):\n lowest_offset = min(state)\n print(f'{generation:>5} {sum_state(state):>5} {lowest_offset:>5}: ', end='')\n print(string_from_state(state))\n\ndef string_from_state(state):\n lowest_offset, highest_offset = min(state), max(state)\n return (\n _(range(lowest_offset - 2, highest_offset + 3))\n .map(lambda each: state.get(each, False))\n .map(lambda each: each and '#' or '.')\n .join()\n ._\n )\n\ndef sum_state(state):\n return (\n _(state)\n .items()\n .map(lambda each: each[1] and each[0] or 0)\n .sum()\n ._\n )\n\ndef evolve(initial_state, patterns, number_of_generations, \n show_sums=False, show_progress=False, show_state=False, stop_on_repetition=False):\n current_state = dict(initial_state)\n next_state = dict()\n \n def surrounding_of(state, index):\n return tuple(state.get(each, False) for each in range(index-2, index+3))\n \n def compute_next_generation():\n nonlocal current_state, next_state\n first_key, last_key = min(current_state), max(current_state)\n for index in range(first_key - 2, last_key + 2):\n is_flower = patterns.get(surrounding_of(current_state, index), False)\n if is_flower:\n next_state[index] = is_flower\n \n current_state, next_state = next_state, dict()\n return current_state\n \n seen = set()\n for generation in range(number_of_generations):\n if show_sums:\n print(generation, sum_state(current_state))\n if show_progress and generation % 1000 == 0: print('.', end='')\n if show_state: print_state(generation, current_state)\n if stop_on_repetition:\n stringified = string_from_state(current_state)\n if stringified in seen:\n print(f'repetition on generation {generation}')\n print(stringified)\n return current_state\n seen.add(stringified)\n \n compute_next_generation()\n \n return current_state",
"_____no_output_____"
],
[
"end_state = evolve(*parse(\"\"\"initial state: #..#.#..##......###...###\n\n...## => #\n..#.. => #\n.#... => #\n.#.#. => #\n.#.## => #\n.##.. => #\n.#### => #\n#.#.# => #\n#.### => #\n##.#. => #\n##.## => #\n###.. => #\n###.# => #\n####. => #\n\"\"\"), 20, show_state=True)\nassert 325 == sum_state(end_state)",
" 0 145 0: ..#..#.#..##......###...###..\n 1 91 0: ..#...#....#.....#..#..#..#..\n 2 132 0: ..##..##...##....#..#..#..##..\n 3 102 -1: ..#.#...#..#.#....#..#..#...#..\n 4 154 0: ..#.#..#...#.#...#..#..##..##..\n 5 115 1: ..#...##...#.#..#..#...#...#..\n 6 174 1: ..##.#.#....#...#..##..##..##..\n 7 126 0: ..#..###.#...##..#...#...#...#..\n 8 213 0: ..#....##.#.#.#..##..##..##..##..\n 9 138 0: ..##..#..#####....#...#...#...#..\n 10 213 -1: ..#.#..#...#.##....##..##..##..##..\n 11 136 0: ..#...##...#.#...#.#...#...#...#..\n 12 218 0: ..##.#.#....#.#...#.#..##..##..##..\n 13 133 -1: ..#..###.#....#.#...#....#...#...#..\n 14 235 -1: ..#....##.#....#.#..##...##..##..##..\n 15 149 -1: ..##..#..#.#....#....#..#.#...#...#..\n 16 226 -2: ..#.#..#...#.#...##...#...#.#..##..##..\n 17 170 -1: ..#...##...#.#.#.#...##...#....#...#..\n 18 280 -1: ..##.#.#....#####.#.#.#...##...##..##..\n 19 287 -2: ..#..###.#..#.#.#######.#.#.#..#.#...#..\n"
],
[
"day12_input = open('input/day12.txt').read()\nsum_state(evolve(*parse(day12_input), 20))",
"_____no_output_____"
],
[
"# still much too slow to run 50 billion generations directly\nnumber_of_iterations = 50000000000\n#number_of_iterations = 200\nsum_state(evolve(*parse(day12_input), number_of_iterations, stop_on_repetition=True))",
"repetition on generation 135\n..####.#.....###.#.....####.#.....###.#.....###.#.....###.#....####.#.....###.#....####.#....####.#.....###.#.....####.#...####.#....###.#.....####.#....###.#.....###.#.....####.#....####.#..\n"
],
[
"last_score = 11959\nincrement_per_generation = 11959 - 11873\nlast_generation = 135\ngenerations_to_go = number_of_iterations - last_generation\nend_score = last_score + generations_to_go * increment_per_generation\nend_score",
"_____no_output_____"
],
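The extrapolation cell above relies on the score growing by a fixed increment once the plant pattern repeats (possibly shifted). A minimal sketch of that linear extrapolation; the 135 / 11959 / 86 figures are the ones observed in the run above:

```python
# Once the pattern stabilizes, every further generation adds the same
# increment, so any far-future score is a straight-line extrapolation.
def extrapolated_score(last_generation, last_score, increment_per_generation, target_generation):
    generations_to_go = target_generation - last_generation
    return last_score + generations_to_go * increment_per_generation

# generation 135 scored 11959, growing by 11959 - 11873 = 86 per generation
assert extrapolated_score(135, 11959, 86, 136) == 12045
```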
[
"import fluentpy as _",
"_____no_output_____"
],
[
"# Numpy implementation for comparison\nnp = _.lib.numpy._\n\nclass State:\n \n @classmethod\n def parse_string(cls, a_string):\n is_flower = lambda each: int(each == '#')\n initial_state = (\n _(a_string)\n .match(r'initial state:\\s*([#\\.]+)')\n .group(1)\n .map(is_flower)\n ._\n )\n\n patterns = (\n _(a_string)\n .findall('([#.]{5})\\s=>\\s([#\\.])')\n .map(lambda each: (_(each[0]).map(is_flower)._, is_flower(each[1])))\n ._\n )\n return initial_state, patterns\n\n @classmethod\n def from_string(cls, a_string):\n return cls(*cls.parse_string(a_string))\n \n def __init__(self, initial_state, patterns):\n self.type = np.uint8\n self.patterns = self.trie_from_patterns(patterns)\n self.state = np.zeros(len(initial_state) * 3, dtype=self.type)\n self.zero = self.state.shape[0] // 2\n self.state[self.zero:self.zero+len(initial_state)] = initial_state\n \n def trie_from_patterns(self, patterns):\n trie = np.zeros((2,) * 5, dtype=self.type)\n for pattern, production in patterns:\n trie[pattern] = production\n return trie\n \n @property\n def size(self):\n return self.state.shape[0]\n \n def recenter_or_grow_if_neccessary(self):\n # check how much empty space there is, and if re-centering the pattern might be good enough\n if self.needs_resize() and self.is_region_empty(0, self.zero - self.size // 4):\n self.move(- self.size // 4)\n if self.needs_resize() and self.is_region_empty(self.zero + self.size // 4, -1):\n self.move(self.size // 4)\n if self.needs_resize():\n self.grow()\n\n def needs_resize(self):\n return any(self.state[:4]) or any(self.state[-4:])\n \n def is_region_empty(self, lower_limit, upper_limit):\n return not any(self.state[lower_limit:upper_limit])\n \n def move(self, move_by):\n assert move_by != 0\n new_state = np.zeros_like(self.state)\n if move_by < 0:\n new_state[:move_by] = self.state[-move_by:]\n else:\n new_state[move_by:] = self.state[:-move_by]\n self.state = new_state\n self.zero += move_by\n \n def grow(self):\n new_state = 
np.zeros(self.size * 2, dtype=self.type)\n move_by = self.zero - (self.size // 2)\n new_state[self.zero : self.zero + self.size] = self.state\n self.state = new_state\n self.zero -= move_by\n \n def evolve_once(self):\n self.state[2:-2] = self.patterns[\n self.state[:-4],\n self.state[1:-3],\n self.state[2:-2],\n self.state[3:-1],\n self.state[4:]\n ]\n \n self.recenter_or_grow_if_neccessary()\n \n return self\n \n def evolve(self, number_of_iterations, show_progress=False, show_state=False):\n while number_of_iterations:\n self.evolve_once()\n number_of_iterations -= 1\n \n if show_progress and number_of_iterations % 1000 == 0:\n print('.', end='')\n if show_state:\n self.print()\n return self\n \n def __repr__(self):\n return (\n _(self.state)\n .map(lambda each: each and '#' or '.')\n .join()\n ._\n )\n \n def print(self):\n print(f\"{self.zero:>5} {self.sum():>5}\", repr(self))\n \n def sum(self):\n return (\n _(self.state)\n .ienumerate()\n .imap(lambda each: each[1] and (each[0] - self.zero) or 0)\n .sum()\n ._\n )",
"_____no_output_____"
],
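The core trick of the `State` class above is the 5-dimensional pattern trie: indexing it with five shifted views of the state array evaluates every rule for every cell in one vectorized step. A standalone sketch with a single hypothetical rule:

```python
import numpy as np

# All 32 possible 5-cell neighbourhoods index into a (2,2,2,2,2) lookup table.
patterns = np.zeros((2,) * 5, dtype=np.uint8)
patterns[0, 0, 0, 1, 1] = 1  # hypothetical rule: ...## => #

state = np.array([0, 0, 0, 0, 1, 1, 0, 0], dtype=np.uint8)
# Five shifted views give each inner cell its full neighbourhood at once.
new_inner = patterns[state[:-4], state[1:-3], state[2:-2], state[3:-1], state[4:]]
assert new_inner.tolist() == [0, 1, 0, 0]
```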
[
"test = State.from_string(\"\"\"initial state: #..#.#..##......###...###\n\n...## => #\n..#.. => #\n.#... => #\n.#.#. => #\n.#.## => #\n.##.. => #\n.#### => #\n#.#.# => #\n#.### => #\n##.#. => #\n##.## => #\n###.. => #\n###.# => #\n####. => #\n\"\"\")\n\nassert 325 == test.evolve(20, show_state=True).sum(), test.sum()",
" 37 91 .....................................#...#....#.....#..#..#..#.............\n 37 132 .....................................##..##...##....#..#..#..##............\n 37 102 ....................................#.#...#..#.#....#..#..#...#............\n 37 154 .....................................#.#..#...#.#...#..#..##..##...........\n 37 115 ......................................#...##...#.#..#..#...#...#...........\n 37 174 ......................................##.#.#....#...#..##..##..##..........\n 37 126 .....................................#..###.#...##..#...#...#...#..........\n 37 213 .....................................#....##.#.#.#..##..##..##..##.........\n 37 138 .....................................##..#..#####....#...#...#...#.........\n 37 213 ....................................#.#..#...#.##....##..##..##..##........\n 37 136 .....................................#...##...#.#...#.#...#...#...#........\n 37 218 .....................................##.#.#....#.#...#.#..##..##..##.......\n 37 133 ....................................#..###.#....#.#...#....#...#...#.......\n 37 235 ....................................#....##.#....#.#..##...##..##..##......\n 37 149 ....................................##..#..#.#....#....#..#.#...#...#......\n 37 226 ...................................#.#..#...#.#...##...#...#.#..##..##.....\n 37 170 ....................................#...##...#.#.#.#...##...#....#...#.....\n 37 280 ....................................##.#.#....#####.#.#.#...##...##..##....\n 37 287 ...................................#..###.#..#.#.#######.#.#.#..#.#...#....\n 18 325 ................#....##....#####...#######....#.#..##......................\n"
],
[
"# Much faster initially, but each generation gets linearly slower as the state array grows.\n# Since the array grows linearly with the number of generations, it's still way too slow.\nday12_input = open('input/day12.txt').read()\nstate = State.from_string(day12_input)\n#state.evolve(50000000000, show_progress=True)\n#state.evolve(10000, show_progress=True).print()",
"_____no_output_____"
]
],
[
[
"# Day 13\n\nhttps://adventofcode.com/2018/day/13",
"_____no_output_____"
]
],
[
[
"import fluentpy as _\nfrom pyexpect import expect",
"_____no_output_____"
],
[
"Location = _.lib.collections.namedtuple('Location', ('x', 'y'))._\n\nUP, RIGHT, DOWN, LEFT, STRAIGHT = '^>v<|'\nUPDOWN, LEFTRIGHT, UPRIGHT, RIGHTDOWN, DOWNLEFT, LEFTUP = '|-\\/\\/'\nMOVEMENT = {\n '^' : Location(0, -1),\n '>' : Location(1, 0),\n 'v' : Location(0, 1),\n '<' : Location(-1, 0)\n}\n\nCURVE = {\n '\\\\': { '^':'<', '<':'^', 'v':'>', '>':'v'},\n '/': { '^':'>', '<':'v', 'v':'<', '>':'^'},\n}\n\nINTERSECTION = {\n '^': { LEFT:'<', STRAIGHT:'^', RIGHT:'>' },\n '>': { LEFT:'^', STRAIGHT:'>', RIGHT:'v' },\n 'v': { LEFT:'>', STRAIGHT:'v', RIGHT:'<' },\n '<': { LEFT:'v', STRAIGHT:'<', RIGHT:'^' },\n}\n\n\n@_.lib.dataclasses.dataclass._\nclass Cart:\n location: Location\n orientation: str\n world: str\n program: iter = _.lib.dataclasses.field._(default_factory=lambda: _((LEFT, STRAIGHT, RIGHT)).icycle()._)\n \n def tick(self):\n move = MOVEMENT[self.orientation]\n self.location = Location(self.location.x + move.x, self.location.y + move.y)\n \n if self.world_at_current_location() in CURVE:\n self.orientation = CURVE[self.world_at_current_location()][self.orientation]\n if self.world_at_current_location() == '+':\n self.orientation = INTERSECTION[self.orientation][next(self.program)]\n \n return self\n \n def world_at_current_location(self):\n expect(self.location.y) < len(self.world)\n expect(self.location.x) < len(self.world[self.location.y])\n return self.world[self.location.y][self.location.x]\n \n def __repr__(self):\n return f'<Cart(location={self.location}, orientation={self.orientation})'\n\ndef parse_carts(world):\n world = world.splitlines()\n for line_number, line in enumerate(world):\n for line_offset, character in _(line).enumerate():\n if character in '<>^v':\n yield Cart(location=Location(line_offset, line_number), orientation=character, world=world)\n\ndef crashed_carts(cart, carts):\n carts = carts[:]\n if cart not in carts:\n return tuple() # crashed carts already removed\n carts.remove(cart)\n for first, second in 
_([cart]).icycle().zip(carts):\n if first.location == second.location:\n return first, second\n\ndef did_crash(cart, carts):\n carts = carts[:]\n if cart not in carts: # already removed because of crash\n return True\n carts.remove(cart)\n for first, second in _([cart]).icycle().zip(carts):\n if first.location == second.location:\n return True\n return False\n\ndef location_of_first_crash(input_string):\n carts = list(parse_carts(input_string))\n while True:\n for cart in _(carts).sorted(key=_.each.location._)._:\n cart.tick()\n if did_crash(cart, carts):\n return cart.location\n\ndef location_of_last_cart_after_crashes(input_string):\n carts = list(parse_carts(input_string))\n while True:\n for cart in _(carts).sorted(key=_.each.location._)._:\n cart.tick()\n if did_crash(cart, carts):\n _(crashed_carts(cart, carts)).each(carts.remove)\n if 1 == len(carts):\n return carts[0].location\n",
"_____no_output_____"
],
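The intersection handling above cycles each cart through left, straight, right on successive intersections. A standalone sketch of that bookkeeping, with `itertools.cycle` standing in for fluentpy's `.icycle()`:

```python
import itertools

LEFT, STRAIGHT, RIGHT = 'left', 'straight', 'right'
TURN = {
    ('^', LEFT): '<', ('^', STRAIGHT): '^', ('^', RIGHT): '>',
    ('>', LEFT): '^', ('>', STRAIGHT): '>', ('>', RIGHT): 'v',
    ('v', LEFT): '>', ('v', STRAIGHT): 'v', ('v', RIGHT): '<',
    ('<', LEFT): 'v', ('<', STRAIGHT): '<', ('<', RIGHT): '^',
}

# each cart carries its own endless turn program
program = itertools.cycle((LEFT, STRAIGHT, RIGHT))
orientation = '^'
history = []
for _ignored in range(4):  # four consecutive intersections
    orientation = TURN[orientation, next(program)]
    history.append(orientation)

assert history == ['<', '<', '^', '<']
```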
[
"expect(Cart(location=Location(0,0), orientation='>', world=['>-']).tick().location) == (1,0)\nexpect(Cart(location=Location(0,0), orientation='>', world=['>\\\\']).tick().location) == (1,0)\nexpect(Cart(location=Location(0,0), orientation='>', world=['>\\\\']).tick().orientation) == 'v'\nexpect(Cart(location=Location(0,0), orientation='>', world=['>+']).tick().orientation) == '^'\n\ncart1, cart2 = parse_carts('>--<')\nexpect(cart1).has_attributes(location=(0,0), orientation='>')\nexpect(cart2).has_attributes(location=(3,0), orientation='<')\nexpect(location_of_first_crash('>--<')) == (2,0)",
"_____no_output_____"
],
[
"test_input = r\"\"\"/->-\\ \n| | /----\\\n| /-+--+-\\ |\n| | | | v |\n\\-+-/ \\-+--/\n \\------/ \n\"\"\"\nexpect(location_of_first_crash(test_input)) == (7,3)",
"_____no_output_____"
],
[
"day13_input = open('input/day13.txt').read()\nlocation_of_first_crash(day13_input)",
"_____no_output_____"
],
[
"test_input = r\"\"\"/>-<\\ \n| | \n| /<+-\\\n| | | v\n\\>+</ |\n | ^\n \\<->/\n\"\"\"\nexpect(location_of_last_cart_after_crashes(test_input)) == (6,4)",
"_____no_output_____"
],
[
"location_of_last_cart_after_crashes(day13_input)",
"_____no_output_____"
]
],
[
[
"# Day 14\n\nhttps://adventofcode.com/2018/day/14",
"_____no_output_____"
]
],
[
[
"import fluentpy as _\nfrom pyexpect import expect",
"_____no_output_____"
],
[
"scores = bytearray([3,7])\nelf1 = 0\nelf2 = 1\n\ndef reset():\n global scores, elf1, elf2\n scores = bytearray([3,7])\n elf1 = 0\n elf2 = 1\n\ndef generation():\n global scores, elf1, elf2\n new_recipe = scores[elf1] + scores[elf2]\n first_digit, second_digit = divmod(new_recipe, 10)\n if first_digit: scores.append(first_digit)\n scores.append(second_digit)\n elf1 = (elf1 + 1 + scores[elf1]) % len(scores)\n elf2 = (elf2 + 1 + scores[elf2]) % len(scores)",
"_____no_output_____"
],
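Each generation above appends the decimal digits of the two current recipes' sum; since both scores are single digits, the sum is at most 18, so `divmod` by 10 splits it cleanly and a leading zero is never appended:

```python
def new_digits(score_a, score_b):
    # sum of two single-digit scores is at most 18 -> at most two digits
    first_digit, second_digit = divmod(score_a + score_b, 10)
    if first_digit:
        return [first_digit, second_digit]
    return [second_digit]

assert new_digits(3, 7) == [1, 0]
assert new_digits(2, 3) == [5]
```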
[
"def next_10_after(how_many_generations):\n reset()\n while len(scores) < how_many_generations + 10:\n generation()\n\n return _(scores)[how_many_generations:how_many_generations+10].join()._",
"_____no_output_____"
],
[
"expect(next_10_after(9)) == '5158916779'\nexpect(next_10_after(5)) == '0124515891'\nexpect(next_10_after(18)) == '9251071085'\nexpect(next_10_after(2018)) == '5941429882'",
"_____no_output_____"
],
[
"day14_input = 894501\nprint(next_10_after(day14_input))",
"2157138126\n"
],
[
"def generations_till_we_generate(a_number):\n needle = _(a_number).str().map(int).call(bytearray)._\n reset()\n while needle not in scores[-len(needle) - 2:]: # at most two numbers get appended\n generation()\n\n return scores.rindex(needle)",
"_____no_output_____"
],
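The stopping condition above works because at most two digits are appended per generation, so checking only the last `len(needle) + 2` scores catches the needle the moment it appears. Bytearrays support substring containment, which keeps the per-generation check cheap:

```python
def needle_in_tail(scores, needle):
    # only the freshly appended region can contain a new match
    return needle in scores[-len(needle) - 2:]

assert needle_in_tail(bytearray([3, 7, 1, 0, 1, 0]), bytearray([0, 1, 0]))
assert not needle_in_tail(bytearray([3, 7, 1, 0, 1, 0]), bytearray([9, 9]))
```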
[
"expect(generations_till_we_generate('51589')) == 9\nexpect(generations_till_we_generate('01245')) == 5\nexpect(generations_till_we_generate('92510')) == 18\nexpect(generations_till_we_generate('59414')) == 2018",
"_____no_output_____"
],
[
"print(generations_till_we_generate(day14_input))",
"20365081\n"
]
],
[
[
"# Day 15\n\nhttps://adventofcode.com/2018/day/15",
"_____no_output_____"
]
],
[
[
"expect = _.lib.pyexpect.expect._\nnp = _.lib.numpy._\ninf = _.lib.math.inf._\ndataclasses = _.lib.dataclasses._\n\ndef tuplify(a_function):\n @_.lib.functools.wraps._(a_function)\n def wrapper(*args, **kwargs):\n return tuple(a_function(*args, **kwargs))\n return wrapper",
"_____no_output_____"
],
[
"Location = _.lib.collections.namedtuple('Location', ['x', 'y'])._\nNO_LOCATION = Location(-1,-1)\n\[email protected]\nclass Player:\n type: chr\n location: Location\n level: 'Level' = dataclasses.field(repr=False)\n\n hitpoints: int = 200\n attack_power: int = dataclasses.field(default=3, repr=False)\n \n distances: np.ndarray = dataclasses.field(default=None, repr=False)\n predecessors: np.ndarray = dataclasses.field(default=None, repr=False)\n \n def turn(self):\n self.distances = self.predecessors = None\n\n if self.hitpoints <= 0:\n return # we're dead\n \n if not self.is_in_range_of_enemies():\n self.move_towards_enemy()\n \n if self.is_in_range_of_enemies():\n self.attack_weakest_enemy_in_range()\n \n return self\n \n def is_in_range_of_enemies(self):\n adjacent_values = (\n _(self).level.adjacent_locations(self.location)\n .map(self.level.level.__getitem__)\n ._\n )\n return self.enemy_class() in adjacent_values\n \n def enemy_class(self):\n if self.type == 'E':\n return 'G'\n return 'E'\n \n def move_towards_enemy(self):\n targets = (\n _(self).level.enemies_of(self)\n .map(_.each.attack_positions()._).flatten(level=1)\n .filter(self.is_location_reachable)\n .sorted(key=self.distance_to_location)\n .groupby(key=self.distance_to_location)\n .get(0, []).get(1, [])\n ._\n )\n \n target = _(targets).sorted().get(0, None)._\n if target is None:\n return # no targets in range\n \n self.move_to(self.one_step_towards(target))\n \n def move_to(self, new_location):\n self.level.level[self.location] = '.'\n self.level.level[new_location] = self.type\n\n self.location = new_location\n self.distances = self.predecessors = None\n \n def attack_positions(self):\n return self.level.reachable_adjacent_locations(self.location)\n \n def is_location_reachable(self, location):\n self.ensure_distances()\n return inf != self.distances[location]\n\n def distance_to_location(self, location):\n self.ensure_distances()\n return self.distances[location]\n \n def 
one_step_towards(self, location):\n self.ensure_distances()\n if 2 != len(self.predecessors[location]):\n breakpoint()\n while Location(*self.predecessors[location]) != self.location:\n location = Location(*self.predecessors[location])\n return location\n\n def ensure_distances(self):\n if self.distances is not None:\n return\n self.distances, self.predecessors = self.level.shortest_distances_from(self.location)\n \n def attack_weakest_enemy_in_range(self):\n adjacent_locations = _(self).level.adjacent_locations(self.location)._\n target = (\n _(self).level.enemies_of(self)\n .filter(_.each.location.in_(adjacent_locations)._)\n .sorted(key=_.each.hitpoints._)\n .groupby(key=_.each.hitpoints._)\n .get(0).get(1)\n .sorted(key=_.each.location._)\n .get(0)\n ._\n )\n \n target.damage_by(self.attack_power)\n \n def damage_by(self, ammount):\n self.hitpoints -= ammount\n # REFACT this should happen on the level object\n if self.hitpoints <= 0:\n self.level.players = _(self).level.players.filter(_.each != self)._\n self.level.level[self.location] = '.'\n \nclass Level:\n \n def __init__(self, level_description):\n self.level = np.array(_(level_description).strip().split('\\n').map(tuple)._)\n self.players = self.parse_players()\n self.number_of_full_rounds = 0\n \n @tuplify\n def parse_players(self):\n for row_number, row in enumerate(self.level):\n for col_number, char in enumerate(row):\n if char in 'GE':\n yield Player(char, location=Location(row_number,col_number), level=self)\n \n def enemies_of(self, player):\n return _(self).players.filter(_.each.type != player.type)._\n \n def adjacent_locations(self, location):\n return (\n _([\n (location.x-1, location.y),\n (location.x, location.y-1),\n (location.x, location.y+1),\n (location.x+1, location.y),\n ])\n .star_map(Location)\n ._\n )\n \n def reachable_adjacent_locations(self, location):\n return (\n _(self).adjacent_locations(location)\n .filter(self.is_location_in_level)\n .filter(self.is_traversible)\n ._\n )\n 
\n def is_location_in_level(self, location):\n x_size, y_size = self.level.shape\n return 0 <= location.x < x_size \\\n and 0 <= location.y < y_size\n \n def is_traversible(self, location):\n return '.' == self.level[location]\n \n def shortest_distances_from(self, location):\n distances = np.full(fill_value=_.lib.math.inf._, shape=self.level.shape, dtype=float)\n distances[location] = 0\n predecessors = np.full(fill_value=NO_LOCATION, shape=self.level.shape, dtype=(int, 2))\n next_locations = _.lib.collections.deque._([location])\n \n while len(next_locations) > 0:\n current_location = next_locations.popleft()\n for location in self.reachable_adjacent_locations(current_location):\n if distances[location] <= (distances[current_location] + 1):\n continue\n distances[location] = distances[current_location] + 1\n predecessors[location] = current_location\n next_locations.append(location)\n return distances, predecessors\n \n def __repr__(self):\n return '\\n'.join(''.join(line) for line in self.level)\n \n def print(self):\n print(\n repr(self)\n + f'\\nrounds: {self.number_of_full_rounds}'\n + '\\n' + _(self).players_in_reading_order().join('\\n')._\n )\n \n def round(self):\n for player in self.players_in_reading_order():\n if self.did_battle_end():\n return self\n\n player.turn()\n self.number_of_full_rounds += 1\n return self\n \n def players_in_reading_order(self):\n return _(self).players.sorted(key=_.each.location._)._\n \n def run_battle(self):\n while not self.did_battle_end():\n self.round()\n return self\n \n def run_rounds(self, number_of_full_rounds):\n for ignored in range(number_of_full_rounds):\n self.round()\n if self.did_battle_end():\n break\n \n return self\n\n def did_battle_end(self):\n return _(self).players.map(_.each.type._).call(set).len()._ == 1\n \n def battle_summary(self):\n number_of_remaining_hitpoints = _(self).players.map(_.each.hitpoints._).sum()._\n return self.number_of_full_rounds * number_of_remaining_hitpoints",
"_____no_output_____"
],
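`shortest_distances_from` above is a breadth-first search that records both distance and predecessor for every reachable tile. A standalone sketch over a plain list-of-strings grid, using the same `'#'` wall / `'.'` floor convention as the level strings above:

```python
from collections import deque

def shortest_distances(grid, start):
    # BFS from start; only '.' tiles are traversable, so walls and units block
    distances = {start: 0}
    predecessors = {}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        for neighbor in ((x - 1, y), (x, y - 1), (x, y + 1), (x + 1, y)):
            nx, ny = neighbor
            in_bounds = 0 <= nx < len(grid) and 0 <= ny < len(grid[nx])
            if in_bounds and grid[nx][ny] == '.' and neighbor not in distances:
                distances[neighbor] = distances[(x, y)] + 1
                predecessors[neighbor] = (x, y)
                queue.append(neighbor)
    return distances, predecessors

grid = ['###', '#G#', '#.#', '#E#', '###']
distances, predecessors = shortest_distances(grid, (1, 1))
assert distances == {(1, 1): 0, (2, 1): 1}
assert predecessors == {(2, 1): (1, 1)}
```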
[
"level = Level(\"\"\"\\\n#######\n#.G.E.#\n#E....#\n#######\n\"\"\")\nexpect(level.players) == (Player('G',Location(1,2), level), Player('E', Location(1, 4), level), Player('E', Location(2,1), level))\nexpect(level.enemies_of(level.players[0])) == (Player('E', Location(x=1, y=4), level), Player('E', Location(x=2, y=1), level))\nlevel.players[0].damage_by(200)\nexpect(level.players) == (Player('E', Location(1, 4), level), Player('E', Location(2,1), level))\nexpect(repr(level)) == '''\\\n#######\n#...E.#\n#E....#\n#######'''\n\ninf = _.lib.math.inf._\nNO = [-1,-1]\n\ndistances, parents = Level('''\n###\n#G#\n#.#\n#E#\n###\n''').shortest_distances_from(Location(1,1))\n \nexpect(distances.tolist()) == [\n [inf, inf, inf],\n [inf, 0, inf],\n [inf, 1, inf],\n [inf, inf, inf],\n [inf, inf, inf],\n]\nexpect(parents.tolist()) == [\n [NO, NO, NO],\n [NO, NO, NO],\n [NO, [1,1], NO],\n [NO, NO, NO],\n [NO, NO, NO],\n]\n\ndistances, parents = Level('''\n#######\n#E..G.#\n#...#.#\n#.G.#G#\n#######\n''').shortest_distances_from(Location(1,1))\n\nexpect(distances.tolist()) == [\n [inf, inf, inf, inf, inf, inf, inf],\n [inf, 0, 1, 2, inf, inf, inf],\n [inf, 1, 2, 3, inf, inf, inf],\n [inf, 2, inf, 4, inf, inf, inf],\n [inf, inf, inf, inf, inf, inf, inf]\n]\n\nexpect(parents.tolist()) == [\n [NO, NO, NO, NO, NO, NO, NO],\n [NO, NO, [1, 1], [1, 2], NO, NO, NO],\n [NO, [1, 1], [1, 2], [1, 3], NO, NO, NO],\n [NO, [2, 1], NO, [2, 3], NO, NO, NO],\n [NO, NO, NO, NO, NO, NO, NO]\n]\n\ndistances, parents = Level('''\n#######\n#E..G.#\n#.#...#\n#.G.#G#\n#######\n''').shortest_distances_from(Location(1,1))\n\nexpect(distances[1:-1, 1:-1].tolist()) == [\n [0,1,2,inf,6],\n [1,inf,3,4,5],\n [2,inf,4,inf,inf]\n]",
"_____no_output_____"
],
[
"level = Level(\"\"\"\\\n#######\n#.G.E.#\n#E....#\n#######\n\"\"\")\nexpect(level.players[0].location) == Location(1,2)\nexpect(level.players[0].turn().location) == Location(1,1)\nexpect(level.players[0].is_in_range_of_enemies()).is_true()\n\n\nlevel = Level('''\\\n#######\n#..G..#\n#...EG#\n#.#G#G#\n#...#E#\n#.....#\n#######''')\nexpect(level.players[0].is_in_range_of_enemies()).is_false()",
"_____no_output_____"
],
[
"level = Level('''\\\n#########\n#G..G..G#\n#.......#\n#.......#\n#G..E..G#\n#.......#\n#.......#\n#G..G..G#\n#########''')\nexpect(level.round().__repr__()) == '''\\\n#########\n#.G...G.#\n#...G...#\n#...E..G#\n#.G.....#\n#.......#\n#G..G..G#\n#.......#\n#########'''\nexpect(level.round().__repr__()) == '''\\\n#########\n#..G.G..#\n#...G...#\n#.G.E.G.#\n#.......#\n#G..G..G#\n#.......#\n#.......#\n#########'''\nexpect(level.round().__repr__()) == '''\\\n#########\n#.......#\n#..GGG..#\n#..GEG..#\n#G..G...#\n#......G#\n#.......#\n#.......#\n#########'''",
"_____no_output_____"
],
[
"level = Level('''\\\n#######\n#.G...#\n#...EG#\n#.#.#G#\n#..G#E#\n#.....#\n#######''')\nexpect(level.round().__repr__()) == '''\\\n#######\n#..G..#\n#...EG#\n#.#G#G#\n#...#E#\n#.....#\n#######'''\nexpect(level.players[0]).has_attributes(\n hitpoints=200, location=Location(1,3)\n)\nexpect(level.round().__repr__()) == '''\\\n#######\n#...G.#\n#..GEG#\n#.#.#G#\n#...#E#\n#.....#\n#######'''\nlevel.run_rounds(21)\nexpect(level.number_of_full_rounds) == 23\nexpect(repr(level)) == '''\\\n#######\n#...G.#\n#..G.G#\n#.#.#G#\n#...#E#\n#.....#\n#######'''\nexpect(level.players).has_len(5)\nlevel.run_rounds(47-23)\nexpect(level.number_of_full_rounds) == 47\nexpect(repr(level)) == '''\\\n#######\n#G....#\n#.G...#\n#.#.#G#\n#...#.#\n#....G#\n#######'''\nexpect(_(level).players.map(_.each.type._).join()._) == 'GGGG'\nexpect(level.battle_summary()) == 27730\nexpect(level.did_battle_end()).is_true()\n\nlevel = Level('''\\\n#######\n#.G...#\n#...EG#\n#.#.#G#\n#..G#E#\n#.....#\n#######''')\nlevel.run_battle()\nexpect(level.battle_summary()) == 27730\n\nlevel = Level('''\\\n#######\n#G..#E#\n#E#E.E#\n#G.##.#\n#...#E#\n#...E.#\n#######''').run_battle()\nexpect(repr(level)) == '''\\\n#######\n#...#E#\n#E#...#\n#.E##.#\n#E..#E#\n#.....#\n#######'''\nexpect(level.number_of_full_rounds) == 37\nexpect(level.battle_summary()) == 36334\n\nlevel = Level('''\\\n#######\n#E..EG#\n#.#G.E#\n#E.##E#\n#G..#.#\n#..E#.#\n#######''').run_battle()\nexpect(level.battle_summary()) == 39514",
"_____no_output_____"
],
[
"_('input/day15.txt').call(open).read().call(Level).run_battle().battle_summary()._",
"_____no_output_____"
],
[
"def number_of_losses_with_attack_power(level_ascii, attack_power):\n    level = Level(level_ascii)\n    elves = lambda: _(level).players.filter(_.each.type == 'E')._\n    starting_number_of_elves = len(elves())\n    for elf in elves():\n        elf.attack_power = attack_power\n    level.run_battle()\n    return starting_number_of_elves - len(elves()), level.battle_summary()\n\ndef minimum_attack_power_for_no_losses(level_ascii):\n    for attack_power in range(4, 100):\n        number_of_losses, summary = number_of_losses_with_attack_power(level_ascii, attack_power)\n        if 0 == number_of_losses:\n            return attack_power, summary",
"_____no_output_____"
],
[
"expect(minimum_attack_power_for_no_losses('''\\\n#######\n#.G...#\n#...EG#\n#.#.#G#\n#..G#E#\n#.....#\n#######''')) == (15, 4988)",
"_____no_output_____"
],
[
"expect(minimum_attack_power_for_no_losses('''\\\n#######\n#E..EG#\n#.#G.E#\n#E.##E#\n#G..#.#\n#..E#.#\n#######''')) == (4, 31284)",
"_____no_output_____"
],
[
"expect(minimum_attack_power_for_no_losses('''\\\n#######\n#E.G#.#\n#.#G..#\n#G.#.G#\n#G..#.#\n#...E.#\n#######''')) == (15, 3478)",
"_____no_output_____"
],
[
"expect(minimum_attack_power_for_no_losses('''\\\n#######\n#.E...#\n#.#..G#\n#.###.#\n#E#G#G#\n#...#G#\n#######''')) == (12, 6474)",
"_____no_output_____"
],
[
"expect(minimum_attack_power_for_no_losses('''\\\n#########\n#G......#\n#.E.#...#\n#..##..G#\n#...##..#\n#...#...#\n#.G...G.#\n#.....G.#\n#########''')) == (34, 1140)",
"_____no_output_____"
],
[
"_('input/day15.txt').call(open).read().call(minimum_attack_power_for_no_losses)._",
"_____no_output_____"
]
],
[
[
"# Day 16\n\nhttps://adventofcode.com/2018/day/16\n\n## Registers\n- four registers 0,1,2,3\n- initialized to 0\n\n## Instructions\n- 16 opcodes\n- 1 opcode, 2 source operands (A, B), 1 output register (C)\n- inputs can be register addresses or immediate values\n- output is always a register\n\nWe only get the opcode numbers, so we need to work out which number maps to which operation by checking validity against the samples.",
"_____no_output_____"
]
],
[
[
"import fluentpy as _\nexpect = _.lib.pyexpect.expect._\noperator = _.lib.operator._",
"_____no_output_____"
],
[
"def identity(*args):\n return args[0]\ndef register(self, address):\n return self.registers[address]\ndef immediate(self, value):\n return value\ndef ignored(self, value):\n return None\n\ndef make_operation(namespace, name, operation, a_resolver, b_resolver):\n def instruction(self, a, b, c):\n self.registers[c] = operation(a_resolver(self, a), b_resolver(self, b))\n return self\n instruction.__name__ = instruction.__qualname__ = name\n namespace[name] = instruction\n return instruction\n\nclass CPU:\n \n def __init__(self, initial_registers=(0,0,0,0)):\n self.registers = list(initial_registers)\n \n operations = (\n _([\n ('addr', operator.add, register, register),\n ('addi', operator.add, register, immediate),\n ('mulr', operator.mul, register, register),\n ('muli', operator.mul, register, immediate),\n ('banr', operator.and_, register, register),\n ('bani', operator.and_, register, immediate),\n ('borr', operator.or_, register, register),\n ('bori', operator.or_, register, immediate),\n ('setr', identity, register, ignored),\n ('seti', identity, immediate, ignored),\n ('gtir', operator.gt, immediate, register),\n ('gtri', operator.gt, register, immediate),\n ('gtrr', operator.gt, register, register),\n ('eqir', operator.eq, immediate, register),\n ('eqri', operator.eq, register, immediate),\n ('eqrr', operator.eq, register, register),\n ])\n .star_map(_(make_operation).curry(locals())._)\n ._\n )\n \n def evaluate_program(self, instructions, opcode_map):\n for instruction in instructions:\n opcode, a,b,c = instruction\n operation = opcode_map[opcode]\n operation(self, a,b,c)\n \n return self\n \n @classmethod\n def number_of_qualifying_instructions(cls, input_registers, instruction, expected_output_registers):\n return len(cls.qualifying_instructions(input_registers, instruction, expected_output_registers))\n \n @classmethod\n def qualifying_instructions(cls, input_registers, instruction, expected_output_registers):\n opcode, a, b, c = instruction\n return (\n 
_(cls)\n .operations\n .filter(lambda operation: operation(CPU(input_registers), a,b,c).registers == expected_output_registers)\n ._\n )",
"_____no_output_____"
],
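The `make_operation` factory above composes an operator with two addressing-mode resolvers. A standalone sketch of the same idea; `int()` folds the comparison opcodes' booleans into 0/1:

```python
import operator

def make_instruction(operation, resolve_a, resolve_b):
    # each instruction stores operation(resolved A, resolved B) into register C
    def instruction(registers, a, b, c):
        registers[c] = int(operation(resolve_a(registers, a), resolve_b(registers, b)))
        return registers
    return instruction

def register(registers, value):
    return registers[value]

def immediate(registers, value):
    return value

mulr = make_instruction(operator.mul, register, register)
gtir = make_instruction(operator.gt, immediate, register)

assert mulr([3, 2, 1, 1], 2, 1, 2) == [3, 2, 2, 1]
assert gtir([3, 2, 1, 1], 5, 0, 3) == [3, 2, 1, 1]
```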
[
"expect(CPU([3, 2, 1, 1]).mulr(2, 1, 2).registers) == [3, 2, 2, 1]\nexpect(CPU.number_of_qualifying_instructions([3, 2, 1, 1], (9, 2, 1, 2), [3, 2, 2, 1])) == 3",
"_____no_output_____"
],
[
"day16_input = _(open('input/day16.txt')).read()._\ntest_input, test_program_input = day16_input.split('\\n\\n\\n')\n\ndef parse_inputs(before, instruction, after):\n return (\n _(before).split(', ').map(int).to(list),\n _(instruction).split(' ').map(int)._,\n _(after).split(', ').map(int).to(list),\n )\n\ntest_inputs = (\n _(test_input)\n .findall(r'Before: \\[(.*)]\\n(.*)\\nAfter: \\[(.*)\\]')\n .star_map(parse_inputs)\n ._\n)\n\n(\n _(test_inputs)\n .star_map(CPU.number_of_qualifying_instructions)\n .filter(_.each >= 3)\n .len()\n ._\n)",
"_____no_output_____"
],
[
"def add_operations(mapping, opcode_and_operations):\n opcode, operations = opcode_and_operations\n mapping[opcode].append(operations)\n return mapping\n\nopcode_mapping = (\n _(test_inputs)\n .map(_.each[1][0]._) # opcodes\n .zip(\n _(test_inputs).star_map(CPU.qualifying_instructions)._\n )\n # list[tuple[opcode, list[list[functions]]]]\n .reduce(add_operations, _.lib.collections.defaultdict(list)._)\n # dict[opcode, list[list[function]]]\n .items()\n .star_map(lambda opcode, operations: (\n opcode,\n _(operations).map(set).reduce(set.intersection)._\n ))\n .to(dict)\n # dict[opcode, set[functions]]\n)\n\ndef resolved_operations():\n return (\n _(opcode_mapping)\n .values()\n .filter(lambda each: len(each) == 1)\n .reduce(set.union)\n ._\n )\n\ndef has_unresolved_operations():\n return 0 != (\n _(opcode_mapping)\n .values()\n .map(len)\n .filter(_.each > 1)\n .len()\n ._\n )\n\nwhile has_unresolved_operations():\n for opcode, matching_operations in opcode_mapping.items():\n if len(matching_operations) == 1:\n continue # already resolved\n opcode_mapping[opcode] = matching_operations.difference(resolved_operations())\n\nopcode_mapping = _(opcode_mapping).items().star_map(lambda opcode, operations: (opcode, list(operations)[0])).to(dict)\n# dict[opcode, function]\nopcode_mapping",
"_____no_output_____"
],
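The `while has_unresolved_operations()` loop above is a constraint-propagation pass: any opcode left with exactly one candidate operation pins that operation, which is then removed from every other opcode's candidate set. A minimal sketch with hypothetical candidate sets:

```python
def resolve_opcodes(candidates):
    # candidates: dict[opcode, set[operation name]]; mutated while resolving.
    # Note: loops forever if the sets never collapse to singletons --
    # fine for this puzzle's data, where a unique assignment exists.
    resolved = {}
    while candidates:
        for opcode in list(candidates):
            if len(candidates[opcode]) == 1:
                operation = candidates.pop(opcode).pop()
                resolved[opcode] = operation
                for remaining in candidates.values():
                    remaining.discard(operation)
    return resolved

assert resolve_opcodes({
    0: {'addr'},
    1: {'addr', 'mulr'},
    2: {'addr', 'mulr', 'seti'},
}) == {0: 'addr', 1: 'mulr', 2: 'seti'}
```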
[
"test_program = _(test_program_input).strip().split('\\n').map(lambda each: _(each).split(' ').map(int)._)._\nCPU().evaluate_program(test_program, opcode_mapping).registers[0]",
"_____no_output_____"
]
],
[
[
"# Day 17\n\nhttps://adventofcode.com/2018/day/17",
"_____no_output_____"
]
],
[
[
"import fluentpy as _",
"_____no_output_____"
],
[
"@_.lib.dataclasses.dataclass._\nclass ClayLine:\n x_from: int\n x_to: int\n y_from: int\n y_to: int\n \n @classmethod\n def from_string(cls, a_string):\n first_var, first_value, second_var, second_value_start, second_value_end = \\\n _(a_string).fullmatch(r'(\\w)=(\\d+), (\\w)=(\\d+)..(\\d+)').groups()._\n first_value, second_value_start, second_value_end = _((first_value, second_value_start, second_value_end)).map(int)._\n if 'x' == first_var:\n return cls(first_value, first_value, second_value_start, second_value_end)\n else:\n return cls(second_value_start, second_value_end, first_value, first_value)\n \n @property\n def x_range(self):\n return range(self.x_from, self.x_to + 1) # last coordinate is included\n \n @property\n def y_range(self):\n return range(self.y_from, self.y_to + 1) # last coordinate is included\n\nclass Underground:\n \n def __init__(self):\n self.earth = dict()\n self.earth[(500, 0)] = '+' # spring\n self.min_x = self.max_x = 500\n self.max_y = - _.lib.math.inf._\n self.min_y = _.lib.math.inf._\n \n def add_clay_line(self, clay_line):\n for x in clay_line.x_range:\n for y in clay_line.y_range:\n self.set_earth(x,y, '#', should_adapt_depth=True)\n return self\n \n def set_earth(self, x,y, to_what, should_adapt_depth=False):\n self.earth[(x,y)] = to_what\n # whatever is set will expand the looked at area\n if x > self.max_x:\n self.max_x = x\n if x < self.min_x:\n self.min_x = x\n \n if should_adapt_depth:\n # only clay setting will expand y (depth)\n if y > self.max_y:\n self.max_y = y\n if y < self.min_y:\n self.min_y = y\n \n def flood_fill_down_from_spring(self):\n return self.flood_fill_down(500,1)\n \n def flood_fill_down(self, x,y):\n while self.can_flow_down(x,y):\n if y > self.max_y:\n return self\n \n if '|' == self.earth.get((x,y), '.'):\n # we've already been here\n return self\n \n self.set_earth(x,y, '|')\n y += 1\n \n while self.is_contained(x,y):\n self.fill_container_level(x,y)\n y -=1\n \n 
self.mark_flowing_water_around(x,y)\n for overflow_x in self.find_overflows(x, y):\n self.flood_fill_down(overflow_x,y+1)\n \n return self\n \n def fill_container_level(self, x,y):\n leftmost_free, rightmost_free = self.find_furthest_away_free_spots(x,y)\n for mark_x in range(leftmost_free, rightmost_free+1):\n self.set_earth(mark_x,y, '~')\n \n def find_overflows(self, x,y):\n leftmost_flow_border, rightmost_flow_border = self.find_flow_borders(x,y)\n if self.can_flow_down(leftmost_flow_border, y):\n yield leftmost_flow_border\n if self.can_flow_down(rightmost_flow_border, y):\n yield rightmost_flow_border\n \n def is_blocked(self, x,y):\n return self.earth.get((x,y), '.') in '#~'\n\n def can_flow_down(self, x,y):\n return not self.is_blocked(x, y+1)\n \n def can_flow_left(self, x,y):\n return not self.is_blocked(x-1, y)\n\n def can_flow_right(self, x,y):\n return not self.is_blocked(x+1, y)\n \n def x_coordinates_towards(self, x, target_x):\n if target_x < x:\n return range(x, target_x-2, -1)\n else:\n return range(x, target_x+2)\n \n def coordinates_towards(self, x,y, target_x):\n return _(self.x_coordinates_towards(x, target_x)).map(lambda x: (x, y))._\n \n def first_coordinate_that_satisfies(self, coordinates, a_test):\n for x, y in coordinates:\n if a_test(x,y):\n return x\n return None\n \n def is_contained(self, x,y):\n leftmost_flow_border, rightmost_flow_border = self.find_flow_borders(x,y)\n if leftmost_flow_border is None or rightmost_flow_border is None:\n return False\n return not self.can_flow_down(leftmost_flow_border,y) and not self.can_flow_down(rightmost_flow_border,y)\n \n def find_furthest_away_free_spots(self, x,y):\n blocked_right = self.first_coordinate_that_satisfies(\n self.coordinates_towards(x, y, self.max_x),\n lambda x,y: not self.can_flow_right(x,y)\n )\n blocked_left = self.first_coordinate_that_satisfies(\n self.coordinates_towards(x, y, self.min_x),\n lambda x,y: not self.can_flow_left(x,y)\n )\n return (blocked_left, 
blocked_right)\n\n def mark_flowing_water_around(self, x,y):\n leftmost_free_spot, rightmost_free_spot = self.find_flow_borders(x,y)\n for mark_x in range(leftmost_free_spot, rightmost_free_spot+1):\n self.set_earth(mark_x, y, '|')\n \n def find_flow_borders(self, x, y):\n # REFACT there should be a fluent utility for this? no?\n flow_border_right = self.first_coordinate_that_satisfies(\n self.coordinates_towards(x,y, self.max_x),\n lambda x,y: self.can_flow_down(x,y) or not self.can_flow_right(x,y)\n )\n flow_border_left = self.first_coordinate_that_satisfies(\n self.coordinates_towards(x, y, self.min_x),\n lambda x,y: self.can_flow_down(x,y) or not self.can_flow_left(x,y)\n )\n return (flow_border_left, flow_border_right)\n \n def __str__(self):\n return (\n _(range(0, self.max_y+1))\n .map(lambda y: (\n _(range(self.min_x, self.max_x+1))\n .map(lambda x: self.earth.get((x,y), '.'))\n .join()\n ._\n ))\n .join('\\n')\n ._\n )\n \n def visualize(self):\n print('min_x', self.min_x, 'max_x', self.max_x, 'min_y', self.min_y, 'max_y', self.max_y)\n print(str(self))\n return self\n \n def number_of_water_reachable_tiles(self):\n return (\n _(self).earth.keys()\n .filter(lambda coordinates: self.min_y <= coordinates[1] <= self.max_y)\n .map(self.earth.get)\n .filter(_.each.in_('~|')._)\n .len()\n ._\n )\n \n def number_of_tiles_with_standing_water(self):\n return (\n _(self).earth.keys()\n .filter(lambda coordinates: self.min_y <= coordinates[1] <= self.max_y)\n .map(self.earth.get)\n .filter(_.each.in_('~')._)\n .len()\n ._\n )\n \n\n",
"_____no_output_____"
],
[
"test_input = '''\\\nx=495, y=2..7\ny=7, x=495..501\nx=501, y=3..7\nx=498, y=2..4\nx=506, y=1..2\nx=498, y=10..13\nx=504, y=10..13\ny=13, x=498..504'''\n\nunderground = _(test_input).splitlines().map(ClayLine.from_string).reduce(Underground.add_clay_line, Underground()).visualize()._",
"min_x 495 max_x 506 min_y 1 max_y 13\n.....+......\n...........#\n#..#.......#\n#..#..#.....\n#..#..#.....\n#.....#.....\n#.....#.....\n#######.....\n............\n............\n...#.....#..\n...#.....#..\n...#.....#..\n...#######..\n"
],
[
"underground.flood_fill_down_from_spring().visualize()",
"min_x 495 max_x 506 min_y 1 max_y 13\n.....+......\n.....|.....#\n#..#||||...#\n#..#~~#|....\n#..#~~#|....\n#~~~~~#|....\n#~~~~~#|....\n#######|....\n.......|....\n..|||||||||.\n..|#~~~~~#|.\n..|#~~~~~#|.\n..|#~~~~~#|.\n..|#######|.\n"
],
[
"underground.number_of_water_reachable_tiles()",
"_____no_output_____"
],
[
"underground = _(open('input/day17.txt')).read().splitlines().map(ClayLine.from_string).reduce(Underground.add_clay_line, Underground())._",
"_____no_output_____"
],
[
"underground.flood_fill_down_from_spring()",
"_____no_output_____"
],
[
"from IPython.display import display, HTML\ndisplay(HTML(f'<pre style=\"font-size:6px\">{underground}</pre>'))",
"_____no_output_____"
],
[
"underground.number_of_water_reachable_tiles()",
"_____no_output_____"
],
[
"underground.number_of_tiles_with_standing_water()",
"_____no_output_____"
]
],
[
[
"# Day 18\n\nhttps://adventofcode.com/2018/day/18",
"_____no_output_____"
]
],
[
[
"import fluentpy as _\nfrom pyexpect import expect",
"_____no_output_____"
],
[
"class Area:\n OPEN = '.'\n TREES = '|'\n LUMBERYARD = '#'\n \n def __init__(self, area_description):\n self.area = _(area_description).strip().splitlines().to(tuple)\n self.generation = 0\n self.cache = dict()\n \n def evolve_to_generation(self, target_generation):\n remaining_generations = target_generation - self.generation # so we can restart\n \n while remaining_generations > 0:\n \n if self.area in self.cache:\n # looping pattern detected\n last_identical_generation = self.cache[self.area]\n generation_difference = self.generation - last_identical_generation \n number_of_possible_jumps = remaining_generations // generation_difference\n if number_of_possible_jumps > 0:\n remaining_generations -= generation_difference * number_of_possible_jumps\n continue # jump forward\n\n self.cache[self.area] = self.generation\n self.evolve()\n self.generation += 1\n remaining_generations -= 1\n\n return self\n \n def evolve(self):\n new_area = []\n for x, line in enumerate(self.area):\n new_line = ''\n for y, tile in enumerate(line):\n new_line += self.next_tile(tile, self.counts_around(x,y))\n new_area.append(new_line)\n self.area = tuple(new_area)\n return self\n \n def next_tile(self, current_tile, counts):\n if current_tile == self.OPEN and counts[self.TREES] >= 3:\n return self.TREES\n elif current_tile == self.TREES and counts[self.LUMBERYARD] >= 3:\n return self.LUMBERYARD\n elif current_tile == self.LUMBERYARD:\n if counts[self.LUMBERYARD] >= 1 and counts[self.TREES] >= 1:\n return self.LUMBERYARD\n else:\n return self.OPEN\n else:\n return current_tile\n\n def counts_around(self, x,y):\n return _.lib.collections.Counter(self.tiles_around(x,y))._\n \n def tiles_around(self, x,y):\n if x > 0:\n line = self.area[x-1]\n yield from line[max(0, y-1):y+2]\n \n line = self.area[x]\n if y > 0: yield line[y-1]\n if y+1 < len(line): yield line[y+1]\n \n if x+1 < len(self.area):\n line = self.area[x+1]\n yield from line[max(0, y-1):y+2]\n \n def resource_value(self):\n counts 
= _(self).area.join().call(_.lib.collections.Counter)._\n return counts[self.TREES] * counts[self.LUMBERYARD]",
"_____no_output_____"
],
[
"test_input = '''\\\n.#.#...|#.\n.....#|##|\n.|..|...#.\n..|#.....#\n#.#|||#|#|\n...#.||...\n.|....|...\n||...#|.#|\n|.||||..|.\n...#.|..|.\n'''\n\ntest_area = _(test_input).call(Area).evolve_to_generation(10)._\nexpect(test_area.area) == _('''\\\n.||##.....\n||###.....\n||##......\n|##.....##\n|##.....##\n|##....##|\n||##.####|\n||#####|||\n||||#|||||\n||||||||||\n''').strip().splitlines().to(tuple)\nexpect(test_area.resource_value()) == 1147",
"_____no_output_____"
],
[
"area = _(open('input/day18.txt')).read().call(Area)._",
"_____no_output_____"
],
[
"area.evolve_to_generation(10).resource_value()",
"_____no_output_____"
],
[
"area.evolve_to_generation(1000000000).resource_value()",
"_____no_output_____"
]
],
[
[
"# Day 19\n\nhttps://adventofcode.com/2018/day/19",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
4ab0102c31dfaec70d544a9c6a5eec8fe7f5a4c9
| 184,852 |
ipynb
|
Jupyter Notebook
|
Welcome_To_Colaboratory.ipynb
|
pratishtha-bhatia/tutorials
|
c3fc180223dc56314fd92c3f8b5aa619f8a2c124
|
[
"Apache-2.0"
] | null | null | null |
Welcome_To_Colaboratory.ipynb
|
pratishtha-bhatia/tutorials
|
c3fc180223dc56314fd92c3f8b5aa619f8a2c124
|
[
"Apache-2.0"
] | null | null | null |
Welcome_To_Colaboratory.ipynb
|
pratishtha-bhatia/tutorials
|
c3fc180223dc56314fd92c3f8b5aa619f8a2c124
|
[
"Apache-2.0"
] | null | null | null | 411.697105 | 107,770 | 0.908695 |
[
[
[
"<a href=\"https://colab.research.google.com/github/pratishtha-bhatia/tutorials/blob/main/Welcome_To_Colaboratory.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# **Loading the LFW Dataset**\nThe LFW dataset is loaded onto the system",
"_____no_output_____"
]
],
[
[
"!pip install Qkeras\n\nfrom qkeras import *\nimport os\nimport pathlib\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom tensorflow.keras.layers.experimental import preprocessing\nfrom tensorflow.keras import layers\nfrom tensorflow.keras import models\nfrom IPython import display\nfrom keras.layers import *",
"Collecting Qkeras\n Downloading QKeras-0.9.0-py3-none-any.whl (152 kB)\n\u001b[?25l\r\u001b[K |██▏ | 10 kB 16.4 MB/s eta 0:00:01\r\u001b[K |████▎ | 20 kB 20.9 MB/s eta 0:00:01\r\u001b[K |██████▍ | 30 kB 15.4 MB/s eta 0:00:01\r\u001b[K |████████▋ | 40 kB 8.1 MB/s eta 0:00:01\r\u001b[K |██████████▊ | 51 kB 7.9 MB/s eta 0:00:01\r\u001b[K |████████████▉ | 61 kB 9.2 MB/s eta 0:00:01\r\u001b[K |███████████████ | 71 kB 8.7 MB/s eta 0:00:01\r\u001b[K |█████████████████▏ | 81 kB 7.2 MB/s eta 0:00:01\r\u001b[K |███████████████████▎ | 92 kB 7.9 MB/s eta 0:00:01\r\u001b[K |█████████████████████▌ | 102 kB 8.6 MB/s eta 0:00:01\r\u001b[K |███████████████████████▋ | 112 kB 8.6 MB/s eta 0:00:01\r\u001b[K |█████████████████████████▊ | 122 kB 8.6 MB/s eta 0:00:01\r\u001b[K |███████████████████████████▉ | 133 kB 8.6 MB/s eta 0:00:01\r\u001b[K |██████████████████████████████ | 143 kB 8.6 MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 152 kB 8.6 MB/s \n\u001b[?25hRequirement already satisfied: scipy>=1.4.1 in /usr/local/lib/python3.7/dist-packages (from Qkeras) (1.4.1)\nRequirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.7/dist-packages (from Qkeras) (57.4.0)\nRequirement already satisfied: numpy>=1.16.0 in /usr/local/lib/python3.7/dist-packages (from Qkeras) (1.19.5)\nRequirement already satisfied: scikit-learn>=0.23.1 in /usr/local/lib/python3.7/dist-packages (from Qkeras) (1.0.2)\nRequirement already satisfied: networkx>=2.1 in /usr/local/lib/python3.7/dist-packages (from Qkeras) (2.6.3)\nRequirement already satisfied: tqdm>=4.48.0 in /usr/local/lib/python3.7/dist-packages (from Qkeras) (4.62.3)\nCollecting pyparser\n Downloading pyparser-1.0.tar.gz (4.0 kB)\nCollecting tensorflow-model-optimization>=0.2.1\n Downloading tensorflow_model_optimization-0.7.0-py2.py3-none-any.whl (213 kB)\n\u001b[K |████████████████████████████████| 213 kB 47.7 MB/s \n\u001b[?25hCollecting keras-tuner>=1.0.1\n Downloading keras_tuner-1.1.0-py3-none-any.whl (98 
kB)\n\u001b[K |████████████████████████████████| 98 kB 6.8 MB/s \n\u001b[?25hRequirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from keras-tuner>=1.0.1->Qkeras) (2.23.0)\nRequirement already satisfied: packaging in /usr/local/lib/python3.7/dist-packages (from keras-tuner>=1.0.1->Qkeras) (21.3)\nRequirement already satisfied: tensorboard in /usr/local/lib/python3.7/dist-packages (from keras-tuner>=1.0.1->Qkeras) (2.7.0)\nRequirement already satisfied: ipython in /usr/local/lib/python3.7/dist-packages (from keras-tuner>=1.0.1->Qkeras) (5.5.0)\nCollecting kt-legacy\n Downloading kt_legacy-1.0.4-py3-none-any.whl (9.6 kB)\nRequirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.23.1->Qkeras) (1.1.0)\nRequirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from scikit-learn>=0.23.1->Qkeras) (3.0.0)\nRequirement already satisfied: dm-tree~=0.1.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow-model-optimization>=0.2.1->Qkeras) (0.1.6)\nRequirement already satisfied: six~=1.10 in /usr/local/lib/python3.7/dist-packages (from tensorflow-model-optimization>=0.2.1->Qkeras) (1.15.0)\nRequirement already satisfied: decorator in /usr/local/lib/python3.7/dist-packages (from ipython->keras-tuner>=1.0.1->Qkeras) (4.4.2)\nRequirement already satisfied: prompt-toolkit<2.0.0,>=1.0.4 in /usr/local/lib/python3.7/dist-packages (from ipython->keras-tuner>=1.0.1->Qkeras) (1.0.18)\nRequirement already satisfied: pexpect in /usr/local/lib/python3.7/dist-packages (from ipython->keras-tuner>=1.0.1->Qkeras) (4.8.0)\nRequirement already satisfied: pygments in /usr/local/lib/python3.7/dist-packages (from ipython->keras-tuner>=1.0.1->Qkeras) (2.6.1)\nRequirement already satisfied: pickleshare in /usr/local/lib/python3.7/dist-packages (from ipython->keras-tuner>=1.0.1->Qkeras) (0.7.5)\nRequirement already satisfied: simplegeneric>0.8 in 
/usr/local/lib/python3.7/dist-packages (from ipython->keras-tuner>=1.0.1->Qkeras) (0.8.1)\nRequirement already satisfied: traitlets>=4.2 in /usr/local/lib/python3.7/dist-packages (from ipython->keras-tuner>=1.0.1->Qkeras) (5.1.1)\nRequirement already satisfied: wcwidth in /usr/local/lib/python3.7/dist-packages (from prompt-toolkit<2.0.0,>=1.0.4->ipython->keras-tuner>=1.0.1->Qkeras) (0.2.5)\nRequirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging->keras-tuner>=1.0.1->Qkeras) (3.0.7)\nRequirement already satisfied: ptyprocess>=0.5 in /usr/local/lib/python3.7/dist-packages (from pexpect->ipython->keras-tuner>=1.0.1->Qkeras) (0.7.0)\nCollecting parse==1.6.5\n Downloading parse-1.6.5.tar.gz (24 kB)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->keras-tuner>=1.0.1->Qkeras) (2021.10.8)\nRequirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->keras-tuner>=1.0.1->Qkeras) (3.0.4)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->keras-tuner>=1.0.1->Qkeras) (1.24.3)\nRequirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->keras-tuner>=1.0.1->Qkeras) (2.10)\nRequirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard->keras-tuner>=1.0.1->Qkeras) (3.3.6)\nRequirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard->keras-tuner>=1.0.1->Qkeras) (0.4.6)\nRequirement already satisfied: grpcio>=1.24.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard->keras-tuner>=1.0.1->Qkeras) (1.43.0)\nRequirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->keras-tuner>=1.0.1->Qkeras) (1.8.1)\nRequirement already 
satisfied: wheel>=0.26 in /usr/local/lib/python3.7/dist-packages (from tensorboard->keras-tuner>=1.0.1->Qkeras) (0.37.1)\nRequirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.7/dist-packages (from tensorboard->keras-tuner>=1.0.1->Qkeras) (1.0.0)\nRequirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard->keras-tuner>=1.0.1->Qkeras) (1.0.1)\nRequirement already satisfied: google-auth<3,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard->keras-tuner>=1.0.1->Qkeras) (1.35.0)\nRequirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->keras-tuner>=1.0.1->Qkeras) (0.6.1)\nRequirement already satisfied: protobuf>=3.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard->keras-tuner>=1.0.1->Qkeras) (3.17.3)\nRequirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from google-auth<3,>=1.6.3->tensorboard->keras-tuner>=1.0.1->Qkeras) (4.8)\nRequirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<3,>=1.6.3->tensorboard->keras-tuner>=1.0.1->Qkeras) (4.2.4)\nRequirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<3,>=1.6.3->tensorboard->keras-tuner>=1.0.1->Qkeras) (0.2.8)\nRequirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard->keras-tuner>=1.0.1->Qkeras) (1.3.0)\nRequirement already satisfied: importlib-metadata>=4.4 in /usr/local/lib/python3.7/dist-packages (from markdown>=2.6.8->tensorboard->keras-tuner>=1.0.1->Qkeras) (4.10.1)\nRequirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard->keras-tuner>=1.0.1->Qkeras) (3.7.0)\nRequirement already satisfied: typing-extensions>=3.6.4 in 
/usr/local/lib/python3.7/dist-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard->keras-tuner>=1.0.1->Qkeras) (3.10.0.2)\nRequirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<3,>=1.6.3->tensorboard->keras-tuner>=1.0.1->Qkeras) (0.4.8)\nRequirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard->keras-tuner>=1.0.1->Qkeras) (3.1.1)\nBuilding wheels for collected packages: pyparser, parse\n Building wheel for pyparser (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for pyparser: filename=pyparser-1.0-py3-none-any.whl size=4943 sha256=37e49c5a586b16f3fb424a182aeecc59fb051a4101b66fa09d67fb17d31ddb4a\n Stored in directory: /root/.cache/pip/wheels/84/80/fe/49e0cb63aba370d3ef38e733a2266c90a4d837921664320003\n Building wheel for parse (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for parse: filename=parse-1.6.5-py3-none-any.whl size=18176 sha256=b268672a176f5206e67e8fc7a7f28870e14c2a39f912c7e9e7325fe4cc840981\n Stored in directory: /root/.cache/pip/wheels/d3/d2/3e/3df86c4fd6ebac1348fbbda0a551e28cacf7301969935732dd\nSuccessfully built pyparser parse\nInstalling collected packages: parse, kt-legacy, tensorflow-model-optimization, pyparser, keras-tuner, Qkeras\nSuccessfully installed Qkeras-0.9.0 keras-tuner-1.1.0 kt-legacy-1.0.4 parse-1.6.5 pyparser-1.0 tensorflow-model-optimization-0.7.0\n"
],
[
"data_dir = pathlib.Path('data/lfw')\nif not data_dir.exists():\n tf.keras.utils.get_file(\n 'lfw.tgz',\n origin=\"http://vis-www.cs.umass.edu/lfw/lfw.tgz\",\n extract=True,\n cache_dir='.', cache_subdir='data')",
"Downloading data from http://vis-www.cs.umass.edu/lfw/lfw.tgz\n180568064/180566744 [==============================] - 5s 0us/step\n180576256/180566744 [==============================] - 5s 0us/step\n"
],
[
"names = np.array(tf.io.gfile.listdir(str(data_dir)))\nnames = names[names != 'README.md']\nprint('Names:', names)",
"Names: ['Santiago_Botero' 'Ian_Knop' 'Benjamin_Franklin' ... 'Emilio_Botin'\n 'Kristin_Chenoweth' 'Russell_Simmons']\n"
],
[
"filenames = tf.io.gfile.glob(str(data_dir) + '/*/*')\nfilenames = tf.random.shuffle(filenames)\nnum_samples = len(filenames)\nprint('Number of total examples:', num_samples)\nprint('Number of examples per label:',\n len(tf.io.gfile.listdir(str(data_dir/names[1]))))\nprint('Example file tensor:', filenames[0])",
"Number of total examples: 13233\nNumber of examples per label: 1\nExample file tensor: tf.Tensor(b'data/lfw/Frances_Fisher/Frances_Fisher_0002.jpg', shape=(), dtype=string)\n"
],
[
"train_files = filenames[:10587]\nval_files = filenames[10587: 10587 + 1323]\ntest_files = filenames[11910:]\n\nprint('Training set size', len(train_files))\nprint('Validation set size', len(val_files))\nprint('Test set size', len(test_files))\n\n\nimport matplotlib.image as mpimg\nstringname= str(test_files[-1])\nstringpath = (stringname[12:-26])\nplt.imshow(mpimg.imread(stringpath))\nfig = plt.gcf()\nsize = fig.get_size_inches()*fig.dpi # size in pixels\nprint(size)",
"Training set size 10587\nValidation set size 1323\nTest set size 1323\n[432. 288.]\n"
],
[
"import cv2\nfrom google.colab.patches import cv2_imshow\n\n\nimg = cv2.imread(stringpath)\nres = cv2.resize(img, dsize=(128, 128), interpolation=cv2.INTER_LINEAR)\nprint((res))\ncv2_imshow(res)\ngray_image = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)\n \ncv2_imshow(gray_image)",
"[[[210 180 154]\n [209 177 154]\n [205 172 153]\n ...\n [200 173 159]\n [197 171 157]\n [197 171 159]]\n\n [[210 178 154]\n [208 176 154]\n [205 172 154]\n ...\n [199 173 159]\n [197 171 157]\n [196 170 158]]\n\n [[210 177 157]\n [208 175 156]\n [204 172 155]\n ...\n [199 173 159]\n [197 171 157]\n [197 171 159]]\n\n ...\n\n [[ 44 57 74]\n [ 43 59 76]\n [ 38 58 75]\n ...\n [126 162 250]\n [110 151 241]\n [102 145 235]]\n\n [[ 38 58 74]\n [ 38 61 77]\n [ 35 59 75]\n ...\n [119 155 239]\n [110 147 233]\n [100 138 225]]\n\n [[ 37 60 76]\n [ 36 61 77]\n [ 32 60 76]\n ...\n [108 142 225]\n [102 136 220]\n [ 95 131 215]]]\n"
],
[
"def preprocess_image(image_path):\n stringpath = (stringname[12:-26])\n plt.imshow(mpimg.imread(stringpath))\n img = cv2.imread(stringpath)\n res = cv2.resize(img, dsize=(128, 128), interpolation=cv2.INTER_LINEAR)\n cv2_imshow(res)\n gray_image = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)\n \n cv2_imshow(gray_image)\n return gray_image\n\n",
"_____no_output_____"
],
[
"import shutil\nfor i in range(len(train_files)):\n path=str(train_files[i])\n print(path)\n shutil.copy('/Training Dataset/face',path[12:-26])",
"tf.Tensor(b'data/lfw/Frances_Fisher/Frances_Fisher_0002.jpg', shape=(), dtype=string)\n"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ab0110a53a45b0ec6e84334fccafea4eca9aa6e
| 23,608 |
ipynb
|
Jupyter Notebook
|
site/zh-cn/tutorials/keras/regression.ipynb
|
gabrielrufino/docs-l10n
|
9eb7df2cf9e78e1c9df76c57c935db85c79c8c3a
|
[
"Apache-2.0"
] | 1 |
2020-02-07T02:51:36.000Z
|
2020-02-07T02:51:36.000Z
|
site/zh-cn/tutorials/keras/regression.ipynb
|
gabrielrufino/docs-l10n
|
9eb7df2cf9e78e1c9df76c57c935db85c79c8c3a
|
[
"Apache-2.0"
] | null | null | null |
site/zh-cn/tutorials/keras/regression.ipynb
|
gabrielrufino/docs-l10n
|
9eb7df2cf9e78e1c9df76c57c935db85c79c8c3a
|
[
"Apache-2.0"
] | null | null | null | 27.198157 | 258 | 0.476322 |
[
[
[
"##### Copyright 2018 The TensorFlow Authors.",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
],
[
"#@title MIT License\n#\n# Copyright (c) 2017 François Chollet\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.",
"_____no_output_____"
]
],
[
[
"# Basic regression: Predict fuel efficiency",
"_____no_output_____"
],
[
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://tensorflow.google.cn/tutorials/keras/regression\"><img src=\"https://tensorflow.google.cn/images/tf_logo_32px.png\" />在 tensorFlow.google.cn 上查看</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/keras/regression.ipynb\"><img src=\"https://tensorflow.google.cn/images/colab_logo_32px.png\" />在 Google Colab 中运行</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/keras/regression.ipynb\"><img src=\"https://tensorflow.google.cn/images/GitHub-Mark-32px.png\" />在 GitHub 上查看源代码</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/keras/regression.ipynb\"><img src=\"https://tensorflow.google.cn/images/download_logo_32px.png\" />下载 notebook</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"Note: 我们的 TensorFlow 社区翻译了这些文档。因为社区翻译是尽力而为, 所以无法保证它们是最准确的,并且反映了最新的\n[官方英文文档](https://www.tensorflow.org/?hl=en)。如果您有改进此翻译的建议, 请提交 pull request 到\n[tensorflow/docs](https://github.com/tensorflow/docs) GitHub 仓库。要志愿地撰写或者审核译文,请加入\n[[email protected] Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn)。\n\n在 *回归 (regression)* 问题中,我们的目的是预测出如价格或概率这样连续值的输出。相对于*分类(classification)* 问题,*分类(classification)* 的目的是从一系列的分类出选择出一个分类 (如,给出一张包含苹果或橘子的图片,识别出图片中是哪种水果)。\n\n本 notebook 使用经典的 [Auto MPG](https://archive.ics.uci.edu/ml/datasets/auto+mpg) 数据集,构建了一个用来预测70年代末到80年代初汽车燃油效率的模型。为了做到这一点,我们将为该模型提供许多那个时期的汽车描述。这个描述包含:气缸数,排量,马力以及重量。\n\n本示例使用 `tf.keras` API,相关细节请参阅 [本指南](https://tensorflow.google.cn/guide/keras)。",
"_____no_output_____"
]
],
[
[
"# 使用 seaborn 绘制矩阵图 (pairplot)\n!pip install seaborn",
"_____no_output_____"
],
[
"from __future__ import absolute_import, division, print_function, unicode_literals\n\nimport pathlib\n\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport seaborn as sns\n\ntry:\n # %tensorflow_version only exists in Colab.\n %tensorflow_version 2.x\nexcept Exception:\n pass\nimport tensorflow as tf\n\nfrom tensorflow import keras\nfrom tensorflow.keras import layers\n\nprint(tf.__version__)",
"_____no_output_____"
]
],
[
[
"## Auto MPG 数据集\n\n该数据集可以从 [UCI机器学习库](https://archive.ics.uci.edu/ml/) 中获取.\n\n",
"_____no_output_____"
],
[
"### 获取数据\n首先下载数据集。",
"_____no_output_____"
]
],
[
[
"dataset_path = keras.utils.get_file(\"auto-mpg.data\", \"http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data\")\ndataset_path",
"_____no_output_____"
]
],
[
[
"使用 pandas 导入数据集。",
"_____no_output_____"
]
],
[
[
"column_names = ['MPG','Cylinders','Displacement','Horsepower','Weight',\n 'Acceleration', 'Model Year', 'Origin']\nraw_dataset = pd.read_csv(dataset_path, names=column_names,\n na_values = \"?\", comment='\\t',\n sep=\" \", skipinitialspace=True)\n\ndataset = raw_dataset.copy()\ndataset.tail()",
"_____no_output_____"
]
],
[
[
"### 数据清洗\n\n数据集中包括一些未知值。",
"_____no_output_____"
]
],
[
[
"dataset.isna().sum()",
"_____no_output_____"
]
],
[
[
"为了保证这个初始示例的简单性,删除这些行。",
"_____no_output_____"
]
],
[
[
"dataset = dataset.dropna()",
"_____no_output_____"
]
],
[
[
"`\"Origin\"` 列实际上代表分类,而不仅仅是一个数字。所以把它转换为独热码 (one-hot):",
"_____no_output_____"
]
],
[
[
"origin = dataset.pop('Origin')",
"_____no_output_____"
],
[
"dataset['USA'] = (origin == 1)*1.0\ndataset['Europe'] = (origin == 2)*1.0\ndataset['Japan'] = (origin == 3)*1.0\ndataset.tail()",
"_____no_output_____"
]
],
[
[
"### 拆分训练数据集和测试数据集\n\n现在需要将数据集拆分为一个训练数据集和一个测试数据集。\n\n我们最后将使用测试数据集对模型进行评估。",
"_____no_output_____"
]
],
[
[
"train_dataset = dataset.sample(frac=0.8,random_state=0)\ntest_dataset = dataset.drop(train_dataset.index)",
"_____no_output_____"
]
],
[
[
"### 数据检查\n\n快速查看训练集中几对列的联合分布。",
"_____no_output_____"
]
],
[
[
"sns.pairplot(train_dataset[[\"MPG\", \"Cylinders\", \"Displacement\", \"Weight\"]], diag_kind=\"kde\")",
"_____no_output_____"
]
],
[
[
"也可以查看总体的数据统计:",
"_____no_output_____"
]
],
[
[
"train_stats = train_dataset.describe()\ntrain_stats.pop(\"MPG\")\ntrain_stats = train_stats.transpose()\ntrain_stats",
"_____no_output_____"
]
],
[
[
"### 从标签中分离特征\n\n将特征值从目标值或者\"标签\"中分离。 这个标签是你使用训练模型进行预测的值。",
"_____no_output_____"
]
],
[
[
"train_labels = train_dataset.pop('MPG')\ntest_labels = test_dataset.pop('MPG')",
"_____no_output_____"
]
],
[
[
"### 数据规范化\n\n再次审视下上面的 `train_stats` 部分,并注意每个特征的范围有什么不同。",
"_____no_output_____"
],
[
"使用不同的尺度和范围对特征归一化是好的实践。尽管模型*可能* 在没有特征归一化的情况下收敛,它会使得模型训练更加复杂,并会造成生成的模型依赖输入所使用的单位选择。\n\n注意:尽管我们仅仅从训练集中有意生成这些统计数据,但是这些统计信息也会用于归一化的测试数据集。我们需要这样做,将测试数据集放入到与已经训练过的模型相同的分布中。",
"_____no_output_____"
]
],
[
[
"def norm(x):\n return (x - train_stats['mean']) / train_stats['std']\nnormed_train_data = norm(train_dataset)\nnormed_test_data = norm(test_dataset)",
"_____no_output_____"
]
],
[
[
"我们将会使用这个已经归一化的数据来训练模型。\n\n警告: 用于归一化输入的数据统计(均值和标准差)需要反馈给模型从而应用于任何其他数据,以及我们之前所获得独热码。这些数据包含测试数据集以及生产环境中所使用的实时数据。",
"_____no_output_____"
],
[
"## 模型",
"_____no_output_____"
],
[
"### 构建模型\n\n让我们来构建我们自己的模型。这里,我们将会使用一个“顺序”模型,其中包含两个紧密相连的隐藏层,以及返回单个、连续值得输出层。模型的构建步骤包含于一个名叫 'build_model' 的函数中,稍后我们将会创建第二个模型。 两个密集连接的隐藏层。",
"_____no_output_____"
]
],
[
[
"def build_model():\n model = keras.Sequential([\n layers.Dense(64, activation='relu', input_shape=[len(train_dataset.keys())]),\n layers.Dense(64, activation='relu'),\n layers.Dense(1)\n ])\n\n optimizer = tf.keras.optimizers.RMSprop(0.001)\n\n model.compile(loss='mse',\n optimizer=optimizer,\n metrics=['mae', 'mse'])\n return model",
"_____no_output_____"
],
[
"model = build_model()",
"_____no_output_____"
]
],
[
[
"### 检查模型\n\n使用 `.summary` 方法来打印该模型的简单描述。",
"_____no_output_____"
]
],
[
[
"model.summary()",
"_____no_output_____"
]
],
[
[
"\n现在试用下这个模型。从训练数据中批量获取‘10’条例子并对这些例子调用 `model.predict` 。\n",
"_____no_output_____"
]
],
[
[
"example_batch = normed_train_data[:10]\nexample_result = model.predict(example_batch)\nexample_result",
"_____no_output_____"
]
],
[
[
"它似乎在工作,并产生了预期的形状和类型的结果",
"_____no_output_____"
],
[
"### 训练模型\n\n对模型进行1000个周期的训练,并在 `history` 对象中记录训练和验证的准确性。",
"_____no_output_____"
]
],
[
[
"# 通过为每个完成的时期打印一个点来显示训练进度\nclass PrintDot(keras.callbacks.Callback):\n def on_epoch_end(self, epoch, logs):\n if epoch % 100 == 0: print('')\n print('.', end='')\n\nEPOCHS = 1000\n\nhistory = model.fit(\n normed_train_data, train_labels,\n epochs=EPOCHS, validation_split = 0.2, verbose=0,\n callbacks=[PrintDot()])",
"_____no_output_____"
]
],
[
[
"使用 `history` 对象中存储的统计信息可视化模型的训练进度。",
"_____no_output_____"
]
],
[
[
"hist = pd.DataFrame(history.history)\nhist['epoch'] = history.epoch\nhist.tail()",
"_____no_output_____"
],
[
"def plot_history(history):\n hist = pd.DataFrame(history.history)\n hist['epoch'] = history.epoch\n\n plt.figure()\n plt.xlabel('Epoch')\n plt.ylabel('Mean Abs Error [MPG]')\n plt.plot(hist['epoch'], hist['mae'],\n label='Train Error')\n plt.plot(hist['epoch'], hist['val_mae'],\n label = 'Val Error')\n plt.ylim([0,5])\n plt.legend()\n\n plt.figure()\n plt.xlabel('Epoch')\n plt.ylabel('Mean Square Error [$MPG^2$]')\n plt.plot(hist['epoch'], hist['mse'],\n label='Train Error')\n plt.plot(hist['epoch'], hist['val_mse'],\n label = 'Val Error')\n plt.ylim([0,20])\n plt.legend()\n plt.show()\n\n\nplot_history(history)",
"_____no_output_____"
]
],
[
[
"该图表显示在约100个 epochs 之后误差非但没有改进,反而出现恶化。 让我们更新 `model.fit` 调用,当验证值没有提高上是自动停止训练。\n我们将使用一个 *EarlyStopping callback* 来测试每个 epoch 的训练条件。如果经过一定数量的 epochs 后没有改进,则自动停止训练。\n\n你可以从[这里](https://tensorflow.google.cn/versions/master/api_docs/python/tf/keras/callbacks/EarlyStopping)学习到更多的回调。",
"_____no_output_____"
]
],
[
[
"model = build_model()\n\n# The patience parameter is the number of epochs to check for improvement\nearly_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)\n\nhistory = model.fit(normed_train_data, train_labels, epochs=EPOCHS,\n validation_split = 0.2, verbose=0, callbacks=[early_stop, PrintDot()])\n\nplot_history(history)",
"_____no_output_____"
]
],
[
[
"The graph shows that on the validation set the average error is usually around +/- 2 MPG. Is this good? We'll leave that decision up to you.\n\nLet's see how well the model generalizes by using the **test set**, which we did not use when training the model. This tells us how well we can expect the model to predict when we use it in the real world.",
"_____no_output_____"
]
],
[
[
"loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=2)\n\nprint(\"Testing set Mean Abs Error: {:5.2f} MPG\".format(mae))",
"_____no_output_____"
]
],
[
[
"### Make predictions\n \nFinally, predict MPG values using data from the test set:",
"_____no_output_____"
]
],
[
[
"test_predictions = model.predict(normed_test_data).flatten()\n\nplt.scatter(test_labels, test_predictions)\nplt.xlabel('True Values [MPG]')\nplt.ylabel('Predictions [MPG]')\nplt.axis('equal')\nplt.axis('square')\nplt.xlim([0,plt.xlim()[1]])\nplt.ylim([0,plt.ylim()[1]])\n_ = plt.plot([-100, 100], [-100, 100])\n",
"_____no_output_____"
]
],
[
[
"It looks like our model predicts reasonably well. Let's take a look at the error distribution.",
"_____no_output_____"
]
],
[
[
"error = test_predictions - test_labels\nplt.hist(error, bins = 25)\nplt.xlabel(\"Prediction Error [MPG]\")\n_ = plt.ylabel(\"Count\")",
"_____no_output_____"
]
],
[
[
"It's not quite Gaussian, but we might expect that because the number of samples is very small.",
"_____no_output_____"
],
[
"## Conclusion\n\nThis notebook introduced a few techniques for handling a regression problem.\n\n* Mean squared error (MSE) is a common loss function used for regression problems (different loss functions are used for classification problems).\n* Similarly, the evaluation metrics used for regression differ from those used for classification. A common regression metric is mean absolute error (MAE).\n* When numeric input features have values with different ranges, each feature should be scaled independently to the same range.\n* If there is not much training data, one technique is to prefer a small network with few hidden layers to avoid overfitting.\n* Early stopping is a useful technique to prevent overfitting.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
4ab0138d076d33b579f6ffdeebe0e99200d6f148
| 5,180 |
ipynb
|
Jupyter Notebook
|
composable_pipeline/notebooks/applications/03_color_detect_app.ipynb
|
mariodruiz/PYNQ_Composable_Pipeline
|
32db3ad60e9b157077e32669ec49bee5419d4062
|
[
"BSD-3-Clause"
] | 31 |
2021-06-17T02:40:37.000Z
|
2022-03-30T23:55:52.000Z
|
composable_pipeline/notebooks/applications/03_color_detect_app.ipynb
|
Xilinx/PYNQ_Composable_Pipeline
|
32db3ad60e9b157077e32669ec49bee5419d4062
|
[
"BSD-3-Clause"
] | 31 |
2021-08-16T14:58:20.000Z
|
2022-03-24T09:47:06.000Z
|
composable_pipeline/notebooks/applications/03_color_detect_app.ipynb
|
mariodruiz/PYNQ_Composable_Pipeline
|
32db3ad60e9b157077e32669ec49bee5419d4062
|
[
"BSD-3-Clause"
] | 6 |
2021-06-22T08:25:34.000Z
|
2022-03-02T11:49:51.000Z
| 25.771144 | 132 | 0.557143 |
[
[
[
"# Color Detect Application\n----\n\n<div class=\"alert alert-box alert-info\">\nPlease use Jupyter labs http://<board_ip_address>/lab for this notebook.\n</div>\n\nThis notebook shows how to download and play with the Color Detect Application\n\n## Aims\n* Instantiate the application\n* Start the application\n* Play with the runtime parameters\n* Stop the application\n\n## Table of Contents\n* [Download Composable Overlay](#download)\n* [Start Application](#start)\n* [Play with the Application](#play)\n* [Stop Application](#stop)\n* [Conclusion](#conclusion)\n\n----\n\n## Revision History\n\n* v1.0 | 30 March 2021 | First notebook revision.\n\n----",
"_____no_output_____"
],
[
"## Download Composable Overlay <a class=\"anchor\" id=\"download\"></a>\n\nDownload the Composable Overlay using the `ColorDetect` class which wraps all the functionality needed to run this application",
"_____no_output_____"
]
],
[
[
"from composable_pipeline import ColorDetect\n\napp = ColorDetect(\"../overlay/cv_dfx_4_pr.bit\")",
"_____no_output_____"
]
],
[
[
"## Start Application <a class=\"anchor\" id=\"start\"></a>\n\nStart the application by calling the `.start()` method, this will:\n\n1. Initialize the pipeline\n1. Setup initial parameters\n1. Display the implemented pipelined\n1. Configure HDMI in and out\n\nThe output image should be visible on the external screen at this point\n\n<div class=\"alert alert-heading alert-danger\">\n <h4 class=\"alert-heading\">Warning:</h4>\n\nFailure to connect HDMI cables to a valid video source and screen may cause the notebook to hang\n</div>",
"_____no_output_____"
]
],
[
[
"app.start()",
"_____no_output_____"
]
],
[
[
"## Play with the Application <a class=\"anchor\" id=\"play\"></a>\n\nThe `.play` attribute exposes several runtime parameters\n\n### Color Space\n\nThis drop-down menu allows you to select between three color spaces\n\n* [HSV](https://en.wikipedia.org/wiki/HSL_and_HSV)\n* [RGB](https://en.wikipedia.org/wiki/RGB_color_space)\n\n $h_{0-2}$, $s_{0-2}$, $v_{0-2}$ represent the thresholding values for the three channels\n\n### Noise reduction\n\nThis drop-down menu allows you to the disable noise reduction in the application\n",
"_____no_output_____"
]
],
[
[
"app.play",
"_____no_output_____"
]
],
[
[
"## Stop Application <a class=\"anchor\" id=\"stop\"></a>\n\nFinally stop the application to release the resources\n\n<div class=\"alert alert-heading alert-danger\">\n <h4 class=\"alert-heading\">Warning:</h4>\n\nFailure to stop the HDMI Video may hang the board \nwhen trying to download another bitstream onto the FPGA\n</div>",
"_____no_output_____"
]
],
[
[
"app.stop()",
"_____no_output_____"
]
],
[
[
"----\n\n## Conclusion <a class=\"anchor\" id=\"conclusion\"></a>\n\nThis notebook has presented the Color Detect Application that leverages the Composable Overlay. \n\nThe runtime parameters of such application can be modified using drop-down and sliders from `ipywidgets`\n\n[⬅️ Corner Detect Application](02_corner_detect_app.ipynb) | | [Filter2D Application ➡️](04_filter2d_app.ipynb)",
"_____no_output_____"
],
[
"Copyright © 2021 Xilinx, Inc\n\nSPDX-License-Identifier: BSD-3-Clause\n\n----",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
4ab01676b0ddcee27b612eda6435e242916a3417
| 43,100 |
ipynb
|
Jupyter Notebook
|
mmn_11_computability.ipynb
|
roman-smirnov/computability-and-complexity
|
49232b46e441d1943d5d7e233fb25de580738d34
|
[
"MIT"
] | null | null | null |
mmn_11_computability.ipynb
|
roman-smirnov/computability-and-complexity
|
49232b46e441d1943d5d7e233fb25de580738d34
|
[
"MIT"
] | null | null | null |
mmn_11_computability.ipynb
|
roman-smirnov/computability-and-complexity
|
49232b46e441d1943d5d7e233fb25de580738d34
|
[
"MIT"
] | null | null | null | 56.710526 | 243 | 0.544107 |
[
[
[
"# MMN 11 - Computability and Complexity\n\n\n",
"_____no_output_____"
],
[
"\n\n## Question 1\nWe define a Turing Machine (TM) which decides the palindrome language PAL (:= $\\{w \\in \\{0, 1\\}^* | w = w^R\\}$). \n\n### Overview\nThe TM reads the leftmost symbol of the tape, loops forward to the end of the input, reads the rightmost symbol,\ncompares and rejects if the leftmost and rightmost symbols are not equal, then rewinds the tape; if no symbols are left, the TM accepts the word. \n\n### Decidability\nOur TM _decides_ PAL because it rejects all words such that the symbol at the ith position from the start of the word is not equal to the symbol at the ith position from the end of the word (i.e. all non-palindromes),\nand accepts all other words - which are in PAL by way of its definition.\n\n### Definition\n\n1. Finite set of states $Q = \\{ q_0, q_1, q_2, q_3, q_4, q_5, q_{accept}, q_{reject} \\} $\n2. Input alphabet $ \\Sigma = \\{0,1\\} $\n3. Tape alphabet $\\Gamma = \\{0,1,\\_ \\}$\n4. Transition function $\\delta : Q \\times \\Gamma \\rightarrow Q \\times \\Gamma \\times \\{ L, R \\} $ (as defined by the diagram).\n5. Start state $q_0$\n6. Accept state $ q_{accept} $ (the input is a palindrome).\n7. Reject state $q_{reject} $ (the input is not a palindrome).\n\n### Diagram",
"_____no_output_____"
]
],
[
[
"from graphviz import Digraph\n\nf = Digraph('palindrome deciding turing machine')\nf.attr(rankdir='LR', size='8,5')\n\nf.attr('node', shape='Mdiamond')\nf.node('start')\n\nf.attr('node', shape='circle')\nf.edge('start', 'q_0')\n\n# initial state - read left symbol\nf.edge('q_0', 'q_1', label='0 -> _, R')\nf.edge('q_0', 'q_3', label='1 -> _, R')\nf.edge('q_0', 'q_accept', label='_ -> R')\n\n# loop forward - case 0\nf.edge('q_1','q_1', label='0, 1 -> R')\nf.edge('q_1','q_2', label='_ -> L')\n# check right symbol - case 0\nf.edge('q_2','q_5', label='0, _ -> _, L')\nf.edge('q_2','q_reject', label='1 -> R')\n\n# loop forward - case 1\nf.edge('q_3','q_3', label='0, 1 -> R')\nf.edge('q_3','q_4', label='_ -> L')\n\n# check right symbol - case 1 (left symbol was 1, so continue on 1 or blank, reject on 0)\nf.edge('q_4','q_5', label='1, _ -> _, L')\nf.edge('q_4','q_reject', label='0 -> R')\n\n\n# rewind to tape start\nf.edge('q_5','q_5', label='0,1 -> L')\nf.edge('q_5','q_0', label='_ -> R')\n\n\nf",
"_____no_output_____"
]
],
[
[
"\n\n## Question 2.A\n\nWe're given $|w| = n$, and $ w \\# w \\in B $ (:= as in example 3.9, p.173).\n\nThe TM $M_1$ (:= as in figure 3.10, p.174) reads the symbol '#' when performing the following:\n1. ($q_2$/$q_3$) winding forward to the end of the input.\n2. ($q_6$) re-winding to the start of the input.\n3. ($q_1$) all symbols except the middle already checked.\n\nSince $ w\\#w \\in B $, $w \\# w$ is not rejected; $M_1$ checks every pair of symbols before entering $q_{accept}$. \nThus, # is read 2 times per symbol-pair check (2n times), and is read once right before reaching $q_{accept}$ (1 time).\n\nWe conclude # is read a total of 2n+1 times.\n\n",
"_____no_output_____"
],
[
"\n\n## Question 2.B\n\nWe can achieve a reduction to n+1 reads of the symbol # by utilizing the symbol itself to mark the first unchecked symbol of the duplicate word.\nConsequently, the # symbol will still be read once when forward-winding, but the read during re-winding will be eliminated (thus 2n+1-n = n+1). \n\n1. We'll modify the forward-winding step (q_2/q_3) by adding symbol x to the self-loop cases; we'll also write an x upon reaching #. \n2. We'll modify the check step (q_4/q_5) by writing a # after doing the comparison.\n3. We'll modify the first rewind step (q_6) by looping back until the first 0/1 symbol.\n4. We'll modify the second rewind step (q_7) by looping back until the first x symbol.\n\n",
"_____no_output_____"
],
[
"\n\n## Question 3\nGiven an extended TM $M_e$ with a transition of the type $ \\delta(q, a) = (r, b, R_k)$ or $ \\delta(q, a) = (r, b, L_k)$.\nWe can simulate $M_e$ with a canonical TM ($M_c$) by adding symbols to the tape alphabet $\\Gamma$ or adding states to the set of states $Q$.\nBy way of the above, $M_e$ and $M_c$ are equivalent in computational power.\n\nWe'll elect to demonstrate the state-based method:\nFor each state in $M_e$ with an extended transition (k>1) to it, we'll create k transition states in $M_c$.\nWe'll arrange the transition states sequentially, such that each moves the tape-head exactly once (for a total of k); the last state transitions to the target state.\n\n### Diagram",
"_____no_output_____"
]
],
[
[
"from graphviz import Digraph\n\ndot = Digraph('transition extended turing machine simulator')\ndot.attr(rankdir='LR')\n\ndot.attr('node', shape='circle')\n\n# Extended TM transition\ndot.edge('q_extended_0', 'q_extended_1', label='a -> b, R_k')\n\n# Canonical TM simulation\ndot.edge('q_canonical_0', 'q_canonical_1_k_1', label='a -> b, R')\ndot.edge('q_canonical_1_k_1', 'q_canonical_1_k_...', label='* -> R')\ndot.edge('q_canonical_1_k_...', 'q_canonical_1_k_{k-1}', label='* -> R')\ndot.edge('q_canonical_1_k_{k-1}', 'q_canonical_1', label='* -> R')\n\ndot",
"_____no_output_____"
]
],
[
[
"\n\n## Question 4.A\n\n### Overview\nWe accept if |w| is found to be composite, and we reject if |w| is found to be prime.\nThe idea is to alternate between 2 TMs, such that one recognizes the language and the other recognizes its complement. \nThus, we form an NDTM (Non-Deterministic TM) which answers the definition of a decider. \n\n\n### Steps\n1. accept if |w| <= 1 \n2. choose some 1<i<n and mark each ith position until the end of the input\n3. accept if the last input position is marked \n4. reject if all except the last symbol are checked\n5. back to 2\n\nThe NDTM advantage is we don't have to keep track of i.\n\n",
"_____no_output_____"
],
[
"\n## Question 4.B\nIf we replace q_accept with q_reject we'll reject all composites and accept all primes.\nThe NDTM will become a word-length primality test - i.e. it decides the language of prime-length words. \n\nThis is correct because the proposed NDTM is composed of a prime recognizer and a composite recognizer (which are complements).\n\n",
"_____no_output_____"
],
[
"\n\n## Question 5\nWe're required to define a two-tape deterministic TM which does a DFS on the search tree defined by a non-deterministic TM decider. \n\n### Notes and Assumptions\n* We're given that the NDTM to be simulated is a decider, which means every branch terminates. \n* If we make a choice, we won't ever need to go back because it will terminate. So there's no need to keep the original input. \n* We need some way to know which choices were made in the NDTM, because otherwise we'll always go down the same branch.\n* Having the NDTM choices allows us to not necessarily start at the root.\n* The simulating TM will reject any input the NDTM rejects and accept any input the NDTM accepts. Therefore, it decides the same language.\n\n### Overview\n1. first tape contains the input - same as for the NDTM. \n2. second tape contains the NDTM path choices. \n3. an alphabet for the second tape $\\Gamma_2 = \\{ c_1, ... , c_n \\}$ is defined to allow decision-tree input.\n\n### Description\n1. Initially tape 1 contains the original input and tape 2 contains the NDTM choices. \n2. Before each transition on tape 1, check tape 2 to see which choice the NDTM has made.\n 2.1 move the 2nd tape-head one symbol right. \n3. Make the transition on the input tape. \n4. accept if $q_{accept}$ reached, reject if $q_{reject}$ reached. \n5. Back to step 2.\n",
"_____no_output_____"
],
[
"\n\n## Question 6\nWe're required to define an enumerator and draw a diagram for the language $ A = \\{ 0^{2^n} | n \\in \\mathbb{N} \\}$.\nWe're given: $\\Sigma = \\{ 0 \\}$, $\\Gamma = \\{0, x, \\_ \\}$.\n\n### Notes and Assumptions \n* Enumerator formally defined on p.16 of the course manual.\n* A is infinite, so there's no halting state - we leave it unused in our definition. \n* A is the language of even-length words consisting of the symbol '0'.\n* The enumerator must print all possible words in A. \n* The enumerator must not print anything not in A.\n* Print order doesn't matter. \n* Printing duplicates is allowed. \n* Printing clears the output tape.\n* We can write $\\epsilon$ (nothing) to the output tape on transitions that do nothing. \n\n### Overview\n0. initially both tapes are empty.\n1. mark the work tape start by skipping a space ($\\_$).\n2. write xx to the work tape starting from the current tape-head position (i.e. concat at the input end).\n3. move the work tape-head to the start of the input. \n4. scan both tapes in tandem: for each x in the work tape, write a 0 to the output tape. \n5. print the output\n6. back to step 2. \n\n### Definition\n1. $Q = \\{ q_0, q_1, q_2, q_3, q_4, q_{print}, q_{halt} \\} $\n2. $\\Gamma = \\{0, x, \\_ \\}$\n3. $\\Sigma = \\{ 0 \\}$\n4. $ \\delta : Q \\times \\Gamma \\rightarrow Q \\times \\Gamma \\times \\{ L, R \\} \\times ( \\Sigma \\cup \\{ \\epsilon \\} ) $\n5. Initial state $q_0$.\n6. Print state $q_{print}$.\n7. Halting state $q_{halt}$ (unused).\n\n### Diagram",
"_____no_output_____"
]
],
[
[
"from graphviz import Digraph\n\nf = Digraph('even length non-empty 0 filled word enumerator')\nf.attr(rankdir='LR', size='8,5')\n\n# draw the start state on the graph\nf.attr('node', shape='Mdiamond')\nf.node('start')\nf.attr('node', shape='circle')\nf.edge('start', 'q_0')\n\n# 1. mark work tape start \nf.edge('q_0', 'q_1', label='* -> R')\n\n# 2. write xx to the work tape\nf.edge('q_1', 'q_2', label='* -> x, R')\nf.edge('q_2', 'q_3', label='* -> x, R')\n\n# 3. rewind work-tape-head to input start\nf.edge('q_3', 'q_3', label='0, x -> L')\nf.edge('q_3', 'q_4', label='_ -> R')\n\n# 4. write 0s to the output tape\nf.edge('q_4', 'q_4', label='0, x -> R, 0')\nf.edge('q_4', 'q_print', label='_ -> R')\n\n# 5. print and continue on to next cycle\nf.edge('q_print', 'q_1', label='* -> L')\n\nf",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
]
] |
4ab01ada7232a37e33c5bbe2aad5dfb349619461
| 12,741 |
ipynb
|
Jupyter Notebook
|
notebooks/Flyway-Setup.ipynb
|
jymillersf/GitForDBAs
|
ccba5a158dca668ad50bb5c86128210a22cfc645
|
[
"MIT"
] | null | null | null |
notebooks/Flyway-Setup.ipynb
|
jymillersf/GitForDBAs
|
ccba5a158dca668ad50bb5c86128210a22cfc645
|
[
"MIT"
] | 1 |
2022-01-04T04:22:15.000Z
|
2022-01-04T04:22:15.000Z
|
notebooks/Flyway-Setup.ipynb
|
jymillersf/GitForDBAs
|
ccba5a158dca668ad50bb5c86128210a22cfc645
|
[
"MIT"
] | null | null | null | 42.755034 | 363 | 0.419826 |
[
[
[
"# To use this notebook\n\n- Open in Azure Data Studio\n- Ensure the Kernel is set to \"PowerShell\"\n\n# You can run Flyway in a variety of ways\n\nCommunity edition is free\n\nYou may download and install locally - [https://flywaydb.org/download/](https://flywaydb.org/download/)\n\nYou may use the flyway docker container - [https://github.com/flyway/flyway-docker](https://github.com/flyway/flyway-docker)",
"_____no_output_____"
],
[
"# Running the Flyway Docker container\n\nInstall Docker and make sure it's running - [https://docs.docker.com/get-docker/](https://docs.docker.com/get-docker/)\n\nInstructions to run Flyway via Docker are here - [https://github.com/flyway/flyway-docker](https://github.com/flyway/flyway-docker)\n\nSome examples of this are below\n\n# Run Flyway and return info on available commands\n\nIf the image isn't available for you locally yet (first run), this command should automatically pull it.\n\nThe --rm causes Docker to automatically remove the container when it exits.",
"_____no_output_____"
]
],
[
[
"print('hello world')\nprint('hi')",
"hello world\nhi\n"
],
[
"dir\ndocker run --rm flyway/flyway",
"_____no_output_____"
]
],
[
[
"# A simple test of Flyway's info command using the H2 in memory database",
"_____no_output_____"
]
],
[
[
"!docker run --rm flyway/flyway -url=jdbc:h2:mem:test -user=sa info\n",
"Flyway Teams Edition 8.3.0 by Redgate\nDatabase: jdbc:h2:mem:test (H2 2.0)\n----------------------------------------\nFlyway Teams features are enabled by default for the next 27 days. Learn more at https://rd.gt/3A4IWym\n----------------------------------------\nSchema version: << Empty Schema >>\n\n+----------+---------+-------------+------+--------------+-------+----------+\n| Category | Version | Description | Type | Installed On | State | Undoable |\n+----------+---------+-------------+------+--------------+-------+----------+\n| No migrations found |\n+----------+---------+-------------+------+--------------+-------+----------+\n\n"
]
],
[
[
"# Let's talk to a SQL Server\n\nI'm using a config file here, by passing in a volume with -v. We are naming the volume /flyway/conf.\n\n- This needs to be an absolute path to the folder where you have flyway.conf\n- You will need to edit the connection string, user, and password in flyway.conf\n- You will need to create a database named GitForDBAs (or change the config file to reference a database of another name which already exists)\n\nI'm using a second volume mapping to a folder that holds my flyway migrations. We are naming the volume /flyway/sql.\n\n- This needs to be an absolute path to the folder where you have migrations stored\n- The filenames for the migrations matter -- Flyway uses the file names to understand what type of script it is and the order in which it should be run\n\nNote: I have spread this across multiple lines using the \\` character for readability purposes \n\n# Call Flyway info to inspect",
"_____no_output_____"
]
],
[
[
"docker run --rm `\n -v C:\\Git\\GitForDBAs\\flywayconf:/flyway/conf `\n -v C:\\Git\\GitForDBAs\\migrations:/flyway/sql `\n flyway/flyway info",
"_____no_output_____"
]
],
[
[
"# Call Flyway migrate to execute",
"_____no_output_____"
]
],
[
[
"docker run --rm `\r\n -v C:\\Git\\GitForDBAs\\flywayconf:/flyway/conf `\r\n -v C:\\Git\\GitForDBAs\\migrations:/flyway/sql `\r\n flyway/flyway migrate",
"Flyway Community Edition 7.0.3 by Redgate\nDatabase: jdbc:jtds:sqlserver://host.docker.internal:1433/GitForDBAs (Microsoft SQL Server 14.0)\nSuccessfully validated 5 migrations (execution time 00:00.035s)\nCreating Schema History table [GitForDBAs].[dbo].[flyway_schema_history] ...\nCurrent version of schema [dbo]: << Empty Schema >>\nMigrating schema [dbo] to version \"1 - Initial\"\nMigrating schema [dbo] to version \"2 - YOLO\"\nMigrating schema [dbo] to version \"2.1 - ILikeDags\"\nMigrating schema [dbo] to version \"2.2 - InsertRowsInILikeDags\"\nMigrating schema [dbo] to version \"2.3 - livedemo\"\nSuccessfully applied 5 migrations to schema [dbo] (execution time 00:00.235s)\n"
]
],
[
[
"# Examine the table - open a new query\n\nUSE GitForDBAs;\n\nGO\n\n \n\nEXEC sp\\_help 'dbo.HelloWorld';\n\nGO\n\n \n\nSELECT \\* FROM dbo.HelloWorld;\n\nGO",
"_____no_output_____"
],
[
"# Call Flyway clean to drop everything 🔥🔥🔥",
"_____no_output_____"
]
],
[
[
"docker run --rm `\r\n -v C:\\Git\\GitForDBAs\\flywayconf:/flyway/conf `\r\n -v C:\\Git\\GitForDBAs\\migrations:/flyway/sql `\r\n flyway/flyway clean",
"Flyway Community Edition 7.0.3 by Redgate\nDatabase: jdbc:jtds:sqlserver://host.docker.internal:1433/GitForDBAs (Microsoft SQL Server 14.0)\nSuccessfully dropped pre-schema database level objects (execution time 00:00.002s)\nSuccessfully cleaned schema [dbo] (execution time 00:00.177s)\nSuccessfully dropped post-schema database level objects (execution time 00:00.007s)\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
4ab01b265c58c640e9c819c913628289c20535b3
| 13,339 |
ipynb
|
Jupyter Notebook
|
nb/030-variables-ops-sequences.ipynb
|
timdavidlee/tf-course
|
70c525d0bd6e1604663c8b8571c3daa2b21b1db9
|
[
"MIT"
] | null | null | null |
nb/030-variables-ops-sequences.ipynb
|
timdavidlee/tf-course
|
70c525d0bd6e1604663c8b8571c3daa2b21b1db9
|
[
"MIT"
] | null | null | null |
nb/030-variables-ops-sequences.ipynb
|
timdavidlee/tf-course
|
70c525d0bd6e1604663c8b8571c3daa2b21b1db9
|
[
"MIT"
] | null | null | null | 23.198261 | 291 | 0.480096 |
[
[
[
"## Constants, Sequences, Variables, Ops",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf",
"/Users/timlee/anaconda2/envs/py3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n from ._conv import register_converters as _register_converters\n"
]
],
[
[
"## Constants\n[https://www.tensorflow.org/api_docs/python/tf/constant](https://www.tensorflow.org/api_docs/python/tf/constant)\n\nConstants are values that will never change throughout your calculations; they stay fixed.",
"_____no_output_____"
]
],
[
[
"# note that we are reshaping the matrix\na = tf.constant(value=[[1,2,3,4,5],[10,20,30,40,50]],\n dtype=tf.float32,\n shape=[5,2],\n name=\"tf_const\",\n verify_shape=False\n )\na_reshape = tf.reshape(a, shape=[2,5])\n\nwith tf.Session() as sess:\n result = sess.run([a, a_reshape])\n print(result[0])\n print(result[1])",
"[[ 1. 2.]\n [ 3. 4.]\n [ 5. 10.]\n [20. 30.]\n [40. 50.]]\n[[ 1. 2. 3. 4. 5.]\n [10. 20. 30. 40. 50.]]\n"
]
],
[
[
"#### Making Empty Tensors",
"_____no_output_____"
]
],
[
[
"b = tf.zeros_like(a)\n\nwith tf.Session() as sess:\n print(sess.run(b))",
"[[0. 0.]\n [0. 0.]\n [0. 0.]\n [0. 0.]\n [0. 0.]]\n"
]
],
[
[
"#### Making Tensors filled with ones",
"_____no_output_____"
]
],
[
[
"c = tf.ones_like(a)\n\nwith tf.Session() as sess:\n print(sess.run(c))",
"[[1. 1.]\n [1. 1.]\n [1. 1.]\n [1. 1.]\n [1. 1.]]\n"
]
],
[
[
"#### Making Tensors filled with arbitrary value",
"_____no_output_____"
]
],
[
[
"d = tf.fill(dims=[3,3,3], value=0.5)\n\nwith tf.Session() as sess:\n print(sess.run(d))",
"[[[0.5 0.5 0.5]\n [0.5 0.5 0.5]\n [0.5 0.5 0.5]]\n\n [[0.5 0.5 0.5]\n [0.5 0.5 0.5]\n [0.5 0.5 0.5]]\n\n [[0.5 0.5 0.5]\n [0.5 0.5 0.5]\n [0.5 0.5 0.5]]]\n"
]
],
[
[
"#### How to make a range of numbers",
"_____no_output_____"
]
],
[
[
"e = tf.lin_space(start=0., stop=25., num=3, name='by5')\nf = tf.range(start=0., limit=25., delta=5., dtype=tf.float32, name='range')\n\nwith tf.Session() as sess:\n print('linspace', sess.run(e))\n print('range', sess.run(f))",
"linspace [ 0. 12.5 25. ]\nrange [ 0. 5. 10. 15. 20.]\n"
]
],
[
[
"### Random Generators",
"_____no_output_____"
]
],
[
[
"tf.set_random_seed(1)\ng = tf.random_normal(shape=(2,2))\n\nwith tf.Session() as sess:\n\n print('random normal', sess.run(g))",
"random normal [[-0.36537609 1.4068291 ]\n [-1.0580941 -0.66352683]]\n"
]
],
[
[
"## Key Operations\n\n| Category | Examples| \n|---------- | --------|\n|Element-wise mathematical Operations | Add, Sub, Mul, Div, Exp, Log, Greater, Less, Equal|\n|Array operations | Concat, Slice, Split, Constant, Rank, Shape, Shuffle|\n|Matrix Operations | MatMul, MatrixInverse, MatrixDeterminant,...|\n|Stateful Operations | Variable, Assign, AssignAdd|\n|NN Building Blocks | SoftMax, Sigmoid, ReLu, Convolution2D, MaxPool..|\n|Checkpointing Operations | Save, Restore|\n|Queue and synchronization operations | Enqueue, Dequeue, MutexAcquire, MutexRelease, ...|\n|Control flow operations | Merge, Switch, Enter, Leave, NextIteration|",
"_____no_output_____"
],
[
"# Variables\n\n> A TensorFlow variable is the best way to represent shared, persistent state manipulated by your program.\n> Variables must be initialized to serve as a best \"guess\"; once initialized, these variables can be frozen or changed throughout the graph calculation.\n\n> `tf.constant` is an op\n\n> `tf.Variable` is a class with many ops\n\nWhy use them? Constants are great, except that they are actually stored WITHIN the graph. The larger the graph, the more constants, and the larger the size of the graph. Use constants for primitive (and simple) types. Use variables and readers for data that will require more memory.\n\n### How to make variables Method 1: `tf.Variable`",
"_____no_output_____"
]
],
[
[
"scalar = tf.Variable(2, name='scalar')\nmatrix = tf.Variable([[0,1],[2,3]], name='mtx')\nempty = tf.Variable(tf.zeros([7, 3]), name='empty_mtx')\n\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n print(sess.run(scalar))\n print(sess.run(matrix))\n print(sess.run(empty))",
"2\n[[0 1]\n [2 3]]\n[[0. 0. 0.]\n [0. 0. 0.]\n [0. 0. 0.]\n [0. 0. 0.]\n [0. 0. 0.]\n [0. 0. 0.]\n [0. 0. 0.]]\n"
]
],
[
[
"### How to make variables Method 2: `tf.get_variable`",
"_____no_output_____"
]
],
[
[
"scalar = tf.get_variable(\"scalar1\", initializer=tf.constant(2))\nmatrix = tf.get_variable(\"mtx1\", initializer=tf.constant([[0,1],[2,3]]))\nempty = tf.get_variable('empty_mtx1', shape=(7,3), initializer=tf.zeros_initializer())\n\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n print(sess.run(scalar))\n print(sess.run(matrix))\n print(sess.run(empty))",
"2\n[[0 1]\n [2 3]]\n[[0. 0. 0.]\n [0. 0. 0.]\n [0. 0. 0.]\n [0. 0. 0.]\n [0. 0. 0.]\n [0. 0. 0.]\n [0. 0. 0.]]\n"
]
],
[
[
"## Variables: Initialization\n\nNote that in the last two code blocks, the variables were **initialized**\n\nThe `global_variables_initializer()` initializes all variables in the graph\n```\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n ...\n```\n\nFor only a subset of variables:\n```\nwith tf.Session() as sess:\n sess.run(tf.variables_initializer([var1, var2, var3]))\n ...\n```\n\nOr only 1 variable\n```\nwith tf.Session() as sess:\n sess.run(W.initializer)\n ...\n```\n\n\n",
"_____no_output_____"
],
[
"## Variables: `eval()`",
"_____no_output_____"
]
],
[
[
"weights = tf.Variable(tf.truncated_normal([5, 3]))\nwith tf.Session() as sess:\n sess.run(weights.initializer)\n print(weights.eval())",
"[[-0.70387346 -0.01866959 0.06857691]\n [ 0.5431961 -0.693588 1.159248 ]\n [-0.24034725 0.65011954 0.50049955]\n [-0.87781674 -0.95692325 1.9339348 ]\n [ 0.09642299 0.8811367 0.9550244 ]]\n"
]
],
[
[
"## Variables: `assign()`",
"_____no_output_____"
]
],
[
[
"ct1 = tf.Variable(10)\nupdated_ct1 = ct1.assign(1000)\n\nwith tf.Session() as sess:\n sess.run(ct1.initializer)\n sess.run(updated_ct1)\n print(ct1.eval()) ",
"1000\n"
]
],
[
[
"### Trick Question: What's `my_var`?",
"_____no_output_____"
]
],
[
[
"my_var = tf.Variable(5)\ndouble_my_var = my_var.assign(2*my_var)\n\nwith tf.Session() as sess:\n sess.run(my_var.initializer)\n sess.run(double_my_var)\n print(my_var.eval())\n \n # run again\n sess.run(double_my_var)\n print(my_var.eval())\n \n # run again\n sess.run(double_my_var)\n print(my_var.eval()) ",
"10\n20\n40\n"
]
],
[
[
"### Sessions & Variables",
"_____no_output_____"
]
],
[
[
"Z = tf.Variable(20)\n\nsess1 = tf.Session()\nsess2 = tf.Session()\n\nsess1.run(Z.initializer)\nsess2.run(Z.initializer)\n\nprint(sess1.run(Z.assign_add(5)))\nprint(sess2.run(Z.assign_sub(3)))\n\nprint(sess1.run(Z.assign_add(5)))\nprint(sess2.run(Z.assign_sub(3)))\n\nsess1.close()\nsess2.close()\n",
"25\n17\n30\n14\n"
]
],
[
[
"### Control Evaluation / Dependencies",
"_____no_output_____"
]
],
[
[
"a = tf.Variable(2)\nb = tf.Variable(20)\nc = tf.Variable(200)\nadd_c = c.assign_add(a)\n\n# ops created inside this block will only run after add_c has run\nwith tf.control_dependencies([add_c]):\n    add_b = b.assign_add(a)\n\nwith tf.Session() as sess:\n    sess.run(tf.global_variables_initializer())\n    sess.run(add_b)\n    print(c.eval())  # add_c ran first due to the dependency\n    print(b.eval())",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4ab01eab6715439c409a2249feef4584020a71db
| 41,093 |
ipynb
|
Jupyter Notebook
|
transfer.ipynb
|
caitsithx/dogs-vs-cats-redux
|
3ff588cac9048a3c9f5a76de842a9cd2a4140218
|
[
"Apache-2.0"
] | null | null | null |
transfer.ipynb
|
caitsithx/dogs-vs-cats-redux
|
3ff588cac9048a3c9f5a76de842a9cd2a4140218
|
[
"Apache-2.0"
] | null | null | null |
transfer.ipynb
|
caitsithx/dogs-vs-cats-redux
|
3ff588cac9048a3c9f5a76de842a9cd2a4140218
|
[
"Apache-2.0"
] | null | null | null | 36.237213 | 1,171 | 0.552138 |
[
[
[
"from keras.applications.vgg16 import VGG16\nfrom keras.preprocessing import image\nfrom keras.preprocessing.image import ImageDataGenerator\nfrom keras.applications.vgg16 import preprocess_input\nimport keras as k\nfrom keras.models import Sequential, Model\nfrom keras.layers import Dense, Dropout, Flatten\nfrom keras.layers import Conv2D, MaxPooling2D\nfrom keras.callbacks import EarlyStopping, ModelCheckpoint\nfrom keras.optimizers import SGD, RMSprop, Adam\nfrom keras.layers.normalization import BatchNormalization\n\nimport numpy as np\nimport pandas as pd\nimport cv2\nimport shutil\nfrom matplotlib import pyplot as plt\nfrom tqdm import tqdm",
"_____no_output_____"
],
[
"DATA_DIR = '/home/chicm/ml/kgdata/species'\nRESULT_DIR = DATA_DIR + '/results'\n\nTRAIN_FEAT = RESULT_DIR + '/train_feats.dat'\nVAL_FEAT = RESULT_DIR + '/val_feats.dat'\n\nTRAIN_DIR = DATA_DIR + '/train-224'\nVAL_DIR = DATA_DIR + '/val-224'\n\nbatch_size = 64",
"_____no_output_____"
],
[
"df_train = pd.read_csv(DATA_DIR+'/train_labels.csv')",
"_____no_output_____"
]
],
[
[
"## create validation data",
"_____no_output_____"
]
],
[
[
"f_dict = {row[0]: row[1] for i, row in enumerate(df_train.values)}\nfnames = [row[0] for i, row in enumerate(df_train.values)]\nprint(len(f_dict))\nprint(fnames[:10])\n",
"2295\n[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]\n"
],
[
"print(len([row[1] for i, row in enumerate(df_train.values) if row[1] == 0]))\nprint(len([row[1] for i, row in enumerate(df_train.values) if row[1] == 1]))",
"847\n1448\n"
],
[
"for f in fnames:\n cls = f_dict[f]\n src = TRAIN_DIR + '/' + str(f) + '.jpg'\n dst = TRAIN_DIR + '/' + str(cls) + '/' + str(f) + '.jpg'\n shutil.move(src, dst)",
"_____no_output_____"
],
[
"fnames = np.random.permutation(fnames)\nfor i in range(350):\n cls = f_dict[fnames[i]]\n fn = TRAIN_DIR +'/' + str(cls) + '/' + str(fnames[i])+'.jpg'\n tgt_fn = VAL_DIR +'/' + str(cls) + '/' + str(fnames[i])+'.jpg'\n shutil.move(fn, tgt_fn)",
"_____no_output_____"
]
],
[
[
"## build pretrained model",
"_____no_output_____"
]
],
[
[
"vgg_model = VGG16(weights='imagenet', include_top=False, input_shape=(224,224,3))",
"_____no_output_____"
],
[
"# build a classifier model to put on top of the convolutional model\ntop_model = Sequential()\ntop_model.add(Flatten(input_shape=(7,7,512)))\ntop_model.add(Dense(256, activation='relu'))\ntop_model.add(Dropout(0.5))\ntop_model.add(Dense(1, activation='sigmoid'))\n\n#model.add(top_model)\nmodel = Model(inputs=vgg_model.input, outputs=top_model(vgg_model.output))",
"_____no_output_____"
],
[
"for layer in model.layers[:25]:\n layer.trainable = False",
"_____no_output_____"
],
[
"model.compile(Adam(), loss='binary_crossentropy', metrics=['accuracy'])",
"_____no_output_____"
],
[
"train_datagen = ImageDataGenerator(\n rescale=1. / 255,\n shear_range=0.2,\n zoom_range=0.2,\n horizontal_flip=True)",
"_____no_output_____"
],
[
"test_datagen = ImageDataGenerator(rescale=1. / 255)",
"_____no_output_____"
],
[
"train_generator = train_datagen.flow_from_directory(\n TRAIN_DIR,\n target_size=(224, 224),\n batch_size=batch_size,\n class_mode='binary')",
"Found 1945 images belonging to 2 classes.\n"
],
[
"print(train_generator.n)\nprint(train_generator.samples)",
"1945\n1945\n"
],
[
"validation_generator = test_datagen.flow_from_directory(\n VAL_DIR,\n target_size=(224, 224),\n batch_size=batch_size,\n class_mode='binary')",
"Found 350 images belonging to 2 classes.\n"
],
[
"epochs = 50\nmodel.fit_generator(\n train_generator,\n steps_per_epoch=train_generator.n//batch_size,\n epochs=epochs,\n validation_data=validation_generator,\n validation_steps=validation_generator.n//batch_size,\n verbose=2)",
"Epoch 1/50\n12s - loss: 0.8021 - acc: 0.5160 - val_loss: 0.6967 - val_acc: 0.5385\nEpoch 2/50\n11s - loss: 0.8208 - acc: 0.4923 - val_loss: 0.6955 - val_acc: 0.5219\nEpoch 3/50\n11s - loss: 0.8057 - acc: 0.5085 - val_loss: 0.7009 - val_acc: 0.4755\nEpoch 4/50\n11s - loss: 0.8037 - acc: 0.4996 - val_loss: 0.6970 - val_acc: 0.5315\nEpoch 5/50\n11s - loss: 0.8136 - acc: 0.4905 - val_loss: 0.6909 - val_acc: 0.5315\nEpoch 6/50\n11s - loss: 0.8203 - acc: 0.4801 - val_loss: 0.6995 - val_acc: 0.5105\nEpoch 7/50\n11s - loss: 0.8204 - acc: 0.4881 - val_loss: 0.6966 - val_acc: 0.5156\nEpoch 8/50\n"
],
[
"def create_model():\n conv_layers = [\n Conv2D(24,(3,3), activation='relu',input_shape=(224,224,3)),\n BatchNormalization(axis=-1),\n MaxPooling2D((2, 2), strides=(2, 2)),\n\n Conv2D(24,(3,3), activation='relu'),\n BatchNormalization(axis=-1),\n MaxPooling2D((2, 2), strides=(2, 2)),\n\n Conv2D(48,(3,3), activation='relu'),\n BatchNormalization(axis=-1),\n MaxPooling2D((2, 2), strides=(2, 2)),\n\n Conv2D(48,(3,3), activation='relu'),\n BatchNormalization(axis=-1),\n MaxPooling2D((2, 2), strides=(2, 2)),\n \n Flatten(),\n \n Dropout(0.25),\n Dense(128, activation='relu'),\n BatchNormalization(),\n Dropout(0.5),\n Dense(128, activation='relu'),\n BatchNormalization(),\n Dropout(0.5),\n # use sigmoid, not softmax, for a single binary output unit\n Dense(1, activation='sigmoid')\n ]\n #print conv_layers\n model = Sequential(conv_layers)\n model.compile(Adam(), loss = 'binary_crossentropy', metrics=['accuracy'])\n \n return model",
"_____no_output_____"
],
[
"epochs = 50\nmodel2 = create_model()\nmodel2.fit_generator(\n train_generator,\n steps_per_epoch=train_generator.n//batch_size,\n epochs=epochs,\n validation_data=validation_generator,\n validation_steps=validation_generator.n//batch_size,\n verbose=2)",
"Epoch 1/50\n13s - loss: 5.9748 - acc: 0.6252 - val_loss: 6.0759 - val_acc: 0.6189\nEpoch 2/50\n11s - loss: 5.7836 - acc: 0.6372 - val_loss: 6.4661 - val_acc: 0.5944\nEpoch 3/50\n11s - loss: 5.8003 - acc: 0.6362 - val_loss: 6.2989 - val_acc: 0.6049\nEpoch 4/50\n11s - loss: 5.7959 - acc: 0.6364 - val_loss: 5.8530 - val_acc: 0.6329\nEpoch 5/50\n12s - loss: 5.8289 - acc: 0.6344 - val_loss: 6.4104 - val_acc: 0.5979\nEpoch 6/50\n11s - loss: 5.8293 - acc: 0.6344 - val_loss: 5.6857 - val_acc: 0.6434\nEpoch 7/50\n12s - loss: 5.8378 - acc: 0.6338 - val_loss: 6.0202 - val_acc: 0.6224\nEpoch 8/50\n"
],
[
"x_train = []\ny_train = []\nfor i, row in tqdm(enumerate(df_train.values)):\n fn = DATA_DIR+'/train/' + str(row[0])+'.jpg'\n x_train.append(cv2.resize(cv2.imread(fn), (224,224)))\n y_train.append([row[1]])\nx_train = np.array(x_train, np.float32)\ny_train = np.array(y_train, np.uint8)\nprint(x_train.shape)\nprint(y_train.shape)",
"2295it [00:46, 49.14it/s]\n"
],
[
"split = int(x_train.shape[0] * 0.85)\n\nx_val = x_train[split:]\ny_val = y_train[split:]\nx_train = x_train[:split]\ny_train = y_train[:split]\nprint(x_train.shape)\nprint(x_val.shape)\nprint(y_train.shape)\nprint(y_val.shape)",
"(1950, 224, 224, 3)\n(345, 224, 224, 3)\n(1950, 1)\n(345, 1)\n"
],
[
"train_steps = x_train.shape[0] // batch_size\nx_train = x_train[:train_steps*batch_size]\ny_train = y_train[:train_steps*batch_size]\nval_steps = x_val.shape[0] // batch_size\nx_val = x_val[:val_steps*batch_size]\ny_val = y_val[:val_steps*batch_size]\nprint(train_steps)\nprint(val_steps)",
"30\n5\n"
],
[
"model = VGG16(weights='imagenet', include_top=False)",
"_____no_output_____"
],
[
"datagen = ImageDataGenerator(\n rotation_range=45,\n width_shift_range=0.1,\n height_shift_range=0.1,\n rescale=1./255,\n shear_range=0.2,\n zoom_range=0.2,\n horizontal_flip=True,\n vertical_flip = True)",
"_____no_output_____"
],
[
"print(x_train.shape)\nprint(x_val.shape)",
"(1920, 224, 224, 3)\n(320, 224, 224, 3)\n"
],
[
"train_feat = model.predict_generator(datagen.flow(x_train, batch_size=batch_size, shuffle=False), \n steps = train_steps*8) ",
"_____no_output_____"
],
[
"#val_feat = model.predict_generator(datagen.flow(x_val, batch_size=batch_size, shuffle=False), \n #steps = val_steps) \ntrain_feat = model.predict(x_train)\nval_feat = model.predict(x_val)",
"_____no_output_____"
],
[
"print(train_feat.shape)\nprint(val_feat.shape)",
"(1950, 7, 7, 512)\n(345, 7, 7, 512)\n"
],
[
"import bcolz\n\ndef save_array(fname, arr):\n c=bcolz.carray(arr, rootdir=fname, mode='w')\n c.flush()\ndef load_array(fname):\n return bcolz.open(fname)[:]",
"_____no_output_____"
],
[
"save_array(TRAIN_FEAT, train_feat)\nsave_array(VAL_FEAT, val_feat)",
"_____no_output_____"
],
[
"print(train_feat.shape)\nprint(val_feat.shape)",
"(1950, 7, 7, 512)\n(345, 7, 7, 512)\n"
],
[
"def get_layers(input_shape):\n return [\n Flatten(input_shape=input_shape),\n Dropout(0.4),\n Dense(256, activation='relu'),\n BatchNormalization(),\n Dropout(0.5),\n Dense(256, activation='relu'),\n BatchNormalization(),\n Dropout(0.6),\n Dense(1, activation='sigmoid')\n ]\n\ndef get_model(input_shape):\n model = Sequential(get_layers(input_shape))\n model.compile(Adam(), loss = 'binary_crossentropy', metrics=['accuracy'])\n return model",
"_____no_output_____"
],
[
"dense_model = get_model(train_feat.shape[1:])",
"_____no_output_____"
],
[
"dense_model.fit(train_feat, y_train, batch_size=batch_size, \n validation_data=(val_feat, y_val),\n epochs=50, verbose=2)",
"Train on 1950 samples, validate on 345 samples\nEpoch 1/50\n1s - loss: 0.5180 - acc: 0.7892 - val_loss: 0.3515 - val_acc: 0.8986\nEpoch 2/50\n0s - loss: 0.2507 - acc: 0.9000 - val_loss: 0.2008 - val_acc: 0.9391\nEpoch 3/50\n0s - loss: 0.1702 - acc: 0.9344 - val_loss: 0.2712 - val_acc: 0.9246\nEpoch 4/50\n0s - loss: 0.1175 - acc: 0.9559 - val_loss: 0.1826 - val_acc: 0.9333\nEpoch 5/50\n0s - loss: 0.0794 - acc: 0.9697 - val_loss: 0.2345 - val_acc: 0.9217\nEpoch 6/50\n0s - loss: 0.0673 - acc: 0.9769 - val_loss: 0.2085 - val_acc: 0.9246\nEpoch 7/50\n0s - loss: 0.0634 - acc: 0.9764 - val_loss: 0.2127 - val_acc: 0.9217\nEpoch 8/50\n0s - loss: 0.0434 - acc: 0.9851 - val_loss: 0.2152 - val_acc: 0.9304\nEpoch 9/50\n0s - loss: 0.0558 - acc: 0.9831 - val_loss: 0.1870 - val_acc: 0.9449\nEpoch 10/50\n0s - loss: 0.0464 - acc: 0.9800 - val_loss: 0.1804 - val_acc: 0.9333\nEpoch 11/50\n0s - loss: 0.0407 - acc: 0.9846 - val_loss: 0.2482 - val_acc: 0.9333\nEpoch 12/50\n0s - loss: 0.0375 - acc: 0.9856 - val_loss: 0.2167 - val_acc: 0.9333\nEpoch 13/50\n0s - loss: 0.0409 - acc: 0.9862 - val_loss: 0.2987 - val_acc: 0.9275\nEpoch 14/50\n0s - loss: 0.0430 - acc: 0.9841 - val_loss: 0.3403 - val_acc: 0.9246\nEpoch 15/50\n0s - loss: 0.0409 - acc: 0.9867 - val_loss: 0.3289 - val_acc: 0.9130\nEpoch 16/50\n0s - loss: 0.0346 - acc: 0.9877 - val_loss: 0.2558 - val_acc: 0.9246\nEpoch 17/50\n0s - loss: 0.0282 - acc: 0.9913 - val_loss: 0.2379 - val_acc: 0.9246\nEpoch 18/50\n0s - loss: 0.0219 - acc: 0.9923 - val_loss: 0.2496 - val_acc: 0.9246\nEpoch 19/50\n0s - loss: 0.0236 - acc: 0.9933 - val_loss: 0.2112 - val_acc: 0.9333\nEpoch 20/50\n0s - loss: 0.0192 - acc: 0.9908 - val_loss: 0.2149 - val_acc: 0.9246\nEpoch 21/50\n0s - loss: 0.0231 - acc: 0.9918 - val_loss: 0.2017 - val_acc: 0.9362\nEpoch 22/50\n0s - loss: 0.0197 - acc: 0.9938 - val_loss: 0.2079 - val_acc: 0.9333\nEpoch 23/50\n0s - loss: 0.0324 - acc: 0.9882 - val_loss: 0.2277 - val_acc: 0.9333\nEpoch 24/50\n0s - loss: 0.0150 - acc: 0.9944 - 
val_loss: 0.2659 - val_acc: 0.9246\nEpoch 25/50\n0s - loss: 0.0165 - acc: 0.9938 - val_loss: 0.2315 - val_acc: 0.9420\nEpoch 26/50\n0s - loss: 0.0198 - acc: 0.9918 - val_loss: 0.2515 - val_acc: 0.9333\nEpoch 27/50\n0s - loss: 0.0166 - acc: 0.9944 - val_loss: 0.2534 - val_acc: 0.9304\nEpoch 28/50\n0s - loss: 0.0210 - acc: 0.9933 - val_loss: 0.2733 - val_acc: 0.9217\nEpoch 29/50\n0s - loss: 0.0195 - acc: 0.9933 - val_loss: 0.3690 - val_acc: 0.9246\nEpoch 30/50\n0s - loss: 0.0108 - acc: 0.9959 - val_loss: 0.2729 - val_acc: 0.9304\nEpoch 31/50\n0s - loss: 0.0151 - acc: 0.9933 - val_loss: 0.2608 - val_acc: 0.9333\nEpoch 32/50\n0s - loss: 0.0151 - acc: 0.9938 - val_loss: 0.2197 - val_acc: 0.9362\nEpoch 33/50\n0s - loss: 0.0248 - acc: 0.9928 - val_loss: 0.2133 - val_acc: 0.9449\nEpoch 34/50\n0s - loss: 0.0203 - acc: 0.9923 - val_loss: 0.2388 - val_acc: 0.9333\nEpoch 35/50\n0s - loss: 0.0220 - acc: 0.9923 - val_loss: 0.1917 - val_acc: 0.9449\nEpoch 36/50\n0s - loss: 0.0209 - acc: 0.9908 - val_loss: 0.2106 - val_acc: 0.9362\nEpoch 37/50\n0s - loss: 0.0228 - acc: 0.9918 - val_loss: 0.2565 - val_acc: 0.9275\nEpoch 38/50\n0s - loss: 0.0292 - acc: 0.9897 - val_loss: 0.2785 - val_acc: 0.9333\nEpoch 39/50\n0s - loss: 0.0196 - acc: 0.9928 - val_loss: 0.1958 - val_acc: 0.9449\nEpoch 40/50\n0s - loss: 0.0153 - acc: 0.9959 - val_loss: 0.2178 - val_acc: 0.9217\nEpoch 41/50\n0s - loss: 0.0176 - acc: 0.9938 - val_loss: 0.2230 - val_acc: 0.9304\nEpoch 42/50\n0s - loss: 0.0207 - acc: 0.9928 - val_loss: 0.2584 - val_acc: 0.9246\nEpoch 43/50\n0s - loss: 0.0100 - acc: 0.9969 - val_loss: 0.2752 - val_acc: 0.9246\nEpoch 44/50\n0s - loss: 0.0189 - acc: 0.9938 - val_loss: 0.2677 - val_acc: 0.9217\nEpoch 45/50\n0s - loss: 0.0143 - acc: 0.9944 - val_loss: 0.2913 - val_acc: 0.9304\nEpoch 46/50\n0s - loss: 0.0169 - acc: 0.9949 - val_loss: 0.2436 - val_acc: 0.9217\nEpoch 47/50\n0s - loss: 0.0141 - acc: 0.9959 - val_loss: 0.2457 - val_acc: 0.9333\nEpoch 48/50\n0s - loss: 0.0185 - acc: 0.9938 - 
val_loss: 0.2856 - val_acc: 0.9246\nEpoch 49/50\n0s - loss: 0.0142 - acc: 0.9944 - val_loss: 0.3000 - val_acc: 0.9246\nEpoch 50/50\n0s - loss: 0.0218 - acc: 0.9938 - val_loss: 0.3120 - val_acc: 0.9246\n"
],
[
"y_train_da = np.concatenate([y_train]*8)\nprint(y_train.shape)\nprint(y_train_da.shape)",
"(1920, 1)\n(15360, 1)\n"
],
[
"model = get_model(train_feat.shape[1:])\n\nmodel.fit(train_feat, y_train_da, batch_size=batch_size, validation_data=(val_feat, y_val), epochs = 20, verbose=2)",
"Train on 15360 samples, validate on 320 samples\nEpoch 1/20\n3s - loss: 0.4335 - acc: 0.8275 - val_loss: 2.0344 - val_acc: 0.8250\nEpoch 2/20\n2s - loss: 0.2770 - acc: 0.8854 - val_loss: 2.9618 - val_acc: 0.7594\nEpoch 3/20\n2s - loss: 0.2355 - acc: 0.9029 - val_loss: 3.6947 - val_acc: 0.7375\nEpoch 4/20\n2s - loss: 0.2196 - acc: 0.9088 - val_loss: 3.4920 - val_acc: 0.7438\nEpoch 5/20\n2s - loss: 0.2015 - acc: 0.9156 - val_loss: 4.7506 - val_acc: 0.6937\nEpoch 6/20\n2s - loss: 0.1968 - acc: 0.9182 - val_loss: 4.1566 - val_acc: 0.7188\nEpoch 7/20\n2s - loss: 0.1873 - acc: 0.9236 - val_loss: 4.6893 - val_acc: 0.6906\nEpoch 8/20\n2s - loss: 0.1770 - acc: 0.9266 - val_loss: 4.7387 - val_acc: 0.6906\nEpoch 9/20\n2s - loss: 0.1673 - acc: 0.9315 - val_loss: 4.6118 - val_acc: 0.6937\nEpoch 10/20\n2s - loss: 0.1646 - acc: 0.9318 - val_loss: 4.6731 - val_acc: 0.7000\nEpoch 11/20\n2s - loss: 0.1546 - acc: 0.9363 - val_loss: 4.5431 - val_acc: 0.7000\nEpoch 12/20\n2s - loss: 0.1469 - acc: 0.9419 - val_loss: 4.5946 - val_acc: 0.6969\nEpoch 13/20\n2s - loss: 0.1538 - acc: 0.9380 - val_loss: 4.3475 - val_acc: 0.7188\nEpoch 14/20\n2s - loss: 0.1437 - acc: 0.9418 - val_loss: 4.6791 - val_acc: 0.7000\nEpoch 15/20\n2s - loss: 0.1411 - acc: 0.9439 - val_loss: 4.5711 - val_acc: 0.7094\nEpoch 16/20\n2s - loss: 0.1339 - acc: 0.9447 - val_loss: 4.7959 - val_acc: 0.6969\nEpoch 17/20\n2s - loss: 0.1361 - acc: 0.9456 - val_loss: 4.8885 - val_acc: 0.6906\nEpoch 18/20\n2s - loss: 0.1306 - acc: 0.9481 - val_loss: 4.8044 - val_acc: 0.6969\nEpoch 19/20\n2s - loss: 0.1348 - acc: 0.9439 - val_loss: 4.7747 - val_acc: 0.6937\nEpoch 20/20\n2s - loss: 0.1284 - acc: 0.9479 - val_loss: 4.8829 - val_acc: 0.6937\n"
],
[
"model.optimizer.lr = 0.00001\nmodel.fit(train_feat, y_train_da, batch_size=batch_size, validation_data=(val_feat, y_val), epochs = 50, verbose=2)",
"Train on 15360 samples, validate on 320 samples\nEpoch 1/50\n3s - loss: 0.1186 - acc: 0.9530 - val_loss: 0.3093 - val_acc: 0.8750\nEpoch 2/50\n2s - loss: 0.1270 - acc: 0.9470 - val_loss: 0.3046 - val_acc: 0.8906\nEpoch 3/50\n2s - loss: 0.1189 - acc: 0.9536 - val_loss: 0.2750 - val_acc: 0.8906\nEpoch 4/50\n2s - loss: 0.1239 - acc: 0.9516 - val_loss: 0.2809 - val_acc: 0.9000\nEpoch 5/50\n2s - loss: 0.1140 - acc: 0.9540 - val_loss: 0.3038 - val_acc: 0.8875\nEpoch 6/50\n2s - loss: 0.1139 - acc: 0.9544 - val_loss: 0.2951 - val_acc: 0.8812\nEpoch 7/50\n2s - loss: 0.1110 - acc: 0.9551 - val_loss: 0.3364 - val_acc: 0.8594\nEpoch 8/50\n2s - loss: 0.1141 - acc: 0.9548 - val_loss: 0.2855 - val_acc: 0.8812\nEpoch 9/50\n2s - loss: 0.1152 - acc: 0.9553 - val_loss: 0.3078 - val_acc: 0.9000\nEpoch 10/50\n2s - loss: 0.1052 - acc: 0.9585 - val_loss: 0.3077 - val_acc: 0.8844\nEpoch 11/50\n2s - loss: 0.1006 - acc: 0.9600 - val_loss: 0.3366 - val_acc: 0.8781\nEpoch 12/50\n2s - loss: 0.1073 - acc: 0.9568 - val_loss: 0.3699 - val_acc: 0.8656\nEpoch 13/50\n2s - loss: 0.1015 - acc: 0.9602 - val_loss: 0.2688 - val_acc: 0.9000\nEpoch 14/50\n2s - loss: 0.1023 - acc: 0.9608 - val_loss: 0.3209 - val_acc: 0.8875\nEpoch 15/50\n2s - loss: 0.0977 - acc: 0.9631 - val_loss: 0.3445 - val_acc: 0.8812\nEpoch 16/50\n2s - loss: 0.1022 - acc: 0.9604 - val_loss: 0.2562 - val_acc: 0.9000\nEpoch 17/50\n2s - loss: 0.0895 - acc: 0.9673 - val_loss: 0.2997 - val_acc: 0.8906\nEpoch 18/50\n2s - loss: 0.1001 - acc: 0.9597 - val_loss: 0.2922 - val_acc: 0.9000\nEpoch 19/50\n2s - loss: 0.1004 - acc: 0.9609 - val_loss: 0.3587 - val_acc: 0.8781\nEpoch 20/50\n2s - loss: 0.0940 - acc: 0.9628 - val_loss: 0.3181 - val_acc: 0.8938\nEpoch 21/50\n2s - loss: 0.0924 - acc: 0.9654 - val_loss: 0.3311 - val_acc: 0.8844\nEpoch 22/50\n2s - loss: 0.0860 - acc: 0.9663 - val_loss: 0.3162 - val_acc: 0.8906\nEpoch 23/50\n2s - loss: 0.0905 - acc: 0.9662 - val_loss: 0.3123 - val_acc: 0.9031\nEpoch 24/50\n2s - loss: 0.0906 - acc: 0.9663 - 
val_loss: 0.4414 - val_acc: 0.8688\nEpoch 25/50\n2s - loss: 0.0977 - acc: 0.9644 - val_loss: 0.3411 - val_acc: 0.8875\nEpoch 26/50\n2s - loss: 0.0913 - acc: 0.9642 - val_loss: 0.3106 - val_acc: 0.9125\nEpoch 27/50\n2s - loss: 0.0882 - acc: 0.9658 - val_loss: 0.3224 - val_acc: 0.9000\nEpoch 28/50\n2s - loss: 0.0899 - acc: 0.9660 - val_loss: 0.3361 - val_acc: 0.9031\nEpoch 29/50\n2s - loss: 0.0885 - acc: 0.9647 - val_loss: 0.4443 - val_acc: 0.8781\nEpoch 30/50\n2s - loss: 0.0853 - acc: 0.9685 - val_loss: 0.3233 - val_acc: 0.8844\nEpoch 31/50\n2s - loss: 0.0844 - acc: 0.9678 - val_loss: 0.3035 - val_acc: 0.8906\nEpoch 32/50\n2s - loss: 0.0845 - acc: 0.9673 - val_loss: 0.3044 - val_acc: 0.9094\nEpoch 33/50\n2s - loss: 0.0843 - acc: 0.9671 - val_loss: 0.3032 - val_acc: 0.8938\nEpoch 34/50\n2s - loss: 0.0817 - acc: 0.9681 - val_loss: 0.3049 - val_acc: 0.8969\nEpoch 35/50\n2s - loss: 0.0767 - acc: 0.9694 - val_loss: 0.3446 - val_acc: 0.8875\nEpoch 36/50\n2s - loss: 0.0755 - acc: 0.9722 - val_loss: 0.3197 - val_acc: 0.9062\nEpoch 37/50\n2s - loss: 0.0767 - acc: 0.9722 - val_loss: 0.3497 - val_acc: 0.8844\nEpoch 38/50\n2s - loss: 0.0764 - acc: 0.9708 - val_loss: 0.2903 - val_acc: 0.9125\nEpoch 39/50\n2s - loss: 0.0790 - acc: 0.9699 - val_loss: 0.4095 - val_acc: 0.8906\nEpoch 40/50\n2s - loss: 0.0743 - acc: 0.9709 - val_loss: 0.3643 - val_acc: 0.8906\nEpoch 41/50\n2s - loss: 0.0748 - acc: 0.9713 - val_loss: 0.3510 - val_acc: 0.8844\nEpoch 42/50\n2s - loss: 0.0732 - acc: 0.9733 - val_loss: 0.3747 - val_acc: 0.8906\nEpoch 43/50\n2s - loss: 0.0739 - acc: 0.9728 - val_loss: 0.3806 - val_acc: 0.8844\nEpoch 44/50\n2s - loss: 0.0751 - acc: 0.9724 - val_loss: 0.2919 - val_acc: 0.9062\nEpoch 45/50\n2s - loss: 0.0748 - acc: 0.9725 - val_loss: 0.3460 - val_acc: 0.9000\nEpoch 46/50\n2s - loss: 0.0740 - acc: 0.9719 - val_loss: 0.3257 - val_acc: 0.8938\nEpoch 47/50\n2s - loss: 0.0766 - acc: 0.9713 - val_loss: 0.2977 - val_acc: 0.8969\nEpoch 48/50\n2s - loss: 0.0673 - acc: 0.9755 - 
val_loss: 0.3519 - val_acc: 0.9031\nEpoch 49/50\n2s - loss: 0.0695 - acc: 0.9726 - val_loss: 0.3883 - val_acc: 0.8750\nEpoch 50/50\n2s - loss: 0.0664 - acc: 0.9757 - val_loss: 0.3758 - val_acc: 0.8812\n"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ab022ccf1cd32f21a831697146544bfd155db06
| 793,649 |
ipynb
|
Jupyter Notebook
|
Grid2op/8_PlottingCapabilities.ipynb
|
windyasd/RL_Study
|
7b28063542f08471c091a4b5b0f6ed17fbc52c78
|
[
"Apache-2.0"
] | 5 |
2020-10-07T14:06:06.000Z
|
2020-12-27T09:01:30.000Z
|
Grid2op/8_PlottingCapabilities.ipynb
|
windyasd/RL_Study
|
7b28063542f08471c091a4b5b0f6ed17fbc52c78
|
[
"Apache-2.0"
] | null | null | null |
Grid2op/8_PlottingCapabilities.ipynb
|
windyasd/RL_Study
|
7b28063542f08471c091a4b5b0f6ed17fbc52c78
|
[
"Apache-2.0"
] | 1 |
2020-10-15T06:38:52.000Z
|
2020-10-15T06:38:52.000Z
| 1,964.477723 | 135,607 | 0.961546 |
[
[
[
"import matplotlib.pyplot as plt # pip install matplotlib\nimport seaborn as sns # pip install seaborn\nimport plotly.graph_objects as go # pip install plotly\nimport imageio # pip install imageio",
"_____no_output_____"
],
[
"import grid2op\nenv = grid2op.make(test=True)",
"C:\\Users\\LXM\\AppData\\Roaming\\Python\\Python37\\site-packages\\grid2op\\MakeEnv\\Make.py:265: UserWarning: You are using a development environment. This environment is not intended for training agents. It might not be up to date and its primary use if for tests (hence the \"test=True\" you passed as argument). Use at your own risk.\n warnings.warn(_MAKE_DEV_ENV_WARN)\n"
],
[
"from grid2op.PlotGrid import PlotMatplot\nplot_helper = PlotMatplot(env.observation_space)\nline_ids = [int(i) for i in range(env.n_line)]\nfig_layout = plot_helper.plot_layout()\n\n",
"_____no_output_____"
],
[
"obs = env.reset()\nfig_obs = plot_helper.plot_obs(obs)\n",
"_____no_output_____"
],
[
"action = env.action_space({\"set_bus\": {\"loads_id\": [(0,2)], \"lines_or_id\": [(3,2)], \"lines_ex_id\": [(0,2)]}})\nprint(action)\n",
"This action will:\n\t - NOT change anything to the injections\n\t - NOT perform any redispatching action\n\t - NOT force any line status\n\t - NOT switch any line status\n\t - NOT switch anything in the topology\n\t - Set the bus of the following element:\n\t \t - assign bus 2 to line (extremity) 0 [on substation 1]\n\t \t - assign bus 2 to line (origin) 3 [on substation 1]\n\t \t - assign bus 2 to load 0 [on substation 1]\n"
],
[
"new_obs, reward, done, info = env.step(action)\nfig_obs3 = plot_helper.plot_obs(new_obs)\n\n",
"_____no_output_____"
],
[
"from grid2op.Agent import RandomAgent\n\nclass CustomRandom(RandomAgent):\n def __init__(self, action_space):\n RandomAgent.__init__(self, action_space)\n self.i = 1\n\n def my_act(self, transformed_observation, reward, done=False):\n if (self.i % 10) != 0:\n res = 0\n else:\n res = self.action_space.sample()\n self.i += 1\n return res\n \nmyagent = CustomRandom(env.action_space)\nobs = env.reset()\nreward = env.reward_range[0]\ndone = False\nwhile not done:\n env.render()\n act = myagent.act(obs, reward, done)\n obs, reward, done, info = env.step(act)\nenv.close()",
"C:\\Users\\LXM\\AppData\\Roaming\\Python\\Python37\\site-packages\\grid2op\\Environment\\Environment.py:618: UserWarning: Matplotlib is currently using module://ipykernel.pylab.backend_inline, which is a non-GUI backend, so cannot show the figure.\n fig.show()\n"
],
[
"from grid2op.Runner import Runner\nenv = grid2op.make(test=True)\nmy_awesome_agent = CustomRandom(env.action_space)\nrunner = Runner(**env.get_params_for_runner(), agentClass=None, agentInstance=my_awesome_agent)",
"C:\\Users\\LXM\\AppData\\Roaming\\Python\\Python37\\site-packages\\grid2op\\MakeEnv\\Make.py:265: UserWarning: You are using a development environment. This environment is not intended for training agents. It might not be up to date and its primary use if for tests (hence the \"test=True\" you passed as argument). Use at your own risk.\n warnings.warn(_MAKE_DEV_ENV_WARN)\n"
],
[
"import os\npath_agents = \"path_agents\" # this is mandatory for grid2viz to have a directory with only agents\n# that is why we have it here. It is absolutely not mandatory for this simpler example.\nmax_iter = 10 # to save time we only assess performance on 10 iterations\nif not os.path.exists(path_agents):\n os.mkdir(path_agents)\npath_awesome_agent_log = os.path.join(path_agents, \"awesome_agent_logs\")\nres = runner.run(nb_episode=2, path_save=path_awesome_agent_log, max_iter=max_iter)\n",
"_____no_output_____"
],
[
"from grid2op.Episode import EpisodeReplay\ngif_name = \"episode\"\nep_replay = EpisodeReplay(agent_path=path_awesome_agent_log)\nfor _, chron_name, cum_reward, nb_time_step, max_ts in res:\n ep_replay.replay_episode(chron_name, # which chronic was started\n gif_name=gif_name, # Name of the gif file\n display=False, # dont wait before rendering each frames\n fps=3.0) # limit to 3 frames per second\n",
"C:\\Users\\LXM\\AppData\\Roaming\\Python\\Python37\\site-packages\\grid2op\\Episode\\EpisodeReplay.py:177: UserWarning: Failed to optimize .GIF size, but gif is still saved:\nInstall dependencies to reduce size by ~3 folds\napt-get install gifsicle && pip3 install pygifsicle\n warnings.warn(warn_msg)\n"
],
[
"# make a runner for this agent\nfrom grid2op.Agent import DoNothingAgent, TopologyGreedy\nimport shutil\n\nfor agentClass, agentName in zip([DoNothingAgent], # , TopologyGreedy\n [\"DoNothingAgent\"]): # , \"TopologyGreedy\"\n path_this_agent = os.path.join(path_agents, agentName)\n shutil.rmtree(os.path.abspath(path_this_agent), ignore_errors=True)\n runner = Runner(**env.get_params_for_runner(),\n agentClass=agentClass\n )\n res = runner.run(path_save=path_this_agent, nb_episode=10, \n max_iter=800)\n print(\"The results for the {} agent are:\".format(agentName))\n for _, chron_id, cum_reward, nb_time_step, max_ts in res:\n msg_tmp = \"\\tFor chronics with id {}\\n\".format(chron_id)\n msg_tmp += \"\\t\\t - cumulative reward: {:.6f}\\n\".format(cum_reward)\n msg_tmp += \"\\t\\t - number of time steps completed: {:.0f} / {:.0f}\".format(nb_time_step, max_ts)\n print(msg_tmp)\n",
"The results for the DoNothingAgent agent are:\n\tFor chronics with id 000\n\t\t - cumulative reward: 888427.500000\n\t\t - number of time steps completed: 799 / 800\n\tFor chronics with id 001\n\t\t - cumulative reward: 887177.687500\n\t\t - number of time steps completed: 800 / 800\n\tFor chronics with id 000\n\t\t - cumulative reward: 888427.500000\n\t\t - number of time steps completed: 799 / 800\n\tFor chronics with id 001\n\t\t - cumulative reward: 887177.687500\n\t\t - number of time steps completed: 800 / 800\n\tFor chronics with id 000\n\t\t - cumulative reward: 888427.500000\n\t\t - number of time steps completed: 799 / 800\n\tFor chronics with id 001\n\t\t - cumulative reward: 887177.687500\n\t\t - number of time steps completed: 800 / 800\n\tFor chronics with id 000\n\t\t - cumulative reward: 888427.500000\n\t\t - number of time steps completed: 799 / 800\n\tFor chronics with id 001\n\t\t - cumulative reward: 887177.687500\n\t\t - number of time steps completed: 800 / 800\n\tFor chronics with id 000\n\t\t - cumulative reward: 888427.500000\n\t\t - number of time steps completed: 799 / 800\n\tFor chronics with id 001\n\t\t - cumulative reward: 887177.687500\n\t\t - number of time steps completed: 800 / 800\n"
],
[
"import sys\nshutil.rmtree(os.path.join(os.path.abspath(path_agents), \"_cache\"), ignore_errors=True)\n!$sys.executable -m grid2viz.main --path=$path_agents\n",
"usage: main.py [-h] [--agents_path AGENTS_PATH] [--env_path ENV_PATH]\n [--port PORT] [--debug] [--n_cores N_CORES] [--cache CACHE]\nmain.py: error: unrecognized arguments: --path=path_agents\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ab04c24ec98aeb52a5af888ac59741b86d2419e
| 8,347 |
ipynb
|
Jupyter Notebook
|
Matplotlib/Matplotlib_Create_Waterfall_chart.ipynb
|
techthiyanes/awesome-notebooks
|
10ab4da1b94dfa101e908356a649609b0b17561a
|
[
"BSD-3-Clause"
] | null | null | null |
Matplotlib/Matplotlib_Create_Waterfall_chart.ipynb
|
techthiyanes/awesome-notebooks
|
10ab4da1b94dfa101e908356a649609b0b17561a
|
[
"BSD-3-Clause"
] | null | null | null |
Matplotlib/Matplotlib_Create_Waterfall_chart.ipynb
|
techthiyanes/awesome-notebooks
|
10ab4da1b94dfa101e908356a649609b0b17561a
|
[
"BSD-3-Clause"
] | null | null | null | 28.391156 | 298 | 0.585839 |
[
[
[
"<img width=\"10%\" alt=\"Naas\" src=\"https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160\"/>",
"_____no_output_____"
],
[
"# Matplotlib - Create Waterfall chart\n<a href=\"https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/Matplotlib/Matplotlib_Create_Waterfall_chart.ipynb\" target=\"_parent\"><img src=\"https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg\"/></a>",
"_____no_output_____"
],
[
"**Tags:** #matplotlib #chart #warterfall #dataviz #snippet #operations #image",
"_____no_output_____"
],
[
"**Author:** [Jeremy Ravenel](https://www.linkedin.com/in/ACoAAAJHE7sB5OxuKHuzguZ9L6lfDHqw--cdnJg/)",
"_____no_output_____"
],
[
"## Input",
"_____no_output_____"
],
[
"### Import library",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom matplotlib.ticker import FuncFormatter",
"_____no_output_____"
]
],
[
[
"## Model",
"_____no_output_____"
],
[
"### Create the waterfall chart",
"_____no_output_____"
]
],
[
[
"#Use python 2.7+ syntax to format currency\ndef money(x, pos):\n 'The two args are the value and tick position'\n return \"${:,.0f}\".format(x)\nformatter = FuncFormatter(money)\n\n#Data to plot. Do not include a total, it will be calculated\nindex = ['sales','returns','credit fees','rebates','late charges','shipping']\ndata = {'amount': [350000,-30000,-7500,-25000,95000,-7000]}\n\n#Store data and create a blank series to use for the waterfall\ntrans = pd.DataFrame(data=data,index=index)\nblank = trans.amount.cumsum().shift(1).fillna(0)\n\n#Get the net total number for the final element in the waterfall\ntotal = trans.sum().amount\ntrans.loc[\"net\"]= total\nblank.loc[\"net\"] = total\n\n#The steps graphically show the levels as well as used for label placement\nstep = blank.reset_index(drop=True).repeat(3).shift(-1)\nstep[1::3] = np.nan\n\n#When plotting the last element, we want to show the full bar,\n#Set the blank to 0\nblank.loc[\"net\"] = 0\n\n#Plot and label\nmy_plot = trans.plot(kind='bar', stacked=True, bottom=blank,legend=None, figsize=(10, 5), title=\"2014 Sales Waterfall\")\nmy_plot.plot(step.index, step.values,'k')\nmy_plot.set_xlabel(\"Transaction Types\")\n\n#Format the axis for dollars\nmy_plot.yaxis.set_major_formatter(formatter)\n\n#Get the y-axis position for the labels\ny_height = trans.amount.cumsum().shift(1).fillna(0)\n\n#Get an offset so labels don't sit right on top of the bar\nmax = trans.max()\nneg_offset = max / 25\npos_offset = max / 50\nplot_offset = int(max / 15)\n\n#Start label loop\nloop = 0\nfor index, row in trans.iterrows():\n # For the last item in the list, we don't want to double count\n if row['amount'] == total:\n y = y_height[loop]\n else:\n y = y_height[loop] + row['amount']\n # Determine if we want a neg or pos offset\n if row['amount'] > 0:\n y += pos_offset\n else:\n y -= neg_offset\n my_plot.annotate(\"{:,.0f}\".format(row['amount']),(loop,y),ha=\"center\")\n loop+=1",
"_____no_output_____"
]
],
[
[
"## Output",
"_____no_output_____"
],
[
"### Display result",
"_____no_output_____"
]
],
[
[
"#Scale up the y axis so there is room for the labels\nmy_plot.set_ylim(0,blank.max()+int(plot_offset))\n#Rotate the labels\nmy_plot.set_xticklabels(trans.index,rotation=0)\nmy_plot.get_figure().savefig(\"waterfall.png\",dpi=200,bbox_inches='tight')",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
4ab04f2e981f9e541039fd2597192a92766b748f
| 418,893 |
ipynb
|
Jupyter Notebook
|
examples/07 forecasting.ipynb
|
Aadityajoshi151/okama
|
7e3b0bc77858d1526210f0fb8012d1929d43efc3
|
[
"MIT"
] | 82 |
2020-11-19T18:34:51.000Z
|
2022-03-25T18:23:43.000Z
|
examples/07 forecasting.ipynb
|
Aadityajoshi151/okama
|
7e3b0bc77858d1526210f0fb8012d1929d43efc3
|
[
"MIT"
] | 21 |
2020-12-06T08:17:33.000Z
|
2022-03-29T03:51:10.000Z
|
examples/07 forecasting.ipynb
|
Aadityajoshi151/okama
|
7e3b0bc77858d1526210f0fb8012d1929d43efc3
|
[
"MIT"
] | 14 |
2020-12-05T11:39:32.000Z
|
2022-03-03T13:02:22.000Z
| 398.566127 | 109,576 | 0.932503 |
[
[
[
"<a href=\"https://colab.research.google.com/github/mbk-dev/okama/blob/master/examples/07%20forecasting.ipynb\"><img align=\"left\" src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open in Colab\" title=\"Open and Execute in Google Colaboratory\"></a>",
"_____no_output_____"
]
],
[
[
"!pip install okama",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\nplt.rcParams['figure.figsize'] = [12.0, 6.0]\n\nimport okama as ok",
"_____no_output_____"
]
],
[
[
"*okama* has several methods to forecast portfolio performance:\n- according to historical data (without distribution models)\n- according to normal distribution\n- according to lognormal distribution",
"_____no_output_____"
],
[
"### Testing distribution",
"_____no_output_____"
],
[
"Before we use normal or lognormal distribution models, we should test whether the historical distribution of portfolio returns fits them. \nThere is a notebook dedicated to backtesting distributions.",
"_____no_output_____"
]
],
[
[
"ls = ['GLD.US', 'SPY.US', 'VNQ.US', 'AGG.US']\nal = ok.AssetList(ls, inflation=False)\nal",
"_____no_output_____"
],
[
"al.names",
"_____no_output_____"
],
[
"al.kstest(distr='norm')",
"_____no_output_____"
],
[
"al.kstest(distr='lognorm')",
"_____no_output_____"
]
],
[
[
"We see that at least SPY fails the null hypothesis (does not pass the 5% threshold) for both normal and lognormal distributions. \nBut AGG has a distribution close to normal, and for GLD lognormal fits slightly better. \n\nNow we can construct the portfolio.",
"_____no_output_____"
]
],
[
[
"weights = [0.20, 0.10, 0.10, 0.60]\npf = ok.Portfolio(ls, ccy='USD', weights=weights, inflation=False)\npf",
"_____no_output_____"
],
[
"pf.table",
"_____no_output_____"
],
[
"pf.kstest(distr='norm')",
"_____no_output_____"
],
[
"pf.kstest(distr='lognorm')",
"_____no_output_____"
]
],
[
[
"As expected, the Kolmogorov-Smirnov test shows that the normal distribution fits much better: AGG has a 60% weight in the allocation.",
"_____no_output_____"
],
[
"### Forecasting",
"_____no_output_____"
],
[
"The most intuitive way to present forecasted portfolio performance is to use the **plot_forecast** method to draw the accumulated return chart (historical return plus forecasted data). \nIt is possible to use an arbitrary set of percentiles (10, 50, 90 is the default attribute value). \n\nThe maximum forecast period is limited to half of the historical data period. For example, if the historical data period is 10 years, it's possible to use forecast periods up to 5 years.",
"_____no_output_____"
]
],
[
[
"pf.plot_forecast(distr='norm', years=5, figsize=(12,5));",
"_____no_output_____"
]
],
[
[
"Another way to visualize forecasts of normally distributed random returns is a Monte Carlo simulation ...",
"_____no_output_____"
]
],
[
[
"pf.plot_forecast_monte_carlo(distr='norm', years=5, n=20) # Generates 20 forecasted wealth indexes (for random normally distributed returns time series)",
"_____no_output_____"
]
],
[
[
"We can get numeric CAGR percentiles for each period with the **percentile_distribution_cagr** method. To get credible forecast results, high n values should be used.",
"_____no_output_____"
]
],
[
[
"pf.percentile_distribution_cagr(distr='norm', years=5, percentiles=[1, 20, 50, 80, 99], n=10000)",
"_____no_output_____"
]
],
[
[
"The same method could be used to get VaR (Value at Risk):",
"_____no_output_____"
]
],
[
[
"pf.percentile_distribution_cagr(distr='norm', years=1, percentiles=[1], n=10000)  # the 1% percentile corresponds to a 99% confidence level",
"_____no_output_____"
]
],
[
[
"One-year VaR (99% confidence level) is equal to 8%, which is a fair value for a conservative portfolio. \nThe probability of getting a negative result in the forecast period is the percentile rank for a zero CAGR value (score=0).",
"_____no_output_____"
]
],
[
[
"pf.percentile_inverse_cagr(distr='norm', years=1, score=0, n=10000) # one year period",
"_____no_output_____"
]
],
[
[
"### Lognormal distribution",
"_____no_output_____"
],
[
"Some financial assets have return distributions close to lognormal. \nThe same calculations could be repeated for the lognormal distribution by setting distr='lognorm'.",
"_____no_output_____"
]
],
[
[
"ln = ok.Portfolio(['EDV.US'], inflation=False)\nln",
"_____no_output_____"
],
[
"ln.names",
"_____no_output_____"
]
],
[
[
"We can visualize the distribution and compare it with the lognormal PDF (Probability Density Function).",
"_____no_output_____"
]
],
[
[
"ln.plot_hist_fit(distr='lognorm', bins=30)",
"_____no_output_____"
],
[
"ln.kstest(distr='norm') # Kolmogorov-Smirnov test for normal distribution",
"_____no_output_____"
],
[
"ln.kstest(distr='lognorm') # Kolmogorov-Smirnov test for lognormal distribution",
"_____no_output_____"
]
],
[
[
"More importantly, the Kolmogorov-Smirnov test shows that the historical distribution is slightly closer to lognormal. \nTherefore, we can use the lognormal distribution to forecast.",
"_____no_output_____"
]
],
[
[
"ln.plot_forecast(distr='lognorm', percentiles=[30, 50, 70], years=2, n=10000);",
"_____no_output_____"
],
[
"pf.percentile_distribution_cagr(distr='lognorm', years=1, percentiles=[1, 20, 50, 80, 99], n=10000)",
"_____no_output_____"
]
],
[
[
"### Forecasting using historical data",
"_____no_output_____"
],
[
"If it's not possible to fit the data to a normal or lognormal distribution, percentiles from the historical data could be used.",
"_____no_output_____"
]
],
[
[
"ht = ok.Portfolio(['SPY.US'])\nht",
"_____no_output_____"
],
[
"ht.kstest('norm')",
"_____no_output_____"
],
[
"ht.kstest('lognorm')",
"_____no_output_____"
]
],
[
[
"The Kolmogorov-Smirnov test does not pass the 5% threshold...",
"_____no_output_____"
],
[
"A big deviation in the tails can be seen in the Quantile-Quantile plot.",
"_____no_output_____"
]
],
[
[
"ht.plot_percentiles_fit('norm')",
"_____no_output_____"
]
],
[
[
"Then we can use percentiles from the historical data to forecast.",
"_____no_output_____"
]
],
[
[
"ht.plot_forecast(years=5, percentiles=[20, 50, 80]);",
"_____no_output_____"
],
[
"ht.percentile_wealth(distr='hist', years=5)",
"_____no_output_____"
]
],
[
[
"Quantitative CAGR percentiles could be obtained from the **percentile_history_cagr** method:",
"_____no_output_____"
]
],
[
[
"ht.percentile_history_cagr(years=5)",
"_____no_output_____"
]
],
[
[
"We can visualize the same data to see how the CAGR ranges narrow with the investment horizon.",
"_____no_output_____"
]
],
[
[
"ht.percentile_history_cagr(years=5).plot();",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4ab058c34eece3282e6723340c64961770714709
| 812,504 |
ipynb
|
Jupyter Notebook
|
codes/ML_2010_2012 - DKADiag.ipynb
|
anushaihalapathirana/ThesisT1D
|
a9691cc9c4045a21fe54c4f00eccf00e656b2265
|
[
"MIT"
] | null | null | null |
codes/ML_2010_2012 - DKADiag.ipynb
|
anushaihalapathirana/ThesisT1D
|
a9691cc9c4045a21fe54c4f00eccf00e656b2265
|
[
"MIT"
] | null | null | null |
codes/ML_2010_2012 - DKADiag.ipynb
|
anushaihalapathirana/ThesisT1D
|
a9691cc9c4045a21fe54c4f00eccf00e656b2265
|
[
"MIT"
] | null | null | null | 288.019851 | 511,980 | 0.910759 |
[
[
[
"# Models",
"_____no_output_____"
]
],
[
[
"import pandas as pd \nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\nfrom functools import reduce\nimport sys\nimport numpy\nimport math\nnumpy.set_printoptions(threshold=sys.maxsize)\n\nfrom sklearn.metrics import accuracy_score\nimport matplotlib.pyplot as plt\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn import metrics\n\nfrom sklearn.feature_selection import RFE\n\nfrom xgboost import XGBClassifier\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.ensemble import AdaBoostClassifier\nfrom sklearn.svm import SVC\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysis\nfrom sklearn.neighbors import KNeighborsClassifier\n\nfrom sklearn.model_selection import GridSearchCV\n\nfrom sklearn.preprocessing import OneHotEncoder, StandardScaler, MinMaxScaler\nfrom sklearn.feature_selection import SelectKBest, mutual_info_classif, SelectPercentile\nfrom sklearn.metrics import confusion_matrix, classification_report, f1_score, auc, roc_curve, roc_auc_score, precision_score, recall_score, balanced_accuracy_score\nfrom numpy.random import seed\nfrom sklearn.model_selection import GridSearchCV, train_test_split, cross_val_score, KFold, StratifiedKFold\nseed(42)\nimport tensorflow as tf\ntf.random.set_seed(38)\nfrom sklearn.impute import SimpleImputer, KNNImputer\nfrom sklearn.experimental import enable_iterative_imputer\nfrom sklearn.impute import IterativeImputer\nfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysis\n\nfrom keras.callbacks import TensorBoard\nfrom keras.models import Sequential\nfrom keras.layers import Dense",
"_____no_output_____"
],
[
"# load pre porcessed data\ndf = pd.read_csv('../prepross_data/data.csv')\n",
"_____no_output_____"
]
],
[
[
"#### Filter out the patient set described in the paper",
"_____no_output_____"
]
],
[
[
"# filter dataset as described in the paper\ndef get_filter_by_age_diabDur(df, age, diabDur):\n    filter_patients = df[(df[\"AgeAtConsent\"] >= age) & (df[\"diagDuration\"] > diabDur)] \n#     filter_patients=filter_patients.drop_duplicates(subset=\"PtID\",keep=\"first\") \n    print(f'Number of patients whose age is {age}+ and diabetes duration greater than {diabDur} is -> {filter_patients.PtID.size}')\n    return filter_patients\n\ndf = get_filter_by_age_diabDur(df, 26, 2)",
"Number of patients whose age is 26+ and diabetes duration greater than 2 is -> 7155\n"
]
],
[
[
"### Pre-processing for SH event prediction\n",
"_____no_output_____"
]
],
[
[
"y_label = 'DKADiag' \n# possible labels Pt_SevHypoEver, SHSeizComaPast12mos, DKAPast12mos, Depression, DiabNeuro, DKADiag",
"_____no_output_____"
],
[
"# fill null values according to the other parameters\n\n# fill with 0 - if the data is not available, the patient probably does not have that medical condition\ndef fill_y_label(row):\n\n    if(math.isnan(row['DKADiag'])):\n        if((row['DKAPast12mos'] == 1) or (row['NumDKAOccur'] >= 1) or (row['Pt_NumHospDKA']>=1) or (row['Pt_HospDKASinceDiag'] == 1)):\n            row['DKADiag'] = 0\n        else:\n            row['DKADiag'] = 1\n    return row\n\n\ndf = df.apply(fill_y_label, axis=1)\n",
"_____no_output_____"
],
[
"# get possible values in column including nan\ndef get_possible_vals_with_nan(df, colName):\n list_val =df[colName].unique().tolist()\n return list_val\n\n\n# {'1.Yes': 0, '2.No': 1, \"3.Don't know\": 2}\n\nprint(df[y_label].unique())\nget_possible_vals_with_nan(df, y_label)\n\n\nif (y_label == 'DKADiag'):\n# df.drop(['NumSHSeizComaPast12mos','Pt_v3NumSHSeizComa', 'SHSeizComaPast12mos'], inplace=True, axis=1) # add SHSeizComaPast12mos\n df[y_label] = df[y_label].replace({2.0: 0.0, 1.0: 0.0, 3.0:1.0, 4.0:1.0})\n\n ",
"[2. 3. 0. 1.]\n"
],
[
"pd.options.display.max_rows = 100\n\ndef get_missing_val_percentage(df):\n return (df.isnull().sum()* 100 / len(df))\n\n\nmissing_per = get_missing_val_percentage(df)\n\n# get missing values < threshold feature name list\nvariables = df.columns\nthresh = 40\nvariable = [ ]\nvar = []\nfor i in range(df.columns.shape[0]):\n if missing_per[i]<= thresh: #setting the threshold as 40%\n variable.append(variables[i])\n else :\n var.append(variables[i])\n \nprint(\"variables missing vals < threshold\") \nprint(variable)\nprint(\"Length: \", len(variable))\n\nprint()\nprint(\"variables missing vals > threshold\") \nprint(var)\nprint(\"Length: \", len(var))",
"variables missing vals < threshold\n['PtID', 'Pt_InitTrt', 'Pt_SevHypoEver', 'Pt_HospDKASinceDiag', 'Pt_NumHospDKA', 'Pt_InsulinRecMethod', 'Pt_InsLev1PerDay', 'Pt_InsLev2PerDay', 'Pt_InsLant1PerDay', 'Pt_InsLant2PerDay', 'Pt_InsUnk', 'Pt_MealBolusMethod', 'Pt_InsCarbRat', 'Pt_InsCarbRatBrkfst', 'Pt_InsCarbRatLunch', 'Pt_InsCarbRatDinn', 'Pt_BolusDecCntCarb', 'Pt_BolusDaySnackFreq', 'Pt_BedtimeSnack', 'Pt_ChkBldSugPriBolus', 'Pt_MissInsDoseFreq', 'Pt_NumBolusDayUnk', 'Pt_InjLongActDayNotUsed', 'Pt_InjShortActDayNotUsed', 'Pt_LongActInsDayNotUsed', 'Pt_NumMeterCheckDay', 'Pt_DLoadHGMFreq', 'Pt_LogBook', 'Pt_ChkKetones', 'Pt_CGMUse', 'Pt_CGMStopUse', 'Pt_LastEyeExamPart', 'Pt_DiabRetTrtPart', 'Pt_LegBlind', 'Pt_HealthProfDiabEdu', 'Pt_GlutFreeDiet', 'Pt_CeliacDr', 'Pt_HighBldPrTrt', 'Pt_Smoke', 'Pt_GenHealth', 'Pt_StressDiab', 'Pt_AnnualInc', 'Pt_HouseholdCnt', 'Pt_InsPriv', 'Pt_MaritalStatus', 'Pt_EmployStatus', 'Pt_RaceEth', 'HyperglyCritHbA1c', 'HyperglyCritRandGluc', 'ReqInsulinCrit', 'DKADiag', 'OralAgnTrt', 'ExamDaysFromConsent', 'Gender', 'Weight', 'Height', 'BldPrSys', 'BldPrDia', 'TannerNotDone', 'InsulinDeliv', 'Lypohyper', 'Lipoatrophy', 'AcanNigrDiag', 'PulseRate', 'AcanNigrPres', 'FootUlcerPres', 'NumOfficeVisits', 'CGMUsed', 'LastEyeExam', 'DiabRetTrt', 'AlbuminStatus', 'PrevMicroalbuminCurrNone', 'GFRBelow60', 'RenalFailDial', 'PostKidneyTrans', 'NephropOthCause', 'ACEARB', 'DKAPast12mos', 'SHSeizComaPast12mos', 'InsHumalog', 'InsNovolog', 'InsApidra', 'InsRegular', 'InsNPH', 'InsPremix7030', 'InsPremix5050', 'InsPremix7525', 'InsLevemir', 'InsLantus', 'InsOther', 'InsNotTaking', 'AgeAtConsent', 'DiagAge', 'Pt_v3NumERVisOthReas', 'Pt_v3NumHospOthReas', 'Pt_v3NumSHSeizComa', 'MajorLifeStressEvent', 'HbA1c', 'HbA1C_SH', 'bmi', 'relative_T1D', 'CardiacAngio', 'CoronaryBypass', 'Hypertension', 'HighTrig', 'HighLDL', 'LowHDL', 'Stroke', 'Celiac', 'Cardiomyopathy', 'CongHeartFail', 'AtrialFib', 'CardiacArrhyth', 'Hemoglob', 'RheumArth', 'Osteo', 
'Depression', 'Anxiety', 'Psychosis', 'DiabNeuro', 'diagDuration', 'Diab_dur_greater', 'education_level']\nLength: 123\n\nvariables missing vals > threshold\n['Pt_BolusBedtimeSnackFreq', 'Pt_InsPumpStartAge', 'Pt_PumpManuf', 'Pt_PumpModel', 'Pt_DaysLeavePumpIns', 'Pt_BasInsRateChgDay', 'Pt_NumBolusDay', 'Pt_ReturnPump', 'Pt_InjMethod', 'Pt_InjLongActDay', 'Pt_InjShortActDay', 'Pt_LongActInsDay', 'Pt_ShortActInsDay', 'Pt_PumpStopUse', 'Pt_SmokeAmt', 'Pt_DaysWkEx', 'Pt_MenarcheAge', 'Pt_RegMenstCyc', 'Pt_IrregMenstCycReas', 'Pt_CurrPreg', 'Pt_MiscarriageNum', 'WeightDiag', 'NumDKAOccur', 'PumpTotBasIns', 'HGMNumDays', 'HGMTestCntAvg', 'HGMGlucMean', 'CGMGlucPctBelow70', 'CGMGlucPctBelow60', 'InsCarbRatBrkfst', 'InsCarbRatLunch', 'InsCarbRatDinn', 'InsCarbRatDinnNotUsed', 'CGMPctBelow55', 'CGMPctBelow80', 'NumSHSeizComaPast12mos']\nLength: 36\n"
],
[
"# cols_to_del = ['Diab_dur_greater','HbA1C_SH', 'Pt_InsHumalog', 'Pt_InsNovolog', 'Pt_BolusDecCntCarb', \n# 'Pt_BolusBedtimeSnackFreq', 'Pt_InsPumpStartAge', 'Pt_PumpManuf', 'Pt_PumpModel',\n# 'Pt_DaysLeavePumpIns', 'Pt_BasInsRateChgDay', 'Pt_NumBolusDay', 'Pt_ReturnPump', \n# 'Pt_InjMethod', 'Pt_InjLongActDay', 'Pt_InjShortActDay', 'Pt_LongActInsDay', \n# 'Pt_ShortActInsDay', 'Pt_PumpStopUse', 'Pt_HealthProfDiabEdu', 'Pt_SmokeAmt', \n# 'Pt_DaysWkEx', 'Pt_MenarcheAge', 'Pt_RegMenstCyc', 'Pt_IrregMenstCycReas',\n# 'Pt_CurrPreg', 'Pt_MiscarriageNum', 'Pt_v3NumHospOthReas',\n# 'HyperglyCritRandGluc', 'WeightDiag', 'NumDKAOccur', 'TannerNotDone', 'PumpTotBasIns',\n# 'HGMNumDays', 'HGMTestCntAvg', 'HGMGlucMean', 'CGMGlucPctBelow70', 'CGMGlucPctBelow60', \n# 'PulseRate', 'InsCarbRatBrkfst', 'InsCarbRatLunch', 'InsCarbRatDinn', 'InsCarbRatDinnNotUsed', \n# 'CGMPctBelow55', 'CGMPctBelow80']\n\ncols_to_del = ['Diab_dur_greater']\n\ndf.drop(cols_to_del, inplace=True, axis=1)\ndf.head(10)",
"_____no_output_____"
]
],
[
[
"# Divide Dataset",
"_____no_output_____"
]
],
[
[
"df=df.drop('PtID', axis = 1)",
"_____no_output_____"
],
[
"\ndef divide_data(df,label):\n Y = df[label]\n X = df.drop(label, axis=1)\n return X, Y\n\nX, Y = divide_data(df, y_label)\n",
"_____no_output_____"
],
[
"Y.unique()",
"_____no_output_____"
]
],
[
[
"# Feature Selection",
"_____no_output_____"
]
],
[
[
"shape = np.shape(X) \nfeature = 20 #shape[1] \nn_classes = 2\n",
"_____no_output_____"
],
[
"seed(42)\ntf.random.set_seed(38)\n# Save original data set\noriginal_X = X\n\n# Split into training and testing sets\nX_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25, stratify=Y, random_state=123)\n# if variable y is a binary categorical variable with values 0 and 1 and there are 25% of zeros and 75% of ones, stratify=y will make sure that your random split has 25% of 0's and 75% of 1's.\n# X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25)\n",
"_____no_output_____"
],
[
"len(Y_train == 0.0)\nunique, counts = numpy.unique(Y_train.to_numpy(), return_counts=True)\nprint(\"Train - \", unique, counts)\n\nunique_test, counts_test = numpy.unique(Y_test.to_numpy(), return_counts=True)\nprint(\"Test - \", unique_test, counts_test)\n",
"Train - [0. 1.] [1978 3388]\nTest - [0. 1.] [ 660 1129]\n"
]
],
[
[
"# Imputations",
"_____no_output_____"
]
],
[
[
"import missingno as msno\nmsno.bar(X_train)",
"_____no_output_____"
]
],
[
[
"### XGB with missing values",
"_____no_output_____"
]
],
[
[
"def plot_roc_curve(fpr, tpr):\n plt.plot(fpr, tpr, color='orange', label='ROC')\n plt.plot([0, 1], [0, 1], color='darkblue', linestyle='--')\n plt.xlabel('False Positive Rate')\n plt.ylabel('True Positive Rate')\n plt.title('Receiver Operating Characteristic (ROC) Curve')\n plt.legend()\n plt.show()",
"_____no_output_____"
],
[
"# use only for XGB classifier with missing values\nX_train_copy = X_train.drop(['DKAPast12mos', 'NumDKAOccur', 'Pt_NumHospDKA', 'Pt_HospDKASinceDiag'], axis=1)\nX_test_copy = X_test.drop(['DKAPast12mos', 'NumDKAOccur', 'Pt_NumHospDKA', 'Pt_HospDKASinceDiag'], axis=1)\n",
"_____no_output_____"
],
[
"# kf = KFold(n_splits= 3, shuffle=False)\ntrain = X_train_copy.copy()\ntrain[y_label] = Y_train.values\n\ndef cross_val_with_missing_val(model):\n# i = 1\n# for train_index, test_index in kf.split(train):\n# X_train1 = train.iloc[train_index].loc[:, X_train.columns]\n# X_test1 = train.iloc[test_index][X_train.columns]\n# y_train1 = train.iloc[train_index].loc[:,y_label]\n# y_test1 = train.iloc[test_index][y_label]\n\n# #Train the model\n# model.fit(X_train1, y_train1) #Training the model\n# print(f\"Accuracy for the fold no. {i} on the test set: {accuracy_score(y_test1, model.predict(X_test1))}\")\n# i += 1\n# return model\n\n\n\n dfs = []\n kf = StratifiedKFold(n_splits=10, shuffle=True, random_state=123)\n i = 1\n for train_index, test_index in kf.split(train, Y_train):\n X_train1 = train.iloc[train_index].loc[:, X_train_copy.columns]\n X_test1 = train.iloc[test_index].loc[:,X_train_copy.columns]\n y_train1 = train.iloc[train_index].loc[:,y_label]\n y_test1 = train.iloc[test_index].loc[:,y_label]\n\n #Train the model\n model.fit(X_train1, y_train1) #Training the model\n print(f\"Accuracy for the fold no. {i} on the test set: {accuracy_score(y_test1, model.predict(X_test1))}, doublecheck: {model.score(X_test1,y_test1)}\")\n\n # how many occurances appear in the train set\n s_train = train.iloc[train_index].loc[:,y_label].value_counts()\n s_train.name = f\"train {i}\"\n s_test = train.iloc[test_index].loc[:,y_label].value_counts()\n s_test.name = f\"test {i}\"\n df = pd.concat([s_train, s_test], axis=1, sort=False)\n df[\"|\"] = \"|\"\n dfs.append(df)\n\n i += 1\n return model\n\n",
"_____no_output_____"
],
[
"# xgboost - train with missing values\n\nmodel=XGBClassifier(\n use_label_encoder=False,\n eta = 0.1,#eta between(0.01-0.2)\n max_depth = 4, #values between(3-10)\n max_delta_step = 10,\n# # scale_pos_weight = 0.4,\n# # n_jobs = 0,\n subsample = 0.5,#values between(0.5-1)\n colsample_bytree = 1,#values between(0.5-1)\n tree_method = \"auto\",\n process_type = \"default\",\n num_parallel_tree=7,\n objective='multi:softmax',\n# # min_child_weight = 3,\n booster='gbtree',\n eval_metric = \"mlogloss\",\n num_class = n_classes\n )\n \n# model.fit(X_train_copy,Y_train)\nmodel = cross_val_with_missing_val(model)\n\n# xgb_pred=model.predict(X_test_copy)\n# xgb_pred_train=model.predict(X_train_copy)\n\nprint(\"\\n \\n =========== Train Dataset =============\")\n\n\ny_scores1 = model.predict_proba(X_train_copy)[:,1]\n\nfpr, tpr, thresholds = roc_curve(Y_train, y_scores1)\nprint(\"train ROC score\", roc_auc_score(Y_train, y_scores1))\noptimal_idx = np.argmax(tpr - fpr)\noptimal_threshold = thresholds[optimal_idx]\nprint(\"Threshold value is:\", optimal_threshold)\nplot_roc_curve(fpr, tpr)\nxgb_pred_train = (model.predict_proba(X_train_copy)[:,1] >= optimal_threshold).astype(int)\n\n\nprint(\"accuracy score: \", accuracy_score(Y_train, xgb_pred_train)*100)\n\nconfusion_matrix_xgb_train = pd.DataFrame(confusion_matrix(Y_train, xgb_pred_train))\nsns.heatmap(confusion_matrix_xgb_train, annot=True,fmt='g')\n\nprint(classification_report(Y_train, xgb_pred_train))\nplt.show()\n\ntrain_acc = model.score(X_train_copy, Y_train)\nprint('Accuracy of XGB on training set: {:.2f}'.format(train_acc))\n\n\n\nprint(\"\\n\\n =========== Test Dataset =============\")\n# find optimal threshold\ny_scores = model.predict_proba(X_test_copy)[:,1]\n\nfpr, tpr, thresholds = roc_curve(Y_test, y_scores)\noptimal_idx = np.argmax(tpr - fpr)\noptimal_threshold = thresholds[optimal_idx]\nprint(\"Threshold value is:\", optimal_threshold)\nplot_roc_curve(fpr, tpr)\n\nxgb_pred = 
(model.predict_proba(X_test_copy)[:,1] >= optimal_threshold).astype(int)\n\n\n\nprint(\"accuracy score: \", accuracy_score(Y_test, xgb_pred)*100)\n\nconfusion_matrix_xgb = confusion_matrix(Y_test, xgb_pred)\nsns.heatmap(confusion_matrix_xgb, annot=True, fmt='g')\n\nprint(classification_report(Y_test, xgb_pred))\nplt.show()\n\ntest_acc = model.score(X_test_copy, Y_test)\nprint('Accuracy of XGB classifier on test set: {:.2f}'\n .format(test_acc))\n\n# ROC\nprint(\"\\n\\n =========== ROC =============\")\n\ny_scores = model.predict_proba(X_test_copy)\nscore = roc_auc_score(Y_test, y_scores[:, 1])\nscore = round(score,4)\nprint(f'roc_auc = {score}')\n\n\nprint(\"\\n\\n =========== Class-wise test accuracy =============\")\nacc = confusion_matrix_xgb.diagonal()/confusion_matrix_xgb.sum(axis=1)\nprint('classwise accuracy [class 0, class 1]: ', acc)\nprint('average accuracy: ', np.sum(acc)/2)\n\n\n",
"_____no_output_____"
],
[
"# feature importance graph of XGB\nfeat_importances = pd.Series(model.feature_importances_, index=X_train_copy.columns[0:162])\nfeat_importances.nlargest(20).plot(kind='barh')",
"_____no_output_____"
],
[
"X_train.update(X_train[[\n 'Pt_InsPriv','Pt_BolusDecCntCarb', 'Pt_HealthProfDiabEdu',\n 'Pt_MiscarriageNum','HyperglyCritRandGluc','NumDKAOccur','TannerNotDone',\n 'Pt_InsLev1PerDay','Pt_InsLev2PerDay','Pt_InsLant1PerDay','Pt_InsLant2PerDay']].fillna(0))\n\nX_test.update(X_test[[\n 'Pt_InsPriv','Pt_BolusDecCntCarb', 'Pt_HealthProfDiabEdu',\n 'Pt_MiscarriageNum','HyperglyCritRandGluc','NumDKAOccur','TannerNotDone',\n 'Pt_InsLev1PerDay','Pt_InsLev2PerDay','Pt_InsLant1PerDay','Pt_InsLant2PerDay']].fillna(0))",
"_____no_output_____"
],
[
"# fill nan values in the categorical dataset with the most frequent value\n\n# tested with mean and median - results are lower than with most_frequent\nimputeX = SimpleImputer(missing_values=np.nan, strategy = \"most_frequent\")\n# imputeX = KNNImputer(missing_values=np.nan, n_neighbors = 3, weights='distance')\n# imputeX = IterativeImputer(max_iter=5, random_state=0)\n\nX_train = imputeX.fit_transform(X_train)\n",
"_____no_output_____"
],
[
"# test data imputation\n\nTest = X_test.copy()\nTest.loc[:,y_label] = Y_test\n\nX_test = imputeX.transform(X_test)\n\n",
"_____no_output_____"
]
],
[
[
"# Scale data",
"_____no_output_____"
]
],
[
[
"# Normalize numeric features\nscaler = StandardScaler()\n# scaler = MinMaxScaler()\nselect = {}\nselect[0] = pd.DataFrame(scaler.fit_transform(X_train))\nselect[1] = Y_train\nselect[2] = pd.DataFrame(scaler.transform(X_test))\n",
"_____no_output_____"
]
],
[
[
"## Feature Selection",
"_____no_output_____"
]
],
[
[
"# TODO\n\n# def select_features(select, feature):\n# selected = {}\n# fs = SelectKBest(score_func=mutual_info_classif, k=feature) # k=feature score_func SelectPercentile\n# selected[0] = fs.fit_transform(select[0], select[1])\n# selected[1] = fs.transform(select[2])\n \n# idx = fs.get_support(indices=True)\n \n# return selected, fs, idx\n\n",
"_____no_output_____"
],
[
"\n\n#Selecting the Best important features according to Logistic Regression\n# Give better performance than selectKBest \ndef select_features(select, feature):\n selected = {}\n# fs = RFE(estimator=LogisticRegression(), n_features_to_select=feature, step = 1) # step (the number of features eliminated each iteration) \n fs = RFE(estimator=XGBClassifier(), n_features_to_select=feature, step = 5) # step (the number of features eliminated each iteration) \n# fs = RFE(estimator=RandomForestClassifier(), n_features_to_select=feature, step = 1) # step (the number of features eliminated each iteration) \n \n \n selected[0] = fs.fit_transform(select[0], select[1])\n selected[1] = fs.transform(select[2])\n \n idx = fs.get_support(indices=True)\n \n return selected, fs, idx\n",
"_____no_output_____"
],
[
"# Feature selection\nselected, fs, idx = select_features(select, feature)\n",
"/home/kali/.local/lib/python3.9/site-packages/xgboost/sklearn.py:1224: UserWarning: The use of label encoder in XGBClassifier is deprecated and will be removed in a future release. To remove this warning, do the following: 1) Pass option use_label_encoder=False when constructing XGBClassifier object; and 2) Encode your labels (y) as integers starting with 0, i.e. 0, 1, 2, ..., [num_class - 1].\n warnings.warn(label_encoder_deprecation_msg, UserWarning)\n"
],
[
"# Get columns to keep and create new dataframe with those only\nfrom pprint import pprint\ncols = fs.get_support(indices=True)\nfeatures_df_new = original_X.iloc[:,cols]\npprint(features_df_new.columns)\nprint(features_df_new.shape)",
"Index(['Pt_LongActInsDay', 'Pt_RaceEth', 'HyperglyCritHbA1c',\n 'HyperglyCritRandGluc', 'ReqInsulinCrit', 'WeightDiag', 'OralAgnTrt',\n 'TannerNotDone', 'HGMNumDays', 'HGMGlucMean', 'Lipoatrophy',\n 'AcanNigrDiag', 'InsCarbRatDinnNotUsed', 'CGMUsed', 'DKAPast12mos',\n 'InsApidra', 'InsOther', 'HighLDL', 'Depression', 'diagDuration'],\n dtype='object')\n(7155, 20)\n"
],
[
"X_train = pd.DataFrame(selected[0], columns = features_df_new.columns)\nX_test = pd.DataFrame(selected[1], columns = features_df_new.columns)\n",
"_____no_output_____"
],
[
"\nif('DKAPast12mos' in X_train.columns):\n X_train = X_train.drop(['DKAPast12mos'], axis=1)\n X_test = X_test.drop(['DKAPast12mos'], axis=1)\nif('NumDKAOccur' in X_train.columns):\n X_train = X_train.drop(['NumDKAOccur'], axis=1)\n X_test = X_test.drop([ 'NumDKAOccur'], axis=1)\nif('Pt_NumHospDKA' in X_train.columns):\n X_train = X_train.drop(['Pt_NumHospDKA'], axis=1)\n X_test = X_test.drop([ 'Pt_NumHospDKA'], axis=1)\nif('Pt_HospDKASinceDiag' in X_train.columns):\n X_train = X_train.drop(['Pt_HospDKASinceDiag'], axis=1)\n X_test = X_test.drop([ 'Pt_HospDKASinceDiag'], axis=1)\n",
"_____no_output_____"
]
],
[
[
"### Common functions",
"_____no_output_____"
]
],
[
[
"# kf = KFold(n_splits= 3, shuffle=False)\ntrain = X_train.copy()\ntrain[y_label] = Y_train.values\n\ndef cross_val(model):\n# i = 1\n# for train_index, test_index in kf.split(train):\n# X_train1 = train.iloc[train_index].loc[:, X_train.columns]\n# X_ test1 = train.iloc[test_index][X_train.columns]\n# y_train1 = train.iloc[train_index].loc[:,y_label]\n# y_test1 = train.iloc[test_index][y_label]\n\n# #Train the model\n# model.fit(X_train1, y_train1) #Training the model\n# print(f\"Accuracy for the fold no. {i} on the test set: {accuracy_score(y_test1, model.predict(X_test1))}\")\n# i += 1\n# return model\n\n\n\n dfs = []\n kf = StratifiedKFold(n_splits=10, shuffle=True, random_state=123)\n i = 1\n for train_index, test_index in kf.split(train, Y_train):\n X_train1 = train.iloc[train_index].loc[:, X_train.columns]\n X_test1 = train.iloc[test_index].loc[:,X_train.columns]\n y_train1 = train.iloc[train_index].loc[:,y_label]\n y_test1 = train.iloc[test_index].loc[:,y_label]\n\n #Train the model\n model.fit(X_train1, y_train1) #Training the model\n print(f\"Accuracy for the fold no. {i} on the test set: {accuracy_score(y_test1, model.predict(X_test1))}, doublecheck: {model.score(X_test1,y_test1)}\")\n\n # how many occurances appear in the train set\n s_train = train.iloc[train_index].loc[:,y_label].value_counts()\n s_train.name = f\"train {i}\"\n s_test = train.iloc[test_index].loc[:,y_label].value_counts()\n s_test.name = f\"test {i}\"\n df = pd.concat([s_train, s_test], axis=1, sort=False)\n df[\"|\"] = \"|\"\n dfs.append(df)\n\n i += 1\n return model\n\n",
"_____no_output_____"
],
[
"def optimal_thresh(model, X, Y):\n y_scores = model.predict_proba(X)[:,1]\n\n fpr, tpr, thresholds = roc_curve(Y, y_scores)\n print(roc_auc_score(Y, y_scores))\n# optimal_idx = np.argmax(sqrt(tpr * (1-fpr)))\n optimal_idx = np.argmax(tpr - fpr)\n optimal_threshold = thresholds[optimal_idx]\n print(\"Threshold value is:\", optimal_threshold)\n plot_roc_curve(fpr, tpr)\n return optimal_threshold",
"_____no_output_____"
],
[
"def train_results(model, X_train, Y_train, pred_train):\n print(\"\\n \\n ===================== Train Dataset ======================\")\n\n print(accuracy_score(Y_train, pred_train)*100)\n\n confusion_matrix_train = pd.DataFrame(confusion_matrix(Y_train, pred_train))\n sns.heatmap(confusion_matrix_train, annot=True,fmt='g')\n print(classification_report(Y_train, pred_train))\n plt.show()\n \n train_acc = model.score(X_train, Y_train)\n print('Accuracy of on training set: {:.2f}'.format(train_acc))\n\ndef test_results(model, X_test, Y_test, pred):\n print(\"\\n\\n ===================== Test Dataset =======================\")\n\n print(accuracy_score(Y_test, pred)*100)\n\n confusion_matrix_model = confusion_matrix(Y_test, pred)\n sns.heatmap(confusion_matrix_model, annot=True,fmt='g')\n print(classification_report(Y_test, pred))\n plt.show()\n \n test_acc = model.score(X_test, Y_test)\n print('Accuracy of classifier on test set: {:.2f}'\n .format(test_acc))\n\ndef ROC_results(model, X_test, Y_test):\n print(\"\\n\\n ======================= Test-ROC =========================\")\n\n y_scores = model.predict_proba(X_test)\n score = roc_auc_score(Y_test, y_scores[:, 1])\n score = round(score,4)\n print(f'roc_auc = {score}')\n \ndef class_wise_test_accuracy(model, Y_test, pred):\n print(\"\\n\\n ======================= Class-wise test accuracy =====================\")\n confusion_matrix_model = confusion_matrix(Y_test, pred)\n acc = confusion_matrix_model.diagonal()/confusion_matrix_model.sum(axis=1)\n print('classwise accuracy [class 0, class 1]: ',(acc))\n print('average accuracy: ',( np.sum(acc)/2))",
"_____no_output_____"
]
],
[
[
"### Adaboost model",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import KFold, StratifiedKFold, train_test_split, cross_validate, cross_val_score\n\nadaboost = AdaBoostClassifier(random_state=0, learning_rate=0.05, n_estimators=1000, algorithm = \"SAMME.R\") #algorithm{‘SAMME’, ‘SAMME.R’}, default=’SAMME.R’\n\n# adaboost.fit(X_train, Y_train)\nadaboost = cross_val(adaboost)\n\n# pred=adaboost.predict(X_test)\n# pred_train=adaboost.predict(X_train)\n\n# find optimal threshold\noptimal_threshold = optimal_thresh(adaboost, X_test, Y_test)\npred = (adaboost.predict_proba(X_test)[:,1] >= optimal_threshold).astype(int)\n\noptimal_threshold_train= optimal_thresh(adaboost, X_train, Y_train)\npred_train = (adaboost.predict_proba(X_train)[:,1] >= optimal_threshold_train).astype(int)\n\n# test and train results\ntrain_results(adaboost, X_train, Y_train, pred_train)\ntest_results(adaboost, X_test, Y_test, pred)\n\n# ROC\nROC_results(adaboost, X_test, Y_test)\n\n# class wise accuracy\nclass_wise_test_accuracy(adaboost, Y_test, pred)\n",
"_____no_output_____"
],
[
"feat_importances = pd.Series(adaboost.feature_importances_, index=X_train.columns[0:feature])\nfeat_importances.nlargest(20).plot(kind='barh')",
"_____no_output_____"
],
[
"from imodels import BoostedRulesClassifier, FIGSClassifier, SkopeRulesClassifier\nfrom imodels import RuleFitRegressor, HSTreeRegressorCV, SLIMRegressor",
"_____no_output_____"
],
[
"def viz_classification_preds(probs, y_test):\n '''look at prediction breakdown\n '''\n plt.subplot(121)\n plt.hist(probs[:, 1][y_test == 0], label='Class 0')\n plt.hist(probs[:, 1][y_test == 1], label='Class 1', alpha=0.8)\n plt.ylabel('Count')\n plt.xlabel('Predicted probability of class 1')\n plt.legend()\n\n plt.subplot(122)\n preds = np.argmax(probs, axis=1)\n plt.title('ROC curve')\n fpr, tpr, thresholds = metrics.roc_curve(y_test, preds)\n plt.xlabel('False positive rate')\n plt.ylabel('True positive rate')\n plt.plot(fpr, tpr)\n plt.tight_layout()\n plt.show()\n\n",
"_____no_output_____"
],
[
"# fit boosted stumps\nbrc = BoostedRulesClassifier(n_estimators=10)\nbrc.fit(X_train, Y_train, feature_names=X_train.columns)  # feature names should come from the training frame\n\nprint(brc)\n\n# look at performance\nprobs = brc.predict_proba(X_test)\nviz_classification_preds(probs, Y_test)",
"BoostedRules:\nRule → predicted probability (final prediction is weighted sum of all predictions)\n If\u001b[96m OralAgnTrt <= 0.17893\u001b[00m → 0.49 (weight: 0.56)\n If\u001b[96m OralAgnTrt > 0.17893\u001b[00m → 0.68 (weight: 0.40)\n If\u001b[96m HyperglyCritRandGluc <= 0.37907\u001b[00m → 0.66 (weight: 0.33)\n If\u001b[96m HyperglyCritRandGluc > 0.37907\u001b[00m → 1.00 (weight: 0.20)\n If\u001b[96m diagDuration <= -0.52137\u001b[00m → 0.59 (weight: 0.16)\n If\u001b[96m diagDuration > -0.52137\u001b[00m → 0.32 (weight: 0.12)\n If\u001b[96m AcanNigrDiag <= -0.16894\u001b[00m → 0.43 (weight: 0.09)\n If\u001b[96m AcanNigrDiag > -0.16894\u001b[00m → 0.51 (weight: 0.12)\n If\u001b[96m AcanNigrDiag <= -0.16894\u001b[00m → 0.28 (weight: 0.07)\n If\u001b[96m AcanNigrDiag > -0.16894\u001b[00m → 0.89 (weight: 0.10)\n\n"
]
],
[
[
"# Model - XGB",
"_____no_output_____"
]
],
[
[
"# xgboost - train with missing values\n\n\nxgb_impute=XGBClassifier(\n use_label_encoder=False,\n eta = 0.1,#eta between(0.01-0.2)\n max_depth = 4, #values between(3-10)\n max_delta_step = 10,\n subsample = 0.5,#values between(0.5-1)\n colsample_bytree = 1,#values between(0.5-1)\n tree_method = \"auto\",\n process_type = \"default\",\n num_parallel_tree=7,\n objective='binary:logistic', # binary target: downstream code thresholds predict_proba[:,1]\n# min_child_weight = 3,\n booster='gbtree',\n eval_metric = \"logloss\"\n )\n \n# xgb_impute.fit(X_train,Y_train)\nxgb_impute = cross_val(xgb_impute)\n\n# xgb_pred=xgb_impute.predict(X_test)\n# xgb_pred_train=xgb_impute.predict(X_train)\n\n\n# find optimal threshold\noptimal_threshold = optimal_thresh(xgb_impute, X_test, Y_test)\noptimal_threshold_train= optimal_thresh(xgb_impute, X_train, Y_train)\n\nxgb_pred = (xgb_impute.predict_proba(X_test)[:,1] >= optimal_threshold).astype(int)\nxgb_pred_train = (xgb_impute.predict_proba(X_train)[:,1] >= optimal_threshold_train).astype(int)\n\n\ntrain_results(xgb_impute, X_train, Y_train, xgb_pred_train)\ntest_results(xgb_impute, X_test, Y_test, xgb_pred)\n\n# ROC\nROC_results(xgb_impute, X_test, Y_test)\n\n# class wise accuracy\nclass_wise_test_accuracy(xgb_impute, Y_test, xgb_pred)\n\n",
"_____no_output_____"
],
[
"# feature importance graph of XGB\nfeat_importances = pd.Series(xgb_impute.feature_importances_, index=X_train.columns[0:feature])\nfeat_importances.nlargest(20).plot(kind='barh')",
"_____no_output_____"
]
],
[
[
"## Model 2 - Random forest",
"_____no_output_____"
]
],
[
[
"# random forest classifier\n\nrf=RandomForestClassifier(max_depth=10,\n# n_estimators = feature,\n criterion = 'entropy', # {“gini”, “entropy”}, default=”gini”\n class_weight = 'balanced_subsample', # {“balanced”, “balanced_subsample”}, dict or list of dicts, default=None\n ccp_alpha=0.001,\n random_state=0)\n\n# rf.fit(X_train,Y_train)\nrf = cross_val(rf)\n\n# find optimal threshold\noptimal_threshold = optimal_thresh(rf, X_test, Y_test)\noptimal_threshold_train= optimal_thresh(rf, X_train, Y_train)\n\n# pred=rf.predict(X_test)\npred = (rf.predict_proba(X_test)[:,1] >= optimal_threshold).astype(int)\n\n# pred_train=rf.predict(X_train)\npred_train = (rf.predict_proba(X_train)[:,1] >= optimal_threshold_train).astype(int)\n\n\ntrain_results(rf, X_train, Y_train, pred_train)\ntest_results(rf, X_test, Y_test, pred)\n\n# ROC\nROC_results(rf, X_test, Y_test)\n\n# class wise accuracy\nclass_wise_test_accuracy(rf, Y_test, pred)",
"Accuracy for the fold no. 1 on the test set: 0.7616387337057728, doublecheck: 0.7616387337057728\nAccuracy for the fold no. 2 on the test set: 0.8156424581005587, doublecheck: 0.8156424581005587\nAccuracy for the fold no. 3 on the test set: 0.8286778398510242, doublecheck: 0.8286778398510242\nAccuracy for the fold no. 4 on the test set: 0.7728119180633147, doublecheck: 0.7728119180633147\nAccuracy for the fold no. 5 on the test set: 0.7877094972067039, doublecheck: 0.7877094972067039\nAccuracy for the fold no. 6 on the test set: 0.7951582867783985, doublecheck: 0.7951582867783985\nAccuracy for the fold no. 7 on the test set: 0.789179104477612, doublecheck: 0.789179104477612\nAccuracy for the fold no. 8 on the test set: 0.8022388059701493, doublecheck: 0.8022388059701493\nAccuracy for the fold no. 9 on the test set: 0.8041044776119403, doublecheck: 0.8041044776119403\nAccuracy for the fold no. 10 on the test set: 0.7947761194029851, doublecheck: 0.7947761194029851\n0.8633599055211101\nThreshold value is: 0.4082997836255711\n"
],
[
"feat_importances = pd.Series(rf.feature_importances_, index=X_train.columns[0:feature])\nfeat_importances.nlargest(20).plot(kind='barh')",
"_____no_output_____"
]
],
[
[
"## Model 3 LogisticRegression",
"_____no_output_____"
]
],
[
[
"#penalty{‘l1’, ‘l2’, ‘elasticnet’, ‘none’}, default=’l2’\nlogreg = LogisticRegression(\n penalty='l2',\n tol = 5e-4,\n C=1,\n class_weight='balanced', # balanced\n random_state=0,\n solver = 'saga' # saga, sag\n)\n\n# logreg.fit(X_train, Y_train)\nlogreg = cross_val(logreg)\n\n# pred=logreg.predict(X_test)\n# pred_train=logreg.predict(X_train)\n\n# find optimal threshold\noptimal_threshold = optimal_thresh(logreg, X_test, Y_test)\noptimal_threshold_train= optimal_thresh(logreg, X_train, Y_train)\n\npred = (logreg.predict_proba(X_test)[:,1] >= optimal_threshold).astype(int)\npred_train = (logreg.predict_proba(X_train)[:,1] >= optimal_threshold_train).astype(int)\n\ntrain_results(logreg, X_train, Y_train, pred_train)\ntest_results(logreg, X_test, Y_test, pred)\n\n# ROC\nROC_results(logreg, X_test, Y_test)\n\n# class wise accuracy\nclass_wise_test_accuracy(logreg, Y_test, pred)",
"/home/kali/.local/lib/python3.9/site-packages/sklearn/linear_model/_logistic.py:1317: UserWarning: l1_ratio parameter is only used when penalty is 'elasticnet'. Got (penalty=l2)\n warnings.warn(\"l1_ratio parameter is only used when penalty is \"\n/home/kali/.local/lib/python3.9/site-packages/sklearn/linear_model/_logistic.py:1317: UserWarning: l1_ratio parameter is only used when penalty is 'elasticnet'. Got (penalty=l2)\n warnings.warn(\"l1_ratio parameter is only used when penalty is \"\n/home/kali/.local/lib/python3.9/site-packages/sklearn/linear_model/_logistic.py:1317: UserWarning: l1_ratio parameter is only used when penalty is 'elasticnet'. Got (penalty=l2)\n warnings.warn(\"l1_ratio parameter is only used when penalty is \"\n/home/kali/.local/lib/python3.9/site-packages/sklearn/linear_model/_logistic.py:1317: UserWarning: l1_ratio parameter is only used when penalty is 'elasticnet'. Got (penalty=l2)\n warnings.warn(\"l1_ratio parameter is only used when penalty is \"\n/home/kali/.local/lib/python3.9/site-packages/sklearn/linear_model/_logistic.py:1317: UserWarning: l1_ratio parameter is only used when penalty is 'elasticnet'. Got (penalty=l2)\n warnings.warn(\"l1_ratio parameter is only used when penalty is \"\n/home/kali/.local/lib/python3.9/site-packages/sklearn/linear_model/_logistic.py:1317: UserWarning: l1_ratio parameter is only used when penalty is 'elasticnet'. Got (penalty=l2)\n warnings.warn(\"l1_ratio parameter is only used when penalty is \"\n/home/kali/.local/lib/python3.9/site-packages/sklearn/linear_model/_logistic.py:1317: UserWarning: l1_ratio parameter is only used when penalty is 'elasticnet'. Got (penalty=l2)\n warnings.warn(\"l1_ratio parameter is only used when penalty is \"\n/home/kali/.local/lib/python3.9/site-packages/sklearn/linear_model/_logistic.py:1317: UserWarning: l1_ratio parameter is only used when penalty is 'elasticnet'. Got (penalty=l2)\n warnings.warn(\"l1_ratio parameter is only used when penalty is \"\n"
],
[
"\nfeat_importances = pd.Series(logreg.coef_[0], index=X_train.columns[0:feature])\nfeat_importances.nlargest(20).plot(kind='barh')",
"_____no_output_____"
]
],
[
[
"## Model 4 - Decision tree",
"_____no_output_____"
]
],
[
[
"clf = DecisionTreeClassifier(\n random_state=0,\n criterion='gini',\n splitter = 'best',\n max_depth = 100,\n max_features = 20)\n# clf.fit(X_train, Y_train)\nclf = cross_val(clf)\n\n# pred=clf.predict(X_test)\n# pred_train=clf.predict(X_train)\n\n# find optimal threshold\noptimal_threshold = optimal_thresh(clf, X_test, Y_test)\noptimal_threshold_train= optimal_thresh(clf, X_train, Y_train)\n\npred = (clf.predict_proba(X_test)[:,1] >= optimal_threshold).astype(int)\npred_train = (clf.predict_proba(X_train)[:,1] >= optimal_threshold_train).astype(int)\n\ntrain_results(clf, X_train, Y_train, pred_train)\ntest_results(clf, X_test, Y_test, pred)\n\n# ROC\nROC_results(clf, X_test, Y_test)\n\n# class wise accuracy\nclass_wise_test_accuracy(clf, Y_test, pred)",
"_____no_output_____"
],
[
"\nfeat_importances = pd.Series(clf.feature_importances_, index=X_train.columns[0:feature])\nfeat_importances.nlargest(20).plot(kind='barh')",
"_____no_output_____"
]
],
[
[
"## Model 5 - K-Nearest Neighbors",
"_____no_output_____"
]
],
[
[
"knn = KNeighborsClassifier(\n n_neighbors =1,\n weights = \"uniform\", # uniform, distance\n algorithm = 'brute', # {‘auto’, ‘ball_tree’, ‘kd_tree’, ‘brute’}, default=’auto’\n)\n\n# knn.fit(X_train, Y_train)\nknn = cross_val(knn)\n\n# pred=knn.predict(X_test)\n# pred_train=knn.predict(X_train)\n\n# find optimal threshold\noptimal_threshold = optimal_thresh(knn, X_test, Y_test)\noptimal_threshold_train= optimal_thresh(knn, X_train, Y_train)\n\npred = (knn.predict_proba(X_test)[:,1] >= optimal_threshold).astype(int)\npred_train = (knn.predict_proba(X_train)[:,1] >= optimal_threshold_train).astype(int)\n\ntrain_results(knn, X_train, Y_train, pred_train)\ntest_results(knn, X_test, Y_test, pred)\n\n# ROC\nROC_results(knn, X_test, Y_test)\n\n# class wise accuracy\nclass_wise_test_accuracy(knn, Y_test, pred)",
"_____no_output_____"
]
],
[
[
"## Model 6 - Linear Discriminant Analysis",
"_____no_output_____"
]
],
[
[
"\nlda = LinearDiscriminantAnalysis(\n solver = 'eigen', # solver{‘svd’, ‘lsqr’, ‘eigen’}, default=’svd’\n shrinkage= 'auto', #shrinkage‘auto’ or float, default=None\n n_components = 1,\n tol = 1e-3 \n)\n# lda.fit(X_train, Y_train)\nlda = cross_val(lda)\n\n# pred=lda.predict(X_test)\n# pred_train=lda.predict(X_train)\n\n# find optimal threshold\noptimal_threshold = optimal_thresh(lda, X_test, Y_test)\noptimal_threshold_train= optimal_thresh(lda, X_train, Y_train)\n\npred = (lda.predict_proba(X_test)[:,1] >= optimal_threshold).astype(int)\npred_train = (lda.predict_proba(X_train)[:,1] >= optimal_threshold_train).astype(int)\n\ntrain_results(lda, X_train, Y_train, pred_train)\ntest_results(lda, X_test, Y_test, pred)\n\n# ROC\nROC_results(lda, X_test, Y_test)\n\n# class wise accuracy\nclass_wise_test_accuracy(lda, Y_test, pred)\n",
"_____no_output_____"
],
[
"\n",
"_____no_output_____"
]
],
[
[
"## Model 7- Gaussian Naive Bayes",
"_____no_output_____"
]
],
[
[
"gnb = GaussianNB()\n\nparam_grid_nb = {\n 'var_smoothing': np.logspace(0,-9, num=100)\n}\n\nnbModel_grid = GridSearchCV(estimator=gnb, param_grid=param_grid_nb, verbose=1, cv=10, n_jobs=-1)\n# nbModel_grid.fit(X_train, Y_train)\nnbModel_grid = cross_val(nbModel_grid)\n\n# best parameters\nprint(nbModel_grid.best_estimator_)\n\n# reuse the best estimator found by the grid search instead of a hard-coded var_smoothing\ngnb = nbModel_grid.best_estimator_\ngnb.fit(X_train, Y_train)\n \n# pred=gnb.predict(X_test)\n# pred_train=gnb.predict(X_train)\n\n# find optimal threshold\noptimal_threshold = optimal_thresh(gnb, X_test, Y_test)\noptimal_threshold_train= optimal_thresh(gnb, X_train, Y_train)\n\npred = (gnb.predict_proba(X_test)[:,1] >= optimal_threshold).astype(int)\npred_train = (gnb.predict_proba(X_train)[:,1] >= optimal_threshold_train).astype(int)\n\n\ntrain_results(gnb, X_train, Y_train, pred_train)\ntest_results(gnb, X_test, Y_test, pred)\n\n# ROC\nROC_results(gnb, X_test, Y_test)\n\n# class wise accuracy\nclass_wise_test_accuracy(gnb, Y_test, pred)\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
4ab06c8b6378ba2d656827bd77f82822c15d4818
| 35,084 |
ipynb
|
Jupyter Notebook
|
site/zh-cn/tutorials/generative/style_transfer.ipynb
|
ilyaspiridonov/docs-l10n
|
a061a44e40d25028d0a4458094e48ab717d3565c
|
[
"Apache-2.0"
] | 1 |
2021-09-23T09:56:29.000Z
|
2021-09-23T09:56:29.000Z
|
site/zh-cn/tutorials/generative/style_transfer.ipynb
|
ilyaspiridonov/docs-l10n
|
a061a44e40d25028d0a4458094e48ab717d3565c
|
[
"Apache-2.0"
] | null | null | null |
site/zh-cn/tutorials/generative/style_transfer.ipynb
|
ilyaspiridonov/docs-l10n
|
a061a44e40d25028d0a4458094e48ab717d3565c
|
[
"Apache-2.0"
] | 1 |
2020-06-23T07:43:49.000Z
|
2020-06-23T07:43:49.000Z
| 28.112179 | 404 | 0.481644 |
[
[
[
"##### Copyright 2018 The TensorFlow Authors.",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"# Neural style transfer",
"_____no_output_____"
],
[
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n  <td>\n    <a target=\"_blank\" href=\"https://tensorflow.google.cn/tutorials/generative/style_transfer\"><img src=\"https://tensorflow.google.cn/images/tf_logo_32px.png\" />View on tensorflow.google.cn</a>\n  </td>\n  <td>\n    <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/generative/style_transfer.ipynb\"><img src=\"https://tensorflow.google.cn/images/colab_logo_32px.png\" />Run in Google Colab</a>\n  </td>\n  <td>\n    <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/generative/style_transfer.ipynb\"><img src=\"https://tensorflow.google.cn/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n  </td>\n  <td>\n    <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/generative/style_transfer.ipynb\"><img src=\"https://tensorflow.google.cn/images/download_logo_32px.png\" />Download notebook</a>\n  </td>\n</table>",
"_____no_output_____"
],
[
"Note: The TensorFlow community has translated these documents. Because community translations are best-effort, there is no guarantee that they are accurate and reflect the latest\n[official English documentation](https://tensorflow.google.cn/?hl=en). If you have suggestions to improve this translation, please submit a pull request to the\n[tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, join the\n[[email protected] Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).",
"_____no_output_____"
],
[
"This tutorial uses deep learning to compose one image in the style of another image (ever wish you could paint like Picasso or Van Gogh?). This is known as *neural style transfer*, and the technique is outlined in <a href=\"https://arxiv.org/abs/1508.06576\" class=\"external\">A Neural Algorithm of Artistic Style</a> (Gatys et al.). \n\nNote: This tutorial demonstrates the original style-transfer algorithm, which optimizes the image content to a particular style. Some modern approaches train a model to generate the stylized image directly (similar to [cyclegan](cyclegan.ipynb)); that approach is much faster (up to 1000x). A pretrained [Arbitrary Image Stylization module](https://colab.sandbox.google.com/github/tensorflow/hub/blob/master/examples/colab/tf2_arbitrary_image_stylization.ipynb) is available from [TensorFlow Hub](https://tensorflow.google.cn/hub) and [TensorFlow Lite](https://tensorflow.google.cn/lite/models/style_transfer/overview).\n\nNeural style transfer is an optimization technique used to take two images, a *content* image and a *style reference* image (such as an artwork by a famous painter), and blend them together so the output image looks like the content image, but \"painted\" in the style of the style reference image.\n\nThis is implemented by optimizing the output image to match the content statistics of the content image and the style statistics of the style reference image. These statistics are extracted from the images using a convolutional network.\n",
"_____no_output_____"
],
[
"For example, let's take this photo of a dog and Wassily Kandinsky's Composition 7:\n\n<img src=\"https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg\" width=\"500px\"/>\n\n[Yellow Labrador Looking](https://commons.wikimedia.org/wiki/File:YellowLabradorLooking_new.jpg), from Wikimedia Commons\n\n<img src=\"https://github.com/tensorflow/docs/blob/master/site/en/tutorials/generative/images/kadinsky.jpg?raw=1\" style=\"width: 500px;\"/>\n\nNow, what would it look like if Kandinsky decided to paint this dog exclusively in this style? Something like this?\n\n<img src=\"https://tensorflow.google.cn/tutorials/generative/images/stylized-image.png\" style=\"width: 500px;\"/>\n",
"_____no_output_____"
],
[
"## Setup\n",
"_____no_output_____"
],
[
"### Import and configure modules",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf",
"_____no_output_____"
],
[
"import IPython.display as display\n\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nmpl.rcParams['figure.figsize'] = (12,12)\nmpl.rcParams['axes.grid'] = False\n\nimport numpy as np\nimport PIL.Image\nimport time\nimport functools",
"_____no_output_____"
],
[
"def tensor_to_image(tensor):\n tensor = tensor*255\n tensor = np.array(tensor, dtype=np.uint8)\n if np.ndim(tensor)>3:\n assert tensor.shape[0] == 1\n tensor = tensor[0]\n return PIL.Image.fromarray(tensor)",
"_____no_output_____"
]
],
[
[
"Download images and choose a style image and a content image:",
"_____no_output_____"
]
],
[
[
"content_path = tf.keras.utils.get_file('YellowLabradorLooking_new.jpg', 'https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg')\n\n# https://commons.wikimedia.org/wiki/File:Vassily_Kandinsky,_1913_-_Composition_7.jpg\nstyle_path = tf.keras.utils.get_file('kandinsky5.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/Vassily_Kandinsky%2C_1913_-_Composition_7.jpg')",
"_____no_output_____"
]
],
[
[
"## Visualize the input",
"_____no_output_____"
],
[
"Define a function to load an image and limit its maximum dimension to 512 pixels.",
"_____no_output_____"
]
],
[
[
"def load_img(path_to_img):\n max_dim = 512\n img = tf.io.read_file(path_to_img)\n img = tf.image.decode_image(img, channels=3)\n img = tf.image.convert_image_dtype(img, tf.float32)\n\n shape = tf.cast(tf.shape(img)[:-1], tf.float32)\n long_dim = max(shape)\n scale = max_dim / long_dim\n\n new_shape = tf.cast(shape * scale, tf.int32)\n\n img = tf.image.resize(img, new_shape)\n img = img[tf.newaxis, :]\n return img",
"_____no_output_____"
]
],
[
[
"Create a simple function to display an image:",
"_____no_output_____"
]
],
[
[
"def imshow(image, title=None):\n if len(image.shape) > 3:\n image = tf.squeeze(image, axis=0)\n\n plt.imshow(image)\n if title:\n plt.title(title)",
"_____no_output_____"
],
[
"content_image = load_img(content_path)\nstyle_image = load_img(style_path)\n\nplt.subplot(1, 2, 1)\nimshow(content_image, 'Content Image')\n\nplt.subplot(1, 2, 2)\nimshow(style_image, 'Style Image')",
"_____no_output_____"
]
],
[
[
"## Fast style transfer using TF-Hub\n\nThis tutorial demonstrates the original style-transfer algorithm, which optimizes the image content to a particular style. Before getting into the details, let's see how the [TensorFlow Hub](https://tensorflow.google.cn/hub) module does it:",
"_____no_output_____"
]
],
[
[
"import tensorflow_hub as hub\nhub_module = hub.load('https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/1')\nstylized_image = hub_module(tf.constant(content_image), tf.constant(style_image))[0]\ntensor_to_image(stylized_image)",
"_____no_output_____"
]
],
[
[
"## Define content and style representations\n\nUse the intermediate layers of the model to get the *content* and *style* representations of the image. Starting from the network's input layer, the first few layer activations represent low-level features like edges and textures. As you step through the network, the final few layers represent higher-level features, object parts like *wheels* or *eyes*. Here we use the VGG19 architecture, a pretrained image classification network. These intermediate layers are necessary to define the representation of content and style from the images. For an input image, we try to match the corresponding style and content target representations at these intermediate layers.\n",
"_____no_output_____"
],
[
"Load a [VGG19](https://keras.io/applications/#vgg19) and test-run it on our image to make sure it works correctly:",
"_____no_output_____"
]
],
[
[
"x = tf.keras.applications.vgg19.preprocess_input(content_image*255)\nx = tf.image.resize(x, (224, 224))\nvgg = tf.keras.applications.VGG19(include_top=True, weights='imagenet')\nprediction_probabilities = vgg(x)\nprediction_probabilities.shape",
"_____no_output_____"
],
[
"predicted_top_5 = tf.keras.applications.vgg19.decode_predictions(prediction_probabilities.numpy())[0]\n[(class_name, prob) for (number, class_name, prob) in predicted_top_5]",
"_____no_output_____"
]
],
[
[
"Now load a `VGG19` without the classification head, and list the layer names:",
"_____no_output_____"
]
],
[
[
"vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')\n\nprint()\nfor layer in vgg.layers:\n print(layer.name)",
"_____no_output_____"
]
],
[
[
"Choose intermediate layers from the network to represent the style and content of the image:\n",
"_____no_output_____"
]
],
[
[
"# Content layer where we will pull our feature maps\ncontent_layers = ['block5_conv2'] \n\n# Style layers we are interested in\nstyle_layers = ['block1_conv1',\n 'block2_conv1',\n 'block3_conv1', \n 'block4_conv1', \n 'block5_conv1']\n\nnum_content_layers = len(content_layers)\nnum_style_layers = len(style_layers)",
"_____no_output_____"
]
],
[
[
"#### Intermediate layers for style and content\n\nSo why do these intermediate outputs within our pretrained image classification network allow us to define representations of style and content?\n\nAt a high level, in order for a network to perform image classification (which this network has been trained to do), it must understand the image. This requires taking the raw image as input pixels and building an internal representation that converts the raw image pixels into a complex understanding of the features present within the image.\n\nThis is also a reason why convolutional neural networks are able to generalize well: they are able to capture the invariances and defining features within classes (e.g. cats vs. dogs) that are agnostic to background noise and other nuisances. Thus, somewhere between where the raw image is fed into the model and the classification label comes out, the model serves as a complex feature extractor. By accessing intermediate layers of the model, we can describe the content and style of input images.",
"_____no_output_____"
],
[
"## Build the model \n\nThe networks in `tf.keras.applications` are designed so you can easily extract the intermediate layer values using the Keras functional API.\n\nTo define a model using the functional API, specify the inputs and outputs:\n\n`model = Model(inputs, outputs)`\n\nThe following function builds a VGG19 model that returns a list of intermediate layer outputs:",
"_____no_output_____"
]
],
[
[
"def vgg_layers(layer_names):\n \"\"\" Creates a vgg model that returns a list of intermediate output values.\"\"\"\n # Load our model. Load a VGG pretrained on imagenet data\n vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')\n vgg.trainable = False\n \n outputs = [vgg.get_layer(name).output for name in layer_names]\n\n model = tf.keras.Model([vgg.input], outputs)\n return model",
"_____no_output_____"
]
],
[
[
"And to create the model:",
"_____no_output_____"
]
],
[
[
"style_extractor = vgg_layers(style_layers)\nstyle_outputs = style_extractor(style_image*255)\n\n# Look at the statistics of each layer's output\nfor name, output in zip(style_layers, style_outputs):\n print(name)\n print(\" shape: \", output.numpy().shape)\n print(\" min: \", output.numpy().min())\n print(\" max: \", output.numpy().max())\n print(\" mean: \", output.numpy().mean())\n print()",
"_____no_output_____"
]
],
[
[
"## Calculate style\n\nThe content of an image is represented by the values of the intermediate feature maps.\n\nIt turns out the style of an image can be described by the means and correlations across the different feature maps. Calculate a Gram matrix that includes this information by taking the outer product of the feature vector with itself at each location, and averaging that outer product over all locations. The Gram matrix for a particular layer is:\n\n$$G^l_{cd} = \\frac{\\sum_{ij} F^l_{ijc}(x)F^l_{ijd}(x)}{IJ}$$\n\nThis can be implemented concisely using the `tf.linalg.einsum` function:",
"_____no_output_____"
]
],
[
[
"def gram_matrix(input_tensor):\n result = tf.linalg.einsum('bijc,bijd->bcd', input_tensor, input_tensor)\n input_shape = tf.shape(input_tensor)\n num_locations = tf.cast(input_shape[1]*input_shape[2], tf.float32)\n return result/(num_locations)",
"_____no_output_____"
]
],
[
[
"## Extract style and content\n",
"_____no_output_____"
],
[
"Build a model that returns the style and content tensors.",
"_____no_output_____"
]
],
[
[
"class StyleContentModel(tf.keras.models.Model):\n def __init__(self, style_layers, content_layers):\n super(StyleContentModel, self).__init__()\n self.vgg = vgg_layers(style_layers + content_layers)\n self.style_layers = style_layers\n self.content_layers = content_layers\n self.num_style_layers = len(style_layers)\n self.vgg.trainable = False\n\n def call(self, inputs):\n \"Expects float input in [0,1]\"\n inputs = inputs*255.0\n preprocessed_input = tf.keras.applications.vgg19.preprocess_input(inputs)\n outputs = self.vgg(preprocessed_input)\n style_outputs, content_outputs = (outputs[:self.num_style_layers], \n outputs[self.num_style_layers:])\n\n style_outputs = [gram_matrix(style_output)\n for style_output in style_outputs]\n\n content_dict = {content_name:value \n for content_name, value \n in zip(self.content_layers, content_outputs)}\n\n style_dict = {style_name:value\n for style_name, value\n in zip(self.style_layers, style_outputs)}\n \n return {'content':content_dict, 'style':style_dict}",
"_____no_output_____"
]
],
[
[
"When called on an image, this model returns the gram matrix (style) of the style_layers and the content of the content_layers:",
"_____no_output_____"
]
],
[
[
"extractor = StyleContentModel(style_layers, content_layers)\n\nresults = extractor(tf.constant(content_image))\n\nstyle_results = results['style']\n\nprint('Styles:')\nfor name, output in sorted(results['style'].items()):\n print(\" \", name)\n print(\" shape: \", output.numpy().shape)\n print(\" min: \", output.numpy().min())\n print(\" max: \", output.numpy().max())\n print(\" mean: \", output.numpy().mean())\n print()\n\nprint(\"Contents:\")\nfor name, output in sorted(results['content'].items()):\n print(\" \", name)\n print(\" shape: \", output.numpy().shape)\n print(\" min: \", output.numpy().min())\n print(\" max: \", output.numpy().max())\n print(\" mean: \", output.numpy().mean())\n",
"_____no_output_____"
]
],
[
[
"## Run gradient descent\n\nWith this style and content extractor, we can now implement the style transfer algorithm. We do this by calculating the mean squared error of our image's output relative to each target, then taking a weighted sum of these losses.\n\nSet the style and content target values:",
"_____no_output_____"
]
],
[
[
"style_targets = extractor(style_image)['style']\ncontent_targets = extractor(content_image)['content']",
"_____no_output_____"
]
],
[
[
"Define a `tf.Variable` to contain the image to optimize. To make this quick, initialize it with the content image (the `tf.Variable` must be the same shape as the content image):",
"_____no_output_____"
]
],
[
[
"image = tf.Variable(content_image)",
"_____no_output_____"
]
],
[
[
"Since this is a float image, define a function to keep the pixel values between 0 and 1:",
"_____no_output_____"
]
],
[
[
"def clip_0_1(image):\n return tf.clip_by_value(image, clip_value_min=0.0, clip_value_max=1.0)",
"_____no_output_____"
]
],
[
[
"Create an optimizer. The paper recommends LBFGS, but `Adam` works okay, too:",
"_____no_output_____"
]
],
[
[
"opt = tf.optimizers.Adam(learning_rate=0.02, beta_1=0.99, epsilon=1e-1)",
"_____no_output_____"
]
],
[
[
"To optimize this, use a weighted combination of the two losses to get the total loss:",
"_____no_output_____"
]
],
[
[
"style_weight=1e-2\ncontent_weight=1e4",
"_____no_output_____"
],
[
"def style_content_loss(outputs):\n style_outputs = outputs['style']\n content_outputs = outputs['content']\n style_loss = tf.add_n([tf.reduce_mean((style_outputs[name]-style_targets[name])**2) \n for name in style_outputs.keys()])\n style_loss *= style_weight / num_style_layers\n\n content_loss = tf.add_n([tf.reduce_mean((content_outputs[name]-content_targets[name])**2) \n for name in content_outputs.keys()])\n content_loss *= content_weight / num_content_layers\n loss = style_loss + content_loss\n return loss",
"_____no_output_____"
]
],
[
[
"Use `tf.GradientTape` to update the image.",
"_____no_output_____"
]
],
[
[
"@tf.function()\ndef train_step(image):\n with tf.GradientTape() as tape:\n outputs = extractor(image)\n loss = style_content_loss(outputs)\n\n grad = tape.gradient(loss, image)\n opt.apply_gradients([(grad, image)])\n image.assign(clip_0_1(image))",
"_____no_output_____"
]
],
[
[
"Now run a few steps to test:",
"_____no_output_____"
]
],
[
[
"train_step(image)\ntrain_step(image)\ntrain_step(image)\ntensor_to_image(image)",
"_____no_output_____"
]
],
[
[
"Since it's working, let's perform a longer optimization:",
"_____no_output_____"
]
],
[
[
"import time\nstart = time.time()\n\nepochs = 10\nsteps_per_epoch = 100\n\nstep = 0\nfor n in range(epochs):\n for m in range(steps_per_epoch):\n step += 1\n train_step(image)\n print(\".\", end='')\n display.clear_output(wait=True)\n display.display(tensor_to_image(image))\n print(\"Train step: {}\".format(step))\n \nend = time.time()\nprint(\"Total time: {:.1f}\".format(end-start))",
"_____no_output_____"
]
],
[
[
"## Total variation loss\n\nOne downside to this basic implementation is that it produces a lot of high-frequency artifacts. Decrease these by adding an explicit regularization term on the high-frequency components of the image. In style transfer, this is often called the *total variation loss*:",
"_____no_output_____"
]
],
[
[
"def high_pass_x_y(image):\n x_var = image[:,:,1:,:] - image[:,:,:-1,:]\n y_var = image[:,1:,:,:] - image[:,:-1,:,:]\n\n return x_var, y_var",
"_____no_output_____"
],
[
"x_deltas, y_deltas = high_pass_x_y(content_image)\n\nplt.figure(figsize=(14,10))\nplt.subplot(2,2,1)\nimshow(clip_0_1(2*y_deltas+0.5), \"Horizontal Deltas: Original\")\n\nplt.subplot(2,2,2)\nimshow(clip_0_1(2*x_deltas+0.5), \"Vertical Deltas: Original\")\n\nx_deltas, y_deltas = high_pass_x_y(image)\n\nplt.subplot(2,2,3)\nimshow(clip_0_1(2*y_deltas+0.5), \"Horizontal Deltas: Styled\")\n\nplt.subplot(2,2,4)\nimshow(clip_0_1(2*x_deltas+0.5), \"Vertical Deltas: Styled\")",
"_____no_output_____"
]
],
[
[
"This shows how the high-frequency components have increased.\n\nAlso, this high-frequency component is basically an edge detector. You can get similar output from the Sobel edge detector, for example:",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(14,10))\n\nsobel = tf.image.sobel_edges(content_image)\nplt.subplot(1,2,1)\nimshow(clip_0_1(sobel[...,0]/4+0.5), \"Horizontal Sobel-edges\")\nplt.subplot(1,2,2)\nimshow(clip_0_1(sobel[...,1]/4+0.5), \"Vertical Sobel-edges\")",
"_____no_output_____"
]
],
[
[
"The regularization loss associated with this is the sum of the absolute values of these differences:",
"_____no_output_____"
]
],
[
[
"def total_variation_loss(image):\n x_deltas, y_deltas = high_pass_x_y(image)\n return tf.reduce_sum(tf.abs(x_deltas)) + tf.reduce_sum(tf.abs(y_deltas))",
"_____no_output_____"
],
[
"total_variation_loss(image).numpy()",
"_____no_output_____"
]
],
[
[
"That demonstrated what the total variation loss does, but there's no need to implement it yourself: TensorFlow includes a standard implementation:",
"_____no_output_____"
]
],
[
[
"tf.image.total_variation(image).numpy()",
"_____no_output_____"
]
],
[
[
"## Re-run the optimization\n\nChoose a weight for the `total_variation_loss`:",
"_____no_output_____"
]
],
[
[
"total_variation_weight=30",
"_____no_output_____"
]
],
[
[
"Now include it in the `train_step` function:",
"_____no_output_____"
]
],
[
[
"@tf.function()\ndef train_step(image):\n with tf.GradientTape() as tape:\n outputs = extractor(image)\n loss = style_content_loss(outputs)\n loss += total_variation_weight*tf.image.total_variation(image)\n\n grad = tape.gradient(loss, image)\n opt.apply_gradients([(grad, image)])\n image.assign(clip_0_1(image))",
"_____no_output_____"
]
],
[
[
"Reinitialize the optimization variable:",
"_____no_output_____"
]
],
[
[
"image = tf.Variable(content_image)",
"_____no_output_____"
]
],
[
[
"And run the optimization:",
"_____no_output_____"
]
],
[
[
"import time\nstart = time.time()\n\nepochs = 10\nsteps_per_epoch = 100\n\nstep = 0\nfor n in range(epochs):\n for m in range(steps_per_epoch):\n step += 1\n train_step(image)\n print(\".\", end='')\n display.clear_output(wait=True)\n display.display(tensor_to_image(image))\n print(\"Train step: {}\".format(step))\n\nend = time.time()\nprint(\"Total time: {:.1f}\".format(end-start))",
"_____no_output_____"
]
],
[
[
"Finally, save the result:",
"_____no_output_____"
]
],
[
[
"file_name = 'stylized-image.png'\ntensor_to_image(image).save(file_name)\n\ntry:\n from google.colab import files\nexcept ImportError:\n pass\nelse:\n files.download(file_name)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4ab071583f103f0b7a36aa3d41d1c92000d3f94d
| 14,030 |
ipynb
|
Jupyter Notebook
|
examples/FactorOperations.ipynb
|
seuzmj/pyBN
|
ce7b6823f4e6c4f6f9b77e89f05de87ed486b349
|
[
"MIT"
] | 126 |
2016-01-17T22:59:08.000Z
|
2021-12-19T15:35:22.000Z
|
examples/FactorOperations.ipynb
|
arita37/pyBN
|
ce7b6823f4e6c4f6f9b77e89f05de87ed486b349
|
[
"MIT"
] | 24 |
2016-01-21T20:11:03.000Z
|
2018-09-21T01:23:58.000Z
|
examples/FactorOperations.ipynb
|
arita37/pyBN
|
ce7b6823f4e6c4f6f9b77e89f05de87ed486b349
|
[
"MIT"
] | 55 |
2016-05-27T00:46:54.000Z
|
2022-03-24T11:43:57.000Z
| 27.563851 | 368 | 0.526657 |
[
[
[
"# Factor Operations with pyBN",
"_____no_output_____"
],
[
"It is probably rare that a user wants to directly manipulate factors unless they are developing a new algorithm, but it's still important to see how factor operations are done in pyBN. Moreover, the ease-of-use and transparency of pyBN's factor operations mean it can be a great teaching/learning tool!\n\nIn this tutorial, I will go over the main operations you can do with factors. First, let's start with actually creating a factor. So, we will read in a Bayesian Network from one of the included networks:",
"_____no_output_____"
]
],
[
[
"from pyBN import *\nbn = read_bn('data/cmu.bn')",
"_____no_output_____"
],
[
"print(bn.V)\nprint(bn.E)",
"['Burglary', 'Earthquake', 'Alarm', 'JohnCalls', 'MaryCalls']\n[['Burglary', 'Alarm'], ['Earthquake', 'Alarm'], ['Alarm', 'JohnCalls'], ['Alarm', 'MaryCalls']]\n"
]
],
[
[
"As you can see, we have a Bayesian network with 5 nodes and some edges between them. Let's create a factor now. This is easy in pyBN - just pass in the BayesNet object and the name of the variable.",
"_____no_output_____"
]
],
[
[
"alarm_factor = Factor(bn,'Alarm')",
"_____no_output_____"
]
],
[
[
"Now that we have a factor, we can explore its properties. Every factor in pyBN has the following attributes:\n\n *self.bn* : a BayesNet object\n\n *self.var* : a string\n The random variable to which this Factor belongs\n \n *self.scope* : a list\n The RV, and its parents (the RVs involved in the\n conditional probability table)\n \n *self.card* : a dictionary, where\n key = an RV in self.scope, and\n val = integer cardinality of the key (i.e. how\n many possible values it has)\n \n *self.stride* : a dictionary, where\n key = an RV in self.scope, and\n val = integer stride (i.e. how many rows in the \n CPT until the NEXT value of RV is reached)\n \n *self.cpt* : a nested numpy array\n The probability values for self.var conditioned\n on its parents",
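The stride bookkeeping described above is what maps a multi-variable CPT onto a flat array. An illustrative sketch (not pyBN's actual code) that derives strides from cardinalities in scope order and uses them to index the flat CPT:

```python
# Illustrative stride bookkeeping: the first variable in scope has stride 1,
# and each later variable strides by the product of the cardinalities of the
# variables before it.
def make_strides(scope, card):
    stride, step = {}, 1
    for rv in scope:
        stride[rv] = step
        step *= card[rv]
    return stride

def flat_index(assignment, stride):
    # Map an {rv: value-index} assignment to a row of the flattened CPT.
    return sum(stride[rv] * val for rv, val in assignment.items())

scope = ['Alarm', 'Burglary', 'Earthquake']
card = {'Alarm': 2, 'Burglary': 2, 'Earthquake': 2}
stride = make_strides(scope, card)
print(stride)  # {'Alarm': 1, 'Burglary': 2, 'Earthquake': 4}
print(flat_index({'Alarm': 1, 'Burglary': 0, 'Earthquake': 1}, stride))  # 5
```

With the Alarm factor's scope and cardinalities, this reproduces the strides the notebook prints: Alarm 1, Burglary 2, Earthquake 4.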
"_____no_output_____"
]
],
[
[
"print(alarm_factor.bn)\nprint(alarm_factor.var)\nprint(alarm_factor.scope)\nprint(alarm_factor.card)\nprint(alarm_factor.stride)\nprint(alarm_factor.cpt)",
"<pyBN.classes.bayesnet.BayesNet object at 0x10c73ced0>\nAlarm\n['Alarm', 'Burglary', 'Earthquake']\n{'Burglary': 2, 'Alarm': 2, 'Earthquake': 2}\n{'Burglary': 2, 'Alarm': 1, 'Earthquake': 4}\n[ 0.999 0.001 0.71 0.29 0.06 0.94 0.05 0.95 ]\n"
]
],
[
[
"Along with those properties, there are a great number of methods (functions) at hand:\n\n *multiply_factor*\n Multiply two factors together. The factor\n multiplication algorithm used here is adapted\n from Koller and Friedman (PGMs) textbook.\n\n *sumover_var* :\n Sum over one *rv* by keeping it constant. Thus, you \n end up with a 1-D factor whose scope is ONLY *rv*\n and whose length = cardinality of rv. \n\n *sumout_var_list* :\n Remove a collection of rv's from the factor\n by summing out (i.e. calling sumout_var) over\n each rv.\n\n *sumout_var* :\n Remove passed-in *rv* from the factor by summing\n over everything else.\n\n *maxout_var* :\n Remove *rv* from the factor by taking the maximum value \n of all rv instantiations over everyting else.\n\n *reduce_factor_by_list* :\n Reduce the factor by numerous sets of\n [rv,val]\n\n *reduce_factor* :\n Condition the factor by eliminating any sets of\n values that don't align with a given [rv, val]\n\n *to_log* :\n Convert probabilities to log space from\n normal space.\n\n *from_log* :\n Convert probabilities from log space to\n normal space.\n\n *normalize* :\n Make relevant collections of probabilities sum to one.",
"_____no_output_____"
],
[
"Here is a look at Factor Multiplication:",
"_____no_output_____"
]
],
[
[
"import numpy as np\nf1 = Factor(bn,'Alarm')\nf2 = Factor(bn,'Burglary')\nf1.multiply_factor(f2)\n\nf3 = Factor(bn,'Burglary')\nf4 = Factor(bn,'Alarm')\nf3.multiply_factor(f4)\n\nprint(np.round(f1.cpt,3))\nprint('\\n', np.round(f3.cpt,3))",
"[ 0.998 0.001 0.001 0. 0.06 0.939 0. 0.001]\n\n[ 0.998 0.001 0.001 0. 0.06 0.939 0. 0.001]\n"
]
],
[
[
"Here is a look at \"sumover_var\":",
"_____no_output_____"
]
],
[
[
"f = Factor(bn,'Alarm')\nprint(f.cpt)\nprint(f.scope)\nprint(f.stride)\nf.sumover_var('Burglary')\nprint('\\n', f.cpt)\nprint(f.scope)\nprint(f.stride)",
"[ 0.999 0.001 0.71 0.29 0.06 0.94 0.05 0.95 ]\n['Alarm', 'Burglary', 'Earthquake']\n{'Burglary': 2, 'Alarm': 1, 'Earthquake': 4}\n\n[ 0.5 0.5]\n['Burglary']\n{'Burglary': 1}\n"
]
],
[
[
"Here is a look at \"sumout_var\", which is essentially the opposite of \"sumover_var\":",
"_____no_output_____"
]
],
[
[
"f = Factor(bn,'Alarm')\nf.sumout_var('Earthquake')\nprint(f.stride)\nprint(f.scope)\nprint(f.card)\nprint(f.cpt)",
"{'Burglary': 2, 'Alarm': 1}\n['Alarm', 'Burglary']\n{'Burglary': 2, 'Alarm': 2}\n[ 0.5295 0.4705 0.38 0.62 ]\n"
]
],
[
[
"Additionally, you can sum over a LIST of variables with \"sumover_var_list\". Notice how summing over every variable in the scope except for ONE variable is equivalent to summing over that ONE variable:",
"_____no_output_____"
]
],
[
[
"f = Factor(bn,'Alarm')\nprint(f.cpt)\nf.sumout_var_list(['Burglary','Earthquake'])\nprint(f.scope)\nprint(f.stride)\nprint(f.cpt)\n\nf1 = Factor(bn,'Alarm')\nprint('\\n', f1.cpt)\nf1.sumover_var('Alarm')\nprint(f1.scope)\nprint(f1.stride)\nprint(f1.cpt)",
"[ 0.999 0.001 0.71 0.29 0.06 0.94 0.05 0.95 ]\n['Alarm']\n{'Alarm': 1}\n[ 0.45475 0.54525]\n\n[ 0.999 0.001 0.71 0.29 0.06 0.94 0.05 0.95 ]\n['Alarm']\n{'Alarm': 1}\n[ 0.45475 0.54525]\n"
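The identity demonstrated above can also be checked directly on the flat CPT using the stride layout — a sketch independent of pyBN, with CPT values taken from the Alarm factor printed earlier:

```python
# Keeping one variable and summing out the rest is just grouping the flat
# CPT entries by that variable's value, recovered from the stride.
cpt = [0.999, 0.001, 0.71, 0.29, 0.06, 0.94, 0.05, 0.95]
stride = {'Alarm': 1, 'Burglary': 2, 'Earthquake': 4}
card = {'Alarm': 2, 'Burglary': 2, 'Earthquake': 2}

def keep_only(rv, cpt, stride, card):
    totals = [0.0] * card[rv]
    for i, p in enumerate(cpt):
        # Recover rv's value from the flat index via its stride.
        totals[(i // stride[rv]) % card[rv]] += p
    # Normalise, as pyBN does after summing out.
    z = sum(totals)
    return [t / z for t in totals]

marginal = keep_only('Alarm', cpt, stride, card)
print(marginal)  # approximately [0.45475, 0.54525], as in the notebook
```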
]
],
[
[
"Even more, you can use \"maxout_var\" to take the max values over a variable in the factor. This is a fundamental operation in Max-Sum Variable Elimination for MAP inference. Notice how the variable being maxed out is removed from the scope because it is conditioned upon and thus taken as truth in a sense.",
"_____no_output_____"
]
],
[
[
"f = Factor(bn,'Alarm')\nprint(f.scope)\nprint(f.cpt)\nf.maxout_var('Burglary')\nprint('\\n', f.scope)\nprint(f.cpt)",
"['Alarm', 'Burglary', 'Earthquake']\n[ 0.999 0.001 0.71 0.29 0.06 0.94 0.05 0.95 ]\n\n['Alarm', 'Earthquake']\n[ 0.77501 0.22499 0.05942 0.94058]\n"
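Plain max-marginalisation can be sketched with the same flat-CPT machinery. Note this simplified version keeps the raw maxima; pyBN's `maxout_var` additionally normalises afterwards, which is why its printed numbers differ:

```python
# Simplified max-marginalisation over one variable of a flat CPT.
cpt = [0.999, 0.001, 0.71, 0.29, 0.06, 0.94, 0.05, 0.95]
stride = {'Alarm': 1, 'Burglary': 2, 'Earthquake': 4}
card = {'Alarm': 2, 'Burglary': 2, 'Earthquake': 2}

def maxout(rv, cpt, stride, card):
    groups = {}
    for i, p in enumerate(cpt):
        val = (i // stride[rv]) % card[rv]
        # Collapse rv out of the flat index by removing its contribution.
        key = i - val * stride[rv]
        groups[key] = max(groups.get(key, 0.0), p)
    return [groups[k] for k in sorted(groups)]

result = maxout('Burglary', cpt, stride, card)
print(result)  # [0.999, 0.29, 0.06, 0.95]
```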
]
],
[
[
"Moreover, you can also use \"reduce_factor\" to reduce a factor based on evidence. This is different from \"sumover_var\" because \"reduce_factor\" is not summing over anything, it is simply removing any \n parent-child instantiations which are not consistent with\n the evidence. Moreover, there should not be any need for\n normalization because the CPT should already be normalized\n over the rv-val evidence (but we do it anyways because of\n rounding). This function is essential when user's pass in evidence to any inference query.",
"_____no_output_____"
]
],
[
[
"f = Factor(bn, 'Alarm')\nprint(f.scope)\nprint(f.cpt)\nf.reduce_factor('Burglary','Yes')\nprint('\\n', f.scope)\nprint(f.cpt)",
"['Alarm', 'Burglary', 'Earthquake']\n[ 0.999 0.001 0.71 0.29 0.06 0.94 0.05 0.95 ]\n\n['Alarm', 'Earthquake']\n[ 0.71 0.29 0.05 0.95]\n"
]
],
[
[
"Another piece of functionality is the capability to convert the factor probabilities to/from log-space. This is important for MAP inference, since the sum of log-probabilities is equal the product of normal probabilities",
"_____no_output_____"
]
],
[
[
"f = Factor(bn,'Alarm')\nprint(f.cpt)\nf.to_log()\nprint(np.round(f.cpt,2))\nf.from_log()\nprint(f.cpt)",
"[ 0.999 0.001 0.71 0.29 0.06 0.94 0.05 0.95 ]\n[-0. -6.91 -0.34 -1.24 -2.81 -0.06 -3. -0.05]\n[ 0.999 0.001 0.71 0.29 0.06 0.94 0.05 0.95 ]\n"
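The identity that makes log space useful — the sum of log-probabilities equals the log of the product of probabilities — can be checked in plain Python:

```python
import math

# Check that summing log-probabilities is equivalent to multiplying
# probabilities (values taken from the Alarm CPT above).
probs = [0.71, 0.29, 0.94]
log_probs = [math.log(p) for p in probs]

product = 1.0
for p in probs:
    product *= p

print(sum(log_probs), math.log(product))  # the two numbers agree
```

Working in log space avoids the numerical underflow that multiplying many small probabilities would cause.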
]
],
[
[
"Lastly, we have normalization. This function does most of its work behind the scenes because it cleans up the factor probabilities after multiplication or reduction. Still, it's an important function of which users should be aware.",
"_____no_output_____"
]
],
[
[
"f = Factor(bn, 'Alarm')\nprint(f.cpt)\nf.cpt[0]=20\nf.cpt[1]=20\nf.cpt[4]=0.94\nf.cpt[7]=0.15\nprint(f.cpt)\nf.normalize()\nprint(f.cpt)",
"[ 0.999 0.001 0.71 0.29 0.06 0.94 0.05 0.95 ]\n[ 20. 20. 0.71 0.29 0.94 0.94 0.05 0.15]\n[ 0.5 0.5 0.71 0.29 0.5 0.5 0.25 0.75]\n"
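Because Alarm is first in the scope (stride 1, cardinality 2), each consecutive pair of CPT entries is a conditional distribution over Alarm. A sketch of the per-group rescaling (not pyBN's actual implementation):

```python
# Rescale each consecutive group of CPT entries to sum to one.
def normalize(cpt, group_size=2):
    out = []
    for i in range(0, len(cpt), group_size):
        group = cpt[i:i + group_size]
        z = sum(group)
        out.extend(p / z for p in group)
    return out

normalized = normalize([20, 20, 0.71, 0.29, 0.94, 0.94, 0.05, 0.15])
print(normalized)  # matches the notebook: pairs (0.5, 0.5), (0.71, 0.29), ...
```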
]
],
[
[
"That's all for factor operations with pyBN. As you can see, there is a lot going on with factor operations. While these functions are the behind-the-scenes drivers of most inference queries, it is still useful for users to see how they operate. These operations have all been optimized to run incredibly fast so that inference queries can be as fast as possible.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4ab0956ccdae3ac5d4fc4d8dc686d188796a6858
| 284,442 |
ipynb
|
Jupyter Notebook
|
Examples/ClusteringIris.ipynb
|
FlyN-Nick/introToML
|
a61c330f97755fdc446171c4a39db3d3ff5c879a
|
[
"MIT"
] | 3 |
2020-10-21T17:45:20.000Z
|
2021-06-05T10:38:45.000Z
|
Examples/ClusteringIris.ipynb
|
FlyN-Nick/introToML
|
a61c330f97755fdc446171c4a39db3d3ff5c879a
|
[
"MIT"
] | null | null | null |
Examples/ClusteringIris.ipynb
|
FlyN-Nick/introToML
|
a61c330f97755fdc446171c4a39db3d3ff5c879a
|
[
"MIT"
] | 1 |
2020-11-10T03:33:23.000Z
|
2020-11-10T03:33:23.000Z
| 831.701754 | 248,664 | 0.950936 |
[
[
[
"# Clustering\n\nSee our notes on [unsupervised learning](https://jennselby.github.io/MachineLearningCourseNotes/#unsupervised-learning), [K-means](https://jennselby.github.io/MachineLearningCourseNotes/#k-means-clustering), [DBSCAN](https://jennselby.github.io/MachineLearningCourseNotes/#dbscan-clustering), and [clustering validation](https://jennselby.github.io/MachineLearningCourseNotes/#clustering-validation).\n\nFor documentation of various clustering methods in scikit-learn, see http://scikit-learn.org/stable/modules/clustering.html\n\nThis code was based on the example at http://scikit-learn.org/stable/auto_examples/cluster/plot_cluster_iris.html\nwhich has the following comments:\n\nCode source: Gaël Varoquaux<br/>\nModified for documentation by Jaques Grobler<br/>\nLicense: BSD 3 clause\n## Instructions\n0. If you haven't already, follow [the setup instructions here](https://jennselby.github.io/MachineLearningCourseNotes/#setting-up-python3) to get all necessary software installed.\n1. Read through the code in the following sections:\n * [Iris Dataset](#Iris-Dataset)\n * [Visualization](#Visualization)\n * [Training and Visualization](#Training-and-Visualization)\n2. Complete the three-part [Exercise](#Exercise)",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nimport numpy\nimport matplotlib.pyplot\nfrom mpl_toolkits.mplot3d import Axes3D\n\nfrom sklearn.cluster import KMeans\nfrom sklearn import datasets\n\nimport pandas",
"_____no_output_____"
]
],
[
[
"## Iris Dataset\n\nBefore you go on, if you haven't used the iris dataset in a previous assignment, make sure you understand it. Modify the cell below to examine different parts of the dataset that are contained in the iris dictionary object.\n\nWhat are the features? What are we trying to classify?",
"_____no_output_____"
]
],
[
[
"iris = datasets.load_iris()\niris.keys()",
"_____no_output_____"
],
[
"iris_df = pandas.DataFrame(iris.data)\niris_df.columns = iris.feature_names\niris_df.head()",
"_____no_output_____"
]
],
[
[
"## Visualization Setup",
"_____no_output_____"
]
],
[
[
"# We can only plot 3 of the 4 iris features, since we only see in 3D.\n# These are the ones the example code picked\nX_FEATURE = 'petal width (cm)' \nY_FEATURE = 'sepal length (cm)' \nZ_FEATURE = 'petal length (cm)'\n\n# set some bounds for the figures that will display the plots of clusterings with various\n# hyperparameter settings\n# this allows for NUM_COLS * NUM_ROWS plots in the figure\nNUM_COLS = 4\nNUM_ROWS = 6\nFIG_WIDTH = 4 * NUM_COLS\nFIG_HEIGHT = 3 * NUM_ROWS\n\ndef add_plot(figure, subplot_num, subplot_name, data, labels):\n '''Create a new subplot in the figure.'''\n\n # create a new subplot\n axis = figure.add_subplot(NUM_ROWS, NUM_COLS, subplot_num, projection='3d',\n elev=48, azim=134)\n\n # Plot three of the four features on the graph, and set the color according to the labels\n axis.scatter(data[X_FEATURE], data[Y_FEATURE], data[Z_FEATURE], c=labels)\n\n # get rid of the tick numbers. Otherwise, they all overlap and it looks horrible\n for axis_obj in [axis.w_xaxis, axis.w_yaxis, axis.w_zaxis]:\n axis_obj.set_ticklabels([])\n\n # label the subplot\n axis.title.set_text(subplot_name)",
"_____no_output_____"
]
],
[
[
"## Visualization\n\nThis is the correct labeling, based on the targets.",
"_____no_output_____"
]
],
[
[
"# start a new figure to hold all of the subplots\ntruth_figure = matplotlib.pyplot.figure(figsize=(FIG_WIDTH, FIG_HEIGHT))\n\n# Plot the ground truth\nadd_plot(truth_figure, 1, \"Ground Truth\", iris_df, iris.target)",
"_____no_output_____"
]
],
[
[
"## Training and Visualization\n\nNow let's see how k-means clusters the iris dataset, with various different numbers of clusters",
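Before running the real thing, what `KMeans.fit` does can be sketched in one dimension — Lloyd's algorithm alternates an assignment step and an update step (toy data here, not the iris features):

```python
# Bare-bones 1-D k-means (Lloyd's algorithm): assign each point to its
# nearest centroid, move each centroid to its cluster mean, repeat.
def kmeans_1d(points, centroids, iters=10):
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            # Assignment step: nearest centroid wins.
            idx = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        # Update step: each centroid moves to its cluster mean
        # (an empty cluster keeps its old centroid).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
final = kmeans_1d(points, centroids=[0.0, 10.0])
print(final)  # close to [1.0, 9.0]
```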
"_____no_output_____"
]
],
[
[
"MAX_CLUSTERS = 10\n# start a new figure to hold all of the subplots\nkmeans_figure = matplotlib.pyplot.figure(figsize=(FIG_WIDTH, FIG_HEIGHT))\n\n# Plot the ground truth\nadd_plot(kmeans_figure, 1, \"Ground Truth\", iris_df, iris.target)\n\nplot_num = 2\nfor num_clusters in range(2, MAX_CLUSTERS + 1):\n    # train the model\n    model = KMeans(n_clusters=num_clusters)\n    model.fit(iris_df)\n    \n    # get the predictions of which cluster each input is in\n    labels = model.labels_\n\n    # plot this clustering\n    title = '{} Clusters'.format(num_clusters)    \n    add_plot(kmeans_figure, plot_num, title, iris_df, labels.astype(float))\n    plot_num += 1",
"_____no_output_____"
]
],
[
[
"# Exercise\n\n1. Add [validation](https://jennselby.github.io/MachineLearningCourseNotes/#clustering-validation) to measure how good the clustering is, with different numbers of clusters.\n1. Run the iris data through DBSCAN or hierarchical clustering and validate that as well.\n1. Comment on the validation results, explaining which models did best and why you think that might be.",
"_____no_output_____"
]
],
[
[
"# your code here",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4ab09fc7d69defe04a0e5e616eb43bd4ac788302
| 15,758 |
ipynb
|
Jupyter Notebook
|
2019.1-AB2-TP2.ipynb
|
WagnerFLL/Leader-Election-Algorithm
|
68d73ae36bd5afd1bef31bbb8c0b74f369ae6617
|
[
"MIT"
] | null | null | null |
2019.1-AB2-TP2.ipynb
|
WagnerFLL/Leader-Election-Algorithm
|
68d73ae36bd5afd1bef31bbb8c0b74f369ae6617
|
[
"MIT"
] | null | null | null |
2019.1-AB2-TP2.ipynb
|
WagnerFLL/Leader-Election-Algorithm
|
68d73ae36bd5afd1bef31bbb8c0b74f369ae6617
|
[
"MIT"
] | null | null | null | 41.036458 | 341 | 0.572408 |
[
[
[
"empty"
]
]
] |
[
"empty"
] |
[
[
"empty"
]
] |
4ab0a117a167d0d51184ab867570c8183b7e2448
| 83,055 |
ipynb
|
Jupyter Notebook
|
DS_Unit_1_Sprint_Challenge_3.ipynb
|
ChanceDurr/AB-Demo
|
2d711a91386d9eb48c55b5e4029d5e2779c65ae2
|
[
"MIT"
] | null | null | null |
DS_Unit_1_Sprint_Challenge_3.ipynb
|
ChanceDurr/AB-Demo
|
2d711a91386d9eb48c55b5e4029d5e2779c65ae2
|
[
"MIT"
] | null | null | null |
DS_Unit_1_Sprint_Challenge_3.ipynb
|
ChanceDurr/AB-Demo
|
2d711a91386d9eb48c55b5e4029d5e2779c65ae2
|
[
"MIT"
] | null | null | null | 53.653101 | 15,780 | 0.520125 |
[
[
[
"<a href=\"https://colab.research.google.com/github/ChanceDurr/AB-Demo/blob/master/DS_Unit_1_Sprint_Challenge_3.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Data Science Unit 1 Sprint Challenge 3\n\n## Exploring Data, Testing Hypotheses\n\nIn this sprint challenge you will look at a dataset of people being approved or rejected for credit.\n\nhttps://archive.ics.uci.edu/ml/datasets/Credit+Approval\n\nData Set Information: This file concerns credit card applications. All attribute names and values have been changed to meaningless symbols to protect confidentiality of the data. This dataset is interesting because there is a good mix of attributes -- continuous, nominal with small numbers of values, and nominal with larger numbers of values. There are also a few missing values.\n\nAttribute Information:\n- A1: b, a.\n- A2: continuous.\n- A3: continuous.\n- A4: u, y, l, t.\n- A5: g, p, gg.\n- A6: c, d, cc, i, j, k, m, r, q, w, x, e, aa, ff.\n- A7: v, h, bb, j, n, z, dd, ff, o.\n- A8: continuous.\n- A9: t, f.\n- A10: t, f.\n- A11: continuous.\n- A12: t, f.\n- A13: g, p, s.\n- A14: continuous.\n- A15: continuous.\n- A16: +,- (class attribute)\n\nYes, most of that doesn't mean anything. A16 (the class attribute) is the most interesting, as it separates the 307 approved cases from the 383 rejected cases. The remaining variables have been obfuscated for privacy - a challenge you may have to deal with in your data science career.\n\nSprint challenges are evaluated based on satisfactory completion of each part. It is suggested you work through it in order, getting each aspect reasonably working, before trying to deeply explore, iterate, or refine any given step. Once you get to the end, if you want to go back and improve things, go for it!",
"_____no_output_____"
],
[
"## Part 1 - Load and validate the data\n\n- Load the data as a `pandas` data frame.\n- Validate that it has the appropriate number of observations (you can check the raw file, and also read the dataset description from UCI).\n- UCI says there should be missing data - check, and if necessary change the data so pandas recognizes it as na\n- Make sure that the loaded features are of the types described above (continuous values should be treated as float), and correct as necessary\n\nThis is review, but skills that you'll use at the start of any data exploration. Further, you may have to do some investigation to figure out which file to load from - that is part of the puzzle.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nfrom scipy.stats import ttest_ind, chi2_contingency\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/credit-screening/crx.data',\n names=['A1', 'A2', 'A3', 'A4', 'A5', 'A6', 'A7', 'A8', 'A9',\n 'A10', 'A11', 'A12', 'A13', 'A14', 'A15', 'A16'])\nprint(df.shape) # Check to see the correct amount of observations\ndf.head()",
"(690, 16)\n"
],
[
"# Check for missing values in A16\ndf['A16'].value_counts(dropna=False)",
"_____no_output_____"
],
[
"# Replace + and - with 1 and 0\ndf['A16'] = df['A16'].replace({'+': 1, '-': 0})\ndf.head(10)",
"_____no_output_____"
],
[
"df = df.replace({'?': None}) #Replace ? with NaN\ndf['A2'] = df['A2'].astype(float) # Change the dtype of A2 to float\ndf['A2'].describe()",
"_____no_output_____"
],
[
"df_approved = df[df['A16'] == 1]\ndf_rejected = df[df['A16'] == 0]\nprint(df_approved.shape)\ndf_approved.head(10)",
"(307, 16)\n"
],
[
"print(df_rejected.shape)\ndf_rejected.head(10)",
"(383, 16)\n"
]
],
[
[
"## Part 2 - Exploring data, Testing hypotheses\n\nThe only thing we really know about this data is that A16 is the class label. Besides that, we have 6 continuous (float) features and 9 categorical features.\n\nExplore the data: you can use whatever approach (tables, utility functions, visualizations) to get an impression of the distributions and relationships of the variables. In general, your goal is to understand how the features are different when grouped by the two class labels (`+` and `-`).\n\nFor the 6 continuous features, how are they different when split between the two class labels? Choose two features to run t-tests (again split by class label) - specifically, select one feature that is *extremely* different between the classes, and another feature that is notably less different (though perhaps still \"statistically significantly\" different). You may have to explore more than two features to do this.\n\nFor the categorical features, explore by creating \"cross tabs\" (aka [contingency tables](https://en.wikipedia.org/wiki/Contingency_table)) between them and the class label, and apply the Chi-squared test to them. [pandas.crosstab](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.crosstab.html) can create contingency tables, and [scipy.stats.chi2_contingency](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.chi2_contingency.html) can calculate the Chi-squared statistic for them.\n\nThere are 9 categorical features - as with the t-test, try to find one where the Chi-squared test returns an extreme result (rejecting the null that the data are independent), and one where it is less extreme.\n\n**NOTE** - \"less extreme\" just means smaller test statistic/larger p-value. Even the least extreme differences may be strongly statistically significant.\n\nYour *main* goal is the hypothesis tests, so don't spend too much time on the exploration/visualization piece. That is just a means to an end - use simple visualizations, such as boxplots or a scatter matrix (both built in to pandas), to get a feel for the overall distribution of the variables.\n\nThis is challenging, so manage your time and aim for a baseline of at least running two t-tests and two Chi-squared tests before polishing. And don't forget to answer the questions in part 3, even if your results in this part aren't what you want them to be.",
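The Chi-squared statistic that `scipy.stats.chi2_contingency` reports can be computed by hand from a contingency table — a sketch on a made-up 2x2 table (not the credit data; this simple version also omits the Yates continuity correction that scipy applies to 2x2 tables by default):

```python
# Hand-rolled Chi-squared statistic for a contingency table.
observed = [[30, 10], [10, 30]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        # Expected count under independence of rows and columns.
        expected = row_totals[i] * col_totals[j] / grand
        chi2 += (obs - expected) ** 2 / expected

print(chi2)  # 20.0 — a large statistic, i.e. strong evidence of dependence
```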
"_____no_output_____"
]
],
[
[
"# ttest_ind to see if means are similar, reject null hypothesis\nttest_ind(df_approved['A2'].dropna(), df_rejected['A2'].dropna())",
"_____no_output_____"
],
[
"# ttest_ind to see if means are similar, reject null hypothesis\nttest_ind(df_approved['A8'].dropna(), df_rejected['A8'].dropna())",
"_____no_output_____"
],
[
"ct1 = pd.crosstab(df['A16'], df['A1'])\nchi_statistic1, p_value1, dof1, table1 = chi2_contingency(ct1)\nprint(f'Chi test statistic: {chi_statistic1}')\nprint(f'P Value: {p_value1}')\nprint(f'Degrees of freedom: {dof1}')\nprint(f'Expected Table: \\n {table1}')",
"Chi test statistic: 0.3112832649161994\nP Value: 0.5768937883001118\nDegrees of freedom: 1\nExpected Table: \n [[115.84070796 258.15929204]\n [ 94.15929204 209.84070796]]\n"
],
[
"ct2 = pd.crosstab(df['A16'], df['A4'])\nchi_statistic2, p_value2, dof2, table2 = chi2_contingency(ct2)\nprint(f'Chi test statistic: {chi_statistic2}')\nprint(f'P Value: {p_value2}')\nprint(f'Degrees of freedom: {dof2}')\nprint(f'Expected Table: \\n {table2}')\nct2",
"Chi test statistic: 26.234074966202144\nP Value: 2.010680204180363e-06\nDegrees of freedom: 2\nExpected Table: \n [[ 1.11403509 289.09210526 90.79385965]\n [ 0.88596491 229.90789474 72.20614035]]\n"
]
],
[
[
"## Exploration with Visuals\n",
"_____no_output_____"
]
],
[
[
"plt.style.use('fivethirtyeight')\nplt.scatter(df['A2'], df['A16'], alpha=.1)\nplt.yticks([0, 1])",
"_____no_output_____"
],
[
"plt.style.use('fivethirtyeight')\nplt.scatter(df['A8'], df['A16'], alpha=.1)\nplt.yticks([0, 1])",
"_____no_output_____"
]
],
[
[
"## Part 3 - Analysis and Interpretation\n\nNow that you've looked at the data, answer the following questions:\n\n- Interpret and explain the two t-tests you ran - what do they tell you about the relationships between the continuous features you selected and the class labels?\n- Interpret and explain the two Chi-squared tests you ran - what do they tell you about the relationships between the categorical features you selected and the class labels?\n- What was the most challenging part of this sprint challenge?\n\nAnswer with text, but feel free to intersperse example code/results or refer to it from earlier.",
"_____no_output_____"
],
[
"In both t-tests you can see that we were able to reject the null hypothesis that the two means of the features (A2 and A8) are the same. Therefore, we should be able to say that there is a correlation between these features and whether or not an applicant gets approved for credit. If we had failed to reject the null, I would say that there isn't a significant correlation between the A2 and A8 features and getting approved for credit.\n\nWith the two Chi-squared tests, I wanted to see if there was a dependency between one of the other categorical features and whether or not they got approved for credit. You can see in one of the cases that we rejected the null hypothesis of them being independent of each other, so we can say that there is a correlation between the two features. On the other hand, we had a case where we failed to reject the null hypothesis, meaning that we cannot say that these are dependent on each other.\n\nI would say the most challenging part of this sprint challenge was preparing for it. It was tough to get a grasp of what we were doing and why we were doing it. After a full day of study with some peers and Ryan himself, I was able to go through it step by step and get some questions answered. After that, it was a lot easier to understand. However, I still don't know why there is a higher chance with door 2 in the Monty Hall problem :)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
4ab0aa2ea837dfe5b0b086f815315348e076a870
| 40,441 |
ipynb
|
Jupyter Notebook
|
04_hacking_training_loop/hyperparameter_tuning.ipynb
|
modoai/ml-design-patterns
|
cfe5a1f46f1691663c56ec53382b398e0c8ca505
|
[
"Apache-2.0"
] | 1,149 |
2020-04-09T21:20:56.000Z
|
2022-03-31T02:41:53.000Z
|
04_hacking_training_loop/hyperparameter_tuning.ipynb
|
QuintonQu/ml-design-patterns
|
060eb9f9be1d7f7e2d7e103a29a01386723c22fe
|
[
"Apache-2.0"
] | 28 |
2020-06-14T15:17:59.000Z
|
2022-02-17T10:13:08.000Z
|
04_hacking_training_loop/hyperparameter_tuning.ipynb
|
QuintonQu/ml-design-patterns
|
060eb9f9be1d7f7e2d7e103a29a01386723c22fe
|
[
"Apache-2.0"
] | 296 |
2020-04-28T06:26:41.000Z
|
2022-03-31T06:52:33.000Z
| 33.58887 | 557 | 0.426523 |
[
[
[
"## Hyperparameter Tuning Design Pattern\n\nIn Hyperparameter Tuning, the training loop is itself inserted into an optimization method to find the optimal set of model hyperparameters.",
"_____no_output_____"
]
],
[
[
"import datetime\nimport os\nimport numpy as np\nimport pandas as pd\nimport tensorflow as tf\nimport time\n\nfrom tensorflow import keras\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.metrics import accuracy_score, f1_score",
"_____no_output_____"
]
],
[
[
"### Grid search in Scikit-learn\n\nHere we'll look at how to implement hyperparameter tuning with the grid search algorithm, using Scikit-learn's built-in `GridSearchCV`. We'll do this by training a random forest model on the UCI mushroom dataset, which predicts whether a mushroom is edible or poisonous.",
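Under the hood, grid search is just an exhaustive loop over every combination of hyperparameter values, scoring each and keeping the best. A minimal sketch with a toy scoring function standing in for cross-validated accuracy (not a real random forest):

```python
from itertools import product

# Enumerate every combination in the grid, score each one, keep the best.
grid_vals = {
    'max_depth': [5, 10, 100],
    'n_estimators': [100, 150, 200],
}

def score(params):
    # Toy score that peaks at max_depth=10, n_estimators=200.
    return -abs(params['max_depth'] - 10) + params['n_estimators'] / 100

keys = list(grid_vals)
best_params, best_score = None, float('-inf')
for combo in product(*grid_vals.values()):
    params = dict(zip(keys, combo))
    s = score(params)
    if s > best_score:
        best_params, best_score = params, s

print(best_params)  # {'max_depth': 10, 'n_estimators': 200}
```

`GridSearchCV` does the same enumeration, but scores each combination with cross-validation and refits the best model at the end.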
"_____no_output_____"
]
],
[
[
"# First, download the data\n# We've made it publicly available in Google Cloud Storage\n!gsutil cp gs://ml-design-patterns/mushrooms.csv .",
"Copying gs://ml-design-patterns/mushrooms.csv...\n/ [1 files][365.2 KiB/365.2 KiB] \nOperation completed over 1 objects/365.2 KiB. \n"
],
[
"mushroom_data = pd.read_csv('mushrooms.csv')\nmushroom_data.head()",
"_____no_output_____"
]
],
[
[
"To keep things simple, we'll first convert the label column to numeric and then \nuse `pd.get_dummies()` to covert the data to numeric. ",
"_____no_output_____"
]
],
[
[
"# 1 = edible, 0 = poisonous\nmushroom_data.loc[mushroom_data['class'] == 'p', 'class'] = 0\nmushroom_data.loc[mushroom_data['class'] == 'e', 'class'] = 1",
"_____no_output_____"
],
[
"labels = mushroom_data.pop('class')",
"_____no_output_____"
],
[
"dummy_data = pd.get_dummies(mushroom_data)",
"_____no_output_____"
],
[
"# Split the data\ntrain_size = int(len(mushroom_data) * .8)\n\ntrain_data = dummy_data[:train_size]\ntest_data = dummy_data[train_size:]\n\ntrain_labels = labels[:train_size].astype(int)\ntest_labels = labels[train_size:].astype(int)",
"_____no_output_____"
]
],
[
[
"Next, we'll build our Scikit-learn model and define the hyperparameters we want to optimize using grid serach.",
"_____no_output_____"
]
],
[
[
"model = RandomForestClassifier()",
"_____no_output_____"
],
[
"grid_vals = {\n 'max_depth': [5, 10, 100],\n 'n_estimators': [100, 150, 200]\n}",
"_____no_output_____"
],
[
"grid_search = GridSearchCV(model, param_grid=grid_vals, scoring='accuracy')",
"_____no_output_____"
],
[
"# Train the model while running hyperparameter trials\ngrid_search.fit(train_data.values, train_labels.values)",
"_____no_output_____"
]
],
[
[
"Let's see which hyperparameters resulted in the best accuracy.",
"_____no_output_____"
]
],
[
[
"grid_search.best_params_",
"_____no_output_____"
]
],
[
[
"Finally, we can generate some test predictions on our model and evaluate its accuracy.",
"_____no_output_____"
]
],
[
[
"grid_predict = grid_search.predict(test_data.values)",
"_____no_output_____"
],
[
"grid_acc = accuracy_score(test_labels.values, grid_predict)\ngrid_f = f1_score(test_labels.values, grid_predict)",
"_____no_output_____"
],
[
"print('Accuracy: ', grid_acc)\nprint('F1-Score: ', grid_f)",
"Accuracy: 0.9950769230769231\nF1-Score: 0.9921722113502935\n"
]
],
[
[
"### Hyperparameter tuning with `keras-tuner`\n\nTo show how this works we'll train a model on the MNIST handwritten digit dataset, which is available directly in Keras. For more details, see this [Keras tuner guide](https://www.tensorflow.org/tutorials/keras/keras_tuner).",
"_____no_output_____"
]
],
[
[
"!pip install keras-tuner --quiet",
"\u001b[?25l\r\u001b[K |██████ | 10kB 16.6MB/s eta 0:00:01\r\u001b[K |████████████ | 20kB 1.8MB/s eta 0:00:01\r\u001b[K |██████████████████ | 30kB 2.3MB/s eta 0:00:01\r\u001b[K |████████████████████████ | 40kB 1.6MB/s eta 0:00:01\r\u001b[K |██████████████████████████████ | 51kB 2.0MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 61kB 1.8MB/s \n\u001b[?25h Building wheel for keras-tuner (setup.py) ... \u001b[?25l\u001b[?25hdone\n Building wheel for terminaltables (setup.py) ... \u001b[?25l\u001b[?25hdone\n"
],
[
"import kerastuner as kt",
"_____no_output_____"
],
[
"# Get the mnist data\n(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()",
"Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz\n11493376/11490434 [==============================] - 0s 0us/step\n"
],
[
"def build_model(hp):\n model = keras.Sequential([\n keras.layers.Flatten(input_shape=(28, 28)),\n keras.layers.Dense(hp.Int('first_hidden', 128, 256, step=32), activation='relu'),\n keras.layers.Dense(hp.Int('second_hidden', 16, 128, step=32), activation='relu'),\n keras.layers.Dense(10, activation='softmax')\n ])\n\n model.compile(\n optimizer=tf.keras.optimizers.Adam(\n hp.Float('learning_rate', .005, .01, sampling='log')),\n loss='sparse_categorical_crossentropy', \n metrics=['accuracy'])\n \n return model",
"_____no_output_____"
],
[
"tuner = kt.BayesianOptimization(\n build_model,\n objective='val_accuracy',\n max_trials=30\n)",
"INFO:tensorflow:Reloading Oracle from existing project ./untitled_project/oracle.json\nWARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter\nWARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_1\nWARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_2\nWARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay\nWARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.learning_rate\nWARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.\nINFO:tensorflow:Reloading Tuner from ./untitled_project/tuner0.json\n"
],
[
"tuner.search(x_train, y_train, validation_split=0.1, epochs=10)",
"_____no_output_____"
],
[
"best_hps = tuner.get_best_hyperparameters(num_trials = 1)[0]",
"_____no_output_____"
]
],
[
[
"### Hyperparameter tuning on Cloud AI Platform\n\nIn this section we'll show you how to scale your hyperparameter optimization by running it on Google Cloud's AI Platform. You'll need a Cloud account with AI Platform Training enabled to run this section.\n\nWe'll be using PyTorch to build a regression model in this section. To train the model we'll be the BigQuery natality dataset. We've made a subset of this data available in a public Cloud Storage bucket, which we'll download from within the training job.",
"_____no_output_____"
]
],
[
[
"from google.colab import auth\nauth.authenticate_user()",
"_____no_output_____"
]
],
[
[
"In the cells below, replcae `your-project-id` with the ID of your Cloud project, and `your-gcs-bucket` with the name of your Cloud Storage bucket.",
"_____no_output_____"
]
],
[
[
"!gcloud config set project your-project-id",
"Updated property [core/project].\n"
],
[
"BUCKET_URL = 'gs://your-gcs-bucket'",
"_____no_output_____"
]
],
[
[
"To run this on AI Platform, we'll need to package up our model code in Python's package format, which includes an empty `__init__.py` file and a `setup.py` to install dependencies (in this case PyTorch, Scikit-learn, and Pandas).",
"_____no_output_____"
]
],
[
[
"!mkdir trainer\n!touch trainer/__init__.py",
"mkdir: cannot create directory ‘trainer’: File exists\n"
],
[
"%%writefile setup.py\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nREQUIRED_PACKAGES = ['torch>=1.5', 'scikit-learn>=0.20', 'pandas>=1.0']\n\nsetup(\n name='trainer',\n version='0.1',\n install_requires=REQUIRED_PACKAGES,\n packages=find_packages(),\n include_package_data=True,\n description='My training application package.'\n)",
"Overwriting setup.py\n"
]
],
[
[
"Below, we're copying our model training code to a `model.py` file in our trainer package directory. This code runs training and after training completes, reports the model's final loss to Cloud HyperTune.",
"_____no_output_____"
]
],
[
[
"%%writefile trainer/model.py\nimport argparse\nimport hypertune\nimport numpy as np\nimport pandas as pd\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\n\nfrom sklearn.utils import shuffle\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn.preprocessing import normalize\n\ndef get_args():\n \"\"\"Argument parser.\n Returns:\n Dictionary of arguments.\n \"\"\"\n parser = argparse.ArgumentParser(description='PyTorch MNIST')\n parser.add_argument('--job-dir', # handled automatically by AI Platform\n help='GCS location to write checkpoints and export ' \\\n 'models')\n parser.add_argument('--lr', # Specified in the config file\n type=float,\n default=0.01,\n help='learning rate (default: 0.01)')\n parser.add_argument('--momentum', # Specified in the config file\n type=float,\n default=0.5,\n help='SGD momentum (default: 0.5)')\n parser.add_argument('--hidden-layer-size', # Specified in the config file\n type=int,\n default=8,\n help='hidden layer size')\n args = parser.parse_args()\n return args\n\ndef train_model(args):\n # Get the data\n natality = pd.read_csv('https://storage.googleapis.com/ml-design-patterns/natality.csv')\n natality = natality.dropna()\n natality = shuffle(natality, random_state = 2)\n natality.head()\n\n natality_labels = natality['weight_pounds']\n natality = natality.drop(columns=['weight_pounds'])\n\n\n train_size = int(len(natality) * 0.8)\n traindata_natality = natality[:train_size]\n trainlabels_natality = natality_labels[:train_size]\n\n testdata_natality = natality[train_size:]\n testlabels_natality = natality_labels[train_size:]\n\n # Normalize and convert to PT tensors\n normalized_train = normalize(np.array(traindata_natality.values), axis=0)\n normalized_test = normalize(np.array(testdata_natality.values), axis=0)\n\n train_x = torch.Tensor(normalized_train)\n train_y = torch.Tensor(np.array(trainlabels_natality))\n\n test_x = torch.Tensor(normalized_test)\n test_y = 
torch.Tensor(np.array(testlabels_natality))\n\n    # Define our data loaders\n    train_dataset = torch.utils.data.TensorDataset(train_x, train_y)\n    train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=128, shuffle=True)\n\n    test_dataset = torch.utils.data.TensorDataset(test_x, test_y)\n    test_dataloader = torch.utils.data.DataLoader(test_dataset, batch_size=128, shuffle=False)\n\n    # Define the model, while tuning the size of our hidden layer\n    model = nn.Sequential(nn.Linear(len(train_x[0]), args.hidden_layer_size),\n                          nn.ReLU(),\n                          nn.Linear(args.hidden_layer_size, 1))\n    criterion = nn.MSELoss()\n\n    # Tune hyperparameters in our optimizer\n    optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)\n    epochs = 10\n    for e in range(epochs):\n        for batch_id, (data, label) in enumerate(train_dataloader):\n            optimizer.zero_grad()\n            y_pred = model(data)\n            label = label.view(-1,1)\n            loss = criterion(y_pred, label)\n\n            loss.backward()\n            optimizer.step()\n\n    val_mse = 0\n    num_batches = 0\n    # Evaluate MSE on our test set\n    with torch.no_grad():\n        for i, (data, label) in enumerate(test_dataloader):\n            num_batches += 1\n            y_pred = model(data)\n            mse = criterion(y_pred, label.view(-1,1))\n            val_mse += mse.item()\n\n    avg_val_mse = (val_mse / num_batches)\n\n    # Report the metric we're optimizing for to AI Platform's HyperTune service\n    # In this example, we're minimizing loss on our test set\n    hpt = hypertune.HyperTune()\n    hpt.report_hyperparameter_tuning_metric(\n        hyperparameter_metric_tag='val_mse',\n        metric_value=avg_val_mse,\n        global_step=epochs\n    )\n\ndef main():\n    args = get_args()\n    print('in main', args)\n    train_model(args)\n\nif __name__ == '__main__':\n    main()",
"Overwriting trainer/model.py\n"
],
[
"%%writefile config.yaml\ntrainingInput:\n hyperparameters:\n goal: MINIMIZE\n maxTrials: 10\n maxParallelTrials: 5\n hyperparameterMetricTag: val_mse\n enableTrialEarlyStopping: TRUE\n params:\n - parameterName: lr\n type: DOUBLE\n minValue: 0.0001\n maxValue: 0.1\n scaleType: UNIT_LINEAR_SCALE\n - parameterName: momentum\n type: DOUBLE\n minValue: 0.0\n maxValue: 1.0\n scaleType: UNIT_LINEAR_SCALE\n - parameterName: hidden-layer-size\n type: INTEGER\n minValue: 8\n maxValue: 32\n scaleType: UNIT_LINEAR_SCALE",
"Overwriting config.yaml\n"
],
[
"MAIN_TRAINER_MODULE = \"trainer.model\"\nTRAIN_DIR = os.getcwd() + '/trainer'\nJOB_DIR = BUCKET_URL + '/output'\nREGION = \"us-central1\"",
"_____no_output_____"
],
[
"# Create a unique job name (run this each time you submit a job)\ntimestamp = str(datetime.datetime.now().time())\nJOB_NAME = 'caip_training_' + str(int(time.time()))",
"_____no_output_____"
]
],
[
[
"The command below will submit your training job to AI Platform. To view the logs, and the results of each HyperTune trial visit your Cloud console.",
"_____no_output_____"
]
],
[
[
"# Configure and submit the training job\n!gcloud ai-platform jobs submit training $JOB_NAME \\\n --scale-tier basic \\\n --package-path $TRAIN_DIR \\\n --module-name $MAIN_TRAINER_MODULE \\\n --job-dir $JOB_DIR \\\n --region $REGION \\\n --runtime-version 2.1 \\\n --python-version 3.7 \\\n --config config.yaml",
"Job [caip_training_1589925625] submitted successfully.\nYour job is still active. You may view the status of your job with the command\n\n $ gcloud ai-platform jobs describe caip_training_1589925625\n\nor continue streaming the logs with the command\n\n $ gcloud ai-platform jobs stream-logs caip_training_1589925625\njobId: caip_training_1589925625\nstate: QUEUED\n"
]
],
[
[
"Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4ab0ad2bc4d01d8d7bfdce63696055a9cfc9c6cc
| 463,976 |
ipynb
|
Jupyter Notebook
|
notebooks/Figures/Figure5.ipynb
|
SBRG/xplatform_ica_paper
|
ae1114161f820a225e3b4c2c05f5b1dfb7093823
|
[
"MIT"
] | null | null | null |
notebooks/Figures/Figure5.ipynb
|
SBRG/xplatform_ica_paper
|
ae1114161f820a225e3b4c2c05f5b1dfb7093823
|
[
"MIT"
] | null | null | null |
notebooks/Figures/Figure5.ipynb
|
SBRG/xplatform_ica_paper
|
ae1114161f820a225e3b4c2c05f5b1dfb7093823
|
[
"MIT"
] | null | null | null | 148.47232 | 40,428 | 0.851005 |
[
[
[
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Setup\" data-toc-modified-id=\"Setup-1\"><span class=\"toc-item-num\">1 </span>Setup</a></span><ul class=\"toc-item\"><li><span><a href=\"#Load-data\" data-toc-modified-id=\"Load-data-1.1\"><span class=\"toc-item-num\">1.1 </span>Load data</a></span></li></ul></li><li><span><a href=\"#Figure-5b---Categories-of-combined-iModulons\" data-toc-modified-id=\"Figure-5b---Categories-of-combined-iModulons-2\"><span class=\"toc-item-num\">2 </span>Figure 5b - Categories of combined iModulons</a></span></li><li><span><a href=\"#Create-RBH-graph\" data-toc-modified-id=\"Create-RBH-graph-3\"><span class=\"toc-item-num\">3 </span>Create RBH graph</a></span></li><li><span><a href=\"#Figure-5c---Presence/absence-of-iModulons\" data-toc-modified-id=\"Figure-5c---Presence/absence-of-iModulons-4\"><span class=\"toc-item-num\">4 </span>Figure 5c - Presence/absence of iModulons</a></span></li><li><span><a href=\"#Figure-5d---Heatmap\" data-toc-modified-id=\"Figure-5d---Heatmap-5\"><span class=\"toc-item-num\">5 </span>Figure 5d - Heatmap</a></span></li><li><span><a href=\"#Figure-5e---Explained-variance\" data-toc-modified-id=\"Figure-5e---Explained-variance-6\"><span class=\"toc-item-num\">6 </span>Figure 5e - Explained variance</a></span></li><li><span><a href=\"#Figure-5f---ppGpp-Activities\" data-toc-modified-id=\"Figure-5f---ppGpp-Activities-7\"><span class=\"toc-item-num\">7 </span>Figure 5f - ppGpp Activities</a></span></li><li><span><a href=\"#Figure-5g:-PCA-of-datasets\" data-toc-modified-id=\"Figure-5g:-PCA-of-datasets-8\"><span class=\"toc-item-num\">8 </span>Figure 5g: PCA of datasets</a></span></li><li><span><a href=\"#Figure-5h:-PCA-of-activites\" data-toc-modified-id=\"Figure-5h:-PCA-of-activites-9\"><span class=\"toc-item-num\">9 </span>Figure 5h: PCA of activites</a></span></li><li><span><a href=\"#Supplementary-Figure-7\" 
data-toc-modified-id=\"Supplementary-Figure-7-10\"><span class=\"toc-item-num\">10 </span>Supplementary Figure 7</a></span><ul class=\"toc-item\"><li><span><a href=\"#Panel-a:-Explained-variance-of-lost-i-modulons\" data-toc-modified-id=\"Panel-a:-Explained-variance-of-lost-i-modulons-10.1\"><span class=\"toc-item-num\">10.1 </span>Panel a: Explained variance of lost i-modulons</a></span></li><li><span><a href=\"#Panel-b:-Classes-of-new-i-modulons\" data-toc-modified-id=\"Panel-b:-Classes-of-new-i-modulons-10.2\"><span class=\"toc-item-num\">10.2 </span>Panel b: Classes of new i-modulons</a></span></li><li><span><a href=\"#Panel-c:-Histogram-of-IC-gene-coefficients\" data-toc-modified-id=\"Panel-c:-Histogram-of-IC-gene-coefficients-10.3\"><span class=\"toc-item-num\">10.3 </span>Panel c: Histogram of IC gene coefficients</a></span></li><li><span><a href=\"#Panel-e:-F1-score-chart\" data-toc-modified-id=\"Panel-e:-F1-score-chart-10.4\"><span class=\"toc-item-num\">10.4 </span>Panel e: F1-score chart</a></span></li><li><span><a href=\"#Panel-f:-Pearson-R-between-activities\" data-toc-modified-id=\"Panel-f:-Pearson-R-between-activities-10.5\"><span class=\"toc-item-num\">10.5 </span>Panel f: Pearson R between activities</a></span></li></ul></li><li><span><a href=\"#New-biological-component\" data-toc-modified-id=\"New-biological-component-11\"><span class=\"toc-item-num\">11 </span>New biological component</a></span></li></ul></div>",
"_____no_output_____"
],
[
"# Setup",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nfrom tqdm.notebook import tqdm\nimport pandas as pd\nimport numpy as np\nimport os, sys\nfrom itertools import combinations\nimport seaborn as sns\nfrom matplotlib_venn import venn2\nfrom scipy import stats\nfrom sklearn.decomposition import PCA\nsys.path.append('../../scripts/')\nfrom core import *",
"_____no_output_____"
],
[
"sns.set_style('ticks')",
"_____no_output_____"
],
[
"# Use custom stylesheet for figures\nplt.style.use('custom')",
"_____no_output_____"
]
],
[
[
"## Load data",
"_____no_output_____"
]
],
[
[
"datasets = sorted([x for x in os.listdir(os.path.join(DATA_DIR,'iModulons'))\n if '.' not in x])",
"_____no_output_____"
],
[
"# Thresholds were obtained from sensitivity analysis\ncutoffs = {'MA-1': 550,\n 'MA-2': 600,\n 'MA-3': 350,\n 'RNAseq-1': 700,\n 'RNAseq-2': 300,\n 'combined': 400}",
"_____no_output_____"
],
[
"def load(dataset):\n # Define directories\n ds_dir = os.path.join(DATA_DIR,'iModulons',dataset)\n \n # Define files\n X_file = os.path.join(DATA_DIR,'processed_data',dataset+'_bc.csv')\n M_file = os.path.join(ds_dir,'M.csv')\n A_file = os.path.join(ds_dir,'A.csv')\n metadata_file = os.path.join(DATA_DIR,'metadata',dataset+'_metadata.csv')\n \n return IcaData(M_file,A_file,X_file,metadata_file,cutoffs[dataset])",
"_____no_output_____"
],
[
"# Load datasets\nobjs = {}\nfor ds in tqdm(datasets):\n objs[ds] = load(ds)",
"_____no_output_____"
],
[
"DF_categories = pd.read_csv(os.path.join(DATA_DIR,'iModulons','categories_curated.csv'),index_col=0)\nDF_categories.index = DF_categories.dataset.combine(DF_categories.component,lambda x1,x2:x1+'_'+str(x2))",
"_____no_output_____"
]
],
[
[
"# Figure 5b - Categories of combined iModulons",
"_____no_output_____"
]
],
[
[
"data = DF_categories[DF_categories.dataset=='combined'].type.value_counts()\ndata",
"_____no_output_____"
],
[
"data.sum()",
"_____no_output_____"
],
[
"data/data.sum()",
"_____no_output_____"
],
[
"unchar_mod_lens = []\nmod_lens = []\nfor k in objs['combined'].M.columns:\n if DF_categories.loc['combined_'+str(k),'type']=='uncharacterized':\n unchar_mod_lens.append(len(objs['combined'].show_enriched(k)))\n else:\n mod_lens.append(len(objs['combined'].show_enriched(k)))",
"_____no_output_____"
],
[
"data = DF_categories[DF_categories.dataset=='combined'].type.value_counts()\nplt.pie(data.values,labels=data.index);",
"_____no_output_____"
]
],
[
[
"# Create RBH graph",
"_____no_output_____"
]
],
[
[
"from rbh import *",
"_____no_output_____"
],
[
"l2s = []\nfor ds in datasets[:-1]:\n links = rbh(objs['combined'].M,objs[ds].M)\n for i,j,val in links:\n comp1 = 'combined'+'_'+str(i)\n comp2 = ds+'_'+str(j)\n class1 = DF_categories.loc[comp1,'type']\n class2 = DF_categories.loc[comp2,'type']\n desc1 = DF_categories.loc[comp1,'description']\n desc2 = DF_categories.loc[comp2,'description']\n l2s.append(['combined',ds,i,j,comp1,comp2,class1,class2,desc1,desc2,1-val])\nDF_links = pd.DataFrame(l2s,columns=['ds1','ds2','comp1','comp2','name1','name2','type1','type2','desc1','desc2','dist'])\nDF_links = DF_links[DF_links.dist > 0.3]",
"../../scripts/rbh.py:5: FutureWarning: Support for multi-dimensional indexing (e.g. `obj[:, None]`) is deprecated and will be removed in a future version. Convert to a numpy array before indexing instead.\n return np.dot(s1.T,s2)/np.sqrt((s1**2).sum())[:, np.newaxis]/np.sqrt((s2**2).sum())[np.newaxis,:]\n"
],
[
"DF_links = DF_links.sort_values(['ds1','comp1','ds2'])",
"_____no_output_____"
],
[
"DF_links[DF_links.type1 == 'uncharacterized'].name1.value_counts()",
"_____no_output_____"
],
[
"# Total links between full dataset and individual datasets\nDF_links.groupby('ds2').count()['ds1']",
"_____no_output_____"
],
[
"# Average distance between full dataset and individual datasets\nmeans = DF_links.groupby('ds2').mean()['dist']\nstds = DF_links.groupby('ds2').std()['dist']",
"_____no_output_____"
],
[
"DF_links.to_csv(os.path.join(DATA_DIR,'iModulons','RBH_combined.csv'))",
"_____no_output_____"
],
[
"DF_links",
"_____no_output_____"
]
],
[
[
"# Figure 5c - Presence/absence of iModulons",
"_____no_output_____"
]
],
[
[
"index = objs['combined'].M.columns",
"_____no_output_____"
],
[
"type_dict = {'regulatory':-2,'functional':-3,'genomic':-4,'uncharacterized':-5}",
"_____no_output_____"
],
[
"DF_binarized = pd.DataFrame([1]*len(index),index=index,columns=['Combined Compendium'])\nfor ds in datasets[:-1]:\n DF_binarized[ds] = [x in DF_links[DF_links.ds2==ds].comp1.tolist() for x in index]\nDF_binarized = DF_binarized.astype(int)\n\nDF_binarized['total'] = DF_binarized.sum(axis=1)\n\nDF_binarized = (DF_binarized-1)\nDF_binarized = DF_binarized[['RNAseq-1','RNAseq-2','MA-1','MA-2','MA-3','total']]",
"_____no_output_____"
],
[
"DF_binarized['type'] = [type_dict[DF_categories.loc['combined_'+str(k)].type] for k in DF_binarized.index]",
"_____no_output_____"
],
[
"DF_binarized = DF_binarized.sort_values(['total','RNAseq-1','RNAseq-2','MA-1','MA-2','MA-3','type'],ascending=False)",
"_____no_output_____"
],
[
"cmap = ['#b4d66c','#bc80b7','#81b1d3','#f47f72'] + ['white','black'] + sns.color_palette('Blues',5)",
"_____no_output_____"
],
[
"bin_counts = DF_binarized.groupby(['total','type']).size().unstack(fill_value=0).T.sort_index(ascending=False)\nbin_counts = bin_counts\nbin_counts.index = ['regulatory','biological','genomic','uncharacterized']",
"_____no_output_____"
],
[
"bin_counts.T.plot.bar(stacked=True)\nplt.legend(bbox_to_anchor=(1,1))",
"_____no_output_____"
],
[
"print('Number of comps:',len(DF_binarized))\nprint('Number of linked comps: {} ({:.2f})'.format(sum(DF_binarized.total > 0),\n sum(DF_binarized.total > 0)/len(DF_binarized)))",
"Number of comps: 181\nNumber of linked comps: 137 (0.76)\n"
],
[
"print('Number of linked comps: {} ({:.2f})'.format(sum(DF_binarized.total >1),\n sum(DF_binarized.total > 1)/len(DF_binarized)))",
"Number of linked comps: 58 (0.32)\n"
],
[
"fig,ax = plt.subplots(figsize=(4,1.5))\nsns.heatmap(DF_binarized.T,cmap=cmap,ax=ax)\nax.set_xticks(np.arange(len(DF_binarized),step=20));\nax.tick_params(axis='x',reset=True,length=3,width=.5,color='k',top=False)\nax.set_xticklabels(np.arange(len(DF_binarized),step=20),);",
"_____no_output_____"
]
],
[
[
"# Figure 5d - Heatmap",
"_____no_output_____"
]
],
[
[
"fig,ax = plt.subplots(figsize=(2.1,1.3))\n\nDF_types = DF_categories.groupby(['dataset','type']).count().component.unstack().fillna(0).drop('combined')\nDF_types.loc['Total'] = DF_types.sum(axis=0)\nDF_types['Total'] = DF_types.sum(axis=1)\n\nDF_types_linked = DF_links.groupby(['ds2','type2']).count().comp1.unstack().fillna(0)\nDF_types_linked.loc['Total'] = DF_types_linked.sum(axis=0)\nDF_types_linked['Total'] = DF_types_linked.sum(axis=1)\n\nDF_types_lost = DF_types - DF_types_linked\n\nDF_text = pd.DataFrame()\nfor col in DF_types_lost:\n DF_text[col] = DF_types_lost[col].astype(int).astype(str).str.cat(DF_types[col].astype(int).astype(str),sep='/')\nDF_text = DF_text[['regulatory','functional','genomic','uncharacterized','Total']]\n\ntype_grid = (DF_types_lost/DF_types).fillna(0)[['regulatory','functional','genomic','uncharacterized','Total']]\n\ntype_grid = type_grid.reindex(['RNAseq-1','RNAseq-2','MA-1','MA-2','MA-3','Total'])\nDF_text = DF_text.reindex(['RNAseq-1','RNAseq-2','MA-1','MA-2','MA-3','Total'])\n\nsns.heatmap(type_grid,cmap='Blues',annot=DF_text,fmt='s',annot_kws={\"size\": 5})",
"_____no_output_____"
],
[
"# Types lost\nDF_lost = DF_types- DF_types_linked\nDF_lost",
"_____no_output_____"
],
[
"DF_types_linked.loc['Total']",
"_____no_output_____"
],
[
"DF_types_linked.loc['Total']/DF_types_linked.loc['Total'].iloc[:-1].sum()",
"_____no_output_____"
]
],
[
[
"# Figure 5e - Explained variance",
"_____no_output_____"
]
],
[
[
"# Load dataset - Downloaded from Sanchez-Vasquez et al 2019\nDF_ppGpp = pd.read_excel(os.path.join(DATA_DIR,'ppGpp_data','dataset_s01_from_sanchez_vasquez_2019.xlsx'),sheet_name='Data')\n\n# Get 757 genes described to be directly regulated by ppGpp\npaper_genes = DF_ppGpp[DF_ppGpp['1+2+ 5 min Category'].isin(['A','B'])].Synonym.values\nlen(paper_genes)",
"_____no_output_____"
],
[
"paper_genes_down = DF_ppGpp[DF_ppGpp['1+2+ 5 min Category'].isin(['A'])].Synonym.values",
"_____no_output_____"
],
[
"paper_genes_up = DF_ppGpp[DF_ppGpp['1+2+ 5 min Category'].isin(['B'])].Synonym.values",
"_____no_output_____"
],
[
"venn2((set(paper_genes_down),set(objs['combined'].show_enriched(147).index)),set_labels=('Genes downregulated from ppGpp binding to RNAP','Genes in Central Dogma I-modulon'))",
"_____no_output_____"
],
[
"pp_genes = {}\nfor k in objs['combined'].M.columns:\n pp_genes[k] = set(objs['combined'].show_enriched(k).index) & set(paper_genes)",
"_____no_output_____"
],
[
"set(objs['combined'].show_enriched(147).index) - set(paper_genes)",
"_____no_output_____"
]
],
[
[
"# Figure 5f - ppGpp Activities",
"_____no_output_____"
]
],
[
[
"ppGpp_X = pd.read_csv(os.path.join(DATA_DIR,'ppGpp_data','log_tpm_norm.csv'),index_col=0)\n\n# Get genes in both ICA data and ppGpp dataframe\nshared_genes = sorted(set(objs['combined'].X.index) & set(ppGpp_X.index))\n\n# Keep only genes in both dataframes\nppGpp_X = ppGpp_X.loc[shared_genes]\nM = objs['combined'].M.loc[shared_genes]\n\n# Center columns\nX = ppGpp_X.sub(ppGpp_X.mean(axis=0))",
"_____no_output_____"
],
[
"# Perform projection\nM_inv = np.linalg.pinv(M)\nA = np.dot(M_inv,X)\nA = pd.DataFrame(A,columns = X.columns, index = M.columns)",
"_____no_output_____"
],
[
"t0 = ['ppgpp__t0__1','ppgpp__t0__2','ppgpp__t0__3']\nt5 = ['ppgpp__t5__1','ppgpp__t5__2','ppgpp__t5__3']",
"_____no_output_____"
],
[
"ds4 = objs['combined'].metadata[objs['combined'].metadata['dataset'] == 'RNAseq-1'].index\ndf = pd.DataFrame(objs['combined'].A.loc[147,ds4])\ndf['group'] = ['RpoB\\nE672K' if 'rpoBE672K' in x else 'RpoB\\nE546V' if 'rpoBE546V' in x else 'WT RpoB' for x in df.index]\n\nfig,ax = plt.subplots(figsize=(2,2))\nsns.boxplot(data=df,y=147,x='group')\nsns.stripplot(data=df,y=147,x='group',dodge=True,color='k',jitter=0.3,s=3)\nax.set_ylabel('Central Dogma\\nI-modulon Activity',fontsize=7)\nax.set_xlabel('Carbon Source',fontsize=7)\nax.tick_params(labelsize=5)\nplt.tight_layout()",
"_____no_output_____"
]
],
[
[
"# Figure 5g: PCA of datasets",
"_____no_output_____"
]
],
[
[
"cdict = dict(zip(datasets[:-1],['tab:orange','black','tab:red','tab:green','tab:blue']))",
"_____no_output_____"
],
[
"exp_data = pd.read_csv(os.path.join(DATA_DIR,'processed_data','combined_bc.csv'),index_col=0)\n\npca = PCA()\nDF_weights = pd.DataFrame(pca.fit_transform(exp_data.T),index=exp_data.columns)\nDF_components = pd.DataFrame(pca.components_.T,index=exp_data.index)\nvar_cutoff = 0.99",
"_____no_output_____"
],
[
"fig,ax = plt.subplots(figsize=(1.5,1.5))\nfor name,group in objs['combined'].metadata.groupby('dataset'):\n idx = exp_data.loc[:,group.index.tolist()].columns.tolist()\n ax.scatter(DF_weights.loc[idx,0],\n DF_weights.loc[idx,1],\n c=cdict[name],\n label=name,alpha=0.8,s=3)\nax.set_xlabel('Component 1: %.1f%%'%(pca.explained_variance_ratio_[0]*100))\nax.set_ylabel('Component 2: %.1f%%'%(pca.explained_variance_ratio_[1]*100))\nax.legend(bbox_to_anchor=(1,-.2),ncol=2)",
"_____no_output_____"
]
],
[
[
"# Figure 5h: PCA of activites",
"_____no_output_____"
]
],
[
[
"pca = PCA()\nDF_weights = pd.DataFrame(pca.fit_transform(objs['combined'].A.T),index=objs['combined'].A.columns)\nDF_components = pd.DataFrame(pca.components_.T,index=objs['combined'].A.index)\nvar_cutoff = 0.99",
"_____no_output_____"
],
[
"fig,ax = plt.subplots(figsize=(1.5,1.5))\nfor name,group in objs['combined'].metadata.groupby('dataset'):\n idx = exp_data.loc[:,group.index.tolist()].columns.tolist()\n ax.scatter(DF_weights.loc[idx,0],\n DF_weights.loc[idx,1],\n c=cdict[name],\n label=name,alpha=0.8,s=3)\nax.set_xlabel('Component 1: %.1f%%'%(pca.explained_variance_ratio_[0]*100))\nax.set_ylabel('Component 2: %.1f%%'%(pca.explained_variance_ratio_[1]*100))\nax.legend(bbox_to_anchor=(1,-.2),ncol=2)",
"_____no_output_____"
]
],
[
[
"# Supplementary Figure 7",
"_____no_output_____"
],
[
"## Panel a: Explained variance of lost i-modulons",
"_____no_output_____"
]
],
[
[
"kept_mods = set(DF_links.name2.unique())",
"_____no_output_____"
],
[
"all_mods = set([ds+'_'+str(name) for ds in datasets[:-1] for name in objs[ds].M.columns])",
"_____no_output_____"
],
[
"missing_mods = all_mods - kept_mods",
"_____no_output_____"
],
[
"from util import plot_rec_var",
"_____no_output_____"
],
[
"missing_var = []\nfor mod in tqdm(missing_mods):\n ds,comp = mod.split('_')\n missing_var.append(plot_rec_var(objs[ds],modulons=[int(comp)],plot=False).values[0])\n if plot_rec_var(objs[ds],modulons=[int(comp)],plot=False).values[0] > 10:\n print(mod)\n \nkept_var = []\nfor mod in tqdm(kept_mods):\n ds,comp = mod.split('_')\n kept_var.append(plot_rec_var(objs[ds],modulons=[int(comp)],plot=False).values[0])",
"_____no_output_____"
],
[
"plt.hist(missing_var,range=(0,20),bins=20)\nplt.hist(kept_var,range=(0,20),bins=20,alpha=0.5)\nplt.xticks(range(0,21,2))\nplt.xlabel('Percent Variance Explained')\nplt.ylabel('Count')",
"_____no_output_____"
],
[
"stats.mannwhitneyu(missing_var,kept_var)",
"_____no_output_____"
],
[
"fig,ax = plt.subplots(figsize=(1.5,1.5))\nplt.hist(missing_var,range=(0,1),bins=10)\nplt.hist(kept_var,range=(0,1),bins=10,alpha=0.5)\nplt.xlabel('Percent Variance Explained')\nplt.ylabel('Count')",
"_____no_output_____"
]
],
[
[
"## Panel b: Classes of new i-modulons",
"_____no_output_____"
]
],
[
[
"type_dict",
"_____no_output_____"
],
[
"new_counts = DF_binarized[(DF_binarized.total==0)].type.value_counts()\nnew_counts",
"_____no_output_____"
],
[
"new_reg = DF_binarized[(DF_binarized.total==0) & (DF_binarized.type==-2)].index\nnew_bio = DF_binarized[(DF_binarized.total==0) & (DF_binarized.type==-3)].index\nnew_gen = DF_binarized[(DF_binarized.total==0) & (DF_binarized.type==-4)].index\nnew_unc = DF_binarized[(DF_binarized.total==0) & (DF_binarized.type==-5)].index",
"_____no_output_____"
],
[
"new_single = []\nfor k in new_unc:\n if objs['combined'].show_enriched(k)['weight'].max() > 0.4:\n new_single.append(k)",
"_____no_output_____"
],
[
"[len(new_reg),len(new_bio),len(new_gen),len(new_unc)-len(new_single),len(new_single)]",
"_____no_output_____"
],
[
"plt.pie([len(new_reg),len(new_bio),len(new_gen),len(new_unc)-len(new_single),len(new_single)],\n labels=['Regulatory','Functional','Genomic','Uncharacterized','Single Gene'])",
"_____no_output_____"
]
],
[
[
"## Panel c: Histogram of IC gene coefficients",
"_____no_output_____"
]
],
[
[
"fig,ax = plt.subplots(figsize=(2,2))\nplt.hist(objs['combined'].M[31])\nplt.yscale('log')\nplt.xlabel('IC Gene Coefficient')\nplt.ylabel('Count (Log-scale)')\nplt.vlines([objs['combined'].thresholds[31],-objs['combined'].thresholds[31]],0,3000,\n linestyles='dashed',linewidth=0.5)",
"_____no_output_____"
]
],
[
[
"## Panel e: F1-score chart",
"_____no_output_____"
]
],
[
[
"reg_links = DF_links[(DF_links.type1 == 'regulatory') & (DF_links.desc1 == DF_links.desc2)]\nreg_links.head()",
"_____no_output_____"
],
[
"fig,ax=plt.subplots(figsize=(1.5,2))\nstruct = []\nfor name,group in reg_links.groupby('ds2'):\n struct.append(pd.DataFrame(list(zip([name]*len(group),\n DF_categories.loc[group.name1,'f1score'].values,\n DF_categories.loc[group.name2,'f1score'].values)),\n columns=['title','full','partial']))\nDF_stats = pd.concat(struct)\nDF_stats = DF_stats.melt(id_vars='title')\nsns.boxplot(data=DF_stats,x='variable',y='value',order=['partial','full'])\nsns.stripplot(data=DF_stats,x='variable',y='value',color='k',s=2,jitter=0.3,order=['partial','full'])",
"_____no_output_____"
],
[
"DF_stats[DF_stats.variable=='partial'].value.mean()",
"_____no_output_____"
],
[
"DF_stats[DF_stats.variable=='full'].value.mean()",
"_____no_output_____"
],
[
"stats.wilcoxon(DF_stats[DF_stats.variable=='partial'].value,DF_stats[DF_stats.variable=='full'].value)",
"_____no_output_____"
]
],
[
[
"## Panel f: Spearman R between activities",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import r2_score",
"_____no_output_____"
],
[
"linked_pearson = []\nfor i,row in DF_links.iterrows():\n partial_acts = objs[row.ds2].A.loc[row.comp2]\n full_acts = objs[row.ds1].A.loc[row.comp1,partial_acts.index]\n r,_ = stats.spearmanr(full_acts,partial_acts)\n linked_pearson.append(abs(r))",
"_____no_output_____"
],
[
"sum(np.array(linked_pearson) > 0.6) / len(linked_pearson)",
"_____no_output_____"
],
[
"fig,ax = plt.subplots(figsize=(2,2))\nax.hist(linked_pearson,bins=20)\nax.set_xlabel('Absolute Spearman R between activities of linked i-modulons')\nax.set_ylabel('Count')",
"_____no_output_____"
]
],
[
[
"# New biological component",
"_____no_output_____"
]
],
[
[
"rRNA = 0\ntRNA = 0\nygene = 0\npolyamine = 0\nfor gene in objs['combined'].show_enriched(147)['product']:\n if 'rRNA' in gene or 'ribosom' in gene:\n rRNA += 1\n elif 'tRNA' in gene:\n tRNA += 1\n elif 'putative' in gene or 'family' in gene:\n ygene += 1\n elif 'spermidine' in gene or 'YEEF' in gene:\n polyamine +=1\n else:\n print(gene)",
"G6999-MONOMER\nRNase P protein component\nATP-dependent DNA helicase Rep\ninner membrane protein YhbE\nATP-dependent RNA helicase DbpA\nlong-chain fatty acid outer membrane channel / bacteriophage T2 receptor\nYICE-MONOMER\nSDAC-MONOMER\nN<sup>6</sup>-L-threonylcarbamoyladenine synthase, TsaB subunit\nH2PTERIDINEPYROPHOSPHOKIN-MONOMER\nGTPase ObgE\npoly(A) polymerase I\nKdo<sub>2</sub>-lipid A phosphotransferase\nLYSP-MONOMER\nlipid II flippase MurJ\nDUF2594 domain-containing protein YecF\norotidine-5'-phosphate decarboxylase\nATP-dependent RNA helicase SrmB\nRNA polymerase-binding ATPase and RNAP recycling factor\ntruncated RNase PH\ninosine/guanosine kinase\nEG10812-MONOMER\nATP-dependent RNA helicase RhlE\nEF-P-lysine lysyltransferase\ninositol-phosphate phosphatase\nDNA-binding transcriptional dual regulator Fis\n"
],
[
"objs['combined'].show_enriched(147)",
"_____no_output_____"
]
]
] |
4ab0b24a8a26408eb5d6246e0c115c65f04c4000 | 728,741 | ipynb | Jupyter Notebook | _notebooks/2021-07-11-career-village-2.ipynb | recohut/notebook | 610670666a1c3d8ef430d42f712ff72ecdbd8f86 | ["Apache-2.0"] | stars: null | issues: 1 (2022-01-12) | forks: 1 (2021-08-13) | 216.822672 | 179,586 | 0.664288 |
[
[
[
"# CareerVillage Questions Recommendation Content-based Model\n> We’ll develop an implicit content-based filtering system for recommending questions to professionals. Given a question-professional pair, our model will predict how likely the professional is to answer the question. This model can then be used to determine what new (or still-unanswered) questions a professional is most likely to answer, and those questions can be sent to the professional either via email or via their landing page on the CareerVillage site.\n\n- toc: true\n- badges: true\n- comments: true\n- categories: [BERT, Education]\n- author: \"<a href='https://brendanhasz.github.io/2019/05/20/career-village'>Brendan Hasz</a>\"\n- image:",
"_____no_output_____"
],
[
"## Introduction\n[CareerVillage.org](http://careervillage.org/) is a nonprofit that crowdsources career advice for underserved youth. The platform uses a Q&A style similar to *StackOverflow* or *Quora* to provide students with answers to any question about any career. \n\nTo date, 25K+ volunteers have created profiles and opted in to receive emails when a career question is a good fit for them. To help students get the advice they need, the team at [CareerVillage.org](http://careervillage.org/) needs to be able to send the right questions to the right volunteers. The notifications sent to volunteers seem to have the greatest impact on how many questions are answered.\n\n**Objective**: develop a method to recommend relevant questions to the professionals who are most likely to answer them.",
"_____no_output_____"
],
[
"Here we’ll develop an implicit content-based filtering system for recommending questions to professionals. Given a question-professional pair, our model will predict how likely the professional is to answer the question. This model can then be used to determine what new (or still-unanswered) questions a professional is most likely to answer, and those questions can be sent to the professional either via email or via their landing page on the CareerVillage site.\n\nThe model will go beyond using only tag similarity information, and also extract information from the body of the question text, the question title, as well as information about the student who asked the question and the professional who may (hopefully) be able to answer it. We’ll be using BeautifulSoup and nltk for processing the text data, bert-as-service to create sentence and paragraph embeddings using a pre-trained BERT language model, and XGBoost to generate predictions as to how likely professionals are to answer student questions.",
"_____no_output_____"
],
[
"## Environment Setup",
"_____no_output_____"
]
],
[
[
"import subprocess\nimport re\nimport os\n\n# SciPy stack\nimport numpy as np\nimport pandas as pd\nimport matplotlib\nimport matplotlib.pyplot as plt\n\n# Sklearn\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.preprocessing import RobustScaler\nfrom sklearn.impute import SimpleImputer\nfrom sklearn.metrics import roc_auc_score\nfrom sklearn.decomposition import PCA\n\n# XGBoost\nfrom xgboost import XGBClassifier\n\n# NLP\nimport html as ihtml\nfrom bs4 import BeautifulSoup\nfrom nltk import tokenize\nfrom scipy.sparse import coo_matrix\n\n# Plot settings\n%config InlineBackend.figure_format = 'svg'\nCOLORS = plt.rcParams['axes.prop_cycle'].by_key()['color']\n\n# Target encoder and other utilities\n!pip install git+http://github.com/brendanhasz/dsutils.git\nfrom dsutils.encoding import TargetEncoderCV\nfrom dsutils.evaluation import metric_cv\nfrom dsutils.evaluation import permutation_importance_cv\nfrom dsutils.evaluation import plot_permutation_importance\n\n# BERT-as-service\n!pip install bert-serving-server\n!pip install bert-serving-client",
"Collecting git+http://github.com/brendanhasz/dsutils.git\n Cloning http://github.com/brendanhasz/dsutils.git to /tmp/pip-req-build-xjhb9ybq\n Running command git clone -q http://github.com/brendanhasz/dsutils.git /tmp/pip-req-build-xjhb9ybq\nRequirement already satisfied (use --upgrade to upgrade): dsutils==0.1 from git+http://github.com/brendanhasz/dsutils.git in /usr/local/lib/python3.7/dist-packages\nBuilding wheels for collected packages: dsutils\n Building wheel for dsutils (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for dsutils: filename=dsutils-0.1-cp37-none-any.whl size=43743 sha256=393dad55ea34bedfcc7851ca5df181009c8f97dc6b3d77a1732b2bde51f0adcd\n Stored in directory: /tmp/pip-ephem-wheel-cache-chp0g631/wheels/a1/ff/2a/75bdc08e9c96d4917294db5e6faf99ef3de673f37992c52278\nSuccessfully built dsutils\nRequirement already satisfied: bert-serving-server in /usr/local/lib/python3.7/dist-packages (1.10.0)\nRequirement already satisfied: termcolor>=1.1 in /usr/local/lib/python3.7/dist-packages (from bert-serving-server) (1.1.0)\nRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from bert-serving-server) (1.21.0)\nRequirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from bert-serving-server) (1.15.0)\nRequirement already satisfied: GPUtil>=1.3.0 in /usr/local/lib/python3.7/dist-packages (from bert-serving-server) (1.4.0)\nRequirement already satisfied: pyzmq>=17.1.0 in /usr/local/lib/python3.7/dist-packages (from bert-serving-server) (22.1.0)\nRequirement already satisfied: bert-serving-client in /usr/local/lib/python3.7/dist-packages (1.10.0)\nRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from bert-serving-client) (1.21.0)\nRequirement already satisfied: pyzmq>=17.1.0 in /usr/local/lib/python3.7/dist-packages (from bert-serving-client) (22.1.0)\n"
],
[
"import warnings\nwarnings.filterwarnings('ignore')\n\nseed = 13\nimport random\nrandom.seed(seed)\nnp.random.seed(seed)\n\n%reload_ext google.colab.data_table",
"_____no_output_____"
],
[
"!pip install -q watermark\n%reload_ext watermark\n%watermark -m -iv",
"Compiler : GCC 7.5.0\nOS : Linux\nRelease : 5.4.104+\nMachine : x86_64\nProcessor : x86_64\nCPU cores : 2\nArchitecture: 64bit\n\nmatplotlib: 3.2.2\nre : 2.2.1\npandas : 1.3.0\nIPython : 5.5.0\nnumpy : 1.21.0\nnltk : 3.2.5\n\n"
]
],
[
[
"## Data Loading",
"_____no_output_____"
],
[
"The dataset consists of a set of data tables - 15 tables in all, but we’re only going to use a few in this project. There’s a table which contains information about each student who has an account on CareerVillage, another table with information about each professional with an account on the site, another table with each question that’s been asked on the site, etc.\n\nThe diagram below shows each table we’ll use, and how values in columns in those tables relate to each other. For example, we can figure out what student asked a given question by looking up where the value in the questions_author_id column of the questions.csv table occurs in the students_id column of the students.csv table. Note that there’s a lot of other information (columns) in the tables - in the diagram I’ve left out columns which don’t contain relationships to other tables for clarity.",
"_____no_output_____"
],
[
"*Figure: diagram of the data tables used here and the columns that link them (image not preserved in this export).*",
"_____no_output_____"
]
],
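As a minimal illustration of the key lookups described above (with made-up IDs, not rows from the real tables), linking a question back to its author is a single pandas merge:

```python
import pandas as pd

# Toy stand-ins for students.csv and questions.csv (IDs invented for illustration)
students = pd.DataFrame({'students_id': ['s1', 's2'],
                         'students_location': ['Boston, MA', None]})
questions = pd.DataFrame({'questions_id': ['q1'],
                          'questions_author_id': ['s2']})

# The author of each question: match questions_author_id against students_id
merged = questions.merge(students,
                         left_on='questions_author_id',
                         right_on='students_id')
print(merged['students_id'].tolist())  # ['s2']
```

The same pattern, a column in one table holding IDs from another, is how every relationship between these tables is resolved.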
[
[
"!pip install -q -U kaggle\n!pip install --upgrade --force-reinstall --no-deps kaggle\n!mkdir ~/.kaggle\n!cp /content/drive/MyDrive/kaggle.json ~/.kaggle/\n!chmod 600 ~/.kaggle/kaggle.json\n!kaggle competitions download -c data-science-for-good-careervillage",
"_____no_output_____"
],
[
"!unzip /content/data-science-for-good-careervillage.zip",
"_____no_output_____"
],
[
"# Load tables\nfiles = ['questions',\n 'answers',\n 'students',\n 'professionals',\n 'tag_questions',\n 'tag_users',\n 'tags']\ndfs = dict()\nfor file in files:\n dfs[file] = pd.read_csv(file+'.csv', dtype=str)\n\n# Convert date cols to datetime\ndatetime_cols = {\n 'answers': 'answers_date_added',\n 'professionals': 'professionals_date_joined',\n 'questions': 'questions_date_added',\n 'students': 'students_date_joined',\n}\nfor df, col in datetime_cols.items():\n dfs[df][col] = pd.to_datetime(dfs[df][col].str.slice(0, 19),\n format='%Y-%m-%d %H:%M:%S')",
"_____no_output_____"
]
],
[
[
"Let’s take a quick look at a few rows from each data table to get a feel for the data contained in each. The questions.csv table contains information about each question that is asked on the CareerVillage site, including the question text, the title of the question post, when it was posted, and what student posted it.",
"_____no_output_____"
]
],
[
[
"dfs['questions'].head()",
"_____no_output_____"
]
],
[
[
"The answers.csv table stores information about professionals’ answers to the questions which were posted, including the answer text, when the answer was posted, and what professional posted it.",
"_____no_output_____"
]
],
[
[
"dfs['answers'].head()",
"_____no_output_____"
]
],
[
[
"The students.csv table stores an ID for each student (which we’ll use to identify each unique student in the other tables), the student’s location (most of which are empty), and the date the student joined the CareerVillage site.",
"_____no_output_____"
]
],
[
[
"dfs['students'].head()",
"_____no_output_____"
]
],
[
[
"Similarly, the professionals.csv table contains information about each professional who has a CareerVillage account, including their ID, location, industry, and the date they joined the site.",
"_____no_output_____"
]
],
[
[
"dfs['professionals'].head()",
"_____no_output_____"
]
],
[
[
"The remaining three tables store information about tags. When students post questions, they can tag their questions with keywords to help professionals find them. Students can also set tags for themselves (to indicate what fields they’re interested in, for example nursing, or what topics they need help with, for example college-admission). Professionals can subscribe to tags, and they’ll get notifications of questions which have the tags they subscribe to.\n\nThe tag_questions.csv table has a list of tag ID - question ID pairs. This will allow us to figure out what tags each question has: for each question, we can look up rows in tag_questions where the question ID matches that question.",
"_____no_output_____"
]
],
[
[
"dfs['tag_questions'].head()",
"_____no_output_____"
]
],
[
[
"Similarly, tag_users.csv has a list of tag ID - user ID pairs, which we can use to figure out what tags each student has, or what tags each professional subscribes to.\n\n",
"_____no_output_____"
]
],
[
[
"dfs['tag_users'].head()",
"_____no_output_____"
]
],
[
[
"Notice that the tag IDs in the previous two tables aren’t the text of the tag, they’re just an arbitrary integer. In order to figure out what actual tags (that is, the tag text) each question, student, or professional has, we’ll need to use the tags.csv table, which contains the tag text for each tag ID.\n\n",
"_____no_output_____"
]
],
[
[
"dfs['tags'].head()",
"_____no_output_____"
]
],
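To make the two-step lookup concrete, here is a sketch with invented tag IDs showing how a question's tag text is recovered by joining tag_questions to tags:

```python
import pandas as pd

# Hypothetical miniature versions of tags.csv and tag_questions.csv
tags = pd.DataFrame({'tags_tag_id': ['27490', '461'],
                     'tags_tag_name': ['college', 'computer-science']})
tag_questions = pd.DataFrame({'tag_questions_tag_id': ['27490', '461'],
                              'tag_questions_question_id': ['q1', 'q1']})

# Join the tag text onto the tag ID - question ID pairs
q_tags = tag_questions.merge(tags,
                             left_on='tag_questions_tag_id',
                             right_on='tags_tag_id')
tag_list = q_tags.loc[q_tags['tag_questions_question_id'] == 'q1',
                      'tags_tag_name'].tolist()
print(sorted(tag_list))  # ['college', 'computer-science']
```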
[
[
"Now that we’ve loaded the data, we can start linking up the tables to construct the single large matrix we’ll need to perform prediction.",
"_____no_output_____"
],
[
"In order to use a machine learning algorithm to predict how likely professionals are to answer questions, we’ll need to transform this set of tables into a single large matrix. Each row will correspond to a potential question-professional pair, and each column will correspond to a feature about that pair. Features could include how similar the question text is to questions the professional has previously answered, how similar the question’s tags are to the professional’s tags, the date when the question was added, the date when the professional joined, etc. A final column of the matrix will be our target variable: whether this question-professional pair actually occurred. That is, whether the professional actually answered the question (in which case the value in the column will be 1), or not (0).",
"_____no_output_____"
],
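A tiny sketch of the matrix this describes (the feature columns and values here are invented placeholders, not the features built later in the notebook):

```python
import pandas as pd

# Each row is one question-professional pair; `target` is 1 if the
# professional actually answered the question, 0 for a negative sample
pairs = pd.DataFrame({
    'question_id': ['q1', 'q1', 'q2'],
    'professional_id': ['p1', 'p2', 'p1'],
    'tag_similarity': [0.9, 0.1, 0.4],    # invented feature
    'days_since_joined': [120, 30, 121],  # invented feature
    'target': [1, 0, 0],
})

# Features X and labels y for a classifier such as XGBoost
X = pairs.drop(columns=['question_id', 'professional_id', 'target'])
y = pairs['target']
print(X.shape, int(y.sum()))  # (3, 2) 1
```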
[
"### Merge tags to questions, students, and professionals\nFirst we’ll join the tags information to the questions, students, and professionals tables, so that we have a list of tags for each question, student, and professional.\n\nUnfortunately the tag text is a bit inconsistent: some tags have the hashtag character (#) before the tag text, and some don’t. We can remove hashtag characters to ensure that all the tag data contains just text:",
"_____no_output_____"
]
],
[
[
"def remove_hashtags(text):\n if type(text) is float:\n return ''\n else:\n return re.sub(r\"#\", \"\", text)\n \n# Remove hashtag characters\ndfs['tags']['tags_tag_name'] = \\\n dfs['tags']['tags_tag_name'].apply(remove_hashtags)",
"_____no_output_____"
]
],
[
[
"Now we can add a list of tags to each question in the questions table. We’ll make a function which creates a list of tags for each user/question, then merge the tag text to the questions table.",
"_____no_output_____"
]
],
[
[
"def agg_tags(df_short, df_long, short_col, long_col, long_col_agg):\n \"\"\"Aggregate elements in a shorter df by joining w/ spaces\"\"\"\n grouped = df_long.groupby(long_col)\n joined_tags = grouped.agg({long_col_agg: lambda x: ' '.join(x)})\n out_df = pd.DataFrame(index=list(df_short[short_col]))\n out_df['aggs'] = joined_tags\n return list(out_df['aggs'])\n\n# Merge tags to questions\ntag_questions = dfs['tag_questions'].merge(dfs['tags'],\n left_on='tag_questions_tag_id',\n right_on='tags_tag_id')\nquestions = dfs['questions']\nquestions['questions_tags'] = \\\n agg_tags(questions, tag_questions,\n 'questions_id', 'tag_questions_question_id', 'tags_tag_name')",
"_____no_output_____"
]
],
[
[
"Then we can add a list of tags to each professional and student. First we’ll join the tag text to the tag_users table, and then add a list of tags for each student and professional to their respective tables.",
"_____no_output_____"
]
],
[
[
"# Merge tag text to tags_users\ntag_users = dfs['tag_users'].merge(dfs['tags'], \n left_on='tag_users_tag_id',\n right_on='tags_tag_id')\n\n# Merge tags to students\nstudents = dfs['students']\nstudents['students_tags'] = \\\n agg_tags(students, tag_users, \n 'students_id', 'tag_users_user_id', 'tags_tag_name')\n\n# Merge tags to professionals\nprofessionals = dfs['professionals']\nprofessionals['professionals_tags'] = \\\n agg_tags(professionals, tag_users, \n 'professionals_id', 'tag_users_user_id', 'tags_tag_name')",
"_____no_output_____"
]
],
[
[
"Now the questions, students, and professionals tables contain columns with space-separated lists of their tags.",
"_____no_output_____"
]
],
[
[
"## BERT embeddings of the questions\n\nFor our predictive algorithm to use information about the question text, we'll have to convert the text information into numeric values. To capture information about the content of the question text, we'll use a pre-trained BERT model to generate embeddings of the text of each question. BERT ([Bidirectional Encoder Representations from Transformers](https://arxiv.org/abs/1810.04805)) is a deep neural network model which uses layers of attention networks ([Transformers](https://arxiv.org/abs/1706.03762)) to model the next word in a sentence or paragraph given the preceding words. We'll take a pre-trained BERT model, pass it the text of the questions, and then use the activations of a layer near the end of the network as our features. These features (the \"encodings\" or \"embeddings\", which I'll use interchangeably) should capture information about the content of the question, while encoding the question text into a vector of a fixed length, which our prediction algorithm requires!\n\n[Han Xiao](http://hanxiao.github.io/) has a great package called [bert-as-service](http://github.com/hanxiao/bert-as-service) which can generate embeddings using a pre-trained BERT model (fun fact: they're also one of the people behind [fashion MNIST](http://github.com/zalandoresearch/fashion-mnist)). Basically the package runs a BERT model on a server, and one can send that server requests (consisting of sentences) to generate embeddings of those sentences.\n\nNote that another valid method for generating numeric representations of text would be to use latent Dirichlet allocation (LDA) topic modelling. We could treat each question as a bag of words, use LDA to model the topics, and then use the estimated topic probability distribution for each question as our \"embedding\" for that question. Honestly using LDA might even be a better way to capture information about the question content, because we're mostly interested in the *topic* of the question (which LDA exclusively models), rather than the semantic content (which BERT also models).",
"_____no_output_____"
],
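The LDA alternative isn't implemented in this notebook, but a rough scikit-learn sketch (toy questions and arbitrary parameters, purely for illustration) would look something like:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "what classes should I take to become a nurse",
    "how do I get into a good computer science program",
    "is nursing school hard to get into",
]

# Bag-of-words counts, then a topic model; each row of topic_probs is a
# probability distribution over topics and could serve as the "embedding"
counts = CountVectorizer(stop_words='english').fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_probs = lda.fit_transform(counts)
print(topic_probs.shape)  # (3, 2)
```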
[
"### Clean the sentences\n\nBefore embedding the question title and body text, we'll first have to clean that data. Let's remove weird whitespace characters, HTML tags, and other things using [BeautifulSoup](https://www.crummy.com/software/BeautifulSoup/).\n\nSome students also included hashtags directly in the question body text. We'll just remove the hashtag characters from the text. A different option would be to pull out words after hashtags and add them to the tag list for the question, and then remove them from the question text. But for now we'll just remove the hashtag character and keep the tag text in the question body text.",
"_____no_output_____"
]
],
[
[
"# Pull out a list of question text and titles\nquestions_list = list(questions['questions_body'])\nquestion_title_list = list(questions['questions_title'])",
"_____no_output_____"
],
[
"def clean_text(text):\n if type(text) is float:\n return ' '\n text = BeautifulSoup(ihtml.unescape(text), \"html.parser\").text\n text = re.sub(r\"http[s]?://\\S+\", \"\", text)\n text = re.sub(r\"\\s+\", \" \", text)\n text = re.sub(r\"#\", \"\", text) #just remove hashtag character\n return text",
"_____no_output_____"
],
[
"# Clean the questions text and titles\nquestions_list = [clean_text(s) for s in questions_list]\nquestion_title_list = [clean_text(s) for s in question_title_list]",
"_____no_output_____"
]
],
[
[
"Because BERT can only encode a single sentence at a time, we also need to ensure each question is a list of strings, where each sentence is a string, and each list corresponds to a single question (so we'll have a list of lists of strings). So, let's use [nltk](https://www.nltk.org/)'s [sent_tokenize](https://www.nltk.org/api/nltk.tokenize.html) to separate the questions into lists of sentences.",
"_____no_output_____"
]
],
[
[
"# Convert questions to lists of sentences\nquestions_list = [tokenize.sent_tokenize(s) for s in questions_list]",
"_____no_output_____"
]
],
[
[
"### Start the BERT server\n\nTo use bert-as-service to generate features, we'll first have to download the model, start the server, and then start the client service which we'll use to request the sentence encodings.",
"_____no_output_____"
]
],
[
[
"# Download and unzip the model\n!wget http://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip\n!unzip uncased_L-12_H-768_A-12.zip",
"--2019-05-20 02:23:14-- http://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip\r\nResolving storage.googleapis.com (storage.googleapis.com)... 108.177.127.128, 2a00:1450:4013:c07::80\r\nConnecting to storage.googleapis.com (storage.googleapis.com)|108.177.127.128|:80... connected.\r\nHTTP request sent, awaiting response... 200 OK\r\nLength: 407727028 (389M) [application/zip]\r\nSaving to: ‘uncased_L-12_H-768_A-12.zip’\r\n\r\nuncased_L-12_H-768_ 100%[===================>] 388.84M 50.3MB/s in 8.0s \r\n\r\n2019-05-20 02:23:22 (48.4 MB/s) - ‘uncased_L-12_H-768_A-12.zip’ saved [407727028/407727028]\r\n\r\nArchive: uncased_L-12_H-768_A-12.zip\r\n creating: uncased_L-12_H-768_A-12/\r\n inflating: uncased_L-12_H-768_A-12/bert_model.ckpt.meta \r\n inflating: uncased_L-12_H-768_A-12/bert_model.ckpt.data-00000-of-00001 \r\n inflating: uncased_L-12_H-768_A-12/vocab.txt \r\n inflating: uncased_L-12_H-768_A-12/bert_model.ckpt.index \r\n inflating: uncased_L-12_H-768_A-12/bert_config.json \r\n"
],
[
"# Start the BERT server\nbert_command = 'bert-serving-start -model_dir /content/uncased_L-12_H-768_A-12'\nprocess = subprocess.Popen(bert_command.split(), stdout=subprocess.PIPE)",
"_____no_output_____"
],
[
"# Start the BERT client\nfrom bert_serving.client import BertClient",
"_____no_output_____"
],
[
"bc = BertClient()",
"_____no_output_____"
]
],
[
[
"Now that we've started up the client and the server, we can use bert-as-service to embed some sentences!",
"_____no_output_____"
]
],
[
[
"encodings = bc.encode(['an example sentence',\n 'a different example sentence'])",
"_____no_output_____"
]
],
[
[
"The output encoding is a vector with 768 elements for each sentence:",
"_____no_output_____"
]
],
[
[
"encodings.shape",
"_____no_output_____"
]
],
[
[
"### Embed the question titles with BERT\n\nEach title we'll treat as a single sentence, so we can use bert-as-service to encode the titles really easily:",
"_____no_output_____"
]
],
[
[
"%%time\nquestion_title_embeddings = bc.encode(question_title_list)",
"/opt/conda/lib/python3.6/site-packages/bert_serving/client/__init__.py:286: UserWarning: some of your sentences have more tokens than \"max_seq_len=25\" set on the server, as consequence you may get less-accurate or truncated embeddings.\nhere is what you can do:\n- disable the length-check by create a new \"BertClient(check_length=False)\" when you do not want to display this warning\n- or, start a new server with a larger \"max_seq_len\"\n '- or, start a new server with a larger \"max_seq_len\"' % self.length_limit)\n"
]
],
[
[
"Now we have 768-dimensional embeddings for each title of our ~24k questions.",
"_____no_output_____"
]
],
[
[
"question_title_embeddings.shape",
"_____no_output_____"
]
],
[
[
"### Compute average embedding of each sentence in questions\n\nMost of the time, the questions' body text contain multiple sentences - but the BERT models we're using were only trained on single sentences. To generate an encoding of the entire paragraph for each question, we'll use BERT to encode each sentence in that question, and then take the average of their encodings.",
"_____no_output_____"
]
],
[
[
"def bert_embed_paragraphs(paragraphs):\n \"\"\"Embed paragraphs by taking the average embedding of each sentence\n \n Parameters\n ----------\n paragraphs : list of lists of str\n The paragraphs. Each element should correspond to a paragraph\n and each paragraph should be a list of str, where each str is \n a sentence.\n \n Returns\n -------\n embeddings : numpy ndarray of size (len(paragraphs), 768)\n The paragraph embeddings\n \"\"\"\n \n # Convert to single list\n # (this is b/c bert-as-service is faster w/ one large request\n # than with many small requests)\n sentences = []\n ids = []\n for i in range(len(paragraphs)):\n sentences += paragraphs[i]\n ids += [i]*len(paragraphs[i])\n \n # Embed the sentences\n embeddings = bc.encode(sentences)\n \n # Average by paragraph id\n Np = len(paragraphs) #number of paragraphs\n n_dims = embeddings.shape[1]\n embeddings_out = np.full([Np, n_dims], np.nan)\n ids = np.array(ids)\n the_range = np.arange(len(ids))\n for i in range(n_dims):\n embeddings_out[:,i] = coo_matrix((embeddings[:,i], (ids, the_range))).mean(axis=1).ravel()\n return embeddings_out",
"_____no_output_____"
],
[
"%%time\n\n# Embed the questions\nquestions_embeddings = bert_embed_paragraphs(questions_list)",
"/opt/conda/lib/python3.6/site-packages/bert_serving/client/__init__.py:286: UserWarning: some of your sentences have more tokens than \"max_seq_len=25\" set on the server, as consequence you may get less-accurate or truncated embeddings.\nhere is what you can do:\n- disable the length-check by create a new \"BertClient(check_length=False)\" when you do not want to display this warning\n- or, start a new server with a larger \"max_seq_len\"\n '- or, start a new server with a larger \"max_seq_len\"' % self.length_limit)\n"
]
],
[
[
"### Reduce dimensionality of the embeddings using PCA\n\nThe embeddings have a pretty large dimensionality for the amount of data we have. To reduce the number of dimensions while keeping the most useful information, we'll perform dimensionality reduction using principal components analysis (PCA). We'll just take the top 10 dimensions which explain the most variance of the embeddings.",
"_____no_output_____"
]
],
[
[
"%%time\n\n# Reduce BERT embedding dimensionality w/ PCA\npca = PCA(n_components=10)\nquestion_title_embeddings = pca.fit_transform(question_title_embeddings)\nquestions_embeddings = pca.fit_transform(questions_embeddings)",
"CPU times: user 3.49 s, sys: 968 ms, total: 4.46 s\nWall time: 2.52 s\n"
]
],
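Keeping 10 components is a judgment call; one way to sanity-check such a choice is to inspect how much variance those components retain (a sketch using random stand-in data, since the real embeddings live on the BERT server):

```python
import numpy as np
from sklearn.decomposition import PCA

# Random stand-in for a (n_questions, 768) BERT embedding matrix
rng = np.random.RandomState(0)
fake_embeddings = rng.randn(500, 768)

pca = PCA(n_components=10)
reduced = pca.fit_transform(fake_embeddings)
print(reduced.shape)                        # (500, 10)
print(pca.explained_variance_ratio_.sum())  # fraction of variance retained
```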
[
[
"### Add embeddings to tables\n\nNow that we have matrixes corresponding to the title and question encodings, we need to replace the question body text in our original table with the encoding values we generated.",
"_____no_output_____"
]
],
[
[
"# Drop the text data\nquestions.drop('questions_title', axis='columns', inplace=True)\nquestions.drop('questions_body', axis='columns', inplace=True)\nanswers = dfs['answers']\nanswers.drop('answers_body', axis='columns', inplace=True)\n\ndef add_matrix_to_df(df, X, col_name):\n for iC in range(X.shape[1]):\n df[col_name+str(iC)] = X[:,iC]\n \n# Add embeddings data\nadd_matrix_to_df(questions, questions_embeddings, 'questions_embeddings')\nadd_matrix_to_df(questions, question_title_embeddings, 'question_title_embeddings')",
"_____no_output_____"
]
],
[
[
"Instead of containing the raw text of the question, our `questions` table now contains the 10-dimensional embeddings of the questions and the question titles.",
"_____no_output_____"
]
],
[
[
"questions.head()",
"_____no_output_____"
]
],
[
[
"## Compute average embedding of questions each professional has answered\n\nAn important predictor of whether a professional will answer a question is likely how similar that question is to ones they have answered in the past. To create a feature which measures how similar a question is to ones a given professional has answered, we'll compute the average embedding of questions each professional has answered. Then later, we'll add a feature which measures the distance between a question's embedding and professional's average embedding.\n\nHowever, to connect the questions to the professionals, we'll first have to merge the questions to the answers table, and then merge the result to the professionals table.",
"_____no_output_____"
]
],
[
[
"# Merge questions and answers\nanswer_questions = answers.merge(questions, \n left_on='answers_question_id',\n right_on='questions_id')\n\n# Merge answers and professionals\nprofessionals_questions = answer_questions.merge(professionals, how='left',\n left_on='answers_author_id',\n right_on='professionals_id')",
"_____no_output_____"
]
],
[
[
"Then we can compute the average question embedding for all questions each professional has answered.",
"_____no_output_____"
]
],
[
[
"# Compute mean question embedding of all Qs each professional has answered\naggs = dict((c, 'mean') for c in professionals_questions if 'questions_embeddings' in c)\nmean_prof_q_embeddings = (professionals_questions\n .groupby('professionals_id')\n .agg(aggs))\nmean_prof_q_embeddings.columns = ['mean_'+x for x in mean_prof_q_embeddings.columns]\nmean_prof_q_embeddings.reset_index(inplace=True)\n\n# Add mean Qs embeddings to professionals table\nprofessionals = professionals.merge(mean_prof_q_embeddings,\n how='left', on='professionals_id')",
"_____no_output_____"
]
],
[
[
"And we'll do the same thing for the question titles:",
"_____no_output_____"
]
],
[
[
"# Compute mean question title embedding of all Qs each professional has answered\naggs = dict((c, 'mean') for c in professionals_questions if 'question_title_embeddings' in c)\nmean_q_title_embeddings = (professionals_questions\n .groupby('professionals_id')\n .agg(aggs))\nmean_q_title_embeddings.columns = ['mean_'+x for x in mean_q_title_embeddings.columns]\nmean_q_title_embeddings.reset_index(inplace=True)\n\n# Add mean Qs embeddings to professionals table\nprofessionals = professionals.merge(mean_q_title_embeddings,\n how='left', on='professionals_id')",
"_____no_output_____"
]
],
[
[
"## Sample questions which each professional has not answered\n\nTo train a model which predicts whether a professional will answer a given question or not, we'll need to construct a dataset containing examples of question-professional pairs which exist (that is, questions the professional has answered) and question-professional pairs which do *not* exist (questions the professional has *not* actually answered). We obviously already have pairs which do exist (in the `answers` table), but we need to sample pairs which do not exist in order to have negative samples on which to train our model. You can't train a model to predict A from B if you don't have any examples of B! Coming up with these negative examples is called [negative sampling](https://arxiv.org/abs/1310.4546), and is often used in natural language processing (also see this great [video about it](https://www.coursera.org/lecture/nlp-sequence-models/negative-sampling-Iwx0e)). Here we'll just create negative samples once, instead of once per training epoch.\n\nLet's define a function which adds negative samples to a list of positive sample pairs:",
"_____no_output_____"
]
],
[
[
"def add_negative_samples(A, B, k=5):\n \"\"\"Add pairs which do not exist to positive pairs.\n \n If `A` and `B` are two corresponding lists , this function\n returns a table with two copies of elements in `A`.\n For the first copy, corresponding elements in `B` are unchaged.\n However, for the second copy, elements in `B` are elements\n which exist in `B`, but the corresponding `A`-`B` pair\n does not exist in the original pairs.\n \n Parameters\n ----------\n A : list or ndarray or pandas Series\n Indexes\n B : list or ndarray or pandas Series\n Values\n k : int\n Number of negative samples per positive sample.\n Default=5\n \n Returns\n -------\n Ao : list\n Output indexes w/ both positive and negative samples.\n Bo : list\n Output indexes w/ both positive and negative samples.\n E : list\n Whether the corresponding `Ao`-`Bo` pair exists (1) or\n does not (0) in the original input data.\n \"\"\"\n \n # Convert to lists\n if isinstance(A, (np.ndarray, pd.Series)):\n A = A.tolist()\n if isinstance(B, (np.ndarray, pd.Series)):\n B = B.tolist()\n \n # Construct a dict of pairs for each unique value in A\n df = pd.DataFrame()\n df['A'] = A\n df['B'] = B\n to_sets = lambda g: set(g.values.tolist())\n pairs = df.groupby('A')['B'].apply(to_sets).to_dict()\n \n # Randomize B\n uB = np.unique(B) # unique elements of B\n nB = np.random.choice(uB, k*len(A)).tolist() #(hopefully) negative samples\n \n # Ensure pairs do not exist\n for i in range(k*len(A)):\n while nB[i] in pairs[A[i%len(A)]]:\n nB[i] = np.random.choice(uB)\n # NOTE: this will run forever if there's an element \n # in A which has pairs w/ *all* unique values of B...\n \n # Construct output lists\n Ao = A*(k+1)\n Bo = B+nB\n E = [1]*len(A) + [0]*(k*len(A))\n return Ao, Bo, E",
"_____no_output_____"
]
],
[
[
"Now we can create a table which contains professional-question pairs which exist, and the same number of pairs for each professional which do *not* exist:",
"_____no_output_____"
]
],
[
[
"# Find negative samples\nauthor_id_samples, question_id_samples, samples_exist = \\\n add_negative_samples(answers['answers_author_id'], \n answers['answers_question_id'])\n\n# Create table containing both positive and negative samples\ntrain_df = pd.DataFrame()\ntrain_df['target'] = samples_exist\ntrain_df['professionals_id'] = author_id_samples\ntrain_df['questions_id'] = question_id_samples",
"_____no_output_____"
]
],
[
[
"Finally, for each answer-question pair, we can add information about the professional who authored it (or did not author it), the question which it answered, and the student who asked that question.",
"_____no_output_____"
]
],
[
[
"# Merge with professionals table\ntrain_df = train_df.merge(professionals, how='left',\n on='professionals_id')\n\n# Merge with questions table\ntrain_df = train_df.merge(questions, how='left',\n on='questions_id')\n\n# Merge with students table\ntrain_df = train_df.merge(students, how='left',\n left_on='questions_author_id',\n right_on='students_id')\n\n# Delete extra columns that won't be used for prediction\ndel train_df['professionals_id']\ndel train_df['questions_id']\ndel train_df['professionals_headline'] #though this could definitely be used...\ndel train_df['questions_author_id']\ndel train_df['students_id']",
"_____no_output_____"
]
],
[
[
"## Cosine similarity between question embeddings and average embedding for questions professionals have answered\n\nProfessionals are probably more likely to answer questions which are similar to ones they've answered before. To capture how similar the text of a question is to questions a professional has previously answered, we can measure how close the question's BERT embedding is the the average of the embeddings of questions the professional has answered.\n\nTo measure this \"closeness\", we'll use [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity). Cosine similarity measures the cosine of the angle between two points. When the angle is near 0, the cosine similarity is near 1, and when the angle between the two points is as large as it can be (near 180), the cosine similarity is -1. Given two embedding vectors $\\mathbf{a}$ and $\\mathbf{b}$, the cosine distance is:\n\n$$\n\\frac{\\mathbf{a}^\\top \\mathbf{b}}{||\\mathbf{a}|| ~ ||\\mathbf{b}||}\n$$\n\nThere are a few other ways we could have measured the similarity between the embeddings of previously answered questions and the embedding of a new question. Instead of just taking the mean embedding we could also account for the *spread* of the embeddings by computing the [Mahalanobis distance](https://en.wikipedia.org/wiki/Mahalanobis_distance), which would account for the possibility that some professionals have broader expertise than others. We could also use a model to predict whether the new question is in the set of questions the professional has answered (for example, K-nearest neighbors). However, just computing the cosine distance from the mean embedding of previously answered questions will probably give nearly as good results, and will be hundreds of times faster to compute, so we'll do that here.\n\nLet's create a function to compute the cosine similarity between pairs of columns in a dataframe:",
"_____no_output_____"
]
],
[
[
"def cosine_similarity_df(A, B):\n \"\"\"Compute the cosine similarities between each row of two matrixes\n \n Parameters\n ----------\n A : numpy matrix or pandas DataFrame\n First matrix.\n B : numpy matrix or pandas DataFrame\n Second matrix. Must be same size as A.\n \n Returns\n -------\n cos_sim : numpy ndarray of shape (A.shape[0],)\n \"\"\"\n \n # Convert to numpy arrays\n if isinstance(A, pd.DataFrame):\n A = A.values\n if isinstance(B, pd.DataFrame):\n B = B.values\n \n # Ensure both matrixes are same size\n if not A.shape == B.shape:\n raise ValueError('A and B must be same size')\n \n # Compute dot products\n dot_prods = np.sum(A*B, axis=1)\n \n # Compute magnitudes\n mA = np.sqrt(np.sum(np.square(A), axis=1))\n mB = np.sqrt(np.sum(np.square(B), axis=1))\n \n # Return cosine similarity between rows\n return dot_prods / (mA*mB)",
"_____no_output_____"
]
],
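As an aside, the Mahalanobis-distance alternative mentioned above (which accounts for the *spread* of a professional's answered-question embeddings, not just their mean) could be sketched roughly as follows. The function name and the regularization constant here are illustrative choices, not part of the pipeline:

```python
import numpy as np

def mahalanobis_to_set(x, Q, reg=1e-6):
    """Mahalanobis distance from embedding vector x to the set of
    embeddings Q (one row per previously answered question).

    Unlike the cosine-to-mean approach used above, this accounts for
    how spread out the professional's answered questions are.
    """
    mu = Q.mean(axis=0)
    # Regularize the covariance so it is invertible even for
    # professionals with few answered questions
    cov = np.cov(Q, rowvar=False) + reg * np.eye(Q.shape[1])
    diff = x - mu
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))
```

A larger distance would then mean the new question is less like the ones the professional has answered, after accounting for the spread of their past answers.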
[
[
"Then we can use that function to compute the cosine similarities between each question embedding and the mean embedding of questions the professional has answered, and add it to our training dataframe (the one with both positive and negative samples, which we created in the previous section). We'll also do the same for the embeddings of the question titles.",
"_____no_output_____"
]
],
[
[
"# Compute similarity between professional's mean Q embedding and Q embedding\nmean_question_embedding_cols = [c for c in train_df.columns \n if 'mean_questions_embeddings' in c]\nquestion_embedding_cols = [c for c in train_df.columns \n if 'questions_embeddings' in c and 'mean' not in c]\ntrain_df['question_embedding_similarity'] = \\\n cosine_similarity_df(train_df[mean_question_embedding_cols],\n train_df[question_embedding_cols])\n\n# Compute similarity between professional's mean Q embedding and Q title embedding\nmean_title_embedding_cols = [c for c in train_df.columns \n if 'mean_question_title_embeddings' in c]\ntitle_embedding_cols = [c for c in train_df.columns \n if 'question_title_embeddings' in c and 'mean' not in c]\ntrain_df['title_embedding_similarity'] = \\\n cosine_similarity_df(train_df[mean_title_embedding_cols],\n train_df[title_embedding_cols])",
"_____no_output_____"
]
],
[
[
"Do these similarity scores actually capture information about whether a professional is more likely to answer a question or not? Let's plot a histogram of the similarity scores for questions which the professional has actually answered against those which they did not. There's a respectable difference between the two distributions:",
"_____no_output_____"
]
],
[
[
"# Plot histograms of question embedding sim for Q-prof pairs\n# which were answered and Q-prof pairs which weren't\nbins = np.linspace(-1, 1, 30)\nanswered = train_df['target']==1\nplt.hist(train_df.loc[answered, 'question_embedding_similarity'],\n bins=bins, label='Answered', density=True,\n fc=matplotlib.colors.to_rgb(COLORS[0])+(0.5,))\nplt.hist(train_df.loc[~answered, 'question_embedding_similarity'],\n bins=bins, label='Not answered', density=True,\n fc=matplotlib.colors.to_rgb(COLORS[1])+(0.5,))\nplt.legend()\nplt.xlabel('Cosine similarity')\nplt.ylabel('Proportion')\nplt.title('Similarity between question embeddings\\n'\n 'and professional\\'s mean question embedding')\nplt.show()",
"_____no_output_____"
]
],
[
[
"There's an even larger difference when we plot the same thing for the title embedding similarity scores!",
"_____no_output_____"
]
],
[
[
"# Plot histograms of title embedding sim for Q-prof pairs\n# which were answered and Q-prof pairs which weren't\nbins = np.linspace(-1, 1, 30)\nanswered = train_df['target']==1\nplt.hist(train_df.loc[answered, 'title_embedding_similarity'],\n bins=bins, label='Answered', density=True,\n fc=matplotlib.colors.to_rgb(COLORS[0])+(0.5,))\nplt.hist(train_df.loc[~answered, 'title_embedding_similarity'],\n bins=bins, label='Not answered', density=True,\n fc=matplotlib.colors.to_rgb(COLORS[1])+(0.5,))\nplt.legend()\nplt.xlabel('Cosine similarity')\nplt.ylabel('Proportion')\nplt.title('Similarity between title embeddings\\n'\n 'and professional\\'s mean title embedding')\nplt.show()",
"_____no_output_____"
]
],
[
[
"Note that computing the mean embedding with *all* the data is introducing data leakage if we evaluate the model using cross validation. For example, many of the cosine similarities are exactly 1. This occurs when a professional has answered exactly one question (and so the similarity between the mean embedding of answered questions and the embedding of that question are equal!). To properly evaluate the performance of our model, we would want to use a nested cross-validated scheme, where the training set includes only questions posted before some time point, and the test set includes only questions posted after that timepoint. However, we could put the model into production as-is, as long as we only use it to predict answer likelihoods for questions that were asked after the model was trained.",
"_____no_output_____"
],
[
"## Extract date and time features\n\nDate and time features could in theory be informative in predicting whether a professional will answer a given question. For example, a professional may be far more likely to answer questions in a few months after they join the CareerVillage site, but may become less (or more!) enthusiastic over time and answer less (or more) questions. Keep in mind that this may not be information we really want to consider when making recommendations. It could be that we only want to be considering the content of the question and the expertise of the professional. Let's include date and time features for now, as they're easily removable.",
"_____no_output_____"
]
],
[
[
"# Extract date and time features\ntrain_df['students_joined_year'] = train_df['students_date_joined'].dt.year\ntrain_df['students_joined_month'] = train_df['students_date_joined'].dt.month\ntrain_df['students_joined_dayofweek'] = train_df['students_date_joined'].dt.dayofweek\ntrain_df['students_joined_dayofyear'] = train_df['students_date_joined'].dt.dayofyear\ntrain_df['students_joined_hour'] = train_df['students_date_joined'].dt.hour\n\ntrain_df['questions_added_year'] = train_df['questions_date_added'].dt.year\ntrain_df['questions_added_month'] = train_df['questions_date_added'].dt.month\ntrain_df['questions_added_dayofweek'] = train_df['questions_date_added'].dt.dayofweek\ntrain_df['questions_added_dayofyear'] = train_df['questions_date_added'].dt.dayofyear\ntrain_df['questions_added_hour'] = train_df['questions_date_added'].dt.hour\n\ntrain_df['professionals_joined_year'] = train_df['professionals_date_joined'].dt.year\ntrain_df['professionals_joined_month'] = train_df['professionals_date_joined'].dt.month\ntrain_df['professionals_joined_dayofweek'] = train_df['professionals_date_joined'].dt.dayofweek\ntrain_df['professionals_joined_dayofyear'] = train_df['professionals_date_joined'].dt.dayofyear\ntrain_df['professionals_joined_hour'] = train_df['professionals_date_joined'].dt.hour\n\n# Remove original datetime columns\ndel train_df['students_date_joined']\ndel train_df['questions_date_added']\ndel train_df['professionals_date_joined']",
"_____no_output_____"
]
],
[
[
"## Jaccard similarity between question and professional tags\n\nThe original CareerVillage question recommendation system was based solely on tags. While we've added a lot to that here, tags are still carry a lot of information about how likely a professional is to answer a question. If a question has exactly the same tags as a professional subscribes to, of course that professional is more likely to answer the question! To let our recommendation model decide how heavily to depend on the tag similarity, we'll add the similarity between a question's tags and a professional's tags as a feature.\n\nSpecifically, we'll use [Jaccard similarity](https://en.wikipedia.org/wiki/Jaccard_index), which measures how similar two sets are. The Jaccard similarity is the number of elements (in our case, tags) in common between the two sets (between the question's and the professional's set of tags), divided by the number of unique elements all together.\n\n$$\nJ(A, B) = \\frac{|A \\cap B|}{| A \\cup B |} = \\frac{|A \\cap B|}{|A| + |B| - |A \\cap B|}\n$$\n\nwhere $|x|$ is the number of elements in set $x$, $A \\cup B$ is the union of sets $A$ and $B$ (all unique items after pooling both sets), and $A \\cap\\ B$ is the intersection of the two sets (only the items which are in *both* sets). Python's built-in `set` data structure makes it pretty easy to compute this metric:",
"_____no_output_____"
]
],
[
[
"def jaccard_similarity(set1, set2):\n \"\"\"Compute Jaccard similarity between two sets\"\"\"\n set1 = set(set1)\n set2 = set(set2)\n union_len = len(set1.intersection(set2))\n return union_len / (len(set1) + len(set2) - union_len)",
"_____no_output_____"
]
],
[
[
"We'll also want a function to compute the Jaccard similarity between pairs of sets in a dataframe:",
"_____no_output_____"
]
],
[
[
"def jaccard_similarity_df(df, col1, col2, sep=' '):\n \"\"\"Compute Jaccard similarity between lists of sets.\n \n Parameters\n ----------\n df : pandas DataFrame\n data\n col1 : str\n Column for set 1. Each element should be a string, with space-separated elements\n col2 : str\n Column for set 2.\n \n Returns\n -------\n pandas Series\n Jaccard similarity for each row in df\n \"\"\"\n list1 = list(df[col1])\n list2 = list(df[col2])\n scores = []\n for i in range(len(list1)):\n if type(list1[i]) is float or type(list2[i]) is float:\n scores.append(0.0)\n else:\n scores.append(jaccard_similarity(\n list1[i].split(sep), list2[i].split(sep)))\n return pd.Series(data=scores, index=df.index)",
"_____no_output_____"
]
],
[
[
"We can use that function to compute the Jaccard similarity between the tags for each professional and the question which they did (or didn't) answer, and add that information to our training dataframe.",
"_____no_output_____"
]
],
[
[
"# Compute jaccard similarity between professional and question tags\ntrain_df['question_professional_tag_jac_sim'] = \\\n jaccard_similarity_df(train_df, 'questions_tags', 'professionals_tags')\n\n# Compute jaccard similarity between professional and student tags\ntrain_df['student_professional_tag_jac_sim'] = \\\n jaccard_similarity_df(train_df, 'students_tags', 'professionals_tags')\n\n# Remove tag columns\ndel train_df['questions_tags']\ndel train_df['professionals_tags']\ndel train_df['students_tags']",
"_____no_output_____"
]
],
[
[
"Are professionals actually more likely to answer questions which have similar tags to the ones they subscribe to? We can plot histograms comparing the Jaccard similarity between tags for professional-question pairs which were answered and those which weren't. It looks like the tags are on average more similar for questions which a professional did actually answer, but this separation isn't quite as clear as it was for the question embeddings:",
"_____no_output_____"
]
],
[
[
"# Plot histograms of jac sim for Q-prof pairs\n# which were answered and Q-prof pairs which weren't\nbins = np.linspace(0, 1, 30)\nanswered = train_df['target']==1\nplt.hist(train_df.loc[answered, 'question_professional_tag_jac_sim'],\n bins=bins, label='Answered', density=True,\n fc=matplotlib.colors.to_rgb(COLORS[0])+(0.5,))\nplt.hist(train_df.loc[~answered, 'question_professional_tag_jac_sim'],\n bins=bins, label='Not answered', density=True,\n fc=matplotlib.colors.to_rgb(COLORS[1])+(0.5,))\nplt.legend()\nplt.yscale('log', nonposy='clip')\nplt.xlabel('Jaccard similarity between\\nquestion and professional tags')\nplt.ylabel('Log proportion')\nplt.show()",
"_____no_output_____"
]
],
[
[
"The students are also able to have tags (to indicate what fields they're interested in). This might also be useful information for our recommender, seeing as students might not include all the relevant tags in a question post, but may still have the tag on their profile. Again we can plot histograms for the Jaccard similarity scores for question-professional pairs which were answered and those which weren't.",
"_____no_output_____"
]
],
[
[
"# Plot histograms of jac sim for Q-prof pairs\n# which were answered and Q-prof pairs which weren't\nbins = np.linspace(0, 1, 30)\nanswered = train_df['target']==1\nplt.hist(train_df.loc[answered, 'student_professional_tag_jac_sim'],\n bins=bins, label='Answered', density=True,\n fc=matplotlib.colors.to_rgb(COLORS[0])+(0.5,))\nplt.hist(train_df.loc[~answered, 'student_professional_tag_jac_sim'],\n bins=bins, label='Not answered', density=True,\n fc=matplotlib.colors.to_rgb(COLORS[1])+(0.5,))\nplt.legend()\nplt.yscale('log', nonposy='clip')\nplt.xlabel('Jaccard similarity between\\nstudent and professional tags')\nplt.ylabel('Log proportion')\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Train model to predict probability of answering\n\nNow we finally have one large matrix where each row corresponds to a question-professional pair, and each column corresponds to features about that pair! The `target` column contains whether that question was actually answered by the professional for that row, and the rest of the column contain features about the professional, the question, the student who asked it, and the interactions between them. All the data is either numeric or categorical, and so we're ready to build a model which will use the features to predict the probability that a question will be answered by a professional.",
"_____no_output_____"
]
],
[
[
"train_df.head()",
"_____no_output_____"
]
],
[
[
"Let's separate the table into the target (whether the question was actually answered by the professional), and the rest of the features.",
"_____no_output_____"
]
],
[
[
"# Split into target and features\ny_train = train_df['target']\nX_train = train_df[[c for c in train_df if c is not 'target']]",
"_____no_output_____"
]
],
[
[
"There are a few features which are still not numeric: the locations of the professionals and students, and the industry in which the professional works. We'll have to encode these into numeric values somehow. We could use one-hot encoding, but there are a *lot* of unique values (the locations are city names). Another alternative is to use [target encoding](http://brendanhasz.github.io/2019/03/04/target-encoding.html), where we replace each category with the mean target value for that category. Unfortunately, the locations of the students and professionals might have pretty important interaction effects, and target encoding doesn't handle interaction effects well. That is, professionals may be more likely to answer questions by students in the same location as themselves. One-hot encoding would allow our model to capture these interaction effects, but the number of categories makes this impractical.",
"_____no_output_____"
]
],
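The `TargetEncoderCV` used below comes from an external helper module. As a rough, non-cross-validated illustration of the basic idea described above (replace each category with the mean target value for that category, falling back to the global mean for unseen categories), one might write:

```python
import pandas as pd

def target_encode(train_col, target, test_col):
    """Replace each category in test_col with the mean target value
    observed for that category in the training data.

    Categories never seen in training fall back to the global mean.
    """
    means = target.groupby(train_col).mean()
    return test_col.map(means).fillna(target.mean())
```

Note that this simple version leaks the target when applied to the training data itself, which is exactly why the real pipeline uses a cross-validated encoder.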
[
[
"# Categorical columns to target-encode\ncat_cols = [\n 'professionals_location',\n 'professionals_industry',\n 'students_location',\n]",
"_____no_output_____"
]
],
[
[
"Our model will include a few preprocessing steps: first target-encode the categorical features, then normalize the feature values, and finally impute missing values by replacing them with the median value for the column. After preprocessing, we'll use [XGBoost](https://github.com/dmlc/xgboost) to predict the probability of a professional answering a question.",
"_____no_output_____"
]
],
[
[
"# Predictive model\nmodel = Pipeline([\n ('target_encoder', TargetEncoderCV(cols=cat_cols)),\n ('scaler', RobustScaler()),\n ('imputer', SimpleImputer(strategy='median')),\n ('classifier', XGBClassifier())\n])",
"_____no_output_____"
]
],
[
[
"Let's evaluate the cross-validated [area under the ROC curve](https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve). A value of 1 is perfect, and a value of 0.5 corresponds to chance. Note that for an accurate evaluation of our model, we would need to use nested cross validation, or test on a validation dataset constructed from data collected after the data on which the model was trained.",
"_____no_output_____"
]
],
[
[
"%%time\n\n# Compute cross-validated performance\nmetric_cv(model, X_train, y_train,\n metric=roc_auc_score,\n display='AUROC')",
"Cross-validated AUROC: 0.764 +/- 0.003\nCPU times: user 21min 10s, sys: 5.72 s, total: 21min 16s\nWall time: 21min 16s\n"
]
],
[
[
"Not perfect, but not bad! To truly evaluate the quality of the recommendations, CareerVillage may want to run an A/B test to see if professionals who are served recommendations from this model are more likely to answer questions than professionals served recommendations using the old exclusively-tag-based system.\n\nWhich features were the most important? We can use permutation-based feature importance to see what features had the largest effect on the predictions.",
"_____no_output_____"
]
],
[
[
"%%time\n\n# Compute the cross-validated feature importances\nimp_df = permutation_importance_cv(\n X_train, y_train, model, 'auc')\n\n# Plot the feature importances\nplt.figure(figsize=(8, 20))\nplot_permutation_importance(imp_df)\nplt.show()",
"/opt/conda/lib/python3.6/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.\n return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval\n"
]
],
[
[
"To actually generate predictions on new data, we would need to first fit our model to data which we've already collected:",
"_____no_output_____"
]
],
[
[
"# Fit model to historical data\n#fit_model = model.fit(X_train, y_train)",
"_____no_output_____"
]
],
[
[
"Then, after processing data corresponding to new questions as described above (into dataframes `X_new` and `y_new`), we would be able to generate predictions for how likely a professional is to answer each of the new questions:",
"_____no_output_____"
]
],
[
[
"# Predict answer probability of new questions\n#predicted_probs = model.predict(X_new, y_new)",
"_____no_output_____"
]
],
[
[
"## Conclusion\n\nWe've created a processing pipeline which aggregates information across several different data tables, and uses that information to predict how likely a professional is to answer a given question.\n\nHow can this predictive model be used to generate a list of questions to recommend to each professional? Each week, we can generate a list of new questions asked this week (though the time interval doesn't have to be a week - it could be a day, or a month, etc). This list could also potentially include older questions which have not yet been answered. Then, for each professional, we can use the recommendation model to generate scores for each new question-professional pair (that is, the probabilities that the professional would answer a given question). Finally, we could then send the questions with the top K scores (say, the top 10) to that professional.\n\nAnother strategy for using the recommendation model would be to recommend *professionals* for each question. That is, given a question, come up with the top K professionals most likely to answer it, and reccomend the question to them. This strategy could use the exact same method as described previously (generating a list of new questions, and pairing with each professional), except we would choose the *professionals* with the top K scores for a given question (as opposed to choosing the top K questions for a given professional). There are pros and cons to this strategy relative to the previous one. I worry that using this strategy would simply send all the questions to the professionals who answer the most questions (because their answer probabilities are likely to be higher), and not send any questions to professionals who answer fewer questions. This could result in a small subset of overworked professionals, and the majority of professionals not receiving any recommended questions! 
On the other hand, those professionals who answer the most questions are indeed more likely to answer the questions, so perhaps it's OK to send them a larger list of questions. I think the optimal approach would be to use the first strategy (recommend K questions to each professional), but allow each professional to set their K - that is, let professionals choose how many questions they are recommended per week.\n\nOn the other hand, using the current strategy could result in the opposite problem, where only a small subset of questions get sent to professionals. In the long run, it may be best to view the problem not as a recommendation problem, but as an *allocation* problem. That is, how can we allocate the questions to professionals such that the expected number of answered questions is highest? Once we've generated the probabilities that each professional will answer each question, determining the best allocation becomes a discrete optimization problem. However, the number of elements here is pretty large (questions and professionals). Deterministic discrete optimization algorithms will likely be impractical (because they'll take too long to run given the large number of elements), and so metaheuristic methods like local search or evolutionary optimization would probably have to be used.\n\nThere was a lot of other data provided by CareerVillage which was not used in this model, but which could have been! Parhaps the most important of this additional data was the data on the scores (basically the \"likes\") for each answer and question. 
Instead of just predicting *whether* a professional was likely to answer a question (as we did with our implicit recommendation system), we could have predicted which questions they were most likely to give a *good* answer to, as judged by the number of \"likes\" their answers received(an *explicit* recommendation system).\n\nThe framework we created here uses a classifier to predict professional-question pairs - basically, a content-based filtering recommendation system. However, there are other frameworks we could have used. We could have framed the challenge as an implicit collaborative filtering problem (or even an explicit one if we attempted to predict the \"hearts\" given to professionals' answers). I chose not to use a collaborative filtering framework because collaborative filtering suffers from the \"cold-start\" problem: it has trouble recommending users to new items. This is because it depends on making predictions about user-item pair scores (in our case, whether professional-question pairs \"exist\" in the form of an answer) based on similarities between the query user and the scores of the query item by users similar to the query user. Unfortunately, for this application it is especially important to recommend questions to professionals when the question has no answers yet! So in production, more often than not there will *be* no scores of the query item by any other users when we want to be making the predictions. Therefore, we have to use primarily the features about the users and the items to make recommendations. Although some collaborative filtering methods can take into account the user and item features (such as neural collaborative filtering) I thought it would be best to use a framework which *only* uses the user and item features.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4ab0b2fe2892bce9a98354362dc5976bc153328c
| 1,516 |
ipynb
|
Jupyter Notebook
|
code/notebooks_python/.ipynb_checkpoints/pattern_recognition-checkpoint.ipynb
|
joph/Machine-Learning-Workshop
|
2379e931c76aa688180d8c0c6f3b2630db2b0862
|
[
"MIT"
] | 1 |
2020-11-25T07:38:44.000Z
|
2020-11-25T07:38:44.000Z
|
code/notebooks_python/.ipynb_checkpoints/pattern_recognition-checkpoint.ipynb
|
joph/Machine-Learning-Workshop
|
2379e931c76aa688180d8c0c6f3b2630db2b0862
|
[
"MIT"
] | null | null | null |
code/notebooks_python/.ipynb_checkpoints/pattern_recognition-checkpoint.ipynb
|
joph/Machine-Learning-Workshop
|
2379e931c76aa688180d8c0c6f3b2630db2b0862
|
[
"MIT"
] | 2 |
2019-05-17T08:18:07.000Z
|
2020-11-25T07:38:52.000Z
| 32.255319 | 421 | 0.593668 |
[
[
[
"import scripts.windturbines.functions_pattern_recognition as fpr",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code"
]
] |
4ab0b701c769e406632f8918582a90da36603e30
| 294,447 |
ipynb
|
Jupyter Notebook
|
notebooks/graphics/oil_transfers/plot_exports.ipynb
|
MIDOSS/analysis-rachael
|
c4a3082eca53b49e358e86a6e623a359ca6668be
|
[
"Apache-2.0"
] | null | null | null |
notebooks/graphics/oil_transfers/plot_exports.ipynb
|
MIDOSS/analysis-rachael
|
c4a3082eca53b49e358e86a6e623a359ca6668be
|
[
"Apache-2.0"
] | 3 |
2020-07-03T17:07:04.000Z
|
2020-07-07T16:09:53.000Z
|
notebooks/graphics/oil_transfers/plot_exports.ipynb
|
MIDOSS/analysis-rachael
|
c4a3082eca53b49e358e86a6e623a359ca6668be
|
[
"Apache-2.0"
] | null | null | null | 220.394461 | 128,852 | 0.875974 |
[
[
[
"import cartopy.crs\nimport cmocean.cm\nimport matplotlib.pyplot as plt\nimport numpy\nimport xarray\nimport pandas\nimport pathlib\nimport yaml",
"_____no_output_____"
]
],
[
[
"##### file paths and names",
"_____no_output_____"
]
],
[
[
"mesh_mask = xarray.open_dataset(\"https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSn2DMeshMaskV17-02\")\nwater_mask = mesh_mask.tmaskutil.isel(time=0)\n\nfields = xarray.open_dataset(\"https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSg3DTracerFields1hV19-05\")\nsalinity = fields.salinity.sel(time=\"2020-08-14 14:30\", depth=0, method=\"nearest\").where(water_mask)\n\ngeoref = xarray.open_dataset(\"https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSnBathymetryV17-02\")\n\nlocation_dir = pathlib.Path('/Users/rmueller/Data/MIDOSS/AIS')\nlocation_file = location_dir / 'Oil_Transfer_Facilities.xlsx'\nwa_oil = pathlib.Path('/Users/rmueller/Data/MIDOSS/marine_transport_data') / 'WA_origin.yaml'",
"_____no_output_____"
],
[
"vessel_types = [\n 'tanker', \n 'atb', \n 'barge'\n]\noil_types = [\n 'akns', \n 'bunker', \n 'dilbit', \n 'jet', \n 'diesel', \n 'gas', \n 'other'\n]\noil_colors = [\n 'darkolivegreen',\n 'olivedrab',\n 'slategrey',\n 'indigo',\n 'mediumslateblue',\n 'cornflowerblue',\n 'saddlebrown'\n]",
"_____no_output_____"
]
],
[
[
"### Load locations of marine transfer facilities",
"_____no_output_____"
],
[
"##### read in oil attribute data",
"_____no_output_____"
]
],
[
[
"with open(wa_oil) as file:\n oil_attrs = yaml.load(file, Loader=yaml.Loader)",
"_____no_output_____"
],
[
"oil_attrs['Alon Asphalt Company (Paramount Petroleum)']['barge']['diesel']",
"_____no_output_____"
],
[
"oil_attrs.keys()",
"_____no_output_____"
]
],
[
[
"##### read in locations for marine transfer facilities",
"_____no_output_____"
]
],
[
[
"wa_locs = pandas.read_excel(location_file, sheet_name='Washington', \n usecols=\"B,I,J\")\nbc_locs = pandas.read_excel(location_file, sheet_name='British Columbia', \n usecols=\"A,B,C\")",
"_____no_output_____"
]
],
[
[
"##### find and correct entries that differ between DOE naming and Casey's name list (I plan to correct these names so we are self-consistent)",
"_____no_output_____"
]
],
[
[
"for location in wa_locs['FacilityName']:\n if location not in oil_attrs.keys():\n print(location)",
"Maxum Petroleum - Harbor Island Terminal\nMarathon Anacortes Refinery (formerly Tesoro)\nNustar Energy Vancouver\n"
],
[
"# Maxum Petroleum - Harbor Island Terminal -> 'Maxum (Rainer Petroleum)'\n# Marathon Anacortes Refinery (formerly Tesoro) -> 'Andeavor Anacortes Refinery (formerly Tesoro)'\n# Nustar Energy Vancouver -> Not included",
"_____no_output_____"
],
[
"for location in wa_locs['FacilityName']:\n if location == 'Marathon Anacortes Refinery (formerly Tesoro)':\n# wa_locs[wa_locs['FacilityName']=='Marathon Anacortes Refinery (formerly Tesoro)'].FacilityName = (\n# 'Andeavor Anacortes Refinery (formerly Tesoro)'\n# )\n wa_locs.loc[wa_locs['FacilityName']=='Marathon Anacortes Refinery (formerly Tesoro)','FacilityName'] = (\n 'Andeavor Anacortes Refinery (formerly Tesoro)'\n )\n elif location == 'Maxum Petroleum - Harbor Island Terminal':\n wa_locs.loc[wa_locs['FacilityName']=='Maxum Petroleum - Harbor Island Terminal','FacilityName'] = (\n 'Maxum (Rainer Petroleum)'\n )\nwa_locs.drop(index=17, inplace=True)\nwa_locs.reset_index( drop=True, inplace=True )\nwa_locs",
"_____no_output_____"
]
],
[
[
"### Add oil entries to dataframe",
"_____no_output_____"
]
],
[
[
"for oil in oil_types:\n wa_locs[oil] = numpy.zeros(len(wa_locs['FacilityName']))\n wa_locs[f'{oil}_scale'] = numpy.zeros(len(wa_locs['FacilityName']))",
"_____no_output_____"
],
[
"#oil_attrs[location][vessel][oil]['total_gallons']",
"_____no_output_____"
],
[
"# for oil in oil_types:\n# wa_locs[oil] = numpy.zeros(len(wa_locs['FacilityName']))\n\n# index = 0\n# for oil in oil_types:\n# wa_locs[oil[index]] = 1\n# wa_locs",
"_____no_output_____"
]
],
[
[
"### loop through all facilities and get the total amount of fuel by fuel type",
"_____no_output_____"
]
],
[
[
"wa_locs['all'] = numpy.zeros(len(wa_locs['FacilityName']))\n\nfor index in range(len(wa_locs)):\n for oil in oil_types:\n for vessel in vessel_types:\n # add oil across vessel types (values remain in gallons)\n wa_locs.loc[index, oil] += oil_attrs[wa_locs.FacilityName[index]][vessel][oil]['total_gallons']\n \n # add oil across oil types\n wa_locs.loc[index,'all'] += wa_locs.loc[index,oil]\n\n# Now make a scale to use for marker size\nfor oil in oil_types:\n wa_locs[f'{oil}_scale'] = wa_locs[oil]/wa_locs[oil].sum()\n print(wa_locs[f'{oil}_scale'].sum())\n \nwa_locs['all_scale'] = wa_locs['all']/wa_locs['all'].sum()",
"1.0\n1.0\n0.0\n1.0\n1.0\n1.0\n0.9999999999999999\n"
],
[
"wa_locs",
"_____no_output_____"
],
[
"fs = 20\nscatter_handles = numpy.zeros(len(oil_types))\n%matplotlib inline\n\nrotated_crs = cartopy.crs.RotatedPole(pole_longitude=120.0, pole_latitude=63.75)\nplain_crs = cartopy.crs.PlateCarree()\n\n# Use `subplot_kw` arg to pass a dict of kwargs containing the `RotatedPole` CRS\n# and the `facecolor` value to the add_subplot() call(s) that subplots() will make\nfig,ax = plt.subplots(\n 1, 1, figsize=(18, 9), subplot_kw={\"projection\": rotated_crs, \"facecolor\": \"#8b7765\"}\n)\n\n# Use the `transform` arg to tell cartopy to transform the model field\n# between grid coordinates and lon/lat coordinates when it is plotted\nquad_mesh = ax.pcolormesh(\n georef.longitude, georef.latitude, salinity, transform=plain_crs, cmap=cmocean.cm.haline, shading=\"auto\"\n)\n\n# add WA locations\nfor oil in oil_types:\n index = oil_types.index(oil)\n scatter_handles = ax.scatter(\n wa_locs['DockLongNumber'],\n wa_locs['DockLatNumber'],\n s = 600*wa_locs[f'{oil}_scale'],\n transform=plain_crs, \n color=oil_colors[index],\n edgecolors='white',\n linewidth=1,\n label=oil\n )\n#plt.legend(handles = scatter_handles)\n#plt.show()\n\n# add BC locations\n# ax.scatter(\n# bc_locs['Longitude'],\n# bc_locs['Latitude'],\n# transform=plain_crs, \n# color='maroon'\n# )\n\n# Colour bar\ncbar = plt.colorbar(quad_mesh, ax=ax)\ncbar.set_label(f\"{salinity.attrs['long_name']} [{salinity.attrs['units']}]\")\n# Axes title; ax.gridlines() below labels axes tick in a way that makes\n# additional axes labels unnecessary IMHO\nax.set_title(f\"Exports of all oil types, scaled by magnitude of individual type\", fontsize=fs)\n\n# Don't call set_aspect() because plotting on lon/lat grid implicitly makes the aspect ratio correct\n\n# Show grid lines\n# Note that ax.grid() has no effect; ax.gridlines() is from cartopy, not matplotlib\nax.gridlines(draw_labels=True, auto_inline=False)\n\n# cartopy doesn't seem to play nice with tight_layout() unless we call canvas.draw() 
first\nfig.canvas.draw()\nfig.tight_layout()",
"_____no_output_____"
],
[
"fs = 20\n%matplotlib inline\n\nrotated_crs = cartopy.crs.RotatedPole(pole_longitude=120.0, pole_latitude=63.75)\nplain_crs = cartopy.crs.PlateCarree()\n\n# Use `subplot_kw` arg to pass a dict of kwargs containing the `RotatedPole` CRS\n# and the `facecolor` value to the add_subplot() call(s) that subplots() will make\nfig,ax = plt.subplots(\n 1, 1, figsize=(18, 9), subplot_kw={\"projection\": rotated_crs, \"facecolor\": \"#8b7765\"}\n)\n\n# Use the `transform` arg to tell cartopy to transform the model field\n# between grid coordinates and lon/lat coordinates when it is plotted\nquad_mesh = ax.pcolormesh(\n georef.longitude, georef.latitude, salinity, transform=plain_crs, cmap=cmocean.cm.haline, shading=\"auto\"\n)\n\n# add WA locations\nax.scatter(\n wa_locs['DockLongNumber'],\n wa_locs['DockLatNumber'],\n s = 600*wa_locs['all_scale'],\n transform=plain_crs, \n color='maroon',\n edgecolors='white',\n linewidth=1\n)\n\n# add BC locations\n# ax.scatter(\n# bc_locs['Longitude'],\n# bc_locs['Latitude'],\n# transform=plain_crs, \n# color='maroon'\n# )\n\n# Colour bar\ncbar = plt.colorbar(quad_mesh, ax=ax)\ncbar.set_label(f\"{salinity.attrs['long_name']} [{salinity.attrs['units']}]\")\n# Axes title; ax.gridlines() below labels axes tick in a way that makes\n# additional axes labels unnecessary IMHO\nax.set_title('Exports of all oil types', fontsize=fs)\n\n# Don't call set_aspect() because plotting on lon/lat grid implicitly makes the aspect ratio correct\n\n# Show grid lines\n# Note that ax.grid() has no effect; ax.gridlines() is from cartopy, not matplotlib\nax.gridlines(draw_labels=True, auto_inline=False)\n\n# cartopy doesn't seem to play nice with tight_layout() unless we call canvas.draw() first\nfig.canvas.draw()\nfig.tight_layout()",
"_____no_output_____"
]
],
[
[
"### Now add spill locations",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
4ab0cafc8ed18d2c885783bbf321aaeee23217f1
| 23,220 |
ipynb
|
Jupyter Notebook
|
PythonDataBasics_HypothesisTesting.ipynb
|
AndrewAnnex/PythonNumericalDemos
|
11a7dec09bf08527b358fa95119811b6f73023b5
|
[
"MIT"
] | 403 |
2017-10-15T02:07:38.000Z
|
2022-03-30T15:27:14.000Z
|
PythonDataBasics_HypothesisTesting.ipynb
|
AndrewAnnex/PythonNumericalDemos
|
11a7dec09bf08527b358fa95119811b6f73023b5
|
[
"MIT"
] | 4 |
2019-08-21T10:35:09.000Z
|
2021-02-04T04:57:13.000Z
|
PythonDataBasics_HypothesisTesting.ipynb
|
AndrewAnnex/PythonNumericalDemos
|
11a7dec09bf08527b358fa95119811b6f73023b5
|
[
"MIT"
] | 276 |
2018-06-27T11:20:30.000Z
|
2022-03-25T16:04:24.000Z
| 51.258278 | 512 | 0.616236 |
[
[
[
"<p align=\"center\">\n <img src=\"https://github.com/GeostatsGuy/GeostatsPy/blob/master/TCG_color_logo.png?raw=true\" width=\"220\" height=\"240\" />\n\n</p>\n\n## Bootstrap-based Hypothesis Testing Demonstration\n\n### Bootstrap and Analytical Methods for Hypothesis Testing, Difference in Means\n\n* we calculate the hypothesis test for difference in means with bootstrap and compare to the analytical expression\n\n* **Welch's t-test**: we assume the features are Gaussian distributed and the variances may be unequal\n\n#### Michael Pyrcz, Associate Professor, University of Texas at Austin \n\n##### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1) | [GeostatsPy](https://github.com/GeostatsGuy/GeostatsPy)\n\n#### Hypothesis Testing\n\nPowerful methodology for spatial data analytics:\n\n1. extracted sample sets 1 and 2, the means look different, but are they? \n2. should we suspect that the samples are in fact from 2 different populations?\n\nNow, let's try the t-test, hypothesis test for difference in means. This test assumes that the data are Gaussian distributed; since we use Welch's version, the variances are not assumed to be equal (see the course notes for more on this). 
This is our test:\n\n\\begin{equation}\nH_0: \\mu_{X1} = \\mu_{X2}\n\\end{equation}\n\n\\begin{equation}\nH_1: \\mu_{X1} \\ne \\mu_{X2}\n\\end{equation}\n\nTo test this we will calculate the t statistic with the bootstrap and analytical approaches.\n\n#### The Welch's t-test for Difference in Means by Analytical and Empirical Methods\n\nWe work with the following test statistic, *t-statistic*, from the two sample sets.\n\n\\begin{equation}\n\\hat{t} = \\frac{\\overline{x}_1 - \\overline{x}_2}{\\sqrt{\\frac{s^2_1}{n_1} + \\frac{s^2_2}{n_2}}}\n\\end{equation}\n\nwhere $\\overline{x}_1$ and $\\overline{x}_2$ are the sample means, $s^2_1$ and $s^2_2$ are the sample variances and $n_1$ and $n_2$ are the number of samples from the two datasets.\n\nThe critical value, $t_{critical}$, is calculated analytically as:\n\n\\begin{equation}\nt_{critical} = \\left|t(\\frac{\\alpha}{2},\\nu)\\right|\n\\end{equation}\n\nThe degrees of freedom, $\\nu$, are calculated as follows:\n\n\\begin{equation}\n\\nu = \\frac{\\left(\\frac{1}{n_1} + \\frac{\\mu}{n_2}\\right)^2}{\\frac{1}{n_1^2(n_1-1)} + \\frac{\\mu^2}{n_2^2(n_2-1)}}\n\\end{equation}\n\nwhere $\\mu = \\frac{s^2_2}{s^2_1}$ is the ratio of the sample variances.\n\nAlternatively, the sampling distribution of the $t_{statistic}$ and $t_{critical}$ may be calculated empirically with bootstrap.\n\nThe workflow proceeds as:\n\n* shift both sample sets to have the mean of the combined data set, $x_1$ → $x^*_1$, $x_2$ → $x^*_2$, this makes the null hypothesis true. 
\n\n* for each bootstrap realization, $\\ell=1,\\ldots,L$\n\n * perform $n_1$ Monte Carlo simulations, draws with replacement, from sample set $x^*_1$\n \n * perform $n_2$ Monte Carlo simulations, draws with replacement, from sample set $x^*_2$\n \n * calculate the $t_{statistic}$ realization, $\\hat{t}^{\\ell}$, given the resulting sample means $\\overline{x}^{*,\\ell}_1$ and $\\overline{x}^{*,\\ell}_2$ and the sample variances $s^{*,2,\\ell}_1$ and $s^{*,2,\\ell}_2$\n \n* pool the results to assemble the $t_{statistic}$ sampling distribution\n\n* calculate the cumulative probability of the observed $t_{statistic}$, $\\hat{t}$, from the bootstrap distribution based on $\\hat{t}^{\\ell}$, $\\ell = 1,\\ldots,L$.\n\nHere's some prerequisite information on the bootstrap.\n\n#### Bootstrap\n\nBootstrap is a method to assess the uncertainty in a sample statistic by repeated random sampling with replacement.\n\nAssumptions\n* sufficient, representative sampling, identical, independent samples\n\nLimitations\n1. assumes the samples are representative \n2. assumes stationarity\n3. only accounts for uncertainty due to too few samples, e.g. no uncertainty due to changes away from data\n4. does not account for boundary of area of interest \n5. assumes the samples are independent\n6. does not account for other local information sources\n\nThe Bootstrap Approach (Efron, 1982)\n\nStatistical resampling procedure to calculate uncertainty in a calculated statistic from the data itself.\n* Does this work? Prove it to yourself; for uncertainty in the mean, the solution is the standard error: \n\n\\begin{equation}\n\\sigma^2_\\overline{x} = \\frac{\\sigma^2_s}{n}\n\\end{equation}\n\nExtremely powerful - could calculate uncertainty in any statistic! e.g. P13, skew etc.\n* Would not be possible to assess general uncertainty in any statistic without bootstrap.\n* Advanced forms account for spatial information and sampling strategy (game theory and Journel’s spatial bootstrap, 1993).\n\nSteps: \n\n1. 
assemble a sample set, must be representative, reasonable to assume independence between samples\n\n2. optional: build a cumulative distribution function (CDF)\n * may account for declustering weights, tail extrapolation\n * could use analogous data to support\n\n3. For $\\ell = 1, \\ldots, L$ realizations, do the following:\n\n * For $i = 1, \\ldots, n$ data, do the following:\n\n * Draw a random sample with replacement from the sample set or Monte Carlo simulate from the CDF (if available). \n\n4. Calculate a realization of the summary statistic of interest from the $n$ samples, e.g. $m^\\ell$, $\\sigma^2_{\\ell}$. Return to 3 for another realization.\n\n5. Compile and summarize the $L$ realizations of the statistic of interest.\n\nThis is a very powerful method. Let's try it out and compare the result to the analytical approach. \n\n\n#### Objective \n\nProvide an example and demonstration for:\n\n1. interactive plotting in Jupyter Notebooks with Python packages matplotlib and ipywidgets\n2. provide an intuitive hands-on example of hypothesis testing and compare the analytical and bootstrap approaches \n\n#### Getting Started\n\nHere are the steps to get set up in Python with the GeostatsPy package:\n\n1. Install Anaconda 3 on your machine (https://www.anaconda.com/download/). \n2. Open Jupyter and in the top block get started by copying and pasting the code block below from this Jupyter Notebook to start using the geostatspy functionality. \n\n#### Load the Required Libraries\n\nThe following code loads the required libraries.",
"_____no_output_____"
]
],
[
[
"%matplotlib inline\nfrom ipywidgets import interactive # widgets and interactivity\nfrom ipywidgets import widgets \nfrom ipywidgets import Layout\nfrom ipywidgets import Label\nfrom ipywidgets import VBox, HBox\nimport matplotlib.pyplot as plt # plotting\nimport numpy as np # working with arrays\nimport pandas as pd # working with DataFrames\nfrom scipy import stats # statistical calculations\nimport random # random drawing / bootstrap realizations of the data",
"_____no_output_____"
]
],
[
[
"#### Make a Synthetic Dataset\n\nThis is an interactive method to:\n\n* select a parametric distribution\n\n* select the distribution parameters\n\n* select the number of samples and visualize the synthetic dataset distribution",
"_____no_output_____"
]
],
[
[
"\n# interactive calculation of the sample set (control of source parametric distribution and number of samples)\nl = widgets.Text(value=' Interactive Hypothesis Testing, Difference in Means, Analytical & Bootstrap Methods, Michael Pyrcz, Associate Professor, The University of Texas at Austin',layout=Layout(width='950px', height='30px'))\n\nn1 = widgets.IntSlider(min=0, max = 100, value = 10, step = 1, description = '$n_{1}$',orientation='horizontal',layout=Layout(width='300px', height='30px'))\nn1.style.handle_color = 'red'\n\nm1 = widgets.FloatSlider(min=0, max = 50, value = 3, step = 1.0, description = '$\\overline{x}_{1}$',orientation='horizontal',layout=Layout(width='300px', height='30px'))\nm1.style.handle_color = 'red'\n\ns1 = widgets.FloatSlider(min=0, max = 10, value = 3, step = 0.25, description = '$s_1$',orientation='horizontal',layout=Layout(width='300px', height='30px'))\ns1.style.handle_color = 'red'\n\nui1 = widgets.VBox([n1,m1,s1],) # basic widget formatting \n\nn2 = widgets.IntSlider(min=0, max = 100, value = 10, step = 1, description = '$n_{2}$',orientation='horizontal',layout=Layout(width='300px', height='30px'))\nn2.style.handle_color = 'yellow'\n\nm2 = widgets.FloatSlider(min=0, max = 50, value = 3, step = 1.0, description = '$\\overline{x}_{2}$',orientation='horizontal',layout=Layout(width='300px', height='30px'))\nm2.style.handle_color = 'yellow'\n\ns2 = widgets.FloatSlider(min=0, max = 10, value = 3, step = 0.25, description = '$s_2$',orientation='horizontal',layout=Layout(width='300px', height='30px'))\ns2.style.handle_color = 'yellow'\n\nui2 = widgets.VBox([n2,m2,s2],) # basic widget formatting \n\nL = widgets.IntSlider(min=10, max = 1000, value = 100, step = 1, description = '$L$',orientation='horizontal',layout=Layout(width='300px', height='30px'))\nL.style.handle_color = 'gray'\n\nalpha = widgets.FloatSlider(min=0, max = 50, value = 3, step = 1.0, description = '$α$',orientation='horizontal',layout=Layout(width='300px', 
height='30px'))\nalpha.style.handle_color = 'gray'\n\nui3 = widgets.VBox([L,alpha],) # basic widget formatting \n\nui4 = widgets.HBox([ui1,ui2,ui3],) # basic widget formatting \n\nui2 = widgets.VBox([l,ui4],)\n\ndef f_make(n1, m1, s1, n2, m2, s2, L, alpha): # function to take parameters, make sample and plot\n\n \n np.random.seed(73073)\n x1 = np.random.normal(loc=m1,scale=s1,size=n1)\n np.random.seed(73074)\n x2 = np.random.normal(loc=m2,scale=s2,size=n2)\n \n mu = (s2*s2)/(s1*s1)\n nu = ((1/n1 + mu/n2)*(1/n1 + mu/n2))/(1/(n1*n1*(n1-1)) + ((mu*mu)/(n2*n2*(n2-1))))\n \n prop_values = np.linspace(-8.0,8.0,100)\n analytical_distribution = stats.t.pdf(prop_values,df = nu) \n analytical_tcrit = stats.t.ppf(1.0-alpha*0.005,df = nu)\n \n # Analytical Method with SciPy\n t_stat_observed, p_value_analytical = stats.ttest_ind(x1,x2,equal_var=False)\n \n # Bootstrap Method\n global_average = np.average(np.concatenate([x1,x2])) # shift the means to be equal to the globla mean\n x1s = x1 - np.average(x1) + global_average\n x2s = x2 - np.average(x2) + global_average\n \n t_stat = np.zeros(L); p_value = np.zeros(L)\n \n random.seed(73075)\n for l in range(0, L): # loop over realizations\n samples1 = random.choices(x1s, weights=None, cum_weights=None, k=len(x1s))\n #print(samples1)\n samples2 = random.choices(x2s, weights=None, cum_weights=None, k=len(x2s))\n #print(samples2)\n t_stat[l], p_value[l] = stats.ttest_ind(samples1,samples2,equal_var=False)\n \n bootstrap_lower = np.percentile(t_stat,alpha * 0.5)\n bootstrap_upper = np.percentile(t_stat,100.0 - alpha * 0.5)\n \n plt.subplot(121)\n #print(t_stat)\n \n plt.hist(x1,cumulative = False, density = True, alpha=0.4,color=\"red\",edgecolor=\"black\", bins = np.linspace(0,50,50), label = '$x_1$')\n plt.hist(x2,cumulative = False, density = True, alpha=0.4,color=\"yellow\",edgecolor=\"black\", bins = np.linspace(0,50,50), label = '$x_2$')\n plt.ylim([0,0.4]); plt.xlim([0.0,30.0])\n plt.title('Sample Distributions'); 
plt.xlabel('Value'); plt.ylabel('Density')\n plt.legend()\n \n #plt.hist(x2)\n \n plt.subplot(122)\n plt.ylim([0,0.6]); plt.xlim([-8.0,8.0])\n plt.title('Bootstrap and Analytical $t_{statistic}$ Sampling Distributions'); plt.xlabel('$t_{statistic}$'); plt.ylabel('Density')\n plt.plot([t_stat_observed,t_stat_observed],[0.0,0.6],color = 'black',label='observed $t_{statistic}$')\n plt.plot([bootstrap_lower,bootstrap_lower],[0.0,0.6],color = 'blue',linestyle='dashed',label = 'bootstrap interval')\n plt.plot([bootstrap_upper,bootstrap_upper],[0.0,0.6],color = 'blue',linestyle='dashed')\n plt.plot(prop_values,analytical_distribution, color = 'red',label='analytical $t_{statistic}$')\n plt.hist(t_stat,cumulative = False, density = True, alpha=0.2,color=\"blue\",edgecolor=\"black\", bins = np.linspace(-8.0,8.0,50), label = 'bootstrap $t_{statistic}$')\n\n plt.fill_between(prop_values, 0, analytical_distribution, where = prop_values <= -1*analytical_tcrit, facecolor='red', interpolate=True, alpha = 0.2)\n plt.fill_between(prop_values, 0, analytical_distribution, where = prop_values >= analytical_tcrit, facecolor='red', interpolate=True, alpha = 0.2)\n ax = plt.gca()\n handles,labels = ax.get_legend_handles_labels()\n handles = [handles[0], handles[2], handles[3], handles[1]]\n labels = [labels[0], labels[2], labels[3], labels[1]]\n\n plt.legend(handles,labels,loc=1)\n \n \n plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.2, wspace=0.2, hspace=0.2)\n plt.show()\n\n\n# connect the function to make the samples and plot to the widgets \ninteractive_plot = widgets.interactive_output(f_make, {'n1': n1, 'm1': m1, 's1': s1, 'n2': n2, 'm2': m2, 's2': s2, 'L': L, 'alpha': alpha})\ninteractive_plot.clear_output(wait = True) # reduce flickering by delaying plot updating",
"_____no_output_____"
]
],
[
[
"### Bootstrap and Analytical Methods for Hypothesis Testing, Difference in Means\n\n* including the analytical and bootstrap methods for testing the difference in means\n* interactive plot demonstration with ipywidget, matplotlib packages\n\n#### Michael Pyrcz, Associate Professor, University of Texas at Austin \n\n##### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1) | [GeostatsPy](https://github.com/GeostatsGuy/GeostatsPy)\n\n### The Problem\n\nLet's test the significance of the observed difference in means between two simulated sample sets, with the following interactive controls:\n\n* **$n_1$**, **$n_2$** number of samples, **$\\overline{x}_1$**, **$\\overline{x}_2$** means and **$s_1$**, **$s_2$** standard deviations of the 2 sample sets\n* **$L$**: number of bootstrap realizations\n* **$\\alpha$**: alpha level",
"_____no_output_____"
]
],
[
[
"display(ui2, interactive_plot) # display the interactive plot",
"_____no_output_____"
]
],
[
[
"#### Observations\n\nSome observations:\n\n* lower dispersion and higher difference in means increase the absolute magnitude of the observed $t_{statistic}$\n\n* the bootstrap distribution closely matches the analytical distribution if $L$ is large enough\n\n* it is possible to use bootstrap to calculate the sampling distribution instead of relying on the theoretical expression for the distribution, in this case the Student's t distribution. \n\n\n#### Comments\n\nThis was a demonstration of interactive hypothesis testing for the significance of the difference in means observed between 2 sample sets in Jupyter Notebook Python with the ipywidgets and matplotlib packages. \n\nI have many other demonstrations on data analytics and machine learning, e.g. on the basics of working with DataFrames, ndarrays, univariate statistics, plotting data, declustering, data transformations, trend modeling and many other workflows available at https://github.com/GeostatsGuy/PythonNumericalDemos and https://github.com/GeostatsGuy/GeostatsPy. \n \nI hope this was helpful,\n\n*Michael*\n\n#### The Author:\n\n### Michael Pyrcz, Associate Professor, University of Texas at Austin \n*Novel Data Analytics, Geostatistics and Machine Learning Subsurface Solutions*\n\nWith over 17 years of experience in subsurface consulting, research and development, Michael has returned to academia driven by his passion for teaching and enthusiasm for enhancing engineers' and geoscientists' impact in subsurface resource development. 
\n\nFor more about Michael check out these links:\n\n#### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)\n\n#### Want to Work Together?\n\nI hope this content is helpful to those that want to learn more about subsurface modeling, data analytics and machine learning. Students and working professionals are welcome to participate.\n\n* Want to invite me to visit your company for training, mentoring, project review, workflow design and / or consulting? I'd be happy to drop by and work with you! \n\n* Interested in partnering, supporting my graduate student research or my Subsurface Data Analytics and Machine Learning consortium (co-PIs including Profs. Foster, Torres-Verdin and van Oort)? My research combines data analytics, stochastic modeling and machine learning theory with practice to develop novel methods and workflows to add value. We are solving challenging subsurface problems!\n\n* I can be reached at [email protected].\n\nI'm always happy to discuss,\n\n*Michael*\n\nMichael Pyrcz, Ph.D., P.Eng. 
Associate Professor The Hildebrand Department of Petroleum and Geosystems Engineering, Bureau of Economic Geology, The Jackson School of Geosciences, The University of Texas at Austin\n\n#### More Resources Available at: [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4ab0ddeea264ba0beb485c3e635e9f8d99cd33a2
| 77,023 |
ipynb
|
Jupyter Notebook
|
Project_with_Data_scrapying.ipynb
|
Praglu/jupyter-notebooks
|
cd9e0fd9fe8009dbaeb5c1b2f308985367346044
|
[
"Apache-2.0"
] | 1 |
2022-01-20T13:54:58.000Z
|
2022-01-20T13:54:58.000Z
|
Project_with_Data_scrapying.ipynb
|
Praglu/jupyter-notebooks
|
cd9e0fd9fe8009dbaeb5c1b2f308985367346044
|
[
"Apache-2.0"
] | null | null | null |
Project_with_Data_scrapying.ipynb
|
Praglu/jupyter-notebooks
|
cd9e0fd9fe8009dbaeb5c1b2f308985367346044
|
[
"Apache-2.0"
] | null | null | null | 138.530576 | 39,316 | 0.839658 |
[
[
[
"# Individual Project",
"_____no_output_____"
],
[
"## Barber review in Gliwice",
"_____no_output_____"
],
[
"### Wojciech Pragłowski",
"_____no_output_____"
],
[
"#### Data scraped from [booksy.](https://booksy.com/pl-pl/s/barber-shop/12795_gliwice)",
"_____no_output_____"
]
],
[
[
"import requests\nfrom bs4 import BeautifulSoup\n\n\nbooksy = requests.get(\"https://booksy.com/pl-pl/s/barber-shop/12795_gliwice\")\nsoup = BeautifulSoup(booksy.content, 'html.parser')\n\n\nbarber = soup.find_all('h2')\ntext_barber = [i.get_text() for i in barber]\nbarber_names = [i.strip() for i in text_barber]\nbarber_names.pop(-1)\n\nrate = soup.find_all('div', attrs={'data-testid':'rank-average'})\ntext_rate = [i.get_text() for i in rate]\nbarber_rate = [i.strip() for i in text_rate]\n\nopinions = soup.find_all('div', attrs={'data-testid':'rank-label'})\ntext_opinions = [i.get_text() for i in opinions]\nreplace_opinions = [i.replace('opinii', '') for i in text_opinions]\nreplace_opinions2 = [i.replace('opinie', '') for i in replace_opinions]\nstrip_opinions = [i.strip() for i in replace_opinions2]\nbarber_opinions = [int(i) for i in strip_opinions]\n\nprices = soup.find_all('div', attrs={'data-testid':'service-price'})\ntext_prices = [i.get_text() for i in prices]\nreplace_prices = [i.replace('zł', '') for i in text_prices]\nreplace_prices2 = [i.replace(',', '.') for i in replace_prices]\nreplace_prices3 = [i.replace('+', '') for i in replace_prices2]\nreplace_null = [i.replace('Bezpłatna', '0') for i in replace_prices3]\nreplace_space = [i.replace(' ', '') for i in replace_null]\nstrip_prices = [i.strip() for i in replace_space]\nbarber_prices = [float(i) for i in strip_prices]",
"_____no_output_____"
],
[
"import pandas as pd\n\nbarbers = pd.DataFrame(barber_names, columns=[\"Barber's name\"])\nbarbers[\"Barber's rate\"] = barber_rate\nbarbers[\"Barber's opinions\"] = barber_opinions\n\nbarbers",
"_____no_output_____"
]
],
[
[
"#### I want to find those barbers who have more than 500 reviews",
"_____no_output_____"
]
],
[
[
"# find the trustworthy barbers, i.e. those with more than 500 reviews\nbest_opinions = [i for i in barber_opinions if i > 500]\nbest_indexes = []\n\nfor amount in barber_opinions:\n    if amount in best_opinions:\n        best_indexes.append(barber_opinions.index(amount))\n\nbest_barbers = [barber_names[i] for i in best_indexes]\nbest_rates = [barber_rate[i] for i in best_indexes]",
"_____no_output_____"
]
],
[
[
"#### On the page there are 3 basic prices for one Barber, so I'm combining them",
"_____no_output_____"
]
],
[
[
"# combine the 3 listed prices for each barber\ncombined_prices = [barber_prices[i:i+3] for i in range(0, len(barber_prices), 3)]\n\nbest_prices = [combined_prices[i] for i in best_indexes]\nprint(best_prices)",
"[[80.0, 30.0, 125.0], [90.0, 50.0, 65.0], [60.0, 50.0, 50.0], [120.0, 50.0, 40.0], [50.0, 40.0, 40.0], [70.0, 100.0, 0.0], [800.0, 1500.0, 69.0]]\n"
],
[
"avg = [sum(i)/len(i) for i in best_prices]\navg_price = [round(i,2) for i in avg]\navg_price",
"_____no_output_____"
],
[
"df_best_barber = pd.DataFrame(best_barbers, columns=[\"Barber's name\"])\ndf_best_barber[\"Amount of opinions\"] = best_opinions\ndf_best_barber[\"Barber's rate\"] = best_rates\ndf_best_barber[\"Average Barber's prices\"] = avg_price\ndf_best_barber",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt\n\nplt.style.use('ggplot')\nx = ['POPE', 'Matt', 'Sick', 'Freak', 'WILKOSZ', 'Wojnar', 'Trendy']\ny = avg_price\n\nx_pos = [i for i, _ in enumerate(x)]\n\nplt.bar(x_pos, y)\nplt.ylabel(\"Barbers' prices\")\nplt.title(\"Barbers in Gliwice & their average prices\")\n\nplt.xticks(x_pos, x)\nplt.show()",
"_____no_output_____"
],
[
"import seaborn as sns\n\nsns.relplot(x = \"Average Barber's prices\", y = \"Amount of opinions\", hue=\"Barber's name\", data = df_best_barber)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
4ab0e7786eb04df02cc21ae12c070be18fb4b7b8
| 68,385 |
ipynb
|
Jupyter Notebook
|
svm.ipynb
|
Yeolnim/nlp-intro
|
d25262d95f59bc2ed11885def2e8767193efb9d7
|
[
"MIT"
] | null | null | null |
svm.ipynb
|
Yeolnim/nlp-intro
|
d25262d95f59bc2ed11885def2e8767193efb9d7
|
[
"MIT"
] | null | null | null |
svm.ipynb
|
Yeolnim/nlp-intro
|
d25262d95f59bc2ed11885def2e8767193efb9d7
|
[
"MIT"
] | null | null | null | 324.099526 | 22,444 | 0.907158 |
[
[
[
"from sklearn.datasets import make_blobs\nfrom sklearn.svm import SVC\nimport matplotlib.pyplot as plt\nimport numpy as np\nX,y=make_blobs(n_samples=50,centers=2,random_state=0,cluster_std=0.6)\nplt.scatter(X[:,0],X[:,1],c=y,s=50,cmap='rainbow')\nplt.show()\nprint(X.shape)\n# print(X)\n\nclf=SVC(kernel=\"linear\").fit(X,y)\n# plot_svc_decision_function(clf)\n\nfrom sklearn.svm import SVC  # import the SVC class (Support Vector Classifier)\nmodel = SVC(kernel='linear', C=1E10)  # use a linear kernel\nmodel.fit(X, y)\ndef plot_svc_decision_function(model, ax=None, plot_support=True):\n    \"\"\"Plot the decision function (separating hyperplane) of a 2D SVC\"\"\"\n    if ax is None:\n        ax = plt.gca()  # plt.gca() returns the current Axes object\n    xlim = ax.get_xlim()  # return the x-axis view limits\n    ylim = ax.get_ylim() \n    # create a grid on which to evaluate the model\n    x = np.linspace(xlim[0], xlim[1], 30)\n    y = np.linspace(ylim[0], ylim[1], 30)\n    Y, X = np.meshgrid(y, x)  # turn the two 1D arrays into 2D coordinate matrices\n    xy = np.vstack([X.ravel(), Y.ravel()]).T  # np.vstack() stacks the arrays vertically \n    P = model.decision_function(xy).reshape(X.shape) \n    # draw the decision boundary and the margins\n    ax.contour(X, Y, P, colors='k',\n               levels=[-1, 0, 1], alpha=0.5,\n               linestyles=['--', '-', '--'])\n\n    if plot_support:\n        ax.scatter(model.support_vectors_[:, 0],\n                   model.support_vectors_[:, 1],\n                   s=200, linewidth=1,c='k',alpha=0.4)\n    ax.set_xlim(xlim)\n    ax.set_ylim(ylim)\nplt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')\nplot_svc_decision_function(model)\n",
"_____no_output_____"
],
[
"from sklearn.datasets import make_circles\nfrom sklearn.svm import SVC\nimport matplotlib.pyplot as plt\nimport numpy as np\nX,y=make_circles(100,factor=0.1,noise=0.2)\nplt.scatter(X[:,0],X[:,1],c=y,s=50,cmap='rainbow')\nplt.show()\nprint(X.shape)\nmodel=SVC(kernel='rbf')  # sklearn expects the lowercase kernel name 'rbf'\nmodel.fit(X,y)\n# plot_svc_decision_function(model)\n# print(X)\n",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code"
]
] |
4ab0e92ed870666aa73965a52af9fd0cde207901
| 78,734 |
ipynb
|
Jupyter Notebook
|
topics/nonlinear-equations/Nonlinear System - with gaps.ipynb
|
jomorodi/NumericalMethods
|
e040693001941079b2e0acc12e0c3ee5c917671c
|
[
"MIT"
] | 3 |
2019-03-27T05:22:34.000Z
|
2021-01-27T10:49:13.000Z
|
topics/nonlinear-equations/Nonlinear System - with gaps.ipynb
|
jomorodi/NumericalMethods
|
e040693001941079b2e0acc12e0c3ee5c917671c
|
[
"MIT"
] | null | null | null |
topics/nonlinear-equations/Nonlinear System - with gaps.ipynb
|
jomorodi/NumericalMethods
|
e040693001941079b2e0acc12e0c3ee5c917671c
|
[
"MIT"
] | 7 |
2019-12-29T23:31:56.000Z
|
2021-12-28T19:04:10.000Z
| 38.842625 | 277 | 0.464158 |
[
[
[
"# Systems of Nonlinear Equations\n## CH EN 2450 - Numerical Methods\n**Prof. Tony Saad (<a>www.tsaad.net</a>) <br/>Department of Chemical Engineering <br/>University of Utah**\n<hr/>",
"_____no_output_____"
],
[
"# Example 1\n\nA system of nonlinear equations consists of several nonlinear functions - as many as there are unknowns. Solving a system of nonlinear equations means finding those points where the functions intersect each other. Consider, for example, the following system of equations\n\\begin{equation}\ny = 4x - 0.5 x^3\n\\end{equation}\n\\begin{equation}\ny = \\sin(x)e^{-x}\n\\end{equation}\n\nThe first step is to write these in residual form\n\\begin{equation}\nf_1 = y - 4x + 0.5 x^3,\\\\\nf_2 = y - \\sin(x)e^{-x}\n\\end{equation}\n",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom numpy import cos, sin, pi, exp\n%matplotlib inline\n%config InlineBackend.figure_format = 'svg'\nimport matplotlib.pyplot as plt\nfrom scipy.optimize import fsolve",
"_____no_output_____"
],
[
"y1 = lambda x: 4 * x - 0.5 * x**3\ny2 = lambda x: sin(x)*exp(-x)\nx = np.linspace(-3.5,4,100)\nplt.ylim(-8,6)\nplt.plot(x,y1(x), 'k')\nplt.plot(x,y2(x), 'r')\nplt.grid()\nplt.savefig('example1.pdf')",
"_____no_output_____"
],
[
"def F(xval):\n x = xval[0] # let the first value in xval denote x\n y = xval[1] # let the second value in xval denote y\n f1 = y - 4.0*x + 0.5*x**3 # define f1\n f2 = y - sin(x)*exp(-x) # define f2\n return np.array([f1,f2]) # must return an array \n\n\ndef J(xval):\n x = xval[0]\n y = xval[1]\n return np.array([[1.5*x**2 - 4.0 , 1.0 ],\n [-cos(x)*exp(-x) + sin(x)*exp(-x) , 1.0]]) # Jacobian matrix J = [[df1/dx, df1/dy], [df2/dx,df2/dy]]",
"_____no_output_____"
],
[
"guess = np.array([1,3])\nF(guess)",
"_____no_output_____"
],
[
"J(guess)",
"_____no_output_____"
],
[
"def newton_solver(F, J, x, tol): # x is nothing more than your initial guess\n F_value = F(x)\n err = np.linalg.norm(F_value, ord=2) # l2 norm of vector\n# err = tol + 100\n niter = 0\n while abs(err) > tol and niter < 100:\n J_value = J(x)\n delta = np.linalg.solve(J_value, - F_value)\n x = x + delta # update the solution\n F_value = F(x) # compute new values for vector of residual functions\n err = np.linalg.norm(F_value, ord=2) # compute error norm (absolute error)\n niter += 1\n\n # Here, either a solution is found, or too many iterations\n if abs(err) > tol:\n niter = -1\n print('No Solution Found!!!!!!!!!')\n return x, niter, err",
"_____no_output_____"
]
],
[
[
"Try to find the root with $x < -2$",
"_____no_output_____"
]
],
[
[
"tol = 1e-8\nxguess = np.array([-3,0])\nroots, n, err = newton_solver(F,J,xguess,tol)\nprint ('# of iterations', n, 'roots:', roots)\nprint ('Error Norm =',err)",
"# of iterations 5 roots: [-3.32550287 5.08630572]\nError Norm = 1.4015965240388036e-09\n"
],
[
"F(roots)",
"_____no_output_____"
]
],
[
[
"Use SciPy's `fsolve` routine",
"_____no_output_____"
]
],
[
[
"fsolve(F,xguess)",
"_____no_output_____"
]
],
[
[
"# Example 2\nFind the roots of the following system of equations\n\\begin{equation}\nx^2 + y^2 = 1, \\\\\ny = x^3 - x + 1\n\\end{equation}\nFirst we assign $x_1 \\equiv x$ and $x_2 \\equiv y$ and rewrite the system in residual form\n\\begin{equation}\nf_1(x_1,x_2) = x_1^2 + x_2^2 - 1, \\\\\nf_2(x_1,x_2) = x_1^3 - x_1 - x_2 + 1\n\\end{equation}\n",
"_____no_output_____"
]
],
[
[
"x = np.linspace(-1,1)\ny1 = lambda x: x**3 - x + 1\ny2 = lambda x: np.sqrt(1 - x**2)\nplt.plot(x,y1(x), 'k')\nplt.plot(x,y2(x), 'r')\nplt.grid()",
"_____no_output_____"
],
[
"def F(xval):\n    x = xval[0]\n    y = xval[1]\n    f1 = x**2 + y**2 - 1.0   # circle: x^2 + y^2 = 1\n    f2 = x**3 - x - y + 1.0  # cubic: y = x^3 - x + 1\n    return np.array([f1, f2])\n\ndef J(xval):\n    x = xval[0]\n    y = xval[1]\n    return np.array([[2.0*x          , 2.0*y],\n                     [3.0*x**2 - 1.0 , -1.0]]) # Jacobian J = [[df1/dx, df1/dy], [df2/dx, df2/dy]]",
"_____no_output_____"
],
[
"tol = 1e-8\nxguess = np.array([0.5,0.5])\nx, n, err = newton_solver(F, J, xguess, tol)\nprint (n, x)\nprint ('Error Norm =',err)",
"6 [0.74419654 0.66796071]\nError Norm = 4.965068306494546e-16\n"
],
[
"fsolve(F,(0.5,0.5))",
"_____no_output_____"
],
[
"import requests\nfrom IPython.core.display import HTML\ndef css_styling():\n    styles = requests.get(\"https://raw.githubusercontent.com/saadtony/NumericalMethods/master/styles/custom.css\")\n    return HTML(styles.text)\ncss_styling()\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
]
] |
4ab0fefce5079665e2936f0021208ca56405ce26
| 55,295 |
ipynb
|
Jupyter Notebook
|
04_mini-projects/09_deepl.ipynb
|
rico-mix/py-algorithms-4-automotive-engineering
|
1da36207aa27f1dbfbd8e829c28de356f2456163
|
[
"MIT"
] | null | null | null |
04_mini-projects/09_deepl.ipynb
|
rico-mix/py-algorithms-4-automotive-engineering
|
1da36207aa27f1dbfbd8e829c28de356f2456163
|
[
"MIT"
] | null | null | null |
04_mini-projects/09_deepl.ipynb
|
rico-mix/py-algorithms-4-automotive-engineering
|
1da36207aa27f1dbfbd8e829c28de356f2456163
|
[
"MIT"
] | null | null | null | 62.692744 | 23,904 | 0.769871 |
[
[
[
"[Table of contents](../toc.ipynb)\n\n# Deep Learning\n\nThis notebook is a contribution of Dr.-Ing. Mauricio Fernández.\n\nInstitution: Technical University of Darmstadt, Cyber-Physical Simulation Group.\n\nEmail: [email protected], [email protected]\n\nProfiles\n- [TU Darmstadt](https://www.maschinenbau.tu-darmstadt.de/cps/department_cps/team_1/team_detail_184000.en.jsp)\n- [Google Scholar](https://scholar.google.com/citations?user=pwQ_YNEAAAAJ&hl=de)\n- [GitHub](https://github.com/mauricio-fernandez-l)",
"_____no_output_____"
],
[
"## Contents of this lecture\n\n[1. Short overview of artificial intelligence](#1.-Short-overview-of-artificial-intelligence-(AI))\n\n[2. Introduction to artificial neural networks](#2.-Introduction-to-artificial-neural-networks-(ANN))\n\n[3. How to build a basic tf.keras model](#3.-How-to-build-a-basic-tf.keras-model)\n\n[4. Regression problem](#4.-Regression-problem)\n\n[5. Classification problem](#5.-Classification-problem)\n\n[6. Summary of this lecture](#6.-Summary-of-this-lecture)",
"_____no_output_____"
],
[
"## 1. Short overview of artificial intelligence (AI)\n\nSome definitions from the web:\n- the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.\n- study of \"intelligent agents\": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals\n\n<img src=\"https://images.theconversation.com/files/168081/original/file-20170505-21003-zbguhy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=926&fit=clip\" alt=\"neural network\" width=\"300\" align=\"right\">\n\nComputational methods in AI:\n- Data mining\n- Machine learning\n    - Artificial neural networks\n        - Single layer learning\n        - **Deep learning (DL)**\n    - Kernel methods (SVM,...)\n    - Decision trees\n    - ...\n- ...",
"_____no_output_____"
],
[
"## Why DL?\n\nPros:\n- Enormous flexibility due to high number of parameters\n- Capability to represent complex functions\n- Huge range of applications (visual perception, decision-making, ...) in industry and research\n- Open high-performance software (TensorFlow, Keras, PyTorch, Scikit-learn,...)\n\n<img src=\"https://s3.amazonaws.com/keras.io/img/keras-logo-2018-large-1200.png\" alt=\"Keras\" width=\"200\" align=\"right\">\n<img src=\"https://www.tensorflow.org/images/tf_logo_social.png\" alt=\"TensorFlow\" width=\"200\" align=\"right\">\n\nCons:\n- Difficult to train (vanishing gradient,...)\n- High number of internal parameters",
"_____no_output_____"
],
[
"## 2. Introduction to artificial neural networks (ANN)\n\nNeeded modules",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits import mplot3d\nfrom matplotlib.image import imread\nimport os",
"_____no_output_____"
]
],
[
[
"## Neuron model\n\n**Neuron:** single unit cell processing incoming electric signals (input)\n\n<img src=\"https://images.theconversation.com/files/168081/original/file-20170505-21003-zbguhy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=926&fit=clip\" alt=\"neural network\" width=\"200\" align=\"left\">\n<img src=\"https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Keras+Python+Tutorial/content_content_neuron.png\" alt=\"neuron\" width=\"600\" align=\"center\">\n\n**Mathematical model:** input $x$ with output $y$ and internal parameters $w$ (weight), $b$ (bias) and activation function $a(z)$\n$$\n\\hat{y} = a(wx +b)\n$$\n\n**Example:**\n$$\n\\hat{y} = \\tanh(0.3x - 3)\n$$",
"_____no_output_____"
],
[
"## Activation functions\n\n$$ a(z) = a(w x + b) $$",
"_____no_output_____"
]
],
[
[
"z = np.linspace(-3, 3, 100)\nplt.figure()\nplt.plot(z, tf.nn.relu(z), label='relu')\nplt.plot(z, tf.nn.softplus(z), label='softplus')\nplt.plot(z, tf.nn.tanh(z), label='tanh')\nplt.plot(z, tf.nn.sigmoid(z), label='sigmoid')\nplt.xlabel('$z$')\nplt.ylabel('$a(z)$')\nplt.legend()",
"_____no_output_____"
]
],
[
[
"## ANN architecture\n\n**Example 1:** Two one-dimensional layers\n$$\n\\hat{y} = a^{(2)}(w^{(2)}a^{(1)}(w^{(1)}x+b^{(1)})+b^{(2)})\n$$\n\n**Example 2:** Network for 2D input and 1D output with one hidden layer (3 neurons) and identity final activation \n$$\n\\hat{y} = \n\\begin{pmatrix}\nw^{(2)}_1 & w^{(2)}_2 & w^{(2)}_3\n\\end{pmatrix}\na^{(1)}\n\\left(\n\\begin{pmatrix}\nw^{(1)}_{11} & w^{(1)}_{12} \\\\\nw^{(1)}_{21} & w^{(1)}_{22} \\\\\nw^{(1)}_{31} & w^{(1)}_{32} \\\\\n\\end{pmatrix}\n\\begin{pmatrix}\nx_1 \\\\ x_2\n\\end{pmatrix}\n+\n\\begin{pmatrix}\nb^{(1)}_1 \\\\ b^{(1)}_2 \\\\ b^{(1)}_3\n\\end{pmatrix}\n\\right)\n+\nb^{(2)}\n$$\n\n<img src=\"deepl_files/network1.png\" alt=\"network1\" width=\"300\" align=\"center\">\n\n[Draw networks](http://alexlenail.me/NN-SVG/index.html)",
"_____no_output_____"
],
[
"## Deep networks\n\nLots of layers\n\n<img src=\"deepl_files/network2.png\" alt=\"network2\" width=\"600\" align=\"left\">\n<img src=\"https://images.theconversation.com/files/168081/original/file-20170505-21003-zbguhy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=926&fit=clip\" alt=\"neural network\" width=\"200\" align=\"right\">",
"_____no_output_____"
],
[
"## Training an ANN\n\nFor input vector $x \\in \\mathbb{R}^3$ consider the network $\\hat{y}(x) \\in \\mathbb{R}^2$ \n\n<img src=\"deepl_files/network2.png\" alt=\"network2\" width=\"400\" align=\"center\">\n\nfor the approximation of a vector function $y(x) \\in \\mathbb{R}^2$. After fixing the architecture of the network (number of layers, number of neurons and activation functions), the remaining parameters (weights and biases) need calibration. This is achieved in **supervised learning** through the minimization of an objective function (referred to as **loss**) for provided dataset $D$ with $N$ data pairs\n$$\n D = \\{(x^{(1)},y^{(1)}),(x^{(2)},y^{(2)}),\\dots,(x^{(N)},y^{(N)})\\}\n$$\nwhich the ANN $\\hat{y}(x)$ is required to approximate. \n\nExample loss: mean squared error (MSE) $e(y,\\hat{y})$ for each data pair averaged over the complete dataset\n$$\n L = \\frac{1}{N} \\sum_{i=1}^N e(y^{(i)}, \\hat{y}^{(i)})\n \\ , \\quad\n e(y,\\hat{y}) = \\frac{1}{2}\\sum_{j=1}^2(y_j-\\hat{y}_j)^2\n$$\n\nThe calibration of weights and biases based on the minimization of the loss for given data is referred to as **training**.",
"_____no_output_____"
],
[
"## Standard problems\n\n**Regression:** fit a model $\\hat{y}(x)$ to approximate a function $y(x)$.\n* $x = 14.5$\n* $y(x) = 3\\sin(14.5)+10 = 12.8...$\n* $\\hat{y}(x) = 11.3...$\n\n**Classification:** fit a model $\\hat{y}(x)$ predicting that $x$ belongs to one of $C$ classes.\n* $C=4$ classes $\\{$cat,dog,horse,pig$\\}$\n* $x =$ image of a horse\n* $y(x) = (0,0,1,0)$ (third class = horse)\n* $\\hat{y}(x) = (0.1,0.2,0.4,0.3)$ (class probabilities - model predicts for the third class the highest probability)",
"_____no_output_____"
],
[
"## 3. How to build a basic tf.keras model\n\n<img src=\"deepl_files/network1.png\" alt=\"network1\" width=\"300\" align=\"ceter\">",
"_____no_output_____"
]
],
[
[
"# Create sequential model\nmodel = tf.keras.Sequential([\n tf.keras.layers.Dense(3, input_shape=[2], activation='relu') \n # 3x2 weights and 3 biases = 9 parameters\n ,tf.keras.layers.Dense(1) \n # 1x3 weights and 1 bias = 4 parameters\n])",
"_____no_output_____"
],
[
"# Model summary\nmodel.summary()",
"Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense (Dense) (None, 3) 9 \n_________________________________________________________________\ndense_1 (Dense) (None, 1) 4 \n=================================================================\nTotal params: 13\nTrainable params: 13\nNon-trainable params: 0\n_________________________________________________________________\n"
]
],
[
[
"<img src=\"deepl_files/network1.png\" alt=\"network1\" width=\"300\" align=\"ceter\">",
"_____no_output_____"
]
],
[
[
"# List of 3 points to be evaluated\nxs = np.array([\n [0, 0], [0, np.pi], [np.pi, np.pi]\n])\n\n# Prediction / model evaluation\nys_model = model.predict(xs)\nprint(ys_model)",
"[[0. ]\n [1.9389963]\n [2.541233 ]]\n"
]
],
[
[
"$$ y = 3 \\sin(x_1 + x_2) + 10 $$",
"_____no_output_____"
]
],
[
[
"# Data of function to be approximated (e.g., from measurements or simulations)\nys = 3*np.sin(np.sum(xs, axis=1, keepdims=True))+10\n\n# Compile model: choose optimizer and loss\nmodel.compile(optimizer='adam', loss='mse')\n\n# Train\nmodel.fit(xs, ys, epochs=100, verbose=0)\n\n# Predict after training\nys_model = model.predict(xs)\nprint(xs)\nprint(ys)\nprint(ys_model)",
"[[0. 0. ]\n [0. 3.14159265]\n [3.14159265 3.14159265]]\n[[10.]\n [10.]\n [10.]]\n[[0.21297303]\n [2.997461 ]\n [4.1517887 ]]\n"
]
],
[
[
"## 4. Regression problem\n\nApproximate the function\n$$\n y(x_1,x_2) = 3 \\sin(x_1+x_2)+10\n$$",
"_____no_output_____"
],
[
"## Exercise: train an ANN\n\n<img src=\"../_static/exercise.png\" alt=\"Exercise\" width=\"75\" align=\"left\">\n\nCreate training data and train an ANN\n\n* For $(x_1,x_2) \\in [0,\\pi] \\times [0,2\\pi]$ generate a grid with 20 points in each direction.\n* Evaluate the function $y(x_1,x_2) = 3 \\sin(x_1+x_2) + 10$ for the generated points.\n* Build a tf.keras model with two hidden-layers, 16 and 8 neurons. Use the RELU activation function.\n* Plot the data and the model output at its initialization.\n* Train the model based on the MSE.\n* Plot the data and the model output after training for 500 epochs.",
"_____no_output_____"
],
[
"## Solution\n\nPlease find one possible solution in [`regression.py`](./deepl_files/regression.py) file.",
"_____no_output_____"
],
[
"## 5. Classification problem\n\nBuild a classifier $\\hat{y}(x)$ for distinguishing between the following examples.\n\n**Question**\nHow could this be useful in automotive engineering?\n\n<img src=\"deepl_files/data/3_1.png\" alt=\"triangle 1\" width=\"100\" align=\"left\">\n<img src=\"deepl_files/data/3_3.png\" alt=\"triangle 2\" width=\"100\" align=\"left\">\n\n<img src=\"deepl_files/data/4_2.png\" alt=\"triangle 1\" width=\"100\" align=\"left\">\n<img src=\"deepl_files/data/4_4.png\" alt=\"triangle 2\" width=\"100\" align=\"left\">\n\n<img src=\"deepl_files/data/16_2.png\" alt=\"triangle 1\" width=\"100\" align=\"left\">\n<img src=\"deepl_files/data/16_4.png\" alt=\"triangle 2\" width=\"100\" align=\"left\">",
"_____no_output_____"
],
[
"**Autonomous driving**: recognition of street signs\n\n<img src=\"https://w.grube.de/media/image/7b/5f/63/art_78-101_1.jpg\" alt=\"street sign\" width=\"200\">\n<img src=\"https://assets.tooler.de/media/catalog/product/b/g/bgk106363_8306628.jpg\" alt=\"street sign\" width=\"200\">\n\nVery good advanced tutorial: https://www.pyimagesearch.com/2019/11/04/traffic-sign-classification-with-keras-and-deep-learning/",
"_____no_output_____"
],
[
"## Image classification\n\nWhat is an image in terms of data?",
"_____no_output_____"
]
],
[
[
"# This if else is a fix to make the file available for Jupyter and Travis CI\nif os.path.isfile('deepl_files/data/3_1.png'):\n file = 'deepl_files/data/3_1.png'\nelse:\n file = '04_mini-projects/deepl_files/data/3_1.png'\n\n# Load image as np.array\nimage = plt.imread(file)\nprint(type(image))\nprint(image.shape)\nplt.imshow(image)",
"<class 'numpy.ndarray'>\n(20, 20, 3)\n"
],
[
"# Image shape\nprint(image.shape)\nprint(image[0, 0, :])\n\n# Flatten\na = np.array([[1, 2, 3], [4, 5, 6]])\na_fl = a.flatten()\nprint(a_fl)\n\n# Flatten image\nimage_fl = image.flatten()\nprint(image_fl.shape)",
"(20, 20, 3)\n[0. 0.41568628 0.01960784]\n[1 2 3 4 5 6]\n(1200,)\n"
]
],
[
[
"## Classification - formulation of optimization problem\n\nEncode an image in a vector $x = (x_1,\\dots,x_n) \\in \\mathbb{R}^n$. Every image $x^{(i)}, i=1,\\dots,N$ belongs to one of $C$ prescribed classes. Denote the unknown classification function $y:\\mathbb{R}^n \\mapsto [0,1]^C$, e.g., $C=3$, $y(x^{(1)}) = (1,0,0), y(x^{(2)}) = (0,0,1), y(x^{(3)}) = (0,1,0)$. \n\nAssume a model $\\hat{y}(x)$ is given but requires calibration. For given labeled images, the [cross entropy](https://en.wikipedia.org/wiki/Cross_entropy) $e(p,\\hat{p})$ (exact and model class probabilities $p$ and $\\hat{p}$, respectively) as loss function\n\n$$\n    L \n    =\n    \\frac{1}{N}\n    \\sum_{i=1}^{N}\n    e(y(x^{(i)}),\\hat{y}(x^{(i)}))\n    \\ , \\quad\n    e(p,\\hat{p})\n    =\n    -\n    \\sum_{j=1}^{C}\n    p_j \\log(\\hat{p}_j) \n$$\n\nis best suited for classification problems.",
"_____no_output_____"
],
[
"## Exercise: train an image classifier\n\n<img src=\"../_static/exercise.png\" alt=\"Exercise\" width=\"75\" align=\"left\">\n\nTrain an image classifier for given images\n\n* Load the images from the folder [deepl_files/data](./deepl_files/data) and [deepl_files/data_test](./deepl_files/data_test)\n* Create a tf.keras model with input image and output class probability\n* Train the model with the cross entropy for 10 epochs\n* Test the trained model on the test data",
"_____no_output_____"
],
[
"## Solution\n\nPlease find one possible solution in [`classification.py`](./deepl_files/classification.py) file.",
"_____no_output_____"
],
[
"## 6. Summary of this lecture\n\nANN\n<img src=\"deepl_files/network2.png\" alt=\"network2\" width=\"700\" align=\"ceter\">\n\nStandard problems\n\n<img src=\"deepl_files/reg_start.png\" alt=\"network2\" width=\"200\" align=\"left\">\n<img src=\"deepl_files/reg_trained.png\" alt=\"network2\" width=\"200\" align=\"left\">\n\n<img src=\"deepl_files/data/3_1.png\" alt=\"network2\" width=\"100\" align=\"right\">\n<img src=\"deepl_files/data/4_1.png\" alt=\"network2\" width=\"100\" align=\"right\">\n<img src=\"deepl_files/data/16_1.png\" alt=\"network2\" width=\"100\" align=\"right\">",
"_____no_output_____"
],
[
"## Further topics\n\n* DOE, Sample generation strategies and extrapolation\n* Learning scenarios\n * Unsupervised learning\n * Reinforcement learning\n* Keras models\n * Functional API\n * Subclassing\n * Layers\n * Convolution layer\n * Dropout\n * Batch normalization\n * Advanced neural networks\n * CNN (convolutional NN)\n * RNN (recurrent NN)\n * Custom\n * Losses\n * Custom loss\n* Training\n * Overfitting\n * Optimization algorithm and parameters\n * Mini-batch training",
"_____no_output_____"
],
[
"Thank you very much for your attention! Happy coding!\n\nContact: Dr.-Ing. Mauricio Fernández\n\nEmail: [email protected], [email protected]",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
4ab10c1114bd15d9f1fd99a07596cc5d37a41e5a
| 55,310 |
ipynb
|
Jupyter Notebook
|
_notebooks/Quantum Simple Harmonic Oscillator - SciPy function.ipynb
|
SubirSarkar2021/CompPhyWithPython
|
d7506f4b89f00cd8cb6dd6096d1d0ff652571e97
|
[
"Apache-2.0"
] | null | null | null |
_notebooks/Quantum Simple Harmonic Oscillator - SciPy function.ipynb
|
SubirSarkar2021/CompPhyWithPython
|
d7506f4b89f00cd8cb6dd6096d1d0ff652571e97
|
[
"Apache-2.0"
] | 1 |
2022-02-26T10:50:49.000Z
|
2022-02-26T10:50:49.000Z
|
_notebooks/Quantum Simple Harmonic Oscillator - SciPy function.ipynb
|
SubirSarkar2021/CompPhyWithPython
|
d7506f4b89f00cd8cb6dd6096d1d0ff652571e97
|
[
"Apache-2.0"
] | null | null | null | 212.730769 | 34,908 | 0.906888 |
[
[
[
"# Quantum Simple Harmonic Oscillator",
"_____no_output_____"
],
[
"Motion of a quantum simple harmonic oscillator is governed by the time-independent Schr$\\ddot{o}$dinger equation:\n$$\n\\frac{d^2\\psi}{dx^2}=\\frac{2m}{\\hbar^2}(V(x)-E)\\psi \n$$\n\nIn a simple case, we may take the potential $V(x)$ to be harmonic inside the well and constant outside (a truncated harmonic well):\n$$\nV(x) = \n\\begin{cases}\n\\frac{1}{2}kx^2,& -L < x < L\\\\\n\\frac{1}{2}kL^2,& \\text{otherwise}\n\\end{cases}\n$$\nFor the untruncated oscillator this equation can be solved analytically and the energy eigenvalues are given by\n$$\nE_n = \\left(n + \\frac{1}{2}\\right)\\hbar \\omega \n$$\nIn this section, we shall try to solve the equation numerically with the `solve_ivp` function from the `SciPy` package. For that we have to express the second-order ODE above as two first-order ODEs in the following way -\n$$\n\\begin{aligned}\n&\\frac{d\\psi}{dx}=\\phi\\\\\n&\\frac{d\\phi}{dx}= \\frac{2m}{\\hbar^2}(V(x)-E)\\psi\n\\end{aligned}\n$$\n\nSince it is an initial value problem, we can solve it with the `solve_ivp` function from the `SciPy` package.",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.integrate import solve_ivp\nfrom scipy.optimize import bisect",
"_____no_output_____"
],
[
"@np.vectorize\ndef V(x):\n if np.abs(x) < L:\n return (1/2)*k*x**2\n else:\n return (1/2)*k*L**2\n \ndef model(x, z):\n psi, phi = z\n dpsi_dx = phi\n dphi_dx = 2*(V(x) - E)*psi\n return np.array([dpsi_dx, dphi_dx])\n\[email protected]\ndef waveRight(energy):\n global E, x, psi\n E = energy\n x = np.linspace(-b, b, 100)\n x_span = (x[0], x[-1])\n psi0, dpsi_dx0 = 0.1, 0\n x0 = [psi0, dpsi_dx0]\n sol = solve_ivp(model, x_span, x0, t_eval=x)\n x = sol.t\n psi, phi = sol.y\n return psi[-1]\n\nk = 50\nm = 1\nhcross = 1\nb = 2\nL = 1\nomega = np.sqrt(k/m)\nenergy = np.linspace(0, 0.5*k*L**2, 100)\npsiR = waveRight(energy)\n\nenergyEigenVal = []\n\nfor i in range(len(psiR)-1):\n if np.sign(psiR[i+1]) == -np.sign(psiR[i]):\n root = bisect(waveRight, energy[i+1], energy[i])\n energyEigenVal.append(root)\n \nenergyEigenVal",
"_____no_output_____"
],
[
"# Analytic energies\nE_analytic = []\nEmax = max(energyEigenVal)\nn = 0\nEn = 0\nwhile En < Emax:\n En = (n + 1/2)*hcross*omega\n E_analytic.append(En)\n n += 1\n \nE_analytic",
"_____no_output_____"
],
[
"plt.plot(energyEigenVal, ls=':', marker='^', color='blue', label='Numerical')\nplt.plot(E_analytic, ls=':', marker='o', color='red', label='Analytical')\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
"print('------------------------------------')\nprint('{0:10s}{1:2s}{2:10s}'.format('Energy(Analytic)','','Energy(Numerical)'))\nprint('------------------------------------')\nfor i in range(len(energyEigenVal)):\n print('{0:10.3f}{1:5s}{2:10.3f}'.format(E_analytic[i],'', energyEigenVal[i]))\nprint('------------------------------------')",
"------------------------------------\nEnergy(Analytic) Energy(Numerical)\n------------------------------------\n 3.536 3.534\n 10.607 10.589\n 17.678 17.544\n 24.749 23.741\n------------------------------------\n"
],
[
"\nfor i in range(len(energyEigenVal)):\n waveRight(energyEigenVal[i])\n # scale successive states by 100**i so all curves are visible on one plot\n plt.plot(x, 100**i*psi**2, label='En = %0.3f'%energyEigenVal[i])\nplt.xlabel('$x$', fontsize=14)\nplt.ylabel('$|\\psi(x)|^2$ (scaled)', fontsize=14)\nplt.legend()\nplt.show()\n ",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ab136c7a69327f4d1ee89dddf6f18a3fecc8657
| 198,836 |
ipynb
|
Jupyter Notebook
|
notebooks/b_coding/Pandas/6_plotting.ipynb
|
primer-computational-mathematics/book
|
305941b4f1fc4f15d472fd11f2c6e90741fb8b64
|
[
"MIT"
] | 3 |
2020-08-02T07:32:14.000Z
|
2021-11-16T16:40:43.000Z
|
notebooks/b_coding/Pandas/6_plotting.ipynb
|
primer-computational-mathematics/book
|
305941b4f1fc4f15d472fd11f2c6e90741fb8b64
|
[
"MIT"
] | 5 |
2020-07-27T10:45:26.000Z
|
2020-08-12T15:09:14.000Z
|
notebooks/b_coding/Pandas/6_plotting.ipynb
|
primer-computational-mathematics/book
|
305941b4f1fc4f15d472fd11f2c6e90741fb8b64
|
[
"MIT"
] | 4 |
2020-08-05T13:57:32.000Z
|
2022-02-02T19:03:57.000Z
| 688.013841 | 95,832 | 0.951794 |
[
[
[
"(pandas_plotting)=\n# Plotting\n``` {index} Pandas: plotting\n```\nPlotting with pandas is very intuitive. We can use the syntax:\n\n df.plot.*\n \nwhere * is any plot from matplotlib.pyplot supported by pandas. A full tutorial on pandas plotting can be found [here](https://pandas.pydata.org/pandas-docs/stable/user_guide/visualization.html).\n\nAlternatively, we can use other plots from the matplotlib library and pass specific columns as arguments:\n\n plt.scatter(df.col1, df.col2, c=df.col3, s=df.col4, *kwargs)\n \nIn this tutorial we will use both ways of plotting.\n\nFirst, we will load the New Zealand earthquake data and, following the date-time tutorial, create a date-time index:",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\n\nnz_eqs = pd.read_csv(\"../../geosciences/data/nz_largest_eq_since_1970.csv\")\nnz_eqs.head(4)\n\nnz_eqs[\"hour\"] = nz_eqs[\"utc_time\"].str.split(':').str.get(0).astype(float)\nnz_eqs[\"minute\"] = nz_eqs[\"utc_time\"].str.split(':').str.get(1).astype(float)\nnz_eqs[\"second\"] = nz_eqs[\"utc_time\"].str.split(':').str.get(2).astype(float)\n\nnz_eqs[\"datetime\"] = pd.to_datetime(nz_eqs[['year', 'month', 'day', 'hour', 'minute', 'second']])\nnz_eqs.head(4)\n\nnz_eqs = nz_eqs.set_index('datetime')",
"_____no_output_____"
]
],
[
[
"Let's plot magnitude data for all years and then for year 2000 only using pandas way of plotting:",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(7,5))\nnz_eqs['mag'].plot()\nplt.xlabel('Date')\nplt.ylabel('Magnitude')\nplt.show()\n\nplt.figure(figsize=(7,5))\nnz_eqs['mag'].loc['2000-01':'2001-01'].plot()\nplt.xlabel('Date')\nplt.ylabel('Magnitude')\nplt.show()",
"_____no_output_____"
]
],
[
[
"We can calculate how many earthquakes fall within each bin using:\n\n df.resample('bintype').count()\n\nFor example, for intervals of a year, month, minute or second we can use 'Y', 'M', 'T' and 'S', respectively, as the bintype argument.\n\nLet's count our earthquakes in 4-month intervals and display the result with xticks every 4 years:",
"_____no_output_____"
]
],
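As a quick illustration of `resample(...).count()` on a datetime index, here is a sketch with a tiny made-up event series standing in for the earthquake catalogue; it uses day-based bins ('180D') so the bin edges are unambiguous:

```python
import pandas as pd

# hypothetical event dates (not the real catalogue)
times = pd.to_datetime([
    "2000-01-05", "2000-02-20", "2000-03-01",
    "2000-06-15", "2001-01-10",
])
events = pd.Series(1, index=times)

# count how many events fall into each 180-day bin;
# empty bins appear with a count of 0
counts = events.resample("180D").count()
print(counts)
```

The same call with `'4M'` reproduces the 4-month binning used above.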
[
[
"figure, ax = plt.subplots(figsize=(7,5))\n\n# Resample datetime index into 4 month bins\n# and then count how many \nnz_eqs['year'].resample(\"4M\").count().plot(ax=ax, x_compat=True)\n\nimport matplotlib\n# Change xticks to be every 4 years\nax.xaxis.set_major_locator(matplotlib.dates.YearLocator(base=4))\nax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter(\"%Y\"))\n\nplt.xlabel('Date')\nplt.ylabel('No. of earthquakes')\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"Suppose we would like to view the earthquake locations, the places with the largest earthquakes, and their depths. To do that, we can use the Cartopy library and create a scatter plot, colouring the points by the depth column (the code below also scales the marker size by depth).",
"_____no_output_____"
]
],
[
[
"import cartopy.crs as ccrs\nimport cartopy.feature as cfeature\nfrom cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER\nimport matplotlib.ticker as mticker",
"_____no_output_____"
]
],
[
[
"Let's plot this data passing columns into scatter plot:",
"_____no_output_____"
]
],
[
[
"plt.rcParams.update({'font.size': 14})\n\ncentral_lon, central_lat = 170, -50\nextent = [160,188,-48,-32]\n\nfig, ax = plt.subplots(1, subplot_kw=dict(projection=ccrs.Mercator(central_lon, central_lat)), figsize=(7,7))\n\nax.set_extent(extent)\nax.coastlines(resolution='10m')\nax.set_title(\"Earthquakes in New Zealand since 1970\")\n\n# Create a scatter plot\nscatplot = ax.scatter(nz_eqs.lon,nz_eqs.lat, c=nz_eqs.depth_km,\n s=nz_eqs.depth_km/10, edgecolor=\"black\",\n cmap=\"PuRd\", lw=0.1,\n transform=ccrs.Geodetic())\n\n# Create colourbar\ncbar = plt.colorbar(scatplot, ax=ax, fraction=0.03, pad=0.1, label='Depth [km]')\n\n# Sort out gridlines and their density\nxticks_extent = list(np.arange(160, 180, 4)) + list(np.arange(-200,-170,4))\nyticks_extent = list(np.arange(-60, -30, 2))\n\ngl = ax.gridlines(linewidths=0.1)\ngl.xlabels_top = False\ngl.xlabels_bottom = True\ngl.ylabels_left = True\ngl.ylabels_right = False\ngl.xlocator = mticker.FixedLocator(xticks_extent)\ngl.ylocator = mticker.FixedLocator(yticks_extent)\ngl.xformatter = LONGITUDE_FORMATTER\ngl.yformatter = LATITUDE_FORMATTER\n\nplt.show()",
"_____no_output_____"
]
],
[
[
"This way we can easily see that the deepest and largest earthquakes are in the North.",
"_____no_output_____"
],
[
"# References\nThe notebook was compiled based on:\n* [Pandas official Getting Started tutorials](https://pandas.pydata.org/docs/getting_started/index.html#getting-started)\n* [Kaggle tutorial](https://www.kaggle.com/learn/pandas)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
4ab13fc2c63843590252e254d92846f51814c3ff
| 79,118 |
ipynb
|
Jupyter Notebook
|
_build/jupyter_execute/ipynb/01b-fundamentos-python.ipynb
|
gcpeixoto/FMECD
|
9bca72574c6630d1594396fffef31cfb8d58dec2
|
[
"CC0-1.0"
] | null | null | null |
_build/jupyter_execute/ipynb/01b-fundamentos-python.ipynb
|
gcpeixoto/FMECD
|
9bca72574c6630d1594396fffef31cfb8d58dec2
|
[
"CC0-1.0"
] | null | null | null |
_build/jupyter_execute/ipynb/01b-fundamentos-python.ipynb
|
gcpeixoto/FMECD
|
9bca72574c6630d1594396fffef31cfb8d58dec2
|
[
"CC0-1.0"
] | null | null | null | 22.180544 | 461 | 0.507457 |
[
[
[
"# *Python* Fundamentals for Scientific Computing",
"_____no_output_____"
],
[
"## Python as a pocket calculator\n\nIn this section we will show how to perform calculations using Python as a scientific calculator. Since Python is an *interpreted language*, it works directly as a *REPL (Read-Eval-Print Loop)* cycle. That is, it reads an instruction you write, evaluates that instruction by interpreting it, and prints (or not) a value.",
"_____no_output_____"
],
[
"### Arithmetic\n\nThe symbols are as follows:\n\n|operation|symbol|\n|---|---|\n|addition|+|\n|subtraction|-|\n|multiplication|*|\n|division|/|\n|exponentiation|**|",
"_____no_output_____"
],
[
"**Example:** compute the value of $4/3 - 2 \\times 3^4$",
"_____no_output_____"
]
],
[
[
"4/3 - 2*3**4",
"_____no_output_____"
]
],
[
[
"### Numerical expressions\n\nCommutativity, associativity, distributivity and the grouping of numbers by parentheses, brackets or braces are all written in Python using parentheses only.",
"_____no_output_____"
],
[
"**Example:** compute the value of $2 - 98\\{ 7/3 \\, [ 2 \\times (3 + 5 \\times 6) - ( 2^3 + 2^2 ) - (2^2 + 2^3) ] - (-1) \\}$",
"_____no_output_____"
]
],
[
[
"2 - 98*(7/3*(2*(3+5*6)-(2**3+2**2)-(2**2+2**3))-(-1))",
"_____no_output_____"
]
],
[
[
"Note that the readability of the instruction above is not good. We can include spaces to improve it. Spaces **do not change** the result of the calculation. ",
"_____no_output_____"
]
],
[
[
"2 - 98*( 7/3*( 2*( 3 + 5*6 ) - ( 2**3 + 2**2 ) - ( 2**2 + 2**3 ) ) - (-1) )",
"_____no_output_____"
]
],
[
[
"One of the principles of the Zen of Python is *readability counts*, that is, it is worth improving the way you write computer code so that others can understand it without difficulty when they read it.",
"_____no_output_____"
],
[
"Unpaired parentheses (opened but not closed) will produce an error. ",
"_____no_output_____"
]
],
[
[
"2 - 98*( 7/3",
"_____no_output_____"
]
],
[
[
"The error says that the instruction terminated unexpectedly. In general, when we type an opening parenthesis, Jupyter adds the closing one automatically. ",
"_____no_output_____"
]
],
[
[
"2 - 98*(7/3)",
"_____no_output_____"
]
],
[
[
"### Integer and fractional numbers ",
"_____no_output_____"
],
[
"In Python, integer and fractional numbers are written in distinct ways, distinguished by the decimal point. ",
"_____no_output_____"
],
[
"**Example:** Compute the value of $2 + 3$",
"_____no_output_____"
]
],
[
[
"2 + 3",
"_____no_output_____"
]
],
[
[
"**Example:** Compute the value of \n$2.0 + 3$",
"_____no_output_____"
]
],
[
[
"2.0 + 3",
"_____no_output_____"
]
],
[
[
"The value is the same, but in Python the two numbers above have different natures. This \"nature\" is called a _type_. We will talk about this later. For now, consider the following: ",
"_____no_output_____"
]
],
[
[
"type(5)",
"_____no_output_____"
],
[
"type(5.0)",
"_____no_output_____"
]
],
[
[
"The two words above indicate that the **type** of the value 5 is `int` and that of the value 5.0 is `float`. This means that in the first case we have an integer, while in the second case we have a number in _floating point_. Floating-point numbers mimic the set of real numbers. We will see more examples of this throughout the course. ",
"_____no_output_____"
],
[
"**Example:** Compute the value of $5.0 - 5$",
"_____no_output_____"
]
],
[
[
"5.0 - 5",
"_____no_output_____"
]
],
[
[
"Note the answer in the previous case. The difference between a floating-point number and an integer results in a floating-point number! This is true for versions 3.x (where x is a number greater than or equal to 0) of the Python language, but not for earlier versions. \n\nThis detail is worth mentioning for your information. However, we will use Python 3 in this course and you should not have problems with it along the way.",
"_____no_output_____"
],
[
"#### Use a point or a comma?",
"_____no_output_____"
],
[
"In the Brazilian number system, the comma (`,`) is what separates the integer part from the fractional part of a decimal number. In Python, this role is played by the point (`.`). From now on, whenever there is no notational ambiguity, we will use the point rather than the comma to represent fractional numbers in examples, exercises and explanations. ",
"_____no_output_____"
],
[
"**Example:** compute the value of $5.1 - 5$",
"_____no_output_____"
]
],
[
[
"5.1 - 5",
"_____no_output_____"
]
],
[
[
"Look at the previous calculation... The result should be 0.1, shouldn't it? What happened?! \n\nThis is due to a concept called _machine precision_. We will not go into detail about it in this course, but it is enough to know that numbers in the computer are not exact as in traditional Mathematics. \n\nA computer, as fast and intelligent as it may seem, is incapable of representing the infinitude of the real numbers. Sometimes it will _approximate_ results. So do not worry about this. The calculation looks wrong and, in fact, it is! However, this error is so small that it will practically not affect your computations significantly.",
"_____no_output_____"
],
[
"### Integer division\n \nWe can perform integer division when we want an integer quotient. For that, we use the symbol `//` (floor division).",
"_____no_output_____"
],
[
"**Example:** compute the value of $5/2$",
"_____no_output_____"
]
],
[
[
"5/2",
"_____no_output_____"
]
],
[
[
"**Example:** compute the value of the **integer division** $5/2$",
"_____no_output_____"
]
],
[
[
"5//2",
"_____no_output_____"
]
],
[
[
"#### The division algorithm and remainders \n\nYou may remember the division algorithm. A number $D$ (dividend), when divided by another number $d$ (divisor), yields a quotient $q$ and a remainder $r$, which is zero if the division is exact and nonzero if it is not. That is, the algorithm says the following:\n\n$$D = d \\times q + r$$\n\nIn Python, we can find the value $r$ of an inexact division directly through the symbol `%`, called the **modulo operator**.",
"_____no_output_____"
],
[
"**Example:** find the remainder of the division $7/2$",
"_____no_output_____"
]
],
[
[
"7 % 2",
"_____no_output_____"
]
],
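The division algorithm can be checked directly in Python: `divmod` returns the quotient and remainder at once, and they always satisfy $D = d \times q + r$. This small sketch is an addition, not part of the original notebook:

```python
D, d = 7, 2
q, r = divmod(D, d)      # quotient and remainder in one call
print(q, r)              # 3 1
assert D == d*q + r      # the division algorithm holds
```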
[
[
"Indeed, $7 = 2\\times3 + 1$",
"_____no_output_____"
],
[
"From this, we can then check even and odd integers with the `%` operator.",
"_____no_output_____"
]
],
[
[
"5 % 2 ",
"_____no_output_____"
],
[
"6 % 2",
"_____no_output_____"
]
],
[
[
"Indeed, 5 is odd, since dividing it by 2 leaves a remainder of 1, and 6 is even, since it is exactly divisible by 2 and leaves a remainder of 0.",
"_____no_output_____"
],
[
"### Percentage",
"_____no_output_____"
],
[
"So, if `%` serves to compute the remainder of a division, how do we compute a percentage? \n\nWell, there is no special symbol for computing percentages. It must be done by dividing the number in question by 100 in the usual way.",
"_____no_output_____"
],
[
"**Example:** How much is 45\\% of R\\$ 43.28?",
"_____no_output_____"
]
],
[
[
"45/100*43.28",
"_____no_output_____"
]
],
[
[
"Note that we could also perform this calculation in the following way:",
"_____no_output_____"
]
],
[
[
"0.45*43.28",
"_____no_output_____"
]
],
[
[
"Or in the form:",
"_____no_output_____"
]
],
[
[
"45*43.28/100",
"_____no_output_____"
]
],
[
[
"This last form is not as literal as the original calculation; however, the three examples show that multiplication and division have equal **precedence**. In this case it makes no difference whether the multiplication or the division is performed first. The only caveat is the second example, in which, strictly speaking, it was not the computer that divided 45 by 100. ",
"_____no_output_____"
],
[
"#### Operator precedence\n\nJust as in Mathematics, Python has operator precedence. \n\nWhen no parentheses are involved, multiplications and divisions are performed before additions and subtractions.",
"_____no_output_____"
],
[
"**Example:** compute $4 \\times 5 + 3$",
"_____no_output_____"
]
],
[
[
"4*5 + 3",
"_____no_output_____"
]
],
[
[
"**Example:** compute $4 / 5 + 3$",
"_____no_output_____"
]
],
[
[
"4/5 + 3",
"_____no_output_____"
]
],
[
[
"When there are parentheses, they take precedence. ",
"_____no_output_____"
],
[
"**Example:** compute $4 \\times (5 + 3)$",
"_____no_output_____"
]
],
[
[
"4*(5+3)",
"_____no_output_____"
]
],
[
[
"As a rule, operations are executed from left to right and from the innermost pair of parentheses to the outermost.",
"_____no_output_____"
],
[
"**Example:** compute $2 - 10 - 3$",
"_____no_output_____"
]
],
[
[
"# first, 2 - 10 = -8 is computed; \n# then -8 - 3 = -11\n2 - 10 - 3 ",
"_____no_output_____"
]
],
[
[
"**Example:** compute $2 - (10 - 3)$",
"_____no_output_____"
]
],
[
[
"# first, (10 - 3) = 7 is computed; \n# then 2 - 7 = -5\n2 - (10 - 3)",
"_____no_output_____"
]
],
[
[
"### Comments\n\nNote that above we described the steps the Python interpreter executes to evaluate the numerical expressions. However, we did this in a code cell and there was no interference with the result. Why? Because we inserted _comments_. ",
"_____no_output_____"
],
[
"#### Inline comments\n\nThey are used to ignore everything that comes after the `#` symbol on that line.",
"_____no_output_____"
]
],
[
[
"# this here is ignored",
"_____no_output_____"
],
[
"2 + 3 # this line will be evaluated ",
"_____no_output_____"
]
],
[
[
"The instruction below results in an error, since `2 +` is not a complete operation.",
"_____no_output_____"
]
],
[
[
"2 + # 3",
"_____no_output_____"
]
],
[
[
"The instruction below results in 2, since `+ 3` is commented out.",
"_____no_output_____"
]
],
[
[
"2 # + 3",
"_____no_output_____"
]
],
[
[
"**Example:** what is the value of $(3 - 4)^2 + (7/2 - 2 - (3 + 1))$?",
"_____no_output_____"
]
],
[
[
"\"\"\" ORDER OF OPERATIONS \nfirst, (3 + 1) = 4 is computed :: innermost parenthesis\nnext, 7/2 = 3.5 :: innermost division\nnext, 3.5 - 2 = 1.5 :: first inner subtraction\nnext, 1.5 - 4 = -2.5 :: second subtraction, resolving the outer parenthesis\nnext, (3 - 4) = -1 :: parenthesis\nnext, (-1)**2 = 1 :: exponentiation, or two multiplications\nnext, 1 + (-2.5) = -1.5 :: final sum\n\"\"\"\n\n# Expression \n(3 - 4)**2 + (7/2 - 2 - (3 + 1))",
"_____no_output_____"
]
],
[
[
"#### Docstrings \n\nIn Python there are no block comments, but we can use _docstrings_ when we want to comment one or multiple lines. Just insert a pair of triple single quotes (`'''`) or triple double quotes (`\"\"\"`).\n\nFor example, \n\n```python\n'''\nThis here is \na block comment and\nwill be ignored by the \ninterpreter if it comes \nfollowed by other instructions.\n'''\n```\n\nand\n\n```python\n\"\"\"\nThis here is \na block comment and\nwill be ignored by the \ninterpreter if it comes \nfollowed by other instructions.\n\"\"\"\n```\nhave the same practical effect, though there are style recommendations for using docstrings. We will not discuss those topics here.\n\nIf placed alone in a code cell here, they produce textual output.\n\n**Careful!** single quotes (`'...'`) are not apostrophes!",
"_____no_output_____"
]
],
[
[
"\"\"\"\nThis here is \na block comment and\nwill be ignored by the \ninterpreter if it comes \nfollowed by other instructions.\n\"\"\"",
"_____no_output_____"
]
],
[
[
"### Suppressing output \n\nWe can use a `;` to suppress the printout of the last command executed in the cell. ",
"_____no_output_____"
]
],
[
[
"2 + 3; # the output is not printed on screen",
"_____no_output_____"
],
[
"2 - 3 \n1 - 2; # nothing is printed, since the last command is followed by ;",
"_____no_output_____"
],
[
"\"\"\"\nMulti-line docstring\nnot printed\n\"\"\";",
"_____no_output_____"
],
[
"'''Single-line docstring suppressed''';",
"_____no_output_____"
]
],
[
[
"## Variables, assignment and reassignment",
"_____no_output_____"
],
[
"In Mathematics it is very common to use letters to stand for values. In Python, a variable is a name associated with a location in the computer's memory. The idea is similar to the address of your home.",
"_____no_output_____"
],
[
"**Example:** if $x = 2$ and $y = 3$, compute $x + y$",
"_____no_output_____"
]
],
[
[
"x = 2\ny = 3\nx + y",
"_____no_output_____"
]
],
[
[
"Above, `x` and `y` are variables. The symbol `=` indicates that we made an assignment.\n\nA reassignment happens when we make a new assignment to the same variable. This is called _overwriting_. ",
"_____no_output_____"
]
],
[
[
"x = 2 # x has value 2 \ny = 3 # y has value 3\nx = y # x has value 3\nx",
"_____no_output_____"
]
],
[
[
"**Example:** The area of a rectangle with base $b$ and height $h$ is given by $A = bh$. Compute values of $A$ for different values of $b$ and $h$.",
"_____no_output_____"
]
],
[
[
"b = 2 # base\nh = 4 # height\nA = b*h\nA",
"_____no_output_____"
],
[
"b = 10 # reassignment of b\nA = b*h # reassignment of A, but not of h \nA",
"_____no_output_____"
],
[
"h = 5 # reassignment of h\nb*h # computation without assignment to A",
"_____no_output_____"
],
[
"A # A was not changed",
"_____no_output_____"
]
],
[
[
"Variables are _case sensitive_, that is, sensitive to uppercase/lowercase. ",
"_____no_output_____"
]
],
[
[
"a = 2\nA = 3 \na - A",
"_____no_output_____"
],
[
"A - a",
"_____no_output_____"
]
],
[
[
"### Assignment by unpacking\n\nWe can perform several assignments on a single line.",
"_____no_output_____"
]
],
[
[
"b,h = 2,4",
"_____no_output_____"
],
[
"b",
"_____no_output_____"
],
[
"h",
"_____no_output_____"
]
],
[
[
"### Printing with `print`\n\n`print` is a function. We will learn a bit about functions further ahead. For now, it is enough to understand that it works as follows: ",
"_____no_output_____"
]
],
[
[
"print(A) # prints the value of A",
"3\n"
],
[
"print(b*h) # prints the value of the product b*h",
"8\n"
]
],
[
[
"With `print`, we can have more than one output per cell.",
"_____no_output_____"
]
],
[
[
"b, h = 2.1, 3.4\nprint(b) \nprint(h)",
"2.1\n3.4\n"
]
],
[
[
"We can pass more than one variable to print, separating them with commas.",
"_____no_output_____"
]
],
[
[
"print(b, h)",
"2.1 3.4\n"
],
[
"x = 0\ny = 1 \nz = 2\nprint(x, y, z)",
"0 1 2\n"
],
[
"print(x, x + 1, x + 2)",
"0 1 2\n"
]
],
[
[
"## Characters, letters, words: strings",
"_____no_output_____"
],
[
"In Python, we write characters between single or double quotes. ",
"_____no_output_____"
]
],
[
[
"'Hello!'",
"_____no_output_____"
],
[
"\"Hello!\"",
"_____no_output_____"
],
[
"'Hello, my name is Python.'",
"_____no_output_____"
]
],
[
[
"We can assign characters to variables.",
"_____no_output_____"
]
],
[
[
"a = 'a'\nA = 'A'\nprint(a)\nprint(A)",
"a\nA\n"
],
[
"area1 = 'Mathematics'\narea2 = 'Statistics' \nprint(area1, 'and', area2)",
"Mathematics and Statistics\n"
]
],
[
[
"Variable names cannot start with numbers or contain special characters ($, #, ?, !, etc.)",
"_____no_output_____"
]
],
[
[
"1a = 'a' # invalid",
"_____no_output_____"
],
[
"a1 = 'a1' # valid\na2 = 'a2' # valid",
"_____no_output_____"
]
],
[
[
"We can use `print` to concatenate variables and values.",
"_____no_output_____"
]
],
[
[
"b = 20\nh = 10\nprint('I study', area1, 'and', area2)\nprint('The area of the rectangle is', b*h)",
"I study Mathematics and Statistics\nThe area of the rectangle is 200\n"
]
],
[
[
"In Python, everything is an \"object\". In the commands below, we have objects of three different \"types\".",
"_____no_output_____"
]
],
[
[
"a = 1\nx = 2.0 \nb = 'b'",
"_____no_output_____"
]
],
[
[
"We could also do:",
"_____no_output_____"
]
],
[
[
"a, x, b = 1, 2.0, 'b' # less readable way!",
"_____no_output_____"
]
],
[
[
"When we inspect these variables (objects) with `type`, here is what we get:",
"_____no_output_____"
]
],
[
[
"type(a) # checks the \"type\" of the object",
"_____no_output_____"
],
[
"type(x)",
"_____no_output_____"
],
[
"type(b)",
"_____no_output_____"
]
],
[
[
"`str` (short for _string_) is an object defined by a chain of 0 or more characters.",
"_____no_output_____"
]
],
[
[
"nenhum = ''\nnenhum",
"_____no_output_____"
],
[
"espaco = ' '\nespaco",
"_____no_output_____"
]
],
[
[
"**Example:** Compute the area of a circle of radius $R = 3$ and print its value using print and strings. Assume $\\pi = 3.145$.",
"_____no_output_____"
]
],
[
[
"R = 3\npi = 3.145\nA = pi*R**2\nprint('The area of the circle is: ', A)",
"The area of the circle is: 28.305\n"
]
],
[
[
"## Data types: `int`, `float` and `str`",
"_____no_output_____"
],
[
"So far, we have learned to work with integers, fractional numbers and sequences of characters. \n\nIn Python, every object belongs to a \"family\". We call these families \"types\". \n\nDrawing a parallel with set theory in Mathematics, you know that $\\mathbb{Z}$ represents the _set of integers_ and $\\mathbb{R}$ represents the _set of real numbers_. \n\nBasically, if `type(x)` is `int`, this is equivalent to saying that $x \\in \\mathbb{Z}$. Similarly, if `type(y)` is `float`, this is \"almost the same\" as saying $y \\in \\mathbb{R}$. In this case, however, it is not an absolute truth for every number $y$.\n\nLater on, you will learn more about \"floating point\". It is then more accurate to say that $y \\in \\mathbb{F}$, where $\\mathbb{F}$ would be the set of all numbers in _floating point_. At the end of the day, a number in $\\mathbb{F}$ makes an \"approximation\" of a number in $\\mathbb{R}$.\n\nIn the case of `str`, it is harder to establish a similar notation, but we can create examples.",
"_____no_output_____"
],
[
"**Example:** Consider the set $A = \\{s \\in \\mathbb{S} \\, : s \\text{ has only two vowels}\\}$, where $\\mathbb{S}$ is the set of words formed by 2 letters.",
"_____no_output_____"
],
[
"Every element $s$ of this set can take one of the following values:\n\n`aa`, `ae`, `ai`, `ao`, `au`, \n\n`ea`, `ee`, `ei`, `eo`, `eu`, \n\n`ia`, `ie`, `ii`, `io`, `iu`,\n\n`oa`, `oe`, `oi`, `oo`, `ou`,\n\n`ua`, `ue`, `ui`, `uo`, `uu`.\n\nAlthough only some of these have meaning in Portuguese (`ai`, `ei`, `oi` and `ui` are interjections; `eu`, \"I\", is a pronoun; and `ou`, \"or\", is a conjunction), there are 25 _anagrams_.\n\nSo we could write:",
"_____no_output_____"
]
],
[
[
"s1, s2, s3, s4, s5 = 'aa', 'ae', 'ai', 'ao', 'au'\ns6, s7, s8, s9, s10 = 'ea', 'ee', 'ei', 'eo', 'eu'\ns11, s12, s13, s14, s15 = 'ia', 'ie', 'ii', 'io', 'iu'\ns16, s17, s18, s19, s20 = 'oa', 'oe', 'oi', 'oo', 'ou'\ns21, s22, s23, s24, s25 = 'ua', 'ue', 'ui', 'uo', 'uu'",
"_____no_output_____"
]
],
[
[
"The 25 anagrams above were stored in 25 different variables.",
"_____no_output_____"
]
],
[
[
"print(s3)\nprint(s11)\nprint(s24)",
"ai\nia\nuo\n"
]
],
[
[
"### _Casting_",
"_____no_output_____"
],
[
"One of the cool things we can do in Python is change the type of one value into another. This operation is called _type casting_, or simply _casting_. \n\nTo cast between `int`, `float` and `str`, we use functions of the same names.",
"_____no_output_____"
]
],
[
[
"float(25) # 25 is an integer, but float(25) is fractional",
"_____no_output_____"
],
[
"int(34.21) # 34.21 is fractional; int(34.21) truncates it to an integer",
"_____no_output_____"
],
[
"int(5.65) # the fractional part is simply discarded (truncation toward zero)",
"_____no_output_____"
],
[
"int(-6.6) # gives -6, not -7: truncation toward zero, not rounding down",
"_____no_output_____"
]
],
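A small sketch (an addition, not in the original notebook) contrasting `int()` truncation toward zero with true rounding down via `math.floor`:

```python
import math

print(int(5.65))         # 5   -> fractional part discarded
print(int(-6.6))         # -6  -> truncation toward zero
print(math.floor(-6.6))  # -7  -> actual rounding down (floor)
```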
[
[
"Casting a `str` object made up of letters to `int` is invalid.",
"_____no_output_____"
]
],
[
[
"int('a') ",
"_____no_output_____"
]
],
[
[
"Casting an `int` or `float` to `str` produces a number-formatted string and is valid.",
"_____no_output_____"
]
],
[
[
"str(2) ",
"_____no_output_____"
],
[
"str(3.14)",
"_____no_output_____"
]
],
[
[
"Casting a purely alphabetic `str` to `float` is invalid.",
"_____no_output_____"
]
],
[
[
"float('a')",
"_____no_output_____"
]
],
[
[
"### Concatenation",
"_____no_output_____"
],
[
"Another cool thing we can do in Python is the _concatenation_ of `str` objects. Concatenation can be done directly as a \"sum\" (using `+`) or combined with _casting_.",
"_____no_output_____"
]
],
[
[
"s21 + s23",
"_____no_output_____"
],
[
"s1 + s3 + s5",
"_____no_output_____"
],
[
"'Casting of the number ' + str(2) + '.'",
"_____no_output_____"
],
[
"str(1) + ' is an integer' + ',' + ' ' + 'but ' + str(3.1415) + ' is fractional!'",
"_____no_output_____"
]
],
[
[
"We can build repeated `str` concatenations using multiplication (`*`) and use parentheses to form all sorts of \"assemblies\". ",
"_____no_output_____"
]
],
[
[
"(str(s3) + ',')*2 + s3 + '...'",
"_____no_output_____"
],
[
"x = 1.0\n'Using 0.1, we increment ' + str(x) + ' to obtain ' + str(x + 0.1) + '.'",
"_____no_output_____"
],
[
"print(5*s20 + 10*s2 + 5*s17 + 5*('.') + 'YEAH! :)')",
"ouououououaeaeaeaeaeaeaeaeaeaeoeoeoeoeoe.....YEAH! :)\n"
]
],
[
[
"## Modules and importing",
"_____no_output_____"
],
[
"A mechanic, plumber or electrician always carries a toolbox with essential tools such as pliers, a screwdriver and a power drill. Each tool has a well-defined function. However, when these professionals face a task that no tool in their box can perform, they need to look for another tool to solve the problem.\n\nWe can understand Python in a similar way. The language provides a minimal structure of tools that can be extended. For example, so far we have learned to add and subtract, but we still do not know how to compute the square root of a number. \n\nWe do this by importing functions that \"live\" in _modules_. Whenever we need something special in our _data science_ toolbox, we should look for an existing solution or create our own. In this course we will not go deep into creating our own solutions, which is called _customization_. You will learn more about this in due time. We will use ready-made things to save time.\n\nModules are like drawers in an office or equipment in a workshop. Several modules can be organized together to form a _package_. Now, imagine you need to replace the RJ45 cable of your desktop that your _pet_ chewed through, leaving you without internet for the weekend. Besides the connectors and a lot of patience, you will need a crimping tool to build a new cable. \n\nThe crimping tool is a special kind of pliers, just as the Phillips screwdriver is a special kind of screwdriver. If you have all these tools at home and are organized, the first thing you will do is go to the drawer where that kind of tool is stored. Next, you will pick up the specialized tool.\n\nIn Python, when we need something that plays a specific role, we import an entire module, a submodule of that module, or just a single object from the module. There are many ways of doing this. \n\nTo import an entire module, we can use the syntax:\n\n```python \nimport moduleName\n```\n\nTo import a submodule, we can use the syntax:\n\n```python \nimport moduleName.submoduleName\n```\nor the syntax\n\n```python \nfrom moduleName import submoduleName\n```\n\nIf we want to use a specific function, we can use \n\n\n```python \nfrom moduleName import functionName\n```\n\nor \n\n```python \nfrom moduleName.submoduleName import functionName\n```\nif the function belongs to a submodule.\n\nThere is a very efficient way of accessing the objects of a specific module using an _alias_. The alias is a substitute name for the module. Another way of seeing this kind of import is as a key that opens a padlock. This kind of import uses the syntax: \n\n```python\nimport moduleName as nameIWant\n```\nBasically, this sentence says: _\"import the module `moduleName` as `nameIWant`\"_. This makes `nameIWant` an alias for the module you want to access. However, it makes more sense when the alias is a word with fewer characters. For example, \n\n```python\nimport moduleName as niw\n```\n\nIn this case, `niw` is a shorter alias. Throughout the course we will use aliases that have practically become a standard in the Python community. \n\nAs a final example of importing, consider the syntax:\n\n```python \nfrom moduleName import submoduleName as sn\n```\n\nIn this example, we are creating an alias for a submodule.\n",
"_____no_output_____"
],
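The import variants described above can be made concrete with the real `math` module; this sketch is an addition to the notebook:

```python
import math                      # whole module
from math import sqrt            # a single function
import math as mt                # module under an alias

# all three names reach the same function
print(math.sqrt(16), sqrt(16), mt.sqrt(16))
```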
[
"### The `math` module",
"_____no_output_____"
],
[
"The `math` module is a library of mathematical functions. What is a function? \n\nIn Mathematics, a function is like a machine that receives raw material and delivers a product. \nIf the raw material is $x$, the product is $y$ and the function is $f$, then $y = f(x)$. Hence, for each input value $x$, an output value $y$ is expected.\n\nIn this chapter, we have already dealt with the `print` function. What does it do? \n\nIt receives some content, the input value (called an _argument_), and the product (output value) is the printing of that content. \n\nFrom now on, we will use the `math` module to perform more specific mathematical operations. ",
"_____no_output_____"
],
[
"Importaremos o módulo `math` como:",
"_____no_output_____"
]
],
[
[
"import math as mt",
"_____no_output_____"
]
],
[
[
"Note que, aparentemente, nada aconteceu. Porém, este comando permite que várias funções sejam usadas em nosso _espaço de trabalho_ usando a \"chave\" `mt` para abrir o \"cadeado\" `math`.\n\nPara ver uma lista das funções existentes, escreva `mt.` e pressione a tecla `<TAB>`.",
"_____no_output_____"
],
[
"O número neperiano (ou número de Euler) $e$ é obtido como:",
"_____no_output_____"
]
],
[
[
"mt.e",
"_____no_output_____"
]
],
[
[
"O valor de $\\pi$ pode ser obtido como: ",
"_____no_output_____"
]
],
[
[
"mt.pi",
"_____no_output_____"
]
],
[
[
"Perceba, contudo, que $e$ e $\\pi$ são números irracionais. No computador, eles possuem um número finito de casas decimais! Acima, ambos possuem 16 dígitos.",
"_____no_output_____"
],
[
"**Exemplo:** calcule a área de um círculo de raio $r = 3$.",
"_____no_output_____"
]
],
[
[
"r = 3 # raio \narea = mt.pi*r**2 # area \nprint('A área é',area) # imprime com concatenação",
"A área é 28.274333882308138\n"
]
],
[
[
"A raiz quadrada de um número $x$, $\\sqrt{x}$, é calculada pela função `sqrt` (abreviação de \"square root\")",
"_____no_output_____"
]
],
[
[
"x = 3 # note que x é um 'int'\nmt.sqrt(x) # o resultado é um 'float'",
"_____no_output_____"
],
[
"mt.sqrt(16) # 16 é 'int', mas o resultado é 'float'",
"_____no_output_____"
]
],
[
[
"**Exemplo:** calcule o valor de $\\sqrt{ \\sqrt{\\pi} + e + \\left( \\frac{3}{2} \\right)^y }$, para $y = 2.1$.",
"_____no_output_____"
]
],
[
[
"mt.sqrt( mt.sqrt( mt.pi ) + mt.e + 3/2**2.1 ) # espaços dão legibilidade",
"_____no_output_____"
]
],
[
[
"O logaritmo de um número $b$ na base $a$ é dado por $\\log_a \\, b$, com $a > 0$, $b > 0$ e $a \\neq 1$. \n\n- Quando $a = e$ (base neperiana), temos o _logaritmo natural_ de $b$, denotado por $\\text{ln} \\, b$.\n\n- Quando $a = 10$ (base 10), temos o _logaritmo em base 10_ de $b$, denotado por $\\text{log}_{10} \\, b$, ou simplesmente $\\text{log} \\, b$.\n\nEm Python, algum cuidado deve ser tomado com a função logaritmo. \n\n- Para calcular $\\text{ln} \\, b$, use `log(b)`.\n\n- Para calcular $\\text{log} \\, b$, use `log10(b)`.\n\n- Para calcular $\\text{log}_2 \\, b$, use `log2(b)`.\n\n- Para calcular $\\text{log}_a \\, b$, use `log(b,a)`.\n\nPelas duas últimas colocações, vemos que `log(b,2)` é o mesmo que `log2(b)`.\n",
"_____no_output_____"
],
[
"Vejamos alguns exemplos:",
"_____no_output_____"
]
],
[
[
"mt.log(2) # isto é ln(2)",
"_____no_output_____"
],
[
"mt.log10(2) # isto é log(2) na base 10",
"_____no_output_____"
],
[
"mt.log(2,10) # isto é o mesmo que a anterior",
"_____no_output_____"
],
[
"mt.log2(2) # isto é log(2) na base 2",
"_____no_output_____"
]
],
[
[
"**Exemplo:** se $f(x) = \\dfrac{ \\text{ln}(x+4) + \\log_3 x }{ \\log_{10} x }$, calcule o valor de $f(e) + f(\\pi)$.",
"_____no_output_____"
]
],
[
[
"x = mt.e # x = e\n\nfe = ( mt.log(x + 4) + mt.log(x,3) ) / ( mt.log10(x) ) # f(e)\n\nx = mt.pi # reatribuição do valor de x\n\nfpi = ( mt.log(x + 4) + mt.log(x,3) ) / ( mt.log10(x) ) # f(pi)\n\nprint('O valor é', fe + fpi)",
"O valor é 12.53225811727087\n"
]
],
[
[
"No exemplo anterior, espaços foram acrescentados para tornar os comandos mais legíveis.",
"_____no_output_____"
],
[
"## Introspecção \n\nPodemos entender mais sobre módulos, funções e suas capacidades examinando seus componentes e pedindo ajuda sobre eles. \n\nPara listar todos os objetos de um módulo, use `dir(nomeDoModulo)`. \n\nPara pedir ajuda sobre um objeto, use a função `help` ou um ponto de interrogação após o nome `?`.",
"_____no_output_____"
]
],
[
[
"dir(mt) # lista todas as funções do módulo math",
"_____no_output_____"
],
[
"help(mt.pow) # ajuda sobre a função 'pow' do módulo 'math'",
"Help on built-in function pow in module math:\n\npow(x, y, /)\n Return x**y (x to the power of y).\n\n"
]
],
[
[
"Como vemos, a função `pow` do módulo `math` serve para realizar uma potenciação do tipo $x^y$. ",
"_____no_output_____"
]
],
[
[
"mt.pow(2,3) # 2 elevado a 3",
"_____no_output_____"
]
],
[
[
"**Exemplo:** considere o triângulo retângulo com catetos de comprimento $a = \\frac{3}{4}\\pi$ e $b = \\frac{2}{e}$, ambos medidos em metros. Qual é o comprimento da hipotenusa $c$?",
"_____no_output_____"
]
],
[
[
"'''\nResolução pelo Teorema de Pitágoras\nc = sqrt( a**2 + b**2 )\n'''\na = 3./4. * mt.pi # 3. e 4. é o mesmo que 3.0 e 4.0\nb = 2./mt.e\nc = mt.sqrt( a**2 + b**2 )\nprint('O valor da hipotenusa é:', c, 'm')",
"O valor da hipotenusa é: 2.4683989970341536 m\n"
]
],
[
[
"O mesmo cálculo acima poderia ser resolvido com apenas uma linha usando a função `hypot` do módulo `math`.",
"_____no_output_____"
]
],
[
[
"c = mt.hypot(a,b) # hypot calcula a hipotenusa\nprint('O valor da hipotenusa é:', c, 'm')",
"O valor da hipotenusa é: 2.4683989970341536 m\n"
]
],
[
[
"**Exemplo:** Converta o ângulo de $270^{\\circ}$ para radianos.",
"_____no_output_____"
],
[
"1 radiano ($rad$) é igual a um arco de medida $r$ de uma circunferência cujo raio mede $r$. Isto é, $r = 1 \\,rad$. Uma vez que $180^{\\circ}$ corresponde a meia-circunferência, $\\pi r$, então, $\\pi \\, rad = 180^{\\circ}$. Por regra de três, podemos concluir que $x^{\\circ}$ equivale a $\\frac{x}{180^{\\circ}} \\pi \\, rad$.",
"_____no_output_____"
]
],
[
[
"ang_graus = 270 # valor em graus\nang_rad = ang_graus/180*mt.pi # valor em radianos\nprint(ang_rad)",
"4.71238898038469\n"
]
],
[
[
"Note que em \n\n```python\nang_graus/180*mt.pi\n```\n\na divisão é executada antes da multiplicação porque vem primeiro à esquerda. Parênteses não são necessários neste caso.",
"_____no_output_____"
],
[
"Poderíamos chegar ao mesmo resultado diretamente com a função `radians` do módulo `math`.",
"_____no_output_____"
]
],
[
[
"mt.radians?",
"_____no_output_____"
],
[
"mt.radians(ang_graus) ",
"_____no_output_____"
]
],
[
[
"### Arredondamento de números fracionários para inteiros",
"_____no_output_____"
],
[
"Com `math`, podemos arredondar números fracionários para inteiros usando a regra \"para cima\" (teto) ou \"para baixo\" (piso/chão). \n\n- Use `ceil` (abreviatura de _ceiling_, ou \"teto\") para arredondar para cima;\n- Use `floor` (\"chão\") para arredondar para baixo;",
"_____no_output_____"
]
],
[
[
"mt.ceil(3.42)",
"_____no_output_____"
],
[
"mt.floor(3.42)",
"_____no_output_____"
],
[
"mt.ceil(3.5)",
"_____no_output_____"
],
[
"mt.floor(3.5)",
"_____no_output_____"
]
],
[
[
"## Números complexos",
"_____no_output_____"
],
[
"Números complexos são muito importantes no estudo de fenômenos físicos envolvendo sons, frequencias e vibrações. Na Matemática, o conjunto dos números complexos é definido como\n\n$\\mathbb{C} = \\{ z = a + bi \\, : \\, a, b \\in \\mathbb{R} \\text{ e } i = \\sqrt{-1} \\}$. O valor $i$ é o _número imaginário_.\n\n\nEm Python, os números complexos são objetos do tipo `complex` e são escritos na forma\n\n```python\nz = a + bj\n```\nou \n\n```python\nz = a + bJ\n```\nO símbolo `j` (ou `J`) quando vem acompanhado de um `int` ou `float` define a parte imaginária do número complexo.",
"_____no_output_____"
]
],
[
[
"3 - 2j",
"_____no_output_____"
],
[
"type(3 - 2j) # o número é um complex",
"_____no_output_____"
],
[
"4J # J (maiúsculo)",
"_____no_output_____"
],
[
"type(4j)",
"_____no_output_____"
]
],
[
[
"Se `j` ou `J` são colocados isoladamente, significarão uma variável. Caso a variável não esteja definida, um erro de indefinição resultará.",
"_____no_output_____"
]
],
[
[
"j",
"_____no_output_____"
]
],
[
[
"### Parte real, parte imaginária e conjugados",
"_____no_output_____"
],
[
"Números complexos também podem ser diretamente definidos como `complex(a,b)` onde `a` é a parte real e `b` é a parte imaginária. ",
"_____no_output_____"
]
],
[
[
"complex(6.3,9.8)",
"_____no_output_____"
]
],
[
[
"As partes real e imaginária de um `complex` são extraídas usando as funções `real` e `imag`, nesta ordem. ",
"_____no_output_____"
]
],
[
[
"z = 6.3 + 9.8j\nz.real",
"_____no_output_____"
],
[
"z.imag",
"_____no_output_____"
]
],
[
[
"O conjugado de `z` pode ser encontrado com a função `conjugate()`.",
"_____no_output_____"
]
],
[
[
"z.conjugate()",
"_____no_output_____"
]
],
[
[
"### Módulo de um número complexo\n\nSe $Re(z)$ e $Im(z)$ forem, respectivamente, a parte real e a parte imaginária de um número complexo, o _módulo_ de $z$ é definido como\n\n$$|z| =\\sqrt{[Re(z)]^2 + [Im(z)]^2}$$",
"_____no_output_____"
],
[
"**Exemplo:** se $z = 2 + 2i$, calcule o valor de $|z|$.",
"_____no_output_____"
],
[
"Podemos computar esta quantidade de maneiras distintas. Uma delas é:",
"_____no_output_____"
]
],
[
[
"(z.real ** 2 + z.imag ** 2) ** 0.5",
"_____no_output_____"
]
],
[
[
"Entretanto, podemos usar a função `abs` do Python. Esta função é uma função predefinida pertencente ao _core_ da linguagem.",
"_____no_output_____"
]
],
[
[
"abs(z)",
"_____no_output_____"
]
],
[
[
"A função `abs` também serve para retornar o \"valor absoluto\" (ou módulo) de números reais. Lembremos que o módulo de um número real $x$ é definido como\n\n$$\n|x| =\n\\begin{cases}\nx,& \\text{se } x \\geq 0 \\\\\n-x,& \\text{se } x < 0 \\\\\n\\end{cases}\n$$",
"_____no_output_____"
]
],
[
[
"help(abs)",
"Help on built-in function abs in module builtins:\n\nabs(x, /)\n Return the absolute value of the argument.\n\n"
],
[
"abs(-3.1)",
"_____no_output_____"
],
[
"abs(-mt.e)",
"_____no_output_____"
]
],
[
[
"## Zen do Python: o estilo \"pythônico\" dos códigos\n\nO Python tem um estilo de programação próprio popularmente chamado de *Zen do Python*, que advoga por princípios de projeto. O Zen do Python foi elaborado por Tim Peters e documentado no [[PEP 20]](https://legacy.python.org/dev/peps/pep-0020/). Em nossa língua, os princípios seriam traduzidos da seguinte forma: \n\n- Bonito é melhor que feio.\n- Explícito é melhor que implícito.\n- Simples é melhor que complexo.\n- Complexo é melhor que complicado.\n- Linear é melhor do que aninhado.\n- Esparso é melhor que denso.\n- Legibilidade conta.\n- Casos especiais não são especiais o bastante para quebrar as regras.\n- Ainda que praticidade vença a pureza.\n- Erros nunca devem passar silenciosamente.\n- A menos que sejam explicitamente silenciados.\n- Diante da ambiguidade, recuse a tentação de adivinhar.\n- Deveria haver um — e preferencialmente só um — modo óbvio para fazer algo.\n- Embora esse modo possa não ser óbvio a princípio a menos que você seja holandês.\n- Agora é melhor que nunca.\n- Embora nunca frequentemente seja melhor que já.\n- Se a implementação é difícil de explicar, é uma má ideia.\n- Se a implementação é fácil de explicar, pode ser uma boa ideia.\n- Namespaces são uma grande ideia — vamos ter mais dessas!\n\nUm dos mais importantes é **deveria haver um — e preferencialmente só um — modo óbvio para fazer algo**. Isto quer dizer que códigos devem ser escritos de modo \"pythônico\", ou seja, seguindo o estilo natural da linguagem. \n\nVocê pode sempre lembrar desses princípios com a seguinte instrução:\n\n```python\nimport this\n```",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
]
] |
4ab15722b8d0e8831f9ef2d11857c4d5718c707a
| 6,373 |
ipynb
|
Jupyter Notebook
|
federated_learning/nvflare/nvflare_example_docker/2-Server.ipynb
|
phcerdan/tutorials
|
0a1ce3eaece521f00c365b248616c8e82ddc75ce
|
[
"Apache-2.0"
] | 535 |
2020-09-16T06:23:49.000Z
|
2022-03-31T13:48:34.000Z
|
federated_learning/nvflare/nvflare_example_docker/2-Server.ipynb
|
phcerdan/tutorials
|
0a1ce3eaece521f00c365b248616c8e82ddc75ce
|
[
"Apache-2.0"
] | 454 |
2020-09-16T02:11:17.000Z
|
2022-03-31T20:00:09.000Z
|
federated_learning/nvflare/nvflare_example_docker/2-Server.ipynb
|
phcerdan/tutorials
|
0a1ce3eaece521f00c365b248616c8e82ddc75ce
|
[
"Apache-2.0"
] | 289 |
2020-09-21T16:24:53.000Z
|
2022-03-31T13:04:14.000Z
| 26.334711 | 131 | 0.548878 |
[
[
[
"# FL Server Joining FL experiment\n\nThe purpose of this notebook is to show how to start a server to participate in an FL experiment.",
"_____no_output_____"
],
[
"## Prerequisites\n- The [Startup notebook](1-Startup.ipynb) has been run successfully.",
"_____no_output_____"
],
[
"### Start Server Docker",
"_____no_output_____"
]
],
[
[
"import os\nfrom IPython.display import HTML\nfrom multiprocessing import Process",
"_____no_output_____"
],
[
"server_name = \"server\"\nworkspace = \"demo_workspace\"\n\nserver_startup_path = os.path.join(workspace, server_name, \"startup\")\ncmd = server_startup_path + \"/docker.sh\"\n\n\ndef run_server():\n cmd = server_startup_path + \"/docker.sh\"\n print(\"running cmd \" + cmd)\n !$cmd\n\n\np1 = Process(target=run_server)\n\np1.start()",
"running cmd demo_workspace/server/startup/docker.sh\nStarting docker with monai_nvflare:latest\n\n=============\n== PyTorch ==\n=============\n\nNVIDIA Release 21.08 (build 26011915)\nPyTorch Version 1.10.0a0+3fd9dcf\n\nContainer image Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.\n\nCopyright (c) 2014-2021 Facebook Inc.\nCopyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert)\nCopyright (c) 2012-2014 Deepmind Technologies (Koray Kavukcuoglu)\nCopyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu)\nCopyright (c) 2011-2013 NYU (Clement Farabet)\nCopyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston)\nCopyright (c) 2006 Idiap Research Institute (Samy Bengio)\nCopyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz)\nCopyright (c) 2015 Google Inc.\nCopyright (c) 2015 Yangqing Jia\nCopyright (c) 2013-2016 The Caffe contributors\nAll rights reserved.\n\nNVIDIA Deep Learning Profiler (dlprof) Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.\n\nVarious files include modifications (c) NVIDIA CORPORATION. All rights reserved.\n\nThis container image and its contents are governed by the NVIDIA Deep Learning Container License.\nBy pulling and using the container, you accept the terms and conditions of this license:\nhttps://developer.nvidia.com/ngc/nvidia-deep-learning-container-license\n\nNOTE: MOFED driver for multi-node communication was not detected.\n Multi-node communication performance may be reduced.\n\n\u001b]0;root@sys: /workspace\u0007root@sys:/workspace# "
]
],
[
[
"### Check Started Containers",
"_____no_output_____"
]
],
[
[
"!docker ps -a",
"CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n08492335d003 monai_nvflare:latest \"/usr/local/bin/nvid…\" 2 seconds ago Up 1 second flserver\n"
]
],
[
[
"### Start Server",
"_____no_output_____"
],
[
"To start a server, you should:\n\n- open a terminal and enter the container named `flserver`.\n- run `start.sh` under `startup/`.",
"_____no_output_____"
]
],
[
[
"# You can click the following link, or manually open a new terminal.\nHTML('<a href=\"\", data-commandlinker-command=\"terminal:create-new\"> Open a new terminal</a>')",
"_____no_output_____"
]
],
[
[
"The commands can be:\n\n```\ndocker exec -it flserver bash\ncd startup/\nsh start.sh\n```\n\nA successfully started server will print logs as follow:\n<br><br>",
"_____no_output_____"
],
[
"### Next Steps\n\nYou have now started the server container.\nIn the next notebook, [Client Startup Notebook](3-Client.ipynb), you'll start two clients participating in the FL experiment.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
4ab15ec96a75860ebe943f52ed7022dd2c8bc860
| 3,734 |
ipynb
|
Jupyter Notebook
|
SysStat.ipynb
|
eddo888/Carnets
|
92932da7f0d61080db231ba3a0b63c95edb209c3
|
[
"MIT"
] | null | null | null |
SysStat.ipynb
|
eddo888/Carnets
|
92932da7f0d61080db231ba3a0b63c95edb209c3
|
[
"MIT"
] | null | null | null |
SysStat.ipynb
|
eddo888/Carnets
|
92932da7f0d61080db231ba3a0b63c95edb209c3
|
[
"MIT"
] | null | null | null | 28.287879 | 231 | 0.483396 |
[
[
[
"pip install --upgrade arrow",
"Requirement already up-to-date: arrow in /private/var/mobile/Containers/Data/Application/4053D624-71DA-49F6-96B2-782AE93B4AC1/Library/lib/python3.7/site-packages (0.15.5)\nRequirement already satisfied, skipping upgrade: python-dateutil in /private/var/mobile/Containers/Data/Application/4053D624-71DA-49F6-96B2-782AE93B4AC1/Library/lib/python3.7/site-packages (from arrow) (2.7.5)\nRequirement already satisfied, skipping upgrade: six>=1.5 in /private/var/mobile/Containers/Data/Application/4053D624-71DA-49F6-96B2-782AE93B4AC1/Library/lib/python3.7/site-packages (from python-dateutil->arrow) (1.12.0)\nNote: you may need to restart the kernel to use updated packages.\n"
],
[
"import sys, os, re, pwd, grp, json, arrow\nfrom collections import OrderedDict",
"_____no_output_____"
],
[
"me=sys.argv[0]\nstat = os.stat(me)\n\ndef toAEST(t):\n mt=arrow.get(t).to('AEST')\n return mt.format('YYYY-MM-DD HH:mm:ss')",
"_____no_output_____"
],
[
"sysstat=OrderedDict([\n ('stat', OrderedDict([\n ('base', os.path.basename(me)),\n ('mode', stat.st_mode),\n ('ino', stat.st_ino),\n ('dev', stat.st_dev),\n ('nlink', stat.st_nlink),\n ('size', os.path.getsize(me)),\n ('owner', OrderedDict([\n ('user', pwd.getpwuid(stat.st_uid).pw_name),\n ('group', grp.getgrgid(stat.st_gid).gr_name),\n ])),\n ('times', OrderedDict([\n ('created', toAEST(stat.st_ctime)),\n ('modified', toAEST(stat.st_mtime)),\n ('accessed', toAEST(stat.st_atime)),\n ]))\n ]))\n])\n\nprint(json.dumps(sysstat, indent=4))",
"{\n \"stat\": {\n \"base\": \"ipykernel_launcher.py\",\n \"mode\": 33188,\n \"ino\": 8701748802,\n \"dev\": 16777220,\n \"nlink\": 1,\n \"size\": 451,\n \"owner\": {\n \"user\": \"dave\",\n \"group\": \"admin\"\n },\n \"times\": {\n \"created\": \"2020-03-23 16:49:45\",\n \"modified\": \"2020-03-23 16:49:44\",\n \"accessed\": \"2020-03-23 16:49:45\"\n }\n }\n}\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code"
]
] |
4ab1665b44dc7aac233e347175626bbf03945747
| 150,216 |
ipynb
|
Jupyter Notebook
|
Notebooks/Pandas.ipynb
|
leandrobarbieri/pydata-book
|
36733869f4d23e4c6c8e081b0097c7aa2b65545d
|
[
"MIT"
] | null | null | null |
Notebooks/Pandas.ipynb
|
leandrobarbieri/pydata-book
|
36733869f4d23e4c6c8e081b0097c7aa2b65545d
|
[
"MIT"
] | null | null | null |
Notebooks/Pandas.ipynb
|
leandrobarbieri/pydata-book
|
36733869f4d23e4c6c8e081b0097c7aa2b65545d
|
[
"MIT"
] | null | null | null | 24.670061 | 233 | 0.395823 |
[
[
[
"<a href=\"https://colab.research.google.com/github/leandrobarbieri/pydata-book/blob/2nd-edition/Pandas.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Pandas",
"_____no_output_____"
]
],
[
[
"# Importando os pacotes/modulos do pandas\nimport pandas as pd\nfrom pandas import Series, DataFrame\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Configurações inicias das bibliotecas\nnp.random.seed(1234)\nplt.rc('figure', figsize=(10, 6))\n\n# salva a quantidade original de linhas exibidas\nPREVIOUS_MAX_ROWS = pd.options.display.max_rows\n\n# set a nova quantidade de itens máximo exibidos ao dar um display em um dataframe, usar ... para representar os itens intermediarios\npd.options.display.max_rows = 20\n\n# set o formato de apresentação\nnp.set_printoptions(precision=4, suppress=True)\n",
"_____no_output_____"
]
],
[
[
"## Series",
"_____no_output_____"
],
[
"### index e values",
"_____no_output_____"
]
],
[
[
"# Series do pandas é como um array numpy com um indice que pode conter labels\nserie1 = pd.Series([4, 5, 6, -2, 2])\n\n# atribui um indice automaticamente\nprint(f\"{serie1}\\n\")\n\n# index é um obj do tipo range com inicio, fim, step\nprint(f\"Tipo: {type(serie1.index)}\\n\")\n\nindex = [x for x in serie1.index]\nprint(f\"Somente o index: {index}\\n\")\nprint(f\"Somente o index: {serie1.index.values}\\n\")",
"0 4\n1 5\n2 6\n3 -2\n4 2\ndtype: int64\n\nTipo: <class 'pandas.core.indexes.range.RangeIndex'>\n\nSomente o index: [0, 1, 2, 3, 4]\n\nSomente o index: [0 1 2 3 4]\n\n"
],
[
"# definindo lables para o indice\nserie2 = pd.Series(data=[1, 2, 4, 5], index=[\"a\", \"b\", \"c\", \"d\"])\nprint(f\"Labels como índice:\\n{serie2}\\n\")\nprint(f\"Indice: {serie2.index} \\nValores: {serie2.values}\")",
"Labels como índice:\na 1\nb 2\nc 4\nd 5\ndtype: int64\n\nIndice: Index(['a', 'b', 'c', 'd'], dtype='object') \nValores: [1 2 4 5]\n"
]
],
[
[
"### slice e atribuição",
"_____no_output_____"
]
],
[
[
"# um item específico\nserie2[\"a\"]",
"_____no_output_____"
],
[
"# um range de itens. Limite superior incluido (diferente de indices numpy)\nserie2[\"a\":\"c\"]",
"_____no_output_____"
],
[
"# itens específicos em ordem específica\nserie2[[\"b\", \"d\", \"c\"]]",
"_____no_output_____"
],
[
"# atribuir valores\nserie2[[\"d\", \"b\"]] = 999\nprint(serie2)",
"a 1\nb 999\nc 4\nd 999\ndtype: int64\n"
],
[
"# operações logicas nas series retornam resultados fitrados\nserie_nova = serie2[serie2 < 10]\nserie_nova",
"_____no_output_____"
],
[
"# operação vetorizada\nserie_nova * 2",
"_____no_output_____"
],
[
"# verificar se um item está na serie (index)\n\n# Retorna True/false\nprint(f\"a in serie2: {'a' in serie2}\")\n\n# Retorna True/false\nprint(f\"1 in serie2: {[True for valor in serie2 if valor == 1]}\")\n\n# Retona o elemento\nprint(f\"serie2['a']: {serie2['a']}\")\n\n# Retona chave/valor\nprint(f\"serie2[serie2.index == 'a']: {serie2[serie2.index == 'a']}\")",
"a in serie2: True\n1 in serie2: [True]\nserie2['a']: 1\nserie2[serie2.index == 'a']: a 1\ndtype: int64\n"
]
],
[
[
"### pareamento a partir do index",
"_____no_output_____"
]
],
[
[
"# Criar uma serie a partir de um dict\nestados_dict = {\"Sao Paulo\": 45000, \"Rio de Janeiro\": 50141, \"Espirito Santo\": 30914}\n\n# os estados são transformados em index e os valores e values\nserie3 = pd.Series(estados_dict)\n\nprint(f\"serie3:\\n{serie3}\\n\")\nprint(f\"index:\\n{serie3.index}\\n\")\nprint(f\"values:\\n{serie3.values}\\n\")\n",
"serie3:\nSao Paulo 45000\nRio de Janeiro 50141\nEspirito Santo 30914\ndtype: int64\n\nindex:\nIndex(['Sao Paulo', 'Rio de Janeiro', 'Espirito Santo'], dtype='object')\n\nvalues:\n[45000 50141 30914]\n\n"
],
[
"# definindo a propria lista de index. \n\n# A lista tem apenas 2 estados, ES ficou de fora \nestados_index = [\"Sao Paulo\", \"Rio de Janeiro\"]\n\n# apesar dos dados terem o ES, ele não aparece porque foi definido apenas SP e RJ na lista de indices\nserie4 = pd.Series(estados_dict, index=estados_index)\nserie4",
"_____no_output_____"
],
[
"# se um index não existir na serie de dados, será preenchido com NaN\n# É como se um left join entre os indices informados no parametro index de series e os indices preexistentes na serie de dados\nserie5 = pd.Series(data=estados_dict, index=[\"Sao Paulo\", \"Rio de Janeiro\", \"Novo Estado\"])\nserie5",
"_____no_output_____"
],
[
"# descobrir se existe elementos null na serie\n# serie5.isnull()\npd.isnull(serie5)",
"_____no_output_____"
],
[
"# Retorna somente itens não null\nserie5[~pd.isnull(serie5)]",
"_____no_output_____"
],
[
"# somar series faz o pareamento pelo indice\n# na serie3 existe ES mas na serie5 não\n# Novo Estado existe na serie5 mas não na serie3\n# qualquer um que tenha NaN vai gerar um NaN na soma final, mesmo que exista algum valor\n# no final faz a soma daqueles que existem nos dois lados\nserie3 + serie5",
"_____no_output_____"
],
[
"# para somar sem perder o valor caso tenha algum valor NaN na soma\n# o ES agora retorna a população mesmo com uma soma com NaN (porque serie5 não tem ES)\nserie3.add(serie5, fill_value=0)",
"_____no_output_____"
],
[
"# metadado da serie, identifica o nome para a serie\nserie5.name = \"Populacao\"\n# nome do index\nserie5.index.name = \"Estados\"\n\n# renomeando os indices\n# serie5.index = [\"a\", \"b\", \"c\"]\n\n# alterando o nome da coluna de index\nserie5.index.rename('UF', inplace=True)\n\nserie5",
"_____no_output_____"
]
],
[
[
"## DataFrames\nSão tabelas com linhas e colunas que podem ser indexadas com os parametros index ou columns.\nVários tipos de objetos podem ser transformados em DataFrames (lists, dicts..)\nPossuem uma série de funções para manipulação dos dados e filtragem",
"_____no_output_____"
],
[
"### Dataframe a partir de um dict",
"_____no_output_____"
]
],
[
[
"# dataframes são como series multidimensionais que compartilham o mesmo index\n# possuem index/labels nas linhas (index=) e colunas (columns=)\n\n# criando um dataframe a partir de um dict\ndados1 = {\"estado\": [\"SP\", \"RJ\", \"ES\"] * 3, \n \"ano\": [2000, 2000, 2000, 2010, 2010, 2010, 2020, 2020, 2020],\n \"populacao\": [50_000, 30_000, 20_000, 55_000, 33_000, 22_000, 60_000, 40_000, 30_000]}\n\n# transforma um dicionario em um dataframe\ndf1 = pd.DataFrame(dados1)\ndf1",
"_____no_output_____"
]
],
[
[
"### listar dataframes",
"_____no_output_____"
]
],
[
[
"# alguns metodos úteis para listar dataframes\n\n# top5\ndf1.head(5)\n#df1[:5]\n\n# ultimos 2\n# df1.tail(2)\n# df1[-2:]\n\n# lista tudo\n#display(df1)",
"_____no_output_____"
],
[
"# lista algumas colunas\ndf1[[\"estado\", \"populacao\"]]\n\n# estatisticas da coluna populaçao\ndf1[\"populacao\"].describe()",
"_____no_output_____"
]
],
[
[
"### alterando a sequencia das colunas",
"_____no_output_____"
]
],
[
[
"print(f\"Sequência atual das colunas: {list(df1.columns)}\\n\")\n\n# alterando a sequencia\ndf_sequencia_colunas_alteradas = pd.DataFrame(df1, columns=[\"ano\", \"estado\", \"populacao\"])\ndf_sequencia_colunas_alteradas",
"Sequência atual das colunas: ['estado', 'ano', 'populacao']\n\n"
]
],
[
[
"### criando colunas em df existente",
"_____no_output_____"
]
],
[
[
"# criando um df novo a partir de um que já existe mas com a coluna dif que ainda não exite\ndf3 = pd.DataFrame(dados1, columns=[\"ano\", \"estado\", \"populacao\", \"dif\"], \n index=[\"a\", \"b\", \"c\", \"d\", \"e\", \"f\", \"g\", \"h\", \"i\"])\n\ndf3",
"_____no_output_____"
]
],
[
[
"### filtrando linhas e colunas loc e iloc",
"_____no_output_____"
]
],
[
[
"# localizando linhas e colunas usando loc e iloc\n\n# filtrando uma coluna\ndados_es = df3.loc[df3[\"estado\"] == \"ES\", [\"estado\", \"ano\", \"populacao\"]]\ndados_es\n#print(dados_es[[\"ano\", \"estado\", \"populacao\"]])",
"_____no_output_____"
],
[
"# filtrando com iloc: usando indices para buscar os 3 primeiros registros e as duas primeiras colunas\ndados_es = df3.iloc[:3, :2]\nprint(dados_es[[\"ano\", \"estado\"]])",
" ano estado\na 2000 SP\nb 2000 RJ\nc 2000 ES\n"
]
],
[
[
"### filtrando uma coluna com mais de uma condição",
"_____no_output_____"
]
],
[
[
"# filtrando uma coluna com mais de uma condição\ndados_es_2010 = df3.loc[ (df3[\"ano\"] == 2010) & (df3[\"estado\"] == \"ES\") ]\n\nprint(dados_es_2010[[\"ano\", \"estado\", \"populacao\"]])",
" ano estado populacao\nf 2010 ES 22000\n"
]
],
[
[
"### preencher a nova coluna",
"_____no_output_____"
]
],
[
[
"# preencher a nova coluna com dados incrementais de arange iniciando em zero e indo até o tamanho maximo da tabela\ndf3[\"dif\"] = np.arange(len(df3))\ndf3",
"_____no_output_____"
],
[
"# Cria uma serie independente somente com valores [\"a\", \"b\", \"c\"]\n# Faz um left join da serie com o dataframe. Somente onde os indices da nova serie correspondem com o dataframe será inserido\nvalores_dif = pd.Series(np.arange(3), index=[\"a\", \"b\", \"c\"])\n\n# Preenche os primeiros registros com os valores da serie. Acontece um pareamento dos índices do df com a serie\ndf3[\"dif\"] = valores_dif\n\n# completa os demais com o valor zero\ndf3.fillna(value=999, inplace=True)\ndf3",
"_____no_output_____"
]
],
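The fillna cell above relies on label alignment: assigning a Series to a DataFrame column matches rows by index, and labels missing from the Series become NaN. A minimal standalone sketch of that behaviour, using hypothetical data (not the notebook's `dados1`):

```python
import pandas as pd

# DataFrame with labeled rows; the new column will be built by alignment
df = pd.DataFrame({"populacao": [100, 200, 300]}, index=["a", "b", "c"])

# Series covering only part of the index: assignment aligns on labels
valores = pd.Series([10, 20], index=["a", "c"])
df["dif"] = valores

# Row "b" is absent from the Series, so it becomes NaN; fill it with a sentinel
df["dif"] = df["dif"].fillna(999)

print(df.loc["b", "dif"])  # 999.0
```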
[
[
"### Criar coluna com lógica booleana",
"_____no_output_____"
]
],
[
[
"# Criando uma nova coluna a partir de um resultado lógico de outra\ndf3[\"Regiao\"] = [\"SUDESTE-ES\" if uf == \"ES\" else \"SUDESTE-OUTRA\" for uf in df3[\"estado\"]]\ndf3",
"_____no_output_____"
],
[
"# dict de dicts de população dos estados\npop = {\"ES\": {2000: 22000, 2010: 24000},\n \"RJ\": {2000: 33000, 2010: 36000, 2020: 66000}}\n# type(pop)\n\n# pd.DataFrame(pop).T\ndf4 = pd.DataFrame(pop, index=[2000, 2010, 2020])\ndf4",
"_____no_output_____"
],
[
"# remove a ultima linha de ES que tem NaN e considera apenas as duas primeiras de RJ\npdata = {\"ES\": df4[\"ES\"][:-1], \n \"RJ\": df4[\"RJ\"][:2]}\n\n# Define um nome para a coluna de indices e um nome para as colunas da tabela\ndf4 = pd.DataFrame(pdata)\ndf4.index.name = \"ANO\"\ndf4.columns.name = \"UF\"\ndf4",
"_____no_output_____"
]
],
[
[
"## Objeto Index\nSão os objetos responsáveis por armazenar os rótulos dos eixos e outros metadados",
"_____no_output_____"
]
],
[
[
"obj_index = pd.Series(np.arange(3), index=[\"a\", \"b\", \"c\"])\n\n# armazena o index da serie\nindex = obj_index.index\n\n# pandas.core.indexes.base.Index\ntype(index)\nprint(index)\nprint(index[0])\n\n# um index é sempre imutavel: TypeError: Index does not support mutable operations\n# index[\"a\"] = \"x\"",
"Index(['a', 'b', 'c'], dtype='object')\na\n"
],
[
"# criando um obj index\nlabels = pd.Index(np.arange(3))\nlabels",
"_____no_output_____"
],
[
"# Usando um obj do tipo index para criar uma series\nobj_Series2 = pd.Series([1.4, 3.5, 5.2], index=labels)\nobj_Series2",
"_____no_output_____"
],
[
"print(f\"df original:\\n {df4}\\n\")\n\n# verificando a existencia de uma coluna\nprint(\"ES\" in df4.columns)\n\n# verificando a existencia de um indice\nprint(2010 in df4.index)",
"df original:\n UF ES RJ\nANO \n2000 22000.0 33000\n2010 24000.0 36000\n\nTrue\nTrue\n"
]
],
[
[
"## Reindexação de linhas e colunas\n\nRedefine o indice de um df já criado. \nSe os novos indices já existem serão mantidos senão serão introduzidos com valores faltantes NaN",
"_____no_output_____"
]
],
[
[
"obj_Series3 = pd.Series(np.random.randn(4), index=[\"a\", \"b\", \"c\", \"d\"])\nobj_Series3",
"_____no_output_____"
],
[
"# Os indices com os mesmos valores são mantidos, os novos recebem os valores NaN\nobj_Series3 = obj_Series3.reindex([\"a\", \"b\", \"c\", \"d\", \"x\", \"y\", \"z\"])\nobj_Series3",
"_____no_output_____"
],
[
"obj_Series4 = pd.Series([\"Azul\", \"Amarelo\", \"Verde\"], index=[0, 2, 4])\n\n# passando um range de 6 os valores ausentes no indice atual ficam com o valor NaN\nobj_Series4.reindex(range(6))",
"_____no_output_____"
],
[
"# usando o metodo ffill (forward fill) preenche os valores NaN com o valor anterior a ocorrencia do NaN\n# é como um \"preencher para frente\"\nobj_Series4.reindex(range(6), method=\"ffill\")",
"_____no_output_____"
],
[
"# reindexando linhas e colunas\ndf5 = pd.DataFrame(np.arange(9).reshape((3, 3)),\n index=[\"a\", \"c\", \"d\"],\n columns=[\"ES\",\"RJ\", \"SP\"]\n )\ndf5",
"_____no_output_____"
],
[
"# reindexar as linhas: inclui o indice \"b\"\n# index=<default>\n# columns=<lista de colunas>\ndf5 = df5.reindex(index=[\"a\", \"b\", \"c\", \"d\"], columns=[\"ES\",\"RJ\", \"SP\", \"MG\"])\ndf5",
"_____no_output_____"
],
[
"# se passar outra lista para indexação, irá ser feito um left join, os valores atuais serão mantidos e os novos serão inseridos com o valor NaN\nnovos_estados = [\"ES\",\"RJ\", \"SP\", \"MG\", \"RS\", \"PR\", \"BA\"]\n\ndf5 = df5.reindex(columns=novos_estados)\ndf5",
"_____no_output_____"
],
[
"# selecionando colunas a partir de uma lista\nfiltro_estados = [\"ES\",\"RJ\", \"SP\"]\ndados = df5.loc[:, filtro_estados]",
"_____no_output_____"
]
],
[
[
"### drop apagando linhas ou colunas",
"_____no_output_____"
]
],
[
[
"# atenção: isto cria uma referência (não uma cópia) do df anterior\ndados_para_apagar = dados\n\n# apagar linhas\n\n# inplace=True altera o próprio objeto; com False (padrão) retorna uma cópia sem modificar o original\ndados_para_apagar.drop([\"b\", \"d\"], inplace=True)\n\ndados_para_apagar",
"_____no_output_____"
],
[
"# apagar colunas axis=1\ndados_sem_RJ_SP = dados_para_apagar.drop([\"SP\", \"RJ\"], axis=1)\ndados_sem_RJ_SP",
"_____no_output_____"
],
[
"dados_para_apagar.drop([\"SP\", \"RJ\"], axis=\"columns\", inplace=True)\ndados_para_apagar",
"_____no_output_____"
],
[
"dados_para_apagar.drop([\"a\"], axis=\"index\", inplace=True)\ndados_para_apagar",
"_____no_output_____"
]
],
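The drop cells above switch between the two axes. A compact sketch of the axis semantics and of the copy-vs-inplace default, with hypothetical data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(9).reshape(3, 3),
                  index=["a", "b", "c"], columns=["x", "y", "z"])

# axis=0 / "index" drops rows; axis=1 / "columns" drops columns
sem_linha = df.drop(["b"], axis="index")
sem_coluna = df.drop(["z"], axis="columns")

# Without inplace=True, drop returns a new object; the original keeps its shape
print(df.shape, sem_linha.shape, sem_coluna.shape)  # (3, 3) (2, 3) (3, 2)
```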
[
[
"## Seleção e Filtragem\na seleção e filtragem de df é semelhante a arrays numpy porém com o uso de rotulos nomeados nos dois eixos.",
"_____no_output_____"
],
[
"### Series",
"_____no_output_____"
]
],
[
[
"# criando uma Series para testes\nobj_Series5 = pd.Series(np.arange(4.0), index=[\"a\", \"b\", \"c\", \"d\"])\nobj_Series5",
"_____no_output_____"
],
[
"# filtrando pelo rótulo da linha\nobj_Series5[\"d\"]",
"_____no_output_____"
],
[
"# filtrando pelo indice da linha: implicitamente o pandas cria um índice mesmo que exista um rótulo nomeado\nobj_Series5[3]",
"_____no_output_____"
],
[
"# slice do intervalo do índice implicito de 2 ate o 4 (não incluído)\nobj_Series5[2:4]",
"_____no_output_____"
],
[
"# diferentemente do acesso pelo índice implicito, o indice pelo rotulo sempre inclui o ultimo elemento\nobj_Series5[\"b\":\"d\"]",
"_____no_output_____"
],
[
"# elementos específicos. Lista de elementos dentro dos colchetes\nobj_Series5[[\"a\", \"c\", \"d\"]]",
"_____no_output_____"
],
[
"# filtrando usando uma condição: retorna os itens cujo valor é maior que 2\nobj_Series5[obj_Series5 > 2]",
"_____no_output_____"
],
[
"# atribuindo valores a uma faixa de indices\nobj_Series5[\"b\":\"d\"] = 4\nobj_Series5",
"_____no_output_____"
]
],
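The Series cells above illustrate a subtle difference worth isolating: positional slices exclude the end point, while label slices include it. A small self-contained check (same shape of data as the notebook's `obj_Series5`):

```python
import numpy as np
import pandas as pd

s = pd.Series(np.arange(4.0), index=["a", "b", "c", "d"])

pos = s[1:3]      # positional slice: end excluded -> "b", "c"
lab = s["b":"d"]  # label slice: end INCLUDED -> "b", "c", "d"

print(len(pos), len(lab))  # 2 3
```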
[
[
"### DataFrame",
"_____no_output_____"
]
],
[
[
"df6 = pd.DataFrame(np.random.randn(16).reshape(4, 4), \n index=[\"ES\", \"RJ\", \"SP\", \"MG\"],\n columns=[2000, 2010, 2020, 2030])\nprint(\"Dados Iniciais:\\n\")\ndf6 ",
"Dados Iniciais:\n\n"
],
[
"# uma coluna específica\ndf6[2020]",
"_____no_output_____"
],
[
"# subconjunto de colunas\ndf6[[2010, 2020]]",
"_____no_output_____"
],
[
"# quando informa um intervalo, o filtro inicia no eixo das linhas\ndf6[1:3]",
"_____no_output_____"
],
[
"# subconjunto de linhas e colunas (podemos misturar acesso pelo indice e pelo rotulo no mesmo comando)\ndf6[1:3][[2000, 2030]]",
"_____no_output_____"
],
[
"# acessando dados com filtro: somente dados onde no ano de 2030 os valores são maiores que Zero\ndf6[df6[2030] > 0][2030]",
"_____no_output_____"
],
[
"# atribuindo valores Zero onde o valor for negativo\ndf6[df6 < 0] = 0\ndf6",
"_____no_output_____"
]
],
[
[
"### usando loc e iloc",
"_____no_output_____"
]
],
[
[
"# selecionando linhas e colunas pelos rótulos: df6.loc[ [<linhas], [<colunas>] ] \ndf6.loc[\"ES\", [2000, 2020]] ",
"_____no_output_____"
],
[
"# iloc funciona da mesma forma. Mas opera com os índices implicitos que iniciam em zero\n# o mesmo comando acima acessando pelos índices\ndf6.iloc[0, [0,2]]",
"_____no_output_____"
],
[
"# loc e iloc acessam por intervalos. Não precisam dos colchetes internos para formar os ranges\ndf6.loc[\"RJ\":\"MG\", 2020:2030]",
"_____no_output_____"
]
],
[
[
"## Cálculos e Alinhamento de Dados",
"_____no_output_____"
]
],
[
[
"# dois DataFrames que compartilham algumas colunas e indices podem ser alinhados\n# mesmo com shapes diferentes\n\ndata_left = pd.DataFrame(np.random.randn(9).reshape(3,3),\n index=[\"ES\", \"RJ\", \"SP\"],\n columns=list(\"bcd\"))\ndata_left",
"_____no_output_____"
],
[
"# possui shape diferente\ndata_rigth = pd.DataFrame(np.random.randn(12).reshape(4, 3),\n index=[\"SP\", \"RJ\", \"MG\", \"RS\"],\n columns=list(\"bde\"))\ndata_rigth",
"_____no_output_____"
],
[
"# o que acontece ao somar os dois dfs com shapes diferentes\n# acontece o alinhamento pelos rotulos dos indices\n# se o calculo envolver algum valor NaN então retorna NaN\n\n# só retorna algum resultado onde houver um inner join. apenas as colunas b e d e as linhas RJ e SP existem nos dois\ndata_left + data_rigth",
"_____no_output_____"
],
[
"# preenche com Zero o valor NaN quando um dos elementos da soma for NaN, isso preserva o valor que existe em um dos dois lados\n# quando os dois valores da soma forem NaN retorna NaN\ndata_left.add(data_rigth, fill_value=0)",
"_____no_output_____"
],
[
"# reindexa o data_left usando apenas as colunas do data_rigth. É como um RIGHT JOIN\ndata_left.reindex(columns=data_rigth.columns, fill_value=0)",
"_____no_output_____"
]
],
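The `add(..., fill_value=0)` behaviour above is easiest to see with tiny deterministic data (the notebook uses random values). A sketch with two hypothetical Series:

```python
import pandas as pd

a = pd.Series([1.0, 2.0], index=["x", "y"])
b = pd.Series([10.0, 20.0], index=["y", "z"])

soma = a + b                        # only "y" overlaps; "x" and "z" become NaN
soma_fill = a.add(b, fill_value=0)  # the missing side is treated as 0

print(soma["y"], soma_fill["x"], soma_fill["z"])  # 12.0 1.0 20.0
```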
[
[
"## Operações entre DataFrames e Series",
"_____no_output_____"
]
],
[
[
"# criando um df de exemplo\ndf7 = pd.DataFrame(np.arange(12.).reshape((4, 3)),\n columns=list('bde'),\n index=['ES', 'RJ', 'SP', 'MG'])\n\n# criando uma serie a partir de uma linha (MG)\nserie6 = df7.iloc[3]\n\n# dataframe do exemplo\ndf7",
"_____no_output_____"
],
[
"# serie criada a partir da linha com o indice MG\nserie6",
"_____no_output_____"
],
[
"# faz o alinhamento do índice da serie com os índice das colunas em df e faz o calculo nos correspondentes para todas as linhas\ndf7 - serie6",
"_____no_output_____"
],
[
"# sub com axis=\"index\" alinharia a serie com os rótulos das LINHAS em vez das colunas\n# df7.sub(serie6, axis=\"index\")",
"_____no_output_____"
]
],
[
[
"## Apply: Mapping e Function\nMapeando funções em valores de series. Aplica a função em todos elementos da serie ou coluna",
"_____no_output_____"
]
],
[
[
"# criando um df de exemplo\ndf8 = pd.DataFrame(np.random.randn(12).reshape((4, 3)),\n columns=list('bde'),\n index=['ES', 'RJ', 'SP', 'MG'])\n\nprint(f\"df8 original:\\n {df8}\\n\")\n\n# transforma todos os numeros em absolutos removendo os valores negativos\nnp.abs(df8)",
"df8 original:\n b d e\nES -0.566446 0.036142 -2.074978\nRJ 0.247792 -0.897157 -0.136795\nSP 0.018289 0.755414 0.215269\nMG 0.841009 -1.445810 -1.401973\n\n"
],
[
"# funções que calculam a diferença entre o max e min de uma coluna\nfuncao_dif_max_min = lambda x: x.max() - x.min()\nfuncao_max = lambda x: x.max()\nfuncao_min = lambda x: x.min()\n\n# APPLY: por padrão a função é aplicada no sentido de agregação das linhas (vertical)\ndf8.apply(funcao_dif_max_min)",
"_____no_output_____"
],
[
"df8.apply(funcao_max)",
"_____no_output_____"
],
[
"df8.apply(funcao_min)",
"_____no_output_____"
],
[
"# aplicar o calculo no sentido das colunas (horizontal)\ndf8.apply(funcao_min, axis=\"columns\")",
"_____no_output_____"
],
[
"# aplicando uma função com multiplos retornos\n#def f(x):\n# return pd.Series([x.max(), x.min()], index=[\"max\", \"min\"])\n\nf_lambda = lambda x: pd.Series([x.max(), x.min()], index=[\"max\", \"min\"])\ndf8.apply(f_lambda)",
"_____no_output_____"
],
[
"# applymap: aplicando uma formatação em cada elemento da serie\n# diferentemente de apply, applymap não faz agregação ela aplica em todas as celulas\nformato = lambda x: f\"R$ {x: ,.2f}\"\ndf8.applymap(formato)",
"_____no_output_____"
],
[
"# map aplica a formatacao\ndf8[\"b\"].map(formato)",
"_____no_output_____"
]
],
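The apply/map cells above mix column-wise aggregation with element-wise formatting. A deterministic side-by-side sketch (hypothetical two-column frame; note the notebook's `applymap` is the element-wise variant, here shown via `Series.map`):

```python
import pandas as pd

df = pd.DataFrame({"b": [1.0, 4.0], "d": [2.0, 8.0]})

# apply: the function receives one whole column, producing one value per column
amplitude = df.apply(lambda col: col.max() - col.min())

# map: the function receives one cell at a time (element-wise)
formatado = df["b"].map(lambda x: f"{x:.1f}")

print(amplitude["b"], amplitude["d"], formatado.iloc[0])  # 3.0 6.0 1.0
```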
[
[
"## Ordenação e Rank",
"_____no_output_____"
]
],
[
[
"obj_Series5 = pd.Series(range(5), index=[\"x\", \"y\", \"z\", \"a\", \"b\"])\n\n# ordenando uma serie pelo index\nobj_Series5.sort_index(ascending=True)",
"_____no_output_____"
],
[
"# ordenando uma serie pelos VALORES\nobj_Series5.sort_values(ascending=False)",
"_____no_output_____"
],
[
"# ordenação de índices de dataframe\ndf9 = pd.DataFrame(np.arange(8).reshape((2, 4)),\n index=['LinhaA', 'LinhaB'],\n columns=['d', 'a', 'b', 'c'])\n\n# ordenando as linhas (padrao) pelos valores dos INDICES\ndf9.sort_index()\n\n# ordenando as colunas (axis=1) pelos valores dos ROTULOS de coluna\ndf9.sort_index(axis=1)",
"_____no_output_____"
],
[
"# controlando o sentido da ordenação do índice; o padrão é ascending=True, aqui usamos False\ndf9.sort_index(axis=0, ascending=False)",
"_____no_output_____"
],
[
"# controlando o sentido da ordenação dos INDICES. ascending=False representa DESC\ndf9.sort_index(axis=1, ascending=False)",
"_____no_output_____"
],
[
"# ordenando os VALORES de uma ou várias colunas de um um DataFrame\ndf9.sort_values(by=[\"c\", \"d\"], ascending=False)",
"_____no_output_____"
],
[
"# Ordenação dos VALORES no sentido horizontal (colunas)\ndf9.sort_values(by=[\"LinhaB\"], ascending=False, axis=1)",
"_____no_output_____"
]
],
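Multi-key sorting, as used in the `sort_values(by=["c", "d"], ...)` cell, can also mix directions per key by passing a list to `ascending`. A small deterministic example (hypothetical data):

```python
import pandas as pd

df = pd.DataFrame({"c": [2, 1, 2], "d": [5, 9, 3]})

# "c" ascending, then ties broken by "d" descending
ordenado = df.sort_values(by=["c", "d"], ascending=[True, False])

print(list(ordenado["c"]), list(ordenado["d"]))  # [1, 2, 2] [9, 5, 3]
```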
[
[
"## Agregação soma, max, min, mean, cumsum\nFunções de agregação de colunas numéricas. Permite analisar as estatísticas e as distribuições dos valores",
"_____no_output_____"
]
],
[
[
"# a soma acontece no sentido das linhas (vertical) para todas as colunas quando não é especificada\ndf9.sum()",
"_____no_output_____"
],
[
"# soma de apenas uma coluna\ndf9[\"b\"].sum()",
"_____no_output_____"
],
[
"# soma de colunas específicas\ndf9[[\"c\",\"a\"]].sum()",
"_____no_output_____"
],
[
"# soma no sentido horizontal, agrega as colunas para obter o total da linha\ndf9.sum(axis=\"columns\")",
"_____no_output_____"
],
[
"df9[:1].sum(axis=\"columns\")",
"_____no_output_____"
],
[
"# outras medidas de agregação\ndf9.mean(axis='columns', skipna=False)",
"_____no_output_____"
],
[
"# agregação do acumulado na horizontal (colunas)\ndf9.cumsum(axis=1)",
"_____no_output_____"
],
[
"# agregação do acumulado na vertical (linhas)\ndf9.cumsum(axis=0)",
"_____no_output_____"
],
[
"# Estatísticas descritivas das colunas numéricas\ndf9.describe()",
"_____no_output_____"
]
],
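The aggregation cells above all hinge on the `axis` argument: the default collapses rows to give one value per column, while `axis="columns"` collapses columns to give one value per row. A tiny verifiable sketch:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(6).reshape(2, 3), columns=["a", "b", "c"])
# df:
#    a  b  c
# 0  0  1  2
# 1  3  4  5

por_coluna = df.sum()               # down the rows -> one value per column
por_linha = df.sum(axis="columns")  # across the columns -> one value per row

print(list(por_coluna), list(por_linha))  # [3, 5, 7] [3, 12]
```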
[
[
"## Valores únicos value counts\nExibe a lista com valores distintos de uma serie ou de uma coluna de um dataframe",
"_____no_output_____"
]
],
[
[
"# Valores únicos de uma serie\nobj_Series6 = pd.Series(['c', 'a', 'd', 'a', 'a', 'b', 'b', 'c', 'c'])\nobj_Series6.unique()",
"_____no_output_____"
],
[
"# valores únicos e frequencia da ocorrência do valor ordenado do maior para o menor\nobj_Series6.value_counts()",
"_____no_output_____"
],
[
"# os elementos das categorias agregadas são o index e os totais agregados são os values\nobj_Series6.value_counts().index\nobj_Series6.value_counts().values\n",
"_____no_output_____"
],
[
"# totais de valores únicos e frequencia por colunas\n\ndf10 = pd.DataFrame({'Qu1': [1, 3, 4, 3, 4],\n 'Qu2': [2, 3, 1, 2, 3],\n 'Qu3': [1, 5, 2, 4, 4]})\n\ndf10[\"Qu1\"].value_counts()",
"_____no_output_____"
]
]
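Beyond the absolute counts shown in the `value_counts` cells, the same method can return relative frequencies via `normalize=True`. A short sketch with hypothetical data:

```python
import pandas as pd

s = pd.Series(["c", "a", "a", "b", "a"])

contagem = s.value_counts()                 # absolute frequencies, descending
proporcao = s.value_counts(normalize=True)  # relative frequencies

print(contagem["a"], round(proporcao["a"], 2))  # 3 0.6
```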
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
4ab18989ae665188fb719496fa40631f7651de39
| 2,863 |
ipynb
|
Jupyter Notebook
|
notebooks/notebook-template.ipynb
|
knu2xs/business-analyst-python-api-examples
|
c2f17bc87195872183ecbcd998b4bb0e9c295761
|
[
"Apache-2.0"
] | null | null | null |
notebooks/notebook-template.ipynb
|
knu2xs/business-analyst-python-api-examples
|
c2f17bc87195872183ecbcd998b4bb0e9c295761
|
[
"Apache-2.0"
] | null | null | null |
notebooks/notebook-template.ipynb
|
knu2xs/business-analyst-python-api-examples
|
c2f17bc87195872183ecbcd998b4bb0e9c295761
|
[
"Apache-2.0"
] | null | null | null | 28.068627 | 143 | 0.579811 |
[
[
[
"import importlib\nimport os\nfrom pathlib import Path\nimport sys\n\nfrom arcgis.features import GeoAccessor, GeoSeriesAccessor\nfrom arcgis.gis import GIS\nfrom dotenv import load_dotenv, find_dotenv\nimport pandas as pd\n\n# import arcpy if available\nif importlib.util.find_spec(\"arcpy\") is not None:\n import arcpy",
"_____no_output_____"
],
[
"# paths to common data locations - NOTE: to convert any path to a raw string, simply use str(path_instance)\ndir_prj = Path.cwd().parent\n\ndir_data = dir_prj/'data'\n\ndir_raw = dir_data/'raw'\ndir_ext = dir_data/'external'\ndir_int = dir_data/'interim'\ndir_out = dir_data/'processed'\n\ngdb_raw = dir_raw/'raw.gdb'\ngdb_int = dir_int/'interim.gdb'\ngdb_out = dir_out/'processed.gdb'\n\n# import the project package from the project package path - only necessary if you are not using a unique environemnt for this project\nsys.path.append(str(dir_prj/'src'))\nimport ba_samples\n\n# load the \"autoreload\" extension so that code can change, & always reload modules so that as you change code in src, it gets loaded\n%load_ext autoreload\n%autoreload 2\n\n# load environment variables from .env\nload_dotenv(find_dotenv())\n\n# create a GIS object instance; if you did not enter any information here, it defaults to anonymous access to ArcGIS Online\ngis = GIS(\n url=os.getenv('ESRI_GIS_URL'), \n username=os.getenv('ESRI_GIS_USERNAME'),\n password=None if len(os.getenv('ESRI_GIS_PASSWORD')) is 0 else os.getenv('ESRI_GIS_PASSWORD')\n)\n\ngis",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code"
]
] |
4ab1a270addbbc8a1ab5fb1df41afa1f992731b5
| 871,888 |
ipynb
|
Jupyter Notebook
|
ch15/ch15_part3.ipynb
|
edwardcodes/machine-learning-book
|
10215861cc540792c858840e115bbaf78a238f64
|
[
"MIT"
] | 655 |
2021-12-19T00:33:00.000Z
|
2022-03-31T16:30:36.000Z
|
ch15/ch15_part3.ipynb
|
rasbt/machine-learning-book
|
87aa9b357120b3109923928ff68398c76c9d1ec4
|
[
"MIT"
] | 41 |
2022-01-14T14:22:02.000Z
|
2022-03-31T16:26:09.000Z
|
ch15/ch15_part3.ipynb
|
edwardcodes/machine-learning-book
|
10215861cc540792c858840e115bbaf78a238f64
|
[
"MIT"
] | 180 |
2021-12-20T07:05:42.000Z
|
2022-03-31T07:38:20.000Z
| 1,051.73462 | 324,936 | 0.953647 |
[
[
[
"# Machine Learning with PyTorch and Scikit-Learn \n# -- Code Examples",
"_____no_output_____"
],
[
"## Package version checks",
"_____no_output_____"
],
[
"Add folder to path in order to load from the check_packages.py script:",
"_____no_output_____"
]
],
[
[
"import sys\nsys.path.insert(0, '..')",
"_____no_output_____"
]
],
[
[
"Check recommended package versions:",
"_____no_output_____"
]
],
[
[
"from python_environment_check import check_packages\n\n\nd = {\n 'torch': '1.8.0',\n}\ncheck_packages(d)",
"[OK] Your Python version is 3.9.7 | packaged by conda-forge | (default, Sep 29 2021, 19:24:02) \n[Clang 11.1.0 ]\n[OK] torch 1.10.2\n"
]
],
[
[
"Chapter 15: Modeling Sequential Data Using Recurrent Neural Networks (part 3/3)\n========\n\n",
"_____no_output_____"
],
[
"**Outline**\n\n- Implementing RNNs for sequence modeling in PyTorch\n - [Project two -- character-level language modeling in PyTorch](#Project-two----character-level-language-modeling-in-PyTorch)\n - [Preprocessing the dataset](#Preprocessing-the-dataset)\n - [Evaluation phase -- generating new text passages](#Evaluation-phase----generating-new-text-passages)\n- [Summary](#Summary)",
"_____no_output_____"
],
[
"Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).",
"_____no_output_____"
]
],
[
[
"from IPython.display import Image\n%matplotlib inline",
"_____no_output_____"
]
],
[
[
"## Project two: character-level language modeling in PyTorch\n",
"_____no_output_____"
]
],
[
[
"Image(filename='figures/15_11.png', width=500)",
"_____no_output_____"
]
],
[
[
"### Preprocessing the dataset",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\n## Reading and processing text\nwith open('1268-0.txt', 'r', encoding=\"utf8\") as fp:\n text=fp.read()\n \nstart_indx = text.find('THE MYSTERIOUS ISLAND')\nend_indx = text.find('End of the Project Gutenberg')\n\ntext = text[start_indx:end_indx]\nchar_set = set(text)\nprint('Total Length:', len(text))\nprint('Unique Characters:', len(char_set))",
"Total Length: 1112350\nUnique Characters: 80\n"
],
[
"Image(filename='figures/15_12.png', width=500)",
"_____no_output_____"
],
[
"chars_sorted = sorted(char_set)\nchar2int = {ch:i for i,ch in enumerate(chars_sorted)}\nchar_array = np.array(chars_sorted)\n\ntext_encoded = np.array(\n [char2int[ch] for ch in text],\n dtype=np.int32)\n\nprint('Text encoded shape: ', text_encoded.shape)\n\nprint(text[:15], ' == Encoding ==> ', text_encoded[:15])\nprint(text_encoded[15:21], ' == Reverse ==> ', ''.join(char_array[text_encoded[15:21]]))",
"Text encoded shape: (1112350,)\nTHE MYSTERIOUS == Encoding ==> [44 32 29 1 37 48 43 44 29 42 33 39 45 43 1]\n[33 43 36 25 38 28] == Reverse ==> ISLAND\n"
],
[
"for ex in text_encoded[:5]:\n print('{} -> {}'.format(ex, char_array[ex]))",
"44 -> T\n32 -> H\n29 -> E\n1 -> \n37 -> M\n"
],
[
"Image(filename='figures/15_13.png', width=500)",
"_____no_output_____"
],
[
"Image(filename='figures/15_14.png', width=500)",
"_____no_output_____"
],
[
"seq_length = 40\nchunk_size = seq_length + 1\n\ntext_chunks = [text_encoded[i:i+chunk_size] \n for i in range(len(text_encoded)-chunk_size+1)] \n\n## inspection:\nfor seq in text_chunks[:1]:\n input_seq = seq[:seq_length]\n target = seq[seq_length] \n print(input_seq, ' -> ', target)\n print(repr(''.join(char_array[input_seq])), \n ' -> ', repr(''.join(char_array[target])))",
"[44 32 29 1 37 48 43 44 29 42 33 39 45 43 1 33 43 36 25 38 28 1 6 6\n 6 0 0 0 0 0 40 67 64 53 70 52 54 53 1 51] -> 74\n'THE MYSTERIOUS ISLAND ***\\n\\n\\n\\n\\nProduced b' -> 'y'\n"
],
[
"import torch\nfrom torch.utils.data import Dataset\n\nclass TextDataset(Dataset):\n def __init__(self, text_chunks):\n self.text_chunks = text_chunks\n\n def __len__(self):\n return len(self.text_chunks)\n \n def __getitem__(self, idx):\n text_chunk = self.text_chunks[idx]\n return text_chunk[:-1].long(), text_chunk[1:].long()\n \nseq_dataset = TextDataset(torch.tensor(text_chunks))",
"/var/folders/jg/tpqyh1fd5js5wsr1d138k3n40000gn/T/ipykernel_44396/2527503007.py:15: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at /Users/runner/miniforge3/conda-bld/pytorch-recipe_1645462109041/work/torch/csrc/utils/tensor_new.cpp:201.)\n seq_dataset = TextDataset(torch.tensor(text_chunks))\n"
],
[
"for i, (seq, target) in enumerate(seq_dataset):\n print(' Input (x):', repr(''.join(char_array[seq])))\n print('Target (y):', repr(''.join(char_array[target])))\n print()\n if i == 1:\n break\n ",
" Input (x): 'THE MYSTERIOUS ISLAND ***\\n\\n\\n\\n\\nProduced b'\nTarget (y): 'HE MYSTERIOUS ISLAND ***\\n\\n\\n\\n\\nProduced by'\n\n Input (x): 'HE MYSTERIOUS ISLAND ***\\n\\n\\n\\n\\nProduced by'\nTarget (y): 'E MYSTERIOUS ISLAND ***\\n\\n\\n\\n\\nProduced by '\n\n"
],
[
"device = torch.device(\"cuda:0\")\n# device = 'cpu'",
"_____no_output_____"
],
[
"from torch.utils.data import DataLoader\n \nbatch_size = 64\n\ntorch.manual_seed(1)\nseq_dl = DataLoader(seq_dataset, batch_size=batch_size, shuffle=True, drop_last=True)\n",
"_____no_output_____"
]
],
[
[
"### Building a character-level RNN model",
"_____no_output_____"
]
],
[
[
"import torch.nn as nn\n\nclass RNN(nn.Module):\n def __init__(self, vocab_size, embed_dim, rnn_hidden_size):\n super().__init__()\n self.embedding = nn.Embedding(vocab_size, embed_dim) \n self.rnn_hidden_size = rnn_hidden_size\n self.rnn = nn.LSTM(embed_dim, rnn_hidden_size, \n batch_first=True)\n self.fc = nn.Linear(rnn_hidden_size, vocab_size)\n\n def forward(self, x, hidden, cell):\n out = self.embedding(x).unsqueeze(1)\n out, (hidden, cell) = self.rnn(out, (hidden, cell))\n out = self.fc(out).reshape(out.size(0), -1)\n return out, hidden, cell\n\n def init_hidden(self, batch_size):\n hidden = torch.zeros(1, batch_size, self.rnn_hidden_size)\n cell = torch.zeros(1, batch_size, self.rnn_hidden_size)\n return hidden.to(device), cell.to(device)\n \nvocab_size = len(char_array)\nembed_dim = 256\nrnn_hidden_size = 512\n\ntorch.manual_seed(1)\nmodel = RNN(vocab_size, embed_dim, rnn_hidden_size) \nmodel = model.to(device)\nmodel",
"_____no_output_____"
],
[
"loss_fn = nn.CrossEntropyLoss()\noptimizer = torch.optim.Adam(model.parameters(), lr=0.005)\n\nnum_epochs = 10000 \n\ntorch.manual_seed(1)\n\nfor epoch in range(num_epochs):\n hidden, cell = model.init_hidden(batch_size)\n seq_batch, target_batch = next(iter(seq_dl))\n seq_batch = seq_batch.to(device)\n target_batch = target_batch.to(device)\n optimizer.zero_grad()\n loss = 0\n for c in range(seq_length):\n pred, hidden, cell = model(seq_batch[:, c], hidden, cell) \n loss += loss_fn(pred, target_batch[:, c])\n loss.backward()\n optimizer.step()\n loss = loss.item()/seq_length\n if epoch % 500 == 0:\n print(f'Epoch {epoch} loss: {loss:.4f}')\n ",
"Epoch 0 loss: 4.3719\nEpoch 500 loss: 1.3804\nEpoch 1000 loss: 1.2956\nEpoch 1500 loss: 1.2816\nEpoch 2000 loss: 1.1968\nEpoch 2500 loss: 1.2456\nEpoch 3000 loss: 1.1763\nEpoch 3500 loss: 1.1868\nEpoch 4000 loss: 1.1476\nEpoch 4500 loss: 1.2118\nEpoch 5000 loss: 1.2072\nEpoch 5500 loss: 1.1243\nEpoch 6000 loss: 1.1637\nEpoch 6500 loss: 1.1513\nEpoch 7000 loss: 1.1196\nEpoch 7500 loss: 1.1439\nEpoch 8000 loss: 1.1536\nEpoch 8500 loss: 1.1263\nEpoch 9000 loss: 1.1597\nEpoch 9500 loss: 1.1048\n"
]
],
[
[
"### Evaluation phase: generating new text passages",
"_____no_output_____"
]
],
[
[
"from torch.distributions.categorical import Categorical\n\ntorch.manual_seed(1)\n\nlogits = torch.tensor([[1.0, 1.0, 1.0]])\n\nprint('Probabilities:', nn.functional.softmax(logits, dim=1).numpy()[0])\n\nm = Categorical(logits=logits)\nsamples = m.sample((10,))\n \nprint(samples.numpy())",
"Probabilities: [0.33333334 0.33333334 0.33333334]\n[[0]\n [0]\n [0]\n [0]\n [1]\n [0]\n [1]\n [2]\n [1]\n [1]]\n"
],
[
"torch.manual_seed(1)\n\nlogits = torch.tensor([[1.0, 1.0, 3.0]])\n\nprint('Probabilities:', nn.functional.softmax(logits, dim=1).numpy()[0])\n\nm = Categorical(logits=logits)\nsamples = m.sample((10,))\n \nprint(samples.numpy())",
"Probabilities: [0.10650698 0.10650698 0.78698605]\n[[0]\n [2]\n [2]\n [1]\n [2]\n [1]\n [2]\n [2]\n [2]\n [2]]\n"
],
[
"def sample(model, starting_str, \n len_generated_text=500, \n scale_factor=1.0):\n\n encoded_input = torch.tensor([char2int[s] for s in starting_str])\n encoded_input = torch.reshape(encoded_input, (1, -1))\n\n generated_str = starting_str\n\n model.eval()\n hidden, cell = model.init_hidden(1)\n hidden = hidden.to('cpu')\n cell = cell.to('cpu')\n for c in range(len(starting_str)-1):\n _, hidden, cell = model(encoded_input[:, c].view(1), hidden, cell) \n \n last_char = encoded_input[:, -1]\n for i in range(len_generated_text):\n logits, hidden, cell = model(last_char.view(1), hidden, cell) \n logits = torch.squeeze(logits, 0)\n scaled_logits = logits * scale_factor\n m = Categorical(logits=scaled_logits)\n last_char = m.sample()\n generated_str += str(char_array[last_char])\n \n return generated_str\n\ntorch.manual_seed(1)\nmodel.to('cpu')\nprint(sample(model, starting_str='The island'))",
"The island had been neged to reward from them with denies he giving these gigant of some taking two years from the question has been employed without\nconsequences in his depth. As\nto the notice beate can be vapor when there we must forgot.\n\n“Shat’s likely,” said Pencroft.\n\n“What a\nregion of Neption!” Cyrus Harding would have\nbeen pushed their return to the shored him, to do in the dockyar and formation of the settlers’ animal which would be better dead of freshest.\n\nCyrus, was well presented by means of t\n"
]
],
[
[
"* **Predictability vs. randomness**",
"_____no_output_____"
]
],
[
[
"logits = torch.tensor([[1.0, 1.0, 3.0]])\n\nprint('Probabilities before scaling: ', nn.functional.softmax(logits, dim=1).numpy()[0])\n\nprint('Probabilities after scaling with 0.5:', nn.functional.softmax(0.5*logits, dim=1).numpy()[0])\n\nprint('Probabilities after scaling with 0.1:', nn.functional.softmax(0.1*logits, dim=1).numpy()[0])\n",
"Probabilities before scaling: [0.10650698 0.10650698 0.78698605]\nProbabilities after scaling with 0.5: [0.21194156 0.21194156 0.57611686]\nProbabilities after scaling with 0.1: [0.3104238 0.3104238 0.37915248]\n"
],
[
"torch.manual_seed(1)\nprint(sample(model, starting_str='The island', \n scale_factor=2.0))",
"The island on the shore, and thought the island was finished, and the stranger had constituted to this precipitation of the corral, which he suffered in the depths of the cone of a ship was only a long time was a sort of the passage there to the sea, he was ready to\ndo anything to the south and to the shore, the settlers did not think the convicts were to be on the corral, and the engineer then to hope of the settlers, and shelter at the corral, the colonists had not happened to the settlers had only to t\n"
],
[
"torch.manual_seed(1)\nprint(sample(model, starting_str='The island', \n scale_factor=0.5))",
"The island deep ising side; his”not\nawoke-prtaugh? Gideon Spilett too?,”\n\nHelofe, and ihos eyes; “re” helm usn’t produtiuso,\nat Tajow him, somucn!”\n\nAbove Verishy, silene;\note fat wire, notwiths, and\nbank pludes\nattackleep, plaions fraitr byll,\nyou ironaliers thefee\ngroat, however\nparcht mad. Would wideney twontashmonest.\n\nA fugitors plan ascererver; Thousand-shwark orhrankheaded abope had\nlet us alwayberrusible, withwh.\nBlacaneara,moss, Net susnid! 6\n\nI- Poingin’s new kay was “Muldhidge?”\n-!\n\n“It phincip\n"
]
],
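The `scale_factor` experiments above (predictability vs. randomness) amount to temperature scaling of the logits before the softmax. That effect can be reproduced with plain NumPy, independently of the trained model — a sketch, not the book's code:

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax
    e = np.exp(logits - logits.max())
    return e / e.sum()

logits = np.array([1.0, 1.0, 3.0])

sharp = softmax(2.0 * logits)  # scale > 1: distribution more peaked (predictable)
flat = softmax(0.5 * logits)   # scale < 1: distribution closer to uniform (random)

print(sharp.max() > softmax(logits).max() > flat.max())  # True
```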
[
[
"\n...\n\n\n# Summary\n\n...\n",
"_____no_output_____"
],
[
"\n\nReaders may ignore the next cell.\n",
"_____no_output_____"
]
],
[
[
"! python ../.convert_notebook_to_script.py --input ch15_part3.ipynb --output ch15_part3.py",
"[NbConvertApp] Converting notebook ch15_part3.ipynb to script\n[NbConvertApp] Writing 7922 bytes to ch15_part3.py\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
4ab1b17a686297343277bd00d9c1d2de284db221
| 416,001 |
ipynb
|
Jupyter Notebook
|
input/plot_galah_spectra.ipynb
|
svenbuder/GALAH_DR3
|
33e86c90c2f117c1a21d56d60d50f09335c76f6b
|
[
"MIT"
] | 10 |
2020-03-04T07:11:54.000Z
|
2021-09-28T15:55:04.000Z
|
input/plot_galah_spectra.ipynb
|
Shalmalee15/GALAH_DR3
|
33e86c90c2f117c1a21d56d60d50f09335c76f6b
|
[
"MIT"
] | 1 |
2020-11-11T21:56:38.000Z
|
2020-11-11T22:22:16.000Z
|
input/plot_galah_spectra.ipynb
|
Shalmalee15/GALAH_DR3
|
33e86c90c2f117c1a21d56d60d50f09335c76f6b
|
[
"MIT"
] | 5 |
2020-11-06T05:42:19.000Z
|
2021-10-14T02:16:47.000Z
| 1,055.840102 | 401,072 | 0.951527 |
[
[
[
"# Script to plot GALAH spectra, but also save them into Python dictionaries\n\n## Author: Sven Buder (SB, MPIA) buder at mpia dot de\n\nThis script is intended to plot the spectra of the 4 arms of the HERMES spectrograph.\n\nHistory:\n 181012 - SB created",
"_____no_output_____"
]
],
[
[
"try:\n %matplotlib inline\n %config InlineBackend.figure_format='retina'\nexcept:\n pass\n\nimport numpy as np\nimport os\nimport astropy.io.fits as pyfits\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"### Adjust script",
"_____no_output_____"
],
[
"### Definitions which will be executed in the last cell",
"_____no_output_____"
]
],
[
[
"def read_spectra(sobject_id, iraf_dr = 'dr5.3', SPECTRA = 'SPECTRA'):\n \"\"\"\n This function reads in the 4 individual spectra from the subdirectory working_directory/SPECTRA\n \n INPUT:\n sobject_id = identifier of spectra by date (6digits), plate (4digits), combination (2digits) and pivot number (3digits)\n iraf_dr = reduction which shall be used, current version: dr5.3\n SPECTRA = string to indicate sub directory where spectra are saved\n \n OUTPUT\n spectrum = dictionary\n \"\"\"\n \n spectrum = dict(sobject_id = sobject_id)\n \n # Assess if spectrum is stacked\n if str(sobject_id)[11] == '1':\n # Single observations are saved in 'com'\n com='com'\n else:\n # Stacked observations are saved in 'com2'\n com='com' \n \n # Iterate through all 4 CCDs\n for each_ccd in [1,2,3,4]:\n \n try:\n fits = pyfits.open(SPECTRA+'/'+iraf_dr+'/'+str(sobject_id)[0:6]+'/standard/'+com+'/'+str(sobject_id)+str(each_ccd)+'.fits')\n\n # Extension 0: Reduced spectrum\n # Extension 1: Relative error spectrum\n # Extension 4: Normalised spectrum, NB: cut for CCD4\n\n # Extract wavelength grid for the reduced spectrum\n start_wavelength = fits[0].header[\"CRVAL1\"]\n dispersion = fits[0].header[\"CDELT1\"]\n nr_pixels = fits[0].header[\"NAXIS1\"]\n reference_pixel = fits[0].header[\"CRPIX1\"]\n if reference_pixel == 0:\n reference_pixel = 1\n spectrum['wave_red_'+str(each_ccd)] = np.array(map(lambda x:((x-reference_pixel+1)*dispersion+start_wavelength),range(0,nr_pixels)))\n\n try:\n # Extract wavelength grid for the normalised spectrum\n start_wavelength = fits[4].header[\"CRVAL1\"]\n dispersion = fits[4].header[\"CDELT1\"]\n nr_pixels = fits[4].header[\"NAXIS1\"]\n reference_pixel = fits[4].header[\"CRPIX1\"]\n if reference_pixel == 0:\n reference_pixel=1\n spectrum['wave_norm_'+str(each_ccd)] = np.array(map(lambda x:((x-reference_pixel+1)*dispersion+start_wavelength),range(0,nr_pixels)))\n except:\n spectrum['wave_norm_'+str(each_ccd)] = spectrum['wave_red_'+str(each_ccd)]\n\n # 
Extract flux and flux error of reduced spectrum\n spectrum['sob_red_'+str(each_ccd)] = np.array(fits[0].data)\n spectrum['uob_red_'+str(each_ccd)] = np.array(fits[0].data * fits[1].data)\n\n # Extract flux and flux error of reduced spectrum\n try:\n spectrum['sob_norm_'+str(each_ccd)] = np.array(fits[4].data)\n except:\n spectrum['sob_norm_'+str(each_ccd)] = np.ones(len(fits[0].data))\n\n if each_ccd != 4:\n try:\n spectrum['uob_norm_'+str(each_ccd)] = np.array(fits[4].data * fits[1].data)\n except:\n spectrum['uob_norm_'+str(each_ccd)] = np.array(fits[1].data)\n else:\n # for normalised error of CCD4, only used appropriate parts of error spectrum\n try:\n spectrum['uob_norm_4'] = np.array(fits[4].data * (fits[1].data)[-len(spectrum['sob_norm_4']):])\n except:\n spectrum['uob_norm_4'] = np.zeros(len(fits[0].data))\n fits.close()\n except:\n \n spectrum['wave_norm_'+str(each_ccd)] = np.arange(7693.50,7875.55,0.074)\n spectrum['wave_red_'+str(each_ccd)] = np.arange(7693.50,7875.55,0.074)\n spectrum['sob_norm_'+str(each_ccd)] = np.ones(len(spectrum['wave_red_'+str(each_ccd)]))\n spectrum['sob_red_'+str(each_ccd)] = np.ones(len(spectrum['wave_red_'+str(each_ccd)]))\n spectrum['uob_norm_'+str(each_ccd)] = np.zeros(len(spectrum['wave_red_'+str(each_ccd)]))\n spectrum['uob_red_'+str(each_ccd)] = np.zeros(len(spectrum['wave_red_'+str(each_ccd)]))\n\n return spectrum",
"_____no_output_____"
],
[
"def interpolate_spectrum_onto_cannon_wavelength(spectrum):\n \"\"\"\n This function interpolates the spectrum \n onto the wavelength grid of The Cannon as used for GALAH DR2\n \n INPUT:\n spectrum dictionary\n \n OUTPUT:\n interpolated spectrum dictionary\n \n \"\"\"\n \n # Initialise interpolated spectrum from input spectrum\n interpolated_spectrum = dict()\n for each_key in spectrum.keys():\n interpolated_spectrum[each_key] = spectrum[each_key]\n \n # The Cannon wavelength grid as used for GALAH DR2\n wave_cannon = dict()\n wave_cannon['ccd1'] = np.arange(4715.94,4896.00,0.046) # ab lines 4716.3 - 4892.3\n wave_cannon['ccd2'] = np.arange(5650.06,5868.25,0.055) # ab lines 5646.0 - 5867.8\n wave_cannon['ccd3'] = np.arange(6480.52,6733.92,0.064) # ab lines 6481.6 - 6733.4\n wave_cannon['ccd4'] = np.arange(7693.50,7875.55,0.074) # ab lines 7691.2 - 7838.5\n \n for each_ccd in [1, 2, 3, 4]:\n \n # exchange wavelength\n interpolated_spectrum['wave_red_'+str(each_ccd)] = wave_cannon['ccd'+str(each_ccd)]\n interpolated_spectrum['wave_norm_'+str(each_ccd)] = wave_cannon['ccd'+str(each_ccd)]\n \n # interpolate and exchange flux\n interpolated_spectrum['sob_red_'+str(each_ccd)] = np.interp(\n x=wave_cannon['ccd'+str(each_ccd)],\n xp=spectrum['wave_red_'+str(each_ccd)],\n fp=spectrum['sob_red_'+str(each_ccd)],\n )\n interpolated_spectrum['sob_norm_'+str(each_ccd)] = np.interp(\n wave_cannon['ccd'+str(each_ccd)],\n spectrum['wave_norm_'+str(each_ccd)],\n spectrum['sob_norm_'+str(each_ccd)],\n )\n\n # interpolate and exchange flux error\n interpolated_spectrum['uob_red_'+str(each_ccd)] = np.interp(\n wave_cannon['ccd'+str(each_ccd)],\n spectrum['wave_red_'+str(each_ccd)],\n spectrum['uob_red_'+str(each_ccd)],\n )\n interpolated_spectrum['uob_norm_'+str(each_ccd)] = np.interp(\n wave_cannon['ccd'+str(each_ccd)],\n spectrum['wave_norm_'+str(each_ccd)],\n spectrum['uob_norm_'+str(each_ccd)],\n )\n\n return interpolated_spectrum",
"_____no_output_____"
],
[
"def plot_spectrum(spectrum, normalisation = True, lines_to_indicate = None, save_as_png = False):\n \"\"\"\n This function plots the spectrum in 4 subplots for each arm of the HERMES spectrograph\n \n INPUT:\n spectrum = dictionary created by read_spectra()\n normalisation = True or False (either normalised or un-normalised spectra are plotted)\n save_as_png = Save figure as png if True\n \n OUTPUT:\n Plot that spectrum!\n \n \"\"\"\n \n f, axes = plt.subplots(4, 1, figsize = (15,10))\n \n kwargs_sob = dict(c = 'k', label='Flux', rasterized=True)\n kwargs_error_spectrum = dict(color = 'grey', label='Flux error', rasterized=True)\n\n # Adjust keyword used for dictionaries and plot labels\n if normalisation==True:\n red_norm = 'norm'\n else:\n red_norm = 'red'\n \n for each_ccd in [1, 2, 3, 4]:\n\n axes[each_ccd-1].fill_between(\n spectrum['wave_'+red_norm+'_'+str(each_ccd)],\n spectrum['sob_'+red_norm+'_'+str(each_ccd)] - spectrum['uob_'+red_norm+'_'+str(each_ccd)],\n spectrum['sob_'+red_norm+'_'+str(each_ccd)] + spectrum['uob_'+red_norm+'_'+str(each_ccd)],\n **kwargs_error_spectrum\n )\n \n # Overplot observed spectrum a bit thicker\n axes[each_ccd-1].plot(\n spectrum['wave_'+red_norm+'_'+str(each_ccd)],\n spectrum['sob_'+red_norm+'_'+str(each_ccd)],\n **kwargs_sob\n )\n \n # Plot important lines if committed\n if lines_to_indicate != None:\n for each_line in lines_to_indicate:\n if (float(each_line[0]) >= spectrum['wave_'+red_norm+'_'+str(each_ccd)][0]) & (float(each_line[0]) <= spectrum['wave_'+red_norm+'_'+str(each_ccd)][-1]):\n axes[each_ccd-1].axvline(float(each_line[0]), color = each_line[2], ls='dashed')\n if red_norm=='norm':\n axes[each_ccd-1].text(float(each_line[0]), 1.25, each_line[1], color = each_line[2], ha='left', va='top')\n \n # Plot layout\n if red_norm == 'norm':\n axes[each_ccd-1].set_ylim(-0.1,1.3)\n else:\n axes[each_ccd-1].set_ylim(0,1.3*np.median(spectrum['sob_'+red_norm+'_'+str(each_ccd)]))\n\n axes[each_ccd-1].set_xlabel(r'Wavelength CCD 
'+str(each_ccd)+' [$\\mathrm{\\AA}$]')\n axes[each_ccd-1].set_ylabel(r'Flux ('+red_norm+') [a.u.]')\n if each_ccd == 1:\n axes[each_ccd-1].legend(loc='lower left')\n plt.tight_layout()\n \n if save_as_png == True:\n plt.savefig(str(spectrum['sobject_id'])+'_'+red_norm+'.png', dpi=200)\n\n return f",
"_____no_output_____"
]
],
[
[
"### Execute and have fun looking at spectra",
"_____no_output_____"
]
],
[
[
"# Adjust directory you want to work in\nworking_directory = '/Users/buder/trunk/GALAH/'\nworking_directory = '/avatar/buder/trunk/GALAH/'\n\nos.chdir(working_directory)\n\n# You can activate a number of lines that will be plotted in the spectra\nimportant_lines = np.array([\n [4861.35, 'H' , 'red'],\n [6562.79, 'H' , 'red'],\n [6708. , 'Li', 'orange'],\n ])\n\n# Last but not least, declare which sobject_ids shall be plotted\n",
"_____no_output_____"
],
[
"sobject_ids_to_plot = [\n 190211002201088\n ]\nfor each_sobject_id in sobject_ids_to_plot:\n # read in spectrum\n spectrum = read_spectra(each_sobject_id) \n # interpolate spectrum onto The Cannon wavelength grid\n interpolated_spectrum = interpolate_spectrum_onto_cannon_wavelength(spectrum)\n # plot input spectrum\n plot_spectrum(spectrum,\n normalisation = False,\n lines_to_indicate = None,\n save_as_png = True\n )\n# # plot interpolated spectrum\n# plot_spectrum(\n# interpolated_spectrum,\n# normalisation = True,\n# lines_to_indicate = None,\n# save_as_png = True\n# )",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
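The `read_spectra` cell in the record above rebuilds each CCD's wavelength axis from the FITS WCS keywords `CRVAL1` (reference value), `CDELT1` (dispersion), `CRPIX1` (reference pixel), and `NAXIS1` (pixel count). A vectorised sketch of that reconstruction follows; the header values are invented for illustration, not taken from a real GALAH file:

```python
import numpy as np

def wavelength_grid(header):
    """Linear wavelength solution: wave[i] = (i + 1 - CRPIX1) * CDELT1 + CRVAL1."""
    reference_pixel = header["CRPIX1"] or 1  # treat CRPIX1 == 0 as 1, as in the notebook
    pixels = np.arange(header["NAXIS1"])
    return (pixels + 1 - reference_pixel) * header["CDELT1"] + header["CRVAL1"]

# Hypothetical header for a blue-arm spectrum
header = {"CRVAL1": 4715.94, "CDELT1": 0.046, "CRPIX1": 0, "NAXIS1": 4096}
wave = wavelength_grid(header)
print(wave[0], wave[-1])
```

Building the grid with `np.arange` in one step also avoids the Python 2 style `np.array(map(...))` idiom used in the notebook, which does not produce a numeric array under Python 3.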
4ab1ce506766b88c175d469c4bf6f4d1107c1f7b
| 20,061 |
ipynb
|
Jupyter Notebook
|
racial_disc/EDA_racial_discrimination/.ipynb_checkpoints/sliderule_dsi_inferential_statistics_exercise_1_solutions-checkpoint.ipynb
|
cathyxinxyz/mini_projects
|
feafe2f9989e7d47d4f827b2132463c30585b06f
|
[
"MIT"
] | null | null | null |
racial_disc/EDA_racial_discrimination/.ipynb_checkpoints/sliderule_dsi_inferential_statistics_exercise_1_solutions-checkpoint.ipynb
|
cathyxinxyz/mini_projects
|
feafe2f9989e7d47d4f827b2132463c30585b06f
|
[
"MIT"
] | null | null | null |
racial_disc/EDA_racial_discrimination/.ipynb_checkpoints/sliderule_dsi_inferential_statistics_exercise_1_solutions-checkpoint.ipynb
|
cathyxinxyz/mini_projects
|
feafe2f9989e7d47d4f827b2132463c30585b06f
|
[
"MIT"
] | null | null | null | 35.443463 | 574 | 0.605204 |
[
[
[
"# What is the True Normal Human Body Temperature? \n\n#### Background\n\nThe mean normal body temperature was held to be 37$^{\\circ}$C or 98.6$^{\\circ}$F for more than 120 years since it was first conceptualized and reported by Carl Wunderlich in a famous 1868 book. But, is this value statistically correct?",
"_____no_output_____"
],
[
"<div class=\"span5 alert alert-info\">\n<h3>Exercises</h3>\n\n<p>In this exercise, you will analyze a dataset of human body temperatures and employ the concepts of hypothesis testing, confidence intervals, and statistical significance.</p>\n\n<p>Answer the following questions <b>in this notebook below and submit to your Github account</b>.</p> \n\n<ol>\n<li> Is the distribution of body temperatures normal? \n <ul>\n <li> Although this is not a requirement for the Central Limit Theorem to hold (read the introduction on Wikipedia's page about the CLT carefully: https://en.wikipedia.org/wiki/Central_limit_theorem), it gives us some peace of mind that the population may also be normally distributed if we assume that this sample is representative of the population.\n <li> Think about the way you're going to check for the normality of the distribution. Graphical methods are usually used first, but there are also other ways: https://en.wikipedia.org/wiki/Normality_test\n </ul>\n<li> Is the sample size large? Are the observations independent?\n <ul>\n <li> Remember that this is a condition for the Central Limit Theorem, and hence the statistical tests we are using, to apply.\n </ul>\n<li> Is the true population mean really 98.6 degrees F?\n <ul>\n <li> First, try a bootstrap hypothesis test.\n <li> Now, let's try frequentist statistical testing. Would you use a one-sample or two-sample test? Why?\n <li> In this situation, is it appropriate to use the $t$ or $z$ statistic? \n <li> Now try using the other test. How is the result be different? Why?\n </ul>\n<li> Draw a small sample of size 10 from the data and repeat both frequentist tests. \n <ul>\n <li> Which one is the correct one to use? \n <li> What do you notice? 
What does this tell you about the difference in application of the $t$ and $z$ statistic?\n </ul>\n<li> At what temperature should we consider someone's temperature to be \"abnormal\"?\n <ul>\n <li> As in the previous example, try calculating everything using the boostrap approach, as well as the frequentist approach.\n <li> Start by computing the margin of error and confidence interval. When calculating the confidence interval, keep in mind that you should use the appropriate formula for one draw, and not N draws.\n </ul>\n<li> Is there a significant difference between males and females in normal temperature?\n <ul>\n <li> What testing approach did you use and why?\n <li> Write a story with your conclusion in the context of the original problem.\n </ul>\n</ol>\n\nYou can include written notes in notebook cells using Markdown: \n - In the control panel at the top, choose Cell > Cell Type > Markdown\n - Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet\n\n#### Resources\n\n+ Information and data sources: http://www.amstat.org/publications/jse/datasets/normtemp.txt, http://www.amstat.org/publications/jse/jse_data_archive.htm\n+ Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet\n\n****",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ndf = pd.read_csv('data/human_body_temperature.csv')",
"_____no_output_____"
]
],
[
[
"<div class=\"span5 alert alert-success\">\n<h2>SOLUTION: Is the distribution of body temperatures normal?</h2>\n</div>",
"_____no_output_____"
]
],
[
[
"# First, a histogram\n%matplotlib inline\nplt.hist(df['temperature'])\nplt.xlabel('Temperature')\nplt.ylabel('Frequency')\nplt.title('Histogram of Body Temperature')\nplt.ylim(0, 40) # Add some buffer space at the top so the bar doesn't get cut off.",
"_____no_output_____"
],
[
"# Next, a quantile plot.\nimport statsmodels.api as sm\nmean = np.mean(df['temperature'])\nsd = np.std(df['temperature'])\nz = (df['temperature'] - mean) / sd\nsm.qqplot(z, line='45')",
"_____no_output_____"
],
[
"# Finally, a normal distribution test. Not recommended!! Use only when you're not sure.\nimport scipy.stats as stats\nstats.mstats.normaltest(df['temperature'])",
"_____no_output_____"
]
],
[
[
"<div class=\"span5 alert alert-success\">\n<h4>SOLUTION</h4>\n\n<p>The histogram looks *very roughly* normally distributed. There is an implied bell shape, though there are some values above the mode that occur much less frequently than we would expect under a normal distribution. The shape is not so deviant as to call it some other distribution. </p>\n\n<p>A quantile plot can help. The quantile plot computes percentiles for our data and also the percentiles for a normal distribution via sampling (mean 0, sd 1). If the quantiles/percentiles for both distributions match, we expect to see a more or less straight line of data points. Note that the quantile plot does pretty much follow a straight line, so this helps us conclude that the distribution is likely normal. Note that there are three outliers on the \"high\" end and two on the \"low\" end that cause deviations in the tail, but this is pretty typical.</p>\n\n<p>Suppose we really aren't sure, or the plots tell us two different conclusions. We could confirm with a statistical significance test, though this should not be your first method of attack. The p-value from the normality test is 0.25 which is significantly above the usual cutoff of 0.05. The null hypothesis is that the distribution is normal. Since we fail to reject the null hypothesis, we conclude that the distribution is probably normal.</p>\n</div>",
"_____no_output_____"
],
[
"<div class=\"span5 alert alert-success\">\n<h2>SOLUTION: Is the sample size large? Are the observations independent?</h2>\n</div>",
"_____no_output_____"
]
],
[
[
"n = len(df['temperature'])\nn",
"_____no_output_____"
]
],
[
[
"<div class=\"span5 alert alert-success\">\n<p>The sample size is 130. Literature typically suggests a lower limit of 30 observations in a sample for CLT to hold. In terms of CLT, the sample is large enough.</p>\n\n<p>We must assume that the observations are independent. One person's body temperature should not have any effect on another person's body temperature, so under common sense conditions, the observations are independent. Note that this condition may potentially be violated if the researcher lacked common sense and performed this study by stuffing all of the participants shoulder to shoulder in a very hot and confined room. </p>\n\n<p>Note that the temperatures <i>may</i> be dependent on age, gender, or health status, but this is a separate issue and does not affect our conclusion that <i>another person's</i> temperature does not affect someone else's temperature.</p>\n</div>",
"_____no_output_____"
],
[
"<div class=\"span5 alert alert-success\">\n<h2>SOLUTION: Is the true population mean really 98.6 degrees F?</h2>\n</div>",
"_____no_output_____"
],
[
"<div class=\"span5 alert alert-success\">\n<p>We will now perform a bootstrap hypothesis test with the following:</p>\n\n<p>$H_0$: The mean of the sample and the true mean of 98.6 are the same. $\\mu=\\mu_0$</p>\n\n<p>$H_A$: The means are different. $\\mu\\neq\\mu_0$</p>\n\n</div>",
"_____no_output_____"
]
],
[
[
"# Calculate the p value using 100,000 bootstrap replicates\ntemperature = df['temperature'].values\n\n# Shift the sample so its mean equals the null-hypothesis value of 98.6\nshifted = temperature - np.mean(temperature) + 98.6\n\nbootstrap_replicates = np.empty(100000)\n\nfor i in range(len(bootstrap_replicates)):\n    bootstrap_sample = np.random.choice(shifted, size=len(shifted))\n    bootstrap_replicates[i] = np.mean(bootstrap_sample)\n\n# One-sided p: fraction of replicate means at least as far below 98.6 as the observed mean\np = np.sum(bootstrap_replicates <= np.mean(temperature)) / len(bootstrap_replicates)\nprint('p =', p)",
"_____no_output_____"
]
],
[
[
"<div class=\"span5 alert alert-success\">\n<p>We are testing only if the true population mean temperature is 98.6. We are treating everyone as being in the same group, with one mean. We use a **one-sample** test. The population standard deviation is not given, so we assume it is not known. We do however know the sample standard deviation from the data and we know that the sample size is large enough for CLT to apply, so we can use a $z$-test.</p>\n</div>",
"_____no_output_____"
]
],
[
[
"z = (mean - 98.6)/(sd / np.sqrt(n))\nz",
"_____no_output_____"
]
],
[
[
"<div class=\"span5 alert alert-success\">\nSince the question does not ask if the true mean is greater than or less than 98.6 as the alternative hypothesis, we use a two-tailed test. We have two regions where we reject the null hypothesis: if $z < -1.96$ or if $z > 1.96$, assuming $\\alpha = 0.05$. Since -5.48 < -1.96, we reject the null hypothesis: the true population mean temperature is NOT 98.6.\n\n<p>We can also use a p-value:</p>\n</div>",
"_____no_output_____"
]
],
[
[
"stats.norm.cdf(z) * 2\n# NOTE: Since CDF gives us $P(Z \\le z)$ and this is a two-tailed test, we multiply the result by 2",
"_____no_output_____"
]
],
[
[
"<div class=\"span5 alert alert-success\">\n<p>Since the p-value is *way* below 0.05, we reject the null hypothesis. The population mean is not 98.6.</p>\n\n<p>The $z$-test was the \"correct\" test to use in this case. But what if we used a $t$-test instead? The degrees of freedom is $n - 1 = 129$.</p>\n</div>",
"_____no_output_____"
]
],
[
[
"t = (mean - 98.6)/(sd / np.sqrt(n))",
"_____no_output_____"
]
],
[
[
"<div class=\"span5 alert alert-success\">\nWe find the critical value of $t$ and when $\\vert t \\vert > \\vert t^* \\vert$ we reject the null hypothesis.\n</div>",
"_____no_output_____"
]
],
[
[
"t_critical = stats.t.ppf(0.05 / 2, n - 1)\nt_critical",
"_____no_output_____"
]
],
[
[
"<div class=\"span5 alert alert-success\">\n<p>Note that the critical value of $t$ is $\\pm 1.979$. This is pretty close to the $\\pm 1.96$ we used for the $z$-test. *As the sample size gets larger, the Student's $t$ distribution converges to the normal distribution.* So in theory, even if your sample size is large you could use the $t$-test, but the pesky degrees of freedom step is likely why people do not. If we use a sample of size, say, 1000, the critical values are close to identical.</p>\n\n<p>So, to answer the question, the result is NOT different! The only case where it would be different is if the $t$ statistic were between -1.96 and -1.979 which would be pretty rare.</p>\n</div>",
"_____no_output_____"
],
[
"<div class=\"span5 alert alert-success\">\n<h2>SOLUTION: At what temperature should we consider someone's temperature to be \"abnormal\"?</h2>\n\n<p>We compute the confidence interval using $z^* = \\pm 1.96$.</p>\n\n<p>The margin of error is </p>\n\n$$MOE = z^* \\frac{\\sigma}{\\sqrt{n}}$$\n</div>",
"_____no_output_____"
]
],
[
[
"sd = df['temperature'].std()\nn = len(df['temperature'])\nmoe = 1.96 * sd / np.sqrt(n)\nmoe",
"_____no_output_____"
],
[
"mean = df['temperature'].mean()\nci = mean + np.array([-1, 1]) * moe\nci",
"_____no_output_____"
]
],
[
[
"<div class=\"span5 alert alert-success\">At the 95% confidence level, we consider a temperature abnormal if it is below 98.12 degrees or above 98.38 degrees. Since the null hypothesized value 98.6 is not in the confidence interval, we reject the null hypothesis -- the true population mean is not 98.6 degrees.</div>",
"_____no_output_____"
],
[
"<div class=\"span5 alert alert-success\">\nWe can also use the bootstrap approach.\n</div>",
"_____no_output_____"
]
],
[
[
"# Define bootstrap functions:\n\ndef replicate(data, function):\n \"\"\"Return replicate of a resampled data array.\"\"\"\n \n # Create the resampled array and return the statistic of interest:\n return function(np.random.choice(data, size=len(data)))\n\n\ndef draw_replicates(data, function, size=1):\n \"\"\"Draw bootstrap replicates.\"\"\"\n\n # Initialize array of replicates:\n replicates = np.empty(size)\n\n # Generate replicates:\n for i in range(size):\n replicates[i] = replicate(data, function)\n\n return replicates",
"_____no_output_____"
],
[
"# Seed the random number generator:\nnp.random.seed(15)\n\n# Draw bootstrap replicates of temperatures:\nreplicates = draw_replicates(df.temperature, np.mean, 10000)\n\n# Compute the 99.9% confidence interval:\nCI = np.percentile(replicates, [0.05, 99.95])\nprint('99.9% Confidence Interval:', CI)",
"_____no_output_____"
]
],
[
[
"<div class=\"span5 alert alert-success\">\n\n<h2>SOLUTION: Is there a significant difference between males and females in normal temperature?</h2>\n\n<p>We use a two-sample test. Since the number of males is greater than 30 and the number of females is greater than 30, we use a two-sample z-test. Since the question just asks if there is a *difference* and doesn't specify a direction, we use a two-tailed test.</p>\n\n$$z = \\frac{(\\bar{x}_M - \\bar{x}_F) - 0}{\\sqrt{\\frac{\\sigma_M^2}{n_M} + \\frac{\\sigma_F^2}{n_F}}}$$\n</div>",
"_____no_output_____"
]
],
[
[
"males = df.gender == 'M'\ndiff_means = df.temperature[males].mean() - df.temperature[~males].mean()\nsd_male = df.temperature[males].std()\nsd_female = df.temperature[~males].std()\nn_male = np.sum(males)\nn_female = len(df.temperature) - n_male\n\nz = diff_means / np.sqrt(((sd_male ** 2)/ n_male) + ((sd_female ** 2)/ n_female))\nz",
"_____no_output_____"
],
[
"pval = stats.norm.cdf(z) * 2\npval",
"_____no_output_____"
]
],
[
[
"<div class=\"span5 alert alert-success\">\n<p>Since the p-value of 0.022 < 0.05, we reject the null hypothesis that the mean body temperature for men and women is the same. The difference in mean body temperature between men and women is statistically significant.</p>\n</div>",
"_____no_output_____"
]
],
[
[
"diff_means + np.array([-1, 1]) * 1.96 * np.sqrt(((sd_male ** 2)/ n_male) + ((sd_female ** 2)/ n_female))",
"_____no_output_____"
]
],
[
[
"<div class=\"span5 alert alert-success\">Since the null hypothesized 0 is not in the confidence interval, we reject the null hypothesis with the same conclusion as the hypothesis test.</div>",
"_____no_output_____"
],
[
"<div class=\"span5 alert alert-success\">Now let's try the hacker stats approach.</div>",
"_____no_output_____"
]
],
[
[
"# Split temperatures by gender and compute the observed mean difference\nmales = df.gender == 'M'\nmale_temperature = df.temperature[males].values\nfemale_temperature = df.temperature[~males].values\nmale_and_female_diff = np.abs(np.mean(male_temperature) - np.mean(female_temperature))\n\npermutation_replicates = np.empty(100000)\n\nfor i in range(len(permutation_replicates)):\n    combined_perm_temperatures = np.random.permutation(np.concatenate((male_temperature, female_temperature)))\n\n    male_permutation = combined_perm_temperatures[:len(male_temperature)]\n    female_permutation = combined_perm_temperatures[len(male_temperature):]\n\n    permutation_replicates[i] = np.abs(np.mean(male_permutation) - np.mean(female_permutation))\n\np_val = np.sum(permutation_replicates >= male_and_female_diff) / len(permutation_replicates)\n\nprint('p =', p_val)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
]
] |
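The frequentist steps in the record above reduce to a one-sample z statistic and a two-sided p-value. A self-contained sketch using only the standard library; the summary numbers passed in below are illustrative values close to the published body-temperature dataset, not computed from it here:

```python
import math

def one_sample_z(sample_mean, null_mean, sd, n):
    """Return (z, two-sided p) for a one-sample z-test."""
    z = (sample_mean - null_mean) / (sd / math.sqrt(n))
    # Two-sided p-value: P(|Z| >= |z|) = erfc(|z| / sqrt(2))
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p

z, p = one_sample_z(sample_mean=98.25, null_mean=98.6, sd=0.73, n=130)
print(round(z, 2), p)
```

Using `math.erfc` avoids a SciPy dependency: the standard normal CDF is $\Phi(z) = \tfrac{1}{2}(1 + \mathrm{erf}(z/\sqrt{2}))$, so the two-sided tail probability collapses to a single `erfc` call.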
4ab1dfccf9a3a33f2a348bb9991d2ba5b8c81ee3
| 111,768 |
ipynb
|
Jupyter Notebook
|
notebooks/tutorial.ipynb
|
RossVerploegh/nupxrd
|
677a1362b30d8cf776f09fb45e6c5486d9db6e83
|
[
"MIT"
] | null | null | null |
notebooks/tutorial.ipynb
|
RossVerploegh/nupxrd
|
677a1362b30d8cf776f09fb45e6c5486d9db6e83
|
[
"MIT"
] | null | null | null |
notebooks/tutorial.ipynb
|
RossVerploegh/nupxrd
|
677a1362b30d8cf776f09fb45e6c5486d9db6e83
|
[
"MIT"
] | null | null | null | 184.740496 | 44,002 | 0.537318 |
[
[
[
"# Automating Powder X-Ray Diffraction Data Analysis\n\n### Authors\n* **Ross Verploegh**, [email protected]",
"_____no_output_____"
],
[
"### Getting Started\n\nThe Python module NuPXRD was written to simplify the analysis of PXRD data. \n\n*Input*: A raw PXRD .dat file \n*Output*: CSV file, visualization of the pattern ",
"_____no_output_____"
]
],
[
[
"import os\nimport numpy as np\nfrom nupxrd import nupxrd\n\nprint(os.path.dirname(nupxrd.__file__))",
"/Users/rossjamesverploegh/miniconda3/lib/python3.6/site-packages/nupxrd\n"
]
],
[
[
"Initialize the NuPXRD object and pass it the path to the PXRD .DAT file.",
"_____no_output_____"
]
],
[
[
"pxrd_data=nupxrd.NuPXRD(path=os.getcwd()+\"/UiO_66.DAT\")",
"_____no_output_____"
]
],
[
[
"Read the UiO-66 PXRD data and initialize the attributes.",
"_____no_output_____"
]
],
[
[
"pxrd_data.read_pxrd()",
"_____no_output_____"
]
],
[
[
"Inspect all the attributes of the pxrd_data object.",
"_____no_output_____"
]
],
[
[
"print(pxrd_data.path)\nprint(pxrd_data.mofname)\nprint(pxrd_data.data)",
"/Users/rossjamesverploegh/Dropbox (NuMat Technologies)/Ross_Verploegh/2018_Projects/github_verploegh/nupxrd/notebooks/UiO_66.DAT\nUiO_66\n[['Bundle', '1'], ['Partition', '1'], ['Operator', 'N.U. X-ray Dif. Fac'], ['Gonio', 'ATX-G'], ['Memo', 'J.B. Cohen X-ray Diffraction Facility'], ['IncidentMonochro', 'Slit collimation'], ['CounterMonochro', 'Receiving slit'], ['Attachment', 'XY-STAGE'], ['Filter', ''], ['SlitName0', 'XRR Compression'], ['SlitName1', 'Reciprocal SM Slit Collimation'], ['Counter', 'SC-70'], ['KAlpha1', '1.54056'], ['KAlpha2', '1.54439'], ['KBata', '0'], ['Target', 'Cu'], ['KV', '50'], ['mA', '240'], ['AxisName', '2Theta'], ['MeasurementMethod', 'Continuation'], ['Start', '2'], ['Finish', '30'], ['Speed', '10'], ['FixedTime', ''], ['Width', '0.01'], ['FullScale', '11800'], ['Comment', ''], ['SampleName', ''], ['Xunit', 'deg.'], ['Yunit', 'CPS'], [2.0, 600.4], [2.01, 683.8], [2.02, 750.6], [2.03, 667.1], [2.04, 733.9], [2.05, 750.6], [2.06, 516.9], [2.07, 650.4], [2.08, 633.7], [2.09, 633.7], [2.1, 700.5], [2.11, 533.6], [2.12, 533.6], [2.13, 617.1], [2.14, 617.1], [2.15, 466.9], [2.16, 633.8], [2.17, 650.4], [2.18, 700.5], [2.19, 617.1], [2.2, 600.4], [2.21, 667.1], [2.22, 483.6], [2.23, 533.6], [2.24, 483.6], [2.25, 550.3], [2.26, 617.1], [2.27, 433.5], [2.28, 433.5], [2.29, 583.7], [2.3, 567.0], [2.31, 550.3], [2.32, 667.1], [2.33, 416.9], [2.34, 433.5], [2.35, 516.9], [2.36, 650.4], [2.37, 650.4], [2.38, 516.9], [2.39, 717.2], [2.4, 366.8], [2.41, 683.8], [2.42, 633.8], [2.43, 533.6], [2.44, 617.1], [2.45, 600.4], [2.46, 567.0], [2.47, 466.9], [2.48, 617.1], [2.49, 583.7], [2.5, 633.7], [2.51, 500.3], [2.52, 583.7], [2.53, 567.0], [2.54, 583.7], [2.55, 550.3], [2.56, 583.7], [2.57, 533.6], [2.58, 533.6], [2.59, 433.5], [2.6, 416.8], [2.61, 550.3], [2.62, 567.0], [2.63, 350.1], [2.64, 400.2], [2.65, 683.8], [2.66, 567.0], [2.67, 567.0], [2.68, 700.5], [2.69, 617.0], [2.7, 466.9], [2.71, 667.2], [2.72, 567.0], [2.73, 483.6], [2.74, 533.7], [2.75, 
383.5], [2.76, 466.9], [2.77, 683.8], [2.78, 633.7], [2.79, 600.4], [2.8, 483.6], [2.81, 433.5], [2.82, 550.3], [2.83, 500.3], [2.84, 500.3], [2.85, 633.7], [2.86, 583.7], [2.87, 500.3], [2.88, 583.7], [2.89, 733.9], [2.9, 683.8], [2.91, 416.9], [2.92, 633.7], [2.93, 516.9], [2.94, 483.6], [2.95, 617.1], [2.96, 500.3], [2.97, 567.0], [2.98, 600.4], [2.99, 550.3], [3.0, 516.9], [3.01, 633.8], [3.02, 517.0], [3.03, 433.5], [3.04, 667.1], [3.05, 450.2], [3.06, 650.5], [3.07, 583.7], [3.08, 500.3], [3.09, 617.0], [3.1, 433.5], [3.11, 617.1], [3.12, 733.9], [3.13, 650.4], [3.14, 683.8], [3.15, 617.1], [3.16, 567.0], [3.17, 550.3], [3.18, 516.9], [3.19, 333.5], [3.2, 333.5], [3.21, 516.9], [3.22, 650.4], [3.23, 617.1], [3.24, 500.3], [3.25, 433.5], [3.26, 650.5], [3.27, 550.3], [3.28, 767.3], [3.29, 516.9], [3.3, 717.3], [3.31, 617.1], [3.32, 600.4], [3.33, 650.4], [3.34, 567.0], [3.35, 567.0], [3.36, 400.2], [3.37, 567.0], [3.38, 500.3], [3.39, 450.2], [3.4, 667.1], [3.41, 533.6], [3.42, 583.7], [3.43, 600.4], [3.44, 617.1], [3.45, 617.1], [3.46, 550.3], [3.47, 650.4], [3.48, 500.3], [3.49, 533.6], [3.5, 650.4], [3.51, 650.4], [3.52, 633.8], [3.53, 567.0], [3.54, 683.8], [3.55, 383.5], [3.56, 583.7], [3.57, 483.6], [3.58, 767.3], [3.59, 650.5], [3.6, 517.0], [3.61, 733.9], [3.62, 633.7], [3.63, 450.2], [3.64, 600.4], [3.65, 617.1], [3.66, 650.4], [3.67, 567.0], [3.68, 617.1], [3.69, 517.0], [3.7, 600.4], [3.71, 500.3], [3.72, 717.2], [3.73, 683.8], [3.74, 667.1], [3.75, 683.8], [3.76, 633.8], [3.77, 617.1], [3.78, 633.8], [3.79, 600.4], [3.8, 550.3], [3.81, 483.6], [3.82, 700.5], [3.83, 667.2], [3.84, 516.9], [3.85, 717.2], [3.86, 733.9], [3.87, 617.0], [3.88, 583.7], [3.89, 533.6], [3.9, 516.9], [3.91, 617.0], [3.92, 750.6], [3.93, 416.9], [3.94, 683.8], [3.95, 600.4], [3.96, 717.2], [3.97, 600.4], [3.98, 600.4], [3.99, 600.4], [4.0, 450.2], [4.01, 550.3], [4.02, 600.4], [4.03, 533.6], [4.04, 567.0], [4.05, 850.7], [4.06, 567.0], [4.07, 583.7], [4.08, 600.4], [4.09, 
550.3], [4.1, 750.6], [4.11, 550.3], [4.12, 483.6], [4.13, 650.4], [4.14, 583.7], [4.15, 700.5], [4.16, 516.9], [4.17, 617.1], [4.18, 683.8], [4.19, 717.2], [4.2, 683.8], [4.21, 650.5], [4.22, 600.4], [4.23, 500.3], [4.24, 683.8], [4.25, 884.1], [4.26, 450.2], [4.27, 683.8], [4.28, 466.9], [4.29, 650.4], [4.3, 567.0], [4.31, 500.3], [4.32, 934.3], [4.33, 800.6], [4.34, 600.4], [4.35, 767.3], [4.36, 733.9], [4.37, 433.5], [4.38, 750.6], [4.39, 700.5], [4.4, 500.3], [4.41, 667.1], [4.42, 500.3], [4.43, 667.1], [4.44, 567.0], [4.45, 600.4], [4.46, 650.4], [4.47, 366.8], [4.48, 600.4], [4.49, 533.6], [4.5, 517.0], [4.51, 667.1], [4.52, 416.8], [4.53, 617.1], [4.54, 516.9], [4.55, 466.9], [4.56, 500.3], [4.57, 650.4], [4.58, 483.6], [4.59, 750.6], [4.6, 667.1], [4.61, 717.2], [4.62, 733.9], [4.63, 483.6], [4.64, 733.9], [4.65, 600.4], [4.66, 466.9], [4.67, 483.6], [4.68, 600.4], [4.69, 784.0], [4.7, 600.4], [4.71, 617.0], [4.72, 466.9], [4.73, 400.2], [4.74, 483.6], [4.75, 750.6], [4.76, 567.0], [4.77, 600.4], [4.78, 550.3], [4.79, 550.3], [4.8, 617.0], [4.81, 483.6], [4.82, 667.1], [4.83, 516.9], [4.84, 700.5], [4.85, 567.0], [4.86, 533.6], [4.87, 433.5], [4.88, 650.4], [4.89, 600.4], [4.9, 450.2], [4.91, 750.6], [4.92, 366.8], [4.93, 466.9], [4.94, 400.2], [4.95, 383.5], [4.96, 400.2], [4.97, 617.1], [4.98, 366.8], [4.99, 416.8], [5.0, 500.3], [5.01, 366.8], [5.02, 516.9], [5.03, 433.5], [5.04, 466.9], [5.05, 533.6], [5.06, 450.2], [5.07, 500.3], [5.08, 483.6], [5.09, 466.9], [5.1, 500.3], [5.11, 533.6], [5.12, 450.2], [5.13, 366.8], [5.14, 516.9], [5.15, 483.6], [5.16, 466.9], [5.17, 383.5], [5.18, 567.0], [5.19, 550.3], [5.2, 466.9], [5.21, 416.8], [5.22, 416.8], [5.23, 450.2], [5.24, 366.8], [5.25, 500.3], [5.26, 483.6], [5.27, 617.1], [5.28, 483.6], [5.29, 400.2], [5.3, 450.2], [5.31, 366.8], [5.32, 433.5], [5.33, 400.2], [5.34, 433.5], [5.35, 300.1], [5.36, 383.5], [5.37, 400.2], [5.38, 300.1], [5.39, 416.8], [5.4, 550.3], [5.41, 450.2], [5.42, 416.8], [5.43, 
266.7], [5.44, 316.8], [5.45, 416.9], [5.46, 366.8], [5.47, 516.9], [5.48, 466.9], [5.49, 550.3], [5.5, 316.8], [5.51, 533.6], [5.52, 300.1], [5.53, 483.6], [5.54, 483.6], [5.55, 416.8], [5.56, 483.6], [5.57, 550.3], [5.58, 533.6], [5.59, 400.2], [5.6, 483.6], [5.61, 450.2], [5.62, 450.2], [5.63, 416.8], [5.64, 633.7], [5.65, 533.6], [5.66, 433.5], [5.67, 633.7], [5.68, 416.9], [5.69, 466.9], [5.7, 450.2], [5.71, 466.9], [5.72, 400.2], [5.73, 466.9], [5.74, 483.6], [5.75, 500.3], [5.76, 450.2], [5.77, 466.9], [5.78, 400.2], [5.79, 333.4], [5.8, 383.5], [5.81, 466.9], [5.82, 433.5], [5.83, 250.1], [5.84, 516.9], [5.85, 416.8], [5.86, 333.5], [5.87, 266.7], [5.88, 383.5], [5.89, 433.5], [5.9, 400.2], [5.91, 466.9], [5.92, 466.9], [5.93, 350.1], [5.94, 333.4], [5.95, 283.4], [5.96, 366.8], [5.97, 333.4], [5.98, 300.1], [5.99, 500.3], [6.0, 266.7], [6.01, 383.5], [6.02, 416.8], [6.03, 650.4], [6.04, 416.8], [6.05, 416.8], [6.06, 350.1], [6.07, 316.8], [6.08, 366.8], [6.09, 433.5], [6.1, 366.8], [6.11, 466.9], [6.12, 383.5], [6.13, 283.4], [6.14, 416.8], [6.15, 183.4], [6.16, 650.4], [6.17, 433.5], [6.18, 350.1], [6.19, 350.1], [6.2, 366.8], [6.21, 333.5], [6.22, 383.5], [6.23, 333.5], [6.24, 333.5], [6.25, 450.2], [6.26, 483.6], [6.27, 483.6], [6.28, 500.3], [6.29, 350.1], [6.3, 300.1], [6.31, 333.5], [6.32, 383.5], [6.33, 416.8], [6.34, 250.1], [6.35, 400.2], [6.36, 450.2], [6.37, 500.3], [6.38, 450.2], [6.39, 516.9], [6.4, 400.2], [6.41, 233.4], [6.42, 400.2], [6.43, 416.8], [6.44, 366.8], [6.45, 483.6], [6.46, 383.5], [6.47, 233.4], [6.48, 450.2], [6.49, 466.9], [6.5, 567.0], [6.51, 466.9], [6.52, 283.4], [6.53, 366.8], [6.54, 433.5], [6.55, 466.9], [6.56, 416.8], [6.57, 517.0], [6.58, 416.8], [6.59, 400.2], [6.6, 466.9], [6.61, 483.6], [6.62, 600.4], [6.63, 416.8], [6.64, 533.6], [6.65, 567.0], [6.66, 600.4], [6.67, 500.3], [6.68, 383.5], [6.69, 416.9], [6.7, 416.8], [6.71, 567.0], [6.72, 583.7], [6.73, 416.8], [6.74, 550.3], [6.75, 733.9], [6.76, 516.9], [6.77, 
817.3], [6.78, 667.1], [6.79, 583.7], [6.8, 583.7], [6.81, 984.3], [6.82, 650.4], [6.83, 917.5], [6.84, 784.0], [6.85, 617.1], [6.86, 784.0], [6.87, 733.9], [6.88, 717.2], [6.89, 934.2], [6.9, 1251.6], [6.91, 967.6], [6.92, 950.9], [6.93, 1101.2], [6.94, 1418.7], [6.95, 1001.0], [6.96, 1084.5], [6.97, 1234.9], [6.98, 1402.1], [6.99, 1368.7], [7.0, 1719.6], [7.01, 1602.6], [7.02, 1468.9], [7.03, 1669.5], [7.04, 1669.6], [7.05, 2054.3], [7.06, 1887.1], [7.07, 1803.3], [7.08, 2087.7], [7.09, 2305.5], [7.1, 2355.7], [7.11, 2489.8], [7.12, 2992.7], [7.13, 2590.2], [7.14, 3193.8], [7.15, 3260.9], [7.16, 4016.4], [7.17, 3781.7], [7.18, 3865.4], [7.19, 4151.3], [7.2, 4251.6], [7.21, 4588.2], [7.22, 5615.6], [7.23, 5430.3], [7.24, 5868.8], [7.25, 6308.0], [7.26, 7422.7], [7.27, 7913.5], [7.28, 7711.1], [7.29, 8405.5], [7.3, 9849.0], [7.31, 8644.2], [7.32, 9084.7], [7.33, 8575.8], [7.34, 9133.8], [7.35, 8371.8], [7.36, 8355.0], [7.37, 7338.1], [7.38, 6830.9], [7.39, 6797.5], [7.4, 5548.2], [7.41, 4672.3], [7.42, 4386.4], [7.43, 3563.1], [7.44, 2372.6], [7.45, 2506.4], [7.46, 2054.4], [7.47, 1569.1], [7.48, 1502.3], [7.49, 1051.1], [7.5, 934.2], [7.51, 917.5], [7.52, 600.4], [7.53, 900.8], [7.54, 733.9], [7.55, 817.4], [7.56, 683.8], [7.57, 683.8], [7.58, 350.1], [7.59, 450.2], [7.6, 483.6], [7.61, 450.2], [7.62, 533.6], [7.63, 316.8], [7.64, 400.2], [7.65, 466.9], [7.66, 466.9], [7.67, 433.5], [7.68, 400.2], [7.69, 300.1], [7.7, 450.2], [7.71, 500.3], [7.72, 316.8], [7.73, 283.4], [7.74, 200.0], [7.75, 183.4], [7.76, 216.7], [7.77, 216.7], [7.78, 416.9], [7.79, 400.2], [7.8, 366.8], [7.81, 316.8], [7.82, 300.1], [7.83, 250.1], [7.84, 283.4], [7.85, 316.8], [7.86, 300.1], [7.87, 250.1], [7.88, 250.1], [7.89, 350.1], [7.9, 350.1], [7.91, 316.8], [7.92, 250.1], [7.93, 350.1], [7.94, 383.5], [7.95, 316.8], [7.96, 383.5], [7.97, 233.4], [7.98, 316.8], [7.99, 416.8], [8.0, 383.5], [8.01, 316.8], [8.02, 400.2], [8.03, 283.4], [8.04, 316.8], [8.05, 450.2], [8.06, 400.2], [8.07, 
433.5], [8.08, 300.1], [8.09, 383.5], [8.1, 450.2], [8.11, 400.2], [8.12, 567.0], [8.13, 633.7], [8.14, 450.2], [8.15, 550.3], [8.16, 567.0], [8.17, 800.6], [8.18, 550.3], [8.19, 817.3], [8.2, 767.3], [8.21, 700.5], [8.22, 650.4], [8.23, 650.5], [8.24, 834.0], [8.25, 1017.7], [8.26, 900.8], [8.27, 1118.0], [8.28, 850.7], [8.29, 1335.2], [8.3, 1168.1], [8.31, 1368.5], [8.32, 1686.2], [8.33, 1285.1], [8.34, 1519.0], [8.35, 1636.1], [8.36, 1970.6], [8.37, 1769.9], [8.38, 2037.6], [8.39, 2456.3], [8.4, 2322.1], [8.41, 3395.2], [8.42, 2891.9], [8.43, 3126.9], [8.44, 3193.8], [8.45, 3227.4], [8.46, 3412.1], [8.47, 3563.2], [8.48, 3579.7], [8.49, 3261.1], [8.5, 3009.4], [8.51, 2875.3], [8.52, 2640.4], [8.53, 2305.5], [8.54, 1937.2], [8.55, 1887.0], [8.56, 1502.3], [8.57, 1351.8], [8.58, 1084.5], [8.59, 917.5], [8.6, 700.5], [8.61, 617.1], [8.62, 383.5], [8.63, 550.3], [8.64, 350.1], [8.65, 433.5], [8.66, 383.5], [8.67, 416.8], [8.68, 300.1], [8.69, 300.1], [8.7, 350.1], [8.71, 216.7], [8.72, 300.1], [8.73, 183.4], [8.74, 216.7], [8.75, 300.1], [8.76, 150.0], [8.77, 233.4], [8.78, 133.4], [8.79, 150.0], [8.8, 150.0], [8.81, 150.0], [8.82, 166.7], [8.83, 166.7], [8.84, 133.4], [8.85, 150.0], [8.86, 150.0], [8.87, 183.4], [8.88, 150.0], [8.89, 150.0], [8.9, 233.4], [8.91, 166.7], [8.92, 150.0], [8.93, 166.7], [8.94, 200.0], [8.95, 183.4], [8.96, 116.7], [8.97, 166.7], [8.98, 216.7], [8.99, 216.7], [9.0, 166.7], [9.01, 233.4], [9.02, 233.4], [9.03, 266.7], [9.04, 116.7], [9.05, 116.7], [9.06, 150.0], [9.07, 166.7], [9.08, 183.4], [9.09, 150.0], [9.1, 133.4], [9.11, 166.7], [9.12, 133.4], [9.13, 166.7], [9.14, 100.0], [9.15, 116.7], [9.16, 216.7], [9.17, 133.4], [9.18, 183.4], [9.19, 166.7], [9.2, 200.0], [9.21, 150.0], [9.22, 133.4], [9.23, 166.7], [9.24, 100.0], [9.25, 216.7], [9.26, 250.1], [9.27, 50.0], [9.28, 183.4], [9.29, 133.4], [9.3, 150.0], [9.31, 216.7], [9.32, 116.7], [9.33, 200.0], [9.34, 166.7], [9.35, 133.4], [9.36, 183.4], [9.37, 150.0], [9.38, 150.0], [9.39, 
150.0], [9.4, 83.3], [9.41, 166.7], [9.42, 133.4], [9.43, 83.3], [9.44, 266.7], [9.45, 116.7], [9.46, 233.4], [9.47, 116.7], [9.48, 200.0], [9.49, 150.0], [9.5, 133.4], [9.51, 216.7], [9.52, 116.7], [9.53, 83.3], [9.54, 150.0], [9.55, 200.0], [9.56, 150.0], [9.57, 150.0], [9.58, 150.0], [9.59, 100.0], [9.6, 233.4], [9.61, 166.7], [9.62, 83.3], [9.63, 133.4], [9.64, 133.4], [9.65, 100.0], [9.66, 83.3], [9.67, 150.0], [9.68, 100.0], [9.69, 166.7], [9.7, 100.0], [9.71, 150.0], [9.72, 150.0], [9.73, 116.7], [9.74, 50.0], [9.75, 150.0], [9.76, 150.0], [9.77, 50.0], [9.78, 116.7], [9.79, 116.7], [9.8, 150.0], [9.81, 300.1], [9.82, 183.4], [9.83, 150.0], [9.84, 150.0], [9.85, 116.7], [9.86, 83.3], [9.87, 216.7], [9.88, 100.0], [9.89, 100.0], [9.9, 150.0], [9.91, 116.7], [9.92, 216.7], [9.93, 133.4], [9.94, 116.7], [9.95, 166.7], [9.96, 133.4], [9.97, 216.7], [9.98, 150.0], [9.99, 183.4], [10.0, 150.0], [10.01, 116.7], [10.02, 83.3], [10.03, 133.4], [10.04, 100.0], [10.05, 66.7], [10.06, 150.0], [10.07, 133.4], [10.08, 100.0], [10.09, 100.0], [10.1, 50.0], [10.11, 116.7], [10.12, 50.0], [10.13, 50.0], [10.14, 150.0], [10.15, 183.4], [10.16, 83.3], [10.17, 216.7], [10.18, 100.0], [10.19, 200.0], [10.2, 116.7], [10.21, 83.3], [10.22, 133.4], [10.23, 116.7], [10.24, 100.0], [10.25, 83.3], [10.26, 133.4], [10.27, 133.4], [10.28, 66.7], [10.29, 216.7], [10.3, 50.0], [10.31, 83.3], [10.32, 100.0], [10.33, 133.4], [10.34, 100.0], [10.35, 83.3], [10.36, 166.7], [10.37, 200.0], [10.38, 83.3], [10.39, 83.3], [10.4, 66.7], [10.41, 83.3], [10.42, 50.0], [10.43, 116.7], [10.44, 183.4], [10.45, 50.0], [10.46, 200.0], [10.47, 100.0], [10.48, 133.4], [10.49, 50.0], [10.5, 33.3], [10.51, 100.0], [10.52, 50.0], [10.53, 116.7], [10.54, 50.0], [10.55, 250.1], [10.56, 133.4], [10.57, 66.7], [10.58, 116.7], [10.59, 116.7], [10.6, 116.7], [10.61, 66.7], [10.62, 83.3], [10.63, 83.3], [10.64, 116.7], [10.65, 150.0], [10.66, 33.3], [10.67, 133.4], [10.68, 166.7], [10.69, 66.7], [10.7, 83.3], 
[10.71, 33.3], [10.72, 100.0], [10.73, 33.3], [10.74, 50.0], [10.75, 150.0], [10.76, 83.3], [10.77, 150.0], [10.78, 133.4], [10.79, 100.0], [10.8, 116.7], [10.81, 66.7], [10.82, 83.3], [10.83, 83.3], [10.84, 133.4], [10.85, 150.0], [10.86, 83.3], [10.87, 116.7], [10.88, 116.7], [10.89, 150.0], [10.9, 66.7], [10.91, 83.3], [10.92, 16.7], [10.93, 100.0], [10.94, 83.3], [10.95, 66.7], [10.96, 100.0], [10.97, 33.3], [10.98, 150.0], [10.99, 83.3], [11.0, 50.0], [11.01, 133.4], [11.02, 116.7], [11.03, 83.3], [11.04, 66.7], [11.05, 100.0], [11.06, 116.7], [11.07, 16.7], [11.08, 133.4], [11.09, 133.4], [11.1, 50.0], [11.11, 66.7], [11.12, 100.0], [11.13, 100.0], [11.14, 100.0], [11.15, 100.0], [11.16, 83.3], [11.17, 50.0], [11.18, 166.7], [11.19, 16.7], [11.2, 166.7], [11.21, 133.4], [11.22, 66.7], [11.23, 116.7], [11.24, 100.0], [11.25, 133.4], [11.26, 33.3], [11.27, 100.0], [11.28, 66.7], [11.29, 133.4], [11.3, 83.3], [11.31, 116.7], [11.32, 133.4], [11.33, 50.0], [11.34, 33.3], [11.35, 33.3], [11.36, 150.0], [11.37, 50.0], [11.38, 100.0], [11.39, 100.0], [11.4, 100.0], [11.41, 66.7], [11.42, 50.0], [11.43, 50.0], [11.44, 66.7], [11.45, 166.7], [11.46, 116.7], [11.47, 66.7], [11.48, 133.4], [11.49, 100.0], [11.5, 66.7], [11.51, 100.0], [11.52, 116.7], [11.53, 116.7], [11.54, 66.7], [11.55, 100.0], [11.56, 200.0], [11.57, 66.7], [11.58, 133.4], [11.59, 66.7], [11.6, 66.7], [11.61, 133.4], [11.62, 50.0], [11.63, 133.4], [11.64, 150.0], [11.65, 133.4], [11.66, 83.3], [11.67, 66.7], [11.68, 166.7], [11.69, 83.3], [11.7, 116.7], [11.71, 100.0], [11.72, 166.7], [11.73, 183.4], [11.74, 250.1], [11.75, 100.0], [11.76, 200.0], [11.77, 216.7], [11.78, 233.4], [11.79, 150.0], [11.8, 233.4], [11.81, 166.7], [11.82, 266.7], [11.83, 250.1], [11.84, 333.5], [11.85, 200.0], [11.86, 333.5], [11.87, 283.4], [11.88, 316.8], [11.89, 250.1], [11.9, 383.5], [11.91, 533.6], [11.92, 400.2], [11.93, 500.3], [11.94, 633.8], [11.95, 567.0], [11.96, 600.4], [11.97, 700.5], [11.98, 767.3], [11.99, 
850.7], [12.0, 767.3], [12.01, 633.7], [12.02, 700.5], [12.03, 667.1], [12.04, 567.0], [12.05, 700.5], [12.06, 700.5], [12.07, 300.1], [12.08, 450.2], [12.09, 366.8], [12.1, 383.5], [12.11, 216.7], [12.12, 216.7], [12.13, 250.1], [12.14, 200.0], [12.15, 150.0], [12.16, 133.4], [12.17, 166.7], [12.18, 100.0], [12.19, 183.4], [12.2, 166.7], [12.21, 83.3], [12.22, 150.0], [12.23, 166.7], [12.24, 50.0], [12.25, 66.7], [12.26, 133.4], [12.27, 16.7], [12.28, 133.4], [12.29, 83.3], [12.3, 100.0], [12.31, 116.7], [12.32, 33.3], [12.33, 66.7], [12.34, 116.7], [12.35, 16.7], [12.36, 50.0], [12.37, 116.7], [12.38, 116.7], [12.39, 100.0], [12.4, 100.0], [12.41, 66.7], [12.42, 83.3], [12.43, 166.7], [12.44, 66.7], [12.45, 66.7], [12.46, 83.3], [12.47, 50.0], [12.48, 33.3], [12.49, 133.4], [12.5, 66.7], [12.51, 133.4], [12.52, 116.7], [12.53, 66.7], [12.54, 50.0], [12.55, 100.0], [12.56, 100.0], [12.57, 50.0], [12.58, 133.4], [12.59, 66.7], [12.6, 0.0], [12.61, 83.3], [12.62, 66.7], [12.63, 83.3], [12.64, 100.0], [12.65, 100.0], [12.66, 83.3], [12.67, 50.0], [12.68, 83.3], [12.69, 100.0], [12.7, 116.7], [12.71, 66.7], [12.72, 83.3], [12.73, 50.0], [12.74, 66.7], [12.75, 116.7], [12.76, 116.7], [12.77, 66.7], [12.78, 66.7], [12.79, 100.0], [12.8, 133.4], [12.81, 66.7], [12.82, 33.3], [12.83, 33.3], [12.84, 50.0], [12.85, 116.7], [12.86, 66.7], [12.87, 133.4], [12.88, 66.7], [12.89, 116.7], [12.9, 100.0], [12.91, 66.7], [12.92, 50.0], [12.93, 66.7], [12.94, 116.7], [12.95, 66.7], [12.96, 16.7], [12.97, 66.7], [12.98, 183.4], [12.99, 66.7], [13.0, 66.7], [13.01, 0.0], [13.02, 50.0], [13.03, 100.0], [13.04, 83.3], [13.05, 83.3], [13.06, 50.0], [13.07, 66.7], [13.08, 116.7], [13.09, 116.7], [13.1, 33.3], [13.11, 133.4], [13.12, 50.0], [13.13, 133.4], [13.14, 33.3], [13.15, 150.0], [13.16, 116.7], [13.17, 83.3], [13.18, 166.7], [13.19, 100.0], [13.2, 50.0], [13.21, 116.7], [13.22, 83.3], [13.23, 83.3], [13.24, 100.0], [13.25, 66.7], [13.26, 66.7], [13.27, 116.7], [13.28, 50.0], 
[13.29, 66.7], [13.3, 100.0], [13.31, 50.0], [13.32, 100.0], [13.33, 116.7], [13.34, 50.0], [13.35, 100.0], [13.36, 50.0], [13.37, 116.7], [13.38, 166.7], [13.39, 66.7], [13.4, 33.3], [13.41, 16.7], [13.42, 133.4], [13.43, 50.0], [13.44, 33.3], [13.45, 183.4], [13.46, 66.7], [13.47, 33.3], [13.48, 66.7], [13.49, 33.3], [13.5, 116.7], [13.51, 100.0], [13.52, 100.0], [13.53, 50.0], [13.54, 100.0], [13.55, 83.3], [13.56, 83.3], [13.57, 66.7], [13.58, 83.3], [13.59, 116.7], [13.6, 100.0], [13.61, 133.4], [13.62, 66.7], [13.63, 66.7], [13.64, 83.3], [13.65, 83.3], [13.66, 100.0], [13.67, 66.7], [13.68, 83.3], [13.69, 116.7], [13.7, 66.7], [13.71, 116.7], [13.72, 33.3], [13.73, 116.7], [13.74, 50.0], [13.75, 66.7], [13.76, 50.0], [13.77, 83.3], [13.78, 50.0], [13.79, 33.3], [13.8, 66.7], [13.81, 66.7], [13.82, 166.7], [13.83, 150.0], [13.84, 66.7], [13.85, 100.0], [13.86, 66.7], [13.87, 166.7], [13.88, 33.3], [13.89, 83.3], [13.9, 150.0], [13.91, 50.0], [13.92, 16.7], [13.93, 116.7], [13.94, 150.0], [13.95, 66.7], [13.96, 83.3], [13.97, 66.7], [13.98, 133.4], [13.99, 100.0], [14.0, 150.0], [14.01, 216.7], [14.02, 150.0], [14.03, 200.0], [14.04, 250.1], [14.05, 233.4], [14.06, 200.0], [14.07, 216.7], [14.08, 183.4], [14.09, 166.7], [14.1, 116.7], [14.11, 150.0], [14.12, 266.7], [14.13, 183.4], [14.14, 150.0], [14.15, 233.4], [14.16, 283.4], [14.17, 66.7], [14.18, 66.7], [14.19, 50.0], [14.2, 116.7], [14.21, 150.0], [14.22, 66.7], [14.23, 83.3], [14.24, 66.7], [14.25, 100.0], [14.26, 66.7], [14.27, 150.0], [14.28, 83.3], [14.29, 83.3], [14.3, 66.7], [14.31, 133.4], [14.32, 133.4], [14.33, 66.7], [14.34, 50.0], [14.35, 66.7], [14.36, 50.0], [14.37, 0.0], [14.38, 116.7], [14.39, 66.7], [14.4, 133.4], [14.41, 33.3], [14.42, 116.7], [14.43, 66.7], [14.44, 150.0], [14.45, 66.7], [14.46, 16.7], [14.47, 83.3], [14.48, 50.0], [14.49, 66.7], [14.5, 66.7], [14.51, 116.7], [14.52, 116.7], [14.53, 83.3], [14.54, 166.7], [14.55, 83.3], [14.56, 50.0], [14.57, 150.0], [14.58, 183.4], 
[14.59, 83.3], [14.6, 83.3], [14.61, 133.4], [14.62, 83.3], [14.63, 150.0], [14.64, 283.4], [14.65, 50.0], [14.66, 250.1], [14.67, 216.7], [14.68, 200.0], [14.69, 283.4], [14.7, 283.4], [14.71, 333.4], [14.72, 300.1], [14.73, 333.4], [14.74, 250.1], [14.75, 250.1], [14.76, 233.4], [14.77, 200.1], [14.78, 200.0], [14.79, 233.4], [14.8, 200.0], [14.81, 100.0], [14.82, 150.0], [14.83, 150.0], [14.84, 116.7], [14.85, 116.7], [14.86, 50.0], [14.87, 100.0], [14.88, 16.7], [14.89, 116.7], [14.9, 50.0], [14.91, 66.7], [14.92, 83.3], [14.93, 66.7], [14.94, 183.4], [14.95, 16.7], [14.96, 83.3], [14.97, 16.7], [14.98, 33.3], [14.99, 150.0], [15.0, 100.0], [15.01, 100.0], [15.02, 100.0], [15.03, 66.7], [15.04, 16.7], [15.05, 100.0], [15.06, 16.7], [15.07, 83.3], [15.08, 100.0], [15.09, 50.0], [15.1, 50.0], [15.11, 50.0], [15.12, 166.7], [15.13, 133.4], [15.14, 50.0], [15.15, 66.7], [15.16, 66.7], [15.17, 100.0], [15.18, 50.0], [15.19, 83.3], [15.2, 116.7], [15.21, 50.0], [15.22, 50.0], [15.23, 66.7], [15.24, 33.3], [15.25, 100.0], [15.26, 66.7], [15.27, 66.7], [15.28, 66.7], [15.29, 100.0], [15.3, 83.3], [15.31, 116.7], [15.32, 100.0], [15.33, 33.3], [15.34, 50.0], [15.35, 83.3], [15.36, 66.7], [15.37, 33.3], [15.38, 66.7], [15.39, 66.7], [15.4, 66.7], [15.41, 116.7], [15.42, 83.3], [15.43, 66.7], [15.44, 33.3], [15.45, 83.3], [15.46, 66.7], [15.47, 50.0], [15.48, 33.3], [15.49, 50.0], [15.5, 66.7], [15.51, 66.7], [15.52, 66.7], [15.53, 66.7], [15.54, 83.3], [15.55, 83.3], [15.56, 116.7], [15.57, 33.3], [15.58, 133.4], [15.59, 33.3], [15.6, 100.0], [15.61, 33.3], [15.62, 83.3], [15.63, 133.4], [15.64, 100.0], [15.65, 33.3], [15.66, 33.3], [15.67, 66.7], [15.68, 66.7], [15.69, 83.3], [15.7, 100.0], [15.71, 100.0], [15.72, 133.4], [15.73, 100.0], [15.74, 33.3], [15.75, 16.7], [15.76, 16.7], [15.77, 150.0], [15.78, 66.7], [15.79, 133.4], [15.8, 16.7], [15.81, 116.7], [15.82, 66.7], [15.83, 100.0], [15.84, 150.0], [15.85, 100.0], [15.86, 116.7], [15.87, 116.7], [15.88, 66.7], 
[15.89, 116.7], [15.9, 50.0], [15.91, 50.0], [15.92, 50.0], [15.93, 50.0], [15.94, 50.0], [15.95, 50.0], [15.96, 50.0], [15.97, 83.3], [15.98, 100.0], [15.99, 100.0], [16.0, 66.7], [16.01, 133.4], [16.02, 50.0], [16.03, 100.0], [16.04, 116.7], [16.05, 33.3], [16.06, 133.4], [16.07, 83.3], [16.08, 100.0], [16.09, 66.7], [16.1, 66.7], [16.11, 83.3], [16.12, 100.0], [16.13, 116.7], [16.14, 83.3], [16.15, 50.0], [16.16, 116.7], [16.17, 50.0], [16.18, 100.0], [16.19, 66.7], [16.2, 50.0], [16.21, 183.4], [16.22, 66.7], [16.23, 116.7], [16.24, 133.4], [16.25, 83.3], [16.26, 50.0], [16.27, 66.7], [16.28, 50.0], [16.29, 33.3], [16.3, 83.3], [16.31, 0.0], [16.32, 83.3], [16.33, 150.0], [16.34, 16.7], [16.35, 100.0], [16.36, 66.7], [16.37, 66.7], [16.38, 50.0], [16.39, 116.7], [16.4, 50.0], [16.41, 50.0], [16.42, 33.3], [16.43, 133.4], [16.44, 133.4], [16.45, 66.7], [16.46, 50.0], [16.47, 66.7], [16.48, 133.4], [16.49, 66.7], [16.5, 116.7], [16.51, 66.7], [16.52, 83.3], [16.53, 183.4], [16.54, 83.3], [16.55, 66.7], [16.56, 83.3], [16.57, 150.0], [16.58, 166.7], [16.59, 83.3], [16.6, 150.0], [16.61, 100.0], [16.62, 100.0], [16.63, 50.0], [16.64, 33.3], [16.65, 66.7], [16.66, 133.4], [16.67, 116.7], [16.68, 0.0], [16.69, 116.7], [16.7, 116.7], [16.71, 133.4], [16.72, 133.4], [16.73, 100.0], [16.74, 83.3], [16.75, 50.0], [16.76, 200.0], [16.77, 66.7], [16.78, 133.4], [16.79, 166.7], [16.8, 133.4], [16.81, 150.0], [16.82, 16.7], [16.83, 200.0], [16.84, 116.7], [16.85, 266.7], [16.86, 250.1], [16.87, 266.7], [16.88, 133.4], [16.89, 266.7], [16.9, 166.7], [16.91, 216.7], [16.92, 333.4], [16.93, 350.1], [16.94, 400.2], [16.95, 366.8], [16.96, 350.1], [16.97, 316.8], [16.98, 350.1], [16.99, 433.5], [17.0, 500.3], [17.01, 283.4], [17.02, 366.8], [17.03, 517.0], [17.04, 600.4], [17.05, 350.1], [17.06, 283.4], [17.07, 200.0], [17.08, 366.8], [17.09, 383.5], [17.1, 266.7], [17.11, 300.1], [17.12, 233.4], [17.13, 283.4], [17.14, 183.4], [17.15, 133.4], [17.16, 183.4], [17.17, 166.7], 
[17.18, 133.4], [17.19, 83.3], [17.2, 166.7], [17.21, 150.0], [17.22, 83.3], [17.23, 66.7], [17.24, 66.7], [17.25, 83.3], [17.26, 116.7], [17.27, 116.7], [17.28, 183.4], [17.29, 116.7], [17.3, 166.7], [17.31, 33.3], [17.32, 150.0], [17.33, 66.7], [17.34, 50.0], [17.35, 83.3], [17.36, 33.3], [17.37, 116.7], [17.38, 83.3], [17.39, 100.0], [17.4, 150.0], [17.41, 100.0], [17.42, 83.3], [17.43, 150.0], [17.44, 150.0], [17.45, 50.0], [17.46, 166.7], [17.47, 83.3], [17.48, 116.7], [17.49, 66.7], [17.5, 66.7], [17.51, 116.7], [17.52, 133.4], [17.53, 183.4], [17.54, 16.7], [17.55, 283.4], [17.56, 100.0], [17.57, 83.3], [17.58, 66.7], [17.59, 83.3], [17.6, 66.7], [17.61, 166.7], [17.62, 100.0], [17.63, 66.7], [17.64, 100.0], [17.65, 83.3], [17.66, 183.4], [17.67, 83.3], [17.68, 50.0], [17.69, 150.0], [17.7, 50.0], [17.71, 150.0], [17.72, 66.7], [17.73, 183.4], [17.74, 83.3], [17.75, 100.0], [17.76, 83.3], [17.77, 100.0], [17.78, 150.0], [17.79, 116.7], [17.8, 100.0], [17.81, 116.7], [17.82, 100.0], [17.83, 66.7], [17.84, 200.0], [17.85, 116.7], [17.86, 166.7], [17.87, 83.3], [17.88, 166.7], [17.89, 150.0], [17.9, 100.0], [17.91, 116.7], [17.92, 100.0], [17.93, 66.7], [17.94, 50.0], [17.95, 66.7], [17.96, 116.7], [17.97, 66.7], [17.98, 66.7], [17.99, 183.4], [18.0, 100.0], [18.01, 116.7], [18.02, 83.3], [18.03, 66.7], [18.04, 116.7], [18.05, 150.0], [18.06, 100.0], [18.07, 66.7], [18.08, 66.7], [18.09, 50.0], [18.1, 116.7], [18.11, 100.0], [18.12, 150.0], [18.13, 100.0], [18.14, 100.0], [18.15, 66.7], [18.16, 150.0], [18.17, 116.7], [18.18, 116.7], [18.19, 133.4], [18.2, 83.3], [18.21, 66.7], [18.22, 150.0], [18.23, 66.7], [18.24, 133.4], [18.25, 116.7], [18.26, 116.7], [18.27, 166.7], [18.28, 100.0], [18.29, 66.7], [18.3, 66.7], [18.31, 83.3], [18.32, 166.7], [18.33, 116.7], [18.34, 100.0], [18.35, 166.7], [18.36, 116.7], [18.37, 133.4], [18.38, 200.0], [18.39, 83.3], [18.4, 133.4], [18.41, 83.3], [18.42, 166.7], [18.43, 183.4], [18.44, 116.7], [18.45, 216.7], [18.46, 
166.7], [18.47, 233.4], [18.48, 250.1], [18.49, 266.7], [18.5, 183.4], [18.51, 150.0], [18.52, 316.8], [18.53, 266.7], [18.54, 266.7], [18.55, 233.4], [18.56, 333.5], [18.57, 366.8], [18.58, 266.7], [18.59, 233.4], [18.6, 300.1], [18.61, 266.7], [18.62, 233.4], [18.63, 250.1], [18.64, 266.7], [18.65, 250.1], [18.66, 133.4], [18.67, 50.0], [18.68, 150.0], [18.69, 133.4], [18.7, 166.7], [18.71, 100.0], [18.72, 166.7], [18.73, 150.0], [18.74, 183.4], [18.75, 150.0], [18.76, 83.3], [18.77, 100.0], [18.78, 166.7], [18.79, 133.4], [18.8, 66.7], [18.81, 100.0], [18.82, 183.4], [18.83, 266.7], [18.84, 200.0], [18.85, 166.7], [18.86, 166.7], [18.87, 166.7], [18.88, 233.4], [18.89, 133.4], [18.9, 116.7], [18.91, 200.0], [18.92, 166.7], [18.93, 166.7], [18.94, 200.0], [18.95, 250.1], [18.96, 200.0], [18.97, 250.1], [18.98, 350.1], [18.99, 300.1], [19.0, 250.1], [19.01, 233.4], [19.02, 183.4], [19.03, 483.6], [19.04, 416.8], [19.05, 283.4], [19.06, 233.4], [19.07, 316.8], [19.08, 200.0], [19.09, 333.4], [19.1, 316.8], [19.11, 200.0], [19.12, 316.8], [19.13, 183.4], [19.14, 200.0], [19.15, 250.1], [19.16, 133.4], [19.17, 366.8], [19.18, 183.4], [19.19, 216.7], [19.2, 216.7], [19.21, 133.4], [19.22, 166.7], [19.23, 100.0], [19.24, 133.4], [19.25, 116.7], [19.26, 133.4], [19.27, 66.7], [19.28, 66.7], [19.29, 216.7], [19.3, 166.7], [19.31, 133.4], [19.32, 100.0], [19.33, 150.0], [19.34, 133.4], [19.35, 166.7], [19.36, 233.4], [19.37, 83.3], [19.38, 150.0], [19.39, 100.0], [19.4, 83.3], [19.41, 133.4], [19.42, 133.4], [19.43, 66.7], [19.44, 150.0], [19.45, 166.7], [19.46, 150.0], [19.47, 133.4], [19.48, 100.0], [19.49, 100.0], [19.5, 50.0], [19.51, 200.0], [19.52, 216.7], [19.53, 150.0], [19.54, 266.7], [19.55, 100.0], [19.56, 183.4], [19.57, 133.4], [19.58, 83.3], [19.59, 116.7], [19.6, 183.4], [19.61, 100.0], [19.62, 183.4], [19.63, 50.0], [19.64, 83.3], [19.65, 66.7], [19.66, 116.7], [19.67, 150.0], [19.68, 166.7], [19.69, 100.0], [19.7, 200.0], [19.71, 100.0], [19.72, 166.7], 
[19.73, 50.0], [19.74, 66.7], [19.75, 83.3], [19.76, 150.0], [19.77, 166.7], [19.78, 200.0], [19.79, 183.4], [19.8, 200.0], [19.81, 66.7], [19.82, 100.0], [19.83, 150.0], [19.84, 133.4], [19.85, 66.7], [19.86, 83.3], [19.87, 150.0], [19.88, 66.7], [19.89, 150.0], [19.9, 133.4], [19.91, 166.7], [19.92, 116.7], [19.93, 100.0], [19.94, 133.4], [19.95, 83.3], [19.96, 150.0], [19.97, 116.7], [19.98, 150.0], [19.99, 50.0], [20.0, 66.7], [20.01, 100.0], [20.02, 100.0], [20.03, 50.0], [20.04, 116.7], [20.05, 116.7], [20.06, 116.7], [20.07, 166.7], [20.08, 116.7], [20.09, 50.0], [20.1, 183.4], [20.11, 166.7], [20.12, 133.4], [20.13, 83.3], [20.14, 166.7], [20.15, 116.7], [20.16, 116.7], [20.17, 150.0], [20.18, 83.3], [20.19, 100.0], [20.2, 150.0], [20.21, 116.7], [20.22, 83.3], [20.23, 116.7], [20.24, 83.3], [20.25, 150.0], [20.26, 150.0], [20.27, 133.4], [20.28, 133.4], [20.29, 150.0], [20.3, 83.3], [20.31, 116.7], [20.32, 66.7], [20.33, 133.4], [20.34, 133.4], [20.35, 216.7], [20.36, 183.4], [20.37, 100.0], [20.38, 133.4], [20.39, 133.4], [20.4, 183.4], [20.41, 150.0], [20.42, 166.7], [20.43, 83.3], [20.44, 200.0], [20.45, 83.3], [20.46, 183.4], [20.47, 100.0], [20.48, 66.7], [20.49, 50.0], [20.5, 66.7], [20.51, 200.0], [20.52, 200.0], [20.53, 166.7], [20.54, 216.7], [20.55, 116.7], [20.56, 100.0], [20.57, 116.7], [20.58, 116.7], [20.59, 133.4], [20.6, 133.4], [20.61, 83.3], [20.62, 116.7], [20.63, 133.4], [20.64, 216.7], [20.65, 166.7], [20.66, 183.4], [20.67, 133.4], [20.68, 116.7], [20.69, 116.7], [20.7, 166.7], [20.71, 133.4], [20.72, 183.4], [20.73, 166.7], [20.74, 200.0], [20.75, 283.4], [20.76, 333.4], [20.77, 233.4], [20.78, 200.0], [20.79, 300.1], [20.8, 283.4], [20.81, 300.1], [20.82, 333.5], [20.83, 166.7], [20.84, 233.4], [20.85, 200.0], [20.86, 450.2], [20.87, 400.2], [20.88, 333.4], [20.89, 366.8], [20.9, 316.8], [20.91, 383.5], [20.92, 200.0], [20.93, 433.5], [20.94, 216.7], [20.95, 250.1], [20.96, 300.1], [20.97, 300.1], [20.98, 333.4], [20.99, 216.7], 
[21.0, 200.0], [21.01, 233.4], [21.02, 216.7], [21.03, 200.0], [21.04, 216.7], [21.05, 166.7], [21.06, 150.0], [21.07, 100.0], [21.08, 266.7], [21.09, 100.0], [21.1, 116.7], [21.11, 200.0], [21.12, 166.7], [21.13, 166.7], [21.14, 166.7], [21.15, 150.0], [21.16, 150.0], [21.17, 200.0], [21.18, 166.7], [21.19, 100.0], [21.2, 50.0], [21.21, 200.0], [21.22, 100.0], [21.23, 133.4], [21.24, 100.0], [21.25, 133.4], [21.26, 133.4], [21.27, 150.0], [21.28, 150.0], [21.29, 166.7], [21.3, 166.7], [21.31, 150.0], [21.32, 150.0], [21.33, 333.4], [21.34, 116.7], [21.35, 183.4], [21.36, 66.7], [21.37, 83.3], [21.38, 166.7], [21.39, 200.0], [21.4, 133.4], [21.41, 183.4], [21.42, 116.7], [21.43, 166.7], [21.44, 300.1], [21.45, 166.7], [21.46, 250.1], [21.47, 116.7], [21.48, 100.0], [21.49, 333.4], [21.5, 166.7], [21.51, 216.7], [21.52, 66.7], [21.53, 133.4], [21.54, 133.4], [21.55, 166.7], [21.56, 233.4], [21.57, 166.7], [21.58, 83.3], [21.59, 250.1], [21.6, 200.0], [21.61, 133.4], [21.62, 116.7], [21.63, 166.7], [21.64, 166.7], [21.65, 200.0], [21.66, 100.0], [21.67, 166.7], [21.68, 300.1], [21.69, 183.4], [21.7, 116.7], [21.71, 266.7], [21.72, 250.1], [21.73, 200.0], [21.74, 116.7], [21.75, 233.4], [21.76, 183.4], [21.77, 100.0], [21.78, 66.7], [21.79, 116.7], [21.8, 166.7], [21.81, 200.0], [21.82, 100.0], [21.83, 200.0], [21.84, 150.0], [21.85, 166.7], [21.86, 233.4], [21.87, 166.7], [21.88, 150.0], [21.89, 133.4], [21.9, 200.0], [21.91, 166.7], [21.92, 133.4], [21.93, 100.0], [21.94, 350.1], [21.95, 183.4], [21.96, 200.0], [21.97, 150.0], [21.98, 183.4], [21.99, 216.7], [22.0, 283.4], [22.01, 266.7], [22.02, 150.0], [22.03, 150.0], [22.04, 366.8], [22.05, 350.1], [22.06, 266.7], [22.07, 450.2], [22.08, 366.8], [22.09, 366.8], [22.1, 366.8], [22.11, 466.9], [22.12, 300.1], [22.13, 533.6], [22.14, 667.1], [22.15, 533.6], [22.16, 567.0], [22.17, 483.6], [22.18, 633.7], [22.19, 350.1], [22.2, 383.5], [22.21, 366.8], [22.22, 400.2], [22.23, 466.9], [22.24, 583.7], [22.25, 450.2], 
[22.26, 466.9], [22.27, 300.1], [22.28, 266.7], [22.29, 366.8], [22.3, 250.1], [22.31, 250.1], [22.32, 250.1], [22.33, 233.4], [22.34, 100.0], [22.35, 100.0], [22.36, 200.0], [22.37, 233.4], [22.38, 200.0], [22.39, 183.4], [22.4, 150.0], [22.41, 216.7], [22.42, 200.0], [22.43, 216.7], [22.44, 166.7], [22.45, 200.0], [22.46, 116.7], [22.47, 200.0], [22.48, 133.4], [22.49, 200.0], [22.5, 200.0], [22.51, 133.4], [22.52, 266.7], [22.53, 150.0], [22.54, 150.0], [22.55, 50.0], [22.56, 200.0], [22.57, 150.0], [22.58, 133.4], [22.59, 150.0], [22.6, 100.0], [22.61, 150.0], [22.62, 200.0], [22.63, 133.4], [22.64, 133.4], [22.65, 133.4], [22.66, 150.0], [22.67, 233.4], [22.68, 133.4], [22.69, 133.4], [22.7, 300.1], [22.71, 216.7], [22.72, 83.3], [22.73, 116.7], [22.74, 250.1], [22.75, 133.4], [22.76, 100.0], [22.77, 133.4], [22.78, 150.0], [22.79, 116.7], [22.8, 200.0], [22.81, 200.0], [22.82, 133.4], [22.83, 166.7], [22.84, 166.7], [22.85, 100.0], [22.86, 250.1], [22.87, 166.7], [22.88, 266.7], [22.89, 100.0], [22.9, 200.0], [22.91, 116.7], [22.92, 133.4], [22.93, 133.4], [22.94, 83.3], [22.95, 283.4], [22.96, 216.7], [22.97, 183.4], [22.98, 50.0], [22.99, 66.7], [23.0, 183.4], [23.01, 116.7], [23.02, 133.4], [23.03, 116.7], [23.04, 116.7], [23.05, 166.7], [23.06, 83.3], [23.07, 133.4], [23.08, 116.7], [23.09, 133.4], [23.1, 100.0], [23.11, 133.4], [23.12, 150.0], [23.13, 50.0], [23.14, 183.4], [23.15, 133.4], [23.16, 133.4], [23.17, 66.7], [23.18, 116.7], [23.19, 83.3], [23.2, 83.3], [23.21, 133.4], [23.22, 166.7], [23.23, 100.0], [23.24, 100.0], [23.25, 166.7], [23.26, 133.4], [23.27, 216.7], [23.28, 133.4], [23.29, 233.4], [23.3, 33.3], [23.31, 216.7], [23.32, 250.1], [23.33, 150.0], [23.34, 66.7], [23.35, 183.4], [23.36, 116.7], [23.37, 166.7], [23.38, 200.0], [23.39, 183.4], [23.4, 133.4], [23.41, 83.3], [23.42, 183.4], [23.43, 150.0], [23.44, 116.7], [23.45, 133.4], [23.46, 150.0], [23.47, 66.7], [23.48, 150.0], [23.49, 166.7], [23.5, 133.4], [23.51, 150.0], [23.52, 
233.4], [23.53, 183.4], [23.54, 166.7], [23.55, 216.7], [23.56, 83.3], [23.57, 216.7], [23.58, 233.4], [23.59, 100.0], [23.6, 116.7], [23.61, 83.3], [23.62, 166.7], [23.63, 83.3], [23.64, 133.4], [23.65, 83.3], [23.66, 83.3], [23.67, 150.0], [23.68, 116.7], [23.69, 33.3], [23.7, 233.4], [23.71, 100.0], [23.72, 133.4], [23.73, 133.4], [23.74, 100.0], [23.75, 300.1], [23.76, 233.4], [23.77, 133.4], [23.78, 133.4], [23.79, 150.0], [23.8, 133.4], [23.81, 83.3], [23.82, 166.7], [23.83, 83.3], [23.84, 100.0], [23.85, 100.0], [23.86, 116.7], [23.87, 150.0], [23.88, 100.0], [23.89, 133.4], [23.9, 100.0], [23.91, 216.7], [23.92, 166.7], [23.93, 116.7], [23.94, 33.3], [23.95, 116.7], [23.96, 166.7], [23.97, 116.7], [23.98, 66.7], [23.99, 83.3], [24.0, 100.0], [24.01, 100.0], [24.02, 150.0], [24.03, 116.7], [24.04, 166.7], [24.05, 216.7], [24.06, 200.0], [24.07, 183.4], [24.08, 166.7], [24.09, 166.7], [24.1, 100.0], [24.11, 300.1], [24.12, 183.4], [24.13, 266.7], [24.14, 250.1], [24.15, 283.4], [24.16, 250.1], [24.17, 350.1], [24.18, 350.1], [24.19, 133.4], [24.2, 333.5], [24.21, 300.1], [24.22, 300.1], [24.23, 400.2], [24.24, 166.7], [24.25, 233.4], [24.26, 216.7], [24.27, 166.7], [24.28, 200.0], [24.29, 200.0], [24.3, 133.4], [24.31, 216.7], [24.32, 333.5], [24.33, 100.0], [24.34, 150.0], [24.35, 50.0], [24.36, 150.0], [24.37, 183.4], [24.38, 100.0], [24.39, 116.7], [24.4, 66.7], [24.41, 183.4], [24.42, 83.3], [24.43, 133.4], [24.44, 183.4], [24.45, 100.0], [24.46, 100.0], [24.47, 166.7], [24.48, 216.7], [24.49, 116.7], [24.5, 233.4], [24.51, 116.7], [24.52, 100.0], [24.53, 233.4], [24.54, 116.7], [24.55, 166.7], [24.56, 200.0], [24.57, 66.7], [24.58, 216.7], [24.59, 150.0], [24.6, 133.4], [24.61, 83.3], [24.62, 83.3], [24.63, 100.0], [24.64, 83.3], [24.65, 150.0], [24.66, 216.7], [24.67, 50.0], [24.68, 216.7], [24.69, 166.7], [24.7, 100.0], [24.71, 133.4], [24.72, 50.0], [24.73, 166.7], [24.74, 50.0], [24.75, 166.7], [24.76, 100.0], [24.77, 83.3], [24.78, 83.3], [24.79, 
233.4], [24.8, 100.0], [24.81, 66.7], [24.82, 83.3], [24.83, 150.0], [24.84, 116.7], [24.85, 166.7], [24.86, 183.4], [24.87, 50.0], [24.88, 50.0], [24.89, 150.0], [24.9, 200.0], [24.91, 66.7], [24.92, 116.7], [24.93, 200.0], [24.94, 50.0], [24.95, 183.4], [24.96, 183.4], [24.97, 133.4], [24.98, 150.0], [24.99, 116.7], [25.0, 100.0], [25.01, 116.7], [25.02, 150.0], [25.03, 116.7], [25.04, 150.0], [25.05, 233.4], [25.06, 133.4], [25.07, 133.4], [25.08, 150.0], [25.09, 66.7], [25.1, 50.0], [25.11, 116.7], [25.12, 133.4], [25.13, 166.7], [25.14, 166.7], [25.15, 116.7], [25.16, 150.0], [25.17, 200.0], [25.18, 200.0], [25.19, 166.7], [25.2, 350.1], [25.21, 150.0], [25.22, 250.1], [25.23, 366.8], [25.24, 316.8], [25.25, 266.7], [25.26, 350.1], [25.27, 316.8], [25.28, 216.7], [25.29, 333.4], [25.3, 266.7], [25.31, 283.4], [25.32, 250.1], [25.33, 283.4], [25.34, 333.4], [25.35, 466.9], [25.36, 316.8], [25.37, 383.5], [25.38, 266.7], [25.39, 283.4], [25.4, 283.4], [25.41, 366.8], [25.42, 266.7], [25.43, 200.0], [25.44, 466.9], [25.45, 250.1], [25.46, 233.4], [25.47, 366.8], [25.48, 283.4], [25.49, 133.4], [25.5, 316.8], [25.51, 300.1], [25.52, 316.8], [25.53, 383.5], [25.54, 433.5], [25.55, 533.6], [25.56, 450.2], [25.57, 650.4], [25.58, 583.7], [25.59, 700.5], [25.6, 984.4], [25.61, 950.9], [25.62, 700.5], [25.63, 1017.7], [25.64, 1017.7], [25.65, 1084.6], [25.66, 1251.6], [25.67, 1168.0], [25.68, 1301.8], [25.69, 1084.5], [25.7, 1084.5], [25.71, 967.6], [25.72, 1134.6], [25.73, 950.9], [25.74, 767.3], [25.75, 617.1], [25.76, 951.0], [25.77, 617.0], [25.78, 617.1], [25.79, 633.7], [25.8, 617.1], [25.81, 400.2], [25.82, 450.2], [25.83, 350.1], [25.84, 200.0], [25.85, 233.4], [25.86, 300.1], [25.87, 200.0], [25.88, 216.7], [25.89, 233.4], [25.9, 283.4], [25.91, 233.4], [25.92, 133.4], [25.93, 150.0], [25.94, 233.4], [25.95, 150.0], [25.96, 66.7], [25.97, 283.4], [25.98, 166.7], [25.99, 116.7], [26.0, 50.0], [26.01, 166.7], [26.02, 216.7], [26.03, 133.4], [26.04, 166.7], 
[26.05, 133.4], [26.06, 100.0], [26.07, 33.3], [26.08, 250.1], [26.09, 183.4], [26.1, 166.7], [26.11, 183.4], [26.12, 100.0], [26.13, 100.0], [26.14, 116.7], [26.15, 183.4], [26.16, 100.0], [26.17, 100.0], [26.18, 150.0], [26.19, 133.4], [26.2, 133.4], [26.21, 166.7], [26.22, 133.4], [26.23, 116.7], [26.24, 100.0], [26.25, 83.3], [26.26, 83.3], [26.27, 100.0], [26.28, 150.0], [26.29, 66.7], [26.3, 100.0], [26.31, 183.4], [26.32, 166.7], [26.33, 200.0], [26.34, 116.7], [26.35, 150.0], [26.36, 66.7], [26.37, 216.7], [26.38, 166.7], [26.39, 166.7], [26.4, 83.3], [26.41, 116.7], [26.42, 200.0], [26.43, 116.7], [26.44, 133.4], [26.45, 133.4], [26.46, 50.0], [26.47, 116.7], [26.48, 150.0], [26.49, 100.0], [26.5, 100.0], [26.51, 183.4], [26.52, 166.7], [26.53, 116.7], [26.54, 166.7], [26.55, 133.4], [26.56, 33.3], [26.57, 133.4], [26.58, 150.0], [26.59, 133.4], [26.6, 133.4], [26.61, 116.7], [26.62, 116.7], [26.63, 116.7], [26.64, 166.7], [26.65, 100.0], [26.66, 133.4], [26.67, 83.3], [26.68, 150.0], [26.69, 166.7], [26.7, 266.7], [26.71, 166.7], [26.72, 116.7], [26.73, 116.7], [26.74, 83.3], [26.75, 183.4], [26.76, 83.3], [26.77, 100.0], [26.78, 100.0], [26.79, 116.7], [26.8, 83.3], [26.81, 116.7], [26.82, 66.7], [26.83, 183.4], [26.84, 83.3], [26.85, 100.0], [26.86, 116.7], [26.87, 116.7], [26.88, 83.3], [26.89, 116.7], [26.9, 116.7], [26.91, 150.0], [26.92, 66.7], [26.93, 116.7], [26.94, 150.0], [26.95, 83.3], [26.96, 66.7], [26.97, 133.4], [26.98, 83.3], [26.99, 133.4], [27.0, 183.4], [27.01, 133.4], [27.02, 150.0], [27.03, 133.4], [27.04, 133.4], [27.05, 33.3], [27.06, 50.0], [27.07, 133.4], [27.08, 166.7], [27.09, 150.0], [27.1, 83.3], [27.11, 133.4], [27.12, 150.0], [27.13, 133.4], [27.14, 33.3], [27.15, 183.4], [27.16, 116.7], [27.17, 100.0], [27.18, 133.4], [27.19, 166.7], [27.2, 116.7], [27.21, 100.0], [27.22, 83.3], [27.23, 200.0], [27.24, 83.3], [27.25, 183.4], [27.26, 50.0], [27.27, 83.3], [27.28, 116.7], [27.29, 116.7], [27.3, 100.0], [27.31, 116.7], [27.32, 
116.7], [27.33, 100.0], [27.34, 83.3], [27.35, 166.7], [27.36, 183.4], [27.37, 83.3], [27.38, 166.7], [27.39, 150.0], [27.4, 116.7], [27.41, 133.4], [27.42, 83.3], [27.43, 100.0], [27.44, 150.0], [27.45, 150.0], [27.46, 83.3], [27.47, 83.3], [27.48, 83.3], [27.49, 50.0], [27.5, 166.7], [27.51, 200.0], [27.52, 116.7], [27.53, 200.0], [27.54, 100.0], [27.55, 83.3], [27.56, 116.7], [27.57, 183.4], [27.58, 100.0], [27.59, 100.0], [27.6, 133.4], [27.61, 66.7], [27.62, 166.7], [27.63, 150.0], [27.64, 116.7], [27.65, 166.7], [27.66, 66.7], [27.67, 133.4], [27.68, 100.0], [27.69, 33.3], [27.7, 200.0], [27.71, 116.7], [27.72, 183.4], [27.73, 183.4], [27.74, 66.7], [27.75, 233.4], [27.76, 116.7], [27.77, 116.7], [27.78, 83.3], [27.79, 66.7], [27.8, 133.4], [27.81, 150.0], [27.82, 83.3], [27.83, 133.4], [27.84, 133.4], [27.85, 100.0], [27.86, 100.0], [27.87, 183.4], [27.88, 116.7], [27.89, 233.4], [27.9, 133.4], [27.91, 66.7], [27.92, 133.4], [27.93, 83.3], [27.94, 100.0], [27.95, 133.4], [27.96, 133.4], [27.97, 116.7], [27.98, 166.7], [27.99, 266.7], [28.0, 150.0], [28.01, 233.4], [28.02, 250.1], [28.03, 266.7], [28.04, 250.1], [28.05, 100.0], [28.06, 300.1], [28.07, 183.4], [28.08, 333.5], [28.09, 316.8], [28.1, 383.5], [28.11, 316.8], [28.12, 300.1], [28.13, 316.8], [28.14, 166.7], [28.15, 250.1], [28.16, 333.4], [28.17, 316.8], [28.18, 300.1], [28.19, 200.0], [28.2, 266.7], [28.21, 200.0], [28.22, 200.0], [28.23, 166.7], [28.24, 183.4], [28.25, 116.7], [28.26, 183.4], [28.27, 133.4], [28.28, 183.4], [28.29, 133.4], [28.3, 100.0], [28.31, 166.7], [28.32, 116.7], [28.33, 183.4], [28.34, 100.0], [28.35, 116.7], [28.36, 166.7], [28.37, 166.7], [28.38, 183.4], [28.39, 150.0], [28.4, 166.7], [28.41, 183.4], [28.42, 116.7], [28.43, 83.3], [28.44, 116.7], [28.45, 183.4], [28.46, 133.4], [28.47, 133.4], [28.48, 166.7], [28.49, 200.0], [28.5, 200.0], [28.51, 183.4], [28.52, 116.7], [28.53, 133.4], [28.54, 150.0], [28.55, 166.7], [28.56, 133.4], [28.57, 83.3], [28.58, 183.4], 
[28.59, 100.0], [28.6, 116.7], [28.61, 83.3], [28.62, 83.3], [28.63, 100.0], [28.64, 100.0], [28.65, 100.0], [28.66, 66.7], [28.67, 83.3], [28.68, 100.0], [28.69, 66.7], [28.7, 150.0], [28.71, 100.0], [28.72, 66.7], [28.73, 133.4], [28.74, 66.7], [28.75, 66.7], [28.76, 116.7], [28.77, 133.4], [28.78, 66.7], [28.79, 66.7], [28.8, 116.7], [28.81, 116.7], [28.82, 100.0], [28.83, 50.0], [28.84, 133.4], [28.85, 50.0], [28.86, 133.4], [28.87, 100.0], [28.88, 150.0], [28.89, 150.0], [28.9, 33.3], [28.91, 33.3], [28.92, 66.7], [28.93, 133.4], [28.94, 33.3], [28.95, 116.7], [28.96, 83.3], [28.97, 66.7], [28.98, 83.3], [28.99, 83.3], [29.0, 83.3], [29.01, 83.3], [29.02, 50.0], [29.03, 100.0], [29.04, 100.0], [29.05, 150.0], [29.06, 83.3], [29.07, 66.7], [29.08, 100.0], [29.09, 83.3], [29.1, 66.7], [29.11, 100.0], [29.12, 33.3], [29.13, 100.0], [29.14, 66.7], [29.15, 50.0], [29.16, 100.0], [29.17, 166.7], [29.18, 83.3], [29.19, 116.7], [29.2, 100.0], [29.21, 166.7], [29.22, 83.3], [29.23, 150.0], [29.24, 116.7], [29.25, 150.0], [29.26, 133.4], [29.27, 83.3], [29.28, 150.0], [29.29, 83.3], [29.3, 116.7], [29.31, 150.0], [29.32, 100.0], [29.33, 83.3], [29.34, 183.4], [29.35, 83.3], [29.36, 133.4], [29.37, 133.4], [29.38, 116.7], [29.39, 100.0], [29.4, 183.4], [29.41, 150.0], [29.42, 100.0], [29.43, 83.3], [29.44, 133.4], [29.45, 150.0], [29.46, 116.7], [29.47, 83.3], [29.48, 16.7], [29.49, 150.0], [29.5, 50.0], [29.51, 66.7], [29.52, 100.0], [29.53, 83.3], [29.54, 116.7], [29.55, 133.4], [29.56, 100.0], [29.57, 183.4], [29.58, 166.7], [29.59, 133.4], [29.6, 133.4], [29.61, 200.0], [29.62, 200.0], [29.63, 116.7], [29.64, 200.0], [29.65, 333.4], [29.66, 283.4], [29.67, 350.1], [29.68, 350.1], [29.69, 250.1], [29.7, 233.4], [29.71, 466.9], [29.72, 400.2], [29.73, 466.9], [29.74, 300.1], [29.75, 283.4], [29.76, 266.7], [29.77, 366.8], [29.78, 366.8], [29.79, 350.1], [29.8, 133.4], [29.81, 300.1], [29.82, 350.1], [29.83, 266.7], [29.84, 266.7], [29.85, 166.7], [29.86, 133.4], 
[29.87, 216.7], [29.88, 100.0], [29.89, 183.4], [29.9, 116.7], [29.91, 133.4], [29.92, 66.7], [29.93, 133.4], [29.94, 166.7], [29.95, 116.7], [29.96, 100.0], [29.97, 150.0], [29.98, 116.7], [29.99, 116.7], [30.0, 100.0]]\n"
]
],
[
[
"Write the csv file for easy reading.",
"_____no_output_____"
]
],
[
[
"pxrd_data.write_csv()",
"_____no_output_____"
]
],
[
[
"Use pandas to inspect the csv file, making sure it was written correctly.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nuio66 = pd.read_csv(\"output_UiO_66.csv\")\nprint(uio66)",
" Bundle 1\n0 Partition 1\n1 Operator N.U. X-ray Dif. Fac\n2 Gonio ATX-G\n3 Memo J.B. Cohen X-ray Diffraction Facility\n4 IncidentMonochro Slit collimation\n5 CounterMonochro Receiving slit\n6 Attachment XY-STAGE\n7 Filter NaN\n8 SlitName0 XRR Compression\n9 SlitName1 Reciprocal SM Slit Collimation\n10 Counter SC-70\n11 KAlpha1 1.54056\n12 KAlpha2 1.54439\n13 KBata 0\n14 Target Cu\n15 KV 50\n16 mA 240\n17 AxisName 2Theta\n18 MeasurementMethod Continuation\n19 Start 2\n20 Finish 30\n21 Speed 10\n22 FixedTime NaN\n23 Width 0.01\n24 FullScale 11800\n25 Comment NaN\n26 SampleName NaN\n27 Xunit deg.\n28 Yunit CPS\n29 2.0 600.4\n... ... ...\n2800 29.71 466.9\n2801 29.72 400.2\n2802 29.73 466.9\n2803 29.74 300.1\n2804 29.75 283.4\n2805 29.76 266.7\n2806 29.77 366.8\n2807 29.78 366.8\n2808 29.79 350.1\n2809 29.8 133.4\n2810 29.81 300.1\n2811 29.82 350.1\n2812 29.83 266.7\n2813 29.84 266.7\n2814 29.85 166.7\n2815 29.86 133.4\n2816 29.87 216.7\n2817 29.88 100.0\n2818 29.89 183.4\n2819 29.9 116.7\n2820 29.91 133.4\n2821 29.92 66.7\n2822 29.93 133.4\n2823 29.94 166.7\n2824 29.95 116.7\n2825 29.96 100.0\n2826 29.97 150.0\n2827 29.98 116.7\n2828 29.99 116.7\n2829 30.0 100.0\n\n[2830 rows x 2 columns]\n"
]
],
[
[
"Plot the PXRD data using bokeh.",
"_____no_output_____"
]
],
[
[
"from bokeh.plotting import figure, output_notebook, show\n\noutput_notebook()\np = figure(plot_width=400, plot_height=400)\ntwotheta = [i[0] for i in pxrd_data.data][30::]\nintensity = [i[1] for i in pxrd_data.data][30::]\np.line(twotheta, intensity, line_width=0.75)\np.xaxis.axis_label = \"2\\u03b8\"\np.yaxis.axis_label = \"Intensity / a.u.\"\nshow(p)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4ab1e902423da988c2fa2dcafa0847c42aa7ef2c
| 114,012 |
ipynb
|
Jupyter Notebook
|
posts/probeinterface-paper-figures.ipynb
|
bendichter/SpikeInterface.github.io
|
f241788dfb9397c837e1942768cad4a01eb2942d
|
[
"MIT"
] | 2 |
2020-08-26T16:29:38.000Z
|
2021-03-03T17:49:50.000Z
|
posts/probeinterface-paper-figures.ipynb
|
bendichter/SpikeInterface.github.io
|
f241788dfb9397c837e1942768cad4a01eb2942d
|
[
"MIT"
] | 2 |
2020-09-29T16:47:17.000Z
|
2021-12-15T19:59:46.000Z
|
posts/probeinterface-paper-figures.ipynb
|
bendichter/SpikeInterface.github.io
|
f241788dfb9397c837e1942768cad4a01eb2942d
|
[
"MIT"
] | 2 |
2021-03-03T17:49:50.000Z
|
2021-12-15T15:51:18.000Z
| 212.708955 | 38,192 | 0.882512 |
[
[
[
"# Figure for probeinterface paper\n\nHere a notebook to reproduce figures for paper\n\n**ProbeInterface: a unified framework for probe handling in extracellular electrophysiology**\n",
"_____no_output_____"
]
],
[
[
"from probeinterface import plotting, io, Probe, ProbeGroup, get_probe\nfrom probeinterface.plotting import plot_probe_group\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n%matplotlib inline",
"_____no_output_____"
],
[
"# create contact positions\npositions = np.zeros((32, 2))\npositions[:, 0] = [0] * 8 + [50] * 8 + [200] * 8 + [250] * 8\npositions[:, 1] = list(range(0, 400, 50)) * 4\n# create an empty probe object with coordinates in um\nprobe0 = Probe(ndim=2, si_units='um')\n# set contacts\nprobe0.set_contacts(positions=positions, shapes='circle',shape_params={'radius': 10})\n# create probe shape (optional)\npolygon = [(-20, 480), (-20, -30), (20, -110), (70, -30), (70, 450),\n (180, 450), (180, -30), (220, -110), (270, -30), (270, 480)]\nprobe0.set_planar_contour(polygon)",
"_____no_output_____"
],
[
"# duplicate the probe and move it horizontally\nprobe1 = probe0.copy()\n# move probe by 600 um in x direction\nprobe1.move([600, 0])\n\n# Create a probegroup\nprobegroup = ProbeGroup()\nprobegroup.add_probe(probe0)\nprobegroup.add_probe(probe1)",
"_____no_output_____"
],
[
"fig2, ax2 = plt.subplots(figsize=(10,7))\nplot_probe_group(probegroup, ax=ax2)",
"_____no_output_____"
],
[
"fig2.savefig(\"fig2.pdf\")",
"_____no_output_____"
],
[
"probe0 = get_probe('cambridgeneurotech', 'ASSY-156-P-1')\nprobe1 = get_probe('neuronexus', 'A1x32-Poly3-10mm-50-177')\nprobe1.move([1000, -100])\n\nprobegroup = ProbeGroup()\nprobegroup.add_probe(probe0)\nprobegroup.add_probe(probe1)\n\nfig3, ax3 = plt.subplots(figsize=(10,7))\nplot_probe_group(probegroup, ax=ax3)",
"_____no_output_____"
],
[
"fig3.savefig(\"fig3.pdf\")",
"_____no_output_____"
],
[
"manufacturer = 'cambridgeneurotech'\nprobe_name = 'ASSY-156-P-1'\n\nprobe = get_probe(manufacturer, probe_name)\nprint(probe)",
"cambridgeneurotech - ASSY-156-P-1 - 64ch - 4shanks\n"
],
[
"probe.wiring_to_device('ASSY-156>RHD2164')\n\nfig4, ax4 = plt.subplots(figsize=(12,7))\nplotting.plot_probe(probe, with_device_index=True, with_contact_id=True, title=False, ax=ax4)\nax4.set_xlim(-100, 400)\nax4.set_ylim(-150, 100)",
"_____no_output_____"
],
[
"fig4.savefig(\"fig4.pdf\")",
"_____no_output_____"
],
[
"probe.device_channel_indices",
"_____no_output_____"
],
[
"probe.to_dataframe(complete=True)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ab1f5d2d65a13ca375cad1945d5db7ed4985461
| 72,596 |
ipynb
|
Jupyter Notebook
|
RFM - analysis.ipynb
|
tsyrdugar/vk_airflow_report_ad
|
f14c262425b273cd3b25adc9004f81a357b8af69
|
[
"Unlicense"
] | null | null | null |
RFM - analysis.ipynb
|
tsyrdugar/vk_airflow_report_ad
|
f14c262425b273cd3b25adc9004f81a357b8af69
|
[
"Unlicense"
] | null | null | null |
RFM - analysis.ipynb
|
tsyrdugar/vk_airflow_report_ad
|
f14c262425b273cd3b25adc9004f81a357b8af69
|
[
"Unlicense"
] | null | null | null | 36.352529 | 629 | 0.44997 |
[
[
[
"import pandas as pd\n\n\nimport numpy as np\n\n\n# Matplotlib forms basis for visualization in Python\nimport matplotlib.pyplot as plt\n\n# We will use the Seaborn library\nimport seaborn as sns\nsns.set()\n\n# Graphics in SVG format are more sharp and legible\nget_ipython().run_line_magic('config', \"InlineBackend.figure_format = 'svg'\")\n\n# Increase the default plot size and set the color scheme\nplt.rcParams['figure.figsize'] = (8, 5)\nplt.rcParams['image.cmap'] = 'viridis'",
"_____no_output_____"
],
[
"df = pd.read_csv('https://stepik.org/media/attachments/lesson/413464/RFM_ht_data.csv' );",
"/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/IPython/core/interactiveshell.py:3165: DtypeWarning: Columns (1) have mixed types.Specify dtype option on import or set low_memory=False.\n has_raised = await self.run_ast_nodes(code_ast.body, cell_name,\n"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.dtypes",
"_____no_output_____"
],
[
"df['InvoiceNo'].unique()",
"_____no_output_____"
],
[
"df['InvoiceDate'] = pd.to_datetime(df['InvoiceDate'])",
"_____no_output_____"
],
[
"df['CustomerCode'] = df['CustomerCode'].apply(str)\n\n",
"_____no_output_____"
],
[
"df.shape[0]",
"_____no_output_____"
],
[
"df['InvoiceDate'].describe()",
"<ipython-input-44-eb448c29fc46>:1: FutureWarning: Treating datetime data as categorical rather than numeric in `.describe` is deprecated and will be removed in a future version of pandas. Specify `datetime_is_numeric=True` to silence this warning and adopt the future behavior now.\n df['InvoiceDate'].describe()\n"
],
[
"df.head()",
"_____no_output_____"
],
[
"last_date = df['InvoiceDate'].max()",
"_____no_output_____"
],
[
"\nrfmTable = df.groupby('CustomerCode').agg({'InvoiceDate': lambda x: (last_date - x.max()).days, # Recency #Количество дней с последнего заказа\n 'InvoiceNo': lambda x: len(x), # Frequency #Количество заказов\n 'Amount': lambda x: x.sum()}) # Monetary Value #Общая сумма по всем заказам\n\nrfmTable['InvoiceDate'] = rfmTable['InvoiceDate'].astype(int)\nrfmTable.rename(columns={'InvoiceDate': 'recency', \n 'InvoiceNo': 'frequency', \n 'Amount': 'monetary_value'}, inplace=True)",
"_____no_output_____"
],
[
"rfmTable.head()",
"_____no_output_____"
],
[
"rfmTable.shape[0]",
"_____no_output_____"
],
[
"quantiles = rfmTable.quantile([0.25, 0.5, 0.75])",
"_____no_output_____"
],
[
"quantiles",
"_____no_output_____"
],
[
"\ndef RClass(value,parameter_name,quantiles_table):\n if value <= quantiles_table[parameter_name][0.25]:\n return 1\n elif value <= quantiles_table[parameter_name][0.50]:\n return 2\n elif value <= quantiles_table[parameter_name][0.75]: \n return 3\n else:\n return 4\n\n\ndef FMClass(value, parameter_name,quantiles_table):\n if value <= quantiles_table[parameter_name][0.25]:\n return 4\n elif value <= quantiles_table[parameter_name][0.50]:\n return 3\n elif value <= quantiles_table[parameter_name][0.75]: \n return 2\n else:\n return 1\n ",
"_____no_output_____"
],
[
"rfmTable.head()",
"_____no_output_____"
],
[
"rfmSegmentation = rfmTable",
"_____no_output_____"
],
[
"rfmSegmentation.dtypes",
"_____no_output_____"
],
[
"\nrfmSegmentation['R_Quartile'] = rfmSegmentation['recency'].apply(RClass, args=('recency',quantiles))\n\nrfmSegmentation['F_Quartile'] = rfmSegmentation['frequency'].apply(FMClass, args=('frequency',quantiles))\n\nrfmSegmentation['M_Quartile'] = rfmSegmentation['monetary_value'].apply(FMClass, args=('monetary_value',quantiles))\n\nrfmSegmentation['RFMClass'] = rfmSegmentation.R_Quartile.map(str) \\\n + rfmSegmentation.F_Quartile.map(str) \\\n + rfmSegmentation.M_Quartile.map(str)\n",
"_____no_output_____"
],
[
"rfmSegmentation.head()",
"_____no_output_____"
],
[
"pd.crosstab(index = rfmSegmentation.R_Quartile, columns = rfmSegmentation.F_Quartile)\n",
"_____no_output_____"
],
[
"rfm_table = rfmSegmentation.pivot_table(\n index='R_Quartile', \n columns='F_Quartile', \n values='monetary_value', \n aggfunc=np.median).applymap(int)\nsns.heatmap(rfm_table, cmap=\"YlGnBu\", annot=True, fmt=\".0f\", linewidths=4.15, annot_kws={\"size\": 10},yticklabels=4);",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ab1fabee1c3dcd29cacb7454d27f46431545994
| 208,688 |
ipynb
|
Jupyter Notebook
|
ada/final-exam-2017/RusuCosmin_sciper.ipynb
|
AlexanderChristian/private_courses
|
c80f3526af539e35f93b460f3909f669aaef573c
|
[
"MIT"
] | null | null | null |
ada/final-exam-2017/RusuCosmin_sciper.ipynb
|
AlexanderChristian/private_courses
|
c80f3526af539e35f93b460f3909f669aaef573c
|
[
"MIT"
] | 6 |
2020-03-04T20:52:39.000Z
|
2022-03-31T00:33:07.000Z
|
ada/final-exam-2017/RusuCosmin_sciper.ipynb
|
AlexanderChristian/private_courses
|
c80f3526af539e35f93b460f3909f669aaef573c
|
[
"MIT"
] | null | null | null | 51.70664 | 24,220 | 0.533155 |
[
[
[
"import re\nimport os\nimport pandas as pd\nimport numpy as np\nimport scipy as sp\nimport seaborn as sns\nfrom ipywidgets import interact\n\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.datasets import make_classification\n\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport json\n%matplotlib inline\n\nimport findspark\nfindspark.init()\n\nfrom pyspark.sql import *\nfrom pyspark.sql.functions import *\nfrom pyspark.sql.functions import min\n\nfrom pyspark.sql import SparkSession\nfrom pyspark import SparkContext\n\nfrom pandas.plotting import scatter_matrix\nfrom datetime import datetime, timedelta\n\nspark = SparkSession.builder.getOrCreate()\nsc = spark.sparkContext",
"_____no_output_____"
],
[
"p_df = pd.read_csv('pokemon.csv')\nc_df = pd.read_csv('combats.csv')\n\ndisplay(p_df.head(5))\ndisplay(c_df.head(5))",
"_____no_output_____"
],
[
"display(p_df.describe())\ndisplay(c_df.describe())",
"_____no_output_____"
],
[
"display(p_df['Class 1'].unique())\ndisplay(p_df['Class 2'].unique())",
"_____no_output_____"
],
[
"p_df.hist(column = 'Attack')\np_df.hist(column = 'Defense')",
"_____no_output_____"
],
[
"ax = p_df.hist(column='Sp. Atk', alpha = 0.5)\np_df.hist(column='Sp. Def', ax = ax, alpha = 0.5)\nplt.legend(['Sp. Atk', 'Sp. Def'])\nplt.title(\"Sp. Atk + Sp. Def\")",
"_____no_output_____"
],
[
"p_df.plot(kind = 'scatter', x = 'Sp. Atk', y = 'Sp. Def')",
"_____no_output_____"
],
[
"p_df['Attack/Defense'] = p_df['Attack'] / p_df['Defense']\ndisplay(p_df.sort_values(by=['Attack/Defense'], ascending = False)[:3])\nprint(\"list the names of the 3 Pokémon with highest attack-over-defense ratio:\\n\")\nprint(\"\\n\".join(p_df.sort_values(by=['Attack/Defense'], ascending = False)[:3]['Name'].tolist()))\ndisplay(p_df.sort_values(by=['Attack/Defense'], ascending = True)[:3])\nprint(\"list the names of the 3 Pokémon with lowest attack-over-defense ratio:\\n\")\nprint(\"\\n\".join(p_df.sort_values(by=['Attack/Defense'], ascending = True)[:3]['Name'].tolist()))\n",
"_____no_output_____"
],
[
"display(c_df.head(5))\n\nprint('list the names of the 10 Pokémon with the largest number of victories.\\n')\ntop_df = c_df.groupby('Winner').size().reset_index(name='counts').sort_values(by='counts', ascending = False)[:10]\nprint(\"\\n\".join(top_df.merge(p_df, left_on = 'Winner', right_on = 'pid')['Name'].tolist()))",
"_____no_output_____"
],
[
"grass_class = p_df[(p_df['Class 1'] == 'Grass') | (p_df['Class 2'] == 'Grass') &\n ~((p_df['Class 1'] != 'Rock') | (p_df['Class 2'] == 'Rock'))]\nrock_class = p_df[(p_df['Class 1'] == 'Rock') | (p_df['Class 2'] == 'Rock') &\n ~((p_df['Class 1'] != 'Grass') | (p_df['Class 2'] == 'Grass'))]\ndisplay(grass_class.head(5))\ndisplay(rock_class.head(5))\n\nf, (ax1, ax2) = plt.subplots(1, 2, sharey = True)\ngrass_class.boxplot(column = 'Attack', return_type='axes', ax = ax1)\nrock_class.boxplot(column = 'Attack', ax = ax2)",
"_____no_output_____"
],
[
"spark.sql(\"\"\"\n SELECT Pokemons.Winner, Pokemons.Name, COUNT(*) as TotalWins\n FROM Combats\n INNER JOIN Pokemons on Pokemons.pid = Combats.Winner\n GROUP BY Pokemnon.Winner, Pokemons.Name\n ORDER BY TotalWins DESC\n\"\"\")",
"_____no_output_____"
],
[
"X_ext = c_df.merge(p_df, left_on='First_pokemon', right_on='pid') \\\n .merge(p_df, left_on='Second_pokemon', right_on='pid', suffixes=('_x', '_y'))\nX = X_ext.drop(columns=['Winner', 'First_pokemon', 'Second_pokemon', 'pid_x', 'pid_y', 'Name_x', 'Name_y', 'Attack/Defense_x', 'Attack/Defense_y'])\n\ncategories = pd.unique(p_df[['Class 1', 'Class 2']].values.ravel('K'))[:-1]\n\nX['Class 1_x'] = pd.Categorical(X['Class 1_x'], categories=categories).codes\nX['Class 1_y'] = pd.Categorical(X['Class 1_y'], categories=categories).codes\n\nX['Class 2_x'] = pd.Categorical(X['Class 2_x'], categories=categories).codes\nX['Class 2_y'] = pd.Categorical(X['Class 2_y'], categories=categories).codes\n\ndisplay(X)\nY = X_ext['Winner'] == X_ext['First_pokemon']",
"_____no_output_____"
],
[
"from sklearn.ensemble import RandomForestClassifier\nfrom sklearn.datasets import make_classification",
"_____no_output_____"
],
[
"N = len(X)\nN",
"_____no_output_____"
],
[
"train_size = int(N * 0.9)\ntest_size = N - train_size\npermutation = np.random.permutation(N)\ntrain_set_index = permutation[:train_size]\ntest_set_index = permutation[train_size:]\n\nprint(train_set_index)\nprint(test_set_index)",
"[21675 20249 15069 ... 36014 19785 39193]\n[42619 7774 43972 ... 46959 47569 25391]\n"
],
[
"X_train = X.iloc[train_set_index]\nY_train = Y.iloc[train_set_index]\n\nX_test = X.iloc[test_set_index]\nY_test = Y.iloc[test_set_index]",
"_____no_output_____"
],
[
"n_estimators = [10, 25, 50, 100]\nmax_depths = [2, 4, 10]\n\ndef k_fold(X, Y, K):\n permutation = np.random.permutation(N)\n for k in range(K):\n X_test = X.iloc[permutation[k * test_size : (k + 1) * test_size]]\n Y_test = Y.iloc[permutation[k * test_size : (k + 1) * test_size]]\n \n X_train = X.iloc[permutation[:k*test_size].tolist() + permutation[(k + 1)*test_size:].tolist()]\n Y_train = Y.iloc[permutation[:k*test_size].tolist() + permutation[(k + 1)*test_size:].tolist()]\n yield(X_train, Y_train, X_test, Y_test)\n\nbest_acc = 0\nbest_n_est = 0\nbest_max_depth = 0\nfor n_estimator in n_estimators:\n for max_depth in max_depths:\n clf = RandomForestClassifier(n_estimators=n_estimator, max_depth=max_depth, random_state=0)\n accuracies = []\n for (X_train, Y_train, X_test, Y_test) in k_fold(X, Y, 5):\n clf.fit(X_train, Y_train)\n accuracies.append((clf.predict(X_test) == Y_test).sum() / test_size)\n \n accuracy = np.mean(accuracies)\n print(n_estimator, max_depth, accuracy)\n if accuracy > best_acc:\n best_acc = accuracy\n best_n_est = n_estimator\n best_max_depth = max_depth\n \nprint('Best accuracy: ', best_acc)\nprint('Best number of estimators: ', best_n_est)\nprint('Best max depth: ', best_max_depth)\n ",
"10 2 0.7933200000000001\n10 4 0.85808\n10 10 0.9268000000000001\n25 2 0.8106\n25 4 0.857\n25 10 0.9296399999999998\n50 2 0.80632\n50 4 0.86784\n50 10 0.93212\n100 2 0.80808\n100 4 0.87744\n100 10 0.93596\nBest accuracy: 0.93596\nBest number of estimators: 100\nBest max depth: 10\n"
],
[
"forest = RandomForestClassifier(n_estimators=best_n_est, max_depth=best_max_depth, random_state=0)\nforest.fit(X_train, Y_train)",
"_____no_output_____"
],
[
"importances = forest.feature_importances_\nstd = np.std([tree.feature_importances_ for tree in forest.estimators_],\n axis=0)\nindices = np.argsort(importances)[::-1]\n\n# Print the feature ranking\nprint(\"Feature ranking:\")\n\nfor f in range(X.shape[1]):\n print(\"%d. feature %d (%s) (%f)\" % (f + 1, indices[f], X.columns[indices[f]], importances[indices[f]]))\n\n# Plot the feature importances of the forest\nplt.figure()\nplt.title(\"Feature importances\")\nplt.bar(range(X.shape[1]), importances[indices],\n color=\"r\", yerr=std[indices], align=\"center\")\nplt.xticks(range(X.shape[1]), indices)\nplt.xlim([-1, X.shape[1]])\nplt.show()",
"Feature ranking:\n1. feature 7 (Speed_x) (0.367536)\n2. feature 16 (Speed_y) (0.348282)\n3. feature 12 (Attack_y) (0.041669)\n4. feature 3 (Attack_x) (0.038579)\n5. feature 5 (Sp. Atk_x) (0.029665)\n6. feature 14 (Sp. Atk_y) (0.026784)\n7. feature 2 (HP_x) (0.020175)\n8. feature 11 (HP_y) (0.017509)\n9. feature 6 (Sp. Def_x) (0.017483)\n10. feature 15 (Sp. Def_y) (0.017186)\n11. feature 4 (Defense_x) (0.013039)\n12. feature 13 (Defense_y) (0.012665)\n13. feature 10 (Class 2_y) (0.011253)\n14. feature 1 (Class 2_x) (0.010887)\n15. feature 9 (Class 1_y) (0.008595)\n16. feature 0 (Class 1_x) (0.007540)\n17. feature 8 (Legendary_x) (0.006054)\n18. feature 17 (Legendary_y) (0.005099)\n"
]
],
[
[
"(5 points) Compute the winning ratio (number of wins divided by number of battles) for all Pokémon. Show the 10 Pokémon with the highest ratio and describe what they have in common with respect to their features. Discuss your results about feature importance from question 2.7 (regarding feature importance) in this context.l",
"_____no_output_____"
]
],
[
[
"top_df = c_df.groupby('Winner').size().reset_index(name='WinCount').sort_values(by='WinCount', ascending = False)\n\nfirst_df = c_df.groupby('First_pokemon').size().reset_index(name='Battles').sort_values(by='Battles', ascending = False)\nsecond_df = c_df.groupby('Second_pokemon').size().reset_index(name='Battles').sort_values(by='Battles', ascending = False)\nmerged = first_df.merge(second_df, left_on = 'First_pokemon', right_on='Second_pokemon')\n\nmerged['Battles'] = merged['Battles_x'] + merged['Battles_y']\nmerged = merged.drop(columns = ['Second_pokemon', 'Battles_x', \"Battles_y\"])\n\np_df_ext = p_df.merge(top_df, left_on='pid', right_on='Winner')\np_df_ext = p_df_ext.merge(merged, left_on='pid', right_on='First_pokemon')\np_df_ext = p_df_ext.drop(columns = ['First_pokemon', 'Winner'])\n\np_df_ext[\"WinninRatio\"] = p_df_ext['WinCount'] / p_df_ext['Battles']\n\ndisplay(p_df_ext.head(5))\n\n",
"_____no_output_____"
],
[
"p_df_ext.sort_values(by = 'WinninRatio', ascending = False)[:10]",
"_____no_output_____"
],
[
"p_df_ext.describe()",
"_____no_output_____"
],
[
"wins = np.zeros(shape = (800, 800))\n\nfor row in c_df.iterrows():\n if row[1]['First_pokemon'] == row[1]['Winner']:\n wins[row[1]['First_pokemon'] - 1][row[1]['Second_pokemon'] - 1] += 1\n else:\n wins[row[1]['Second_pokemon'] - 1][row[1]['First_pokemon'] - 1] += 1",
"_____no_output_____"
],
[
"G = np.zeros(shape = (800, 800))\n\nfor i in range(800):\n for j in range(800):\n if wins[i][j] > wins[j][i]:\n G[i][j] = 1\n elif wins[i][j] > wins[j][i]:\n G[j][i] = 1\n\nA = G + (G @ G)",
"_____no_output_____"
],
[
"scores = A.sum(axis = 1)\np_df[p_df['pid'].isin(np.argsort(scores)[-10:])]",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ab217630cd0e837e29894ed2157929f810ad2da
| 87,129 |
ipynb
|
Jupyter Notebook
|
figures_and_tables/figure-evaluation_by_cancer_type.ipynb
|
luisvalesilva/multisurv
|
9f7c12b51260a3ea853df786014b5e68284c2b95
|
[
"MIT"
] | 11 |
2021-08-02T09:18:59.000Z
|
2022-02-04T15:51:25.000Z
|
figures_and_tables/figure-evaluation_by_cancer_type.ipynb
|
luisvalesilva/multisurv
|
9f7c12b51260a3ea853df786014b5e68284c2b95
|
[
"MIT"
] | null | null | null |
figures_and_tables/figure-evaluation_by_cancer_type.ipynb
|
luisvalesilva/multisurv
|
9f7c12b51260a3ea853df786014b5e68284c2b95
|
[
"MIT"
] | 8 |
2021-07-02T08:06:47.000Z
|
2022-03-08T16:07:34.000Z
| 53.849815 | 18,976 | 0.673461 |
[
[
[
        "<a id='Top'></a>\n\n# MultiSurv results by cancer type<a class='tocSkip'></a>\n\nC-index values for each cancer type from the best MultiSurv model trained on all-cancer data.",
"_____no_output_____"
]
],
[
[
"%load_ext autoreload\n%autoreload 2\n\n%load_ext watermark\n\nimport sys\nimport os\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport torch\n\n# Make modules in \"src\" dir visible\nproject_dir = os.path.split(os.getcwd())[0]\nif project_dir not in sys.path:\n sys.path.append(os.path.join(project_dir, 'src'))\n\nimport dataset\nfrom model import Model\nimport utils\n\nmatplotlib.style.use('multisurv.mplstyle')",
"_____no_output_____"
]
],
[
[
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Load-model\" data-toc-modified-id=\"Load-model-1\"><span class=\"toc-item-num\">1 </span>Load model</a></span></li><li><span><a href=\"#Evaluate\" data-toc-modified-id=\"Evaluate-2\"><span class=\"toc-item-num\">2 </span>Evaluate</a></span></li><li><span><a href=\"#Result-graph\" data-toc-modified-id=\"Result-graph-3\"><span class=\"toc-item-num\">3 </span>Result graph</a></span><ul class=\"toc-item\"><li><span><a href=\"#Save-to-files\" data-toc-modified-id=\"Save-to-files-3.1\"><span class=\"toc-item-num\">3.1 </span>Save to files</a></span></li></ul></li><li><span><a href=\"#Metric-correlation-with-other-attributes\" data-toc-modified-id=\"Metric-correlation-with-other-attributes-4\"><span class=\"toc-item-num\">4 </span>Metric correlation with other attributes</a></span><ul class=\"toc-item\"><li><span><a href=\"#Collect-feature-representations\" data-toc-modified-id=\"Collect-feature-representations-4.1\"><span class=\"toc-item-num\">4.1 </span>Collect feature representations</a></span></li><li><span><a href=\"#Compute-dispersion-and-add-to-selected-metric-table\" data-toc-modified-id=\"Compute-dispersion-and-add-to-selected-metric-table-4.2\"><span class=\"toc-item-num\">4.2 </span>Compute dispersion and add to selected metric table</a></span></li><li><span><a href=\"#Plot\" data-toc-modified-id=\"Plot-4.3\"><span class=\"toc-item-num\">4.3 </span>Plot</a></span></li></ul></li></ul></div>",
"_____no_output_____"
]
],
[
[
"DATA = utils.INPUT_DATA_DIR\nMODELS = utils.TRAINED_MODEL_DIR\n\ndevice = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')",
"_____no_output_____"
]
],
[
[
"# Load model",
"_____no_output_____"
]
],
[
[
"dataloaders = utils.get_dataloaders(\n data_location=DATA,\n labels_file='../data/labels.tsv',\n modalities=['clinical', 'mRNA'],\n# exclude_patients=exclude_cancers,\n return_patient_id=True\n)",
"Data modalities:\n clinical\n mRNA\n\nDataset sizes (# patients):\n train: 8880\n val: 1109\n test: 1092\n\nBatch size: 128\n"
],
[
"multisurv = Model(dataloaders=dataloaders, device=device)\nmultisurv.load_weights(os.path.join(MODELS, 'clinical_mRNA_lr0.005_epoch43_acc0.81.pth'))",
"Instantiating MultiSurv model...\nLoad model weights:\n/mnt/dataA/multisurv_models/clinical_mRNA_lr0.005_epoch43_acc0.81.pth\n"
]
],
[
[
"# Evaluate",
"_____no_output_____"
]
],
[
[
"def get_patients_with(cancer_type, split_group='test'):\n labels = pd.read_csv('../data/labels.tsv', sep='\\t')\n cancer_labels = labels[labels['project_id'] == cancer_type]\n group_cancer_labels = cancer_labels[cancer_labels['group'] == split_group]\n\n return list(group_cancer_labels['submitter_id'])",
"_____no_output_____"
],
[
"%%time\n\nresults = {}\nminimum_n_patients = 0\n\ncancer_types = pd.read_csv('../data/labels.tsv', sep='\\t').project_id.unique()\n\nfor i, cancer_type in enumerate(cancer_types):\n print('-' * 44)\n print(' ' * 17, f'{i + 1}.', cancer_type)\n print('-' * 44)\n\n patients = get_patients_with(cancer_type)\n if len(patients) < minimum_n_patients:\n continue\n \n exclude_patients = [p for p in dataloaders['test'].dataset.patient_ids\n if not p in patients]\n\n data = utils.get_dataloaders(\n data_location=DATA,\n labels_file='../data/labels.tsv',\n modalities=['clinical', 'mRNA'],\n exclude_patients=exclude_patients,\n return_patient_id=True\n )['test'].dataset\n\n results[cancer_type] = utils.Evaluation(model=multisurv, dataset=data, device=device)\n results[cancer_type].run_bootstrap()\n print()\nprint()\nprint()",
"--------------------------------------------\n 1. SARC\n--------------------------------------------\nKeeping 8880 patient(s) not in exclude list.\nKeeping 1109 patient(s) not in exclude list.\nKeeping 26 patient(s) not in exclude list.\nData modalities:\n clinical\n mRNA\n\nDataset sizes (# patients):\n train: 8880\n val: 1109\n test: 26\n\nBatch size: 128\nCollect patient predictions: 26/26\n\nBootstrap\n---------\n1000/1000\n\n--------------------------------------------\n 2. MESO\n--------------------------------------------\nKeeping 8880 patient(s) not in exclude list.\nKeeping 1109 patient(s) not in exclude list.\nKeeping 8 patient(s) not in exclude list.\nData modalities:\n clinical\n mRNA\n\nDataset sizes (# patients):\n train: 8880\n val: 1109\n test: 8\n\nBatch size: 128\nCollect patient predictions: 8/8\n\nBootstrap\n---------\n1000/1000\n\n--------------------------------------------\n 3. ACC\n--------------------------------------------\nKeeping 8880 patient(s) not in exclude list.\nKeeping 1109 patient(s) not in exclude list.\nKeeping 9 patient(s) not in exclude list.\nData modalities:\n clinical\n mRNA\n\nDataset sizes (# patients):\n train: 8880\n val: 1109\n test: 9\n\nBatch size: 128\nCollect patient predictions: 9/9\n\nBootstrap\n---------\n1000/1000\n\n--------------------------------------------\n 4. READ\n--------------------------------------------\nKeeping 8880 patient(s) not in exclude list.\nKeeping 1109 patient(s) not in exclude list.\n"
],
[
"%%time\n\ndata = utils.get_dataloaders(\n data_location=DATA,\n labels_file='../data/labels.tsv',\n modalities=['clinical', 'mRNA'],\n return_patient_id=True\n)['test'].dataset\n\nresults['All'] = utils.Evaluation(model=multisurv, dataset=data, device=device)\nresults['All'].run_bootstrap()\nprint()",
"Data modalities:\n clinical\n mRNA\n\nDataset sizes (# patients):\n train: 8880\n val: 1109\n test: 1092\n\nBatch size: 128\nCollect patient predictions: 1092/1092\n\nBootstrap\n---------\n1000/1000\n\nCPU times: user 4min 3s, sys: 8.33 s, total: 4min 11s\nWall time: 5min 4s\n"
]
],
[
[
"In order to avoid very __noisy values__, establish a __minimum threshold__ for the number of patients in each cancer type.",
"_____no_output_____"
]
],
[
[
"minimum_n_patients = 20",
"_____no_output_____"
],
[
"cancer_types = pd.read_csv('../data/labels.tsv', sep='\\t').project_id.unique()\nselected_cancer_types = ['All']\n\nprint('-' * 40)\nprint(' Cancer Ctd IBS # patients')\nprint('-' * 40)\nfor cancer_type in sorted(list(cancer_types)):\n patients = get_patients_with(cancer_type)\n if len(patients) > minimum_n_patients:\n selected_cancer_types.append(cancer_type)\n ctd = str(round(results[cancer_type].c_index_td, 3))\n ibs = str(round(results[cancer_type].ibs, 3))\n\n message = ' ' + cancer_type\n message += ' ' * (11 - len(message)) + ctd\n message += ' ' * (20 - len(message)) + ibs\n message += ' ' * (32 - len(message)) + str(len(patients))\n print(message)\n# print(' ' + cancer_type + ' ' * (10 - len(cancer_type)) + \\\n# ctd + ' ' * (10 - len(ibs)) + ibs + ' ' * (13 - len(ctd)) \\\n# + str(len(patients)))",
"----------------------------------------\n Cancer Ctd IBS # patients\n----------------------------------------\n BLCA 0.784 0.17 40\n BRCA 0.847 0.109 108\n CESC 0.86 0.15 30\n COAD 0.953 0.161 45\n GBM 0.65 0.109 58\n HNSC 0.715 0.225 52\n KIRC 0.78 0.146 53\n KIRP 0.959 0.066 29\n LGG 0.741 0.142 50\n LIHC 0.847 0.161 37\n LUAD 0.686 0.165 51\n LUSC 0.554 0.224 49\n OV 0.644 0.166 57\n PRAD 0.848 0.079 49\n SARC 0.589 0.265 26\n SKCM 0.773 0.142 45\n STAD 0.774 0.191 43\n THCA 0.988 0.045 50\n UCEC 0.658 0.088 54\n"
],
[
"def format_bootstrap_output(evaluator):\n results = evaluator.format_results()\n \n for metric in results:\n results[metric] = results[metric].split(' ')\n val = results[metric][0]\n ci_low, ci_high = results[metric][1].split('(')[1].split(')')[0].split('-')\n results[metric] = val, ci_low, ci_high\n results[metric] = [float(x) for x in results[metric]]\n \n return results",
"_____no_output_____"
],
[
"formatted_results = {}\n\n# for cancer_type in results:\nfor cancer_type in sorted(selected_cancer_types):\n formatted_results[cancer_type] = format_bootstrap_output(results[cancer_type])",
"_____no_output_____"
],
[
"formatted_results",
"_____no_output_____"
]
],
[
[
"# Result graph\n\nExclude cancer types with fewer than a chosen minimum number of patients to avoid extremely noisy results.",
"_____no_output_____"
]
],
[
[
"utils.plot.show_default_colors()",
"_____no_output_____"
],
[
"PLOT_SIZE = (15, 4)\ndefault_colors = plt.rcParams['axes.prop_cycle'].by_key()['color']",
"_____no_output_____"
],
[
"def get_metric_results(metric, data):\n    df = pd.DataFrame()\n    df['Cancer type'] = data.keys()\n\n    val, err = [], []\n    \n    # use the \"data\" parameter instead of the global \"formatted_results\"\n    for cancer in data:\n        values = data[cancer][metric]\n        val.append(values[0])\n        err.append((values[0] - values[1], values[2] - values[0]))\n\n    df[metric] = val\n    err = np.swapaxes(np.array(err), 1, 0)\n\n    return df, err",
"_____no_output_____"
],
[
"def plot_results(metric, data, ci, y_lim=None, y_label=None, h_lines=[1, 0.5]):\n    fig = plt.figure(figsize=PLOT_SIZE)\n    ax = fig.add_subplot(1, 1, 1)\n    for y in h_lines:\n        ax.axhline(y, linestyle='--', color='grey')\n\n    # use the \"data\" and \"ci\" parameters instead of the globals \"df\" and \"err\"\n    ax.bar(data['Cancer type'][:1], data[metric][:1], yerr=ci[:, :1],\n           align='center', ecolor=default_colors[0],\n           alpha=0.5, capsize=5)\n    ax.bar(data['Cancer type'][1:], data[metric][1:], yerr=ci[:, 1:],\n           align='center', color=default_colors[6], ecolor=default_colors[6],\n           alpha=0.5, capsize=5)\n    if y_lim is None:\n        y_lim = (0, 1)\n    \n    ax.set_ylim(y_lim)\n    ax.set_title('')\n    ax.set_xlabel('Cancer types')\n    if y_label is None:\n        ax.set_ylabel(metric + ' (95% CI)')\n    else:\n        ax.set_ylabel(y_label)\n\n    return fig",
"_____no_output_____"
],
[
"metric='Ctd'\n\ndf, err = get_metric_results(metric, formatted_results)\nfig_ctd = plot_results(metric, df, err, y_label='$C^{td}$ (95% CI)')",
"_____no_output_____"
],
[
"metric='IBS'\n\ndf, err = get_metric_results(metric, formatted_results)\nfig_ibs = plot_results(metric, df, err, y_lim=(0, 0.35), y_label=None, h_lines=[0.25])",
"_____no_output_____"
]
],
[
[
"## Save to files",
"_____no_output_____"
]
],
[
[
"%%javascript\n\nIPython.notebook.kernel.execute('nb_name = \"' + IPython.notebook.notebook_name + '\"')",
"_____no_output_____"
],
[
"pdf_file = nb_name.split('.ipynb')[0] + '_Ctd'\nutils.plot.save_plot_for_figure(figure=fig_ctd, file_name=pdf_file) ",
"_____no_output_____"
],
[
"pdf_file = nb_name.split('.ipynb')[0] + '_IBS'\nutils.plot.save_plot_for_figure(figure=fig_ibs, file_name=pdf_file) ",
"_____no_output_____"
],
[
"pdf_file = nb_name.split('.ipynb')[0] + '_INBLL'\nutils.plot.save_plot_for_figure(figure=fig_inbll, file_name=pdf_file) ",
"_____no_output_____"
]
],
[
[
"# Watermark<a class='tocSkip'></a>",
"_____no_output_____"
]
],
[
[
"%watermark --iversions\n%watermark -v\nprint()\n%watermark -u -n",
"pandas 1.0.1\ntorch 1.4.0\nnumpy 1.18.1\nmatplotlib 3.1.2\n\nCPython 3.6.7\nIPython 7.11.1\n\nlast updated: Tue Jul 28 2020\n"
]
],
[
[
"[Top of the page](#Top)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4ab21c8748415ce67c69efdd023d63f219c47132
| 168,199 |
ipynb
|
Jupyter Notebook
|
tutorials/hxe-aa-forecast-sql-03/hxe-aa-forecast-sql-03-Lag1AndCycles.ipynb
|
maxta2002/Tutorials
|
7532e9d30e661122ac212184500408c9a6018d67
|
[
"Apache-2.0"
] | 1 |
2018-02-14T12:03:33.000Z
|
2018-02-14T12:03:33.000Z
|
tutorials/hxe-aa-forecast-sql-03/hxe-aa-forecast-sql-03-Lag1AndCycles.ipynb
|
maxta2002/Tutorials
|
7532e9d30e661122ac212184500408c9a6018d67
|
[
"Apache-2.0"
] | null | null | null |
tutorials/hxe-aa-forecast-sql-03/hxe-aa-forecast-sql-03-Lag1AndCycles.ipynb
|
maxta2002/Tutorials
|
7532e9d30e661122ac212184500408c9a6018d67
|
[
"Apache-2.0"
] | null | null | null | 143.270017 | 99,056 | 0.837092 |
[
[
[
"## **Initialize the connection**",
"_____no_output_____"
]
],
[
[
"import sqlalchemy, os\nfrom sqlalchemy import create_engine\n\nimport pandas as pd\nimport matplotlib\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\n%reload_ext sql\n%config SqlMagic.displaylimit = 5\n%config SqlMagic.feedback = False\n%config SqlMagic.autopandas = True\n\nhxe_connection = 'hana://ML_USER:Welcome18@hxehost:39015';\n\n%sql $hxe_connection\n\npd.options.display.max_rows = 1000\npd.options.display.max_colwidth = 1000",
"_____no_output_____"
]
],
[
[
"# **Lag 1 And Cycles**\n\n## Visualize the data",
"_____no_output_____"
]
],
[
[
"%%sql \nresult <<\nselect\n l1cnn.time, l1cnn.signal as signal , l1cwn.signal as signal_wn, l1cnn.signal - l1cwn.signal as delta\nfrom\n forecast_lag_1_and_cycles l1cnn\njoin forecast_lag_1_and_cycles_and_wn l1cwn\non l1cnn.time = l1cwn.time",
" * hana://ML_USER:***@hxehost:39015\nReturning data to local variable result\n"
],
[
"result = %sql select \\\n l1cnn.time, l1cnn.signal as signal , l1cwn.signal as signal_wn, l1cnn.signal - l1cwn.signal as delta \\\nfrom \\\n forecast_lag_1_and_cycles l1cnn \\\njoin forecast_lag_1_and_cycles_and_wn l1cwn \\\non l1cnn.time = l1cwn.time\n\ntime = matplotlib.dates.date2num(result.time)\n\nfig, ax = plt.subplots()\nax.plot(time, result.signal, 'ro-', markersize=2, color='blue')\nax.plot(time, result.signal_wn, 'ro-', markersize=2, color='red')\nax.bar (time, result.delta , color='green')\nax.xaxis_date()\n\nfig.autofmt_xdate()\nfig.set_size_inches(20, 12)\nplt.show()",
" * hana://ML_USER:***@hxehost:39015\n"
]
],
[
[
"## **Dates & intervals**",
"_____no_output_____"
]
],
[
[
"%%sql\nselect 'max' as indicator, to_varchar(max(time)) as value\nfrom forecast_lag_1_and_cycles union all\nselect 'min' , to_varchar(min(time))\nfrom forecast_lag_1_and_cycles union all\nselect 'delta days' , to_varchar(days_between(min(time), max(time)))\nfrom forecast_lag_1_and_cycles union all\nselect 'count' , to_varchar(count(1))\nfrom forecast_lag_1_and_cycles",
" * hana://ML_USER:***@hxehost:39015\n"
],
[
"%%sql \nselect 'max' as indicator, to_varchar(max(time)) as value\nfrom forecast_lag_1_and_cycles_and_wn union all\nselect 'min' , to_varchar(min(time))\nfrom forecast_lag_1_and_cycles_and_wn union all\nselect 'delta days' , to_varchar(days_between(min(time), max(time)))\nfrom forecast_lag_1_and_cycles_and_wn union all\nselect 'count' , to_varchar(count(1))\nfrom forecast_lag_1_and_cycles_and_wn",
" * hana://ML_USER:***@hxehost:39015\n"
],
[
"%%sql\nselect interval, count(1) as count\nfrom (\n select days_between (lag(time) over (order by time asc), time) as interval\n from forecast_lag_1_and_cycles\n order by time asc\n)\nwhere interval is not null\ngroup by interval;",
" * hana://ML_USER:***@hxehost:39015\n"
]
],
[
[
"## **Generic statistics**",
"_____no_output_____"
]
],
[
[
"%%sql\nwith data as (\n select l1cnn.signal as value_nn, l1cwn.signal as value_wn\n from forecast_lag_1_and_cycles l1cnn join forecast_lag_1_and_cycles_and_wn l1cwn on l1cnn.time = l1cwn.time\n)\nselect 'max' as indicator , round(max(value_nn), 2) as value_nn \n , round(max(value_wn), 2) as value_wn from data union all\nselect 'min' , round(min(value_nn), 2)\n , round(min(value_wn), 2) from data union all\nselect 'delta min/max' , round(max(value_nn) - min(value_nn), 2)\n , round(max(value_wn) - min(value_wn), 2) from data union all\nselect 'avg' , round(avg(value_nn), 2)\n , round(avg(value_wn), 2) from data union all\nselect 'median' , round(median(value_nn), 2)\n , round(median(value_wn), 2) from data union all\nselect 'stddev' , round(stddev(value_nn), 2)\n , round(stddev(value_wn), 2) from data",
" * hana://ML_USER:***@hxehost:39015\n"
],
[
"result = %sql select row_number() over (order by signal asc) as row_num, signal from forecast_lag_1_and_cycles order by 1, 2;\nresult_wn = %sql select row_number() over (order by signal asc) as row_num, signal from forecast_lag_1_and_cycles_and_wn order by 1, 2;\n\nfig, ax = plt.subplots()\nax.plot(result.row_num, result.signal, 'ro-', markersize=2, color='blue')\nax.plot(result_wn.row_num, result_wn.signal, 'ro-', markersize=2, color='red')\n\nfig.set_size_inches(20, 12)\nplt.show()",
" * hana://ML_USER:***@hxehost:39015\n * hana://ML_USER:***@hxehost:39015\n"
]
],
[
[
"## **Data Distribution**",
"_____no_output_____"
]
],
[
[
"%%sql\nwith data as (\n select ntile(10) over (order by signal asc) as tile, signal\n from forecast_lag_1_and_cycles\n where signal is not null\n)\nselect tile\n , round(max(signal), 2) as max\n , round(min(signal), 2) as min\n , round(max(signal) - min(signal), 2) as \"delta min/max\"\n , round(avg(signal), 2) as avg\n , round(median(signal), 2) as median\n , round(abs(avg(signal) - median(signal)), 2) as \"delta avg/median\"\n , round(stddev(signal), 2) as stddev\nfrom data\ngroup by tile",
" * hana://ML_USER:***@hxehost:39015\n"
],
[
"%%sql\nwith data as (\n select ntile(10) over (order by signal asc) as tile, signal\n from forecast_lag_1_and_cycles_and_wn\n where signal is not null\n)\nselect tile\n , round(max(signal), 2) as max\n , round(min(signal), 2) as min\n , round(max(signal) - min(signal), 2) as \"delta min/max\"\n , round(avg(signal), 2) as avg\n , round(median(signal), 2) as median\n , round(abs(avg(signal) - median(signal)), 2) as \"delta avg/median\"\n , round(stddev(signal), 2) as stddev\nfrom data\ngroup by tile",
" * hana://ML_USER:***@hxehost:39015\n"
],
[
"%%sql\nwith data as (\n select ntile(12) over (order by signal asc) as tile, signal\n from forecast_lag_1_and_cycles\n where signal is not null\n)\nselect tile\n , round(max(signal), 2) as max\n , round(min(signal), 2) as min\n , round(max(signal) - min(signal), 2) as \"delta min/max\"\n , round(avg(signal), 2) as avg\n , round(median(signal), 2) as median\n , round(abs(avg(signal) - median(signal)), 2) as \"delta avg/median\"\n , round(stddev(signal), 2) as stddev\nfrom data\ngroup by tile",
" * hana://ML_USER:***@hxehost:39015\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
4ab228e076e1fea4b41743af7f82fd328361314c
| 21,822 |
ipynb
|
Jupyter Notebook
|
notebooks/02_numerical_pipeline_scaling.ipynb
|
odotreppe/scikit-learn-mooc
|
da97773fc9b860371e94e3c72791b0c92471b22d
|
[
"CC-BY-4.0"
] | 634 |
2020-03-10T15:42:46.000Z
|
2022-03-28T15:19:00.000Z
|
notebooks/02_numerical_pipeline_scaling.ipynb
|
odotreppe/scikit-learn-mooc
|
da97773fc9b860371e94e3c72791b0c92471b22d
|
[
"CC-BY-4.0"
] | 467 |
2020-03-10T15:42:31.000Z
|
2022-03-31T09:10:04.000Z
|
notebooks/02_numerical_pipeline_scaling.ipynb
|
odotreppe/scikit-learn-mooc
|
da97773fc9b860371e94e3c72791b0c92471b22d
|
[
"CC-BY-4.0"
] | 314 |
2020-03-11T14:28:26.000Z
|
2022-03-31T12:01:02.000Z
| 35.253635 | 181 | 0.63491 |
[
[
[
"# Preprocessing for numerical features\n\nIn this notebook, we will still use only numerical features.\n\nWe will introduce these new aspects:\n\n* an example of preprocessing, namely **scaling numerical variables**;\n* using a scikit-learn **pipeline** to chain preprocessing and model\n training;\n* assessing the generalization performance of our model via **cross-validation**\n instead of a single train-test split.\n\n## Data preparation\n\nFirst, let's load the full adult census dataset.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\n\nadult_census = pd.read_csv(\"../datasets/adult-census.csv\")",
"_____no_output_____"
],
[
"# to display nice model diagram\nfrom sklearn import set_config\nset_config(display='diagram')",
"_____no_output_____"
]
],
[
[
"We will now drop the target from the data we will use to train our\npredictive model.",
"_____no_output_____"
]
],
[
[
"target_name = \"class\"\ntarget = adult_census[target_name]\ndata = adult_census.drop(columns=target_name)",
"_____no_output_____"
]
],
[
[
"Then, we select only the numerical columns, as seen in the previous\nnotebook.",
"_____no_output_____"
]
],
[
[
"numerical_columns = [\n \"age\", \"capital-gain\", \"capital-loss\", \"hours-per-week\"]\n\ndata_numeric = data[numerical_columns]",
"_____no_output_____"
]
],
[
[
"Finally, we can divide our dataset into a train and test sets.",
"_____no_output_____"
]
],
[
[
"from sklearn.model_selection import train_test_split\n\ndata_train, data_test, target_train, target_test = train_test_split(\n data_numeric, target, random_state=42)",
"_____no_output_____"
]
],
[
[
"## Model fitting with preprocessing\n\nA range of preprocessing algorithms in scikit-learn allow us to transform\nthe input data before training a model. In our case, we will standardize the\ndata and then train a new logistic regression model on that new version of\nthe dataset.\n\nLet's start by printing some statistics about the training data.",
"_____no_output_____"
]
],
[
[
"data_train.describe()",
"_____no_output_____"
]
],
[
[
"We see that the dataset's features span across different ranges. Some\nalgorithms make some assumptions regarding the feature distributions and\nusually normalizing features will be helpful to address these assumptions.\n\n<div class=\"admonition tip alert alert-warning\">\n<p class=\"first admonition-title\" style=\"font-weight: bold;\">Tip</p>\n<p>Here are some reasons for scaling features:</p>\n<ul class=\"last simple\">\n<li>Models that rely on the distance between a pair of samples, for instance\nk-nearest neighbors, should be trained on normalized features to make each\nfeature contribute approximately equally to the distance computations.</li>\n<li>Many models such as logistic regression use a numerical solver (based on\ngradient descent) to find their optimal parameters. This solver converges\nfaster when the features are scaled.</li>\n</ul>\n</div>\n\nWhether or not a machine learning model requires scaling the features depends\non the model family. Linear models such as logistic regression generally\nbenefit from scaling the features while other models such as decision trees\ndo not need such preprocessing (but will not suffer from it).\n\nWe show how to apply such normalization using a scikit-learn transformer\ncalled `StandardScaler`. This transformer shifts and scales each feature\nindividually so that they all have a 0-mean and a unit standard deviation.\n\nWe will investigate different steps used in scikit-learn to achieve such a\ntransformation of the data.\n\nFirst, one needs to call the method `fit` in order to learn the scaling from\nthe data.",
"_____no_output_____"
]
],
[
[
"from sklearn.preprocessing import StandardScaler\n\nscaler = StandardScaler()\nscaler.fit(data_train)",
"_____no_output_____"
]
],
[
[
"The `fit` method for transformers is similar to the `fit` method for\npredictors. The main difference is that the former has a single argument (the\ndata matrix), whereas the latter has two arguments (the data matrix and the\ntarget).\n\n\n\nIn this case, the algorithm needs to compute the mean and standard deviation\nfor each feature and store them into some NumPy arrays. Here, these\nstatistics are the model states.\n\n<div class=\"admonition note alert alert-info\">\n<p class=\"first admonition-title\" style=\"font-weight: bold;\">Note</p>\n<p class=\"last\">The fact that the model states of this scaler are arrays of means and\nstandard deviations is specific to the <tt class=\"docutils literal\">StandardScaler</tt>. Other\nscikit-learn transformers will compute different statistics and store them\nas model states, in the same fashion.</p>\n</div>\n\nWe can inspect the computed means and standard deviations.",
"_____no_output_____"
]
],
[
[
"scaler.mean_",
"_____no_output_____"
],
[
"scaler.scale_",
"_____no_output_____"
]
],
[
[
"<div class=\"admonition note alert alert-info\">\n<p class=\"first admonition-title\" style=\"font-weight: bold;\">Note</p>\n<p class=\"last\">scikit-learn convention: if an attribute is learned from the data, its name\nends with an underscore (i.e. <tt class=\"docutils literal\">_</tt>), as in <tt class=\"docutils literal\">mean_</tt> and <tt class=\"docutils literal\">scale_</tt> for the\n<tt class=\"docutils literal\">StandardScaler</tt>.</p>\n</div>",
"_____no_output_____"
],
[
"Scaling the data is applied to each feature individually (i.e. each column in\nthe data matrix). For each feature, we subtract its mean and divide by its\nstandard deviation.\n\nOnce we have called the `fit` method, we can perform data transformation by\ncalling the method `transform`.",
"_____no_output_____"
]
],
[
[
"data_train_scaled = scaler.transform(data_train)\ndata_train_scaled",
"_____no_output_____"
]
],
[
[
"Let's illustrate the internal mechanism of the `transform` method and put it\nto perspective with what we already saw with predictors.\n\n\n\nThe `transform` method for transformers is similar to the `predict` method\nfor predictors. It uses a predefined function, called a **transformation\nfunction**, and uses the model states and the input data. However, instead of\noutputting predictions, the job of the `transform` method is to output a\ntransformed version of the input data.",
"_____no_output_____"
],
[
"Finally, the method `fit_transform` is a shorthand method to call\nsuccessively `fit` and then `transform`.\n\n",
"_____no_output_____"
]
],
[
[
"data_train_scaled = scaler.fit_transform(data_train)\ndata_train_scaled",
"_____no_output_____"
],
[
"data_train_scaled = pd.DataFrame(data_train_scaled,\n columns=data_train.columns)\ndata_train_scaled.describe()",
"_____no_output_____"
]
],
[
[
"We can easily combine these sequential operations with a scikit-learn\n`Pipeline`, which chains together operations and is used as any other\nclassifier or regressor. The helper function `make_pipeline` will create a\n`Pipeline`: it takes as arguments the successive transformations to perform,\nfollowed by the classifier or regressor model.",
"_____no_output_____"
]
],
[
[
"import time\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.pipeline import make_pipeline\n\nmodel = make_pipeline(StandardScaler(), LogisticRegression())\nmodel",
"_____no_output_____"
]
],
[
[
"The `make_pipeline` function did not require us to give a name to each step.\nIndeed, it was automatically assigned based on the name of the classes\nprovided; a `StandardScaler` will be a step named `\"standardscaler\"` in the\nresulting pipeline. We can check the name of each step of our model:",
"_____no_output_____"
]
],
[
[
"model.named_steps",
"_____no_output_____"
]
],
[
[
"This predictive pipeline exposes the same methods as the final predictor:\n`fit` and `predict` (and additionally `predict_proba`, `decision_function`,\nor `score`).",
"_____no_output_____"
]
],
[
[
"start = time.time()\nmodel.fit(data_train, target_train)\nelapsed_time = time.time() - start",
"_____no_output_____"
]
],
[
[
"We can represent the internal mechanism of a pipeline when calling `fit`\nby the following diagram:\n\n\n\nWhen calling `model.fit`, the method `fit_transform` from each underlying\ntransformer (here a single transformer) in the pipeline will be called to:\n\n- learn their internal model states\n- transform the training data. Finally, the preprocessed data are provided to\n train the predictor.\n\nTo predict the targets given a test set, one uses the `predict` method.",
"_____no_output_____"
]
],
[
[
"predicted_target = model.predict(data_test)\npredicted_target[:5]",
"_____no_output_____"
]
],
[
[
"Let's show the underlying mechanism:\n\n\n\nThe method `transform` of each transformer (here a single transformer) is\ncalled to preprocess the data. Note that there is no need to call the `fit`\nmethod for these transformers because we are using the internal model states\ncomputed when calling `model.fit`. The preprocessed data is then provided to\nthe predictor that will output the predicted target by calling its method\n`predict`.\n\nAs a shorthand, we can check the score of the full predictive pipeline by\ncalling the method `model.score`. Thus, let's check the computational and\ngeneralization performance of such a predictive pipeline.",
"_____no_output_____"
]
],
[
[
"model_name = model.__class__.__name__\nscore = model.score(data_test, target_test)\nprint(f\"The accuracy using a {model_name} is {score:.3f} \"\n f\"with a fitting time of {elapsed_time:.3f} seconds \"\n f\"in {model[-1].n_iter_[0]} iterations\")",
"_____no_output_____"
]
],
[
[
"We could compare this predictive model with the predictive model used in\nthe previous notebook which did not scale features.",
"_____no_output_____"
]
],
[
[
"model = LogisticRegression()\nstart = time.time()\nmodel.fit(data_train, target_train)\nelapsed_time = time.time() - start",
"_____no_output_____"
],
[
"model_name = model.__class__.__name__\nscore = model.score(data_test, target_test)\nprint(f\"The accuracy using a {model_name} is {score:.3f} \"\n f\"with a fitting time of {elapsed_time:.3f} seconds \"\n f\"in {model.n_iter_[0]} iterations\")",
"_____no_output_____"
]
],
[
[
"We see that scaling the data before training the logistic regression was\nbeneficial in terms of computational performance. Indeed, the number of\niterations decreased as well as the training time. The generalization\nperformance did not change since both models converged.\n\n<div class=\"admonition warning alert alert-danger\">\n<p class=\"first admonition-title\" style=\"font-weight: bold;\">Warning</p>\n<p class=\"last\">Working with non-scaled data will potentially force the algorithm to iterate\nmore, as we showed in the example above. There is also the catastrophic\nscenario where the number of required iterations is more than the maximum\nnumber of iterations allowed by the predictor (controlled by the <tt class=\"docutils literal\">max_iter</tt>\nparameter). Therefore, before increasing <tt class=\"docutils literal\">max_iter</tt>, make sure that the data\nare well scaled.</p>\n</div>",
"_____no_output_____"
],
[
"## Model evaluation using cross-validation\n\nIn the previous example, we split the original data into a training set and a \ntesting set. The score of a model will in general depend on the way we make \nsuch a split. One downside of doing a single split is that it does not give\nany information about this variability. Another downside, in a setting where \nthe amount of data is small, is that the data available for training\nand testing will be even smaller after splitting.\n\nInstead, we can use cross-validation. Cross-validation consists of repeating\nthe procedure such that the training and testing sets are different each\ntime. Generalization performance metrics are collected for each repetition and\nthen aggregated. As a result we can get an estimate of the variability of the\nmodel's generalization performance.\n\nNote that there exist several cross-validation strategies, each of which\ndefines how to repeat the `fit`/`score` procedure. In this section, we will\nuse the K-fold strategy: the entire dataset is split into `K` partitions. The\n`fit`/`score` procedure is repeated `K` times where at each iteration `K - 1`\npartitions are used to fit the model and `1` partition is used to score. The\nfigure below illustrates this K-fold strategy.\n\n\n\n<div class=\"admonition note alert alert-info\">\n<p class=\"first admonition-title\" style=\"font-weight: bold;\">Note</p>\n<p class=\"last\">This figure shows the particular case of K-fold cross-validation strategy.\nAs mentioned earlier, there are a variety of different cross-validation\nstrategies. Some of these aspects will be covered in more detail in future\nnotebooks.</p>\n</div>\n\nFor each cross-validation split, the procedure trains a model on all the red\nsamples and evaluates the score of the model on the blue samples.\nCross-validation is therefore computationally intensive because it requires\ntraining several models instead of one.\n\nIn scikit-learn, the function `cross_validate` allows you to do cross-validation\nand you need to pass it the model, the data, and the target. Since there\nexist several cross-validation strategies, `cross_validate` takes a\nparameter `cv` which defines the splitting strategy.",
"_____no_output_____"
]
],
[
[
"%%time\nfrom sklearn.model_selection import cross_validate\n\nmodel = make_pipeline(StandardScaler(), LogisticRegression())\ncv_result = cross_validate(model, data_numeric, target, cv=5)\ncv_result",
"_____no_output_____"
]
],
[
[
"The output of `cross_validate` is a Python dictionary, which by default\ncontains three entries: (i) the time to train the model on the training data\nfor each fold, (ii) the time to predict with the model on the testing data\nfor each fold, and (iii) the default score on the testing data for each fold.\n\nSetting `cv=5` created 5 distinct splits to get 5 variations for the training\nand testing sets. Each training set is used to fit one model which is then\nscored on the matching test set. This strategy is called K-fold\ncross-validation where `K` corresponds to the number of splits.\n\nNote that by default the `cross_validate` function discards the 5 models that\nwere trained on the different overlapping subsets of the dataset. The goal of\ncross-validation is not to train a model, but rather to estimate\napproximately the generalization performance of a model that would have been\ntrained on the full training set, along with an estimate of the variability\n(uncertainty on the generalization accuracy).\n\nYou can pass additional parameters to `cross_validate` to get more\ninformation, for instance training scores. These features will be covered in\na future notebook.\n\nLet's extract the test scores from the `cv_result` dictionary and compute\nthe mean accuracy and the variation of the accuracy across folds.",
"_____no_output_____"
]
],
[
[
"scores = cv_result[\"test_score\"]\nprint(\"The mean cross-validation accuracy is: \"\n f\"{scores.mean():.3f} +/- {scores.std():.3f}\")",
"_____no_output_____"
]
],
[
[
"Note that by computing the standard-deviation of the cross-validation scores,\nwe can estimate the uncertainty of our model's generalization performance. This is\nthe main advantage of cross-validation and can be crucial in practice, for\nexample when comparing different models to figure out whether one is better\nthan the other or whether the generalization performance differences are within\nthe uncertainty.\n\nIn this particular case, only the first 2 decimals seem to be trustworthy. If\nyou scroll up in this notebook, you can check that the performance we get\nwith cross-validation is compatible with the one from a single train-test\nsplit.",
"_____no_output_____"
],
[
"In this notebook we have:\n\n* seen the importance of **scaling numerical variables**;\n* used a **pipeline** to chain scaling and logistic regression training;\n* assessed the generalization performance of our model via **cross-validation**.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
4ab2340ec713822c2da5b00f56d33c0eb83ec78c
| 50,457 |
ipynb
|
Jupyter Notebook
|
notebooks/gabriela_caesar_analise_de_dados_29set2021_perguntas.ipynb
|
gabrielacaesar/lgbt_casamento
|
ee0224f4e5362b6f5b3fa3609f36c4ff95b37700
|
[
"MIT"
] | 13 |
2021-10-07T01:12:04.000Z
|
2022-03-25T00:50:50.000Z
|
notebooks/gabriela_caesar_analise_de_dados_29set2021_perguntas.ipynb
|
fmasanori/lgbt_casamento
|
01e9142d49e15d772a39f875cef5b95c0adc1c0f
|
[
"MIT"
] | null | null | null |
notebooks/gabriela_caesar_analise_de_dados_29set2021_perguntas.ipynb
|
fmasanori/lgbt_casamento
|
01e9142d49e15d772a39f875cef5b95c0adc1c0f
|
[
"MIT"
] | null | null | null | 78.471229 | 22,953 | 0.432289 |
[
[
[
"Notebook - exploratory data analysis\n\nGabriela Caesar\n\n29/Sep/2021\n\nQuestion to be answered\n\n- Set your UF (state) and the year in the input and see the basic statistics on LGBT marriage for your UF/year\n",
"_____no_output_____"
]
],
[
[
"# import the library\nimport pandas as pd",
"_____no_output_____"
],
[
"# read the dataframe\nlgbt_casamento = pd.read_csv('https://raw.githubusercontent.com/gabrielacaesar/lgbt_casamento/main/data/lgbt_casamento.csv')",
"_____no_output_____"
],
[
"lgbt_casamento.head(2)",
"_____no_output_____"
],
[
"sigla_uf = pd.read_csv('https://raw.githubusercontent.com/kelvins/Municipios-Brasileiros/main/csv/estados.csv')",
"_____no_output_____"
],
[
"sigla_uf.head(2)",
"_____no_output_____"
],
[
"# merge the state abbreviations into the marriage data and check the result\nsigla_uf_lgbt_casamento = lgbt_casamento.merge(sigla_uf, how = 'left', left_on = 'uf', right_on = 'nome')\nlen(sigla_uf_lgbt_casamento['uf_y'].unique())\nsigla_uf_lgbt_casamento.head(2)",
"_____no_output_____"
],
[
"# drop redundant columns and rename for clarity\nsigla_uf_lgbt_casamento = sigla_uf_lgbt_casamento.drop(['uf_x', 'codigo_uf', 'latitude', 'longitude'], axis=1)\nsigla_uf_lgbt_casamento = sigla_uf_lgbt_casamento.rename(columns={'uf_y':'uf', 'nome': 'nome_uf'})\nsigla_uf_lgbt_casamento.columns\nsigla_uf_lgbt_casamento.head(2)",
"_____no_output_____"
],
[
"print(\" --------------------------- \\n Welcome! \\n ---------------------------\")\nano_user = int(input(\"Choose a year from 2013 to 2019: \\n\"))\nuf_user = input(\"Choose a UF. For example, AC, AL, SP, RJ... \\n\")\nuf_user = uf_user.upper().strip()\n#print(uf_user)\nprint(\" --------------------------- \\n Let's calculate! \\n ---------------------------\")",
" --------------------------- \n Welcome! \n ---------------------------\nChoose a year from 2013 to 2019: \n2018\nChoose a UF. For example, AC, AL, SP, RJ... \nRJ\n --------------------------- \n Let's calculate! \n ---------------------------\n"
]
],
[
[
"# See the numbers, by month, for the chosen year and UF\nThe chart shows the number of LGBT marriages, by gender, for the year and federative unit (UF) entered earlier by the user.\n\nHover over the chart for more details.\n",
"_____no_output_____"
]
],
[
[
"# filter by the UF and year entered by the user\n# and plot the chart\nimport altair as alt\nalt.Chart(sigla_uf_lgbt_casamento.query('uf == @uf_user & ano == @ano_user', engine='python')).mark_line(point=True).encode(\n    x = alt.X('mes', title = 'Month', sort=['Janeiro', 'Fevereiro', 'Março']),\n    y = alt.Y('numero', title='Number'),\n    color = 'genero',\n    tooltip = ['mes', 'ano', 'genero', 'numero']\n).properties(\n    title = f'{uf_user}: LGBT marriages in {ano_user}'\n).interactive()",
"_____no_output_____"
]
],
[
[
"# See basic statistics, by year, for your chosen UF\nThe chart shows all years in the dataset. The federative unit (UF) was entered earlier by the user.\n\nHover over the chart for more details.\n",
"_____no_output_____"
]
],
[
[
"dados_user = sigla_uf_lgbt_casamento.query('uf == @uf_user', engine='python')\n\nalt.Chart(dados_user).mark_boxplot(size=10).encode(\n    x = alt.X('ano:O', title=\"Year\"),\n    y = alt.Y('numero', title=\"Number\"),\n    color = 'genero',\n    tooltip = ['mes', 'ano', 'genero', 'numero']\n).properties(\n    title={\n        \"text\": [f'{uf_user}: LGBT marriages'], \n        \"subtitle\": [f'Women vs. Men']\n    },\n    width=600,\n    height=300\n).interactive()",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4ab23e90ed8f79fd26e50d19470abbd5eb263f1e
| 222,809 |
ipynb
|
Jupyter Notebook
|
clustering_toronto.ipynb
|
virmax/Coursera_Capstone
|
a38909a91a559773b0c39688c7cff4b577029f5b
|
[
"MIT"
] | null | null | null |
clustering_toronto.ipynb
|
virmax/Coursera_Capstone
|
a38909a91a559773b0c39688c7cff4b577029f5b
|
[
"MIT"
] | null | null | null |
clustering_toronto.ipynb
|
virmax/Coursera_Capstone
|
a38909a91a559773b0c39688c7cff4b577029f5b
|
[
"MIT"
] | null | null | null | 39.53318 | 8,816 | 0.350062 |
[
[
[
"## Segmenting and Clustering Neighborhoods in Toronto",
"_____no_output_____"
],
[
"Start by creating a new Notebook for this assignment.\nUse the Notebook to build the code to scrape the following Wikipedia page, https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M, in order to obtain the data that is in the table of postal codes and to transform the data into a pandas dataframe.",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport requests",
"_____no_output_____"
],
[
"from bs4 import BeautifulSoup\n\n# fetch the Wikipedia page and parse it with BeautifulSoup\nwebsite_wiki = requests.get('https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M').text\nsoup = BeautifulSoup(website_wiki,'lxml')\nprint(soup.prettify())",
"<!DOCTYPE html>\n<html class=\"client-nojs\" dir=\"ltr\" lang=\"en\">\n <head>\n <meta charset=\"utf-8\"/>\n <title>\n List of postal codes of Canada: M - Wikipedia\n </title>\n <script>\n document.documentElement.className = document.documentElement.className.replace( /(^|\\s)client-nojs(\\s|$)/, \"$1client-js$2\" );\n </script>\n <script>\n (window.RLQ=window.RLQ||[]).push(function(){mw.config.set({\"wgCanonicalNamespace\":\"\",\"wgCanonicalSpecialPageName\":false,\"wgNamespaceNumber\":0,\"wgPageName\":\"List_of_postal_codes_of_Canada:_M\",\"wgTitle\":\"List of postal codes of Canada: M\",\"wgCurRevisionId\":867606113,\"wgRevisionId\":867606113,\"wgArticleId\":539066,\"wgIsArticle\":true,\"wgIsRedirect\":false,\"wgAction\":\"view\",\"wgUserName\":null,\"wgUserGroups\":[\"*\"],\"wgCategories\":[\"Communications in Ontario\",\"Postal codes in Canada\",\"Toronto\",\"Ontario-related lists\"],\"wgBreakFrames\":false,\"wgPageContentLanguage\":\"en\",\"wgPageContentModel\":\"wikitext\",\"wgSeparatorTransformTable\":[\"\",\"\"],\"wgDigitTransformTable\":[\"\",\"\"],\"wgDefaultDateFormat\":\"dmy\",\"wgMonthNames\":[\"\",\"January\",\"February\",\"March\",\"April\",\"May\",\"June\",\"July\",\"August\",\"September\",\"October\",\"November\",\"December\"],\"wgMonthNamesShort\":[\"\",\"Jan\",\"Feb\",\"Mar\",\"Apr\",\"May\",\"Jun\",\"Jul\",\"Aug\",\"Sep\",\"Oct\",\"Nov\",\"Dec\"],\"wgRelevantPageName\":\"List_of_postal_codes_of_Canada:_M\",\"wgRelevantArticleId\":539066,\"wgRequestId\":\"W-lIvwpAAEIAAJ@z1h8AAABT\",\"wgCSPNonce\":false,\"wgIsProbablyEditable\":true,\"wgRelevantPageIsProbablyEditable\":true,\"wgRestrictionEdit\":[],\"wgRestrictionMove\":[],\"wgFlaggedRevsParams\":{\"tags\":{}},\"wgStableRevisionId\":null,\"wgCategoryTreePageCategoryOptions\":\"{\\\"mode\\\":0,\\\"hideprefix\\\":20,\\\"showcount\\\":true,\\\"namespaces\\\":false}\",\"wgWikiEditorEnabledModules\":[],\"wgBetaFeaturesFeatures\":[],\"wgMediaViewerOnClick\":true,\"wgMediaViewerEnabledByDefault\":true,
\"wgPopupsShouldSendModuleToUser\":true,\"wgPopupsConflictsWithNavPopupGadget\":false,\"wgVisualEditor\":{\"pageLanguageCode\":\"en\",\"pageLanguageDir\":\"ltr\",\"pageVariantFallbacks\":\"en\",\"usePageImages\":true,\"usePageDescriptions\":true},\"wgMFExpandAllSectionsUserOption\":true,\"wgMFEnableFontChanger\":true,\"wgMFDisplayWikibaseDescriptions\":{\"search\":true,\"nearby\":true,\"watchlist\":true,\"tagline\":false},\"wgRelatedArticles\":null,\"wgRelatedArticlesUseCirrusSearch\":true,\"wgRelatedArticlesOnlyUseCirrusSearch\":false,\"wgWMESchemaEditAttemptStepOversample\":false,\"wgULSCurrentAutonym\":\"English\",\"wgNoticeProject\":\"wikipedia\",\"wgCentralNoticeCookiesToDelete\":[],\"wgCentralNoticeCategoriesUsingLegacy\":[\"Fundraising\",\"fundraising\"],\"wgWikibaseItemId\":\"Q3248240\",\"wgScoreNoteLanguages\":{\"arabic\":\"العربية\",\"catalan\":\"català\",\"deutsch\":\"Deutsch\",\"english\":\"English\",\"espanol\":\"español\",\"italiano\":\"italiano\",\"nederlands\":\"Nederlands\",\"norsk\":\"norsk\",\"portugues\":\"português\",\"suomi\":\"suomi\",\"svenska\":\"svenska\",\"vlaams\":\"West-Vlams\"},\"wgScoreDefaultNoteLanguage\":\"nederlands\",\"wgCentralAuthMobileDomain\":false,\"wgCodeMirrorEnabled\":true,\"wgVisualEditorToolbarScrollOffset\":0,\"wgVisualEditorUnsupportedEditParams\":[\"undo\",\"undoafter\",\"veswitched\"],\"wgEditSubmitButtonLabelPublish\":true});mw.loader.state({\"ext.gadget.charinsert-styles\":\"ready\",\"ext.globalCssJs.user.styles\":\"ready\",\"ext.globalCssJs.site.styles\":\"ready\",\"site.styles\":\"ready\",\"noscript\":\"ready\",\"user.styles\":\"ready\",\"ext.globalCssJs.user\":\"ready\",\"ext.globalCssJs.site\":\"ready\",\"user\":\"ready\",\"user.options\":\"ready\",\"user.tokens\":\"loading\",\"ext.cite.styles\":\"ready\",\"mediawiki.legacy.shared\":\"ready\",\"mediawiki.legacy.commonPrint\":\"ready\",\"wikibase.client.init\":\"ready\",\"ext.visualEditor.desktopArticleTarget.noscript\":\"ready\",\"ext.uls.interlanguage\":\"read
y\",\"ext.wikimediaBadges\":\"ready\",\"ext.3d.styles\":\"ready\",\"mediawiki.skinning.interface\":\"ready\",\"skins.vector.styles\":\"ready\"});mw.loader.implement(\"user.tokens@0tffind\",function($,jQuery,require,module){/*@nomin*/mw.user.tokens.set({\"editToken\":\"+\\\\\",\"patrolToken\":\"+\\\\\",\"watchToken\":\"+\\\\\",\"csrfToken\":\"+\\\\\"});\n});RLPAGEMODULES=[\"ext.cite.a11y\",\"site\",\"mediawiki.page.startup\",\"mediawiki.user\",\"mediawiki.page.ready\",\"jquery.tablesorter\",\"mediawiki.searchSuggest\",\"ext.gadget.teahouse\",\"ext.gadget.ReferenceTooltips\",\"ext.gadget.watchlist-notice\",\"ext.gadget.DRN-wizard\",\"ext.gadget.charinsert\",\"ext.gadget.refToolbar\",\"ext.gadget.extra-toolbar-buttons\",\"ext.gadget.switcher\",\"ext.centralauth.centralautologin\",\"mmv.head\",\"mmv.bootstrap.autostart\",\"ext.popups\",\"ext.visualEditor.desktopArticleTarget.init\",\"ext.visualEditor.targetLoader\",\"ext.eventLogging.subscriber\",\"ext.wikimediaEvents\",\"ext.navigationTiming\",\"ext.uls.eventlogger\",\"ext.uls.init\",\"ext.uls.compactlinks\",\"ext.uls.interface\",\"ext.centralNotice.geoIP\",\"ext.centralNotice.startUp\",\"skins.vector.js\"];mw.loader.load(RLPAGEMODULES);});\n </script>\n <link href=\"/w/load.php?debug=false&lang=en&modules=ext.3d.styles%7Cext.cite.styles%7Cext.uls.interlanguage%7Cext.visualEditor.desktopArticleTarget.noscript%7Cext.wikimediaBadges%7Cmediawiki.legacy.commonPrint%2Cshared%7Cmediawiki.skinning.interface%7Cskins.vector.styles%7Cwikibase.client.init&only=styles&skin=vector\" rel=\"stylesheet\"/>\n <script async=\"\" src=\"/w/load.php?debug=false&lang=en&modules=startup&only=scripts&skin=vector\">\n </script>\n <meta content=\"\" name=\"ResourceLoaderDynamicStyles\"/>\n <link href=\"/w/load.php?debug=false&lang=en&modules=ext.gadget.charinsert-styles&only=styles&skin=vector\" rel=\"stylesheet\"/>\n <link href=\"/w/load.php?debug=false&lang=en&modules=site.styles&only=styles&skin=vector\" rel=\"stylesheet\"/>\n <meta 
content=\"MediaWiki 1.33.0-wmf.4\" name=\"generator\"/>\n <meta content=\"origin\" name=\"referrer\"/>\n <meta content=\"origin-when-crossorigin\" name=\"referrer\"/>\n <meta content=\"origin-when-cross-origin\" name=\"referrer\"/>\n <link href=\"android-app://org.wikipedia/http/en.m.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M\" rel=\"alternate\"/>\n <link href=\"/w/index.php?title=List_of_postal_codes_of_Canada:_M&action=edit\" rel=\"alternate\" title=\"Edit this page\" type=\"application/x-wiki\"/>\n <link href=\"/w/index.php?title=List_of_postal_codes_of_Canada:_M&action=edit\" rel=\"edit\" title=\"Edit this page\"/>\n <link href=\"/static/apple-touch/wikipedia.png\" rel=\"apple-touch-icon\"/>\n <link href=\"/static/favicon/wikipedia.ico\" rel=\"shortcut icon\"/>\n <link href=\"/w/opensearch_desc.php\" rel=\"search\" title=\"Wikipedia (en)\" type=\"application/opensearchdescription+xml\"/>\n <link href=\"//en.wikipedia.org/w/api.php?action=rsd\" rel=\"EditURI\" type=\"application/rsd+xml\"/>\n <link href=\"//creativecommons.org/licenses/by-sa/3.0/\" rel=\"license\"/>\n <link href=\"https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M\" rel=\"canonical\"/>\n <link href=\"//login.wikimedia.org\" rel=\"dns-prefetch\"/>\n <link href=\"//meta.wikimedia.org\" rel=\"dns-prefetch\"/>\n <!--[if lt IE 9]><script src=\"/w/load.php?debug=false&lang=en&modules=html5shiv&only=scripts&skin=vector&sync=1\"></script><![endif]-->\n </head>\n <body class=\"mediawiki ltr sitedir-ltr mw-hide-empty-elt ns-0 ns-subject page-List_of_postal_codes_of_Canada_M rootpage-List_of_postal_codes_of_Canada_M skin-vector action-view\">\n <div class=\"noprint\" id=\"mw-page-base\">\n </div>\n <div class=\"noprint\" id=\"mw-head-base\">\n </div>\n <div class=\"mw-body\" id=\"content\" role=\"main\">\n <a id=\"top\">\n </a>\n <div class=\"mw-body-content\" id=\"siteNotice\">\n <!-- CentralNotice -->\n </div>\n <div class=\"mw-indicators mw-body-content\">\n </div>\n <h1 
class=\"firstHeading\" id=\"firstHeading\" lang=\"en\">\n List of postal codes of Canada: M\n </h1>\n <div class=\"mw-body-content\" id=\"bodyContent\">\n <div class=\"noprint\" id=\"siteSub\">\n From Wikipedia, the free encyclopedia\n </div>\n <div id=\"contentSub\">\n </div>\n <div id=\"jump-to-nav\">\n </div>\n <a class=\"mw-jump-link\" href=\"#mw-head\">\n Jump to navigation\n </a>\n <a class=\"mw-jump-link\" href=\"#p-search\">\n Jump to search\n </a>\n <div class=\"mw-content-ltr\" dir=\"ltr\" id=\"mw-content-text\" lang=\"en\">\n <div class=\"mw-parser-output\">\n <p>\n This is a list of\n <a href=\"/wiki/Postal_codes_in_Canada\" title=\"Postal codes in Canada\">\n postal codes in Canada\n </a>\n where the first letter is M. Postal codes beginning with M are located within the city of\n <a href=\"/wiki/Toronto\" title=\"Toronto\">\n Toronto\n </a>\n in the province of\n <a href=\"/wiki/Ontario\" title=\"Ontario\">\n Ontario\n </a>\n . Only the first three characters are listed, corresponding to the Forward Sortation Area.\n </p>\n <p>\n <a href=\"/wiki/Canada_Post\" title=\"Canada Post\">\n Canada Post\n </a>\n provides a free postal code look-up tool on its website,\n <sup class=\"reference\" id=\"cite_ref-1\">\n <a href=\"#cite_note-1\">\n [1]\n </a>\n </sup>\n via its\n <a href=\"/wiki/Mobile_app\" title=\"Mobile app\">\n applications\n </a>\n for such\n <a class=\"mw-redirect\" href=\"/wiki/Smartphones\" title=\"Smartphones\">\n smartphones\n </a>\n as the\n <a href=\"/wiki/IPhone\" title=\"IPhone\">\n iPhone\n </a>\n and\n <a href=\"/wiki/BlackBerry\" title=\"BlackBerry\">\n BlackBerry\n </a>\n ,\n <sup class=\"reference\" id=\"cite_ref-2\">\n <a href=\"#cite_note-2\">\n [2]\n </a>\n </sup>\n and sells hard-copy directories and\n <a href=\"/wiki/CD-ROM\" title=\"CD-ROM\">\n CD-ROMs\n </a>\n . Many vendors also sell validation tools, which allow customers to properly match addresses and postal codes. 
Hard-copy directories can also be consulted in all post offices, and some libraries.\n </p>\n <h2>\n <span class=\"mw-headline\" id=\"Toronto_-_FSAs\">\n <a href=\"/wiki/Toronto\" title=\"Toronto\">\n Toronto\n </a>\n -\n <a href=\"/wiki/Postal_codes_in_Canada#Forward_sortation_areas\" title=\"Postal codes in Canada\">\n FSAs\n </a>\n </span>\n <span class=\"mw-editsection\">\n <span class=\"mw-editsection-bracket\">\n [\n </span>\n <a href=\"/w/index.php?title=List_of_postal_codes_of_Canada:_M&action=edit&section=1\" title=\"Edit section: Toronto - FSAs\">\n edit\n </a>\n <span class=\"mw-editsection-bracket\">\n ]\n </span>\n </span>\n </h2>\n <p>\n Note: There are no rural FSAs in Toronto, hence no postal codes start with M0.\n </p>\n <table class=\"wikitable sortable\">\n <tbody>\n <tr>\n <th>\n Postcode\n </th>\n <th>\n Borough\n </th>\n <th>\n Neighbourhood\n </th>\n </tr>\n <tr>\n <td>\n M1A\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M2A\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M3A\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n <a href=\"/wiki/Parkwoods\" title=\"Parkwoods\">\n Parkwoods\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M4A\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n <a href=\"/wiki/Victoria_Village\" title=\"Victoria Village\">\n Victoria Village\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5A\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Harbourfront_(Toronto)\" title=\"Harbourfront (Toronto)\">\n Harbourfront\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5A\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Regent_Park\" title=\"Regent Park\">\n Regent Park\n </a>\n 
</td>\n </tr>\n <tr>\n <td>\n M6A\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n <a href=\"/wiki/Lawrence_Heights\" title=\"Lawrence Heights\">\n Lawrence Heights\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M6A\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n <a href=\"/wiki/Lawrence_Manor\" title=\"Lawrence Manor\">\n Lawrence Manor\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M7A\n </td>\n <td>\n <a href=\"/wiki/Queen%27s_Park_(Toronto)\" title=\"Queen's Park (Toronto)\">\n Queen's Park\n </a>\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M8A\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M9A\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Islington_Avenue\" title=\"Islington Avenue\">\n Islington Avenue\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M1B\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a href=\"/wiki/Rouge,_Toronto\" title=\"Rouge, Toronto\">\n Rouge\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M1B\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a href=\"/wiki/Malvern,_Toronto\" title=\"Malvern, Toronto\">\n Malvern\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M2B\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M3B\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n Don Mills North\n </td>\n </tr>\n <tr>\n <td>\n M4B\n </td>\n <td>\n <a href=\"/wiki/East_York\" title=\"East York\">\n East York\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Woodbine_Gardens\" title=\"Woodbine Gardens\">\n Woodbine Gardens\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M4B\n 
</td>\n <td>\n <a href=\"/wiki/East_York\" title=\"East York\">\n East York\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Parkview_Hill\" title=\"Parkview Hill\">\n Parkview Hill\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5B\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Ryerson\" title=\"Ryerson\">\n Ryerson\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5B\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n Garden District\n </td>\n </tr>\n <tr>\n <td>\n M6B\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Glencairn,_Ontario\" title=\"Glencairn, Ontario\">\n Glencairn\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M7B\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M8B\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M9B\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n Cloverdale\n </td>\n </tr>\n <tr>\n <td>\n M9B\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n <a href=\"/wiki/Islington\" title=\"Islington\">\n Islington\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M9B\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n Martin Grove\n </td>\n </tr>\n <tr>\n <td>\n M9B\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n <a href=\"/wiki/Princess_Gardens\" title=\"Princess Gardens\">\n Princess Gardens\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M9B\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/West_Deane_Park\" title=\"West Deane Park\">\n West 
Deane Park\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M1C\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a href=\"/wiki/Highland_Creek_(Toronto)\" title=\"Highland Creek (Toronto)\">\n Highland Creek\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M1C\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Rouge_Hill\" title=\"Rouge Hill\">\n Rouge Hill\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M1C\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a href=\"/wiki/Port_Union,_Toronto\" title=\"Port Union, Toronto\">\n Port Union\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M2C\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M3C\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n <a href=\"/wiki/Flemingdon_Park\" title=\"Flemingdon Park\">\n Flemingdon Park\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M3C\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n Don Mills South\n </td>\n </tr>\n <tr>\n <td>\n M4C\n </td>\n <td>\n <a href=\"/wiki/East_York\" title=\"East York\">\n East York\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Woodbine_Heights\" title=\"Woodbine Heights\">\n Woodbine Heights\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5C\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/St._James_Town\" title=\"St. James Town\">\n St. 
James Town\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M6C\n </td>\n <td>\n York\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Humewood-Cedarvale\" title=\"Humewood-Cedarvale\">\n Humewood-Cedarvale\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M7C\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M8C\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M9C\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n Bloordale Gardens\n </td>\n </tr>\n <tr>\n <td>\n M9C\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n Eringate\n </td>\n </tr>\n <tr>\n <td>\n M9C\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n <a href=\"/wiki/Markland_Wood\" title=\"Markland Wood\">\n Markland Wood\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M9C\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n Old Burnhamthorpe\n </td>\n </tr>\n <tr>\n <td>\n M1E\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n Guildwood\n </td>\n </tr>\n <tr>\n <td>\n M1E\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a href=\"/wiki/Morningside,_Toronto\" title=\"Morningside, Toronto\">\n Morningside\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M1E\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a href=\"/wiki/West_Hill,_Toronto\" title=\"West Hill, Toronto\">\n West Hill\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M2E\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M3E\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M4E\n </td>\n <td>\n 
<a href=\"/wiki/East_Toronto\" title=\"East Toronto\">\n East Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/The_Beaches\" title=\"The Beaches\">\n The Beaches\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5E\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Berczy_Park\" title=\"Berczy Park\">\n Berczy Park\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M6E\n </td>\n <td>\n York\n </td>\n <td>\n Caledonia-Fairbanks\n </td>\n </tr>\n <tr>\n <td>\n M7E\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M8E\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M9E\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M1G\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a href=\"/wiki/Woburn,_Toronto\" title=\"Woburn, Toronto\">\n Woburn\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M2G\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M3G\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M4G\n </td>\n <td>\n <a href=\"/wiki/East_York\" title=\"East York\">\n East York\n </a>\n </td>\n <td>\n <a href=\"/wiki/Leaside\" title=\"Leaside\">\n Leaside\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5G\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n Central Bay Street\n </td>\n </tr>\n <tr>\n <td>\n M6G\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n Christie\n </td>\n </tr>\n <tr>\n <td>\n M7G\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M8G\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M9G\n </td>\n 
<td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M1H\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a href=\"/wiki/Woburn,_Toronto\" title=\"Woburn, Toronto\">\n Cedarbrae\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M2H\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n <a href=\"/wiki/Hillcrest_Village\" title=\"Hillcrest Village\">\n Hillcrest Village\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M3H\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n <a href=\"/wiki/Bathurst_Manor\" title=\"Bathurst Manor\">\n Bathurst Manor\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M3H\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n Downsview North\n </td>\n </tr>\n <tr>\n <td>\n M3H\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Wilson_Heights,_Toronto\" title=\"Wilson Heights, Toronto\">\n Wilson Heights\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M4H\n </td>\n <td>\n <a href=\"/wiki/East_York\" title=\"East York\">\n East York\n </a>\n </td>\n <td>\n <a href=\"/wiki/Thorncliffe_Park\" title=\"Thorncliffe Park\">\n Thorncliffe Park\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5H\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Adelaide\" title=\"Adelaide\">\n Adelaide\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5H\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/King\" title=\"King\">\n King\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5H\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n 
Richmond\n </td>\n </tr>\n <tr>\n <td>\n M6H\n </td>\n <td>\n <a href=\"/wiki/West_Toronto\" title=\"West Toronto\">\n West Toronto\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Dovercourt_Village\" title=\"Dovercourt Village\">\n Dovercourt Village\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M6H\n </td>\n <td>\n <a href=\"/wiki/West_Toronto\" title=\"West Toronto\">\n West Toronto\n </a>\n </td>\n <td>\n Dufferin\n </td>\n </tr>\n <tr>\n <td>\n M7H\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M8H\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M9H\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M1J\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a href=\"/wiki/Scarborough_Village\" title=\"Scarborough Village\">\n Scarborough Village\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M2J\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n Fairview\n </td>\n </tr>\n <tr>\n <td>\n M2J\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n <a href=\"/wiki/Henry_Farm\" title=\"Henry Farm\">\n Henry Farm\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M2J\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n Oriole\n </td>\n </tr>\n <tr>\n <td>\n M3J\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Northwood_Park\" title=\"Northwood Park\">\n Northwood Park\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M3J\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n <a href=\"/wiki/York_University\" title=\"York University\">\n York University\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M4J\n 
</td>\n <td>\n <a href=\"/wiki/East_York\" title=\"East York\">\n East York\n </a>\n </td>\n <td>\n <a href=\"/wiki/East_Toronto\" title=\"East Toronto\">\n East Toronto\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5J\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n Harbourfront East\n </td>\n </tr>\n <tr>\n <td>\n M5J\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Toronto_Islands\" title=\"Toronto Islands\">\n Toronto Islands\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5J\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Union_Station_(Toronto)\" title=\"Union Station (Toronto)\">\n Union Station\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M6J\n </td>\n <td>\n <a href=\"/wiki/West_Toronto\" title=\"West Toronto\">\n West Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Little_Portugal,_Toronto\" title=\"Little Portugal, Toronto\">\n Little Portugal\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M6J\n </td>\n <td>\n <a href=\"/wiki/West_Toronto\" title=\"West Toronto\">\n West Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Trinity%E2%80%93Bellwoods\" title=\"Trinity–Bellwoods\">\n Trinity\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M7J\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M8J\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M9J\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M1K\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n East Birchmount Park\n </td>\n </tr>\n <tr>\n <td>\n M1K\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a 
href=\"/wiki/Ionview\" title=\"Ionview\">\n Ionview\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M1K\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Kennedy_Park,_Toronto\" title=\"Kennedy Park, Toronto\">\n Kennedy Park\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M2K\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n <a href=\"/wiki/Bayview_Village\" title=\"Bayview Village\">\n Bayview Village\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M3K\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n <a href=\"/wiki/CFB_Toronto\" title=\"CFB Toronto\">\n CFB Toronto\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M3K\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n Downsview East\n </td>\n </tr>\n <tr>\n <td>\n M4K\n </td>\n <td>\n <a href=\"/wiki/East_Toronto\" title=\"East Toronto\">\n East Toronto\n </a>\n </td>\n <td>\n The Danforth West\n </td>\n </tr>\n <tr>\n <td>\n M4K\n </td>\n <td>\n <a href=\"/wiki/East_Toronto\" title=\"East Toronto\">\n East Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Riverdale,_Toronto\" title=\"Riverdale, Toronto\">\n Riverdale\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5K\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Design_Exchange\" title=\"Design Exchange\">\n Design Exchange\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5K\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Toronto_Dominion_Centre\" title=\"Toronto Dominion Centre\">\n Toronto Dominion Centre\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M6K\n </td>\n <td>\n <a href=\"/wiki/West_Toronto\" title=\"West Toronto\">\n West Toronto\n 
</a>\n </td>\n <td>\n Brockton\n </td>\n </tr>\n <tr>\n <td>\n M6K\n </td>\n <td>\n <a href=\"/wiki/West_Toronto\" title=\"West Toronto\">\n West Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Exhibition_Place\" title=\"Exhibition Place\">\n Exhibition Place\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M6K\n </td>\n <td>\n <a href=\"/wiki/West_Toronto\" title=\"West Toronto\">\n West Toronto\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Parkdale_Village\" title=\"Parkdale Village\">\n Parkdale Village\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M7K\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M8K\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M9K\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M1L\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a href=\"/wiki/Clairlea\" title=\"Clairlea\">\n Clairlea\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M1L\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a href=\"/wiki/Golden_Mile,_Toronto\" title=\"Golden Mile, Toronto\">\n Golden Mile\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M1L\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a href=\"/wiki/Oakridge,_Toronto\" title=\"Oakridge, Toronto\">\n Oakridge\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M2L\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Silver_Hills\" title=\"Silver Hills\">\n Silver Hills\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M2L\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n <a href=\"/wiki/York_Mills\" title=\"York Mills\">\n York Mills\n </a>\n 
</td>\n </tr>\n <tr>\n <td>\n M3L\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n <a href=\"/wiki/Downsview\" title=\"Downsview\">\n Downsview West\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M4L\n </td>\n <td>\n <a href=\"/wiki/East_Toronto\" title=\"East Toronto\">\n East Toronto\n </a>\n </td>\n <td>\n The Beaches West\n </td>\n </tr>\n <tr>\n <td>\n M4L\n </td>\n <td>\n <a href=\"/wiki/East_Toronto\" title=\"East Toronto\">\n East Toronto\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/India_Bazaar\" title=\"India Bazaar\">\n India Bazaar\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5L\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Commerce_Court\" title=\"Commerce Court\">\n Commerce Court\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5L\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n Victoria Hotel\n </td>\n </tr>\n <tr>\n <td>\n M6L\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Maple_Leaf_Park\" title=\"Maple Leaf Park\">\n Maple Leaf Park\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M6L\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n North Park\n </td>\n </tr>\n <tr>\n <td>\n M6L\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n Upwood Park\n </td>\n </tr>\n <tr>\n <td>\n M7L\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M8L\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M9L\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n <a href=\"/wiki/Humber_Summit\" title=\"Humber Summit\">\n Humber Summit\n 
</a>\n </td>\n </tr>\n <tr>\n <td>\n M1M\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a href=\"/wiki/Cliffcrest\" title=\"Cliffcrest\">\n Cliffcrest\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M1M\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a href=\"/wiki/Cliffside,_Toronto\" title=\"Cliffside, Toronto\">\n Cliffside\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M1M\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n Scarborough Village West\n </td>\n </tr>\n <tr>\n <td>\n M2M\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n <a href=\"/wiki/Newtonbrook\" title=\"Newtonbrook\">\n Newtonbrook\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M2M\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n <a href=\"/wiki/Willowdale,_Toronto\" title=\"Willowdale, Toronto\">\n Willowdale\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M3M\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n Downsview Central\n </td>\n </tr>\n <tr>\n <td>\n M4M\n </td>\n <td>\n <a href=\"/wiki/East_Toronto\" title=\"East Toronto\">\n East Toronto\n </a>\n </td>\n <td>\n Studio District\n </td>\n </tr>\n <tr>\n <td>\n M5M\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n <a href=\"/wiki/Bedford_Park,_Toronto\" title=\"Bedford Park, Toronto\">\n Bedford Park\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5M\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n Lawrence Manor East\n </td>\n </tr>\n <tr>\n <td>\n M6M\n </td>\n <td>\n <a href=\"/wiki/York\" title=\"York\">\n York\n </a>\n </td>\n <td>\n Del Ray\n </td>\n </tr>\n <tr>\n <td>\n 
M6M\n </td>\n <td>\n <a href=\"/wiki/York\" title=\"York\">\n York\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Keelesdale\" title=\"Keelesdale\">\n Keelesdale\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M6M\n </td>\n <td>\n <a href=\"/wiki/York\" title=\"York\">\n York\n </a>\n </td>\n <td>\n <a href=\"/wiki/Mount_Dennis\" title=\"Mount Dennis\">\n Mount Dennis\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M6M\n </td>\n <td>\n <a href=\"/wiki/York\" title=\"York\">\n York\n </a>\n </td>\n <td>\n <a href=\"/wiki/Silverthorn,_Toronto\" title=\"Silverthorn, Toronto\">\n Silverthorn\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M7M\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M8M\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M9M\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Emery,_Toronto\" title=\"Emery, Toronto\">\n Emery\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M9M\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Humberlea\" title=\"Humberlea\">\n Humberlea\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M1N\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a href=\"/wiki/Birch_Cliff\" title=\"Birch Cliff\">\n Birch Cliff\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M1N\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n Cliffside West\n </td>\n </tr>\n <tr>\n <td>\n M2N\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n Willowdale South\n </td>\n </tr>\n <tr>\n <td>\n M3N\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n Downsview Northwest\n 
</td>\n </tr>\n <tr>\n <td>\n M4N\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Central_Toronto\" title=\"Central Toronto\">\n Central Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Lawrence_Park,_Toronto\" title=\"Lawrence Park, Toronto\">\n Lawrence Park\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5N\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Central_Toronto\" title=\"Central Toronto\">\n Central Toronto\n </a>\n </td>\n <td>\n Roselawn\n </td>\n </tr>\n <tr>\n <td>\n M6N\n </td>\n <td>\n <a href=\"/wiki/York\" title=\"York\">\n York\n </a>\n </td>\n <td>\n The Junction North\n </td>\n </tr>\n <tr>\n <td>\n M6N\n </td>\n <td>\n <a href=\"/wiki/York\" title=\"York\">\n York\n </a>\n </td>\n <td>\n Runnymede\n </td>\n </tr>\n <tr>\n <td>\n M7N\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M8N\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M9N\n </td>\n <td>\n <a href=\"/wiki/York\" title=\"York\">\n York\n </a>\n </td>\n <td>\n <a href=\"/wiki/Weston,_Toronto\" title=\"Weston, Toronto\">\n Weston\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M1P\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a href=\"/wiki/Dorset_Park\" title=\"Dorset Park\">\n Dorset Park\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M1P\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a href=\"/wiki/Scarborough_Town_Centre\" title=\"Scarborough Town Centre\">\n Scarborough Town Centre\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M1P\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Wexford_Heights\" title=\"Wexford Heights\">\n Wexford Heights\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M2P\n </td>\n <td>\n <a href=\"/wiki/North_York\" 
title=\"North York\">\n North York\n </a>\n </td>\n <td>\n York Mills West\n </td>\n </tr>\n <tr>\n <td>\n M3P\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M4P\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Central_Toronto\" title=\"Central Toronto\">\n Central Toronto\n </a>\n </td>\n <td>\n Davisville North\n </td>\n </tr>\n <tr>\n <td>\n M5P\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Central_Toronto\" title=\"Central Toronto\">\n Central Toronto\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Forest_Hill_North\" title=\"Forest Hill North\">\n Forest Hill North\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5P\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Central_Toronto\" title=\"Central Toronto\">\n Central Toronto\n </a>\n </td>\n <td>\n Forest Hill West\n </td>\n </tr>\n <tr>\n <td>\n M6P\n </td>\n <td>\n <a href=\"/wiki/West_Toronto\" title=\"West Toronto\">\n West Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/High_Park\" title=\"High Park\">\n High Park\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M6P\n </td>\n <td>\n <a href=\"/wiki/West_Toronto\" title=\"West Toronto\">\n West Toronto\n </a>\n </td>\n <td>\n The Junction South\n </td>\n </tr>\n <tr>\n <td>\n M7P\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M8P\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M9P\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Westmount\" title=\"Westmount\">\n Westmount\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M1R\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a href=\"/wiki/Maryvale,_Toronto\" title=\"Maryvale, Toronto\">\n Maryvale\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M1R\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" 
title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a href=\"/wiki/Wexford\" title=\"Wexford\">\n Wexford\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M2R\n </td>\n <td>\n <a href=\"/wiki/North_York\" title=\"North York\">\n North York\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Willowdale_West\" title=\"Willowdale West\">\n Willowdale West\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M3R\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M4R\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Central_Toronto\" title=\"Central Toronto\">\n Central Toronto\n </a>\n </td>\n <td>\n North Toronto West\n </td>\n </tr>\n <tr>\n <td>\n M5R\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Central_Toronto\" title=\"Central Toronto\">\n Central Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/The_Annex\" title=\"The Annex\">\n The Annex\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5R\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Central_Toronto\" title=\"Central Toronto\">\n Central Toronto\n </a>\n </td>\n <td>\n North Midtown\n </td>\n </tr>\n <tr>\n <td>\n M5R\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Central_Toronto\" title=\"Central Toronto\">\n Central Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Yorkville,_Toronto\" title=\"Yorkville, Toronto\">\n Yorkville\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M6R\n </td>\n <td>\n <a href=\"/wiki/West_Toronto\" title=\"West Toronto\">\n West Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Parkdale,_Toronto\" title=\"Parkdale, Toronto\">\n Parkdale\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M6R\n </td>\n <td>\n <a href=\"/wiki/West_Toronto\" title=\"West Toronto\">\n West Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Roncesvalles\" title=\"Roncesvalles\">\n Roncesvalles\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M7R\n </td>\n <td>\n Mississauga\n </td>\n <td>\n Canada Post Gateway Processing Centre\n </td>\n </tr>\n <tr>\n <td>\n M8R\n </td>\n 
<td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M9R\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n <a href=\"/wiki/Kingsview_Village\" title=\"Kingsview Village\">\n Kingsview Village\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M9R\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n Martin Grove Gardens\n </td>\n </tr>\n <tr>\n <td>\n M9R\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n Richview Gardens\n </td>\n </tr>\n <tr>\n <td>\n M9R\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n St. Phillips\n </td>\n </tr>\n <tr>\n <td>\n M1S\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a href=\"/wiki/Agincourt,_Toronto\" title=\"Agincourt, Toronto\">\n Agincourt\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M2S\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M3S\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M4S\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Central_Toronto\" title=\"Central Toronto\">\n Central Toronto\n </a>\n </td>\n <td>\n Davisville\n </td>\n </tr>\n <tr>\n <td>\n M5S\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n Harbord\n </td>\n </tr>\n <tr>\n <td>\n M5S\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/University_of_Toronto\" title=\"University of Toronto\">\n University of Toronto\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M6S\n </td>\n <td>\n <a href=\"/wiki/West_Toronto\" title=\"West Toronto\">\n West Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Runnymede\" 
title=\"Runnymede\">\n Runnymede\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M6S\n </td>\n <td>\n <a href=\"/wiki/West_Toronto\" title=\"West Toronto\">\n West Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Swansea\" title=\"Swansea\">\n Swansea\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M7S\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M8S\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M9S\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M1T\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n Clarks Corners\n </td>\n </tr>\n <tr>\n <td>\n M1T\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n Sullivan\n </td>\n </tr>\n <tr>\n <td>\n M1T\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a href=\"/wiki/Tam_O%27Shanter_%E2%80%93_Sullivan\" title=\"Tam O'Shanter – Sullivan\">\n Tam O'Shanter\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M2T\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M3T\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M4T\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Central_Toronto\" title=\"Central Toronto\">\n Central Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Moore_Park,_Toronto\" title=\"Moore Park, Toronto\">\n Moore Park\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M4T\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Central_Toronto\" title=\"Central Toronto\">\n Central Toronto\n </a>\n </td>\n <td>\n Summerhill East\n </td>\n </tr>\n <tr>\n <td>\n M5T\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Chinatown\" 
title=\"Chinatown\">\n Chinatown\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5T\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Grange_Park_(Toronto)\" title=\"Grange Park (Toronto)\">\n Grange Park\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5T\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Kensington_Market\" title=\"Kensington Market\">\n Kensington Market\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M6T\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M7T\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M8T\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M9T\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M1V\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Agincourt_North\" title=\"Agincourt North\">\n Agincourt North\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M1V\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n L'Amoreaux East\n </td>\n </tr>\n <tr>\n <td>\n M1V\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a href=\"/wiki/Milliken,_Ontario\" title=\"Milliken, Ontario\">\n Milliken\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M1V\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n Steeles East\n </td>\n </tr>\n <tr>\n <td>\n M2V\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M3V\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n 
</tr>\n <tr>\n <td>\n M4V\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Central_Toronto\" title=\"Central Toronto\">\n Central Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Deer_Park,_Toronto\" title=\"Deer Park, Toronto\">\n Deer Park\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M4V\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Central_Toronto\" title=\"Central Toronto\">\n Central Toronto\n </a>\n </td>\n <td>\n Forest Hill SE\n </td>\n </tr>\n <tr>\n <td>\n M4V\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Central_Toronto\" title=\"Central Toronto\">\n Central Toronto\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Rathnelly\" title=\"Rathnelly\">\n Rathnelly\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M4V\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Central_Toronto\" title=\"Central Toronto\">\n Central Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/South_Hill,_Toronto\" title=\"South Hill, Toronto\">\n South Hill\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M4V\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Central_Toronto\" title=\"Central Toronto\">\n Central Toronto\n </a>\n </td>\n <td>\n Summerhill West\n </td>\n </tr>\n <tr>\n <td>\n M5V\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/CN_Tower\" title=\"CN Tower\">\n CN Tower\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5V\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n Bathurst Quay\n </td>\n </tr>\n <tr>\n <td>\n M5V\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n Island airport\n </td>\n </tr>\n <tr>\n <td>\n M5V\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n Harbourfront West\n </td>\n </tr>\n <tr>\n <td>\n M5V\n </td>\n <td>\n <a 
href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/King_and_Spadina\" title=\"King and Spadina\">\n King and Spadina\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5V\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Railway_Lands\" title=\"Railway Lands\">\n Railway Lands\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5V\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/South_Niagara\" title=\"South Niagara\">\n South Niagara\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M6V\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M7V\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M8V\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n Humber Bay Shores\n </td>\n </tr>\n <tr>\n <td>\n M8V\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n Mimico South\n </td>\n </tr>\n <tr>\n <td>\n M8V\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n <a href=\"/wiki/New_Toronto\" title=\"New Toronto\">\n New Toronto\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M9V\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n Albion Gardens\n </td>\n </tr>\n <tr>\n <td>\n M9V\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Beaumond_Heights\" title=\"Beaumond Heights\">\n Beaumond Heights\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M9V\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n Humbergate\n </td>\n </tr>\n 
<tr>\n <td>\n M9V\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Mount_Olive-Silverstone-Jamestown\" title=\"Mount Olive-Silverstone-Jamestown\">\n Jamestown\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M9V\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Mount_Olive-Silverstone-Jamestown\" title=\"Mount Olive-Silverstone-Jamestown\">\n Mount Olive\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M9V\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n <a href=\"/wiki/Silverstone\" title=\"Silverstone\">\n Silverstone\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M9V\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/South_Steeles\" title=\"South Steeles\">\n South Steeles\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M9V\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n <a href=\"/wiki/Thistletown\" title=\"Thistletown\">\n Thistletown\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M1W\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n L'Amoreaux West\n </td>\n </tr>\n <tr>\n <td>\n M1W\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Steeles_West\" title=\"Steeles West\">\n Steeles West\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M2W\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M3W\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M4W\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n <a 
href=\"/wiki/Rosedale,_Toronto\" title=\"Rosedale, Toronto\">\n Rosedale\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5W\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n Stn A PO Boxes 25 The Esplanade\n </td>\n </tr>\n <tr>\n <td>\n M6W\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M7W\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M8W\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n <a href=\"/wiki/Alderwood,_Toronto\" title=\"Alderwood, Toronto\">\n Alderwood\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M8W\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n <a href=\"/wiki/Long_Branch,_Toronto\" title=\"Long Branch, Toronto\">\n Long Branch\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M9W\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Northwest\" title=\"Northwest\">\n Northwest\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M1X\n </td>\n <td>\n <a href=\"/wiki/Scarborough,_Toronto\" title=\"Scarborough, Toronto\">\n Scarborough\n </a>\n </td>\n <td>\n <a class=\"mw-redirect\" href=\"/wiki/Upper_Rouge\" title=\"Upper Rouge\">\n Upper Rouge\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M2X\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M3X\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M4X\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Cabbagetown,_Toronto\" title=\"Cabbagetown, Toronto\">\n Cabbagetown\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M4X\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n 
<td>\n <a href=\"/wiki/St._James_Town\" title=\"St. James Town\">\n St. James Town\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5X\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/First_Canadian_Place\" title=\"First Canadian Place\">\n First Canadian Place\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5X\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Underground_city\" title=\"Underground city\">\n Underground city\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M6X\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M7X\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M8X\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n <a href=\"/wiki/The_Kingsway\" title=\"The Kingsway\">\n The Kingsway\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M8X\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n Montgomery Road\n </td>\n </tr>\n <tr>\n <td>\n M8X\n </td>\n <td>\n <a href=\"/wiki/Etobicoke\" title=\"Etobicoke\">\n Etobicoke\n </a>\n </td>\n <td>\n Old Mill North\n </td>\n </tr>\n <tr>\n <td>\n M9X\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M1Y\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M2Y\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M3Y\n </td>\n <td>\n Not assigned\n </td>\n <td>\n Not assigned\n </td>\n </tr>\n <tr>\n <td>\n M4Y\n </td>\n <td>\n <a href=\"/wiki/Downtown_Toronto\" title=\"Downtown Toronto\">\n Downtown Toronto\n </a>\n </td>\n <td>\n <a href=\"/wiki/Church_and_Wellesley\" title=\"Church and Wellesley\">\n Church and Wellesley\n </a>\n </td>\n </tr>\n <tr>\n <td>\n M5Y\n 
</td>\n ... [output truncated: remaining table rows and Wikipedia page chrome (references, navigation boxes, footer, scripts) omitted] ...\n"
],
[
"# Locate the postal-code table in the parsed page\ntoronto_table = soup.find('table', {'class': 'wikitable sortable'})",
"_____no_output_____"
],
[
"links = toronto_table.findAll('td')",
"_____no_output_____"
],
[
"torolist = []\ncount = 0\nfor x in links:\n if count == 0:\n x1 = x.text\n count += 1\n elif count == 1:\n x2 = x.text\n count +=1\n elif count == 2:\n x3 = x.text\n x3 = x3.replace('\\n','')\n count = 0\n if x3 == 'Not assigned':\n x3 = x2\n if x2 != 'Not assigned': \n torolist.append((x1,x2,x3))\nprint (torolist)\n \n ",
"[('M3A', 'North York', 'Parkwoods'), ('M4A', 'North York', 'Victoria Village'), ('M5A', 'Downtown Toronto', 'Harbourfront'), ('M5A', 'Downtown Toronto', 'Regent Park'), ('M6A', 'North York', 'Lawrence Heights'), ('M6A', 'North York', 'Lawrence Manor'), ('M7A', \"Queen's Park\", \"Queen's Park\"), ('M9A', 'Etobicoke', 'Islington Avenue'), ('M1B', 'Scarborough', 'Rouge'), ('M1B', 'Scarborough', 'Malvern'), ('M3B', 'North York', 'Don Mills North'), ('M4B', 'East York', 'Woodbine Gardens'), ('M4B', 'East York', 'Parkview Hill'), ('M5B', 'Downtown Toronto', 'Ryerson'), ('M5B', 'Downtown Toronto', 'Garden District'), ('M6B', 'North York', 'Glencairn'), ('M9B', 'Etobicoke', 'Cloverdale'), ('M9B', 'Etobicoke', 'Islington'), ('M9B', 'Etobicoke', 'Martin Grove'), ('M9B', 'Etobicoke', 'Princess Gardens'), ('M9B', 'Etobicoke', 'West Deane Park'), ('M1C', 'Scarborough', 'Highland Creek'), ('M1C', 'Scarborough', 'Rouge Hill'), ('M1C', 'Scarborough', 'Port Union'), ('M3C', 'North York', 'Flemingdon Park'), ('M3C', 'North York', 'Don Mills South'), ('M4C', 'East York', 'Woodbine Heights'), ('M5C', 'Downtown Toronto', 'St. 
James Town'), ('M6C', 'York', 'Humewood-Cedarvale'), ('M9C', 'Etobicoke', 'Bloordale Gardens'), ('M9C', 'Etobicoke', 'Eringate'), ('M9C', 'Etobicoke', 'Markland Wood'), ('M9C', 'Etobicoke', 'Old Burnhamthorpe'), ('M1E', 'Scarborough', 'Guildwood'), ('M1E', 'Scarborough', 'Morningside'), ('M1E', 'Scarborough', 'West Hill'), ('M4E', 'East Toronto', 'The Beaches'), ('M5E', 'Downtown Toronto', 'Berczy Park'), ('M6E', 'York', 'Caledonia-Fairbanks'), ('M1G', 'Scarborough', 'Woburn'), ('M4G', 'East York', 'Leaside'), ('M5G', 'Downtown Toronto', 'Central Bay Street'), ('M6G', 'Downtown Toronto', 'Christie'), ('M1H', 'Scarborough', 'Cedarbrae'), ('M2H', 'North York', 'Hillcrest Village'), ('M3H', 'North York', 'Bathurst Manor'), ('M3H', 'North York', 'Downsview North'), ('M3H', 'North York', 'Wilson Heights'), ('M4H', 'East York', 'Thorncliffe Park'), ('M5H', 'Downtown Toronto', 'Adelaide'), ('M5H', 'Downtown Toronto', 'King'), ('M5H', 'Downtown Toronto', 'Richmond'), ('M6H', 'West Toronto', 'Dovercourt Village'), ('M6H', 'West Toronto', 'Dufferin'), ('M1J', 'Scarborough', 'Scarborough Village'), ('M2J', 'North York', 'Fairview'), ('M2J', 'North York', 'Henry Farm'), ('M2J', 'North York', 'Oriole'), ('M3J', 'North York', 'Northwood Park'), ('M3J', 'North York', 'York University'), ('M4J', 'East York', 'East Toronto'), ('M5J', 'Downtown Toronto', 'Harbourfront East'), ('M5J', 'Downtown Toronto', 'Toronto Islands'), ('M5J', 'Downtown Toronto', 'Union Station'), ('M6J', 'West Toronto', 'Little Portugal'), ('M6J', 'West Toronto', 'Trinity'), ('M1K', 'Scarborough', 'East Birchmount Park'), ('M1K', 'Scarborough', 'Ionview'), ('M1K', 'Scarborough', 'Kennedy Park'), ('M2K', 'North York', 'Bayview Village'), ('M3K', 'North York', 'CFB Toronto'), ('M3K', 'North York', 'Downsview East'), ('M4K', 'East Toronto', 'The Danforth West'), ('M4K', 'East Toronto', 'Riverdale'), ('M5K', 'Downtown Toronto', 'Design Exchange'), ('M5K', 'Downtown Toronto', 'Toronto Dominion Centre'), ('M6K', 
'West Toronto', 'Brockton'), ('M6K', 'West Toronto', 'Exhibition Place'), ('M6K', 'West Toronto', 'Parkdale Village'), ('M1L', 'Scarborough', 'Clairlea'), ('M1L', 'Scarborough', 'Golden Mile'), ('M1L', 'Scarborough', 'Oakridge'), ('M2L', 'North York', 'Silver Hills'), ('M2L', 'North York', 'York Mills'), ('M3L', 'North York', 'Downsview West'), ('M4L', 'East Toronto', 'The Beaches West'), ('M4L', 'East Toronto', 'India Bazaar'), ('M5L', 'Downtown Toronto', 'Commerce Court'), ('M5L', 'Downtown Toronto', 'Victoria Hotel'), ('M6L', 'North York', 'Maple Leaf Park'), ('M6L', 'North York', 'North Park'), ('M6L', 'North York', 'Upwood Park'), ('M9L', 'North York', 'Humber Summit'), ('M1M', 'Scarborough', 'Cliffcrest'), ('M1M', 'Scarborough', 'Cliffside'), ('M1M', 'Scarborough', 'Scarborough Village West'), ('M2M', 'North York', 'Newtonbrook'), ('M2M', 'North York', 'Willowdale'), ('M3M', 'North York', 'Downsview Central'), ('M4M', 'East Toronto', 'Studio District'), ('M5M', 'North York', 'Bedford Park'), ('M5M', 'North York', 'Lawrence Manor East'), ('M6M', 'York', 'Del Ray'), ('M6M', 'York', 'Keelesdale'), ('M6M', 'York', 'Mount Dennis'), ('M6M', 'York', 'Silverthorn'), ('M9M', 'North York', 'Emery'), ('M9M', 'North York', 'Humberlea'), ('M1N', 'Scarborough', 'Birch Cliff'), ('M1N', 'Scarborough', 'Cliffside West'), ('M2N', 'North York', 'Willowdale South'), ('M3N', 'North York', 'Downsview Northwest'), ('M4N', 'Central Toronto', 'Lawrence Park'), ('M5N', 'Central Toronto', 'Roselawn'), ('M6N', 'York', 'The Junction North'), ('M6N', 'York', 'Runnymede'), ('M9N', 'York', 'Weston'), ('M1P', 'Scarborough', 'Dorset Park'), ('M1P', 'Scarborough', 'Scarborough Town Centre'), ('M1P', 'Scarborough', 'Wexford Heights'), ('M2P', 'North York', 'York Mills West'), ('M4P', 'Central Toronto', 'Davisville North'), ('M5P', 'Central Toronto', 'Forest Hill North'), ('M5P', 'Central Toronto', 'Forest Hill West'), ('M6P', 'West Toronto', 'High Park'), ('M6P', 'West Toronto', 'The Junction 
South'), ('M9P', 'Etobicoke', 'Westmount'), ('M1R', 'Scarborough', 'Maryvale'), ('M1R', 'Scarborough', 'Wexford'), ('M2R', 'North York', 'Willowdale West'), ('M4R', 'Central Toronto', 'North Toronto West'), ('M5R', 'Central Toronto', 'The Annex'), ('M5R', 'Central Toronto', 'North Midtown'), ('M5R', 'Central Toronto', 'Yorkville'), ('M6R', 'West Toronto', 'Parkdale'), ('M6R', 'West Toronto', 'Roncesvalles'), ('M7R', 'Mississauga', 'Canada Post Gateway Processing Centre'), ('M9R', 'Etobicoke', 'Kingsview Village'), ('M9R', 'Etobicoke', 'Martin Grove Gardens'), ('M9R', 'Etobicoke', 'Richview Gardens'), ('M9R', 'Etobicoke', 'St. Phillips'), ('M1S', 'Scarborough', 'Agincourt'), ('M4S', 'Central Toronto', 'Davisville'), ('M5S', 'Downtown Toronto', 'Harbord'), ('M5S', 'Downtown Toronto', 'University of Toronto'), ('M6S', 'West Toronto', 'Runnymede'), ('M6S', 'West Toronto', 'Swansea'), ('M1T', 'Scarborough', 'Clarks Corners'), ('M1T', 'Scarborough', 'Sullivan'), ('M1T', 'Scarborough', \"Tam O'Shanter\"), ('M4T', 'Central Toronto', 'Moore Park'), ('M4T', 'Central Toronto', 'Summerhill East'), ('M5T', 'Downtown Toronto', 'Chinatown'), ('M5T', 'Downtown Toronto', 'Grange Park'), ('M5T', 'Downtown Toronto', 'Kensington Market'), ('M1V', 'Scarborough', 'Agincourt North'), ('M1V', 'Scarborough', \"L'Amoreaux East\"), ('M1V', 'Scarborough', 'Milliken'), ('M1V', 'Scarborough', 'Steeles East'), ('M4V', 'Central Toronto', 'Deer Park'), ('M4V', 'Central Toronto', 'Forest Hill SE'), ('M4V', 'Central Toronto', 'Rathnelly'), ('M4V', 'Central Toronto', 'South Hill'), ('M4V', 'Central Toronto', 'Summerhill West'), ('M5V', 'Downtown Toronto', 'CN Tower'), ('M5V', 'Downtown Toronto', 'Bathurst Quay'), ('M5V', 'Downtown Toronto', 'Island airport'), ('M5V', 'Downtown Toronto', 'Harbourfront West'), ('M5V', 'Downtown Toronto', 'King and Spadina'), ('M5V', 'Downtown Toronto', 'Railway Lands'), ('M5V', 'Downtown Toronto', 'South Niagara'), ('M8V', 'Etobicoke', 'Humber Bay Shores'), ('M8V', 
'Etobicoke', 'Mimico South'), ('M8V', 'Etobicoke', 'New Toronto'), ('M9V', 'Etobicoke', 'Albion Gardens'), ('M9V', 'Etobicoke', 'Beaumond Heights'), ('M9V', 'Etobicoke', 'Humbergate'), ('M9V', 'Etobicoke', 'Jamestown'), ('M9V', 'Etobicoke', 'Mount Olive'), ('M9V', 'Etobicoke', 'Silverstone'), ('M9V', 'Etobicoke', 'South Steeles'), ('M9V', 'Etobicoke', 'Thistletown'), ('M1W', 'Scarborough', \"L'Amoreaux West\"), ('M1W', 'Scarborough', 'Steeles West'), ('M4W', 'Downtown Toronto', 'Rosedale'), ('M5W', 'Downtown Toronto', 'Stn A PO Boxes 25 The Esplanade'), ('M8W', 'Etobicoke', 'Alderwood'), ('M8W', 'Etobicoke', 'Long Branch'), ('M9W', 'Etobicoke', 'Northwest'), ('M1X', 'Scarborough', 'Upper Rouge'), ('M4X', 'Downtown Toronto', 'Cabbagetown'), ('M4X', 'Downtown Toronto', 'St. James Town'), ('M5X', 'Downtown Toronto', 'First Canadian Place'), ('M5X', 'Downtown Toronto', 'Underground city'), ('M8X', 'Etobicoke', 'The Kingsway'), ('M8X', 'Etobicoke', 'Montgomery Road'), ('M8X', 'Etobicoke', 'Old Mill North'), ('M4Y', 'Downtown Toronto', 'Church and Wellesley'), ('M7Y', 'East Toronto', 'Business reply mail Processing Centre969 Eastern'), ('M8Y', 'Etobicoke', 'Humber Bay'), ('M8Y', 'Etobicoke', \"King's Mill Park\"), ('M8Y', 'Etobicoke', 'Kingsway Park South East'), ('M8Y', 'Etobicoke', 'Mimico NE'), ('M8Y', 'Etobicoke', 'Old Mill South'), ('M8Y', 'Etobicoke', 'The Queensway East'), ('M8Y', 'Etobicoke', 'Royal York South East'), ('M8Y', 'Etobicoke', 'Sunnylea'), ('M8Z', 'Etobicoke', 'Kingsway Park South West'), ('M8Z', 'Etobicoke', 'Mimico NW'), ('M8Z', 'Etobicoke', 'The Queensway West'), ('M8Z', 'Etobicoke', 'Royal York South West'), ('M8Z', 'Etobicoke', 'South of Bloor')]\n"
],
[
"torodict = {}\nfor x in torolist:\n if x[0] in torodict:\n torodict[x[0]] = [x[0], x[1], torodict[x[0]][1] + ', ' + x[2]]\n else:\n torodict[x[0]] = [x[0], x[1], x[2]]\n \ntorodictindex = {}\nfor count, x in enumerate(torodict):\n torodictindex[count] = [x, torodict[x][1], torodict[x][2]]\n \nprint(torodictindex)",
"{0: ['M3A', 'North York', 'Parkwoods'], 1: ['M4A', 'North York', 'Victoria Village'], 2: ['M5A', 'Downtown Toronto', 'Downtown Toronto, Regent Park'], 3: ['M6A', 'North York', 'North York, Lawrence Manor'], 4: ['M7A', \"Queen's Park\", \"Queen's Park\"], 5: ['M9A', 'Etobicoke', 'Islington Avenue'], 6: ['M1B', 'Scarborough', 'Scarborough, Malvern'], 7: ['M3B', 'North York', 'Don Mills North'], 8: ['M4B', 'East York', 'East York, Parkview Hill'], 9: ['M5B', 'Downtown Toronto', 'Downtown Toronto, Garden District'], 10: ['M6B', 'North York', 'Glencairn'], 11: ['M9B', 'Etobicoke', 'Etobicoke, West Deane Park'], 12: ['M1C', 'Scarborough', 'Scarborough, Port Union'], 13: ['M3C', 'North York', 'North York, Don Mills South'], 14: ['M4C', 'East York', 'Woodbine Heights'], 15: ['M5C', 'Downtown Toronto', 'St. James Town'], 16: ['M6C', 'York', 'Humewood-Cedarvale'], 17: ['M9C', 'Etobicoke', 'Etobicoke, Old Burnhamthorpe'], 18: ['M1E', 'Scarborough', 'Scarborough, West Hill'], 19: ['M4E', 'East Toronto', 'The Beaches'], 20: ['M5E', 'Downtown Toronto', 'Berczy Park'], 21: ['M6E', 'York', 'Caledonia-Fairbanks'], 22: ['M1G', 'Scarborough', 'Woburn'], 23: ['M4G', 'East York', 'Leaside'], 24: ['M5G', 'Downtown Toronto', 'Central Bay Street'], 25: ['M6G', 'Downtown Toronto', 'Christie'], 26: ['M1H', 'Scarborough', 'Cedarbrae'], 27: ['M2H', 'North York', 'Hillcrest Village'], 28: ['M3H', 'North York', 'North York, Wilson Heights'], 29: ['M4H', 'East York', 'Thorncliffe Park'], 30: ['M5H', 'Downtown Toronto', 'Downtown Toronto, Richmond'], 31: ['M6H', 'West Toronto', 'West Toronto, Dufferin'], 32: ['M1J', 'Scarborough', 'Scarborough Village'], 33: ['M2J', 'North York', 'North York, Oriole'], 34: ['M3J', 'North York', 'North York, York University'], 35: ['M4J', 'East York', 'East Toronto'], 36: ['M5J', 'Downtown Toronto', 'Downtown Toronto, Union Station'], 37: ['M6J', 'West Toronto', 'West Toronto, Trinity'], 38: ['M1K', 'Scarborough', 'Scarborough, Kennedy Park'], 39: ['M2K', 'North 
York', 'Bayview Village'], 40: ['M3K', 'North York', 'North York, Downsview East'], 41: ['M4K', 'East Toronto', 'East Toronto, Riverdale'], 42: ['M5K', 'Downtown Toronto', 'Downtown Toronto, Toronto Dominion Centre'], 43: ['M6K', 'West Toronto', 'West Toronto, Parkdale Village'], 44: ['M1L', 'Scarborough', 'Scarborough, Oakridge'], 45: ['M2L', 'North York', 'North York, York Mills'], 46: ['M3L', 'North York', 'Downsview West'], 47: ['M4L', 'East Toronto', 'East Toronto, India Bazaar'], 48: ['M5L', 'Downtown Toronto', 'Downtown Toronto, Victoria Hotel'], 49: ['M6L', 'North York', 'North York, Upwood Park'], 50: ['M9L', 'North York', 'Humber Summit'], 51: ['M1M', 'Scarborough', 'Scarborough, Scarborough Village West'], 52: ['M2M', 'North York', 'North York, Willowdale'], 53: ['M3M', 'North York', 'Downsview Central'], 54: ['M4M', 'East Toronto', 'Studio District'], 55: ['M5M', 'North York', 'North York, Lawrence Manor East'], 56: ['M6M', 'York', 'York, Silverthorn'], 57: ['M9M', 'North York', 'North York, Humberlea'], 58: ['M1N', 'Scarborough', 'Scarborough, Cliffside West'], 59: ['M2N', 'North York', 'Willowdale South'], 60: ['M3N', 'North York', 'Downsview Northwest'], 61: ['M4N', 'Central Toronto', 'Lawrence Park'], 62: ['M5N', 'Central Toronto', 'Roselawn'], 63: ['M6N', 'York', 'York, Runnymede'], 64: ['M9N', 'York', 'Weston'], 65: ['M1P', 'Scarborough', 'Scarborough, Wexford Heights'], 66: ['M2P', 'North York', 'York Mills West'], 67: ['M4P', 'Central Toronto', 'Davisville North'], 68: ['M5P', 'Central Toronto', 'Central Toronto, Forest Hill West'], 69: ['M6P', 'West Toronto', 'West Toronto, The Junction South'], 70: ['M9P', 'Etobicoke', 'Westmount'], 71: ['M1R', 'Scarborough', 'Scarborough, Wexford'], 72: ['M2R', 'North York', 'Willowdale West'], 73: ['M4R', 'Central Toronto', 'North Toronto West'], 74: ['M5R', 'Central Toronto', 'Central Toronto, Yorkville'], 75: ['M6R', 'West Toronto', 'West Toronto, Roncesvalles'], 76: ['M7R', 'Mississauga', 'Canada Post 
Gateway Processing Centre'], 77: ['M9R', 'Etobicoke', 'Etobicoke, St. Phillips'], 78: ['M1S', 'Scarborough', 'Agincourt'], 79: ['M4S', 'Central Toronto', 'Davisville'], 80: ['M5S', 'Downtown Toronto', 'Downtown Toronto, University of Toronto'], 81: ['M6S', 'West Toronto', 'West Toronto, Swansea'], 82: ['M1T', 'Scarborough', \"Scarborough, Tam O'Shanter\"], 83: ['M4T', 'Central Toronto', 'Central Toronto, Summerhill East'], 84: ['M5T', 'Downtown Toronto', 'Downtown Toronto, Kensington Market'], 85: ['M1V', 'Scarborough', 'Scarborough, Steeles East'], 86: ['M4V', 'Central Toronto', 'Central Toronto, Summerhill West'], 87: ['M5V', 'Downtown Toronto', 'Downtown Toronto, South Niagara'], 88: ['M8V', 'Etobicoke', 'Etobicoke, New Toronto'], 89: ['M9V', 'Etobicoke', 'Etobicoke, Thistletown'], 90: ['M1W', 'Scarborough', 'Scarborough, Steeles West'], 91: ['M4W', 'Downtown Toronto', 'Rosedale'], 92: ['M5W', 'Downtown Toronto', 'Stn A PO Boxes 25 The Esplanade'], 93: ['M8W', 'Etobicoke', 'Etobicoke, Long Branch'], 94: ['M9W', 'Etobicoke', 'Northwest'], 95: ['M1X', 'Scarborough', 'Upper Rouge'], 96: ['M4X', 'Downtown Toronto', 'Downtown Toronto, St. James Town'], 97: ['M5X', 'Downtown Toronto', 'Downtown Toronto, Underground city'], 98: ['M8X', 'Etobicoke', 'Etobicoke, Old Mill North'], 99: ['M4Y', 'Downtown Toronto', 'Church and Wellesley'], 100: ['M7Y', 'East Toronto', 'Business reply mail Processing Centre969 Eastern'], 101: ['M8Y', 'Etobicoke', 'Etobicoke, Sunnylea'], 102: ['M8Z', 'Etobicoke', 'Etobicoke, South of Bloor']}\n"
],
[
"torodf = pd.DataFrame.from_dict(torodictindex, orient='index', columns=['PostalCode', 'Borough', 'Neighborhood'])\ntorodf",
"_____no_output_____"
],
[
"torodf.shape",
"_____no_output_____"
]
],
[
[
"### Thank you for reviewing this lab!\n[@virmax](https://www.twitter.com/virmax/)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
]
] |
4ab23f214825f81a65893dbedf068221de408c94
| 4,189 |
ipynb
|
Jupyter Notebook
|
2019dl4s_first_colaboratory.ipynb
|
ShinAsakawa/2019dl4s
|
77fb19df5880e6c3595eb14e6d83d1c8ea4858e4
|
[
"MIT"
] | null | null | null |
2019dl4s_first_colaboratory.ipynb
|
ShinAsakawa/2019dl4s
|
77fb19df5880e6c3595eb14e6d83d1c8ea4858e4
|
[
"MIT"
] | null | null | null |
2019dl4s_first_colaboratory.ipynb
|
ShinAsakawa/2019dl4s
|
77fb19df5880e6c3595eb14e6d83d1c8ea4858e4
|
[
"MIT"
] | null | null | null | 24.074713 | 246 | 0.462163 |
[
[
[
"<a href=\"https://colab.research.google.com/github/ShinAsakawa/2019dl4s/blob/master/2019dl4s_first_colaboratory.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# はじめての Google colaboratory\n\n<p align='right' > \n <font size='+1' color='green'>\n 浅川伸一\n</font>\n </p>\nGoogle Colaboratory には予めデータが用意されている。\nここでは,そのデータを操作してみることにする。\n\n下のセルのコマンドは `sample_data` というディレクトリに存在するファイルを表示させるためのコマンド `ls` である",
"_____no_output_____"
]
],
[
[
"!ls sample_data",
"_____no_output_____"
]
],
[
[
"実行すると,以下のファイルがあることが分かる。\n\n- json ファイルが 1 つ\n- csv ファイルが 4 つ\n- README.md ファイル\n\nこのうち README.md ファイルには,このディクレクトリ内にあるファイルの説明が記されている。\n\n直下の操作で\n\n- 何件のデータが `mnist_train_small.csv` に含まれているかを表示し\n- 最初の 10 件のデータを表示している\n",
"_____no_output_____"
]
],
[
[
"!wc sample_data/mnist_train_small.csv\n!head sample_data/mnist_train_small.csv",
"_____no_output_____"
]
],
[
[
"このデータを `pandas` を使って読み込んでみよう。",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nmnistdf = pd.read_csv('sample_data/mnist_train_small.csv', header=None)",
"_____no_output_____"
],
[
"# pandas で読み込んだデータの値を A に代入\nA = mnistdf.values\n\n# 確認のため A の情報を表示\nprint(type(A), A.shape)\n\n# 最初のデータを 28 行 28 列の行列に変換し a_digit という名前で保存\na_digit = A[0,1:].reshape(28,28)\n\n# 確認のため `a_digit` の情報を表示\nprint(type(a_digit), a_digit.shape)",
"_____no_output_____"
],
[
"mnist は手書き数字のデータであるので画像として表示させてみるための準備 \nimport matplotlib.pyplot as plt\n%matplotlib inline",
"_____no_output_____"
],
[
"plt.imshow(a_digit)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
]
] |
4ab242b5dde7da642341476afee52e276138d4f1
| 271,533 |
ipynb
|
Jupyter Notebook
|
docs/tutorial-hod.ipynb
|
DarkQuestCosmology/dark_emulator_public
|
f0f2eb2fcf3bf95d0e93b3e7239928cc7107a3c2
|
[
"MIT"
] | 13 |
2021-03-22T11:47:50.000Z
|
2021-05-19T12:27:32.000Z
|
docs/tutorial-hod.ipynb
|
DarkQuestCosmology/dark_emulator_public
|
f0f2eb2fcf3bf95d0e93b3e7239928cc7107a3c2
|
[
"MIT"
] | 12 |
2021-05-05T14:24:47.000Z
|
2021-11-10T17:57:42.000Z
|
docs/tutorial-hod.ipynb
|
DarkQuestCosmology/dark_emulator_public
|
f0f2eb2fcf3bf95d0e93b3e7239928cc7107a3c2
|
[
"MIT"
] | 2 |
2021-03-28T09:05:41.000Z
|
2022-02-16T23:55:51.000Z
| 756.359331 | 73,268 | 0.950849 |
[
[
[
"# `model_hod` module tutorial notebook",
"_____no_output_____"
]
],
[
[
"%load_ext autoreload\n%autoreload 2\n%pylab inline\nimport logging \nmpl_logger = logging.getLogger('matplotlib') \nmpl_logger.setLevel(logging.WARNING) \npil_logger = logging.getLogger('PIL')",
"Populating the interactive namespace from numpy and matplotlib\n"
],
[
"plt.rcParams['font.family'] = 'sans-serif'\nplt.rcParams['font.size'] = 18\nplt.rcParams['axes.linewidth'] = 1.5\nplt.rcParams['xtick.major.size'] = 5\nplt.rcParams['ytick.major.size'] = 5\nplt.rcParams['xtick.minor.size'] = 3\nplt.rcParams['ytick.minor.size'] = 3\nplt.rcParams['xtick.top'] = True\nplt.rcParams['ytick.right'] = True\nplt.rcParams['xtick.minor.visible'] = True\nplt.rcParams['ytick.minor.visible'] = True\nplt.rcParams['xtick.direction'] = 'in'\nplt.rcParams['ytick.direction'] = 'in'\nplt.rcParams['figure.figsize'] = (10,6)",
"_____no_output_____"
],
[
"from dark_emulator import model_hod",
"_____no_output_____"
],
[
"hod = model_hod.darkemu_x_hod({\"fft_num\":8})",
"initialize cosmo_class\ninitialize xinl emulator\nInitialize pklin emulator\ninitialize propagator emulator\nInitialize sigma_d emulator\ninitialize cross-correlation emulator\ninitialize auto-correlation emulator\nInitialize sigmaM emulator\n"
]
],
[
[
"## how to set cosmology and galaxy parameters (HOD, off-centering, satellite distribution, and incompleteness)",
"_____no_output_____"
]
],
[
[
"cparam = np.array([0.02225,0.1198,0.6844,3.094,0.9645,-1.])\nhod.set_cosmology(cparam)\n\ngparam = {\"logMmin\":13.13, \"sigma_sq\":0.22, \"logM1\": 14.21, \"alpha\": 1.13, \"kappa\": 1.25, # HOD parameters\n \"poff\": 0.2, \"Roff\": 0.1, # off-centering parameters p_off is the fraction of off-centered galaxies. Roff is the typical off-centered scale with respect to R200m.\n \"sat_dist_type\": \"emulator\", # satellite distribution. Chosse emulator of NFW. In the case of NFW, the c-M relation by Diemer & Kravtsov (2015) is assumed.\n \"alpha_inc\": 0.44, \"logM_inc\": 13.57} # incompleteness parameters. For details, see More et al. (2015)\nhod.set_galaxy(gparam)",
"INFO:root:Got same cosmology. Keep quantities already computed.\n"
]
],
[
[
"## how to plot g-g lensing signal in DeltaSigma(R)",
"_____no_output_____"
]
],
[
[
"redshift = 0.55\nr = np.logspace(-1,2,100)\n\nplt.figure(figsize=(10,6))\n\nplt.loglog(r, hod.get_ds(r, redshift), linewidth = 2, color = \"k\", label = \"total\")\nplt.loglog(r, hod.get_ds_cen(r, redshift), \"--\", color = \"k\", label = \"central\")\nplt.loglog(r, hod.get_ds_cen_off(r, redshift), \":\", color = \"k\", label = \"central w/offset\")\nplt.loglog(r, hod.get_ds_sat(r, redshift), \"-.\", color = \"k\", label = \"satellite\")\n\nplt.xlabel(r\"$R$ [Mpc/h]\")\nplt.ylabel(r\"$\\Delta\\Sigma$ [hM$_\\odot$/pc$^2$]\")\nplt.legend()",
"_____no_output_____"
]
],
[
[
"## how to plot g-g lensing signal in xi",
"_____no_output_____"
]
],
[
[
"redshift = 0.55\nr = np.logspace(-1,2,100)\n\nplt.figure(figsize=(10,6))\n\nplt.loglog(r, hod.get_xi_gm(r, redshift), linewidth = 2, color = \"k\", label = \"total\")\nplt.loglog(r, hod.get_xi_gm_cen(r, redshift), \"--\", color = \"k\", label = \"central\")\nplt.loglog(r, hod.get_xi_gm_cen_off(r, redshift), \":\", color = \"k\", label = \"central w/offset\")\nplt.loglog(r, hod.get_xi_gm_sat(r, redshift), \"-.\", color = \"k\", label = \"satellite\")\n\nplt.xlabel(r\"$R$ [Mpc/h]\")\nplt.ylabel(r\"$\\xi_{\\rm gm}$\")\nplt.legend()",
"_____no_output_____"
]
],
[
[
"## how to plot g-g clustering signal in wp",
"_____no_output_____"
]
],
[
[
"redshift = 0.55\nrs = np.logspace(-1,2,100)\n\nplt.figure(figsize=(10,6))\n\nplt.loglog(r, hod.get_wp(r, redshift), linewidth = 2, color = \"k\", label = \"total\")\nplt.loglog(r, hod.get_wp_1hcs(r, redshift), \"--\", color = \"k\", label = \"1-halo cen-sat\")\nplt.loglog(r, hod.get_wp_1hss(r, redshift), \":\", color = \"k\", label = \"1-halo sat-sat\")\nplt.loglog(r, hod.get_wp_2hcc(r, redshift), \"-.\", color = \"k\", label = \"2-halo cen-cen\")\nplt.loglog(r, hod.get_wp_2hcs(r, redshift), dashes=[4,1,1,1,1,1], color = \"k\", label = \"2-halo cen-sat\")\nplt.loglog(r, hod.get_wp_2hss(r, redshift), dashes=[4,1,1,1,4,1], color = \"k\", label = \"2-halo sat-sat\")\n\nplt.xlabel(r\"$R$ [Mpc/h]\")\nplt.ylabel(r\"$w_p$ [Mpc/h]\")\nplt.legend()\nplt.ylim(0.1, 6e3)",
"_____no_output_____"
]
],
[
[
"## how to plot g-g clustering signal in xi",
"_____no_output_____"
]
],
[
[
"redshift = 0.55\nrs = np.logspace(-1,2,100)\n\nplt.figure(figsize=(10,6))\n\nplt.loglog(r, hod.get_xi_gg(r, redshift), linewidth = 2, color = \"k\", label = \"total\")\nplt.loglog(r, hod.get_xi_gg_1hcs(r, redshift), \"--\", color = \"k\", label = \"1-halo cen-sat\")\nplt.loglog(r, hod.get_xi_gg_1hss(r, redshift), \":\", color = \"k\", label = \"1-halo sat-sat\")\nplt.loglog(r, hod.get_xi_gg_2hcc(r, redshift), \"-.\", color = \"k\", label = \"2-halo cen-cen\")\nplt.loglog(r, hod.get_xi_gg_2hcs(r, redshift), dashes=[4,1,1,1,1,1], color = \"k\", label = \"2-halo cen-sat\")\nplt.loglog(r, hod.get_xi_gg_2hss(r, redshift), dashes=[4,1,1,1,4,1], color = \"k\", label = \"2-halo sat-sat\")\n\nplt.xlabel(r\"$R$ [Mpc/h]\")\nplt.ylabel(r\"$\\xi$\")\nplt.legend()\nplt.ylim(1e-3, 6e3)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4ab24992fe63c8de1cd26c150137215dda440e37
| 31,203 |
ipynb
|
Jupyter Notebook
|
ResNet.ipynb
|
BradleyBrown19/ModernArchitecturesFromScratch
|
9511c94cc7b6782df4603fd3f7751e653b8e7f20
|
[
"Apache-2.0"
] | null | null | null |
ResNet.ipynb
|
BradleyBrown19/ModernArchitecturesFromScratch
|
9511c94cc7b6782df4603fd3f7751e653b8e7f20
|
[
"Apache-2.0"
] | 3 |
2021-05-20T12:26:59.000Z
|
2022-02-26T06:21:00.000Z
|
ResNet.ipynb
|
BradleyBrown19/ModernArchitecturesFromScratch
|
9511c94cc7b6782df4603fd3f7751e653b8e7f20
|
[
"Apache-2.0"
] | null | null | null | 35.619863 | 307 | 0.539051 |
[
[
[
"#default_exp resnet_08",
"_____no_output_____"
],
[
"#export\nfrom ModernArchitecturesFromScratch.basic_operations_01 import *\nfrom ModernArchitecturesFromScratch.fully_connected_network_02 import *\nfrom ModernArchitecturesFromScratch.model_training_03 import *\nfrom ModernArchitecturesFromScratch.convolutions_pooling_04 import *\nfrom ModernArchitecturesFromScratch.callbacks_05 import *\nfrom ModernArchitecturesFromScratch.batchnorm_06 import *\nfrom ModernArchitecturesFromScratch.optimizers_07 import *",
"_____no_output_____"
]
],
[
[
"# ResNet \n> Fully implemented ResNet architecture from scratch: https://arxiv.org/pdf/1512.03385.pdf",
"_____no_output_____"
],
[
"## Helper",
"_____no_output_____"
]
],
[
[
"#export\ndef get_runner(model=None, layers=None, lf=None, callbacks=[Stats([accuracy]), ProgressCallback(), HyperRecorder(['lr'])], opt=None, db=None):\n \"Helper function to get a quick runner\"\n if model is None:\n model = SequentialModel(*layers) if layers is not None else get_linear_model(0.1)[0]\n lf = CrossEntropy() if lf is None else lf\n db = db if db is not None else get_mnist_databunch()\n opt = opt if opt is not None else adam_opt()\n learn = Learner(model,lf,opt,db)\n return Runner(learn, callbacks)",
"_____no_output_____"
]
],
[
[
"## Nested Modules",
"_____no_output_____"
],
[
"We first need to make new classes that allow architectures that aren't straight forward passes through a defined set of layers. This is normally handled in the forward passes of pytorch with autograd. We need to be a bit more clever due to the fact that we need to define our gradients in each module.",
"_____no_output_____"
]
],
[
[
"#export\nclass NestedModel(Module):\n \"NestModel that allows for a sequential model to be called withing an outer model\"\n def __init__(self):\n super().__init__()\n \n def forward(self,xb): return self.layers(xb)\n \n def bwd(self, out, inp): self.layers.backward()\n \n def parameters(self):\n for p in self.layers.parameters(): yield p \n \n def __repr__(self): return f'\\nSubModel( \\n{self.layers}\\n)'",
"_____no_output_____"
],
[
"#export\nclass TestMixingGrads(NestedModel):\n \"Test module to see if nested SequentialModels will work\"\n def __init__(self):\n super().__init__()\n self.layers = SequentialModel(Linear(784, 50, True), ReLU(), Linear(50,25, False))",
"_____no_output_____"
]
],
[
[
"Testing the gradients and the outputs:",
"_____no_output_____"
]
],
[
[
"m = SequentialModel(TestMixingGrads(), Linear(25,10, False))\ndb = get_mnist_databunch()\nlf = CrossEntropy()\noptimizer = adam_opt()\nm",
"_____no_output_____"
],
[
"learn = Learner(m, CrossEntropy(), Optimizer, db)\nrun = Runner(learn, [CheckGrad()])",
"_____no_output_____"
],
[
"run.fit(1,0.1)",
"good\ngood\ngood\ngood\ngood\ngood\n"
]
],
[
[
"## Refactored Conv Layers",
"_____no_output_____"
],
[
"Before we can start making ResNets, we first define a few helper modules that abstract some of the layers:",
"_____no_output_____"
]
],
[
[
"#export\nclass AutoConv(Conv):\n \"Automatic resizing of padding based on kernel size to ensure constant dimensions of input to output\"\n def __init__(self, n_in, n_out, kernel_size=3, stride=1):\n padding = Padding(kernel_size // 2)\n super().__init__(n_in, n_out, kernel_size, stride, padding=padding)",
"_____no_output_____"
],
[
"#export\nclass ConvBatch(NestedModel):\n \"Performs conv then batchnorm\"\n def __init__(self, n_in, n_out, kernel_size=3, stride=1, **kwargs):\n self.layers = SequentialModel(AutoConv(n_in, n_out, kernel_size, stride), \n Batchnorm(n_out))\n \n def __repr__(self): return f'{self.layers.layers[0]}, {self.layers.layers[1]}'",
"_____no_output_____"
],
[
"#export\nclass Identity(Module):\n \"Module to perform the identity connection (what goes in, comes out)\"\n def forward(self,xb): return xb\n def bwd(self,out,inp): inp.g += out.g\n def __repr__(self): return f'Identity Connection'",
"_____no_output_____"
]
],
[
[
"## ResBlocks",
"_____no_output_____"
],
[
"Final built up ResNet blocks that implement the skip connecton layers characteristic of a ResNet",
"_____no_output_____"
]
],
[
[
"#export\nclass BasicRes(Module):\n \"Basic block to implement the two different ResBlocks presented in the paper\"\n def __init__(self, n_in, n_out, expansion=1, stride=1, Activation=ReLU, *args, **kwargs):\n super().__init__()\n self.n_in, self.n_out, self.expansion, self.stride, self.Activation = n_in, n_out, expansion, stride, Activation\n \n self.identity = Identity() if self.do_identity else AutoConv(self.n_in, self.get_expansion, kernel_size=1, stride=2)\n \n def forward(self, xb): \n self.id_out = self.identity(xb)\n self.res_out = self.res_blocks(xb)\n self.out = self.id_out + self.res_out\n return self.out\n \n def bwd(self, out, inp):\n self.res_out.g = out.g\n self.id_out.g = out.g\n self.res_blocks.backward()\n self.identity.backward()\n \n @property\n def get_expansion(self): return self.n_out * self.expansion\n \n @property\n def do_identity(self): return self.n_in == self.n_out\n \n def parameters(self): \n layers = [self.res_blocks, self.identity]\n for m in layers: \n for p in m.parameters(): yield p ",
"_____no_output_____"
],
[
"#export\nclass BasicResBlock(BasicRes):\n expansion=1\n \"Basic ResBlock layer, 2 `ConvBatch` layers with no expansion\"\n def __init__(self, n_in, n_out, *args, **kwargs):\n super().__init__(n_in, n_out, *args, **kwargs)\n expansion = 1\n \n self.res_blocks = SequentialModel(\n ConvBatch(n_in, n_out, stride=self.stride),\n self.Activation(),\n ConvBatch(n_out, self.n_out*expansion)\n )\n ",
"_____no_output_____"
],
[
"#export\nclass BottleneckBlock(BasicRes):\n expansion=4\n \"Bottleneck layer, 3 `ConvBatch` layers with an expansion factor of 4\"\n def __init__(self, n_in, n_out, *args, **kwargs):\n super().__init__(n_in, n_out, *args, **kwargs)\n \n self.res_blocks = SequentialModel(\n ConvBatch(n_in, n_out, kernel_size=1, stride=1),\n self.Activation(),\n ConvBatch(n_out, n_out),\n self.Activation(),\n ConvBatch(n_out, self.expansion, kernel_size=1)\n )",
"_____no_output_____"
],
[
"#export\nclass ResBlock(NestedModel):\n \"Adds the final activation after the skip connection addition\"\n def __init__(self, n_in, n_out, block=BasicResBlock, stride=1, kernel_size=3, Activation=ReLU, **kwargs):\n super().__init__()\n self.n_in, self.n_out, self.exp, self.ks, self.stride = n_in, n_out, block.expansion, kernel_size, stride\n self.layers = SequentialModel(block(n_in=n_in, n_out=n_out, expansion=block.expansion, kernel_size=kernel_size, stride=stride, Activation=Activation,**kwargs), \n Activation())\n \n def __repr__(self): return f'ResBlock({self.n_in}, {self.n_out*self.exp}, kernel_size={self.ks}, stride={self.stride})'\n ",
"_____no_output_____"
],
[
"#export\nclass ResLayer(NestedModel):\n \"Sequential ResBlock layers as outlined in the paper\"\n def __init__(self, block, n, n_in, n_out, *args, **kwargs):\n layers = []\n self.block, self.n, self.n_in, self.n_out = block, n, n_in, n_out\n \n downsampling = 2 if n_in != n_out else 1\n\n layers = [ResBlock(n_in, n_out, block, stride=downsampling),\n *[ResBlock(n_out * block.expansion, n_out, block, stride=1) for i in range(n-1)]]\n \n self.layers = SequentialModel(*layers)\n \n def __repr__(self): return f'ResLayer(\\n{self.layers}\\n)'",
"_____no_output_____"
]
],
[
[
"```python\nclass ResLayer(NestedModel):\n \"Sequential res layers\"\n def __init__(self, block, n, n_in, n_out, *args, **kwargs):\n layers = []\n self.block, self.n, self.n_in, self.n_out = block, n, n_in, n_out\n \n downsampling = 2 if n_in != n_out else 1\n\n layers = [ResBlock(n_in, n_out, block, stride=downsampling),\n *[ResBlock(n_out * block.expansion, n_out, block, stride=1) for i in range(n-1)]]\n \n self.layers = SequentialModel(*layers)\n \n def __repr__(self): return f'ResLayer(\\n{self.layers}\\n)'\n ```",
"_____no_output_____"
],
[
"# ResNet",
"_____no_output_____"
]
],
[
[
"#export\nclass ResNet(NestedModel):\n \"Class to create ResNet architectures of dynamic sizing\"\n def __init__(self, block, layer_sizes=[64, 128, 256, 512], depths=[2,2,2,2], c_in=3, \n c_out=1000, im_size=(28,28), activation=ReLU, *args, **kwargs):\n \n self.layer_sizes = layer_sizes\n \n gate = [\n Reshape(c_in, im_size[0], im_size[1]),\n ConvBatch(c_in, self.layer_sizes[0], stride=2, kernel_size=7),\n activation(),\n Pool(max_pool, ks=3, stride=2, padding=Padding(1))\n ]\n \n self.conv_sizes = list(zip(self.layer_sizes, self.layer_sizes[1:]))\n body = [\n ResLayer(block, depths[0], self.layer_sizes[0], self.layer_sizes[0], Activation=activation, *args, **kwargs),\n *[ResLayer(block, n, n_in * block.expansion, n_out, Activation=activation)\n for (n_in,n_out), n in zip(self.conv_sizes, depths[1:])]\n ]\n \n tail = [\n Pool(avg_pool, ks=1, stride=1, padding=None),\n Flatten(),\n Linear(self.layer_sizes[-1]*block.expansion, c_out, relu_after=False)\n ]\n\n self.layers = SequentialModel(\n *[layer for layer in gate],\n *[layer for layer in body],\n *[layer for layer in tail]\n )\n \n def __repr__(self): return f'ResNet: \\n{self.layers}'",
"_____no_output_____"
]
],
[
[
"```python\nclass ResNet(NestedModel):\n \"Class to create ResNet architectures of dynamic sizing\"\n def __init__(self, block, layer_sizes=[64, 128, 256, 512], depths=[2,2,2,2], c_in=3, \n c_out=1000, im_size=(28,28), activation=ReLU, *args, **kwargs):\n \n self.layer_sizes = layer_sizes\n \n gate = [\n Reshape(c_in, im_size[0], im_size[1]),\n ConvBatch(c_in, self.layer_sizes[0], stride=2, kernel_size=7),\n activation(),\n Pool(max_pool, ks=3, stride=2, padding=Padding(1))\n ]\n \n self.conv_sizes = list(zip(self.layer_sizes, self.layer_sizes[1:]))\n body = [\n ResLayer(block, depths[0], self.layer_sizes[0], self.layer_sizes[0], Activation=activation, *args, **kwargs),\n *[ResLayer(block, n, n_in * block.expansion, n_out, Activation=activation)\n for (n_in,n_out), n in zip(self.conv_sizes, depths[1:])]\n ]\n \n tail = [\n Pool(avg_pool, ks=1, stride=1, padding=None),\n Flatten(),\n Linear(self.layer_sizes[-1]*block.expansion, c_out, relu_after=False)\n ]\n\n self.layers = SequentialModel(\n *[layer for layer in gate],\n *[layer for layer in body],\n *[layer for layer in tail]\n )\n \n def __repr__(self): return f'ResNet: \\n{self.layers}'\n ```",
"_____no_output_____"
]
],
[
[
"res = ResNet(BasicResBlock)\nres",
"_____no_output_____"
],
[
"#export\ndef GetResnet(size, c_in=3, c_out=10, *args, **kwargs):\n \"Helper function to get ResNet architectures of different sizes\"\n if size == 18: return ResNet(c_in=c_in, c_out=c_out, block=BasicResBlock, depths=[2, 2, 2, 2], size=size, **kwargs)\n elif size == 34: return ResNet(c_in=c_in, c_out=c_out, block=BasicResBlock, depths=[3, 4, 6, 3], size=size, **kwargs)\n elif size == 50: return ResNet(c_in=c_in, c_out=c_out, block=BottleneckBlock, depths=[3, 4, 6, 3], size=size, **kwargs)\n elif size == 150: return ResNet(c_in=c_in, c_out=c_out, block=BottleneckBlock, depths=[3, 4, 23, 3], size=size, **kwargs)\n elif size == 152: return ResNet(c_in=c_in, c_out=c_out, block=BottleneckBlock, depths=[3, 8, 36, 3], size=size, **kwargs)",
"_____no_output_____"
]
],
[
[
"Testing out the ResNet Architectures:",
"_____no_output_____"
]
],
[
[
"GetResnet(18, c_in=1, c_out=10)",
"_____no_output_____"
],
[
"GetResnet(34, c_in=1, c_out=10)",
"_____no_output_____"
],
[
"GetResnet(50, c_in=1, c_out=10)",
"_____no_output_____"
],
[
"GetResnet(150, c_in=1, c_out=10)",
"_____no_output_____"
],
[
"GetResnet(152, c_in=1, c_out=10)",
"_____no_output_____"
],
[
"run = get_runner(model=GetResnet(18,c_in=1, c_out=10))",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ab249ec647932be0263025d3d13a9fe234cb7e0
| 23,178 |
ipynb
|
Jupyter Notebook
|
code/Untitled.ipynb
|
shawah/stat-graph-analysis
|
7626ff5314d934da6b917270b6615302338b793f
|
[
"MIT"
] | null | null | null |
code/Untitled.ipynb
|
shawah/stat-graph-analysis
|
7626ff5314d934da6b917270b6615302338b793f
|
[
"MIT"
] | null | null | null |
code/Untitled.ipynb
|
shawah/stat-graph-analysis
|
7626ff5314d934da6b917270b6615302338b793f
|
[
"MIT"
] | null | null | null | 134.755814 | 19,306 | 0.863319 |
[
[
[
"fefefewfw",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\nimport numpy as np\nimport scipy as stats\n%matplotlib inline\n\nhormone = pd.read_csv('../input/Classeur1.csv')",
"_____no_output_____"
],
[
"hormone.head()",
"_____no_output_____"
],
[
"sns.pointplot(x='Treatment', y='Concentration', data=hormone, join=False, capsize=.2, color='#000000', errwidth=0.75, markers=['_'])\nsns.swarmplot(x='Treatment', y='Concentration', data=hormone)\n\nsns.set_style(\"whitegrid\")\nplt.title('Plasmatic hormone concentration')",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
4ab24c54ed056cf4343dfc37f25af575bece0ec0
| 59,585 |
ipynb
|
Jupyter Notebook
|
nbs/13 - swav.ipynb
|
jimmiemunyi/self_supervised
|
360df7f1fa06b0ab1e2abe2e1ea6e2230b073a12
|
[
"Apache-2.0"
] | null | null | null |
nbs/13 - swav.ipynb
|
jimmiemunyi/self_supervised
|
360df7f1fa06b0ab1e2abe2e1ea6e2230b073a12
|
[
"Apache-2.0"
] | null | null | null |
nbs/13 - swav.ipynb
|
jimmiemunyi/self_supervised
|
360df7f1fa06b0ab1e2abe2e1ea6e2230b073a12
|
[
"Apache-2.0"
] | null | null | null | 109.130037 | 38,696 | 0.824301 |
[
[
[
"#hide\n#skip\n! [ -e /content ] && pip install -Uqq self-supervised",
"_____no_output_____"
],
[
"#default_exp vision.swav",
"_____no_output_____"
]
],
[
[
"# SwAV\n\n> SwAV: [Unsupervised Learning of Visual Features by Contrasting Cluster Assignments](https://arxiv.org/pdf/2006.09882.pdf)\n",
"_____no_output_____"
]
],
[
[
"#export\nfrom fastai.vision.all import *\nfrom self_supervised.augmentations import *\nfrom self_supervised.layers import *",
"_____no_output_____"
]
],
[
[
"## Algorithm",
"_____no_output_____"
],
[
"#### SwAV",
"_____no_output_____"
],
[
"",
"_____no_output_____"
],
[
"**Absract**: Unsupervised image representations have significantly reduced the gap with supervised\npretraining, notably with the recent achievements of contrastive learning\nmethods. These contrastive methods typically work online and rely on a large number\nof explicit pairwise feature comparisons, which is computationally challenging.\nIn this paper, we propose an online algorithm, SwAV, that takes advantage of contrastive\nmethods without requiring to compute pairwise comparisons. Specifically,\nour method simultaneously clusters the data while enforcing consistency between\ncluster assignments produced for different augmentations (or “views”) of the same\nimage, instead of comparing features directly as in contrastive learning. Simply put,\nwe use a “swapped” prediction mechanism where we predict the code of a view\nfrom the representation of another view. Our method can be trained with large and\nsmall batches and can scale to unlimited amounts of data. Compared to previous\ncontrastive methods, our method is more memory efficient since it does not require\na large memory bank or a special momentum network. In addition, we also propose\na new data augmentation strategy, multi-crop, that uses a mix of views with\ndifferent resolutions in place of two full-resolution views, without increasing the\nmemory or compute requirements. We validate our findings by achieving 75:3%\ntop-1 accuracy on ImageNet with ResNet-50, as well as surpassing supervised\npretraining on all the considered transfer tasks.",
"_____no_output_____"
]
],
[
[
"#export\nclass SwAVModel(Module):\n def __init__(self,encoder,projector,prototypes): \n self.encoder,self.projector,self.prototypes = encoder,projector,prototypes\n \n def forward(self, inputs): \n \n if not isinstance(inputs, list): inputs = [inputs]\n \n crop_idxs = torch.cumsum(torch.unique_consecutive(\n torch.tensor([inp.shape[-1] for inp in inputs]),\n return_counts=True)[1], 0)\n\n start_idx = 0\n for idx in crop_idxs:\n _z = self.encoder(torch.cat(inputs[start_idx: idx]))\n if not start_idx: z = _z\n else: z = torch.cat((z, _z))\n start_idx = idx\n \n z = F.normalize(self.projector(z))\n return z, self.prototypes(z)",
"_____no_output_____"
],
[
"#export\ndef create_swav_model(encoder, hidden_size=256, projection_size=128, n_protos=3000, bn=True, nlayers=2):\n \"Create SwAV model\"\n n_in = in_channels(encoder)\n with torch.no_grad(): representation = encoder(torch.randn((2,n_in,128,128)))\n projector = create_mlp_module(representation.size(1), hidden_size, projection_size, bn=bn, nlayers=nlayers)\n prototypes = nn.Linear(projection_size, n_protos, bias=False)\n apply_init(projector)\n with torch.no_grad():\n w = prototypes.weight.data.clone()\n prototypes.weight.copy_(F.normalize(w))\n return SwAVModel(encoder, projector, prototypes)",
"_____no_output_____"
],
[
"encoder = create_encoder(\"tf_efficientnet_b0_ns\", n_in=3, pretrained=False, pool_type=PoolingType.CatAvgMax)\nmodel = create_swav_model(encoder, hidden_size=2048, projection_size=128, n_protos=3000)\nmulti_view_inputs = ([torch.randn(2,3,224,224) for i in range(2)] +\n [torch.randn(2,3,96,96) for i in range(4)])\nembedding, output = model(multi_view_inputs)\nnorms = model.prototypes.weight.data.norm(dim=1)\nassert norms.shape[0] == 3000\nassert [n.item() for n in norms if test_close(n.item(), 1.)] == []",
"_____no_output_____"
]
],
[
[
"## SwAV Callback",
"_____no_output_____"
],
[
"The following parameters can be passed;\n\n- **aug_pipelines** list of augmentation pipelines List[Pipeline, Pipeline,...,Pipeline] created using functions from `self_supervised.augmentations` module. Each `Pipeline` should be set to `split_idx=0`. You can simply use `get_swav_aug_pipelines` utility to get aug_pipelines. SWAV algorithm uses a mix of large and small scale crops.\n\n- **crop_assgn_ids** indexes for large crops from **aug_pipelines**, e.g. if you have total of 8 Pipelines in the `aug_pipelines` list and if you define large crops as first 2 Pipelines then indexes would be [0,1], if as first 3 then [0,1,2] and if as last 2 then [6,7], so on.\n\n- **K** is queue size. For simplicity K needs to be a multiple of batch size and it needs to be less than total training data. You can try out different values e.g. `bs*2^k` by varying k where bs i batch size. You can pass None to disable queue. Idea is similar to MoCo.\n\n- **queue_start_pct** defines when to start using queue in terms of total training percentage, e.g if you train for 100 epochs and if `queue_start_pct` is set to 0.25 then queue will be used starting from epoch 25. You should tune queue size and queue start percentage for your own data and problem. For more information you can refer to [README from official implementation](https://github.com/facebookresearch/swav#training-gets-unstable-when-using-the-queue).\n\n- **temp** temperature scaling for cross entropy loss similar to `SimCLR`.",
"_____no_output_____"
],
[
"SWAV algorithm uses multi-sized-multi-crop views of an image. In the original paper 2 large crop views and 6 small crop views are used during training. The reason for using smaller crops is to save memory and perhaps it also helps the model to learn local features better.\n\nYou can manually pass a mix of large and small scale Pipeline instances within a list to **aug_pipelines** or you can simply use the **get_swav_aug_pipelines()** helper function below:\n\n- **num_crops** Number of large and small scale views to be used. \n- **crop_sizes** Image crop sizes for large and small views. \n- **min_scales** Min scale to use in RandomResizedCrop for large and small views. \n- **max_scales** Max scale to use in RandomResizedCrop for large and small views. \n\nI highly recommend this [UI from albumentations](https://albumentations-demo.herokuapp.com/) to get a feel for RandomResizedCrop parameters.\n\nLet's take the following example `get_swav_aug_pipelines(num_crops=(2,6), crop_sizes=(224,96), min_scales=(0.25,0.05), max_scales=(1.,0.14))`. This will create 2 large scale view augmentations with size 224 and with RandomResizedCrop scales between 0.25-1.0. Additionally, it will create 6 small scale view augmentations with size 96 and with RandomResizedCrop scales between 0.05-0.14.\n\n**Note**: Of course, the notion of small and large scale views depends on the values you pass to `crop_sizes`, `min_scales`, and `max_scales`. For example, if we flip crop sizes from the previous example as `crop_sizes=(96,224)`, then in this case the first 2 views will have image resolution of 96 and the last 6 views will have 224. To reduce confusion it's better to make relative changes, e.g. if you want to try different parameters always try to keep first values for larger resolution views and second values for smaller resolution views.\n\n- ****kwargs** This function uses `get_multi_aug_pipelines` which then uses `get_batch_augs`. For more information you may refer to the `self_supervised.augmentations` module. kwargs takes any passable argument to `get_batch_augs`\n",
"_____no_output_____"
]
],
[
[
"#export\n@delegates(get_multi_aug_pipelines, but=['n', 'size', 'resize_scale'])\ndef get_swav_aug_pipelines(num_crops=(2,6), crop_sizes=(224,96), min_scales=(0.25,0.05), max_scales=(1.,0.14), **kwargs): \n aug_pipelines = []\n for nc, size, mins, maxs in zip(num_crops, crop_sizes, min_scales, max_scales):\n aug_pipelines += get_multi_aug_pipelines(n=nc, size=size, resize_scale=(mins,maxs), **kwargs)\n return aug_pipelines",
"_____no_output_____"
],
[
"#export\nclass SWAV(Callback):\n order,run_valid = 9,True\n def __init__(self, aug_pipelines, crop_assgn_ids,\n K=3000, queue_start_pct=0.25, temp=0.1,\n eps=0.05, n_sinkh_iter=3, print_augs=False):\n \n store_attr('K,queue_start_pct,crop_assgn_ids,temp,eps,n_sinkh_iter')\n self.augs = aug_pipelines\n if print_augs: \n for aug in self.augs: print(aug)\n \n \n def before_fit(self):\n self.learn.loss_func = self.lf\n \n # init queue\n if self.K is not None:\n nf = self.learn.model.projector[-1].out_features\n self.queue = torch.randn(self.K, nf).to(self.dls.device)\n self.queue = nn.functional.normalize(self.queue, dim=1)\n self.queue_ptr = 0\n \n \n def before_batch(self):\n \"Compute multi crop inputs\"\n self.bs = self.x.size(0)\n self.learn.xb = ([aug(self.x) for aug in self.augs],)\n\n\n def after_batch(self):\n with torch.no_grad():\n w = self.learn.model.prototypes.weight.data.clone()\n self.learn.model.prototypes.weight.data.copy_(F.normalize(w))\n \n \n @torch.no_grad()\n def sinkhorn_knopp(self, Q, nmb_iters, device=default_device):\n \"https://en.wikipedia.org/wiki/Sinkhorn%27s_theorem#Sinkhorn-Knopp_algorithm\"\n sum_Q = torch.sum(Q)\n Q /= sum_Q\n\n r = (torch.ones(Q.shape[0]) / Q.shape[0]).to(device)\n c = (torch.ones(Q.shape[1]) / Q.shape[1]).to(device)\n\n curr_sum = torch.sum(Q, dim=1)\n\n for it in range(nmb_iters):\n u = curr_sum\n Q *= (r / u).unsqueeze(1)\n Q *= (c / torch.sum(Q, dim=0)).unsqueeze(0)\n curr_sum = torch.sum(Q, dim=1)\n return (Q / torch.sum(Q, dim=0, keepdim=True)).t().float()\n\n \n @torch.no_grad()\n def _dequeue_and_enqueue(self, embedding):\n assert self.K % self.bs == 0 # for simplicity\n self.queue[self.queue_ptr:self.queue_ptr+self.bs, :] = embedding\n self.queue_ptr = (self.queue_ptr + self.bs) % self.K # move pointer\n \n\n @torch.no_grad()\n def _compute_codes(self, output):\n qs = []\n for i in self.crop_assgn_ids: \n # use queue\n if (self.K is not None) and (self.learn.pct_train > self.queue_start_pct):\n target_b = 
output[self.bs*i:self.bs*(i+1)]\n queue_b = self.learn.model.prototypes(self.queue)\n merged_b = torch.cat([target_b, queue_b])\n q = torch.exp(merged_b/self.eps).t()\n q = self.sinkhorn_knopp(q, self.n_sinkh_iter, q.device)\n qs.append(q[:self.bs])\n \n # don't use queue\n else:\n target_b = output[self.bs*i:self.bs*(i+1)]\n q = torch.exp(target_b/self.eps).t()\n q = self.sinkhorn_knopp(q, self.n_sinkh_iter, q.device)\n qs.append(q)\n return qs\n \n \n def after_pred(self):\n \"Compute ps and qs\"\n \n embedding, output = self.pred\n \n # Update - no need to store all assignment crops, e.g. just 0 from [0,1]\n # Update queue only during training\n if (self.K is not None) and (self.learn.training): self._dequeue_and_enqueue(embedding[:self.bs])\n \n # Compute codes\n qs = self._compute_codes(output)\n \n # Compute predictions\n log_ps = []\n for v in np.arange(len(self.augs)):\n log_p = F.log_softmax(output[self.bs*v:self.bs*(v+1)] / self.temp, dim=1)\n log_ps.append(log_p)\n \n log_ps, qs = torch.stack(log_ps), torch.stack(qs)\n self.learn.pred, self.learn.yb = log_ps, (qs,)\n \n \n def lf(self, pred, *yb):\n log_ps, qs, loss = pred, yb[0], 0\n t = (qs.unsqueeze(1)*log_ps.unsqueeze(0)).sum(-1).mean(-1)\n for i, ti in enumerate(t): loss -= (ti.sum() - ti[i])/(len(ti)-1)/len(t)\n return loss\n \n @torch.no_grad()\n def show(self, n=1):\n xbs = self.learn.xb[0]\n idxs = np.random.choice(range(self.bs), n, False)\n images = [aug.decode(xb.to('cpu').clone()).clamp(0, 1)[i] \n for i in idxs\n for xb, aug in zip(xbs, self.augs)]\n return show_batch(images[0], None, images, max_n=len(images), nrows=n)",
"_____no_output_____"
]
],
[
[
"`crop_sizes` defines the size to be used for original crops and low resolution crops respectively. `num_crops` define `N`: number of original views and `V`: number of low resolution views respectively. `min_scales` and `max_scales` are used for original and low resolution views during random resized crop. `eps` is used during Sinkhorn-Knopp algorithm for calculating the codes and `n_sinkh_iter` is the number of iterations during it's calculation. `temp` is the temperature parameter in cross entropy loss",
"_____no_output_____"
],
[
"### Example Usage",
"_____no_output_____"
]
],
[
[
"path = untar_data(URLs.MNIST_TINY)\nitems = get_image_files(path)\ntds = Datasets(items, [PILImageBW.create, [parent_label, Categorize()]], splits=GrandparentSplitter()(items))\ndls = tds.dataloaders(bs=4, after_item=[ToTensor(), IntToFloatTensor()], device='cpu')",
"_____no_output_____"
],
[
"fastai_encoder = create_fastai_encoder(xresnet18, n_in=1, pretrained=False)\nmodel = create_swav_model(fastai_encoder, hidden_size=2048, projection_size=128)\naug_pipelines = get_swav_aug_pipelines(num_crops=[2,6],\n crop_sizes=[28,16], \n min_scales=[0.25,0.05],\n max_scales=[1.0,0.3],\n rotate=False, jitter=False, bw=False, blur=False, stats=None,cuda=False) \nlearn = Learner(dls, model,\n cbs=[SWAV(aug_pipelines=aug_pipelines, crop_assgn_ids=[0,1], K=None), ShortEpochCallback(0.001)])",
"_____no_output_____"
],
[
"b = dls.one_batch()\nlearn._split(b)\nlearn('before_batch')\nlearn.pred = learn.model(*learn.xb)",
"_____no_output_____"
]
],
[
[
"Display 2 standard resolution crops and 6 additional low resolution crops",
"_____no_output_____"
]
],
[
[
"axes = learn.swav.show(n=4)",
"_____no_output_____"
],
[
"learn.fit(1)",
"_____no_output_____"
],
[
"learn.recorder.losses",
"_____no_output_____"
]
],
[
[
"## Export -",
"_____no_output_____"
]
],
[
[
"#hide\nfrom nbdev.export import notebook2script\nnotebook2script()",
"Converted 01 - augmentations.ipynb.\nConverted 02 - layers.ipynb.\nConverted 03 - distributed.ipynb.\nConverted 10 - simclr.ipynb.\nConverted 11 - moco.ipynb.\nConverted 12 - byol.ipynb.\nConverted 13 - swav.ipynb.\nConverted 20 - clip.ipynb.\nConverted 21 - clip-moco.ipynb.\nConverted index.ipynb.\n"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
4ab2521c418ed0058d41e166cf79a8eba89bf9ad
| 599,456 |
ipynb
|
Jupyter Notebook
|
notebooks/var-impact-in-premier-league/main.ipynb
|
GauriVaidya/notebooks
|
d97f87bf851397788c2d5953595de458ba832f75
|
[
"Apache-2.0"
] | 1 |
2021-11-12T16:48:29.000Z
|
2021-11-12T16:48:29.000Z
|
notebooks/var-impact-in-premier-league/main.ipynb
|
GauriVaidya/notebooks
|
d97f87bf851397788c2d5953595de458ba832f75
|
[
"Apache-2.0"
] | null | null | null |
notebooks/var-impact-in-premier-league/main.ipynb
|
GauriVaidya/notebooks
|
d97f87bf851397788c2d5953595de458ba832f75
|
[
"Apache-2.0"
] | null | null | null | 472.757098 | 227,151 | 0.777323 |
[
[
[
"# Premier league: How has VAR impacted the rankings?\n\nThere has been much debate about the video assistant referee (VAR) when it was introduced last year (in 2019).\nThe goal is to lead to fairer refereeing, but concerns are high on whether this will really be the case and the fact that it could break the rythm of the game.\n\nWe will let football analysts – or soccer analysts depending on where you are reading this notebook from – answer this question. But one thing we can look at is how has VAR impacted the league so far.\n\nThis is what we will do in this notebook, alongside some other simulations we found interesting.",
"_____no_output_____"
],
[
" \n<div style=\"text-align:center\"><a href=\"https://www.atoti.io/?utm_source=gallery&utm_content=premier-league\" target=\"_blank\" rel=\"noopener noreferrer\"><img src=\"https://data.atoti.io/notebooks/banners/discover.png\" alt=\"atoti\" /></a></div>",
"_____no_output_____"
],
[
"## Importing the data\n\nThe data we will use is composed of events. An event can be anything that happens in a game: kick-off, goal, foul, etc. \nIn this dataset, we only kept kick-off and goal events to build our analysis. \nNote that in the goal events we also have all the goals that were later cancelled by VAR during a game.\n\nWe will first start by importing atoti and creating a session.",
"_____no_output_____"
]
],
[
[
"import atoti as tt\n\nsession = tt.create_session()",
"_____no_output_____"
]
],
[
[
"Then load the events in a store",
"_____no_output_____"
]
],
[
[
"events = session.read_csv(\n \"s3://data.atoti.io/notebooks/premier-league/events.csv\",\n separator=\";\",\n table_name=\"events\",\n)\nevents.head()",
"_____no_output_____"
]
],
[
[
"### Creating a cube\n\nWe create a cube on the event store so that some matches or teams that ended with no goal will still be reflected in the pivot tables. \n\nWhen creating a cube in the default auto mode, a hierarchy will be created for each non float column, and average and sum measures for each float column. This setup can later be edited, or you could also define all hierarchies/measures by yourself switching to manual mode.",
"_____no_output_____"
]
],
[
[
"cube = session.create_cube(events)",
"_____no_output_____"
],
[
"cube.schema",
"_____no_output_____"
]
],
[
[
"Let's assign measures/levels/hierarchies to shorter variables",
"_____no_output_____"
]
],
[
[
"m = cube.measures\nlvl = cube.levels\nh = cube.hierarchies",
"_____no_output_____"
],
[
"h[\"Day\"] = [events[\"Day\"]]",
"_____no_output_____"
]
],
[
[
"## Computing the rankings from the goals\n\nComputing the first measure below to count the total goals scored for each event. At this point the total still includes the potential own goals and VAR-refused goals.",
"_____no_output_____"
]
],
[
[
"m[\"Team Goals (incl Own Goals)\"] = tt.agg.sum(\n tt.where(lvl[\"EventType\"] == \"Goal\", tt.agg.count_distinct(events[\"EventId\"]), 0.0),\n scope=tt.scope.origin(lvl[\"EventType\"]),\n)",
"_____no_output_____"
]
],
[
[
"In this data format, own goals are scored by players from a Team, but those points should be attributed to the opponent. Therefore we will isolate the own goals in a separate measure.",
"_____no_output_____"
]
],
[
[
"m[\"Team Own Goals\"] = tt.agg.sum(\n tt.where(lvl[\"IsOwnGoal\"] == True, m[\"Team Goals (incl Own Goals)\"], 0.0),\n scope=tt.scope.origin(lvl[\"IsOwnGoal\"]),\n)",
"_____no_output_____"
]
],
[
[
"And deduce the actual goals scored for the team",
"_____no_output_____"
]
],
[
[
"m[\"Team Goals\"] = m[\"Team Goals (incl Own Goals)\"] - m[\"Team Own Goals\"]",
"_____no_output_____"
]
],
[
[
"At this point we can already have a look at the goals per team. By right clicking on the chart we have sorted it descending by team goals.",
"_____no_output_____"
]
],
[
[
"session.visualize()",
"_____no_output_____"
]
],
[
[
"For a particular match, the `Opponent Goals` are equal to the `Team Goals` if we switch to the data facts where Team is replaced by Opponent and Opponent by Team",
"_____no_output_____"
]
],
[
[
"m[\"Opponent Goals\"] = tt.agg.sum(\n tt.at(\n m[\"Team Goals\"],\n {lvl[\"Team\"]: lvl[\"Opponent\"], lvl[\"Opponent\"]: lvl[\"Team\"]},\n ),\n scope=tt.scope.origin(lvl[\"Team\"], lvl[\"Opponent\"]),\n)",
"_____no_output_____"
],
[
"m[\"Opponent Own Goals\"] = tt.agg.sum(\n tt.at(\n m[\"Team Own Goals\"],\n {lvl[\"Team\"]: lvl[\"Opponent\"], lvl[\"Opponent\"]: lvl[\"Team\"]},\n ),\n scope=tt.scope.origin(lvl[\"Team\"], lvl[\"Opponent\"]),\n)",
"_____no_output_____"
]
],
[
[
"We are now going to add two measures `Team Score` and `Opponent Score` to compute the result of a particular match. ",
"_____no_output_____"
]
],
[
[
"m[\"Team Score\"] = m[\"Team Goals\"] + m[\"Opponent Own Goals\"]",
"_____no_output_____"
],
[
"m[\"Opponent Score\"] = m[\"Opponent Goals\"] + m[\"Team Own Goals\"]",
"_____no_output_____"
]
],
[
[
"We can now visualize the result of each match of the season",
"_____no_output_____"
]
],
[
[
"session.visualize()",
"_____no_output_____"
]
],
[
[
"We now have the team goals/score and those of the opponent for each match. However, these measures include VAR cancelled goals. Let's create new measures that takes into account VAR.",
"_____no_output_____"
]
],
[
[
"m[\"VAR team goals impact\"] = m[\"Team Goals\"] - tt.filter(\n m[\"Team Goals\"], lvl[\"IsCancelledAfterVAR\"] == False\n)\nm[\"VAR opponent goals impact\"] = m[\"Opponent Goals\"] - tt.filter(\n m[\"Opponent Goals\"], lvl[\"IsCancelledAfterVAR\"] == False\n)",
"_____no_output_____"
]
],
[
[
"We can visualize that in details, there are already 4 goals cancelled by VAR on the first day of the season !",
"_____no_output_____"
]
],
[
[
"session.visualize()",
"_____no_output_____"
]
],
[
[
"Now that for any game we have the number of goals of each team, we can compute how many points teams have earned. \nFollowing the FIFA World Cup points system, three points are awarded for a win, one for a draw and none for a loss (before, winners received two points). \nWe create a measure for each of this condition.",
"_____no_output_____"
]
],
[
[
"m[\"Points for victory\"] = 3.0\nm[\"Points for tie\"] = 1.0\nm[\"Points for loss\"] = 0.0",
"_____no_output_____"
],
[
"m[\"Points\"] = tt.agg.sum(\n tt.where(\n m[\"Team Score\"] > m[\"Opponent Score\"],\n m[\"Points for victory\"],\n tt.where(\n m[\"Team Score\"] == m[\"Opponent Score\"],\n m[\"Points for tie\"],\n m[\"Points for loss\"],\n ),\n ),\n scope=tt.scope.origin(lvl[\"League\"], lvl[\"Day\"], lvl[\"Team\"]),\n)",
"_____no_output_____"
]
],
[
[
"The previous points were computed including VAR-refused goals. \nFiltering out these goals gives the actual rankings of the teams, as you would find on any sports websites.",
"_____no_output_____"
]
],
[
[
"m[\"Actual Points\"] = tt.filter(m[\"Points\"], lvl[\"IsCancelledAfterVAR\"] == False)",
"_____no_output_____"
]
],
[
[
"And here we have our ranking. We will dive into it in the next section. \n\n## Rankings and VAR impact\n\nColor rules were added to show teams that benefited from the VAR in green and those who lost championship points because of it in red.",
"_____no_output_____"
]
],
[
[
"m[\"Difference in points\"] = m[\"Actual Points\"] - m[\"Points\"]",
"_____no_output_____"
],
[
"session.visualize()",
"_____no_output_____"
]
],
[
[
"More than half of the teams have had their points total impacted by VAR.\nThough it does not impact the top teams, it definitely has an impact in the ranking of many teams, Manchester United would have lost 2 ranks and Tottenham 4 for example!\n\nWe could also visualize the difference of points in a more graphical way:",
"_____no_output_____"
]
],
[
[
"session.visualize()",
"_____no_output_____"
]
],
[
[
"Since the rankings are computed from the goal level, we can perform any kind of simulation we want using simple UI filters. \nYou can filter the pivot table above to see what would happen if we only keep the first half of the games? If we only keep matches played home? What if we filter out Vardy, would Leicester lose some places? \nNote that if you filter out VAR-refused goals, the `Points` measures takes the same value as the `Actual Points`.\n\n## Evolution of the rankings over time\n\nAtoti also enables you to define cumulative sums over a hierarchy, we will use that to see how the team rankings evolved during the season. ",
"_____no_output_____"
]
],
[
[
"m[\"Points cumulative sum\"] = tt.agg.sum(\n m[\"Actual Points\"], scope=tt.scope.cumulative(lvl[\"Day\"])\n)",
"_____no_output_____"
],
[
"session.visualize()",
"_____no_output_____"
]
],
[
[
"We can notice that data is missing for the 28th match of Manchester City. This is because the game was delayed due to weather, and then never played because of the COVID-19 pandemic.",
"_____no_output_____"
],
[
"## Players most impacted by the VAR\n\nUntil now we looked at most results at team level, but since the data exists at goal level, we could have a look at which players are most impacted by the VAR.",
"_____no_output_____"
]
],
[
[
"m[\"Valid player goals\"] = tt.filter(\n m[\"Team Goals\"], lvl[\"IsCancelledAfterVAR\"] == False\n)",
"_____no_output_____"
],
[
"session.visualize()",
"_____no_output_____"
]
],
[
[
"Unsurprisingly Mané is the most impacted player. He is also one of the top scorers with only Vardy scoring more goals (you can sort on the Team Goals column to verify). \nMore surprisingly, Boly has had all the goals of his season cancelled by VAR and Antonio half of them..\n\n## Simulation of a different scoring system\n\nAlthough we are all used to a scoring system giving 3 points for a victory, 1 for a tie and 0 per lost match this was not always the case. Before the 1990's many european leagues only gave 2 points per victory, reason for the change being to encourage teams to score more goals during the games.\nThe premier league gifts us well with plenty of goals scored (take it from someone watching the French ligue 1), but how different would the results be with the old scoring system?\n\natoti enables us to simulate this very easily. We simply have to create a new scenario where we replace the number of points given for a victory.\nWe first setup a simulation on that measure.",
"_____no_output_____"
]
],
[
[
"scoring_system_simulation = cube.create_parameter_simulation(\n name=\"Scoring system simulations\",\n measures={\"Points for victory\": 3.0},\n base_scenario_name=\"Current System\",\n)",
"_____no_output_____"
]
],
[
[
"And create a new scenario where we give it another value",
"_____no_output_____"
]
],
[
[
"scoring_system_simulation += (\"Old system\", 2.0)",
"_____no_output_____"
]
],
[
[
"And that's it, no need to define anything else, all the measures will be re-computed on demand with the new value in the new scenario. \nLet's compare the rankings between the two scoring systems.",
"_____no_output_____"
]
],
[
[
"session.visualize()",
"_____no_output_____"
],
[
"session.visualize()",
"_____no_output_____"
]
],
[
[
"Surprisingly, having only 2 points for a win would only have made Burnley and West Ham lose 2 ranks, but no other real impact on the standings.",
"_____no_output_____"
],
[
" \n<div style=\"text-align:center\"><a href=\"https://www.atoti.io/?utm_source=gallery&utm_content=premier-league\" target=\"_blank\" rel=\"noopener noreferrer\"><img src=\"https://data.atoti.io/notebooks/banners/discover-try.png\" alt=\"atoti\" /></a></div>",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
4ab260a1e26b0fd740922f9619209e3657710df0
| 33,276 |
ipynb
|
Jupyter Notebook
|
DigitalBiomarkers-HumanActivityRecognition/10_code/30_end_pre_processing/32_engineer_features/31_roll_timepoints.ipynb
|
Big-Ideas-Lab/DBDP
|
99357dac197ceeb8c240ead804dd2c8bd3e3fc93
|
[
"Apache-2.0"
] | 20 |
2020-01-27T16:32:25.000Z
|
2021-05-27T15:06:29.000Z
|
DigitalBiomarkers-HumanActivityRecognition/10_code/30_end_pre_processing/32_engineer_features/31_roll_timepoints.ipynb
|
chopeter27/DBDP
|
99357dac197ceeb8c240ead804dd2c8bd3e3fc93
|
[
"Apache-2.0"
] | 11 |
2020-01-27T16:22:09.000Z
|
2020-07-29T20:11:22.000Z
|
DigitalBiomarkers-HumanActivityRecognition/10_code/30_end_pre_processing/32_engineer_features/31_roll_timepoints.ipynb
|
chopeter27/DBDP
|
99357dac197ceeb8c240ead804dd2c8bd3e3fc93
|
[
"Apache-2.0"
] | 16 |
2019-04-05T15:01:46.000Z
|
2021-07-07T05:42:27.000Z
| 32.719764 | 439 | 0.358487 |
[
[
[
"# Rolling Timepoints",
"_____no_output_____"
],
[
"This notebook contains our procedure for creating a rolling analysis of our time series data, which is used to capture our sensor instability over time. A common technique to assess the constancy of a model’s parameters is to compute parameter estimates over a rolling window of a fixed size through the sample. As the sensor parameters due to some variability in time sampling, the rolling estimates should capture this instability.",
"_____no_output_____"
],
[
"**INPUT: CSV output of 31_outlier_removal** (plain_data.csv)\n\n**OUTPUT: Rolled Data file** (rolled_data.csv)",
"_____no_output_____"
],
[
"## Imports",
"_____no_output_____"
]
],
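As a minimal illustration of the rolling-window estimate described above (the notebook uses pandas' df.rolling(40) with median or mean), here is a dependency-free rolling mean that, like .rolling(w).mean().dropna(), emits nothing until a full window is available:

```python
def rolling_mean(values, window):
    """Rolling mean over a fixed-size window.

    Incomplete leading windows are dropped, mirroring
    df.rolling(window).mean().dropna() in pandas.
    """
    if window > len(values):
        return []
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]

signal = [1, 2, 3, 4, 5, 6]
smoothed = rolling_mean(signal, 3)  # -> [2.0, 3.0, 4.0, 5.0]
```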
[
[
"import pandas as pd\nimport numpy as np",
"_____no_output_____"
]
],
[
[
"## Read in Data",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv('plain_data.csv')",
"_____no_output_____"
],
[
"df.head()",
"_____no_output_____"
],
[
"df.shape",
"_____no_output_____"
]
],
[
[
"## View Data Subset (Example without Rolling)",
"_____no_output_____"
]
],
[
[
"df1 = df[(df['Subject_ID'] == '19-001') & (df['Activity'] == 'Baseline') & (df['Round'] == 1)]\ndf1",
"_____no_output_____"
]
],
[
[
"## View Data Subset (Example with Rolling)",
"_____no_output_____"
]
],
[
[
"df2 = df1.rolling(40).median().dropna()\ndf2['Activity'] = 'Baseline'\ndf2['Round'] = 1\ndf2['Subject_ID'] = '19-001'\ndf2",
"_____no_output_____"
]
],
[
[
"## Rolling Procedure ",
"_____no_output_____"
]
],
[
[
"rolled = pd.DataFrame(columns = ['ACC1', 'ACC2', 'ACC3', 'TEMP', 'EDA', 'BVP', 'HR', 'Round', 'Magnitude', 'Activity', 'Subject_ID'])\n\nfor i in pd.unique(df['Subject_ID']):\n for j in pd.unique(df['Activity']):\n for k in pd.unique(df['Round']):\n df_new = df[(df['Subject_ID'] == i) & (df['Activity'] == j) & (df['Round'] == k)]\n df_roll = df_new.rolling(40).mean().dropna()\n df_roll['Activity'] = j\n df_roll['Round'] = k\n df_roll['Subject_ID'] = i\n #print(df_roll.head())\n rolled = rolled.append(df_roll)",
"_____no_output_____"
],
[
"rolled = rolled.drop(columns = ['Time'])\nrolled",
"_____no_output_____"
],
[
"rolled.to_csv('../../40_usable_data_for_models/41_Duke_Data/rolled_data.csv', index = False)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
4ab27771fc2cc019aa3ecc77dc1ec9755e72004e
| 8,930 |
ipynb
|
Jupyter Notebook
|
hw1_matrix_mutiplication/matrix_multiplication.ipynb
|
yanchen0615/massive-data-analysis
|
bf16aa38d09bf5c654294885448631b518106180
|
[
"MIT"
] | null | null | null |
hw1_matrix_mutiplication/matrix_multiplication.ipynb
|
yanchen0615/massive-data-analysis
|
bf16aa38d09bf5c654294885448631b518106180
|
[
"MIT"
] | null | null | null |
hw1_matrix_mutiplication/matrix_multiplication.ipynb
|
yanchen0615/massive-data-analysis
|
bf16aa38d09bf5c654294885448631b518106180
|
[
"MIT"
] | null | null | null | 19.455338 | 210 | 0.468869 |
[
[
[
"載入SparkContext和SparkConf",
"_____no_output_____"
]
],
[
[
"from pyspark import SparkContext\nfrom pyspark import SparkConf",
"_____no_output_____"
],
[
"conf = SparkConf().setAppName('appName').setMaster('local')",
"_____no_output_____"
],
[
"sc = SparkContext(conf=conf)",
"_____no_output_____"
]
],
[
[
"先建立M(i,j)的mapper1, 將index j取出來當成key;\n再建立N(j,k)的mapper1, 將index j取出來當成key",
"_____no_output_____"
]
],
[
[
"def mapper1(line):\n wordlist = line.split(\",\")\n maplist = []\n key = wordlist.pop(2)\n maplist.append((int(key),wordlist))\n return maplist ",
"_____no_output_____"
],
[
"def mapper2(line):\n wordlist = line.split(\",\")\n maplist = []\n key = wordlist.pop(1)\n maplist.append((int(key),wordlist))\n return maplist ",
"_____no_output_____"
]
],
[
[
"定義reducer, 將相同key的value list蒐集起來",
"_____no_output_____"
]
],
[
[
"def reducer1(x,y):\n return x+y",
"_____no_output_____"
]
],
[
[
"讀取檔案\"5ooinput.txt\"",
"_____no_output_____"
]
],
[
[
"data = sc.textFile('2input.txt')",
"_____no_output_____"
]
],
[
[
"先建立兩個空的list M,N,這兩個list之後會用來蒐集M矩陣和N矩陣的元素",
"_____no_output_____"
]
],
[
[
"M = []\nN = []",
"_____no_output_____"
]
],
[
[
"將所有資料傳換成list的資料型態, 再用if-else把MN兩個矩陣的元素分開",
"_____no_output_____"
]
],
[
[
"d = data.collect()",
"_____no_output_____"
],
[
"for i in range(len(d)):\n if 'M' in d[i]:\n M.append(d[i])\n else :\n N.append(d[i])",
"_____no_output_____"
]
],
[
[
"分好之後將M,N各自由list轉成RDD",
"_____no_output_____"
]
],
[
[
"M",
"_____no_output_____"
],
[
"rddm = sc.parallelize(M)\nrddn = sc.parallelize(N)",
"_____no_output_____"
]
],
[
[
"屬於M矩陣的元素藉由mapper1轉換成(j,[M,i,m])的形式, 把j拿出來當key; 同理, 將N矩陣的元素轉換成(j,[N,k,n])的形式。",
"_____no_output_____"
]
],
[
[
"rddm = rddm.flatMap(mapper1)\nrddn = rddn.flatMap(mapper2)",
"_____no_output_____"
]
],
[
[
"為了後續要把對應的矩陣元素相乘,所以在執行reduce後轉換回list的資料型態。",
"_____no_output_____"
]
],
[
[
"m = rddm.reduceByKey(reducer1).collect() \nn = rddn.reduceByKey(reducer1).collect() ",
"_____no_output_____"
],
[
"m",
"_____no_output_____"
]
],
[
[
"接下來解釋如何將(j,(M,i,m))和(j,(N,k,n))轉換成((i,k),mn):",
"_____no_output_____"
],
[
"(1)首先建立一個空的list, 命名為aggregate, 用來蒐集轉換後的((i,k),mn)",
"_____no_output_____"
],
[
"(2)第一層for loop是將j組轉換後的((i,k),mn)集合起來",
"_____no_output_____"
],
[
"(3)第二層有三組for loop, 第一個和第二個loop執行的目的是要將list m和list n的資料排列成[['M','i','m'],...,]的形式方便後續運算值的取用,而a和b則是在loop中的暫時容器, 用來裝排列好的資料, 排列完成後第三組loop才真正執行乘法運算, 運算後再以((i,k),mn)的形式存入list1這個暫時容器, list1在第二層loop結束時存入aggregate",
"_____no_output_____"
],
[
"(4)這個三層迴圈需要非常久的時間才能跑完(大概一天多),所以不是一個好方法,不過礙於作業繳交時限,只能先用這個方法,後續有想到其他解法會再補上。",
"_____no_output_____"
],
[
"aggregate = []\nfor j in range(len(m)):\n data1 = m[j][1]\n a = []\n for i in range(0, len(data1), 3):\n a.append([data1[i],data1[i+1],data1[i+2]])\n data2 = n[j][1]\n b = []\n for i in range(0, len(data2), 3):\n b.append([data2[i],data2[i+1],data2[i+2]])\n list1 = []\n for i in range(len(a)):\n mul1 = int(a[i][2])\n index1 = a[i][1]\n for k in range(len(b)):\n mul2 = int(b[k][2])\n index2 = b[k][1]\n list1.append(((index1,index2),mul1*mul2))\n aggregate.extend(list1)",
"_____no_output_____"
],
[
"aggregate已經將資料整理成[(index1,index2),value]的形式,接下來只要將aggregate轉換成RDD的資料型式,再用map-reduce就完成矩陣乘法了。",
"_____no_output_____"
]
],
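For reference, the join-then-reduce described in steps (1)-(4) can be sketched without Spark: group both matrices by the shared index j, emit ((i,k), m*n) for every pair within a group, then sum per (i,k) key (the reduceByKey step). This is an illustrative sketch, not the notebook's exact code:

```python
from collections import defaultdict

def matmul_mapreduce(M, N):
    """Two-pass MapReduce-style matrix multiply on sparse dicts.

    M maps (i, j) -> value and N maps (j, k) -> value.
    Pass 1 groups both matrices by the shared index j and emits
    ((i, k), m * n) for each cross pair; pass 2 sums per (i, k) key.
    """
    by_j_M = defaultdict(list)
    by_j_N = defaultdict(list)
    for (i, j), m in M.items():
        by_j_M[j].append((i, m))
    for (j, k), n in N.items():
        by_j_N[j].append((k, n))

    products = defaultdict(float)  # the reduceByKey accumulator
    for j in by_j_M:
        for i, m in by_j_M[j]:
            for k, n in by_j_N.get(j, []):
                products[(i, k)] += m * n
    return dict(products)

M = {(0, 0): 1, (0, 1): 2, (1, 0): 3, (1, 1): 4}
N = {(0, 0): 5, (0, 1): 6, (1, 0): 7, (1, 1): 8}
# Expected product: [[19, 22], [43, 50]]
```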
[
[
"rdda = sc.parallelize(aggregate)",
"_____no_output_____"
]
],
[
[
"用reduceByKey將相同key(i,k)的值都相加起來。",
"_____no_output_____"
]
],
[
[
"outcome = rdda.reduceByKey(reducer1).collect()",
"_____no_output_____"
]
],
[
[
"再用sort()指令排序",
"_____no_output_____"
]
],
[
[
"outcome.sort()",
"_____no_output_____"
]
],
[
[
"開一個新的檔案",
"_____no_output_____"
]
],
[
[
"file = open(\"mapreduce_output.txt\", \"w\")",
"_____no_output_____"
],
[
"file.write(\"Output\\n\")",
"_____no_output_____"
]
],
[
[
"將資料轉換成string之後一一寫入剛剛建立的檔案",
"_____no_output_____"
]
],
[
[
"for i in range(len(outcome)):\n file.write(str(outcome[i][0][0]))\n file.write(',')\n file.write(str(outcome[i][0][1]))\n file.write(',')\n file.write(str(outcome[i][1]))\n file.write(' \\n')",
"_____no_output_____"
],
[
"file.close()",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4ab28f64f54a5831132c7c0edb6b6944ecbbe139
| 10,247 |
ipynb
|
Jupyter Notebook
|
segmentation_epistroma_unet/visualize_validation_results.ipynb
|
rahulnair502/PytorchDigitalPathology
|
7edb9a4f53f6b289043b57bb003f5c3ce658c112
|
[
"MIT"
] | 91 |
2018-11-29T21:56:02.000Z
|
2022-03-26T13:32:58.000Z
|
segmentation_epistroma_unet/visualize_validation_results.ipynb
|
rahulnair502/PytorchDigitalPathology
|
7edb9a4f53f6b289043b57bb003f5c3ce658c112
|
[
"MIT"
] | 14 |
2019-04-28T23:35:43.000Z
|
2022-03-30T10:11:38.000Z
|
segmentation_epistroma_unet/visualize_validation_results.ipynb
|
rahulnair502/PytorchDigitalPathology
|
7edb9a4f53f6b289043b57bb003f5c3ce658c112
|
[
"MIT"
] | 55 |
2018-12-12T01:19:39.000Z
|
2022-03-04T16:21:59.000Z
| 37.811808 | 166 | 0.583 |
[
[
[
"#v1\n#26/10/2018\n\n\ndataname=\"epistroma\" #should match the value used to train the network, will be used to load the appropirate model\ngpuid=0\n\n\npatch_size=256 #should match the value used to train the network\nbatch_size=1 #nicer to have a single batch so that we can iterately view the output, while not consuming too much \nedge_weight=1",
"_____no_output_____"
],
[
"# https://github.com/jvanvugt/pytorch-unet\n#torch.multiprocessing.set_start_method(\"fork\")\nimport random, sys\nimport cv2\nimport glob\nimport math\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport os\nimport scipy.ndimage\nimport skimage\nimport time\n\nimport tables\nfrom skimage import io, morphology\nfrom sklearn.metrics import confusion_matrix\nfrom tensorboardX import SummaryWriter\n\nimport torch\nimport torch.nn.functional as F\nfrom torch import nn\nfrom torch.utils.data import DataLoader\nfrom torchvision import transforms\nfrom unet import UNet\n\nimport PIL",
"_____no_output_____"
],
[
"print(torch.cuda.get_device_properties(gpuid))\ntorch.cuda.set_device(gpuid)\ndevice = torch.device(f'cuda:{gpuid}' if torch.cuda.is_available() else 'cpu')",
"_____no_output_____"
],
[
"checkpoint = torch.load(f\"{dataname}_unet_best_model.pth\")",
"_____no_output_____"
],
[
"#load the model, note that the paramters are coming from the checkpoint, since the architecture of the model needs to exactly match the weights saved\nmodel = UNet(n_classes=checkpoint[\"n_classes\"], in_channels=checkpoint[\"in_channels\"], padding=checkpoint[\"padding\"],depth=checkpoint[\"depth\"],\n wf=checkpoint[\"wf\"], up_mode=checkpoint[\"up_mode\"], batch_norm=checkpoint[\"batch_norm\"]).to(device)\nprint(f\"total params: \\t{sum([np.prod(p.size()) for p in model.parameters()])}\")\nmodel.load_state_dict(checkpoint[\"model_dict\"])",
"_____no_output_____"
],
[
"#this defines our dataset class which will be used by the dataloader\nclass Dataset(object):\n def __init__(self, fname ,img_transform=None, mask_transform = None, edge_weight= False):\n #nothing special here, just internalizing the constructor parameters\n self.fname=fname\n self.edge_weight = edge_weight\n \n self.img_transform=img_transform\n self.mask_transform = mask_transform\n \n self.tables=tables.open_file(self.fname)\n self.numpixels=self.tables.root.numpixels[:]\n self.nitems=self.tables.root.img.shape[0]\n self.tables.close()\n \n self.img = None\n self.mask = None\n def __getitem__(self, index):\n #opening should be done in __init__ but seems to be\n #an issue with multithreading so doing here\n if(self.img is None): #open in thread\n self.tables=tables.open_file(self.fname)\n self.img=self.tables.root.img\n self.mask=self.tables.root.mask\n \n #get the requested image and mask from the pytable\n img = self.img[index,:,:,:]\n mask = self.mask[index,:,:]\n \n #the original Unet paper assignes increased weights to the edges of the annotated objects\n #their method is more sophistocated, but this one is faster, we simply dilate the mask and \n #highlight all the pixels which were \"added\"\n if(self.edge_weight):\n weight = scipy.ndimage.morphology.binary_dilation(mask==1, iterations =2) & ~mask\n else: #otherwise the edge weight is all ones and thus has no affect\n weight = np.ones(mask.shape,dtype=mask.dtype)\n \n mask = mask[:,:,None].repeat(3,axis=2) #in order to use the transformations given by torchvision\n weight = weight[:,:,None].repeat(3,axis=2) #inputs need to be 3D, so here we convert from 1d to 3d by repetition\n \n img_new = img\n mask_new = mask\n weight_new = weight\n \n seed = random.randrange(sys.maxsize) #get a random seed so that we can reproducibly do the transofrmations\n if self.img_transform is not None:\n random.seed(seed) # apply this seed to img transforms\n img_new = self.img_transform(img)\n\n if self.mask_transform is not 
None:\n random.seed(seed)\n mask_new = self.mask_transform(mask)\n mask_new = np.asarray(mask_new)[:,:,0].squeeze()\n \n random.seed(seed)\n weight_new = self.mask_transform(weight)\n weight_new = np.asarray(weight_new)[:,:,0].squeeze()\n\n return img_new, mask_new, weight_new\n def __len__(self):\n return self.nitems",
"_____no_output_____"
],
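The edge-weight trick in the Dataset class above dilates the mask and keeps only the added border pixels. Below is a rough dependency-free sketch of that idea using a single 4-connected dilation pass (the notebook itself uses scipy's binary_dilation with 2 iterations, so this is an approximation for illustration only):

```python
def edge_weight_mask(mask):
    """Return the one-pixel border ring added by dilating a binary mask.

    mask is a list of lists of 0/1 ints. A pixel is set after dilation if
    it or any 4-connected neighbour is set; the weight keeps only pixels
    that the dilation added (dilated AND NOT original).
    """
    h, w = len(mask), len(mask[0])
    dilated = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            neighbours = [(r, c), (r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            dilated[r][c] = int(any(
                0 <= i < h and 0 <= j < w and mask[i][j]
                for i, j in neighbours
            ))
    # Keep just the ring of pixels the dilation added around each object.
    return [[dilated[r][c] & (1 - mask[r][c]) for c in range(w)] for r in range(h)]

m = [[0, 0, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 0, 0]]
ring = edge_weight_mask(m)
```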
[
"#note that since we need the transofrmations to be reproducible for both masks and images\n#we do the spatial transformations first, and afterwards do any color augmentations\n\n#in the case of using this for output generation, we want to use the original images since they will give a better sense of the exepected \n#output when used on the rest of the dataset, as a result, we disable all unnecessary augmentation.\n#the only component that remains here is the randomcrop, to ensure that regardless of the size of the image\n#in the database, we extract an appropriately sized patch\nimg_transform = transforms.Compose([\n transforms.ToPILImage(),\n #transforms.RandomVerticalFlip(),\n #transforms.RandomHorizontalFlip(),\n transforms.RandomCrop(size=(patch_size,patch_size),pad_if_needed=True), #these need to be in a reproducible order, first affine transforms and then color\n #transforms.RandomResizedCrop(size=patch_size),\n #transforms.RandomRotation(180),\n #transforms.ColorJitter(brightness=0, contrast=0, saturation=0, hue=.5),\n #transforms.RandomGrayscale(),\n transforms.ToTensor()\n ])\n\n\nmask_transform = transforms.Compose([\n transforms.ToPILImage(),\n #transforms.RandomVerticalFlip(),\n #transforms.RandomHorizontalFlip(),\n transforms.RandomCrop(size=(patch_size,patch_size),pad_if_needed=True), #these need to be in a reproducible order, first affine transforms and then color\n #transforms.RandomResizedCrop(size=patch_size,interpolation=PIL.Image.NEAREST),\n #transforms.RandomRotation(180),\n ])\n\nphases=[\"val\"]\ndataset={}\ndataLoader={}\nfor phase in phases:\n \n dataset[phase]=Dataset(f\"./{dataname}_{phase}.pytable\", img_transform=img_transform , mask_transform = mask_transform ,edge_weight=edge_weight)\n dataLoader[phase]=DataLoader(dataset[phase], batch_size=batch_size, \n shuffle=True, num_workers=0, pin_memory=True) #,pin_memory=True)\n",
"_____no_output_____"
],
[
"%matplotlib inline\n\n#set the model to evaluation mode, since we're only generating output and not doing any back propogation\nmodel.eval()\nfor ii , (X, y, y_weight) in enumerate(dataLoader[\"val\"]):\n X = X.to(device) # [NBATCH, 3, H, W]\n y = y.type('torch.LongTensor').to(device) # [NBATCH, H, W] with class indices (0, 1)\n\n output = model(X) # [NBATCH, 2, H, W]\n\n output=output.detach().squeeze().cpu().numpy() #get output and pull it to CPU\n output=np.moveaxis(output,0,-1) #reshape moving last dimension\n \n fig, ax = plt.subplots(1,4, figsize=(10,4)) # 1 row, 2 columns\n\n ax[0].imshow(output[:,:,1])\n ax[1].imshow(np.argmax(output,axis=2))\n ax[2].imshow(y.detach().squeeze().cpu().numpy())\n ax[3].imshow(np.moveaxis(X.detach().squeeze().cpu().numpy(),0,-1))\n",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ab293902f10dd1c5aaafbcbea798e46fb79a14a
| 8,999 |
ipynb
|
Jupyter Notebook
|
notebooks/step_3_EDA_prep_data.ipynb
|
wongwalter/aws_simple_algorithm
|
833d01622bac0ec91f2f87f6df3b2cbed6189d4b
|
[
"MIT"
] | null | null | null |
notebooks/step_3_EDA_prep_data.ipynb
|
wongwalter/aws_simple_algorithm
|
833d01622bac0ec91f2f87f6df3b2cbed6189d4b
|
[
"MIT"
] | null | null | null |
notebooks/step_3_EDA_prep_data.ipynb
|
wongwalter/aws_simple_algorithm
|
833d01622bac0ec91f2f87f6df3b2cbed6189d4b
|
[
"MIT"
] | null | null | null | 23.870027 | 95 | 0.544172 |
[
[
[
"# Objective\n\nPerform the exploratory data analysis (EDA) to find insights in the AWS pricing data",
"_____no_output_____"
],
[
"# Code",
"_____no_output_____"
],
[
"## Load libs",
"_____no_output_____"
]
],
[
[
"import sys\nsys.path.append('..')\n\nimport random\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom src.data.helpers import load_aws_dataset",
"_____no_output_____"
]
],
[
[
"## Input params",
"_____no_output_____"
]
],
[
[
"interim_dir = '../data/interim'\nin_fname = 'step_1_aws_filtered_sample.csv.zip'\ncompression = 'zip'",
"_____no_output_____"
],
[
"# Papermill parameters injection ... do not delete!",
"_____no_output_____"
]
],
[
[
"## Load data",
"_____no_output_____"
]
],
[
[
"file = f'{interim_dir}/{in_fname}'\ndata = load_aws_dataset(file)\nprint(data.shape)\ndata.head()",
"_____no_output_____"
]
],
[
[
"## Data wrangling\n\nLet's find something interesting in the data!\n- Look for most volatile instances, i.e., with more price changes, thus to avoid them;\n- Least volatile instances;\n- Longer price update times;",
"_____no_output_____"
],
[
"### Data check-up for nulls and missing values\n\nAssumptions:\n\n- Considered region: us-east-1a (Virginia);\n- Check presence of null columns (i.e., there is no price change on that);",
"_____no_output_____"
]
],
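The gap-filling step described above (prices are only recorded when they change, so the pivoted table has runs of missing values) boils down to a fill along the time axis. A minimal sketch of the notebook's fillna(method='bfill') step, using None for missing values:

```python
def backfill(series):
    """Fill missing points (None) from the next observed value.

    Mirrors pandas' fillna(method='bfill') on a single column of the
    pivoted price table; trailing Nones stay None since there is no
    later observation to borrow from.
    """
    filled = list(series)
    next_seen = None
    # Walk backwards so each gap picks up the next known price.
    for i in range(len(filled) - 1, -1, -1):
        if filled[i] is None:
            filled[i] = next_seen
        else:
            next_seen = filled[i]
    return filled

prices = [None, 0.12, None, None, 0.15, None]
# -> [0.12, 0.12, 0.15, 0.15, 0.15, None]
```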
[
[
"%%time\n\ndf = data.query('AvailabilityZone == \"us-east-1a\"')\\\n .drop('AvailabilityZone', axis=1)\n\nprint(df.shape)\n\n# Pivot table to change a wide format for the data. Thus, we can remove\n# instances that do not have any price update.\n# Dropping MultiIndex column 'SpotPrice' as there is no use for it.\npvt = df.pivot_table(index=['Timestamp'], \n columns=['InstanceType'])\\\n .droplevel(0, axis=1)\n\npvt.head()",
"_____no_output_____"
],
[
"# Checking if there is any column with only 'NaN'\n# Returns None, meaning that all \npvt.isna().all(axis=0).loc[lambda x: x.isna()]",
"_____no_output_____"
],
[
"# Cross-check to see if this is correct. Getting a sample of confirm this\n# using instance 'a1.2xlarge'\npvt['a1.2xlarge'].dropna().head()",
"_____no_output_____"
],
[
"# Picking random instance and checking if the values are not null\n# just for sanity check.\nfor i in range(5):\n rand_instance = random.randint(0, len(pvt.columns))\n tmp = pvt.iloc[rand_instance].dropna().head()\n print(tmp)",
"_____no_output_____"
]
],
[
[
"### Most volatile instances",
"_____no_output_____"
]
],
[
[
"# Now getting the most volatile instances\nmost_volatiles = pvt.count().sort_values(ascending=False).nlargest(10)\nmost_volatiles",
"_____no_output_____"
],
[
"# Let's quickly plot to see the pricing trends\n# Some normalization is required:\n# 1. Remove rows with only NaN (not columns, otherwise it will remove all pricing!);\n# 2. There are gaps in the pricing. This happens because if there is no pricing\n# update, then there is not price capture. Thus, we can safely use backwards fill\n# to fill the missing values\n\nfig, ax = plt.subplots(figsize=(12, 6))\n\npvt.loc[:, most_volatiles.index.to_list()]\\\n .dropna(how='all', axis=0)\\\n .fillna(method='bfill').plot(ax=ax)\n\nax.set_title('Top 10 most volatile instances')\nax.set_ylabel('Hourly Price (USD)')\nax.legend(loc='lower center', ncol=5, bbox_to_anchor=(0.5, -0.35))",
"_____no_output_____"
]
],
[
[
"### Least volatile instances",
"_____no_output_____"
]
],
[
[
"# Now getting the least volatile instances\nleast_volatiles = pvt.count().sort_values(ascending=False).nsmallest(10)\nleast_volatiles",
"_____no_output_____"
],
[
"fig, ax = plt.subplots(figsize=(12, 6))\n\npvt.loc[:, least_volatiles.index.to_list()]\\\n .dropna(how='all', axis=0)\\\n .fillna(method='bfill').plot(ax=ax)\n\nax.set_title('Top 10 least volatile instances')\nax.set_ylabel('Hourly Price (USD)')\nax.legend(loc='lower center', ncol=5, bbox_to_anchor=(0.5, -0.35))",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4ab2979726025cb136c95d6c94c5a89d5681f970
| 594 |
ipynb
|
Jupyter Notebook
|
debug_scripts/combine_all_mtlzs.ipynb
|
akremin/M2FSreduce
|
42092f18aa1e5d7ad6f6528a395ee93e89165b30
|
[
"BSD-3-Clause"
] | 1 |
2020-04-30T15:28:06.000Z
|
2020-04-30T15:28:06.000Z
|
debug_scripts/combine_all_mtlzs.ipynb
|
akremin/M2FSreduce
|
42092f18aa1e5d7ad6f6528a395ee93e89165b30
|
[
"BSD-3-Clause"
] | 100 |
2020-04-27T10:56:45.000Z
|
2021-07-30T04:47:09.000Z
|
debug_scripts/combine_all_mtlzs.ipynb
|
akremin/M2FSreduce
|
42092f18aa1e5d7ad6f6528a395ee93e89165b30
|
[
"BSD-3-Clause"
] | null | null | null | 16.054054 | 34 | 0.515152 |
[
[
[
"\n",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code"
]
] |
4ab29c6b49fb45cd39d522ec858439df9997f684
| 7,445 |
ipynb
|
Jupyter Notebook
|
Notebooks/CirComPara/CirComPara Pipeline.ipynb
|
DugongBioinformatics/DugongRepository
|
fcefb1adafe37fc6d49e07ee203d93f25e10328b
|
[
"MIT"
] | 2 |
2018-07-31T15:02:27.000Z
|
2019-11-20T00:44:24.000Z
|
Notebooks/CirComPara/CirComPara Pipeline.ipynb
|
DugongBioinformatics/DugongRepository
|
fcefb1adafe37fc6d49e07ee203d93f25e10328b
|
[
"MIT"
] | null | null | null |
Notebooks/CirComPara/CirComPara Pipeline.ipynb
|
DugongBioinformatics/DugongRepository
|
fcefb1adafe37fc6d49e07ee203d93f25e10328b
|
[
"MIT"
] | null | null | null | 25.152027 | 292 | 0.57233 |
[
[
[
"# CirComPara Pipeline",
"_____no_output_____"
],
[
"To demonstrate Dugong ́s effectiveness to distribute and run bioinformatics tools in alternative computational environments, the CirComPara pipeline was implemented in a Dugong container and tested in different OS with the aid of virtual machines (VM) or cloud computing servers.\n\nCirComPara is a computational pipeline to detect, quantify, and correlate expression of linear and circular RNAs from RNA-seq data. Is a highly complex pipeline, which employs a series of bioinformatics software and was originally designed to run in an Ubuntu Server 16.04 LTS (x64).\n\nAlthough authors provide details regarding the expected versions of each software and their dependency requirements, several problems can still be encountered during CirComPara implementation by inexperienced users.\n\nSee documentation for CirComPara installation details: https://github.com/egaffo/CirComPara",
"_____no_output_____"
],
[
"-----------------------------------------------------------------------------------------------------------------------",
"_____no_output_____"
],
[
"## Pipeline steps",
"_____no_output_____"
],
[
"- The test data is already unpacked and available in the path: **/headless/CirComPara/test_circompara/**",
"_____no_output_____"
],
[
"- The **meta.csv** and **vars.py** files are already configured to run CirComPara, as documented: https://github.com/egaffo/CirComPara",
"_____no_output_____"
],
[
"- Defining the folder for the analysis with the CirComPara of the test data provided by the developers of the tool:",
"_____no_output_____"
]
],
[
[
"from functools import partial\nfrom os import chdir\n\nchdir('/headless/CirComPara/test_circompara/analysis')",
"_____no_output_____"
]
],
[
[
"- Viewing files from /headless/CirComPara/test_circompara/",
"_____no_output_____"
]
],
[
[
"from IPython.display import FileLinks, FileLink\n\nFileLinks('/headless/CirComPara/test_circompara/')",
"_____no_output_____"
]
],
[
[
"- Viewing the contents of the configuration file: vars.py",
"_____no_output_____"
]
],
[
[
"!cat /headless/CirComPara/test_circompara/analysis/vars.py",
"_____no_output_____"
]
],
[
[
"- Viewing the contents of the configuration file: meta.csv",
"_____no_output_____"
]
],
[
[
"!cat /headless/CirComPara/test_circompara/analysis/meta.csv",
"_____no_output_____"
]
],
[
[
"- Running CirCompara with test data",
"_____no_output_____"
]
],
[
[
"!../../circompara",
"_____no_output_____"
]
],
[
[
"-----------------------------------------------------------------------------------------------------------------------",
"_____no_output_____"
],
[
"## Results:",
"_____no_output_____"
],
[
"- Viewing output files after running CirComPara:",
"_____no_output_____"
]
],
[
[
"from IPython.display import FileLinks, FileLink\n\nFileLinks('/headless/CirComPara/test_circompara/analysis/')",
"_____no_output_____"
]
],
[
[
"- Viewing graphic files after running CirComPara:",
"_____no_output_____"
]
],
[
[
"from IPython.display import Image\nImage(\"/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/corr_density_plot-1.png\")",
"_____no_output_____"
],
[
"from IPython.display import Image\nImage(\"/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/cumulative_expression_box-1.png\")",
"_____no_output_____"
],
[
"from IPython.display import Image\nImage(\"/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/show_circrnas_per_method-1.png\")",
"_____no_output_____"
],
[
"from IPython.display import Image\nImage(\"/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/plot_circrnas_per_gene-1.png\")",
"_____no_output_____"
],
[
"from IPython.display import Image\nImage(\"/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/correlations_box-1.png\")",
"_____no_output_____"
],
[
"from IPython.display import Image\nImage(\"/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/plot_circrnas_2reads_2methods_sample-1.png\")",
"_____no_output_____"
],
[
"from IPython.display import Image\nImage(\"/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/plot_circ_gene_expr-1.png\")",
"_____no_output_____"
],
[
"from IPython.display import Image\nImage(\"/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/plot_gene_expressed_by_sample-1.png\")",
"_____no_output_____"
]
],
[
[
"-----------------------------------------------------------------------------------------------------------------------",
"_____no_output_____"
],
[
"**NOTE:** This pipeline is just an example of what you can do with Dugong. I",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
4ab2a101a83c02a63954ddcc579ad8c3352fe453
| 3,492 |
ipynb
|
Jupyter Notebook
|
nbs/Deploy_And_Testing_Insect_Classifier.ipynb
|
fastailondon/ghost-ants
|
f75be03a6ea577c5cd452e8c893f478fc0454dd3
|
[
"Apache-2.0"
] | 1 |
2020-02-26T23:44:08.000Z
|
2020-02-26T23:44:08.000Z
|
nbs/Deploy_And_Testing_Insect_Classifier.ipynb
|
fastailondon/ghost-ants
|
f75be03a6ea577c5cd452e8c893f478fc0454dd3
|
[
"Apache-2.0"
] | 2 |
2021-09-28T00:58:02.000Z
|
2022-02-26T06:44:20.000Z
|
nbs/Deploy_And_Testing_Insect_Classifier.ipynb
|
fastailondon/ghost-ants
|
f75be03a6ea577c5cd452e8c893f478fc0454dd3
|
[
"Apache-2.0"
] | 2 |
2020-02-26T23:44:14.000Z
|
2020-03-28T12:36:01.000Z
| 22.242038 | 144 | 0.512027 |
[
[
[
"from fastai.vision import *",
"_____no_output_____"
]
],
[
[
"## Enable ipywidgets on Jupyter Notebook\n\n```\npip install ipywidgets\njupyter nbextension enable --py widgetsnbextension\n```\n\nMore info at https://ipywidgets.readthedocs.io/en/stable/\n",
"_____no_output_____"
],
[
"## Load pretrained model",
"_____no_output_____"
]
],
[
[
"learn = load_learner(Path('../datasets/insects'))",
"_____no_output_____"
]
],
[
[
"### Upload image",
"_____no_output_____"
]
],
[
[
"import ipywidgets as widgets\nuploader = widgets.FileUpload()\ndisplay(uploader)",
"_____no_output_____"
],
[
"# Show image uploaded\n[uploaded_filename] = uploader.value\nprint(\"Reading uploaded file \",uploaded_filename)\n\nwidgets.Image(\n value=uploader.value[uploaded_filename][\"content\"],\n format='png',\n width=300,\n height=400,\n )",
"Reading uploaded file paper-wasp-no-text.jpg\n"
]
],
[
[
"## Predict image class",
"_____no_output_____"
]
],
[
[
" import tempfile\n# # create a temporary file and write some data to it\nwith tempfile.TemporaryFile() as fp:\n fp.write(uploader.value[uploaded_filename][\"content\"])\n img = open_image(fp)\n \n pred_class,pred_idx,outputs = learn.predict(img)\n print(\"PREDICTED CLASS \", pred_class)",
"PREDICTED CLASS wasp\n"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
]
] |
4ab2a4cfc02ae79c9e47b9e72419adc1d895786e
| 7,696 |
ipynb
|
Jupyter Notebook
|
site/pt-br/tutorials/load_data/numpy.ipynb
|
phoenix-fork-tensorflow/docs-l10n
|
2287738c22e3e67177555e8a41a0904edfcf1544
|
[
"Apache-2.0"
] | 491 |
2020-01-27T19:05:32.000Z
|
2022-03-31T08:50:44.000Z
|
site/pt-br/tutorials/load_data/numpy.ipynb
|
phoenix-fork-tensorflow/docs-l10n
|
2287738c22e3e67177555e8a41a0904edfcf1544
|
[
"Apache-2.0"
] | 511 |
2020-01-27T22:40:05.000Z
|
2022-03-21T08:40:55.000Z
|
site/pt-br/tutorials/load_data/numpy.ipynb
|
phoenix-fork-tensorflow/docs-l10n
|
2287738c22e3e67177555e8a41a0904edfcf1544
|
[
"Apache-2.0"
] | 627 |
2020-01-27T21:49:52.000Z
|
2022-03-28T18:11:50.000Z
| 28.609665 | 261 | 0.511305 |
[
[
[
"##### Copyright 2019 The TensorFlow Authors.",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
]
],
[
[
"# Carregar dados NumPy",
"_____no_output_____"
],
[
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/load_data/numpy\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />Ver em TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/pt-br/tutorials/load_data/numpy.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Executar em Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/pt-br/tutorials/load_data/numpy.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />Ver código fonte no GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/pt-br/tutorials/load_data/numpy.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Baixar notebook</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"Este tutorial fornece um exemplo de carregamento de dados de matrizes NumPy para um `tf.data.Dataset`.\n\nEste exemplo carrega o conjunto de dados MNIST de um arquivo `.npz`. No entanto, a fonte das matrizes NumPy não é importante.\n",
"_____no_output_____"
],
[
"## Configuração",
"_____no_output_____"
]
],
[
[
"try:\n # %tensorflow_version only exists in Colab.\n %tensorflow_version 2.x\nexcept Exception:\n pass\n",
"_____no_output_____"
],
[
"from __future__ import absolute_import, division, print_function, unicode_literals\n \nimport numpy as np\nimport tensorflow as tf",
"_____no_output_____"
]
],
[
[
"### Carregar um arquivo `.npz` ",
"_____no_output_____"
]
],
[
[
"DATA_URL = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz'\n\npath = tf.keras.utils.get_file('mnist.npz', DATA_URL)\nwith np.load(path) as data:\n train_examples = data['x_train']\n train_labels = data['y_train']\n test_examples = data['x_test']\n test_labels = data['y_test']",
"_____no_output_____"
]
],
[
[
"## Carregar matrizes NumPy com `tf.data.Dataset`",
"_____no_output_____"
],
[
"Supondo que você tenha uma matriz de exemplos e uma matriz correspondente de rótulos, passe as duas matrizes como uma tupla para `tf.data.Dataset.from_tensor_slices` para criar um `tf.data.Dataset`.",
"_____no_output_____"
]
],
[
[
"train_dataset = tf.data.Dataset.from_tensor_slices((train_examples, train_labels))\ntest_dataset = tf.data.Dataset.from_tensor_slices((test_examples, test_labels))",
"_____no_output_____"
]
],
[
[
"## Usar o conjunto de dados",
"_____no_output_____"
],
[
"### Aleatório e lote dos conjuntos de dados",
"_____no_output_____"
]
],
[
[
"BATCH_SIZE = 64\nSHUFFLE_BUFFER_SIZE = 100\n\ntrain_dataset = train_dataset.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)\ntest_dataset = test_dataset.batch(BATCH_SIZE)",
"_____no_output_____"
]
],
[
[
"### Construir e treinar um modelo",
"_____no_output_____"
]
],
[
[
"model = tf.keras.Sequential([\n tf.keras.layers.Flatten(input_shape=(28, 28)),\n tf.keras.layers.Dense(128, activation='relu'),\n tf.keras.layers.Dense(10)\n])\n\nmodel.compile(optimizer=tf.keras.optimizers.RMSprop(),\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=['sparse_categorical_accuracy'])",
"_____no_output_____"
],
[
"model.fit(train_dataset, epochs=10)",
"_____no_output_____"
],
[
"model.evaluate(test_dataset)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
4ab2a6de69cf544aa74638fc32ecfb4805a846fa
| 726 |
ipynb
|
Jupyter Notebook
|
notebooks/am2-data-qaqc.ipynb
|
kwk38kh/03-simple-predictions
|
4097d45d9a25a261dc1591da1dd1ca641e6868ae
|
[
"MIT"
] | null | null | null |
notebooks/am2-data-qaqc.ipynb
|
kwk38kh/03-simple-predictions
|
4097d45d9a25a261dc1591da1dd1ca641e6868ae
|
[
"MIT"
] | null | null | null |
notebooks/am2-data-qaqc.ipynb
|
kwk38kh/03-simple-predictions
|
4097d45d9a25a261dc1591da1dd1ca641e6868ae
|
[
"MIT"
] | null | null | null | 18.15 | 86 | 0.541322 |
[
[
[
"This is the notebook for the second morning session. It doesn't do very much yet",
"_____no_output_____"
]
]
] |
[
"markdown"
] |
[
[
"markdown"
]
] |
4ab2ac1ebd1bc5f6130129e5439b0f2a0213e68a
| 3,851 |
ipynb
|
Jupyter Notebook
|
ludus/examples/ppo_lunar_lander.ipynb
|
ejmejm/ludus
|
2a05d1a69ccddcf70e8738c7e589ec27163d5b63
|
[
"MIT"
] | 9 |
2019-02-25T20:44:20.000Z
|
2020-08-24T10:52:36.000Z
|
ludus/examples/ppo_lunar_lander.ipynb
|
ejmejm/ludus
|
2a05d1a69ccddcf70e8738c7e589ec27163d5b63
|
[
"MIT"
] | null | null | null |
ludus/examples/ppo_lunar_lander.ipynb
|
ejmejm/ludus
|
2a05d1a69ccddcf70e8738c7e589ec27163d5b63
|
[
"MIT"
] | null | null | null | 30.322835 | 112 | 0.601143 |
[
[
[
"# PPO Lunar Lander Example\n\n### Lunar Lander\n\nTrain a Gym continuous lunar lander environment using a proximal policy optimization algorithm\n\nBecause PPO by itself does not explore much, this example will often get stuck in a local minima",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport tensorflow as tf\nfrom tensorflow.keras.layers import Input, Dense, Conv2D, MaxPool2D, Flatten\nimport gym\nfrom ludus.policies import PPOTrainer\nfrom ludus.env import EnvController, make_lunar_lander_c",
"_____no_output_____"
],
[
"env = make_lunar_lander_c() # This instance of the environment is only used\n # to get action dimensions\n\n# Creating a conv net for the policy and value estimator\nobs_op = Input(shape=env.observation_space.shape)\ndense1 = Dense(32, activation='tanh')(obs_op)\ndense2 = Dense(32, activation='tanh')(dense1)\nact_probs_op = Dense(env.action_space.shape[0])(dense2) # Prob dist over possible actions\n\n# Output value of observed state\nvdense1 = Dense(32, activation='tanh')(obs_op)\nvdense2 = Dense(32, activation='tanh')(vdense1)\nvalue_op = Dense(1)(vdense2)\n\n# Wrap a Proximal Policy Optimization Trainer on top of the network\nnetwork = PPOTrainer(obs_op, act_probs_op, value_op, act_type='continuous', ppo_iters=40, entropy_coef=1.)",
"_____no_output_____"
],
[
"n_episodes = 10000 # Total episodes of data to collect\nmax_steps = 400 # Max number of frames per game\nbatch_size = 8 # Smaller = faster, larger = stabler\nprint_freq = 10 # How many training updates between printing progress",
"_____no_output_____"
],
[
"# Create the environment controller for generating game data\nec = EnvController(make_lunar_lander_c, n_threads=4)\n# Set the preprocessing function for observations\nec.set_act_transform(lambda x: np.clip(x, -1, 1))",
"_____no_output_____"
],
[
"update_rewards = []\n\nfor i in range(int(n_episodes / batch_size)):\n ec.sim_episodes(network, batch_size, max_steps) # Simualate env to generate data\n update_rewards.append(ec.get_avg_reward()) # Append rewards to reward tracker list\n dat = ec.get_data() # Get all the data gathered\n network.train(dat) # Train the network with PPO\n if i != 0 and i % print_freq == 0:\n print(f'Update #{i}, Avg Reward: {np.mean(update_rewards[-print_freq:])}') # Print an update",
"_____no_output_____"
],
[
"ec.render_episodes(network, 5, max_steps) # Render an episode to see the result",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ab2ba9901b69f1c20e24bc89087555bee3ee6f1
| 14,796 |
ipynb
|
Jupyter Notebook
|
experimental/integration_demo.ipynb
|
credo-ai/credoai_lens
|
fac0d21b69fa86fc7ab8c2a748c0631ec15fdccc
|
[
"Apache-2.0"
] | 11 |
2021-12-16T19:04:12.000Z
|
2022-03-15T23:41:30.000Z
|
experimental/integration_demo.ipynb
|
credo-ai/credoai_lens
|
fac0d21b69fa86fc7ab8c2a748c0631ec15fdccc
|
[
"Apache-2.0"
] | 72 |
2021-12-22T22:06:46.000Z
|
2022-03-31T15:25:23.000Z
|
experimental/integration_demo.ipynb
|
credo-ai/credoai_lens
|
fac0d21b69fa86fc7ab8c2a748c0631ec15fdccc
|
[
"Apache-2.0"
] | null | null | null | 33.249438 | 442 | 0.593674 |
[
[
[
"# Generic Integration With Credo AI's Governance App \n\nLens is primarily a framework for comprehensive assessment of AI models. However, in addition, it is the primary way to integrate assessment analysis with Credo AI's Governance App.\n\nIn this tutorial, we will take a model created and assessed _completely independently of Lens_ and send that data to Credo AI's Governance App\n\n### Find the code\nThis notebook can be found on [github](https://github.com/credo-ai/credoai_lens/blob/develop/docs/notebooks/integration_demo.ipynb).",
"_____no_output_____"
],
[
"## Create an example ML Model",
"_____no_output_____"
]
],
[
[
"import numpy as np\nfrom matplotlib import pyplot as plt\nfrom pprint import pprint\nfrom sklearn.model_selection import train_test_split\nfrom sklearn import datasets\nfrom sklearn.svm import SVC\nfrom sklearn.metrics import classification_report\nfrom sklearn.metrics import precision_recall_curve",
"_____no_output_____"
]
],
[
[
"### Load data and train model\n\nFor the purpose of this demonstration, we will be classifying digits after a large amount of noise has been added to each image.\n\nWe'll create some charts and assessment metrics to reflect our work.",
"_____no_output_____"
]
],
[
[
"# load data\ndigits = datasets.load_digits()\n\n# add noise\ndigits.data += np.random.rand(*digits.data.shape)*16\n\n# split into train and test\nX_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target)\n\n# create and fit model\nclf = SVC(probability=True)\nclf.fit(X_train, y_train)",
"_____no_output_____"
]
],
[
[
"### Visualize example images along with predicted label",
"_____no_output_____"
]
],
[
[
"examples_plot = plt.figure()\nfor i in range(8):\n image_data = X_test[i,:]\n prediction = digits.target_names[clf.predict(image_data[None,:])[0]]\n label = f'Pred: \"{prediction}\"'\n # plot\n ax = plt.subplot(2,4,i+1)\n ax.imshow(image_data.reshape(8,8), cmap='gray')\n ax.set_title(label)\n ax.tick_params(labelbottom=False, labelleft=False, length=0)\nplt.suptitle('Example Images and Predictions', fontsize=16)",
"_____no_output_____"
]
],
[
[
"### Calculate performance metrics and visualize\n\nAs a multiclassification problem, we can calculate metrics per class, or overall. We record overall metrics, but include figures for individual class performance breakdown",
"_____no_output_____"
]
],
[
[
"metrics = classification_report(y_test, clf.predict(X_test), output_dict=True)\noverall_metrics = metrics['macro avg']\ndel overall_metrics['support']\npprint(overall_metrics)",
"_____no_output_____"
],
[
"probs = clf.predict_proba(X_test)\npr_curves = plt.figure(figsize=(8,6))\n# plot PR curve sper digit\nfor digit in digits.target_names:\n y_true = y_test == digit\n y_prob = probs[:,digit]\n precisions, recalls, thresholds = precision_recall_curve(y_true, y_prob)\n plt.plot(recalls, precisions, lw=3, label=f'Digit: {digit}')\nplt.xlabel('Recall', fontsize=16)\nplt.ylabel('Precision', fontsize=16)\n\n# plot iso lines\nf_scores = np.linspace(0.2, 0.8, num=4)\nlines = []\nlabels = []\nfor f_score in f_scores:\n label = label='ISO f1 curves' if f_score==f_scores[0] else ''\n x = np.linspace(0.01, 1)\n y = f_score * x / (2 * x - f_score)\n l, = plt.plot(x[y >= 0], y[y >= 0], color='gray', alpha=0.2, label=label)\n# final touches\nplt.xlim([0.5, 1.0])\nplt.ylim([0.0, 1.05])\nplt.tick_params(labelsize=14)\nplt.title('PR Curves per Digit', fontsize=20)\nplt.legend(loc='lower left', fontsize=10)",
"_____no_output_____"
],
[
"from sklearn.metrics import plot_confusion_matrix\nconfusion_plot = plt.figure(figsize=(6,6))\nplot_confusion_matrix(clf, X_test, y_test, \\\n normalize='true', ax=plt.gca(), colorbar=False)\nplt.tick_params(labelsize=14)",
"_____no_output_____"
]
],
[
[
"## Sending assessment information to Credo AI\n\nNow that we have completed training and assessing the model, we will demonstrate how information can be sent to the Credo AI Governance App. Metrics related to performance, fairness, or other governance considerations are the most important kind of evidence needed for governance.\n\nIn addition, figures are often produced that help communicate metrics better, understand the model, or other contextualize the AI system. Credo can ingest those as well.\n\n**Which metrics to record?**\n\nIdeally you will have decided on the most important metrics before building the model. We refer to this stage as `Metric Alignment`. This is the phase where your team explicitly determine how you will measure whether your model can be safely deployed. It is part of the more general `Alignment Stage`, which often requires input from multiple stakeholders outside of the team specifically involved in the development of the AI model.\n\nOf course, you may want to record more metrics than those explicitly determined during `Metric Alignment`.\n\nFor instance, in this example let's say that during `Metric Alignment`, the _F1 Score_ is the primary metric used to evaluate model performance. However, we have decided that recall and precision would be helpful supporting. So we will send those three metrics.\n\n\nTo reiterate: You are always free to send more metrics - Credo AI will ingest them. It is you and your team's decision which metrics are tracked specifically for governance purposes.",
"_____no_output_____"
]
],
[
[
"import credoai.integration as ci\nfrom credoai.utils import list_metrics\nmodel_name = 'SVC'\ndataset_name = 'sklearn_digits'",
"_____no_output_____"
]
],
[
[
"## Quick reference\n\nBelow is all the code needed to record a set of metrics and figures. We will unpack each part below.",
"_____no_output_____"
]
],
[
[
"# metrics\nmetric_records = ci.record_metrics_from_dict(overall_metrics, \n model_label=model_name,\n dataset_label=dataset_name)\n\n#figures\nexample_figure_record = ci.Figure(examples_plot._suptitle.get_text(), examples_plot)\nconfusion_figure_record = ci.Figure(confusion_plot.axes[0].get_title(), confusion_plot)\n\npr_curve_caption=\"\"\"Precision-recall curves are shown for each digit separately.\nThese are calculated by treating each class as a separate\nbinary classification problem. The grey lines are \nISO f1 curves - all points on each curve have identical\nf1 scores.\n\"\"\"\npr_curve_figure_record = ci.Figure(pr_curves.axes[0].get_title(),\n figure=pr_curves,\n caption=pr_curve_caption)\nfigure_records = ci.MultiRecord([example_figure_record, confusion_figure_record, pr_curve_figure_record])\n\n# export to file\n# ci.export_to_file(model_record, 'model_record.json')",
"_____no_output_____"
]
],
[
[
"## Metric Record\n\nTo record a metric you can either record each one manually or ingest a dictionary of metrics.",
"_____no_output_____"
],
[
"### Manually entering individual metrics",
"_____no_output_____"
]
],
[
[
"f1_description = \"\"\"Harmonic mean of precision and recall scores.\nRanges from 0-1, with 1 being perfect performance.\"\"\"\nf1_record = ci.Metric(metric_type='f1', \n value=overall_metrics['f1-score'],\n model_label=model_name, \n dataset_label=dataset_name)\n\nprecision_record = ci.Metric(metric_type='precision',\n value=overall_metrics['precision'],\n model_label=model_name, \n dataset_label=dataset_name)\n\nrecall_record = ci.Metric(metric_type='recall', \n value=overall_metrics['recall'],\n model_label=model_name, \n dataset_label=dataset_name)\nmetrics = [f1_record, precision_record, recall_record]",
"_____no_output_____"
]
],
[
[
"### Convenience to record multiple metrics\n\nMultiple metrics can be recorded as long as they are described using a pandas dataframe. ",
"_____no_output_____"
]
],
[
[
"metric_records = ci.record_metrics_from_dict(overall_metrics, \n model_name=model_name, \n dataset_name=dataset_name)",
"_____no_output_____"
]
],
[
[
"## Record figures\n\nCredo can accept a path to an image file or a matplotlib figure. Matplotlib figures are converted to PNG images and saved.\n\n\nA caption can be included for futher description. Included a caption is recommended when the image is not self-explanatory, which is most of the time! ",
"_____no_output_____"
]
],
[
[
"example_figure_record = ci.Figure(examples_plot._suptitle.get_text(), examples_plot)\nconfusion_figure_record = ci.Figure(confusion_plot.axes[0].get_title(), confusion_plot)\n\npr_curve_caption=\"\"\"Precision-recall curves are shown for each digit separately.\nThese are calculated by treating each class as a separate\nbinary classification problem. The grey lines are \nISO f1 curves - all points on each curve have identical\nf1 scores.\n\"\"\"\npr_curve_figure_record = ci.Figure(pr_curves.axes[0].get_title(),\n figure=pr_curves,\n description=pr_curve_caption)\nfigure_records = [example_figure_record, confusion_figure_record, pr_curve_figure_record]",
"_____no_output_____"
]
],
[
[
"## MultiRecords\n\nTo send all the information, we wrap the records in a MuliRecord, which wraps records of the same type.",
"_____no_output_____"
]
],
[
[
"metric_records = ci.MultiRecord(metric_records)\nfigure_records = ci.MultiRecord(figure_records)",
"_____no_output_____"
]
],
[
[
"## Export to Credo AI\n\nThe json object of the model record can be created by calling `MultiRecord.jsonify()`. The convenience function `export_to_file` can be called to export the json record to a file. This file can then be uploaded to Credo AI's Governance App.",
"_____no_output_____"
]
],
[
[
"# filename is the location to save the json object of the model record\n# filename=\"XXX.json\"\n# ci.export_to_file(metric_records, filename)",
"_____no_output_____"
]
],
[
[
"MultiRecords can be directly uploaded to Credo AI's Governance App as well. A model (or data) ID must be known to do so. You use `export_to_credo` to accomplish this.",
"_____no_output_____"
]
],
[
[
"# model_id = \"XXX\"\n# ci.export_to_credo(metric_records, model_id)",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
]
] |
4ab2c79d976afe502753ad4b5cdfb90277b228e3
| 255,131 |
ipynb
|
Jupyter Notebook
|
my_colabs/jnrr19/4_callbacks_hyperparameter_tuning.ipynb
|
guyk1971/stable-baselines
|
ac7a1f3c32851577d5a4fc76e2c42760b9379634
|
[
"MIT"
] | null | null | null |
my_colabs/jnrr19/4_callbacks_hyperparameter_tuning.ipynb
|
guyk1971/stable-baselines
|
ac7a1f3c32851577d5a4fc76e2c42760b9379634
|
[
"MIT"
] | null | null | null |
my_colabs/jnrr19/4_callbacks_hyperparameter_tuning.ipynb
|
guyk1971/stable-baselines
|
ac7a1f3c32851577d5a4fc76e2c42760b9379634
|
[
"MIT"
] | null | null | null | 88.772095 | 90,711 | 0.728304 |
[
[
[
"<a href=\"https://colab.research.google.com/github/araffin/rl-tutorial-jnrr19/blob/master/4_callbacks_hyperparameter_tuning.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Stable Baselines Tutorial - Callbacks and hyperparameter tuning\n\nGithub repo: https://github.com/araffin/rl-tutorial-jnrr19\n\nStable-Baselines: https://github.com/hill-a/stable-baselines\n\nDocumentation: https://stable-baselines.readthedocs.io/en/master/\n\nRL Baselines zoo: https://github.com/araffin/rl-baselines-zoo\n\n\n## Introduction\n\nIn this notebook, you will learn how to use *Callbacks* which allow to do monitoring, auto saving, model manipulation, progress bars, ...\n\n\nYou will also see that finding good hyperparameters is key to success in RL.\n\n## Install Dependencies and Stable Baselines Using Pip",
"_____no_output_____"
]
],
[
[
"# !apt install swig\n# !pip install tqdm==4.36.1\n# !pip install stable-baselines[mpi]==2.8.0\n# # Stable Baselines only supports tensorflow 1.x for now\n# %tensorflow_version 1.x",
"_____no_output_____"
],
[
"%load_ext autoreload\n%autoreload 2\ntry:\n %%tensorflow_version 1.x\nexcept:\n pass\n\nimport os\nimport sys\nlib_path = os.path.abspath('../..')\nprint('inserting the following to path',lib_path)\nif lib_path not in sys.path:\n sys.path.insert(0,lib_path)\nprint(sys.path)\n#-----------------------------------\n# Filter tensorflow version warnings\n#-----------------------------------\n# https://stackoverflow.com/questions/40426502/is-there-a-way-to-suppress-the-messages-tensorflow-prints/40426709\nos.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # or any {'0', '1', '2'}\nimport warnings\n# https://stackoverflow.com/questions/15777951/how-to-suppress-pandas-future-warning\nwarnings.simplefilter(action='ignore', category=FutureWarning)\nwarnings.simplefilter(action='ignore', category=Warning)\nimport tensorflow as tf\ntf.get_logger().setLevel('INFO')\ntf.autograph.set_verbosity(0)\nimport logging\ntf.get_logger().setLevel(logging.ERROR)\n\n# from IPython.core.debugger import set_trace # GK debug",
"inserting the following to path /home/gkoren2/PycharmProjects/remote/MLA/RL/stable-baselines\n['/home/gkoren2/PycharmProjects/remote/MLA/RL/stable-baselines', '/home/gkoren2/PycharmProjects/remote/MLA/RL/stable-baselines/my_colabs/jnrr19', '/opt/anaconda3/envs/tf15/lib/python37.zip', '/opt/anaconda3/envs/tf15/lib/python3.7', '/opt/anaconda3/envs/tf15/lib/python3.7/lib-dynload', '', '/opt/anaconda3/envs/tf15/lib/python3.7/site-packages', '/opt/anaconda3/envs/tf15/lib/python3.7/site-packages/IPython/extensions', '/home/gkoren2/.ipython']\n"
],
[
"# sys.path.pop(0)\n# print(sys.path)",
"['/home/guy/workspace/study/remote/agents', '/home/guy/workspace/study/remote/stable-baselines', '/home/guy/anaconda3/envs/rl15/lib/python37.zip', '/home/guy/anaconda3/envs/rl15/lib/python3.7', '/home/guy/anaconda3/envs/rl15/lib/python3.7/lib-dynload', '', '/home/guy/anaconda3/envs/rl15/lib/python3.7/site-packages', '/home/guy/anaconda3/envs/rl15/lib/python3.7/site-packages/IPython/extensions', '/home/guy/.ipython']\n"
],
[
"import gym\nfrom stable_baselines import A2C, SAC, PPO2, TD3",
"_____no_output_____"
]
],
[
[
"# The importance of hyperparameter tuning\n\nWhen compared with Supervised Learning, Deep Reinforcement Learning is far more sensitive to the choice of hyper-parameters such as learning rate, number of neurons, number of layers, optimizer ... etc. \nPoor choice of hyper-parameters can lead to poor/unstable convergence. This challenge is compounded by the variability in performance across random seeds (used to initialize the network weights and the environment).\n\nHere we demonstrate on a toy example the [Soft Actor Critic](https://arxiv.org/abs/1801.01290) algorithm applied in the Pendulum environment. Note the change in performance between the default and \"tuned\" parameters. ",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\ndef evaluate(model, env, num_episodes=100):\n # This function will only work for a single Environment\n all_episode_rewards = []\n for i in range(num_episodes):\n episode_rewards = []\n done = False\n obs = env.reset()\n while not done:\n action, _states = model.predict(obs)\n obs, reward, done, info = env.step(action)\n episode_rewards.append(reward)\n\n all_episode_rewards.append(sum(episode_rewards))\n\n mean_episode_reward = np.mean(all_episode_rewards)\n return mean_episode_reward",
"_____no_output_____"
],
[
"eval_env = gym.make('Pendulum-v0')",
"_____no_output_____"
],
[
"default_model = SAC('MlpPolicy', 'Pendulum-v0', verbose=1).learn(8000)",
"Creating environment from the given name, wrapped in a DummyVecEnv.\n-----------------------------------------\n| current_lr | 0.0003 |\n| ent_coef | 0.86086357 |\n| ent_coef_loss | -0.24097586 |\n| entropy | 1.0934155 |\n| episodes | 4 |\n| fps | 116 |\n| mean 100 episode reward | -1.4e+03 |\n| n_updates | 500 |\n| policy_loss | 14.504475 |\n| qf1_loss | 0.63816196 |\n| qf2_loss | 0.6681262 |\n| time_elapsed | 5 |\n| total timesteps | 600 |\n| value_loss | 0.05305259 |\n-----------------------------------------\n----------------------------------------\n| current_lr | 0.0003 |\n| ent_coef | 0.6817271 |\n| ent_coef_loss | -0.5901724 |\n| entropy | 1.0763786 |\n| episodes | 8 |\n| fps | 113 |\n| mean 100 episode reward | -1.54e+03 |\n| n_updates | 1300 |\n| policy_loss | 36.730305 |\n| qf1_loss | 0.2912222 |\n| qf2_loss | 0.3506565 |\n| time_elapsed | 12 |\n| total timesteps | 1400 |\n| value_loss | 0.15306549 |\n----------------------------------------\n----------------------------------------\n| current_lr | 0.0003 |\n| ent_coef | 0.5426672 |\n| ent_coef_loss | -0.874297 |\n| entropy | 1.0220902 |\n| episodes | 12 |\n| fps | 112 |\n| mean 100 episode reward | -1.54e+03 |\n| n_updates | 2100 |\n| policy_loss | 65.32428 |\n| qf1_loss | 30.966835 |\n| qf2_loss | 31.497124 |\n| time_elapsed | 19 |\n| total timesteps | 2200 |\n| value_loss | 0.19978185 |\n----------------------------------------\n----------------------------------------\n| current_lr | 0.0003 |\n| ent_coef | 0.4425113 |\n| ent_coef_loss | -0.9054169 |\n| entropy | 0.9032334 |\n| episodes | 16 |\n| fps | 111 |\n| mean 100 episode reward | -1.45e+03 |\n| n_updates | 2900 |\n| policy_loss | 84.642166 |\n| qf1_loss | 1.4909126 |\n| qf2_loss | 1.1197826 |\n| time_elapsed | 26 |\n| total timesteps | 3000 |\n| value_loss | 1.1148813 |\n----------------------------------------\n-----------------------------------------\n| current_lr | 0.0003 |\n| ent_coef | 0.3716035 |\n| ent_coef_loss | -0.65237087 |\n| 
entropy | 0.72794294 |\n| episodes | 20 |\n| fps | 111 |\n| mean 100 episode reward | -1.31e+03 |\n| n_updates | 3700 |\n| policy_loss | 85.19386 |\n| qf1_loss | 19.516129 |\n| qf2_loss | 17.376013 |\n| time_elapsed | 34 |\n| total timesteps | 3800 |\n| value_loss | 1.1181409 |\n-----------------------------------------\n-----------------------------------------\n| current_lr | 0.0003 |\n| ent_coef | 0.32477146 |\n| ent_coef_loss | -0.25891066 |\n| entropy | 0.748786 |\n| episodes | 24 |\n| fps | 111 |\n| mean 100 episode reward | -1.28e+03 |\n| n_updates | 4500 |\n| policy_loss | 100.28521 |\n| qf1_loss | 10.946207 |\n| qf2_loss | 8.666966 |\n| time_elapsed | 41 |\n| total timesteps | 4600 |\n| value_loss | 1.6161776 |\n-----------------------------------------\n-----------------------------------------\n| current_lr | 0.0003 |\n| ent_coef | 0.29656214 |\n| ent_coef_loss | -0.35232383 |\n| entropy | 0.650649 |\n| episodes | 28 |\n| fps | 111 |\n| mean 100 episode reward | -1.27e+03 |\n| n_updates | 5300 |\n| policy_loss | 97.61492 |\n| qf1_loss | 15.680162 |\n| qf2_loss | 11.12932 |\n| time_elapsed | 48 |\n| total timesteps | 5400 |\n| value_loss | 3.2327273 |\n-----------------------------------------\n-----------------------------------------\n| current_lr | 0.0003 |\n| ent_coef | 0.2708956 |\n| ent_coef_loss | -0.14060733 |\n| entropy | 0.66970193 |\n| episodes | 32 |\n| fps | 110 |\n| mean 100 episode reward | -1.24e+03 |\n| n_updates | 6100 |\n| policy_loss | 123.726555 |\n| qf1_loss | 9.375865 |\n| qf2_loss | 7.6604624 |\n| time_elapsed | 55 |\n| total timesteps | 6200 |\n| value_loss | 1.8344877 |\n-----------------------------------------\n------------------------------------------\n| current_lr | 0.0003 |\n| ent_coef | 0.2536729 |\n| ent_coef_loss | -0.016172528 |\n| entropy | 0.6564288 |\n| episodes | 36 |\n| fps | 110 |\n| mean 100 episode reward | -1.22e+03 |\n| n_updates | 6900 |\n| policy_loss | 131.00299 |\n| qf1_loss | 11.398428 |\n| qf2_loss | 
9.232851 |\n| time_elapsed | 63 |\n| total timesteps | 7000 |\n| value_loss | 1.7097068 |\n------------------------------------------\n------------------------------------------\n| current_lr | 0.0003 |\n| ent_coef | 0.23458087 |\n| ent_coef_loss | -0.012409501 |\n| entropy | 0.58784926 |\n| episodes | 40 |\n| fps | 110 |\n| mean 100 episode reward | -1.22e+03 |\n| n_updates | 7700 |\n| policy_loss | 140.40617 |\n| qf1_loss | 142.03795 |\n| qf2_loss | 141.36569 |\n| time_elapsed | 70 |\n| total timesteps | 7800 |\n| value_loss | 1.3033414 |\n------------------------------------------\n"
],
[
"evaluate(default_model, eval_env, num_episodes=100)",
"_____no_output_____"
],
[
"tuned_model = SAC('MlpPolicy', 'Pendulum-v0', batch_size=256, verbose=1, policy_kwargs=dict(layers=[256, 256])).learn(8000)",
"Creating environment from the given name, wrapped in a DummyVecEnv.\n-----------------------------------------\n| current_lr | 0.0003 |\n| ent_coef | 0.9020036 |\n| ent_coef_loss | -0.16898558 |\n| entropy | 1.2131529 |\n| episodes | 4 |\n| fps | 147 |\n| mean 100 episode reward | -1.35e+03 |\n| n_updates | 345 |\n| policy_loss | 12.203175 |\n| qf1_loss | 0.22476763 |\n| qf2_loss | 0.22148173 |\n| time_elapsed | 4 |\n| total timesteps | 600 |\n| value_loss | 0.0542944 |\n-----------------------------------------\n-----------------------------------------\n| current_lr | 0.0003 |\n| ent_coef | 0.7152608 |\n| ent_coef_loss | -0.48915467 |\n| entropy | 1.0975705 |\n| episodes | 8 |\n| fps | 114 |\n| mean 100 episode reward | -1.38e+03 |\n| n_updates | 1145 |\n| policy_loss | 33.982548 |\n| qf1_loss | 0.15928556 |\n| qf2_loss | 0.134309 |\n| time_elapsed | 12 |\n| total timesteps | 1400 |\n| value_loss | 0.36052936 |\n-----------------------------------------\n-----------------------------------------\n| current_lr | 0.0003 |\n| ent_coef | 0.57604194 |\n| ent_coef_loss | -0.60929537 |\n| entropy | 1.0738825 |\n| episodes | 12 |\n| fps | 107 |\n| mean 100 episode reward | -1.4e+03 |\n| n_updates | 1945 |\n| policy_loss | 53.973625 |\n| qf1_loss | 0.5117459 |\n| qf2_loss | 0.3860364 |\n| time_elapsed | 20 |\n| total timesteps | 2200 |\n| value_loss | 1.2275451 |\n-----------------------------------------\n----------------------------------------\n| current_lr | 0.0003 |\n| ent_coef | 0.47800726 |\n| ent_coef_loss | -0.6580939 |\n| entropy | 0.9332459 |\n| episodes | 16 |\n| fps | 104 |\n| mean 100 episode reward | -1.31e+03 |\n| n_updates | 2745 |\n| policy_loss | 70.29514 |\n| qf1_loss | 6.9415975 |\n| qf2_loss | 6.7279634 |\n| time_elapsed | 28 |\n| total timesteps | 3000 |\n| value_loss | 0.7363456 |\n----------------------------------------\n-----------------------------------------\n| current_lr | 0.0003 |\n| ent_coef | 0.40782788 |\n| ent_coef_loss | -0.46044654 
|\n| entropy | 0.87612736 |\n| episodes | 20 |\n| fps | 103 |\n| mean 100 episode reward | -1.17e+03 |\n| n_updates | 3545 |\n| policy_loss | 80.86107 |\n| qf1_loss | 5.183026 |\n| qf2_loss | 5.2415833 |\n| time_elapsed | 36 |\n| total timesteps | 3800 |\n| value_loss | 1.6897521 |\n-----------------------------------------\n-----------------------------------------\n| current_lr | 0.0003 |\n| ent_coef | 0.35913584 |\n| ent_coef_loss | -0.37838006 |\n| entropy | 0.6995595 |\n| episodes | 24 |\n| fps | 101 |\n| mean 100 episode reward | -1.01e+03 |\n| n_updates | 4345 |\n| policy_loss | 79.21163 |\n| qf1_loss | 35.647327 |\n| qf2_loss | 35.577343 |\n| time_elapsed | 45 |\n| total timesteps | 4600 |\n| value_loss | 2.2238235 |\n-----------------------------------------\n-----------------------------------------\n| current_lr | 0.0003 |\n| ent_coef | 0.31183186 |\n| ent_coef_loss | -0.46596664 |\n| entropy | 0.7889936 |\n| episodes | 28 |\n| fps | 101 |\n| mean 100 episode reward | -913 |\n| n_updates | 5145 |\n| policy_loss | 88.655174 |\n| qf1_loss | 1.4137881 |\n| qf2_loss | 1.4575037 |\n| time_elapsed | 53 |\n| total timesteps | 5400 |\n| value_loss | 1.0878401 |\n-----------------------------------------\n-----------------------------------------\n| current_lr | 0.0003 |\n| ent_coef | 0.27100357 |\n| ent_coef_loss | -0.40352774 |\n| entropy | 0.6981318 |\n| episodes | 32 |\n| fps | 100 |\n| mean 100 episode reward | -848 |\n| n_updates | 5945 |\n| policy_loss | 88.61823 |\n| qf1_loss | 5.9261127 |\n| qf2_loss | 6.0075383 |\n| time_elapsed | 61 |\n| total timesteps | 6200 |\n| value_loss | 2.2948709 |\n-----------------------------------------\n-----------------------------------------\n| current_lr | 0.0003 |\n| ent_coef | 0.23609188 |\n| ent_coef_loss | -0.16232425 |\n| entropy | 0.5260376 |\n| episodes | 36 |\n| fps | 100 |\n| mean 100 episode reward | -773 |\n| n_updates | 6745 |\n| policy_loss | 86.67501 |\n| qf1_loss | 1.8504347 |\n| qf2_loss | 1.7631382 
|\n| time_elapsed | 69 |\n| total timesteps | 7000 |\n| value_loss | 1.5603675 |\n-----------------------------------------\n------------------------------------------\n| current_lr | 0.0003 |\n| ent_coef | 0.207431 |\n| ent_coef_loss | -0.064386636 |\n| entropy | 0.625165 |\n| episodes | 40 |\n| fps | 99 |\n| mean 100 episode reward | -707 |\n| n_updates | 7545 |\n| policy_loss | 90.650246 |\n| qf1_loss | 2.047592 |\n| qf2_loss | 1.7619454 |\n| time_elapsed | 78 |\n| total timesteps | 7800 |\n| value_loss | 1.8599651 |\n------------------------------------------\n"
],
[
"evaluate(tuned_model, eval_env, num_episodes=100)",
"_____no_output_____"
]
],
[
[
"Exploring hyperparameter tuning is out of the scope (and time schedule) of this tutorial. However, you should know that we provide tuned hyperparameters in the [rl zoo](https://github.com/araffin/rl-baselines-zoo), as well as automatic hyperparameter optimization using [Optuna](https://github.com/pfnet/optuna).\n\n<font color='red'> TODO : have a deeper look at the above links </font>",
"_____no_output_____"
],
[
"## Helper functions\nThis is to help the callbacks store variables (as they are functions), but this could also be done by passing a class method.",
"_____no_output_____"
]
],
[
[
"def get_callback_vars(model, **kwargs): \n \"\"\"\n Helps store variables for the callback functions\n :param model: (BaseRLModel)\n :param **kwargs: initial values of the callback variables\n \"\"\"\n # save the called attribute in the model\n if not hasattr(model, \"_callback_vars\"): \n model._callback_vars = dict(**kwargs)\n else: # check all the kwargs are in the callback variables\n for (name, val) in kwargs.items():\n if name not in model._callback_vars:\n model._callback_vars[name] = val\n return model._callback_vars # return dict reference (mutable)",
"_____no_output_____"
]
],
[
[
"# Callbacks\n\n## A functional approach\nA callback function takes the `locals()` variables and the `globals()` variables from the model, then returns a boolean value for whether or not the training should continue.\n\nThanks to this access to the model's variables, in particular `_locals[\"self\"]`, we can even change the parameters of the model without halting the training or changing the model's code.\n\nHere we have a simple callback that can only be called twice:",
"_____no_output_____"
]
],
[
[
"def simple_callback(_locals, _globals):\n    \"\"\"\n    Callback called at each step (for DQN and others) or after n steps (see ACER or PPO2)\n    :param _locals: (dict)\n    :param _globals: (dict)\n    \"\"\" \n    # get callback variables, with default values if uninitialized\n    callback_vars = get_callback_vars(_locals[\"self\"], called=False) \n    \n    if not callback_vars[\"called\"]:\n        print(\"callback - first call\")\n        callback_vars[\"called\"] = True\n        return True # returns True, training continues.\n    else:\n        print(\"callback - second call\")\n        return False # returns False, training stops.\n\nmodel = SAC('MlpPolicy', 'Pendulum-v0', verbose=1)\nmodel.learn(8000, callback=simple_callback)",
"Creating environment from the given name, wrapped in a DummyVecEnv.\ncallback - first call\ncallback - second call\n"
]
],
[
[
"## First example: Auto saving best model\nIn RL, it is quite useful to keep a clean version of a model as you are training, as we can end up with burn-in of a bad policy. This is a typical use case for callbacks, as they can call the save function of the model and observe the training over time.\n\nUsing the monitoring wrapper, we can save statistics of the environment, and use them to determine the mean training reward.\nThis allows us to save the best model while training.\n\nNote that this is not the proper way of evaluating an RL agent: you should create a test environment and evaluate the agent's performance in the callback. For simplicity, we will be using the training reward as a proxy.",
"_____no_output_____"
]
],
[
[
"import os\n\nimport numpy as np\nfrom stable_baselines.bench import Monitor\nfrom stable_baselines.common.vec_env import DummyVecEnv\nfrom stable_baselines.results_plotter import load_results, ts2xy\n\ndef auto_save_callback(_locals, _globals):\n    \"\"\"\n    Callback called at each step (for DQN and others) or after n steps (see ACER or PPO2)\n    :param _locals: (dict)\n    :param _globals: (dict)\n    \"\"\"\n    # get callback variables, with default values if uninitialized\n    callback_vars = get_callback_vars(_locals[\"self\"], n_steps=0, best_mean_reward=-np.inf) \n\n    # only evaluate every 20 steps\n    if callback_vars[\"n_steps\"] % 20 == 0:\n        # Evaluate policy training performance\n        x, y = ts2xy(load_results(log_dir), 'timesteps')\n        if len(x) > 0:\n            mean_reward = np.mean(y[-100:])\n\n            # New best model, you could save the agent here\n            if mean_reward > callback_vars[\"best_mean_reward\"]:\n                callback_vars[\"best_mean_reward\"] = mean_reward\n                # Example for saving best model\n                print(\"Saving new best model at {} timesteps\".format(x[-1]))\n                _locals['self'].save(log_dir + 'best_model')\n    callback_vars[\"n_steps\"] += 1\n    return True\n\n# Create log dir\nlog_dir = \"/tmp/gym/\"\nos.makedirs(log_dir, exist_ok=True)\n\n# Create and wrap the environment\nenv = gym.make('CartPole-v1')\nenv = Monitor(env, log_dir, allow_early_resets=True)\nenv = DummyVecEnv([lambda: env])\n\nmodel = A2C('MlpPolicy', env, verbose=0)\nmodel.learn(total_timesteps=10000, callback=auto_save_callback)",
"> \u001b[0;32m<ipython-input-7-dea9af705eb3>\u001b[0m(19)\u001b[0;36mauto_save_callback\u001b[0;34m()\u001b[0m\n\u001b[0;32m 17 \u001b[0;31m \u001b[0mset_trace\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 18 \u001b[0;31m \u001b[0;31m# skip every 20 steps\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m---> 19 \u001b[0;31m \u001b[0;32mif\u001b[0m \u001b[0mcallback_vars\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m\"n_steps\"\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m%\u001b[0m \u001b[0;36m20\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;36m0\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 20 \u001b[0;31m \u001b[0;31m# Evaluate policy training performance\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m 21 \u001b[0;31m \u001b[0mx\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0my\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mts2xy\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mload_results\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mlog_dir\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m'timesteps'\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\nipdb> log_dir\n'/tmp/gym/1'\nipdb> q\n"
]
],
[
[
"## Second example: Realtime plotting of performance\nWhile training, it is sometimes useful to see how the training progresses over time, in terms of episodic reward.\nFor this, Stable-Baselines has [Tensorboard support](https://stable-baselines.readthedocs.io/en/master/guide/tensorboard.html), however this can be very cumbersome, especially in terms of disk space usage. \n\n**NOTE: Unfortunately live plotting does not work out of the box on google colab**\n\nHere, we can use a callback again to plot the episodic reward in realtime, using the monitoring wrapper:",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib notebook\n\ndef plotting_callback(_locals, _globals):\n \"\"\"\n Callback called at each step (for DQN an others) or after n steps (see ACER or PPO2)\n :param _locals: (dict)\n :param _globals: (dict)\n \"\"\"\n # get callback variables, with default values if unintialized\n callback_vars = get_callback_vars(_locals[\"self\"], plot=None) \n \n # get the monitor's data\n x, y = ts2xy(load_results(log_dir), 'timesteps')\n if callback_vars[\"plot\"] is None: # make the plot\n plt.ion()\n fig = plt.figure(figsize=(6,3))\n ax = fig.add_subplot(111)\n line, = ax.plot(x, y)\n callback_vars[\"plot\"] = (line, ax, fig)\n plt.show()\n else: # update and rescale the plot\n callback_vars[\"plot\"][0].set_data(x, y)\n callback_vars[\"plot\"][-2].relim()\n callback_vars[\"plot\"][-2].set_xlim([_locals[\"total_timesteps\"] * -0.02, \n _locals[\"total_timesteps\"] * 1.02])\n callback_vars[\"plot\"][-2].autoscale_view(True,True,True)\n callback_vars[\"plot\"][-1].canvas.draw()\n \n# Create log dir\nlog_dir = \"/tmp/gym/\"\nos.makedirs(log_dir, exist_ok=True)\n\n# Create and wrap the environment\nenv = gym.make('MountainCarContinuous-v0')\nenv = Monitor(env, log_dir, allow_early_resets=True)\nenv = DummyVecEnv([lambda: env])\n \nmodel = PPO2('MlpPolicy', env, verbose=0)\nmodel.learn(20000, callback=plotting_callback)",
"_____no_output_____"
]
],
[
[
"## Third example: Progress bar\nQuality of life improvements are always welcome when developing and using RL. Here, we use [tqdm](https://tqdm.github.io/) to show a progress bar of the training, along with the number of timesteps per second and the estimated time remaining to the end of the training:",
"_____no_output_____"
]
],
[
[
"from tqdm.auto import tqdm\n\n# this callback uses the 'with' block, allowing for correct initialisation and destruction\nclass progressbar_callback(object):\n    def __init__(self, total_timesteps): # init object with total timesteps\n        self.pbar = None\n        self.total_timesteps = total_timesteps\n        \n    def __enter__(self): # create the progress bar and callback, return the callback\n        self.pbar = tqdm(total=self.total_timesteps)\n            \n        def callback_progressbar(local_, global_):\n            self.pbar.n = local_[\"self\"].num_timesteps\n            self.pbar.update(0)\n            \n        return callback_progressbar\n\n    def __exit__(self, exc_type, exc_val, exc_tb): # close the callback\n        self.pbar.n = self.total_timesteps\n        self.pbar.update(0)\n        self.pbar.close()\n        \nmodel = TD3('MlpPolicy', 'Pendulum-v0', verbose=0)\nwith progressbar_callback(2000) as callback: # this guarantees that the tqdm progress bar closes correctly\n    model.learn(2000, callback=callback)",
"_____no_output_____"
]
],
[
[
"## Fourth example: Composition\nThanks to the functional nature of callbacks, it is possible to compose several callbacks into a single callback. This means we can auto save our best model, show the progress bar and plot the episodic reward of the training:",
"_____no_output_____"
]
],
[
[
"%matplotlib notebook\n\ndef compose_callback(*callback_funcs): # takes a list of functions, and returns the composed function.\n def _callback(_locals, _globals):\n continue_training = True\n for cb_func in callback_funcs:\n if cb_func(_locals, _globals) is False: # as a callback can return None for legacy reasons.\n continue_training = False\n return continue_training\n return _callback\n\n# Create log dir\nlog_dir = \"/tmp/gym/\"\nos.makedirs(log_dir, exist_ok=True)\n\n# Create and wrap the environment\nenv = gym.make('CartPole-v1')\nenv = Monitor(env, log_dir, allow_early_resets=True)\nenv = DummyVecEnv([lambda: env])\n\nmodel = PPO2('MlpPolicy', env, verbose=0)\nwith progressbar_callback(10000) as progress_callback:\n model.learn(10000, callback=compose_callback(progress_callback, plotting_callback, auto_save_callback))",
"_____no_output_____"
]
],
[
[
"## Exercise: Code your own callback\n\n\nThe previous examples showed the basics of what a callback is and what you can do with it.\n\nThe goal of this exercise is to create a callback that will evaluate the model using a test environment and save it if this is the best known model.\n\nTo make things easier, we are going to use a class instead of a function, with the magic method `__call__`.",
"_____no_output_____"
]
],
[
[
"class EvalCallback(object):\n    \"\"\"\n    Callback for evaluating an agent.\n    \n    :param eval_env: (gym.Env) The environment used for evaluation\n    :param n_eval_episodes: (int) The number of episodes to test the agent\n    :param eval_freq: (int) Evaluate the agent every eval_freq calls of the callback.\n    \"\"\"\n    def __init__(self, eval_env, n_eval_episodes=5, eval_freq=20):\n        super(EvalCallback, self).__init__()\n        self.eval_env = eval_env\n        self.n_eval_episodes = n_eval_episodes\n        self.eval_freq = eval_freq\n        self.n_calls = 0\n        self.best_mean_reward = -np.inf\n        \n\n    def _evaluate(self, model):\n        # This function will only work for a single environment\n        all_episode_rewards = []\n        for i in range(self.n_eval_episodes):\n            episode_rewards = []\n            done = False\n            obs = self.eval_env.reset()\n            while not done:\n                action, _states = model.predict(obs)\n                obs, reward, done, info = self.eval_env.step(action)\n                episode_rewards.append(reward)\n\n            all_episode_rewards.append(sum(episode_rewards))\n\n        mean_episode_reward = np.mean(all_episode_rewards)\n        return mean_episode_reward\n    \n    \n    def __call__(self, locals_, globals_):\n        \"\"\"\n        This method will be called by the model. This is equivalent to the callback function\n        used in the previous examples.\n        :param locals_: (dict)\n        :param globals_: (dict)\n        :return: (bool)\n        \"\"\"\n        # Get the self object of the model\n        self_ = locals_['self']\n        \n        if self.n_calls % self.eval_freq == 0:\n            # === YOUR CODE HERE ===#\n            # Evaluate the agent:\n            # you need to do self.n_eval_episodes loop using self.eval_env\n            # hint: you can use self_.predict(obs)\n            mean_episode_reward = self._evaluate(self_)\n            # Save the agent if needed\n            # and update self.best_mean_reward\n            if mean_episode_reward > self.best_mean_reward:\n                self.best_mean_reward = mean_episode_reward\n                print(\"Best mean reward: {:.2f}\".format(self.best_mean_reward))\n                self_.save(log_dir + 'best_model')\n            # ====================== #\n\n        self.n_calls += 1\n\n        return True",
"_____no_output_____"
]
],
[
[
"### Test your callback",
"_____no_output_____"
]
],
[
[
"# Env used for training\nenv = gym.make(\"CartPole-v1\")\n# Env for evaluating the agent\neval_env = gym.make(\"CartPole-v1\")\n\n# === YOUR CODE HERE ===#\n# Create log dir - do I need it ?\nlog_dir = \"/tmp/gym/1\"\nos.makedirs(log_dir, exist_ok=True)\n\n# Create the callback object - do I need to wrap with Monitor ? else, how do I pass the logdir? \n# I could add it to the EvalCallback constructor?\n# eval_env = Monitor(eval_env, log_dir, allow_early_resets=True)\ncallback = EvalCallback(eval_env)\n\n# Create the RL model\nmodel = PPO2('MlpPolicy', env, verbose=0)\n# ====================== #\n# Train the RL model\nmodel.learn(int(100000), callback=callback)\n",
"Best mean reward: 15.60\nBest mean reward: 38.80\nBest mean reward: 75.60\nBest mean reward: 125.20\nBest mean reward: 133.60\nBest mean reward: 221.40\nBest mean reward: 262.20\nBest mean reward: 289.20\nBest mean reward: 306.60\nBest mean reward: 382.60\nBest mean reward: 490.20\nBest mean reward: 500.00\n"
]
],
[
[
"# Conclusion\n\n\nIn this notebook we have seen:\n- that good hyperparameters are key to the success of RL: you should not expect the default ones to work on every problem\n- what a callback is and what you can do with it\n- how to create your own callback\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4ab2eff6224bce3638ffb512058167ab92bf2933
| 59,333 |
ipynb
|
Jupyter Notebook
|
code/voting_dapp.ipynb
|
matokovacik/tezos-edu
|
4a28a6d84d3d6decd5ecff5f6f82fac0ff19abfb
|
[
"MIT"
] | 1 |
2019-08-29T13:00:49.000Z
|
2019-08-29T13:00:49.000Z
|
code/voting_dapp.ipynb
|
matokovacik/tezos-edu
|
4a28a6d84d3d6decd5ecff5f6f82fac0ff19abfb
|
[
"MIT"
] | null | null | null |
code/voting_dapp.ipynb
|
matokovacik/tezos-edu
|
4a28a6d84d3d6decd5ecff5f6f82fac0ff19abfb
|
[
"MIT"
] | null | null | null | 45.852396 | 321 | 0.544655 |
[
[
[
"empty"
]
]
] |
[
"empty"
] |
[
[
"empty"
]
] |
4ab2f197c6606959957c3107f5e040be3a1f569a
| 634,352 |
ipynb
|
Jupyter Notebook
|
Notebooks/morph.ipynb
|
RSPB/Birdman
|
d9be1621254eadbb2666680a91b3a89df531f401
|
[
"Apache-2.0"
] | 1 |
2019-04-25T11:34:55.000Z
|
2019-04-25T11:34:55.000Z
|
Notebooks/morph.ipynb
|
RSPB/Birdman
|
d9be1621254eadbb2666680a91b3a89df531f401
|
[
"Apache-2.0"
] | null | null | null |
Notebooks/morph.ipynb
|
RSPB/Birdman
|
d9be1621254eadbb2666680a91b3a89df531f401
|
[
"Apache-2.0"
] | null | null | null | 2,923.281106 | 305,632 | 0.965804 |
[
[
[
"import sys, os\nsys.path.append(\n os.path.abspath(os.path.join(os.path.dirname('__file__'), os.path.pardir)))\n\nimport numpy as np\nimport librosa\nimport scipy.signal as sig\nimport librosa.display\nimport matplotlib.pyplot as plt\n\nimport dsp\nfrom read_labels import read_labels\n\nplt.rcParams['figure.figsize'] = (32, 32)\n\n%matplotlib inline",
"_____no_output_____"
],
[
"rootdir = '/home/tracek/Data/Birdman/'\nfilename = os.path.join(rootdir, 'raw/STHELENA-02_20140605_200000_1.wav')\noutdir = os.path.join(rootdir, 'raw/samples/')\nfilename_noext = os.path.splitext(os.path.basename(filename))[0] \nsheet = read_labels('/home/tracek/Data/Birdman/labels/sthelena_labels.xls', sheetname=filename_noext)\n\n# in seconds [s]\nsignal_start_s = 0\nsignal_end_s = 95\n\nsr = 16000\nwin = 256 # samples\nhop = win // 2\n\ncondition = (sheet['Time Start'] > signal_start_s) & (sheet['Time End'] < signal_end_s)\nsheet_sample = sheet[condition]",
"_____no_output_____"
],
[
"y, sr = librosa.load(filename, sr=sr, dtype='float64')\ny = y[signal_start_s * sr: signal_end_s * sr]",
"_____no_output_____"
],
[
"import yaafelib\n\nfeature_plan = yaafelib.FeaturePlan(sample_rate=sr)\nsuccess = feature_plan.loadFeaturePlan('features.config')\nengine = yaafelib.Engine()\nengine.load(feature_plan.getDataFlow())\nafp = yaafelib.AudioFileProcessor()\nafp.processFile(engine, filename)\nfeats = engine.readAllOutputs()",
"_____no_output_____"
],
[
"C = np.flipud(np.log10(feats['CQT'][:1500].T))",
"_____no_output_____"
],
[
"plt.figure(figsize=(20,20))\nplt.imshow(C)",
"_____no_output_____"
],
[
"plt.figure(figsize=(20,20))\nplt.imshow(librosa.core.logamplitude(C))",
"_____no_output_____"
],
[
"plt.figure(figsize=(20,20))\nplt.plot(range(200))",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ab305ad7af51d99fd795fa83cfa9603752d4e3a
| 9,518 |
ipynb
|
Jupyter Notebook
|
GitHub_MD_rendering/__call__ method.ipynb
|
kyaiooiayk/Python-Programming
|
b70dde24901cd24b38e2ead7c9a1b2d1808fc4b0
|
[
"OLDAP-2.3"
] | null | null | null |
GitHub_MD_rendering/__call__ method.ipynb
|
kyaiooiayk/Python-Programming
|
b70dde24901cd24b38e2ead7c9a1b2d1808fc4b0
|
[
"OLDAP-2.3"
] | null | null | null |
GitHub_MD_rendering/__call__ method.ipynb
|
kyaiooiayk/Python-Programming
|
b70dde24901cd24b38e2ead7c9a1b2d1808fc4b0
|
[
"OLDAP-2.3"
] | null | null | null | 19.384929 | 226 | 0.45934 |
[
[
[
"# Introduction\n<hr style=\"border:2px solid black\"> </hr>",
"_____no_output_____"
],
[
"\n**What?** `__call__` method\n\n",
"_____no_output_____"
],
[
"# Definition\n<hr style=\"border:2px solid black\"> </hr>",
"_____no_output_____"
],
[
"\n- `__call__` is a built-in method which enables you to write classes whose instances behave like functions and can be called like a function.\n- In practice: `object()` is shorthand for `object.__call__()`\n\n",
"_____no_output_____"
],
[
"# _ _call_ _ vs. _ _init_ _\n<hr style=\"border:2px solid black\"> </hr>",
"_____no_output_____"
],
[
"\n- `__init__()` is properly defined as the class constructor, which builds an instance of a class, whereas `__call__` makes such an instance callable like a function, so its behaviour can be redefined.\n- Technically, `__init__` is called once by `__new__` when the object is created, so that it can be initialised.\n- But there are many scenarios where you might want to redefine your object: say you are done with your object and find a need for a new one. With `__call__` you can redefine the same object as if it were new.\n\n",
"_____no_output_____"
],
[
"# Example #1\n<hr style=\"border:2px solid black\"> </hr>",
"_____no_output_____"
]
],
[
[
"class Example():\n def __init__(self):\n print(\"Instance created\")\n \n # Defining __call__ method\n def __call__(self):\n print(\"Instance is called via special method __call__\")",
"_____no_output_____"
],
[
"e = Example()",
"Instance created\n"
],
[
"e.__init__()",
"Instance created\n"
],
[
"e.__call__()",
"Instance is called via special method __call__\n"
]
],
[
[
"# Example #2\n<hr style=\"border:2px solid black\"> </hr>",
"_____no_output_____"
]
],
[
[
"class Product():\n def __init__(self):\n print(\"Instance created\")\n \n # Defining __call__ method\n def __call__(self, a, b):\n print(\"Instance is called via special method __call__\")\n print(a*b)",
"_____no_output_____"
],
[
"p = Product()",
"Instance created\n"
],
[
"p.__init__()",
"Instance created\n"
],
[
"# Is being call like if p was a function\np(2,3)",
"Instance is called via special method __call__\n6\n"
],
[
"# The cell above is equivalent to this call\np.__call__(2,3)",
"Instance is called via special method __call__\n6\n"
]
],
[
[
"# Example #3\n<hr style=\"border:2px solid black\"> </hr>",
"_____no_output_____"
]
],
[
[
"class Stuff(object):\n    def __init__(self, x, y, Range):\n        super(Stuff, self).__init__()\n        self.x = x\n        self.y = y\n        self.Range = Range\n        \n    def __call__(self, x, y):\n        self.x = x\n        self.y = y\n        print(\"__call with (%d, %d)\" % (self.x, self.y))\n        \n    # __del__ takes only self; extra required parameters would raise a TypeError on destruction\n    def __del__(self):\n        del self.x\n        del self.y\n        del self.Range \n    ",
"_____no_output_____"
],
[
"s = Stuff(1, 2, 3)",
"_____no_output_____"
],
[
"s.x",
"_____no_output_____"
],
[
"s(7,8)",
"__call with (7, 8)\n"
],
[
"s.x",
"_____no_output_____"
]
],
[
[
"# Example #4\n<hr style=\"border:2px solid black\"> </hr>",
"_____no_output_____"
]
],
[
[
"class Sum():\n def __init__(self, x, y): \n self.x = x\n self.y = y \n print(\"__init__ with (%d, %d)\" % (self.x, self.y))\n \n def __call__(self, x, y):\n self.x = x\n self.y = y \n print(\"__call__ with (%d, %d)\" % (self.x, self.y))\n \n def sum(self):\n return self.x + self.y ",
"_____no_output_____"
],
[
"sum_1 = Sum(2,2)\nsum_1.sum()",
"__init__ with (2, 2)\n"
],
[
"sum_1 = Sum(2,2)\nsum_1(3,3)",
"__init__ with (2, 2)\n__call__ with (3, 3)\n"
],
[
"sum_1 = Sum(2,2)\n# This is equivalent to\nsum_1.__call__(3,3)",
"__init__ with (2, 2)\n__call__ with (3, 3)\n"
],
[
"# You can also do this\nsum_1 = Sum(2,2)(3,3)",
"__init__ with (2, 2)\n__call__ with (3, 3)\n"
]
],
[
[
"# References\n<hr style=\"border:2px solid black\"> </hr>",
"_____no_output_____"
],
[
"\n- https://www.geeksforgeeks.org/__call__-in-python/\n\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
]
] |
4ab3078b1f54c18af54962e1fdd690f59ce6b728
| 8,506 |
ipynb
|
Jupyter Notebook
|
notebooks/datasets/data/job_data/jobs_count.ipynb
|
Lambda-School-Labs/PT17_cityspire-c-ds
|
099acf14cadce7becef51ff576ea4d0b850d300d
|
[
"MIT"
] | null | null | null |
notebooks/datasets/data/job_data/jobs_count.ipynb
|
Lambda-School-Labs/PT17_cityspire-c-ds
|
099acf14cadce7becef51ff576ea4d0b850d300d
|
[
"MIT"
] | 10 |
2021-04-01T02:11:14.000Z
|
2021-04-22T03:53:53.000Z
|
notebooks/datasets/data/job_data/jobs_count.ipynb
|
Lambda-School-Labs/PT17_cityspire-c-ds
|
099acf14cadce7becef51ff576ea4d0b850d300d
|
[
"MIT"
] | 5 |
2021-03-11T05:03:27.000Z
|
2021-05-01T22:06:01.000Z
| 30.818841 | 335 | 0.385963 |
[
[
[
"import pandas as pd\nimport numpy as np\nimport psycopg2",
"/usr/local/lib/python3.7/dist-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use \"pip install psycopg2-binary\" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.\n \"\"\")\n"
],
[
"# connection credentials\nconn = psycopg2.connect(\n user = \"postgres\",\n password = \"0A96jbvaDJk%\",\n host = \"database-cityspire-c.c2uishzxxikl.us-east-1.rds.amazonaws.com\",\n port = \"5432\",\n database = \"postgres\"\n)\n\nsql = \"SELECT * FROM master_jobs_table\"",
"_____no_output_____"
],
[
"# get dataset from postgresql db\njobs_df = pd.read_sql(sql, conn)\njobs_df.head()",
"_____no_output_____"
],
[
"def get_jobs_count(city_state):\n cols = ['index']\n jobs_count = jobs_df.loc[jobs_df['city_state'] == city_state, cols].count()\n return jobs_count",
"_____no_output_____"
],
[
"get_jobs_count('San Francisco, CA')",
"_____no_output_____"
],
[
"def get_jobs_count_dict(city_state):\n cols = ['index']\n jobs_count = jobs_df.loc[jobs_df['city_state'] == city_state, cols].count().to_dict()\n return jobs_count",
"_____no_output_____"
],
[
"get_jobs_count_dict('San Francisco, CA')",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ab30d1e7d4b6e9e287aac80e4d08a8c0eb02797
| 605,925 |
ipynb
|
Jupyter Notebook
|
TensorFlow/TensorFlow.ipynb
|
SS47816/CarND-Notes
|
ac31b2ba4f0e251d022749b0d984e10b2e3bacac
|
[
"MIT"
] | null | null | null |
TensorFlow/TensorFlow.ipynb
|
SS47816/CarND-Notes
|
ac31b2ba4f0e251d022749b0d984e10b2e3bacac
|
[
"MIT"
] | null | null | null |
TensorFlow/TensorFlow.ipynb
|
SS47816/CarND-Notes
|
ac31b2ba4f0e251d022749b0d984e10b2e3bacac
|
[
"MIT"
] | null | null | null | 507.900251 | 176,868 | 0.936299 |
[
[
[
"# TensorFlow\n\nInstalling TensorFlow: `conda install -c conda-forge tensorflow`",
"_____no_output_____"
],
[
"## 1. Hello Tensor World! ",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\n\n# Create TensorFlow object called tensor\nhello_constant = tf.constant('Hello World!')\n\nwith tf.Session() as sess:\n # Run the tf.constant operation in the session\n output = sess.run(hello_constant)\n print(output)",
"b'Hello World!'\n"
]
],
[
[
"### a) Tensor\n\nIn TensorFlow, data isn’t stored as integers, floats, or strings. These values are encapsulated in an object called a tensor. In the case of `hello_constant = tf.constant('Hello World!')`, `hello_constant` is a 0-dimensional string tensor, but tensors come in a variety of sizes as shown below: \n\n```python\n\n# A is a 0-dimensional int32 tensor\nA = tf.constant(1234) \n# B is a 1-dimensional int32 tensor\nB = tf.constant([123,456,789]) \n# C is a 2-dimensional int32 tensor\nC = tf.constant([ [123,456,789], [222,333,444] ])\n\n```\n\n`tf.constant()` is one of many TensorFlow operations you will use in this lesson. The tensor returned by `tf.constant()` is called a constant tensor, because the value of the tensor never changes.\n\n",
"_____no_output_____"
],
[
"### b) Session\n\nTensorFlow’s API is built around the idea of a computational graph, a way of visualizing a mathematical process which you learned about in the MiniFlow lesson. Let’s take the TensorFlow code you ran and turn that into a graph:\n\n\n\nA \"TensorFlow Session\", as shown above, is an environment for running a graph. The session is in charge of allocating the operations to GPU(s) and/or CPU(s), including remote machines. Let’s see how you use it.\n\n```\n\nwith tf.Session() as sess:\n    output = sess.run(hello_constant)\n    print(output)\n    \n```\n\nThe code has already created the tensor, `hello_constant`, from the previous lines. The next step is to evaluate the tensor in a session.\n\nThe code creates a session instance, `sess`, using `tf.Session`. The `sess.run()` function then evaluates the tensor and returns the results.\n\nAfter you run the above, you will see the following printed out: \n\n```\n\n'Hello World!'\n\n```\n\n",
"_____no_output_____"
],
[
"## 2. TensorFlow Input\n\nIn the last section, you passed a tensor into a session and it returned the result. What if you want to use a **non-constant**? This is where `tf.placeholder()` and `feed_dict` come into play. In this section, you'll go over the basics of feeding data into TensorFlow. \n\n### a) [tf.placeholder() ](https://www.tensorflow.org/api_docs/python/tf/placeholder)\n\nSadly you can’t just set `x` to your dataset and put it in TensorFlow, because over time you'll want your TensorFlow model to take in different datasets with different parameters. You need `tf.placeholder()`!\n\n`tf.placeholder()` returns a tensor that gets its value from data passed to the `tf.session.run()` function, allowing you to set the input right before the session runs.\n\n### b) Session’s feed_dict\n\n```python\n\nx = tf.placeholder(tf.string)\n\nwith tf.Session() as sess:\n    output = sess.run(x, feed_dict={x: 'Hello World'})\n    \n```\n\nUse the `feed_dict` parameter in `tf.session.run()` to set the placeholder tensor. The above example shows the tensor `x` being set to the string `\"Hello, world\"`. It's also possible to set more than one tensor using `feed_dict` as shown below.\n\n```python\n\nx = tf.placeholder(tf.string)\ny = tf.placeholder(tf.int32)\nz = tf.placeholder(tf.float32)\n\nwith tf.Session() as sess:\n    output = sess.run(x, feed_dict={x: 'Test String', y: 123, z: 45.67})\n\n```\n\n**Note**: If the data passed to the `feed_dict` doesn’t match the tensor type and can’t be cast into the tensor type, you’ll get the error “`ValueError: invalid literal for`...”.",
"_____no_output_____"
],
[
"### Quiz\n\nLet's see how well you understand `tf.placeholder()` and `feed_dict`. The code below throws an error, but I want you to make it return the number `123`. Change line 11, so that the code returns the number `123`.",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\n\ndef run():\n output = None\n x = tf.placeholder(tf.int32)\n\n with tf.Session() as sess:\n output = sess.run(x, feed_dict={x: 123})\n\n return output\n\nprint(run())",
"123\n"
]
],
[
[
"## 3. TensorFlow Math\n\nGetting the input is great, but now you need to use it. You're going to use basic math functions that everyone knows and loves - add, subtract, multiply, and divide - with tensors. ",
"_____no_output_____"
],
[
"### a) Addition\n\n```python\n\nx = tf.add(5, 2)  # 7\n\n```\n\n### b) Subtraction and Multiplication\n\n```python\n\nx = tf.subtract(10, 4) # 6\ny = tf.multiply(2, 5)  # 10\n\n```\n\nThe `x` tensor will evaluate to `6`, because `10 - 4 = 6`. The `y` tensor will evaluate to `10`, because `2 * 5 = 10`. That was easy!\n\n### c) Division\n\n`tf.divide(x, y)`\n\n### d) Converting types\n\nIt may be necessary to convert between types to make certain operators work together. For example, if you tried the following, it would fail with an exception:\n\n```python\n\ntf.subtract(tf.constant(2.0), tf.constant(1)) # Fails with ValueError: Tensor conversion requested dtype float32 for Tensor with dtype int32: \n\n```\n\nThat's because the constant `1` is an integer but the constant `2.0` is a floating point value and `subtract` expects them to match.\n\nIn cases like these, you can either make sure your data is all of the same type, or you can cast a value to another type. In this case, converting the `2.0` to an integer before subtracting, like so, will give the correct result:\n\n```python\n\ntf.subtract(tf.cast(tf.constant(2.0), tf.int32), tf.constant(1)) # 1\n\n```\n\n### e) Quiz \n\nLet's apply what you learned to convert an algorithm to TensorFlow. The code below is a simple algorithm using division and subtraction. Convert the following algorithm in regular Python to TensorFlow and print the results of the session. You can use `tf.constant()` for the values `10`, `2`, and `1`.",
"_____no_output_____"
]
],
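The same dtype-mismatch pitfall can be demonstrated without a session. The sketch below uses NumPy (an analogy, not TensorFlow's behavior: NumPy promotes mixed types silently where TensorFlow raises an error) to mirror the explicit `tf.cast` fix above:

```python
import numpy as np

a = np.float32(2.0)
b = np.int32(1)

# The explicit cast mirrors tf.cast(tf.constant(2.0), tf.int32):
# convert the float to an integer first, then subtract.
result = a.astype(np.int32) - b
print(result)  # 1
```

Matching the types yourself, as in the `tf.cast` example, keeps the intent explicit instead of relying on implicit promotion.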
[
[
"import tensorflow as tf\n\n# TODO: Convert the following to TensorFlow:\nx = 10\ny = 2\nz = x/y - 1\n\nx = tf.constant(10)\ny = tf.constant(2)\nz = tf.subtract(tf.divide(x, y), tf.cast(tf.constant(1), tf.float64))\n\n# TODO: Print z from a session as the variable output\nwith tf.Session() as sess:\n output = sess.run(z)\n print(output)",
"4.0\n"
]
],
[
[
"## 4. TensorFlow Linear Function\n\nLet’s derive the function `y = Wx + b`. We want to translate our input, `x`, to labels, `y`.\n\nFor example, imagine we want to classify images as digits.\n\nx would be our list of pixel values, and `y` would be the logits, one for each digit. Let's take a look at `y = Wx`, where the weights, `W`, determine the influence of `x` at predicting each `y`. \n\n\n\n`y = Wx` allows us to segment the data into their respective labels using a line.\n\nHowever, this line has to pass through the origin, because whenever `x` equals 0, then `y` is also going to equal 0.\n\nWe want the ability to shift the line away from the origin to fit more complex data. The simplest solution is to add a number to the function, which we call “bias”.\n\n\n\nOur new function becomes `Wx + b`, allowing us to create predictions on linearly separable data. Let’s use a concrete example and calculate the logits.\n\n### a) Matrix Multiplication Quiz\n\nCalculate the logits a and b for the following formula.\n\n \n\nanswers: a = 0.16, b = 0.06\n\n### b) Transposition\n\nWe've been using the `y = Wx + b` function for our linear function.\n\nBut there's another function that does the same thing, `y = xW + b`. These functions do the same thing and are interchangeable, except for the dimensions of the matrices involved.\n\nTo shift from one function to the other, you simply have to swap the row and column dimensions of each matrix. This is called transposition.\n\nFor rest of this lesson, we actually use `xW + b`, because this is what TensorFlow uses.\n\n\n\nThe above example is identical to the quiz you just completed, except that the matrices are transposed.\n\n`x` now has the dimensions 1x3, `W` now has the dimensions 3x2, and `b` now has the dimensions 1x2. Calculating this will produce a matrix with the dimension of 1x2.\n\nYou'll notice that the elements in this 1x2 matrix are the same as the elements in the 2x1 matrix from the quiz. 
Again, these matrices are simply transposed.\n\n\n\nWe now have our logits! The columns represent the logits for our two labels.\n\nNow you can learn how to train this function in TensorFlow.\n",
"_____no_output_____"
],
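The `xW + b` form is easy to check with NumPy. The numbers below are made up for illustration (they are not the quiz matrices); the point is the shapes: a 1x3 input times a 3x2 weight matrix plus a 1x2 bias yields a 1x2 row of logits, and the transposed `Wx + b` form gives the same numbers as a column:

```python
import numpy as np

x = np.array([[1.0, 2.0, 3.0]])        # 1x3 input (one sample)
W = np.array([[0.1, 0.2],
              [0.3, 0.4],
              [0.5, 0.6]])             # 3x2 weights
b = np.array([[0.01, 0.02]])           # 1x2 bias

y = x @ W + b                          # xW + b -> 1x2 logits
print(y)                               # [[2.21 2.82]]

# The transposed form Wx + b yields the same values as a 2x1 column:
y_t = W.T @ x.T + b.T
print(np.allclose(y, y_t.T))           # True
```

Swapping between the two forms only transposes every matrix; the logits themselves are unchanged.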
[
"## 5. Weights and Bias in TensorFlow\n\nThe goal of training a neural network is to modify weights and biases to best predict the labels. In order to use weights and bias, you'll need a Tensor that can be modified. This leaves out `tf.placeholder()` and `tf.constant()`, since those Tensors can't be modified. This is where the `tf.Variable` class comes in.\n\n### a) tf.Variable()\n\n```python\n\nx = tf.Variable(5)\n\n```\n\nThe `tf.Variable` class creates a tensor with an initial value that can be modified, much like a normal Python variable. This tensor stores its state in the session, so you must initialize the state of the tensor manually. You'll use the `tf.global_variables_initializer()` function to initialize the state of all the Variable tensors.\n\n#### Initialization\n\n```python\n\ninit = tf.global_variables_initializer()\nwith tf.Session() as sess:\n    sess.run(init)\n\n```\n\nThe `tf.global_variables_initializer()` call returns an operation that will initialize all TensorFlow variables from the graph. You call the operation using a session to initialize all the variables as shown above. Using the `tf.Variable` class allows us to change the weights and bias, but **an initial value needs to be chosen**.\n\nInitializing the weights with random numbers from a normal distribution is good practice. Randomizing the weights helps keep the model from becoming stuck in the same place every time you train it. You'll learn more about this in the next lesson, when you study gradient descent.\n\nSimilarly, choosing weights from a normal distribution prevents any one weight from overwhelming other weights. 
You'll use the `tf.truncated_normal()` function to generate random numbers from a normal distribution.\n\n### b) tf.truncated_normal()\n\n```python\n\nn_features = 120\nn_labels = 5\nweights = tf.Variable(tf.truncated_normal((n_features, n_labels)))\n\n```\n\nThe `tf.truncated_normal()` function returns a tensor with random values from a normal distribution whose magnitude is no more than 2 standard deviations from the mean.\n\nSince the weights are already helping prevent the model from getting stuck, you don't need to randomize the bias. Let's use the simplest solution, setting the bias to 0.\n\n### c) tf.zeros()\n\n```python\n\nn_labels = 5\nbias = tf.Variable(tf.zeros(n_labels))\n\n```\n\nThe `tf.zeros()` function returns a tensor with all zeros.\n",
"_____no_output_____"
],
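The "no more than 2 standard deviations from the mean" behavior of `tf.truncated_normal()` can be approximated in NumPy. The sketch below is an assumption-laden stand-in, not TensorFlow's exact algorithm: it simply redraws any out-of-range samples until all values fall within the bound:

```python
import numpy as np

def truncated_normal(shape, mean=0.0, stddev=1.0, seed=0):
    """Draw normal samples, redrawing any beyond 2 stddev of the mean."""
    rng = np.random.default_rng(seed)
    out = rng.normal(mean, stddev, size=shape)
    bad = np.abs(out - mean) > 2 * stddev
    while bad.any():                        # resample only the outliers
        out[bad] = rng.normal(mean, stddev, size=bad.sum())
        bad = np.abs(out - mean) > 2 * stddev
    return out

weights = truncated_normal((120, 5))        # like the n_features x n_labels example
bias = np.zeros(5)                          # the tf.zeros(n_labels) analog
print(weights.shape)                        # (120, 5)
```

Because every sample lands within two standard deviations, no single weight starts out large enough to dominate the others.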
[
"## 6. Linear Classifier Quiz\n\n\n\nYou'll be classifying the handwritten numbers `0`, `1`, and `2` from the MNIST dataset using TensorFlow. The above is a small sample of the data you'll be training on. Notice how some of the `1`s are written with a serif at the top and at different angles. The similarities and differences will play a part in shaping the weights of the model.\n\n\n\nThe images above are trained weights for each label (0, 1, and 2). The weights display the unique properties of each digit they have found. Complete this quiz to train your own weights using the MNIST dataset.\n\n### Instructions\n\n1. Open quiz.py.\n 1. Implement `get_weights` to return a `tf.Variable` of weights\n 2. Implement `get_biases` to return a `tf.Variable of biases`\n 3. Implement `xW + b` in the `linear` function\n\n2. Open sandbox.py\n 1. Initialize all weights\n\nSince `xW` in `xW + b` is matrix multiplication, you have to use the `tf.matmul()` function instead of `tf.multiply()`. Don't forget that order matters in matrix multiplication, so `tf.matmul(a,b)` is not the same as `tf.matmul(b,a)`.",
"_____no_output_____"
],
[
"```python\nimport tensorflow as tf\n\ndef get_weights(n_features, n_labels):\n \"\"\"\n Return TensorFlow weights\n :param n_features: Number of features\n :param n_labels: Number of labels\n :return: TensorFlow weights\n \"\"\"\n # TODO: Return weights\n return tf.Variable(tf.truncated_normal((n_features, n_labels)))\n\n\ndef get_biases(n_labels):\n \"\"\"\n Return TensorFlow bias\n :param n_labels: Number of labels\n :return: TensorFlow bias\n \"\"\"\n # TODO: Return biases\n return tf.Variable(tf.zeros(n_labels))\n\n\ndef linear(input, w, b):\n \"\"\"\n Return linear function in TensorFlow\n :param input: TensorFlow input\n :param w: TensorFlow weights\n :param b: TensorFlow biases\n :return: TensorFlow linear function\n \"\"\"\n # TODO: Linear Function (xW + b)\n return tf.add(tf.matmul(input, w), b)\n\nfrom tensorflow.examples.tutorials.mnist import input_data\n\ndef mnist_features_labels(n_labels):\n \"\"\"\n Gets the first <n> labels from the MNIST dataset\n :param n_labels: Number of labels to use\n :return: Tuple of feature list and label list\n \"\"\"\n mnist_features = []\n mnist_labels = []\n\n mnist = input_data.read_data_sets('/datasets/mnist', one_hot=True)\n\n # In order to make quizzes run faster, we're only looking at 10000 images\n for mnist_feature, mnist_label in zip(*mnist.train.next_batch(10000)):\n\n # Add features and labels if it's for the first <n>th labels\n if mnist_label[:n_labels].any():\n mnist_features.append(mnist_feature)\n mnist_labels.append(mnist_label[:n_labels])\n\n return mnist_features, mnist_labels\n\n\n# Number of features (28*28 image is 784 features)\nn_features = 784\n# Number of labels\nn_labels = 3\n\n# Features and Labels\nfeatures = tf.placeholder(tf.float32)\nlabels = tf.placeholder(tf.float32)\n\n# Weights and Biases\nw = get_weights(n_features, n_labels)\nb = get_biases(n_labels)\n\n# Linear Function xW + b\nlogits = linear(features, w, b)\n\n# Training data\ntrain_features, train_labels = 
mnist_features_labels(n_labels)\n\nwith tf.Session() as session:\n session.run(tf.global_variables_initializer())\n\n # Softmax\n prediction = tf.nn.softmax(logits)\n\n # Cross entropy\n # This quantifies how far off the predictions were.\n # You'll learn more about this in future lessons.\n cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)\n\n # Training loss\n # You'll learn more about this in future lessons.\n loss = tf.reduce_mean(cross_entropy)\n\n # Rate at which the weights are changed\n # You'll learn more about this in future lessons.\n learning_rate = 0.08\n\n # Gradient Descent\n # This is the method used to train the model\n # You'll learn more about this in future lessons.\n optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)\n\n # Run optimizer and get loss\n _, l = session.run(\n [optimizer, loss],\n feed_dict={features: train_features, labels: train_labels})\n\n# Print loss\nprint('Loss: {}'.format(l))\n\n```",
"_____no_output_____"
],
[
"## 6. Linear Update\n\nYou can’t train a neural network on a single sample. Let’s apply n samples of `x` to the function `y = Wx + b`, which becomes `Y = WX + B`.\n\n\n\nFor every sample of `X` (`X1`, `X2`, `X3`), we get logits for label 1 (`Y1`) and label 2 (`Y2`).\n\nIn order to add the bias to the product of `WX`, we had to turn `b` into a matrix of the same shape. This is a bit unnecessary, since the bias is only two numbers. It should really be a vector.\n\nWe can take advantage of an operation called broadcasting used in TensorFlow and Numpy. This operation allows arrays of different dimension to be added or multiplied with each other. For example:\n\n```python\n\nimport numpy as np\nt = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])\nu = np.array([1, 2, 3])\nprint(t + u)\n\n```\n\nThe code above will print...\n\n```python\n\n[[ 2 4 6]\n [ 5 7 9]\n [ 8 10 12]\n [11 13 15]]\n\n```\n\nThis is because `u` is the same dimension as the last dimension in `t`.",
"_____no_output_____"
],
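Broadcasting is exactly what lets the one-dimensional bias apply to every row of `XW`, so `b` can stay a vector instead of being tiled into a matrix. A small sketch with made-up numbers (the identity weight matrix is chosen only so the result is easy to check by eye):

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])     # 3 samples, 2 features
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])     # 2x2 weights (identity, for easy checking)
b = np.array([0.1, 0.2])       # one bias per label, NOT per sample

Y = X @ W + b                  # b is broadcast across all 3 rows
print(Y)
```

Each row of `Y` gets the same `[0.1, 0.2]` added, which is the behavior we wanted without ever materializing a 3x2 bias matrix.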
[
"## 7. Softmax\n\n\n\nCongratulations on successfully implementing a linear function that outputs logits. You're one step closer to a working classifier.\n\nThe next step is to assign a probability to each label, which you can then use to classify the data. Use the softmax function to turn your logits into probabilities.\n\nWe can do this by using the formula above, which uses the input of y values and the mathematical constant \"e\" which is approximately equal to 2.718. By taking \"e\" to the power of any real value we always get back a positive value, this then helps us scale when having negative y values. The summation symbol on the bottom of the divisor indicates that we add together all the e^(input y value) elements in order to get our calculated probability outputs.\n\n### Quiz\n\nFor the next quiz, you'll implement a `softmax(x)` function that takes in `x`, a one or two dimensional array of logits.\n\nIn the one dimensional case, the array is just a single set of logits. In the two dimensional case, each column in the array is a set of logits. 
The `softmax(x)` function should return a NumPy array of the same shape as `x`.\n\nFor example, given a one-dimensional array:\n\n```python\n\n# logits is a one-dimensional array with 3 elements\nlogits = [1.0, 2.0, 3.0]\n# softmax will return a one-dimensional array with 3 elements\nprint(softmax(logits))\n\n```\n\n```python\n\n[ 0.09003057  0.24472847  0.66524096]\n\n```\n\nGiven a two-dimensional array where each column represents a set of logits:\n\n```python\n\n# logits is a two-dimensional array\nlogits = np.array([\n    [1, 2, 3, 6],\n    [2, 4, 5, 6],\n    [3, 8, 7, 6]])\n# softmax will return a two-dimensional array with the same shape\nprint(softmax(logits))\n\n```\n\n```python\n\n[\n  [ 0.09003057  0.00242826  0.01587624  0.33333333]\n  [ 0.24472847  0.01794253  0.11731043  0.33333333]\n  [ 0.66524096  0.97962921  0.86681333  0.33333333]\n]\n \n```\n\nImplement the softmax function, which is specified by the formula at the top of the page.\n\nThe probabilities for each column must sum to 1. Feel free to test your function with the inputs above.\n\n",
"_____no_output_____"
]
],
[
[
"import numpy as np\n\ndef softmax(x):\n \"\"\"Compute softmax values for each sets of scores in x.\"\"\"\n return np.exp(x) / np.sum(np.exp(x), axis=0)\n\nlogits = [3.0, 1.0, 0.2]\nprint(softmax(logits))\n",
"[0.8360188 0.11314284 0.05083836]\n"
]
],
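The solution above is correct, but `np.exp` overflows for large logits. A common refinement (an optional sketch, not part of the quiz) subtracts the column-wise maximum first; the result is unchanged because softmax is invariant to shifting all logits by a constant:

```python
import numpy as np

def stable_softmax(x):
    """Softmax over axis 0, shifted by the max to avoid exp overflow."""
    shifted = x - np.max(x, axis=0)   # largest logit becomes 0
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=0)

big_logits = np.array([1000.0, 1001.0, 1002.0])
probs = stable_softmax(big_logits)    # finite, sums to 1
print(probs, probs.sum())
```

With the plain formula, `np.exp(1000.0)` overflows to infinity; the shifted version produces the same probabilities without ever exponentiating a large number.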
[
[
"## 8. TensorFlow Softmax\n\nNow that you've built a softmax function from scratch, let's see how softmax is done in TensorFlow.\n\n```python\n\nx = tf.nn.softmax([2.0, 1.0, 0.2])\n\n```\n\nEasy as that! `tf.nn.softmax()` implements the softmax function for you. It takes in logits and returns softmax activations.\n\n### Quiz\n\nUse the softmax function in the quiz below to return the softmax of the logits.\n\n",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\n\ndef run():\n output = None\n logit_data = [2.0, 1.0, 0.1]\n logits = tf.placeholder(tf.float32)\n\n softmax = tf.nn.softmax(logits)\n\n with tf.Session() as sess:\n output = sess.run(softmax, feed_dict={logits: logit_data})\n\n return output",
"_____no_output_____"
]
],
[
[
"## 9. Cross Entropy\n\n### Minimizing Cross Entropy",
"_____no_output_____"
],
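To make the heading concrete: for a one-hot label L and softmax prediction S, cross entropy is D(S, L) = -Σ L·log(S). The sketch below is an illustration with made-up probabilities, showing that the loss shrinks as the prediction assigns more probability to the correct class:

```python
import numpy as np

def cross_entropy(predictions, labels):
    """D(S, L) = -sum(L * log(S)) for one prediction/label pair."""
    return -np.sum(labels * np.log(predictions))

label = np.array([0.0, 1.0, 0.0])          # one-hot: class 1 is correct
confident = np.array([0.05, 0.90, 0.05])   # good prediction
unsure = np.array([0.40, 0.30, 0.30])      # poor prediction

print(cross_entropy(confident, label))     # small loss, about 0.105
print(cross_entropy(unsure, label))        # larger loss, about 1.204
```

Minimizing this quantity over the training set is what pushes the softmax output toward the labels.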
[
"## 10. Practical Aspects of Learning\n\n### a) How do you feed image pixels to this classifier?\n\n#### i) Numerical Stability\n\nAdding small numbers to big numbers causes issues\n\n#### ii) Normalized Inputs and Initial Weights\n\n1. Inputs:\n\n 1. Zero Mean\n 2. Equal Variance (Small)\n\n2. Initial Weights\n \n 1. Random\n 2. Mean = 0\n 3. Equal Variance (Small)\n \n#### iii) Measuring Performance\n\nTraining Set\n\nValidation Set\n\nTesting Set\n\n#### iv) Validation and Test Set Size\n\nCross-Validation\n\nRule of '30'\n\n### b) Where do you initialize the optimization?\n\nTraining Logistic Regression:\n\nOptimizes error measure (Loss Function)\n\nScaling Issues\n\n#### i) Stochastic Gradient Descent (S.G.D.)\n\nComputing the gradient takes about 3X as long as computing the loss function\n\nTake a very small sliver of the training data, compute the loss and derivative, and use that as the direction\n\n#### ii) Momentum and Learning Rate Decay\n\nKeep a running average of the gradients and use that running average instead of the direction of the current batch of the data\n\nMake the learning rate smaller and smaller as you train\n\n#### iii) Hyperparameters\n\nInitial Learning Rate\n\nLearning Rate Decay\n\nMomentum\n\nBatch Size\n\nWeight Initialization\n\n**ADAGRAD** Approach\n\n### Quiz 2: Mini-batch\n\nLet's use mini-batching to feed batches of MNIST features and labels into a linear model.\n\nSet the batch size and run the optimizer over all the batches with the `batches` function. The recommended batch size is 128. If you have memory restrictions, feel free to make it smaller.",
"_____no_output_____"
],
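The quiz imports a `batches` helper whose source isn't shown here. A plausible pure-Python sketch of it (an assumption about the helper's behavior, inferred from how the quiz calls it) slices the features and labels into chunks of at most `batch_size`, with the last batch smaller when the data doesn't divide evenly:

```python
def batches(batch_size, features, labels):
    """Split features/labels into batches of at most batch_size each."""
    assert len(features) == len(labels)
    out = []
    for start in range(0, len(features), batch_size):
        end = start + batch_size
        out.append([features[start:end], labels[start:end]])
    return out

# 7 samples with batch_size 3 -> batches of 3, 3, and 1
feats = [[i] for i in range(7)]
labs = [i % 2 for i in range(7)]
for batch_features, batch_labels in batches(3, feats, labs):
    print(len(batch_features), batch_labels)
```

Feeding these chunks one at a time through `feed_dict`, as in the quiz, keeps memory usage bounded regardless of the dataset size.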
[
"```python\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nimport tensorflow as tf\nimport numpy as np\nfrom helper import batches\n\nlearning_rate = 0.001\nn_input = 784 # MNIST data input (img shape: 28*28)\nn_classes = 10 # MNIST total classes (0-9 digits)\n\n# Import MNIST data\nmnist = input_data.read_data_sets('/datasets/ud730/mnist', one_hot=True)\n\n# The features are already scaled and the data is shuffled\ntrain_features = mnist.train.images\ntest_features = mnist.test.images\n\ntrain_labels = mnist.train.labels.astype(np.float32)\ntest_labels = mnist.test.labels.astype(np.float32)\n\n# Features and Labels\nfeatures = tf.placeholder(tf.float32, [None, n_input])\nlabels = tf.placeholder(tf.float32, [None, n_classes])\n\n# Weights & bias\nweights = tf.Variable(tf.random_normal([n_input, n_classes]))\nbias = tf.Variable(tf.random_normal([n_classes]))\n\n# Logits - xW + b\nlogits = tf.add(tf.matmul(features, weights), bias)\n\n# Define loss and optimizer\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))\noptimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)\n\n# Calculate accuracy\ncorrect_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n\n\n# TODO: Set batch size\nbatch_size = 128\nassert batch_size is not None, 'You must set the batch size'\n\ninit = tf.global_variables_initializer()\n\nwith tf.Session() as sess:\n sess.run(init)\n \n # TODO: Train optimizer on all batches\n for batch_features, batch_labels in batches(batch_size, train_features, train_labels):\n sess.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})\n\n # Calculate accuracy for test dataset\n test_accuracy = sess.run(\n accuracy,\n feed_dict={features: test_features, labels: test_labels})\n\nprint('Test Accuracy: {}'.format(test_accuracy))\n\n```",
"_____no_output_____"
],
[
"Outputs:\n\n```python\nSuccessfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.\nExtracting /datasets/ud730/mnist/train-images-idx3-ubyte.gz\nSuccessfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.\nExtracting /datasets/ud730/mnist/train-labels-idx1-ubyte.gz\nSuccessfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.\nExtracting /datasets/ud730/mnist/t10k-images-idx3-ubyte.gz\nSuccessfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.\nExtracting /datasets/ud730/mnist/t10k-labels-idx1-ubyte.gz\nTest Accuracy: 0.1454000025987625\n\n```\n\nThe accuracy is low, but you can improve it by training on the dataset multiple times. You'll go over this subject in the next section, where we talk about \"epochs\".",
"_____no_output_____"
],
[
"## 11. Epochs\n\nAn epoch is a single forward and backward pass of the whole dataset. This is used to increase the accuracy of the model without requiring more data. This section will cover epochs in TensorFlow and how to choose the right number of epochs.\n\nThe following TensorFlow code trains a model using 10 epochs.\n\n```python\nfrom tensorflow.examples.tutorials.mnist import input_data\nimport tensorflow as tf\nimport numpy as np\nfrom helper import batches # Helper function created in Mini-batching section\n\n\ndef print_epoch_stats(epoch_i, sess, last_features, last_labels):\n \"\"\"\n Print cost and validation accuracy of an epoch\n \"\"\"\n current_cost = sess.run(\n cost,\n feed_dict={features: last_features, labels: last_labels})\n valid_accuracy = sess.run(\n accuracy,\n feed_dict={features: valid_features, labels: valid_labels})\n print('Epoch: {:<4} - Cost: {:<8.3} Valid Accuracy: {:<5.3}'.format(\n epoch_i,\n current_cost,\n valid_accuracy))\n\nn_input = 784 # MNIST data input (img shape: 28*28)\nn_classes = 10 # MNIST total classes (0-9 digits)\n\n# Import MNIST data\nmnist = input_data.read_data_sets('/datasets/ud730/mnist', one_hot=True)\n\n# The features are already scaled and the data is shuffled\ntrain_features = mnist.train.images\nvalid_features = mnist.validation.images\ntest_features = mnist.test.images\n\ntrain_labels = mnist.train.labels.astype(np.float32)\nvalid_labels = mnist.validation.labels.astype(np.float32)\ntest_labels = mnist.test.labels.astype(np.float32)\n\n# Features and Labels\nfeatures = tf.placeholder(tf.float32, [None, n_input])\nlabels = tf.placeholder(tf.float32, [None, n_classes])\n\n# Weights & bias\nweights = tf.Variable(tf.random_normal([n_input, n_classes]))\nbias = tf.Variable(tf.random_normal([n_classes]))\n\n# Logits - xW + b\nlogits = tf.add(tf.matmul(features, weights), bias)\n\n# Define loss and optimizer\nlearning_rate = tf.placeholder(tf.float32)\ncost = 
tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))\noptimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)\n\n# Calculate accuracy\ncorrect_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n\ninit = tf.global_variables_initializer()\n\nbatch_size = 128\nepochs = 10\nlearn_rate = 0.001\n\ntrain_batches = batches(batch_size, train_features, train_labels)\n\nwith tf.Session() as sess:\n sess.run(init)\n\n # Training cycle\n for epoch_i in range(epochs):\n\n # Loop over all batches\n for batch_features, batch_labels in train_batches:\n train_feed_dict = {\n features: batch_features,\n labels: batch_labels,\n learning_rate: learn_rate}\n sess.run(optimizer, feed_dict=train_feed_dict)\n\n # Print cost and validation accuracy of an epoch\n print_epoch_stats(epoch_i, sess, batch_features, batch_labels)\n\n # Calculate accuracy for test dataset\n test_accuracy = sess.run(\n accuracy,\n feed_dict={features: test_features, labels: test_labels})\n\nprint('Test Accuracy: {}'.format(test_accuracy))\n\n```\n\nRunning the code will output the following:\n\n```python\nEpoch: 0 - Cost: 11.0 Valid Accuracy: 0.204\nEpoch: 1 - Cost: 9.95 Valid Accuracy: 0.229\nEpoch: 2 - Cost: 9.18 Valid Accuracy: 0.246\nEpoch: 3 - Cost: 8.59 Valid Accuracy: 0.264\nEpoch: 4 - Cost: 8.13 Valid Accuracy: 0.283\nEpoch: 5 - Cost: 7.77 Valid Accuracy: 0.301\nEpoch: 6 - Cost: 7.47 Valid Accuracy: 0.316\nEpoch: 7 - Cost: 7.2 Valid Accuracy: 0.328\nEpoch: 8 - Cost: 6.96 Valid Accuracy: 0.342\nEpoch: 9 - Cost: 6.73 Valid Accuracy: 0.36 \nTest Accuracy: 0.3801000118255615\n\n```\n\nEach epoch attempts to move to a lower cost, leading to better accuracy.\n\nThis model continues to improve accuracy up to Epoch 9. 
Let's increase the number of epochs to 100.\n\n```python\n...\nEpoch: 79 - Cost: 0.111 Valid Accuracy: 0.86\nEpoch: 80 - Cost: 0.11 Valid Accuracy: 0.869\nEpoch: 81 - Cost: 0.109 Valid Accuracy: 0.869\n....\nEpoch: 85 - Cost: 0.107 Valid Accuracy: 0.869\nEpoch: 86 - Cost: 0.107 Valid Accuracy: 0.869\nEpoch: 87 - Cost: 0.106 Valid Accuracy: 0.869\nEpoch: 88 - Cost: 0.106 Valid Accuracy: 0.869\nEpoch: 89 - Cost: 0.105 Valid Accuracy: 0.869\nEpoch: 90 - Cost: 0.105 Valid Accuracy: 0.869\nEpoch: 91 - Cost: 0.104 Valid Accuracy: 0.869\nEpoch: 92 - Cost: 0.103 Valid Accuracy: 0.869\nEpoch: 93 - Cost: 0.103 Valid Accuracy: 0.869\nEpoch: 94 - Cost: 0.102 Valid Accuracy: 0.869\nEpoch: 95 - Cost: 0.102 Valid Accuracy: 0.869\nEpoch: 96 - Cost: 0.101 Valid Accuracy: 0.869\nEpoch: 97 - Cost: 0.101 Valid Accuracy: 0.869\nEpoch: 98 - Cost: 0.1 Valid Accuracy: 0.869\nEpoch: 99 - Cost: 0.1 Valid Accuracy: 0.869\nTest Accuracy: 0.8696000006198883\n \n```\n\nFrom looking at the output above, you can see the model doesn't increase the validation accuracy after epoch 80. 
Let's see what happens when we increase the learning rate.\n\nlearn_rate = 0.1\n\n```python\nEpoch: 76 - Cost: 0.214 Valid Accuracy: 0.752\nEpoch: 77 - Cost: 0.21 Valid Accuracy: 0.756\nEpoch: 78 - Cost: 0.21 Valid Accuracy: 0.756\n...\nEpoch: 85 - Cost: 0.207 Valid Accuracy: 0.756\nEpoch: 86 - Cost: 0.209 Valid Accuracy: 0.756\nEpoch: 87 - Cost: 0.205 Valid Accuracy: 0.756\nEpoch: 88 - Cost: 0.208 Valid Accuracy: 0.756\nEpoch: 89 - Cost: 0.205 Valid Accuracy: 0.756\nEpoch: 90 - Cost: 0.202 Valid Accuracy: 0.756\nEpoch: 91 - Cost: 0.207 Valid Accuracy: 0.756\nEpoch: 92 - Cost: 0.204 Valid Accuracy: 0.756\nEpoch: 93 - Cost: 0.206 Valid Accuracy: 0.756\nEpoch: 94 - Cost: 0.202 Valid Accuracy: 0.756\nEpoch: 95 - Cost: 0.2974 Valid Accuracy: 0.756\nEpoch: 96 - Cost: 0.202 Valid Accuracy: 0.756\nEpoch: 97 - Cost: 0.2996 Valid Accuracy: 0.756\nEpoch: 98 - Cost: 0.203 Valid Accuracy: 0.756\nEpoch: 99 - Cost: 0.2987 Valid Accuracy: 0.756\nTest Accuracy: 0.7556000053882599\n \n```\n\nLooks like the learning rate was increased too much. The final accuracy was lower, and it stopped improving earlier. 
Let's stick with the previous learning rate, but change the number of epochs to 80.\n\n```python\nEpoch: 65   - Cost: 0.122    Valid Accuracy: 0.868\nEpoch: 66   - Cost: 0.121    Valid Accuracy: 0.868\nEpoch: 67   - Cost: 0.12     Valid Accuracy: 0.868\nEpoch: 68   - Cost: 0.119    Valid Accuracy: 0.868\nEpoch: 69   - Cost: 0.118    Valid Accuracy: 0.868\nEpoch: 70   - Cost: 0.118    Valid Accuracy: 0.868\nEpoch: 71   - Cost: 0.117    Valid Accuracy: 0.868\nEpoch: 72   - Cost: 0.116    Valid Accuracy: 0.868\nEpoch: 73   - Cost: 0.115    Valid Accuracy: 0.868\nEpoch: 74   - Cost: 0.115    Valid Accuracy: 0.868\nEpoch: 75   - Cost: 0.114    Valid Accuracy: 0.868\nEpoch: 76   - Cost: 0.113    Valid Accuracy: 0.868\nEpoch: 77   - Cost: 0.113    Valid Accuracy: 0.868\nEpoch: 78   - Cost: 0.112    Valid Accuracy: 0.868\nEpoch: 79   - Cost: 0.111    Valid Accuracy: 0.868\nEpoch: 80   - Cost: 0.111    Valid Accuracy: 0.869\nTest Accuracy: 0.86909999418258667\n \n```\n\nThe accuracy only reached 0.86, but that could be because the learning rate was too high. Lowering the learning rate would require more epochs, but could ultimately achieve better accuracy.\n\nIn the upcoming TensorFlow Lab, you'll get the opportunity to choose your own learning rate, epoch count, and batch size to improve the model's accuracy.",
"_____no_output_____"
],
[
"## 12. TensorFlow Neural Network Lab\n\n[Link](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html)\n\n[THE MNIST DATABASE\nof handwritten digits](http://yann.lecun.com/exdb/mnist/)\n\n\n\nWe've prepared a Jupyter notebook that will guide you through the process of creating a single layer neural network in TensorFlow.\n\n### The Notebook\nThe notebook has 3 problems for you to solve:\n\n Problem 1: Normalize the features\n Problem 2: Use TensorFlow operations to create features, labels, weight, and biases tensors\n Problem 3: Tune the learning rate, number of steps, and batch size for the best accuracy\n\nThis is a self-assessed lab. Compare your answers to the solutions in the **solutions.ipynb** . If you have any difficulty completing the lab, Udacity provides a few services to answer any questions you might have.\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
4ab30fa1caf6922200cec364a3e793c8910314a1
| 773,112 |
ipynb
|
Jupyter Notebook
|
idaes/examples/power_generation/supercritical_steam_cycle/supercritical_steam_cycle.ipynb
|
dt-schwartz/NGFC
|
9ebbfc2288c9a0b55313998a04e42c80b332db49
|
[
"MIT"
] | 6 |
2020-05-22T03:44:44.000Z
|
2022-01-11T21:32:21.000Z
|
idaes/examples/power_generation/supercritical_steam_cycle/supercritical_steam_cycle.ipynb
|
dt-schwartz/NGFC
|
9ebbfc2288c9a0b55313998a04e42c80b332db49
|
[
"MIT"
] | 2 |
2020-06-17T18:36:27.000Z
|
2020-06-17T18:36:52.000Z
|
idaes/examples/power_generation/supercritical_steam_cycle/supercritical_steam_cycle.ipynb
|
dt-schwartz/NGFC
|
9ebbfc2288c9a0b55313998a04e42c80b332db49
|
[
"MIT"
] | 1 |
2021-12-10T18:30:36.000Z
|
2021-12-10T18:30:36.000Z
| 240.096894 | 2,117 | 0.678367 |
[
[
[
"# Supercritical Steam Cycle Example\n\nThis example uses Jupyter Lab or Jupyter notebook, and demonstrates a supercritical pulverized coal (SCPC) steam cycle model. See the ```supercritical_steam_cycle.py``` to see more information on how to assemble a power plant model flowsheet. Code comments in that file will guide you through the process.",
"_____no_output_____"
],
[
"## Model Description\n\nThe example model doesn't represent any particular power plant, but should be a reasonable approximation of a typical plant. The gross power output is about 620 MW. The process flow diagram (PFD) can be shown using the code below. The initial PFD contains spaces for model results, to be filled in later.\n\nTo get a more detailed look at the model structure, you may find it useful to review ```supercritical_steam_cycle.py``` first. Although there is no detailed boiler model, there are constraints in the model to complete the steam loop through the boiler and calculate boiler heat input to the steam cycle. The efficiency calculation for the steam cycle doesn't account for heat loss in the boiler, which would be a result of a more detailed boiler model.",
"_____no_output_____"
]
],
[
[
"# pkg_resources is used here to get the svg information from the \n# installed IDAES package\n\nimport pkg_resources\nfrom IPython.display import SVG, display\n\n# Get the contents of the PFD (which is an svg file) \ninit_pfd = pkg_resources.resource_string(\n \"idaes.examples.power_generation.supercritical_steam_cycle\",\n \"supercritical_steam_cycle.svg\"\n)\n\n# Make the svg contents into an SVG object and display it.\ndisplay(SVG(init_pfd))",
"_____no_output_____"
]
],
[
[
"## Initialize the steam cycle flowsheet\n\nThis example is part of the ```idaes``` package, which you should have installed. To run the example, the example flowsheet is imported from the ```idaes``` package. When you write your own model, you can import and run it in whatever way is appropriate for you. The Pyomo environment is also imported as ```pyo```, providing easy access to Pyomo functions and classes.\n\nThe supercritical flowsheet example main function returns a Pyomo concrete model (m) and a solver object (solver). The model is also initialized by the ```main()``` function.",
"_____no_output_____"
]
],
[
[
"import pyomo.environ as pyo\nfrom idaes.examples.power_generation.supercritical_steam_cycle import (\n main,\n create_stream_table_dataframe,\n pfd_result,\n)\n \nm, solver = main()",
"2020-04-17 15:26:21 - ERROR - idaes.property_models.iapws95 - IAPWS library file not found. Was it compiled?\n2020-04-17 15:26:23 - Level 5 - idaes.init.Steam Cycle Model - Starting initialization\n2020-04-17 15:26:23 - ERROR - idaes.property_models.iapws95 - IAPWS library file not found. Was it compiled?\nERROR: evaluating object as numeric value: func_tau_sat(5350.0)\n    (object: <class\n    'pyomo.core.expr.numeric_expr.NPV_ExternalFunctionExpression'>)\n    dlopen(/Users/adlerlabadmin/.idaes/lib/iapws95_external.so, 6): image not\n    found\n"
]
],
[
[
"Inside the model, there is a subblock ```fs```. This is an IDAES flowsheet model, which contains the supercritical steam cycle model. In the flowsheet, the model called ```turb``` is a multistage turbine model. The turbine model contains an expression for total power, ```power```. In this case the model is steady-state, but all IDAES models allow for dynamic simulation, and contain time indexes. Power is indexed by time, and only the \"0\" time point exists. By convention, in the IDAES framework, power going into a model is positive, so power produced by the turbine is negative. \n\nThe property package used for this model uses SI (mks) units of measure, so the power is in Watts. Here a function is defined which can be used to report power output in MW.",
"_____no_output_____"
]
],
[
[
"# Define a function to report gross power output in MW\ndef gross_power_mw(model):\n    # model.fs.turb.power[0] is the power consumed in watts; by the IDAES sign\n    # convention, power produced by the turbine is negative, so negate it and\n    # convert W to MW\n    return -pyo.value(model.fs.turb.power[0])/1e6\n\n# Show the gross power\ngross_power_mw(m)",
"_____no_output_____"
]
],
[
[
"## Change the model inputs\n\nThe turbine in this example simulates partial arc admission with four arcs, so there are four throttle valves. For this example, we will close one of the valves to 25% open, and observe the result.",
"_____no_output_____"
]
],
[
[
"m.fs.turb.throttle_valve[1].valve_opening[:].value = 0.25",
"_____no_output_____"
]
],
[
[
"Next, we re-solve the model using the solver created by the ```supercritical_steam_cycle.py``` script.",
"_____no_output_____"
]
],
[
[
"solver.solve(m, tee=True)",
"Ipopt 3.12.13: tol=1e-07\nlinear_solver=ma27\nmax_iter=40\n\n\n******************************************************************************\nThis program contains Ipopt, a library for large-scale nonlinear optimization.\n Ipopt is released as open source code under the Eclipse Public License (EPL).\n For more information visit http://projects.coin-or.org/Ipopt\n\nThis version of Ipopt was compiled from source code available at\n https://github.com/IDAES/Ipopt as part of the Institute for the Design of\n Advanced Energy Systems Process Systems Engineering Framework (IDAES PSE\n Framework) Copyright (c) 2018-2019. See https://github.com/IDAES/idaes-pse.\n\nThis version of Ipopt was compiled using HSL, a collection of Fortran codes\n for large-scale scientific computation. All technical papers, sales and\n publicity material resulting from use of the HSL codes within IPOPT must\n contain the following acknowledgement:\n HSL, a collection of Fortran codes for large-scale scientific\n computation. 
See http://www.hsl.rl.ac.uk.\n******************************************************************************\n\nThis is Ipopt version 3.12.13, running with linear solver ma27.\n\nNumber of nonzeros in equality constraint Jacobian...: 3588\nNumber of nonzeros in inequality constraint Jacobian.: 0\nNumber of nonzeros in Lagrangian Hessian.............: 1721\n\nTotal number of variables............................: 1012\n variables with only lower bounds: 259\n variables with lower and upper bounds: 512\n variables with only upper bounds: 0\nTotal number of equality constraints.................: 1012\nTotal number of inequality constraints...............: 0\n inequality constraints with only lower bounds: 0\n inequality constraints with lower and upper bounds: 0\n inequality constraints with only upper bounds: 0\n\niter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls\n 0 0.0000000e+00 3.38e+01 1.00e+00 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0\n 1 0.0000000e+00 1.56e+06 4.13e+06 -1.0 3.56e+07 - 9.48e-01 9.90e-01h 1\n 2 0.0000000e+00 9.07e+04 3.29e+06 -1.0 9.20e+06 - 9.60e-01 9.90e-01h 1\n 3 0.0000000e+00 1.79e+02 1.03e+06 -1.0 7.45e+05 - 9.89e-01 1.00e+00h 1\n 4 0.0000000e+00 6.30e-01 9.21e+01 -1.0 2.37e+05 - 9.87e-01 1.00e+00h 1\n 5 0.0000000e+00 2.42e-03 1.79e-01 -1.0 1.56e+04 - 9.90e-01 1.00e+00h 1\n 6 0.0000000e+00 1.14e-04 3.27e-01 -1.0 9.44e+02 - 1.00e+00 1.00e+00h 1\n 7 0.0000000e+00 2.97e-06 1.33e-04 -2.5 5.71e+01 - 1.00e+00 1.00e+00h 1\n 8 0.0000000e+00 2.54e-06 4.77e-07 -3.8 3.44e+00 - 1.00e+00 1.00e+00h 1\n 9 0.0000000e+00 1.86e-08 6.34e-03 -5.7 1.20e-02 -4.0 1.00e+00 1.00e+00h 1\nCannot recompute multipliers for feasibility problem. 
Error in eq_mult_calculator\n\nNumber of Iterations....: 9\n\n (scaled) (unscaled)\nObjective...............: 0.0000000000000000e+00 0.0000000000000000e+00\nDual infeasibility......: 8.2485353511456182e+04 8.2485353511456182e+04\nConstraint violation....: 1.4139381221411895e-08 1.8626451492309570e-08\nComplementarity.........: 0.0000000000000000e+00 0.0000000000000000e+00\nOverall NLP error.......: 1.4139381221411895e-08 8.2485353511456182e+04\n\n\nNumber of objective function evaluations = 10\nNumber of objective gradient evaluations = 10\nNumber of equality constraint evaluations = 10\nNumber of inequality constraint evaluations = 0\nNumber of equality constraint Jacobian evaluations = 10\nNumber of inequality constraint Jacobian evaluations = 0\nNumber of Lagrangian Hessian evaluations = 9\nTotal CPU secs in IPOPT (w/o function evaluations) = 0.727\nTotal CPU secs in NLP function evaluations = 8.025\n\nEXIT: Optimal Solution Found.\n"
]
],
[
[
"Now we can check the gross power output again.",
"_____no_output_____"
]
],
[
[
"gross_power_mw(m)",
"_____no_output_____"
]
],
[
[
"## Creating a PFD with results and a stream table\n\nA more detailed look at the model results can be obtained by creating a stream table and putting key results on the PFD. Of course, any unit model or stream result can be obtained from the model.",
"_____no_output_____"
]
],
[
[
"# Create a Pandas dataframe with stream results\ndf = create_stream_table_dataframe(streams=m._streams, orient=\"index\")\n\n# Create a new PFD with simulation results\nres_pfd = pfd_result(m, df, svg=init_pfd)",
"_____no_output_____"
],
[
"# Display PFD with results.\ndisplay(SVG(res_pfd))",
"_____no_output_____"
],
[
"# Display the stream table.\ndf",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
4ab312e57ac7561121283f62a5a4eb6f1002a0b7
| 191,867 |
ipynb
|
Jupyter Notebook
|
pymaceuticals_HW.ipynb
|
CSwilliams88/Drug_Testing
|
f7bf3e939fd1525ec938458e54ac14db5de060a7
|
[
"ADSL"
] | null | null | null |
pymaceuticals_HW.ipynb
|
CSwilliams88/Drug_Testing
|
f7bf3e939fd1525ec938458e54ac14db5de060a7
|
[
"ADSL"
] | null | null | null |
pymaceuticals_HW.ipynb
|
CSwilliams88/Drug_Testing
|
f7bf3e939fd1525ec938458e54ac14db5de060a7
|
[
"ADSL"
] | null | null | null | 92.734171 | 19,796 | 0.733857 |
[
[
[
"## Observations and Insights ",
"_____no_output_____"
]
],
[
[
"# Dependencies and Setup\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport scipy.stats as st\nimport numpy as np\nfrom scipy.stats import linregress\n\n# Study data files\nmouse_metadata_path = \"data/Mouse_metadata.csv\"\nstudy_results_path = \"data/Study_results.csv\"\n\n# Read the mouse data and the study results\nmouse_metadata = pd.read_csv(mouse_metadata_path)\nstudy_results = pd.read_csv(study_results_path)\n\n# Combine the data into a single dataset\nmerged_df = pd.merge(mouse_metadata, study_results, on = \"Mouse ID\")\n# Display the data table for preview\nmerged_df.head(30)",
"_____no_output_____"
],
[
"# Checking the number of mice.\ntotal_mice = merged_df[\"Mouse ID\"].nunique()\ntotal_mice",
"_____no_output_____"
],
[
"# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint\nduplicate_id = merged_df.loc[merged_df.duplicated(subset = [\"Mouse ID\", \"Timepoint\"]), \"Mouse ID\"].unique()\nduplicate_id",
"_____no_output_____"
],
[
"# Optional: Get all the data for the duplicate mouse ID. \noptional_df = merged_df.loc[merged_df[\"Mouse ID\"]==\"g989\"]\noptional_df\n",
"_____no_output_____"
],
[
"# Create a clean DataFrame by dropping the duplicate mouse by its ID.\nclean_df = merged_df.loc[merged_df[\"Mouse ID\"]!=\"g989\"]\nclean_df\n\n",
"_____no_output_____"
],
[
"# Checking the number of mice in the clean DataFrame.\ntotal_mice = clean_df[\"Mouse ID\"].nunique()\ntotal_mice",
"_____no_output_____"
]
],
[
[
"## Summary Statistics",
"_____no_output_____"
]
],
[
[
"# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen\n# Use groupby and summary statistical methods to calculate the following properties of each drug regimen: \n# mean, median, variance, standard deviation, and SEM of the tumor volume. \n# Assemble the resulting series into a single summary dataframe.\n\nmean_data = clean_df.groupby(\"Drug Regimen\").mean()[\"Tumor Volume (mm3)\"]\nmedian_data = clean_df.groupby(\"Drug Regimen\").median()[\"Tumor Volume (mm3)\"]\nvariance_data = clean_df.groupby(\"Drug Regimen\").var()[\"Tumor Volume (mm3)\"]\nstd_data = clean_df.groupby(\"Drug Regimen\").std()[\"Tumor Volume (mm3)\"]\nsem_data = clean_df.groupby(\"Drug Regimen\").sem()[\"Tumor Volume (mm3)\"]\n\nstats_df = pd.DataFrame({\"Mean\":mean_data,\n \"Median\":median_data,\n \"Variance\":variance_data,\n \"STD\":std_data,\n \"SEM\":sem_data})\nstats_df\n\n",
"_____no_output_____"
],
[
"# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen\nsummary_df2 = clean_df.groupby(\"Drug Regimen\").agg({\"Tumor Volume (mm3)\":[\"mean\",\"median\",\"var\",\"std\",\"sem\"]})\n# Using the aggregation method, produce the same summary statistics in a single line\nsummary_df2",
"_____no_output_____"
]
],
[
[
"## Bar and Pie Charts",
"_____no_output_____"
]
],
[
[
"# Generate a bar plot showing the total number of unique mice tested on each drug regimen using pandas.\nbar_plot = clean_df.groupby([\"Drug Regimen\"]).count()[\"Mouse ID\"]\nbar_plot.plot(kind=\"bar\", figsize=(10,5))\nplt.title(\"Drug Distribution\")\nplt.xlabel(\"Drug Regimen\")\nplt.ylabel(\"Number of Mice\")\nplt.show()\nplt.tight_layout()\n\n\n",
"_____no_output_____"
],
[
"# Generate a bar plot showing the total number of unique mice tested on each drug regimen using pyplot.\nbar_plot\n\nx_axis= np.arange(0, len(bar_plot))\ntick_locations = []\nfor x in x_axis:\n tick_locations.append(x)\n\nplt.title(\"Drug Distribution\")\nplt.xlabel(\"Drug Regimen\")\nplt.ylabel(\"# of Mice\")\n\nplt.xlim(0, len(bar_plot)-0.25)\nplt.ylim(0, max(bar_plot)+20)\n\nplt.bar(x_axis, bar_plot, facecolor=\"g\", alpha=0.5, align=\"center\")\nplt.xticks(tick_locations, [\"Capomulin\", \"Ceftamin\", \"Infubinol\", \"Ketapril\", \"Naftisol\", \"Placebo\", \"Propriva\", \"Ramicane\", \"Stelasyn\", \"Zoniferol\"], rotation = \"vertical\")\n\nplt.show()\n",
"_____no_output_____"
],
[
"# Generate a pie plot showing the distribution of female versus male mice using pandas\nmales = clean_df[clean_df[\"Sex\"]== \"Male\"][\"Mouse ID\"].nunique()\nfemales = clean_df[clean_df[\"Sex\"]== \"Female\"][\"Mouse ID\"].nunique()\n\ngender_df = pd.DataFrame({\"Sex\": [\"Male\", \"Female\"], \"Count\": [males, females]})\ngender_df_index = gender_df.set_index(\"Sex\")\n\nplot = gender_df_index.plot(kind=\"pie\", y=\"Count\", autopct=\"%1.1f%%\", startangle=120)\nplot\n",
"_____no_output_____"
],
[
"# Generate a pie plot showing the distribution of female versus male mice using pyp\nlabels = [\"Male\", \"Female\"]\nsizes = [\"125\", \"123\"]\ncolors = [\"Green\", \"Yellow\"]\n\nplt.pie(sizes, labels=labels, colors=colors,\n autopct=\"%1.1f%%\", shadow=True, startangle=140)\nplt.axis(\"equal\")\n\n\n",
"_____no_output_____"
]
],
[
[
"## Quartiles, Outliers and Boxplots",
"_____no_output_____"
]
],
[
[
"# Calculate the final tumor volume of each mouse across four of the treatment regimens: \n# Capomulin, Ramicane, Infubinol, and Ceftamin\n\nfilt_cap = clean_df.loc[clean_df[\"Drug Regimen\"] == \"Capomulin\"]\nfilt_ram = clean_df.loc[clean_df[\"Drug Regimen\"] == \"Ramicane\"]\nfilt_infu = clean_df.loc[clean_df[\"Drug Regimen\"] == \"Infubinol\"]\nfilt_ceft = clean_df.loc[clean_df[\"Drug Regimen\"] == \"Ceftamin\"]\n\n# Start by getting the last (greatest) timepoint for each mouse\nlast_timepoint_cap = filt_cap.groupby(\"Mouse ID\")[\"Timepoint\"].max()\nlast_timepoint_ram = filt_ram.groupby(\"Mouse ID\")[\"Timepoint\"].max()\nlast_timepoint_infu = filt_infu.groupby(\"Mouse ID\")[\"Timepoint\"].max()\nlast_timepoint_ceft = filt_ceft.groupby(\"Mouse ID\")[\"Timepoint\"].max()\n\n# Merge this group df with the original dataframe to get the tumor volume at the last timepoint\nfin_vol_cap = pd.DataFrame(last_timepoint_cap)\ncap_merge = pd.merge(fin_vol_cap, clean_df, on = (\"Mouse ID\", \"Timepoint\"), how = \"left\")\n\nfin_vol_ram = pd.DataFrame(last_timepoint_ram)\nram_merge = pd.merge(fin_vol_ram, clean_df, on = (\"Mouse ID\", \"Timepoint\"), how = \"left\")\n\nfin_vol_infu = pd.DataFrame(last_timepoint_infu)\ninfu_merge = pd.merge(fin_vol_infu, clean_df, on = (\"Mouse ID\", \"Timepoint\"), how = \"left\")\n\nfin_vol_ceft = pd.DataFrame(last_timepoint_ceft)\nceft_merge = pd.merge(fin_vol_ceft, clean_df, on = (\"Mouse ID\", \"Timepoint\"), how = \"left\")\n",
"_____no_output_____"
],
[
"# Put treatments into a list for for loop (and later for plot labels)\n\ntreatments = [cap_merge, ram_merge, infu_merge, ceft_merge]\n\n# Create empty list to fill with tumor vol data (for plotting)\ntumor_volume_data_plot = []\n\nfor treatment in treatments:\n print(treatment)",
" Mouse ID Timepoint Drug Regimen Sex Age_months Weight (g) \\\n0 b128 45 Capomulin Female 9 22 \n1 b742 45 Capomulin Male 7 21 \n2 f966 20 Capomulin Male 16 17 \n3 g288 45 Capomulin Male 3 19 \n4 g316 45 Capomulin Female 22 22 \n5 i557 45 Capomulin Female 1 24 \n6 i738 45 Capomulin Female 23 20 \n7 j119 45 Capomulin Female 7 23 \n8 j246 35 Capomulin Female 21 21 \n9 l509 45 Capomulin Male 17 21 \n10 l897 45 Capomulin Male 17 19 \n11 m601 45 Capomulin Male 22 17 \n12 m957 45 Capomulin Female 3 19 \n13 r157 15 Capomulin Male 22 25 \n14 r554 45 Capomulin Female 8 17 \n15 r944 45 Capomulin Male 12 25 \n16 s185 45 Capomulin Female 3 17 \n17 s710 45 Capomulin Female 1 23 \n18 t565 45 Capomulin Female 20 17 \n19 u364 45 Capomulin Male 18 17 \n20 v923 45 Capomulin Female 19 21 \n21 w150 10 Capomulin Male 23 23 \n22 w914 45 Capomulin Male 24 21 \n23 x401 45 Capomulin Female 16 15 \n24 y793 45 Capomulin Male 17 17 \n\n Tumor Volume (mm3) Metastatic Sites \n0 38.982878 2 \n1 38.939633 0 \n2 30.485985 0 \n3 37.074024 1 \n4 40.159220 2 \n5 47.685963 1 \n6 37.311846 2 \n7 38.125164 1 \n8 38.753265 1 \n9 41.483008 3 \n10 38.846876 1 \n11 28.430964 1 \n12 33.329098 1 \n13 46.539206 0 \n14 32.377357 3 \n15 41.581521 2 \n16 23.343598 1 \n17 40.728578 1 \n18 34.455298 0 \n19 31.023923 3 \n20 40.658124 2 \n21 39.952347 0 \n22 36.041047 2 \n23 28.484033 0 \n24 31.896238 2 \n Mouse ID Timepoint Drug Regimen Sex Age_months Weight (g) \\\n0 a411 45 Ramicane Male 3 22 \n1 a444 45 Ramicane Female 10 25 \n2 a520 45 Ramicane Male 13 21 \n3 a644 45 Ramicane Female 7 17 \n4 c458 30 Ramicane Female 23 20 \n5 c758 45 Ramicane Male 9 17 \n6 d251 45 Ramicane Female 8 19 \n7 e662 45 Ramicane Male 8 24 \n8 g791 45 Ramicane Male 11 16 \n9 i177 45 Ramicane Male 10 18 \n10 i334 45 Ramicane Female 8 20 \n11 j913 45 Ramicane Female 4 17 \n12 j989 45 Ramicane Male 8 19 \n13 k403 45 Ramicane Male 21 16 \n14 m546 45 Ramicane Male 18 16 \n15 n364 45 Ramicane Male 4 17 \n16 q597 45 Ramicane Male 20 25 \n17 
q610 35 Ramicane Female 18 21 \n18 r811 45 Ramicane Male 9 19 \n19 r921 30 Ramicane Female 5 25 \n20 s508 45 Ramicane Male 1 17 \n21 u196 45 Ramicane Male 18 25 \n22 w678 5 Ramicane Female 5 24 \n23 y449 15 Ramicane Male 19 24 \n24 z578 45 Ramicane Male 11 16 \n\n Tumor Volume (mm3) Metastatic Sites \n0 38.407618 1 \n1 43.047543 0 \n2 38.810366 1 \n3 32.978522 1 \n4 38.342008 2 \n5 33.397653 1 \n6 37.311236 2 \n7 40.659006 2 \n8 29.128472 1 \n9 33.562402 3 \n10 36.374510 2 \n11 31.560470 1 \n12 36.134852 1 \n13 22.050126 1 \n14 30.564625 1 \n15 31.095335 1 \n16 45.220869 2 \n17 36.561652 2 \n18 37.225650 1 \n19 43.419381 1 \n20 30.276232 0 \n21 40.667713 3 \n22 43.166373 0 \n23 44.183451 0 \n24 30.638696 0 \n Mouse ID Timepoint Drug Regimen Sex Age_months Weight (g) \\\n0 a203 45 Infubinol Female 20 23 \n1 a251 45 Infubinol Female 21 25 \n2 a577 30 Infubinol Female 6 25 \n3 a685 45 Infubinol Male 8 30 \n4 c139 45 Infubinol Male 11 28 \n5 c326 5 Infubinol Female 18 25 \n6 c895 30 Infubinol Female 7 29 \n7 e476 45 Infubinol Male 23 26 \n8 f345 45 Infubinol Male 23 26 \n9 i386 40 Infubinol Female 23 29 \n10 k483 45 Infubinol Female 20 30 \n11 k804 35 Infubinol Female 23 29 \n12 m756 5 Infubinol Male 19 30 \n13 n671 30 Infubinol Male 18 25 \n14 o809 35 Infubinol Male 3 25 \n15 o813 5 Infubinol Male 24 28 \n16 q132 30 Infubinol Female 1 30 \n17 s121 25 Infubinol Male 23 26 \n18 v339 5 Infubinol Male 20 26 \n19 v719 20 Infubinol Female 17 30 \n20 v766 15 Infubinol Male 16 27 \n21 w193 20 Infubinol Male 22 30 \n22 w584 30 Infubinol Male 3 29 \n23 y163 45 Infubinol Female 17 27 \n24 z581 45 Infubinol Female 24 25 \n\n Tumor Volume (mm3) Metastatic Sites \n0 67.973419 2 \n1 65.525743 1 \n2 57.031862 2 \n3 66.083066 3 \n4 72.226731 2 \n5 36.321346 0 \n6 60.969711 2 \n7 62.435404 1 \n8 60.918767 1 \n9 67.289621 4 \n10 66.196912 3 \n11 62.117279 2 \n12 47.010364 1 \n13 60.165180 0 \n14 55.629428 1 \n15 45.699331 0 \n16 54.656549 4 \n17 55.650681 2 \n18 46.250112 0 \n19 
54.048608 1 \n20 51.542431 1 \n21 50.005138 0 \n22 58.268442 1 \n23 67.685569 3 \n24 62.754451 3 \n Mouse ID Timepoint Drug Regimen Sex Age_months Weight (g) \\\n0 a275 45 Ceftamin Female 20 28 \n1 b447 0 Ceftamin Male 2 30 \n2 b487 25 Ceftamin Female 6 28 \n3 b759 30 Ceftamin Female 12 25 \n4 f436 15 Ceftamin Female 3 25 \n5 h531 5 Ceftamin Male 5 27 \n6 j296 45 Ceftamin Female 24 30 \n7 k210 45 Ceftamin Male 15 28 \n8 l471 45 Ceftamin Female 7 28 \n9 l490 30 Ceftamin Male 24 26 \n10 l558 10 Ceftamin Female 13 30 \n11 l661 45 Ceftamin Male 18 26 \n12 l733 45 Ceftamin Female 4 30 \n13 o287 45 Ceftamin Male 2 28 \n14 p438 45 Ceftamin Female 11 26 \n15 q483 40 Ceftamin Male 6 26 \n16 t573 0 Ceftamin Female 15 27 \n17 u149 25 Ceftamin Male 24 29 \n18 u153 0 Ceftamin Female 11 25 \n19 w151 45 Ceftamin Male 24 25 \n20 x226 0 Ceftamin Male 23 28 \n21 x581 45 Ceftamin Female 19 28 \n22 x822 45 Ceftamin Male 3 29 \n23 y769 45 Ceftamin Female 6 27 \n24 y865 45 Ceftamin Male 23 26 \n\n Tumor Volume (mm3) Metastatic Sites \n0 62.999356 3 \n1 45.000000 0 \n2 56.057749 1 \n3 55.742829 1 \n4 48.722078 2 \n5 47.784682 0 \n6 61.849023 3 \n7 68.923185 3 \n8 67.748662 1 \n9 57.918381 3 \n10 46.784535 0 \n11 59.851956 3 \n12 64.299830 1 \n13 59.741901 4 \n14 61.433892 1 \n15 64.192341 1 \n16 45.000000 0 \n17 52.925348 0 \n18 45.000000 0 \n19 67.527482 3 \n20 45.000000 0 \n21 64.634949 3 \n22 61.386660 3 \n23 68.594745 4 \n24 64.729837 3 \n"
],
[
"# Calculate the IQR and quantitatively determine if there are any potential outliers. \n# Determine outliers using upper and lower bounds\n\n#Capomulin\ncap_list = cap_merge[\"Tumor Volume (mm3)\"]\nquartiles = cap_list.quantile([.25,.5,.75])\nlowerq = quartiles[0.25]\nupperq = quartiles[0.75]\niqr = upperq-lowerq\n\n\nlower_bound = lowerq - (1.5*iqr)\nupper_bound = upperq + (1.5*iqr)\n\ntumor_volume_data_plot.append(cap_list) \n\nprint(f\"Capomulin potential outliers could be values below {lower_bound} and above {upper_bound} could be outliers.\")\nprint (f\"Capomulin IQR is {iqr}.\")\n\n#Ramicane\nram_list = ram_merge[\"Tumor Volume (mm3)\"]\nquartiles = ram_list.quantile([.25,.5,.75])\nlowerq = quartiles[0.25]\nupperq = quartiles[0.75]\niqr = upperq-lowerq\n\n\nlower_bound = lowerq - (1.5*iqr)\nupper_bound = upperq + (1.5*iqr)\n\ntumor_volume_data_plot.append(ram_list) \n\nprint(f\"Ramicane potential outliers could be values below {lower_bound} and above {upper_bound} could be outliers.\")\nprint (f\"Ramicane IQR is {iqr}.\")\n\n#Infubinol\ninfu_list = infu_merge[\"Tumor Volume (mm3)\"]\nquartiles = infu_list.quantile([.25,.5,.75])\nlowerq = quartiles[0.25]\nupperq = quartiles[0.75]\niqr = upperq-lowerq\n\n\nlower_bound = lowerq - (1.5*iqr)\nupper_bound = upperq + (1.5*iqr)\n\ntumor_volume_data_plot.append(infu_list) \n\nprint(f\"Infubinol potential outliers could be values below {lower_bound} and above {upper_bound} could be outliers.\")\nprint (f\"Infubinol IQR is {iqr}.\")\n\n#Ceftamin\nceft_list = ceft_merge[\"Tumor Volume (mm3)\"]\nquartiles = ceft_list.quantile([.25,.5,.75])\nlowerq = quartiles[0.25]\nupperq = quartiles[0.75]\niqr = upperq-lowerq\n\n\nlower_bound = lowerq - (1.5*iqr)\nupper_bound = upperq + (1.5*iqr)\n\ntumor_volume_data_plot.append(ceft_list) \n\nprint(f\"Ceftamin potential outliers could be values below {lower_bound} and above {upper_bound} could be outliers.\")\nprint (f\"Ceftamin IQR is {iqr}.\")",
"Capomulin potential outliers could be values below 20.70456164999999 and above 51.83201549 could be outliers.\nCapomulin IQR is 7.781863460000004.\nRamicane potential outliers could be values below 17.912664470000003 and above 54.30681135 could be outliers.\nRamicane IQR is 9.098536719999998.\nInfubinol potential outliers could be values below 36.83290494999999 and above 82.74144559000001 could be outliers.\nInfubinol IQR is 11.477135160000003.\nCeftamin potential outliers could be values below 25.355449580000002 and above 87.66645829999999 could be outliers.\nCeftamin IQR is 15.577752179999997.\n"
],
[
"# Generate a box plot of the final tumor volume of each mouse across four regimens of interest\ntumor_volume_data_plot\n\nfig1, ax1 = plt.subplots()\nax1.set_title('Final Tumor Volume of Each Mouse')\nax1.set_ylabel('Final Tumor Volume (mm3)')\nax1.set_xlabel('Drug Regimen')\n\nax1.boxplot(tumor_volume_data_plot, labels = [\"Capomulin\", \"Ramicane\", \"Infubinol\", \"Ceftamin\"])\nplt.show()",
"_____no_output_____"
]
],
[
[
"## Line and Scatter Plots",
"_____no_output_____"
]
],
[
[
"# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin\nx_axis = np.arange(0,46,5)\ntumor_vol = [45, 45.41, 39.11, 39.77, 36.06, 36.61, 32.91, 30.20, 28.16, 28.48]\n\nplt.xlabel(\"Time Point\")\nplt.ylabel(\"Tumor Volume\")\nplt.title(\"Capomulin (x401)\")\nplt.ylim(25, 50)\nplt.xlim(0, 45)\n\ntumor_line, = plt.plot(x_axis, tumor_vol, marker=\"*\", color=\"blue\", linewidth=1, label=\"Capomulin\")\nplt.show()\n\n",
"_____no_output_____"
],
[
"# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen\ndrug_df = clean_df.loc[clean_df[\"Drug Regimen\"] == \"Capomulin\"]\n\nweight_tumor = drug_df.loc[:, [\"Mouse ID\", \"Weight (g)\", \"Tumor Volume (mm3)\"]]\n\navg_tumor_volume = pd.DataFrame(weight_tumor.groupby([\"Mouse ID\", \"Weight (g)\"])[\"Tumor Volume (mm3)\"].mean()).reset_index()\n\navg_tumor_volume = avg_tumor_volume.set_index(\"Mouse ID\")\n\navg_tumor_volume.plot(kind=\"scatter\", x=\"Weight (g)\", y=\"Tumor Volume (mm3)\", grid=True, figsize=(8,8), title=\"Weight vs. Average Tumor Volume for Capomulin\")\n\n\nplt.show()\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n",
"_____no_output_____"
]
],
[
[
"## Correlation and Regression",
"_____no_output_____"
]
],
[
[
"# Calculate the correlation coefficient and linear regression model \n# for mouse weight and average tumor volume for the Capomulin regimen\n\nmouse_weight = avg_tumor_volume.iloc[:,0]\ntumor_volume = avg_tumor_volume.iloc[:,1]\ncorrelation = st.pearsonr(mouse_weight,tumor_volume)\nprint(f\"The correlation between both factors is {round(correlation[0],2)}\")\n\n\n",
"The correlation between both factors is 0.84\n"
],
[
"x_values = avg_tumor_volume['Weight (g)']\ny_values = avg_tumor_volume['Tumor Volume (mm3)']\n(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)\nregress_values = x_values * slope + intercept\nline_eq = \"y = \" + str(round(slope,2)) + \"x + \" + str(round(intercept,2))\nplt.scatter(x_values,y_values)\nplt.plot(x_values,regress_values,\"r-\")\nplt.annotate(line_eq,(6,10),fontsize=15,color=\"red\")\nplt.xlabel('Mouse Weight (g)')\nplt.ylabel('Average Tumor Volume (mm3)')\nplt.title('Linear Regression')\nplt.show()\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4ab32fa34f654cf01101cbead0e4e93961ed4856
| 203,767 |
ipynb
|
Jupyter Notebook
|
Practical_3_magnitude_frequency.ipynb
|
fclubb/SciRes-Earthquakes
|
38356afca3f1e565366c408437d2ba83577d3967
|
[
"MIT"
] | null | null | null |
Practical_3_magnitude_frequency.ipynb
|
fclubb/SciRes-Earthquakes
|
38356afca3f1e565366c408437d2ba83577d3967
|
[
"MIT"
] | null | null | null |
Practical_3_magnitude_frequency.ipynb
|
fclubb/SciRes-Earthquakes
|
38356afca3f1e565366c408437d2ba83577d3967
|
[
"MIT"
] | null | null | null | 343.620573 | 65,163 | 0.921243 |
[
[
[
"<a href=\"https://colab.research.google.com/github/fclubb/SciRes-Earthquakes/blob/main/Practical_3_magnitude_frequency.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
],
[
"# Scientific Research Project 7 - Practical 3",
"_____no_output_____"
],
[
"## Magnitude-frequency plots\nLast week, we started to explore the distribution of earthquakes through time. In this practical, we're going to look at how we can use these distributions to explore seismic hazard.\n\nA fundamental step in seismic hazard analysis is making **magnitude-frequency plots**: where we plot the size of events versus their frequency. Many, many natural hazards show that there is a power law relationship between these two variables. As magnitude increases, frequency decreases, but the relationship is not linear.\n\nIf we can work out the relationship between magnitude and frequency, it gives us an idea of how often an event of a given magnitude might occur over the time period in question.\n\nSo, let's get started, and make our magnitude-frequency plots for the San Francisco area for 1900-2019.\n****",
"_____no_output_____"
],
[
"**Firstly, copy the notebook to your Google Drive using the COPY TO DRIVE button.**\r\n\r\nNow we'll import the packages that we need again. This should be familiar from last time.",
"_____no_output_____"
]
],
[
[
"# import several helpful packages that we'll use\nimport numpy as np # linear algebra\nimport pandas as pd # data processing. We can read in csv files using pd.read_csv('path/to/csv-file')\nimport matplotlib.pyplot as plt # package for making figures\nimport matplotlib.cm as cm # for getting colourmaps\nfrom scipy import stats # for linear regression",
"_____no_output_____"
]
],
[
[
"\nNow we have to load our earthquake data into Google Colab. This is the exact same as we did last week using the USGS CSV file that you downloaded. To do this, click the small folder icon on the left hand bar and then click `Upload to Session Storage`. You can then navigate to the CSV file we downloaded and add it to Google Colab.\n\n**NOTE** - you will have to re-upload the data to Google Colab whenever you want to run this notebook. \n\n",
"_____no_output_____"
],
[
"Once you have uploaded the file and you can see it in the left hand bar, we can then load it into Python again using the code in the cell below.",
"_____no_output_____"
]
],
[
[
"df = pd.read_csv('San_Francisco_earthquakes_1900-2020.csv') # This needs to correspond to the name of the file you uploaded. You can change it to represent your own file ",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
]
],
[
[
"## Getting the number of earthquakes of each magnitude",
"_____no_output_____"
],
[
"The first step in our analysis is to group our database by magnitude. We want to end up with different classes that represent earthquakes of similar magnitudes. We'll create a series of classes with a magnitude interval of 0.5, for example:\r\n\r\n* Magnitude 4.5 - 5 (4.5 is the lowest magnitude that we downloaded)\r\n* Magnitude 5 - 5.5\r\n* Magnitude 5.5 - 6\r\n* etc...\r\n\r\n\r\n\r\n",
"_____no_output_____"
]
],
[
[
"# first define the bins. Use the step parameter to control the interval spacing of the bins (we will use spacings of 0.5 magnitude)\r\nstep = 0.5\r\nstarting_magnitude = 4.5\r\nending_magnitude = 8.5\r\nbins = np.arange(starting_magnitude, ending_magnitude, step)\r\nprint(bins)\r\n\r\n# then separate the dataframe into each bin\r\ndf['mag_bins'] = pd.cut(df['mag'], bins=bins, include_lowest=True)",
"_____no_output_____"
]
],
[
[
"If we now look at the dataframe, you should see there is a new column called `mag_bins` which tells you which class each earthquake falls into. The first one, for example, is in the magnitude range 4.49 - 5.0.",
"_____no_output_____"
]
],
[
[
"df",
"_____no_output_____"
]
],
[
[
"Ok, so we've worked out which earthquakes are in each class. Now we need to get the number, or *frequency*, of earthquakes in each class.\r\nTo do that we can use the `groupby` function to group the earthquakes by class, and count the number in each class:",
"_____no_output_____"
]
],
[
[
"frequency_df = df.groupby('mag_bins')['mag'].count().reset_index()",
"_____no_output_____"
]
],
[
[
"Now let's look at our new table, called `frequency_df`. We can see that it has 2 columns: `mag_bins` which tells us the edges of each class, and the number of earthquakes in each class (the column titled `mag`).",
"_____no_output_____"
]
],
[
[
"frequency_df",
"_____no_output_____"
]
],
[
[
"## Making the magnitude-frequency plot",
"_____no_output_____"
],
[
"Ok, we've set up our data in the right format. From looking at the table above, you should see that we have many, many more smaller earthquakes compared to large ones (128 earthquakes of magnitude 4.5 - 5, compared to only 1 magnitude 7.5 - 8). This means that it's very unlikely that there is a *linear* relationship between magnitude and frequency of earthquakes.\r\n\r\nIndeed, we know that most natural phenomena follow what's called a *power-law* distribution, where there are more small events than large ones. The best way of plotting a power-law is to use a **logarithmic** scale, which allows us to easily compare small and large numbers of events. \r\n\r\nYou should remember from the lecture that we can use the laws of logarithms to transform a power law relationship into logarithmic space. Let's set out our predicted relationship between the number of earthquakes (*N*) and the magnitude (*M*). This is called the Gutenberg-Richter law:\r\n\r\n\r\n\r\nwhere *a* and *b* are constants. \r\n\r\nWe can then take the log of both sides of this equation:\r\n\r\n\r\n\r\nWe need to know two rules of logarithms:\r\n\r\n\r\n\r\nWe can use these two rules to transform our equation to:\r\n\r\n\r\n\r\nThis shows that a *power-law* relationship in linear space is equivalent to a *straight line* in log-log space! We're going to explore that in our plotting. This will allow us to get the **b value** for the region: from the equation, you can see that the b value is the gradient of the straight line in logarithmic space (the exponent of the power law relationship). From the lecture, you should know that we can use the b value to compare the seismic hazard of different regions.",
"_____no_output_____"
],
[
"We're going to create 2 new variables, which will be lists of the magnitude and frequency extracted from our dataframe. For the magnitude, we'll get the centre value of each class (so for magnitude 4.5 - 5, our value would be 4.75).",
"_____no_output_____"
]
],
[
[
"magnitude = frequency_df['mag_bins'].apply(lambda x: x.mid).to_numpy()\r\n# round the values to 2 decimal places\r\nmagnitude = np.round(magnitude, 2)",
"_____no_output_____"
],
[
"frequency = frequency_df['mag'].to_numpy()",
"_____no_output_____"
]
],
[
[
"Now let's take the log of frequency. Magnitude is already a log scale, so it doesn't need to change.",
"_____no_output_____"
]
],
[
[
"log_frequency = np.log10(frequency)",
"_____no_output_____"
]
],
[
[
"Now we can make our plot. As before, we'll use a scatter plot and have a look at the data. We need to set the frequency scale to `log` which will perform the logarithm calculation for us.\r\n",
"_____no_output_____"
]
],
[
[
"# define the figure size\r\nplt.figure(figsize=(8,6))\r\n\r\n# make the scatter plot of magnitude and frequency\r\nplt.scatter(magnitude, log_frequency, c='red', s=200, edgecolors='black')\r\n\r\n# add axis labels\r\nplt.xlabel('Earthquake magnitude', fontsize=16)\r\nplt.ylabel('log frequency', fontsize=16)\r\n\r\n# increase the fontsize of the ticks\r\nplt.xticks(fontsize=14)\r\nplt.yticks(fontsize=14)\r\n\r\nplt.savefig('San_Francisco_mag-freq.png')",
"_____no_output_____"
]
],
[
[
"--- \r\n\r\n## Exercise 1\r\n\r\n1. Take a look at the magnitude-frequency plot above. Does it look like a power-law would be a good fit to the data? Why, or why not? (We will try fitting one in the next exercise). \r\n\r\n2. From looking at this plot, how many times in the period from 1900 - 2019 was there a Magnitude 5.5 - 6 earthquake in the region?\r\n\r\n3. If this trend continued into the future, how often would we expect a Magnitude 5.5 - 6 earthquake to occur in San Francisco?\r\n\r\n2. Investigate how changing the interval of the magnitude classes changes the plot. For example, you could try changing the interval to 0.1 magnitude bins. Save your output as a separate plot.\r\n\r\n\r\n---\r\n\r\n\r\n\r\n\r\n\r\n",
"_____no_output_____"
],
[
"## Fitting a power law to the data\r\n\r\nThe next step is to fit a power law to these data and see if it's a good fit. This will also allow us to estimate our **b value** for the region (the gradient of the fit in logarithm space, or the exponent of the power law). \r\nWe already found the log of the frequencies from earlier on:",
"_____no_output_____"
]
],
[
[
"log_frequency",
"_____no_output_____"
]
],
[
[
"Now that we've got the logarithms, let's fit a linear regression. Remember, a linear regression in logarithmic space the **same as a power law**.\r\n\r\nLet’s create a linear regression model using our magnitude and frequency data. We first have to check if there are any classes that didn't have any earthquakes in them. These can cause problems in the code, so we have to create what's called a *mask* to remove them:",
"_____no_output_____"
]
],
[
[
"# Check if there are any problems with the data because there were no earthquakes in a class\r\nmask = log_frequency >= 0\r\nprint(mask)",
"_____no_output_____"
]
],
[
[
"Now we can run our linear regression model! You'll see this outputs some values that might be familiar to you:\r\n* slope: this is the gradient of the straight line in the linear regression. This is our **b value**!\r\n* intercept: this is our y-intercept of the straight line\r\n* r_value: We can use this to work out the $R^2$ value, or how well the data fits a straight line. To do this we just have to multiply this value by itself.\r\n* p_value: This tells us how confident we can be that our data are actually well described by a linear regression. If $p < 0.05$ then we can say that our data appear to be described by a power law at the 95% confidence interval. ",
"_____no_output_____"
]
],
[
[
"slope, intercept, r_value, p_value, std_err = stats.linregress(magnitude[mask], log_frequency[mask])",
"_____no_output_____"
]
],
[
[
"Let's print out these outputs to see what the fit looks like:",
"_____no_output_____"
]
],
[
[
"# What's the slope of the regression? This is our b value\r\nslope",
"_____no_output_____"
],
[
"# What's the R^2 value? This describes how close the data are to a straight line. R^2 can vary between 0 and 1: if R^2 = 1, then all the data would lie on the line.\r\nr2 = r_value*r_value\r\nr2",
"_____no_output_____"
],
[
"# How confident are we in this? Let's print out the p value. if p < 0.05 then we can be confident at a 95% confidence interval that our regression is significant.\r\np_value",
"_____no_output_____"
]
],
[
[
"---\r\n## Exercise 2\r\n\r\n1. What is the **b value** of the magnitude-frequency plot for the San Francisco region?\r\n\r\n2. How well does a power-law fit the data? Can we say that this is statistically significant to a 95% confidence level?\r\n\r\n3. How does this **b value** compare to the b values from Southern California and from the worldwide seismicity data? What does this tell us about the number of small events in the San Francisco region as a proportion of the total, compared to these other datasets?\r\n \r\n---",
"_____no_output_____"
],
[
"## Making the final plot - adding the regression\r\n\r\nNow we've calculated our linear regression, we can add it to our magnitude frequency plot to see how well it fits the data.",
"_____no_output_____"
]
],
[
[
"# calculate the linear regression model for adding the data. We're just using the equation of a straight line to predict a model value of Y for each magnitude.\r\ny = slope*magnitude + intercept\r\nprint(y)",
"_____no_output_____"
],
[
"# define the figure size\r\nplt.figure(figsize=(8,6))\r\n\r\n# make the scatter plot of magnitude and frequency\r\nplt.scatter(magnitude, log_frequency, c='red', s=200, edgecolors='black')\r\n\r\n# plot the linear regression as a line\r\nplt.plot(magnitude, y, color='black', linestyle='--', label = 'Power law fit')\r\n\r\n# add axis labels\r\nplt.xlabel('Earthquake magnitude', fontsize=16)\r\nplt.ylabel('log frequency', fontsize=16)\r\n\r\n# increase the fontsize of the ticks\r\nplt.xticks(fontsize=14)\r\nplt.yticks(fontsize=14)\r\n\r\n# add a legend\r\nplt.legend(loc='upper right', fontsize=14)\r\n\r\n# add the b value to the plot, rounded to 2 decimal places, using plt.text. The first number is the position on the X axis, the second is the position on the Y axis\r\n# and the s parameter is the text that you want to write.\r\nplt.text(7.0, 4, s='b = '+str(np.round(slope,2)), fontsize=20)\r\n\r\nplt.savefig('San_Francisco_mag-freq_regression.png')",
"_____no_output_____"
],
[
"",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
]
] |
4ab35ae3fd0d2ac5af3e6da283cb53dc90b875fd
| 11,408 |
ipynb
|
Jupyter Notebook
|
content/estimate_factor_loadings.ipynb
|
emilyntaylor/ledatascifi-2021
|
7add3109b564a9821d1451461765d0bd0a546197
|
[
"MIT"
] | 1 |
2021-05-28T17:24:51.000Z
|
2021-05-28T17:24:51.000Z
|
content/estimate_factor_loadings.ipynb
|
emilyntaylor/ledatascifi-2021
|
7add3109b564a9821d1451461765d0bd0a546197
|
[
"MIT"
] | 15 |
2021-02-01T06:23:48.000Z
|
2021-04-26T12:40:41.000Z
|
content/estimate_factor_loadings.ipynb
|
emilyntaylor/ledatascifi-2021
|
7add3109b564a9821d1451461765d0bd0a546197
|
[
"MIT"
] | 35 |
2021-02-01T17:41:34.000Z
|
2021-09-28T00:41:45.000Z
| 31.169399 | 238 | 0.454769 |
[
[
[
"# A canonical asset pricing job\n\nLet's estimate, for each firm, for each year, the alpha, beta, and size and value loadings.\n\nSo we want a dataset that looks like this:\n\n| Firm | Year | alpha | beta | \n| --- | --- | --- | --- |\n| GM | 2000 | 0.01 | 1.04 |\n| GM | 2001 | -0.005 | 0.98 |\n\n...but it will do this for every firm, every year!",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas_datareader as pdr\nimport seaborn as sns\n# import statsmodels.api as sm",
"_____no_output_____"
]
],
[
[
"Load your stock returns. Here, I'll use this dataset, but you can use anything. \n\nThe returns don't even have to be firms. \n\n**They can be any asset.** (Portfolios, mutual funds, crypto, ...)",
"_____no_output_____"
]
],
[
[
"crsp = pd.read_stata('https://github.com/LeDataSciFi/ledatascifi-2021/blob/main/data/3firm_ret_1990_2020.dta?raw=true')\ncrsp['ret'] = crsp['ret']*100 # convert to percentage to match FF's convention on scaling (daily % rets)",
"_____no_output_____"
]
],
[
[
"Then grab the market returns. Here, we will use one of the Fama-French datasets.",
"_____no_output_____"
]
],
[
[
"ff = pdr.get_data_famafrench('F-F_Research_Data_5_Factors_2x3_daily',start=1980,end=2010)[0] # the [0] is because the imported object is a dictionary, and key=0 is the dataframe\nff = ff.reset_index().rename(columns={\"Mkt-RF\":\"mkt_excess\", \"Date\":\"date\"})",
"_____no_output_____"
]
],
[
[
"Merge the market returns into the stock returns.",
"_____no_output_____"
]
],
[
[
"crsp_ready = pd.merge(left=ff, right=crsp, on='date', how=\"inner\",\n indicator=True, validate=\"one_to_many\")",
"_____no_output_____"
]
],
[
[
"So the data's basically ready. Again, the goal is to estimate, for each firm, for each year, the alpha, beta, and size and value loadings. \n\nYou caught that right? I have a dataframe, and **for each** firm, and **for each** year, I want to \\<do stuff\\> (run regressions).\n \n**Pandas + \"for each\" = groupby!**\n\nSo we will _basically_ run `crsp.groupby([firm,year]).runregression()`. Except there is no \"runregression\" function that applies to pandas groupby objects. Small workaround: `crsp.groupby([firm,year]).apply(<our own reg fcn>)`.\n\nWe just need to write a reg function that works on groupby objects. \n",
"_____no_output_____"
]
],
[
[
"import statsmodels.api as sm\n\ndef reg_in_groupby(df,formula=\"ret_excess ~ mkt_excess + SMB + HML\"):\n '''\n Want to run regressions after groupby?\n \n This will do it! \n \n Note: This defaults to a FF3 model assuming specific variable names. If you\n want to run any other regression, just specify your model.\n \n Usage: \n df.groupby(<whatever>).apply(reg_in_groupby)\n df.groupby(<whatever>).apply(reg_in_groupby,formula=<whatever>)\n '''\n return pd.Series(sm.formula.ols(formula,data = df).fit().params)",
"_____no_output_____"
]
],
[
[
"Let's apply that to our returns! ",
"_____no_output_____"
]
],
[
[
"(\n crsp_ready # grab the data\n \n # Two things before the regressions:\n # 1. need a year variable (to group on)\n # 2. the market returns in FF are excess returns, so \n # our stock returns need to be excess as well\n .assign(year = crsp_ready.date.dt.year,\n ret_excess = crsp_ready.ret - crsp_ready.RF)\n \n # ok, run the regs, so easy!\n .groupby(['permno','year']).apply(reg_in_groupby)\n \n # and clean up - with better var names\n .rename(columns={'Intercept':'alpha','mkt_excess':'beta'})\n .reset_index()\n)",
"_____no_output_____"
]
],
[
[
"How cool is that!",
"_____no_output_____"
],
[
"## Summary\n\nThis is all you need to do:\n\n1. Set up the data like you would have to no matter what:\n 1. Load your stock prices.\n 1. Merge in the market returns and any factors you want to include in your model.\n 1. Make sure your returns are scaled like your factors (e.g., above, I converted to percentages to match the FF convention)\n 1. Make sure your asset returns and market returns are both excess returns (or both are not excess returns)\n 1. Create any variables you want to group on (e.g. above, I created a year variable)\n2. `df.groupby(<whatever>).apply(reg_in_groupby)`\n\nHoly smokes! ",
"_____no_output_____"
]
]
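As a self-contained check of the groupby-plus-apply pattern, here is a toy version on synthetic data with two "firms" of known betas. It swaps the statsmodels call for `np.polyfit` purely so the sketch has no extra dependencies; the mechanics are identical:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    'permno': [1] * 50 + [2] * 50,
    'mkt_excess': rng.normal(size=100),
})
# firm 1 gets beta 1.0, firm 2 gets beta 2.0; zero alpha, no noise
df['ret_excess'] = np.where(df['permno'] == 1, 1.0, 2.0) * df['mkt_excess']

def reg_in_groupby(d):
    # numpy-only stand-in for the statsmodels OLS call used above
    beta, alpha = np.polyfit(d['mkt_excess'], d['ret_excess'], 1)
    return pd.Series({'beta': beta, 'alpha': alpha})

out = df.groupby('permno').apply(reg_in_groupby)
print(out.round(3))  # beta ~ 1.0 for firm 1, ~ 2.0 for firm 2
```

Because the synthetic returns contain no noise, the per-firm regressions recover the betas exactly.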
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
4ab35bdf1201f5d52a6ef693c5b43c1313969332
| 1,768 |
ipynb
|
Jupyter Notebook
|
prepare_data.ipynb
|
AHNU2019/AHNU_captcha
|
336d16fceb810ba82983059e5ecca2c5d4a7d1ef
|
[
"MIT"
] | null | null | null |
prepare_data.ipynb
|
AHNU2019/AHNU_captcha
|
336d16fceb810ba82983059e5ecca2c5d4a7d1ef
|
[
"MIT"
] | null | null | null |
prepare_data.ipynb
|
AHNU2019/AHNU_captcha
|
336d16fceb810ba82983059e5ecca2c5d4a7d1ef
|
[
"MIT"
] | null | null | null | 20.55814 | 98 | 0.511312 |
[
[
[
"import json\nimport os\nfrom tqdm import tqdm",
"_____no_output_____"
],
[
"# results_json from YOLO v4 datections\nresults_json = '/home/acai/darknet/capt_result.json'",
"_____no_output_____"
],
[
"with open(results_json) as f:\n labels = json.load(f)",
"_____no_output_____"
],
[
"for i in tqdm(labels):\n filename = i['filename']\n objects = i['objects']\n sorted_objects = sorted(objects, key=lambda x: x['relative_coordinates']['center_x'])\n label = ''.join([obj['name'] for obj in sorted_objects])\n os.rename(filename, filename[:-4] + '_' + label + '.png')",
"100%|██████████| 200000/200000 [00:06<00:00, 32167.14it/s]\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code"
]
] |
4ab363fdbbb2605496f671bcd0cd6fe4c4bc0e4d
| 683,445 |
ipynb
|
Jupyter Notebook
|
analysis/explorative_studies/Pluto_noisecomplaints_.ipynb
|
sunghoonyang/noise-capstone
|
6cb21f85c23bba93654d0160022d132b5d6aa319
|
[
"MIT"
] | 2 |
2019-03-03T18:57:47.000Z
|
2019-05-25T02:17:16.000Z
|
analysis/explorative_studies/Pluto_noisecomplaints_.ipynb
|
qygoh/noise-capstone
|
6cb21f85c23bba93654d0160022d132b5d6aa319
|
[
"MIT"
] | null | null | null |
analysis/explorative_studies/Pluto_noisecomplaints_.ipynb
|
qygoh/noise-capstone
|
6cb21f85c23bba93654d0160022d132b5d6aa319
|
[
"MIT"
] | 3 |
2019-03-29T18:16:52.000Z
|
2019-10-22T15:11:39.000Z
| 508.137546 | 537,840 | 0.945589 |
[
[
[
"# NYC PLUTO Data and Noise Complaints\n\nInvestigating how PLUTO data and zoning characteristics shape the spatial and temporal distribution, and the types, of noise complaints throughout New York City. Specifically looking at noise complaints that are handled by NYC's Department of Environmental Protection (DEP).\n\nAll work performed by Zoe Martiniak.",
"_____no_output_____"
]
],
[
[
"import os\nimport pandas as pd\nimport numpy as np\nimport datetime\nimport urllib\nimport requests\nfrom sodapy import Socrata\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport pylab as pl\nfrom pandas.plotting import scatter_matrix\n%matplotlib inline\n%pylab inline\n\n##Geospatial\nimport shapely\nimport geopandas as gp\nfrom geopandas import GeoDataFrame\nfrom fiona.crs import from_epsg\nfrom shapely.geometry import Point, MultiPoint\nimport io\nfrom geopandas.tools import sjoin\nfrom shapely.ops import nearest_points\n\n## Statistical Modelling\nimport statsmodels.api as sm\nimport statsmodels.formula.api as smf\nfrom statsmodels.datasets.longley import load\nimport sklearn.preprocessing as preprocessing\nfrom sklearn.ensemble import RandomForestRegressor as rfr\nfrom sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in scikit-learn 0.20\nfrom sklearn.metrics import confusion_matrix\n\n\nfrom APPTOKEN import myToken\n## Save your Socrata app token as variable myToken in a file titled APPTOKEN.py\n## e.g.\n## myToken = 'XXXXXXXXXXXXXXXX'",
"Populating the interactive namespace from numpy and matplotlib\n"
]
],
[
[
"# DATA IMPORTING\n\nApplying domain knowledge to only read in columns of interest to reduce computing requirements.",
"_____no_output_____"
],
[
"### PLUTO csv file",
"_____no_output_____"
]
],
[
[
"pluto = pd.read_csv(os.getenv('MYDATA')+'/pluto_18v2.csv', usecols=['borocode','zonedist1', \n 'overlay1', 'bldgclass', 'landuse',\n 'ownertype','lotarea', 'bldgarea', 'comarea',\n 'resarea', 'officearea', 'retailarea', 'garagearea', 'strgearea',\n 'factryarea', 'otherarea', 'numfloors',\n 'unitsres', 'unitstotal', 'proxcode', 'lottype','lotfront', \n 'lotdepth', 'bldgfront', 'bldgdepth',\n 'yearalter1', \n 'assessland', 'yearbuilt','histdist', 'landmark', 'builtfar',\n 'residfar', 'commfar', 'facilfar','bbl', 'xcoord','ycoord'])\n",
"_____no_output_____"
]
],
[
[
"### 2010 Census Blocks",
"_____no_output_____"
]
],
[
[
"census = gp.read_file('Data/2010 Census Blocks/geo_export_56edaf68-bbe6-44a7-bd7c-81a898fb6f2e.shp')",
"_____no_output_____"
]
],
[
[
"### Read in 311 Complaints",
"_____no_output_____"
]
],
[
[
"complaints = pd.read_csv('Data/311DEPcomplaints.csv', usecols=['address_type','borough','city',\n 'closed_date', 'community_board','created_date',\n 'cross_street_1', 'cross_street_2', 'descriptor', 'due_date',\n 'facility_type', 'incident_address', 'incident_zip',\n 'intersection_street_1', 'intersection_street_2', 'latitude',\n 'location_type', 'longitude', 'resolution_action_updated_date',\n 'resolution_description', 'status', 'street_name' ])\n",
"/Users/zoemartiniak/anaconda3/lib/python3.6/site-packages/IPython/core/interactiveshell.py:2785: DtypeWarning: Columns (12,20) have mixed types. Specify dtype option on import or set low_memory=False.\n interactivity=interactivity, compiler=compiler, result=result)\n"
],
[
"## Many missing lat/lon values in complaints file\n## Is it worth it to manually fill in NaN with geopy geocded laton/long?\nlen(complaints[(complaints.latitude.isna()) | (complaints.longitude.isna())])/len(complaints)\n",
"_____no_output_____"
]
],
[
[
"### Manually Filling in Missing Lat/Long from Addresses\n\nVery time- and computationally-expensive, so this step should be performed on a different machine.\nFor our intents and purposes, I will just drop the rows with missing lat/long.",
"_____no_output_____"
]
],
[
[
"import geopy\nfrom geopy.geocoders import Nominatim\nimport geopy.distance\ngeolocator = Nominatim(user_agent = 'python-script')\n",
"_____no_output_____"
],
[
"## Finding the index of complaints where lat/long is missing \r\n## & there is address info, then geocoding to fill in the coordinates\r\n\r\nfill_address = complaints[((complaints.latitude.isna()) | \r\n (complaints.longitude.isna())) & \r\n complaints.incident_address.notna()].index\r\nfor i in fill_address:\r\n    try:\r\n        loc = geolocator.geocode(complaints['incident_address'][i]+ ' , '+\r\n                                 complaints['borough'][i] )\r\n        if loc is not None:\r\n            complaints.at[i, 'latitude'] = loc.latitude\r\n            complaints.at[i, 'longitude'] = loc.longitude\r\n    except:\r\n        continue",
"_____no_output_____"
],
[
"## Finding the index of complaints where lat/long is missing \r\n## & there is street intersection info, then geocoding to fill in the coordinates\r\n\r\nfill_cross_sec = complaints[((complaints.latitude.isna()) | \r\n (complaints.longitude.isna())) & \r\n (complaints.intersection_street_1.notna()) &\r\n (complaints.intersection_street_2.notna()) ].index\r\nfor i in fill_cross_sec:\r\n    try:\r\n        loc = geolocator.geocode(complaints['intersection_street_1'][i]+' & ' + \r\n                                 complaints['intersection_street_2'][i]+ ' , '+\r\n                                 complaints['borough'][i] )\r\n        if loc is not None:\r\n            complaints.at[i, 'latitude'] = loc.latitude\r\n            complaints.at[i, 'longitude'] = loc.longitude\r\n    except:\r\n        continue",
"_____no_output_____"
]
],
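A small failure-tolerant wrapper makes loops like these cleaner, and bulk Nominatim lookups should also be throttled (geopy ships `geopy.extra.rate_limiter.RateLimiter` for that). A minimal sketch — the helper name is my own, not from the notebook:

```python
def safe_geocode(query, geocode_fn):
    """Return (lat, lon) for a query, or (None, None) on any failure.

    geocode_fn is e.g. geolocator.geocode -- ideally wrapped as
    RateLimiter(geolocator.geocode, min_delay_seconds=1) so bulk
    lookups respect Nominatim's one-request-per-second policy.
    """
    try:
        loc = geocode_fn(query)
    except Exception:
        # network errors, timeouts, service errors all become a miss
        return (None, None)
    if loc is None:
        # the geocoder found no match for this address string
        return (None, None)
    return (loc.latitude, loc.longitude)
```

The fill loops above then collapse to a single `safe_geocode(...)` call per row followed by two `.at` assignments, with no bare `try/except` needed at the call site.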
[
[
"complaints.dropna(subset=['longitude', 'latitude'],inplace=True)\ncomplaints['createdate'] = pd.to_datetime(complaints['created_date'])\ncomplaints = complaints[complaints.createdate >= datetime.datetime(2018,1,1)]\ncomplaints = complaints[complaints.createdate < datetime.datetime(2019,1,1)]\ncomplaints['lonlat']=list(zip(complaints.longitude.astype(float), complaints.latitude.astype(float)))\ncomplaints['geometry']=complaints[['lonlat']].applymap(lambda x:shapely.geometry.Point(x))\ncrs = {'init':'epsg:4326', 'no_defs': True}\ncomplaints = gp.GeoDataFrame(complaints, crs=crs, geometry=complaints['geometry'])",
"_____no_output_____"
]
],
[
[
"## NYC Zoning Shapefile",
"_____no_output_____"
]
],
[
[
"zoning = gp.GeoDataFrame.from_file('Data/nycgiszoningfeatures_201902shp/nyzd.shp')\nzoning.to_crs(epsg=4326, inplace=True)",
"_____no_output_____"
]
],
[
[
"# PLUTO Shapefiles\n\n## Load in PLUTO Shapefiles by Boro\nThe PLUTO shapefiles are incredibly large. I used ArcMAP to separate the pluto shapefiles by borough and saved them locally. \nMy original plan was to perform a spatial join of the complaints to the pluto shapefiles to find the relationship between PLUTO data on the building-scale and noise complaints.\nWhile going through this exploratory analysis, I discovered that the 311 complaints are actually all located in the street and therefore the points do not intersect with the PLUTO shapefiles. This brings up some interesting questions, such as how the lat/long coordinates are assigned by the DEP.\n\nI am including this step to showcase that the complaints do not intersect with the shapefiles, to justify my next step of simply aggregating by zoning type with the zoning shapefiles. ",
"_____no_output_____"
]
],
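Since the complaint points sit in the street rather than on a tax lot, one workaround (not pursued below) is to snap each point to its nearest lot before joining. A brute-force sketch with shapely, on toy polygons — the helper name and coordinates are illustrative:

```python
from shapely.geometry import Point, Polygon

def nearest_lot(point, lots):
    # brute-force nearest polygon; the real PLUTO file would need a
    # spatial index (geopandas .sindex) to make this tractable
    return min(range(len(lots)), key=lambda i: point.distance(lots[i]))

# toy example: a street point lying between two square lots
lots = [Polygon([(0, 0), (1, 0), (1, 1), (0, 1)]),
        Polygon([(3, 0), (4, 0), (4, 1), (3, 1)])]
print(nearest_lot(Point(1.2, 0.5), lots))  # 0 -- closest to the first lot
```

`point.distance(polygon)` is zero when the point falls inside the lot, so the same snap would also leave correctly-located points untouched.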
[
[
"## PLUTO SHAPEFILES BY BORO\n#files = ! ls Data/PLUTO_Split | grep '.shp'\nboros= ['bronx','brooklyn','man','queens','staten']\ncolumns_to_drop = ['FID_pluto_', 'Borough','CT2010', 'CB2010',\n 'SchoolDist', 'Council', 'FireComp', 'PolicePrct',\n 'HealthCent', 'HealthArea', 'Sanitboro', 'SanitDistr', 'SanitSub',\n 'Address','BldgArea', 'ComArea', 'ResArea', 'OfficeArea',\n 'RetailArea', 'GarageArea', 'StrgeArea', 'FactryArea', 'OtherArea',\n 'AreaSource','LotFront', 'LotDepth', 'BldgFront', 'BldgDepth', 'Ext', 'ProxCode',\n 'IrrLotCode', 'BsmtCode', 'AssessLand', 'AssessTot',\n 'ExemptLand', 'ExemptTot','ResidFAR', 'CommFAR', 'FacilFAR',\n 'BoroCode','CondoNo','XCoord', 'YCoord', 'ZMCode', 'Sanborn', 'TaxMap', 'EDesigNum', 'APPBBL',\n 'APPDate', 'PLUTOMapID', 'FIRM07_FLA', 'PFIRM15_FL', 'Version','BoroCode_1', 'BoroName']\n\n\nbx_shp = gp.GeoDataFrame.from_file('Data/PLUTO_Split/Pluto_bronx.shp')\nbx_311 = complaints[complaints.borough == 'BRONX']\nbx_shp.to_crs(epsg=4326, inplace=True)\nbx_shp.drop(columns_to_drop, axis=1, inplace=True)",
"_____no_output_____"
]
],
[
[
"bk_shp =gp.GeoDataFrame.from_file('Data/PLUTO_Split/Pluto_brooklyn.shp')\nbk_311 = complaints[complaints.borough == 'BROOKLYN']\nbk_shp.to_crs(epsg=4326, inplace=True)\nbk_shp.drop(columns_to_drop, axis=1, inplace=True)",
"_____no_output_____"
],
[
"mn_shp =gp.GeoDataFrame.from_file('Data/PLUTO_Split/Pluto_man.shp')\nmn_311 = complaints[complaints.borough == 'MANHATTAN']\nmn_shp.to_crs(epsg=4326, inplace=True)\nmn_shp.drop(columns_to_drop, axis=1, inplace=True)",
"_____no_output_____"
],
[
"qn_shp =gp.GeoDataFrame.from_file('Data/PLUTO_Split/Pluto_queens.shp')\nqn_311 = complaints[complaints.borough == 'QUEENS']\nqn_shp.to_crs(epsg=4326, inplace=True)\nqn_shp.drop(columns_to_drop, axis=1, inplace=True)",
"_____no_output_____"
],
[
"si_shp =gp.GeoDataFrame.from_file('Data/PLUTO_Split/Pluto_staten.shp')\nsi_311 = complaints[complaints.borough == 'STATEN ISLAND']\nsi_shp.to_crs(epsg=4326, inplace=True)\nsi_shp.drop(columns_to_drop, axis=1, inplace=True)",
"_____no_output_____"
]
],
[
[
"## Mapping",
"_____no_output_____"
]
],
[
[
"f, ax = plt.subplots(figsize=(15,15))\n#ax.get_xaxis().set_visible(False)\n#ax.get_yaxis().set_visible(False)\nax.set_xlim(-73.91, -73.9)\nax.set_ylim(40.852, 40.86)\nbx_shp.plot(ax=ax, color = 'w', edgecolor='k',alpha=0.5, legend=True)\nplt.title(\"2018 Bronx Noise Complaints\", size=20)\nbx_311.plot(ax=ax,marker='.', color='red')#, markersize=.4, alpha=.4)\n#fname = 'Bronx2018zoomed.png'\n#plt.savefig(fname)\nplt.show()",
"_____no_output_____"
]
],
[
[
"**Fig1:** This figure shows that the complaint points are located in the street, and therefore do not intersect with a tax lot. Therefore we cannot perform a spatial join on the two shapefiles.",
"_____no_output_____"
],
[
"# Data Cleaning & Simplifying\n\nHere we apply our domain knowledge of zoning and Pluto data to do a bit of cleaning. This includes simplifying the zoning districts to extract the first letter, which can be one of the following five options:<br />\nB: Ball Field, BPC<br />\nP: Public Place, Park, Playground (all public areas)<br />\nC: Commercial<br />\nR: Residential<br />\nM: Manufacturing<br />",
"_____no_output_____"
]
],
[
[
"print(len(zoning.ZONEDIST.unique()))\nprint(len(pluto.zonedist1.unique()))",
"166\n163\n"
],
[
"def simplifying_zone(x):\n if x in ['PLAYGROUND','PARK','PUBLIC PLACE','BALL FIELD' ,'BPC']:\n return 'P'\n if '/' in x:\n return 'O'\n if x[:3] == 'R10':\n return x[:3]\n else:\n return x[:2]",
"_____no_output_____"
],
[
"def condensed_simple(x):\n    # check 'R10' before the two-character prefixes, otherwise it would match 'R1'\n    if x[:3] == 'R10' or x[:2] in ['R8','R9']:\n        return 'R8-R10'\n    if x[:2] in ['R1','R2', 'R3','R4']:\n        return 'R1-R4'\n    if x[:2] in ['R5','R6', 'R7']:\n        return 'R5-R7'\n    if x[:2] in ['C1','C2']:\n        return 'C1-C2'\n    if x[:2] in ['C5','C6']:\n        return 'C5-C6'\n    if x[:2] in ['C3','C4','C7','C8']:\n        return 'C'\n    if x[:1] =='M':\n        return 'M'\n    else:\n        return x[:2]",
"_____no_output_____"
],
[
"cols_to_tidy = []\nfor c in pluto.columns:\n    if type(pluto[c].mode()[0]) == str:\n        cols_to_tidy.append(c)",
"_____no_output_____"
],
[
"for c in cols_to_tidy: \n pluto[c].fillna('U',inplace=True)\npluto.fillna(0,inplace=True)",
"_____no_output_____"
],
[
"pluto['bldgclass'] = pluto['bldgclass'].map(lambda x: x[0])\npluto['overlay1'] = pluto['overlay1'].map(lambda x: x[:2])\npluto['simple_zone'] = pluto['zonedist1'].map(simplifying_zone)\npluto['condensed'] = pluto['simple_zone'].map(condensed_simple)",
"_____no_output_____"
]
],
[
[
"pluto['lonlat']=list(zip(pluto.xcoord, pluto.ycoord))\npluto['geometry']=pluto[['lonlat']].applymap(lambda x:shapely.geometry.Point(x))\n",
"_____no_output_____"
]
],
[
[
"zoning_analysis = pluto[['lotarea', 'bldgarea', 'comarea',\n 'resarea', 'officearea', 'retailarea', 'garagearea', 'strgearea',\n 'factryarea', 'otherarea', 'numfloors',\n 'unitsres', 'unitstotal', 'lotfront', 'lotdepth', 'bldgfront',\n 'bldgdepth', 'yearbuilt',\n 'yearalter1', 'builtfar', 'simple_zone']].copy()",
"_____no_output_____"
],
[
"zoning_analysis.dropna(inplace=True)",
"/Users/zoemartiniak/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n \"\"\"Entry point for launching an IPython kernel.\n"
],
[
"## Cleaning the Complaint file for easier 1-hot-encoding\n\ndef TOD_shifts(x):\n    if x.hour <= 7:\n        return 'M'\n    if 7 < x.hour < 18:\n        return 'D'\n    return 'E'\n\ndef DOW_(x):\n    weekdays = ['mon','tues','weds','thurs','fri','sat','sun']\n    return weekdays[x.dayofweek]\n\ndef resolution_(x):\n    descriptions = complaints.resolution_description.unique()\n    if x == descriptions[1]:\n        return 'violation'\n    if x in [descriptions[a] for a in [2,3,4,5,11,12,14,17,20,23,25]]:\n        return 'valid_no_vio'\n    if x in [descriptions[b] for b in [0,6,10,16,19,21,24]]:\n        return 'further_investigation'\n    if x in [descriptions[c] for c in [7,8,9,13,15,18,22]]:\n        return 'access_issue'\n    ",
"_____no_output_____"
]
],
[
[
"#### SIMPLIFIED COMPLAINT DESCRIPTIONS\n0: Did not observe violation<br/>\n1: Violation issued <br/>\nNo violation issued yet/canceled/resolved because:<br/>\n2: Duplicate<br/>\n3: Not warranted<br/>\n4: Complainant canceled<br/>\n5: Not warranted<br/>\n6: Investigate further<br/>\n7: Closed because complainant didn't respond<br/>\n8: Incorrect complainant contact info (phone)<br/>\n9: Incorrect complainant contact info (address)<br/>\n10: Further investigation<br/>\n11: NaN<br/>\n12: Status unavailable<br/>\n13: Could not gain access to location<br/>\n14: NYPD<br/>\n15: Sent letter to complainant after calling<br/>\n16: Received letter from dog owner<br/>\n17: Resolved with complainant<br/>\n18: Incorrect address<br/>\n19: An inspection is warranted<br/>\n20: Hydrant<br/>\n21: 2nd inspection<br/>\n22: No complainant info<br/>\n23: Refer to other agency (not nypd)<br/>\n24: Inspection is scheduled<br/>\n25: Call 311 for more info<br/>\n\nViolation: [1]\nnot warranted/canceled/otheragency/duplicate: [2,3,4,5,11,12,14,17,20,23,25]\nComplainant/access issue: [7,8,9,13,15,18,22]\nFurther investigation: [0,6,10,16,19,21,24]",
"_____no_output_____"
]
],
[
[
"complaints['TOD']=complaints.createdate.map(TOD_shifts)\ncomplaints['DOW']=complaints.createdate.map(DOW_)",
"_____no_output_____"
]
],
[
[
"## This takes a lot of computing power.\ncomplaints['Res']=complaints.resolution_description.map(resolution_)",
"_____no_output_____"
]
],
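The row-wise `map` above calls `unique()` on every row, which is what makes it so slow. Building the description-to-category dictionary once and mapping against it is much cheaper; a sketch with made-up description strings (the category labels match the ones used above):

```python
import pandas as pd

# toy stand-ins for complaints.resolution_description.unique()
descriptions = ['inspected, violation issued',
                'duplicate complaint',
                'could not gain access',
                'inspection is scheduled']

groups = {'violation': [0],
          'valid_no_vio': [1],
          'access_issue': [2],
          'further_investigation': [3]}

# invert once: description string -> category label
lookup = {descriptions[i]: cat for cat, idxs in groups.items() for i in idxs}

s = pd.Series(['duplicate complaint', 'could not gain access'])
print(s.map(lookup).tolist())  # ['valid_no_vio', 'access_issue']
```

`Series.map` with a dict is vectorized over a single hash lookup per row, so the same idea applied to the real index lists turns this cell from minutes into seconds.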
[
[
"## PLUTO/Zoning Feature Analysis",
"_____no_output_____"
]
],
[
[
"## Obtained this line of code from datascience.stackexchange @ the following link:\n## https://datascience.stackexchange.com/questions/10459/calculation-and-visualization-of-correlation-matrix-with-pandas\ndef drange(start, stop, step):\n r = start\n while r <= stop:\n yield r\n r += step\n \ndef correlation_matrix(df):\n from matplotlib import pyplot as plt\n from matplotlib import cm as cm\n\n fig = plt.figure(figsize=(10,10))\n ax1 = fig.add_subplot(111)\n cmap = cm.get_cmap('jet', 30)\n cax = ax1.imshow(df.corr(), interpolation=\"nearest\", cmap=cmap)\n ax1.grid(True)\n plt.title('PLUTO Correlation', size=20)\n labels =[x for x in zoning_analysis.columns ] \n ax1.set_yticklabels(labels,fontsize=14)\n ax1.set_xticklabels(labels,fontsize=14, rotation='90')\n\n # Add colorbar, make sure to specify tick locations to match desired ticklabels\n fig.colorbar(cax, ticks = list(drange(-1, 1, 0.25)))\n plt.show()\n\ncorrelation_matrix(zoning_analysis)\n",
"_____no_output_____"
],
[
"zoning_analysis.sort_values(['simple_zone'],ascending=False, inplace=True)\ny = zoning_analysis.groupby('simple_zone').mean()\nf, axes = plt.subplots(figsize=(8,25), nrows=6, ncols=1)\ncols = ['lotarea', 'bldgarea', 'comarea', 'resarea', 'officearea', 'retailarea']\nfor colind in range(6):\n y[cols[colind]].plot(ax = plt.subplot(6,1,colind+1), kind='bar')\n plt.ylabel('Avg. {} Units'.format(cols[colind]))\n plt.title(cols[colind])\n \n ",
"/Users/zoemartiniak/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:1: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame\n\nSee the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy\n \"\"\"Entry point for launching an IPython kernel.\n"
],
[
"zoning['simple_zone'] = zoning['ZONEDIST'].map(simplifying_zone)\nzoning['condensed'] = zoning['simple_zone'].map(condensed_simple)",
"_____no_output_____"
],
[
"zoning = zoning.reset_index().rename(columns={'index':'zdid'})",
"_____no_output_____"
]
],
[
[
"## Perform Spatial Joins",
"_____no_output_____"
]
],
[
[
"## Joining the Census block shapefile to a PLUTO shapefile\n## (plutoshp stands in for one of the per-borough shapefiles loaded above)\nplutoshp = bx_shp\nsjoin(census, plutoshp)",
"_____no_output_____"
],
[
"## Joining the zoning shapefile to complaints\nzoning_joined = sjoin(zoning, complaints).reset_index()\nzoning_joined.drop('index',axis=1, inplace=True)\nprint(zoning.shape)\nprint(complaints.shape)\nprint(zoning_joined.shape)",
"_____no_output_____"
],
[
"zoning_joined.drop(columns=['index_right', 'address_type', 'borough',\n 'city', 'closed_date', 'community_board', 'created_date',\n 'cross_street_1', 'cross_street_2', 'due_date',\n 'facility_type', 'incident_address', 'incident_zip',\n 'intersection_street_1', 'intersection_street_2', \n 'location_type', 'resolution_action_updated_date',\n 'resolution_description', 'status', 'street_name', 'lonlat'], inplace=True)",
"_____no_output_____"
],
[
"## Joining each borough PLUTO shapefile to zoning shapefile\nbx_shp['centroid_colum'] = bx_shp.centroid\nbx_shp = bx_shp.set_geometry('centroid_colum')\npluto_bx = sjoin(zoning, bx_shp).reset_index()",
"_____no_output_____"
],
[
"print(zoning.shape)\nprint(bx_shp.shape)\nprint(pluto_bx.shape)",
"_____no_output_____"
],
[
"pluto_bx = pluto_bx.groupby('zdid')[['LandUse', 'LotArea', 'NumBldgs', 'NumFloors', 'UnitsRes',\n 'UnitsTotal', 'LotType', 'YearBuilt','YearAlter1', 'YearAlter2','BuiltFAR']].mean()",
"_____no_output_____"
],
[
"pluto_bx = zoning.merge(pluto_bx, on='zdid')",
"_____no_output_____"
]
],
[
[
"# ANALYSIS",
"_____no_output_____"
],
[
"## Visual Analysis",
"_____no_output_____"
]
],
[
[
"x = zoning_joined.groupby('simple_zone')['ZONEDIST'].count().index\ny = zoning_joined.groupby('simple_zone')['ZONEDIST'].count()\n\nf, ax = plt.subplots(figsize=(12,9))\nplt.bar(x, y)\nplt.ylabel('Counts', size=12)\nplt.title('Noise Complaints by Zoning Districts (2018)', size=15)",
"_____no_output_____"
]
],
[
[
"**Fig 1** This shows the total counts of complaints by zoning district. Clearly there are more complaints in middle/high-population-density residential zoning districts. There are also many complaints in commercial districts C5 & C6; these commercial districts tend to have a residential overlay.",
"_____no_output_____"
]
],
[
[
"y.sort_values(ascending=False, inplace=True)\nx = y.index\ndescriptors = zoning_joined.descriptor.unique()\n\ndf = pd.DataFrame(index=x)\n\n\nfor d in descriptors:\n df[d] = zoning_joined[zoning_joined.descriptor == d].groupby('simple_zone')['ZONEDIST'].count()\n\ndf = df.div(df.sum(axis=1), axis=0)\nax = df.plot(kind=\"bar\", stacked=True, figsize=(18,12))\ndf.sum(axis=1).plot(ax=ax, color=\"k\")\nplt.title('Noise Complaints by Descriptor', size=20)\nplt.xlabel('Simplified Zone District (Decreasing Total Count -->)', size=12)\nplt.ylabel('%', size=12)\nfname = 'Descriptorpercent.jpeg'\n#plt.savefig(fname)\nplt.show()",
"_____no_output_____"
]
],
[
[
"**Fig 2** This figure shows the breakdown of the main noise complaint types per zoning district.",
"_____no_output_____"
]
],
[
[
"descriptors",
"_____no_output_____"
],
[
"complaints_by_zone = pd.get_dummies(zoning_joined, columns=['TOD','DOW'])\ncomplaints_by_zone = complaints_by_zone.rename(columns={'TOD_D':'Day','TOD_E':'Night',\n 'TOD_M':'Morning','DOW_fri':'Friday','DOW_mon':'Monday','DOW_sat':'Saturday',\n 'DOW_sun':'Sunday','DOW_thurs':'Thursday','DOW_tues':'Tuesday','DOW_weds':'Wednesday'})\ncomplaints_by_zone.drop(columns=['descriptor', 'latitude', 'longitude','createdate'],inplace=True)",
"_____no_output_____"
],
[
"complaints_by_zone = complaints_by_zone.groupby('zdid').sum()[['Day', 'Night', 'Morning', 'Friday',\n 'Monday', 'Saturday', 'Sunday', 'Thursday', 'Tuesday', 'Wednesday']].reset_index()\n",
"_____no_output_____"
],
[
"## Creating total counts of complaints by zoning district\ncomplaints_by_zone['Count_TOD'] = (complaints_by_zone.Day + \n complaints_by_zone.Night + \n complaints_by_zone.Morning)\ncomplaints_by_zone['Count_DOW'] = (complaints_by_zone.Monday + \n complaints_by_zone.Tuesday + \n complaints_by_zone.Wednesday +\n complaints_by_zone.Thursday + \n complaints_by_zone.Friday + \n complaints_by_zone.Saturday + \n complaints_by_zone.Sunday)\n\n## Verifying the counts are the same\ncomplaints_by_zone[complaints_by_zone.Count_TOD != complaints_by_zone.Count_DOW]\n",
"_____no_output_____"
],
[
"print(complaints_by_zone.shape)\nprint(zoning.shape)\n\ncomplaints_by_zone = zoning.merge(complaints_by_zone, on='zdid')\nprint(complaints_by_zone.shape)",
"_____no_output_____"
],
[
"f, ax = plt.subplots(1,figsize=(13,13))\nax.set_axis_off()\nax.set_title('Avg # of Complaints',size=15)\ncomplaints_by_zone.plot(ax=ax, column='Count_TOD', cmap='gist_earth', k=3, alpha=0.7, legend=True)\nfname = 'AvgComplaintsbyZD.png'\nplt.savefig(fname)\nplt.show()",
"_____no_output_____"
],
[
"complaints_by_zone['Norm_count'] = complaints_by_zone.Count_TOD/complaints_by_zone.Shape_Area*1000000\nf, ax = plt.subplots(1,figsize=(13,13))\nax.set_axis_off()\nax.set_title('Complaints Normalized by ZD Area',size=15)\ncomplaints_by_zone[complaints_by_zone.Norm_count < 400].plot(ax=ax, column='Norm_count', cmap='gist_earth', k=3, alpha=0.7, legend=True)\nfname = 'NormComplaintsbyZD.png'\nplt.savefig(fname)\nplt.show()\n\n",
"_____no_output_____"
]
],
[
[
"**Fig 3** This figure shows the spread of noise complaint density (complaints per unit area) of each zoning district.",
"_____no_output_____"
]
],
[
[
"complaints_by_zone.columns",
"_____no_output_____"
],
[
"TODcols = ['Day', 'Night', 'Morning']\nfig = pl.figure(figsize=(30,10))\n\nfor x in range(1,4):\n    \n    fig.add_subplot(1,3,x).set_axis_off()\n    fig.add_subplot(1,3,x).set_title(TODcols[x-1], size=28)\n    complaints_by_zone.plot(column=TODcols[x-1], cmap='gist_stern', alpha=1, \n                            ax=fig.add_subplot(1,3,x), legend=True)\n    ",
"_____no_output_____"
],
[
"DOWcols = ['Friday', 'Monday', 'Saturday', 'Sunday', 'Thursday', 'Tuesday', 'Wednesday']\nfig = pl.figure(figsize=(30,20))\nfor x in range(1,8):\n    \n    fig.add_subplot(2,4,x).set_axis_off()\n    fig.add_subplot(2,4,x).set_title(DOWcols[x-1], size=28)\n    complaints_by_zone.plot(column=DOWcols[x-1], cmap='gist_stern', alpha=1, \n                            ax=fig.add_subplot(2,4,x), legend=True)\n    \n",
"_____no_output_____"
]
],
[
[
"## Regression",
"_____no_output_____"
],
[
"Define lat/long coordinates of zoning centroids for regression",
"_____no_output_____"
]
],
[
[
"complaints_by_zone.shape",
"_____no_output_____"
],
[
"complaints_by_zone['centerlong'] = complaints_by_zone.centroid.x\ncomplaints_by_zone['centerlat'] = complaints_by_zone.centroid.y\n",
"_____no_output_____"
],
[
"mod = smf.ols(formula = \n 'Norm_count ~ centerlat + centerlong', data=complaints_by_zone)\nresults1 = mod.fit()\nresults1.summary()",
"_____no_output_____"
],
[
"len(complaints_by_zone.ZONEDIST.unique())",
"_____no_output_____"
],
[
"mod = smf.ols(formula = \n 'Norm_count ~ ZONEDIST', data=complaints_by_zone)\nresults1 = mod.fit()\nresults1.summary()",
"_____no_output_____"
],
[
"len(complaints_by_zone.simple_zone.unique())",
"_____no_output_____"
],
[
"mod = smf.ols(formula = \n 'Norm_count ~ simple_zone', data=complaints_by_zone)\nresults1 = mod.fit()\nresults1.summary()",
"_____no_output_____"
],
[
"complaints_by_zone.condensed.unique()",
"_____no_output_____"
],
[
"mod = smf.ols(formula = \n 'Norm_count ~ condensed', data=complaints_by_zone)\nresults1 = mod.fit()\nresults1.summary()",
"_____no_output_____"
]
],
[
[
"### Plan \n- Join all zoning districts to PLUTO shapefiles, aggregate features\n- Perform regression\n\nComplex classifiers:\n- Decision tree and clustering",
"_____no_output_____"
]
],
[
[
"from sklearn.tree import DecisionTreeClassifier\n\n# learn model\ndt = DecisionTreeClassifier()\ndt.fit(X_train, y_train)\n\n# in-sample accuracy\nprint('In sample accuracy:', dt.score(X_train, y_train))\n\n# out-of-sample accuracy\nprint('Out of sample accuracy:', dt.score(X_test, y_test))",
"_____no_output_____"
]
],
[
[
"import folium\nfrom folium.plugins import HeatMap",
"_____no_output_____"
],
[
"hmap = folium.Map()\n# assumes the centerlat/centerlong centroid columns computed above are in WGS84 lat/long\nhm_wide = HeatMap(list(zip(complaints_by_zone.centerlat, complaints_by_zone.centerlong)))\nhm_wide.add_to(hmap)\n",
"_____no_output_____"
],
[
"f, ax = plt.subplots(figsize=(15,15))\n#ax.get_xaxis().set_visible(False)\n#ax.get_yaxis().set_visible(False)\nzoning.plot(column='counts',ax=ax, cmap='plasma', alpha = 0.9, legend=True)\nplt.title(\"Complaints by Zone\", size=20)\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"raw",
"code",
"markdown",
"code",
"markdown",
"code",
"raw",
"markdown",
"code",
"markdown",
"code",
"raw",
"code",
"markdown",
"code",
"raw",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"raw",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"raw",
"raw",
"raw"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"raw",
"raw",
"raw",
"raw"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"raw"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"raw"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"raw"
],
[
"code",
"code",
"code"
]
] |
4ab374e28f96768c96377b4a3a0bbfee7c4a510e
| 26,988 |
ipynb
|
Jupyter Notebook
|
Data Milestone hw7.ipynb
|
jingyany/Machine-Learning-Algorithms
|
381a9bb0af531de580eea0e526ce0d462a43126a
|
[
"MIT"
] | null | null | null |
Data Milestone hw7.ipynb
|
jingyany/Machine-Learning-Algorithms
|
381a9bb0af531de580eea0e526ce0d462a43126a
|
[
"MIT"
] | null | null | null |
Data Milestone hw7.ipynb
|
jingyany/Machine-Learning-Algorithms
|
381a9bb0af531de580eea0e526ce0d462a43126a
|
[
"MIT"
] | null | null | null | 25.581043 | 303 | 0.568771 |
[
[
[
"import os\nimport re\nimport random\n\nimport tensorflow as tf\nimport tensorflow.python.platform\nfrom tensorflow.python.platform import gfile\nimport numpy as np\nimport pandas as pd\nimport sklearn\nfrom sklearn import metrics\nfrom sklearn import model_selection\nimport sklearn.linear_model\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import accuracy_score, confusion_matrix\nfrom sklearn.svm import SVC, LinearSVC\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport pickle\nimport scipy.linalg\nimport sklearn.preprocessing\nimport sklearn.linear_model\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.ensemble import BaggingClassifier",
"_____no_output_____"
]
],
[
[
"### Warm-up\n\n(a) In a one-vs-one fashion, for each pairs of classes, train a linear SVM classifier using scikit-learn's function LinearSVC, with the default value for the regularization parameter. Compute the multi-class misclassification error obtained using these classifiers trained in a one-vs-one fashion.",
"_____no_output_____"
]
],
[
[
"X_train = pickle.load(open('features_train_all','rb'))\ny_train = pickle.load(open('labels_train_all','rb'))",
"_____no_output_____"
],
[
"X_train1, X_test1, y_train1, y_test1 = train_test_split(X_train, y_train, test_size=0.2, random_state=42)",
"_____no_output_____"
],
[
"LinearSVC_ovo = SVC(C=1.0, kernel='linear', max_iter=1000, decision_function_shape = 'ovo')",
"_____no_output_____"
],
[
"LinearSVC_ovo.fit(X_train1, y_train1)",
"_____no_output_____"
],
[
"y_lrSVC_ovo = LinearSVC_ovo.predict(X_test1)",
"_____no_output_____"
],
[
"accuracy_lrSVC_ovo = accuracy_score(y_test1, y_lrSVC_ovo)\nmisclassification_error = 1 - accuracy_lrSVC_ovo\nprint(\"The multi-class misclassification error obtained using classifiers trained in a one-vs-one fashion is \", + misclassification_error)",
"The multi-class misclassification error obtained using classifiers trained in a one-vs-one fashion is 0.327546296296\n"
]
],
[
[
"(b) In a one-vs-rest fashion, for each class, train a linear SVM classifier using scikit-learn's function LinearSVC, with the default value for $\\lambda_c$. Compute the multi-class misclassification error obtained using these classifiers trained in a one-vs-rest fashion.",
"_____no_output_____"
]
],
[
[
"linearSVC_ovr = LinearSVC(C=1.0, loss='squared_hinge', penalty='l2',multi_class='ovr')",
"_____no_output_____"
],
[
"linearSVC_ovr.fit(X_train1, y_train1)",
"_____no_output_____"
],
[
"y_lrSVC_ovr = linearSVC_ovr.predict(X_test1)",
"_____no_output_____"
],
[
"accuracy_lrSVC_ovr = accuracy_score(y_test1, y_lrSVC_ovr)\nmisclassification_error1 = 1 - accuracy_lrSVC_ovr\nprint(\"The multi-class misclassification error obtained using classifiers trained in a one-vs-rest fashion is \", + misclassification_error1)",
"The multi-class misclassification error obtained using classifiers trained in a one-vs-rest fashion is 0.298611111111\n"
]
],
[
[
"(c) Using the option multi class='crammer singer' in scikitlearn's function LinearSVC, train a multi-class linear SVM classifier using the default value for the regularization parameter. Compute the multi-class misclassification error obtained using this multi-class linear SVM classifier.",
"_____no_output_____"
]
],
[
[
"linearSVC_cs = LinearSVC(C=1.0, loss='squared_hinge', penalty='l2',multi_class='crammer_singer')",
"_____no_output_____"
],
[
"linearSVC_cs.fit(X_train1, y_train1)",
"_____no_output_____"
],
[
"y_lrSVC_cs = linearSVC_cs.predict(X_test1)",
"_____no_output_____"
],
[
"accuracy_lrSVC_cs = accuracy_score(y_test1, y_lrSVC_cs)\nmisclassification_error2 = 1 - accuracy_lrSVC_cs\nprint(\"The multi-class misclassification error obtained using multi-class linear SVM classifier is \", + misclassification_error2)",
"The multi-class misclassification error obtained using multi-class linear SVM classifier is 0.295138888889\n"
]
],
[
[
"### Linear SVMs for multi-class classification\n\n- Redo all questions above now tuning the regularization parameters using cross-validation.",
"_____no_output_____"
]
],
[
[
"X_train_sub = X_train[:500]\ny_train_sub = y_train[:500]",
"_____no_output_____"
],
[
"#Redo Model one: linearSVC with one-vs-one\novo_svm = SVC(kernel='linear', max_iter=1000, decision_function_shape = 'ovo')\nparameters = {'C':[10**i for i in range(-4, 5)]}\nclf_ovo = GridSearchCV(ovo_svm, parameters)",
"_____no_output_____"
],
[
"clf_ovo.fit(X_train_sub, y_train_sub)",
"_____no_output_____"
],
[
"clf_ovo.best_params_",
"_____no_output_____"
],
[
"LinearSVC_ovo_opt = SVC(C=0.1, kernel='linear', max_iter=1000, decision_function_shape = 'ovo')",
"_____no_output_____"
],
[
"LinearSVC_ovo_opt.fit(X_train1, y_train1)",
"_____no_output_____"
],
[
"y_lrSVC_ovo_opt = LinearSVC_ovo_opt.predict(X_test1)",
"_____no_output_____"
],
[
"accuracy_lrSVC_ovo_opt = accuracy_score(y_test1, y_lrSVC_ovo_opt)\nmisclassification_error_opt = 1 - accuracy_lrSVC_ovo_opt\nprint(\"The multi-class misclassification error obtained using classifiers trained in a one-vs-one fashion with lambda=0.1 is \", + misclassification_error_opt)",
"The multi-class misclassification error obtained using classifiers trained in a one-vs-one fashion with lambda=0.1 is 0.326388888889\n"
],
[
"#Redo model 2: LinearSVC with one-vs-rest\novr_svm = LinearSVC(loss='squared_hinge', penalty='l2',multi_class='ovr')\nparameters = {'C':[10**i for i in range(-4, 5)]}\nclf_ovr = GridSearchCV(ovr_svm, parameters)",
"_____no_output_____"
],
[
"clf_ovr.fit(X_train_sub, y_train_sub)",
"_____no_output_____"
],
[
"clf_ovr.best_params_",
"_____no_output_____"
],
[
"linearSVC_ovr_opt = LinearSVC(C=0.01, loss='squared_hinge', penalty='l2',multi_class='ovr')",
"_____no_output_____"
],
[
"linearSVC_ovr_opt.fit(X_train1, y_train1)",
"_____no_output_____"
],
[
"y_lrSVC_ovr_opt = linearSVC_ovr_opt.predict(X_test1)",
"_____no_output_____"
],
[
"accuracy_lrSVC_ovr_opt = accuracy_score(y_test1, y_lrSVC_ovr_opt)\nmisclassification_error1_opt = 1 - accuracy_lrSVC_ovr_opt\nprint(\"The multi-class misclassification error obtained using classifiers trained in a one-vs-rest fashion with lambda=0.01 is \", + misclassification_error1_opt)",
"The multi-class misclassification error obtained using classifiers trained in a one-vs-rest fashion with lambda=0.01 is 0.289351851852\n"
],
[
"#Redo model 3: multi-class linear SVM\ncs_svm = LinearSVC(loss='squared_hinge', penalty='l2',multi_class='crammer_singer')\nparameters = {'C':[10**i for i in range(-4, 5)]}\nclf_cs = GridSearchCV(cs_svm, parameters)",
"_____no_output_____"
],
[
"clf_cs.fit(X_train_sub, y_train_sub)",
"_____no_output_____"
],
[
"clf_cs.best_params_",
"_____no_output_____"
],
[
"linearSVC_cs_opt = LinearSVC(C=0.1, loss='squared_hinge', penalty='l2',multi_class='crammer_singer')",
"_____no_output_____"
],
[
"linearSVC_cs_opt.fit(X_train1, y_train1)",
"_____no_output_____"
],
[
"y_lrSVC_cs_opt = linearSVC_cs_opt.predict(X_test1)",
"_____no_output_____"
],
[
"accuracy_lrSVC_cs_opt = accuracy_score(y_test1, y_lrSVC_cs_opt)\nmisclassification_error2_opt = 1 - accuracy_lrSVC_cs_opt\nprint(\"The multi-class misclassification error obtained using multi-class linear SVM with lambda=0.1 is \", + misclassification_error2_opt)",
"The multi-class misclassification error obtained using multi-class linear SVM with lambda=0.1 is 0.293981481481\n"
]
],
[
[
"### Kernel SVMs for multi-class classification\n\n- Redo all questions above now using the polynomial kernel of order 2 (and tuning the regularization parameters using cross-validation).",
"_____no_output_____"
]
],
[
[
"#Redo Model 1: polynomial kernel SVM of order 2 with one-vs-one\novo_svm_poly = SVC(kernel='poly', degree=2, max_iter=1000, decision_function_shape = 'ovo')\nparameters = {'C':[10**i for i in range(-4, 5)], 'coef0': [0, 1e-1, 1e-2, 1e-3, 1e-4]}\nclf_ovo_poly = GridSearchCV(ovo_svm_poly, parameters)",
"_____no_output_____"
],
[
"clf_ovo_poly.fit(X_train_sub, y_train_sub)",
"_____no_output_____"
],
[
"clf_ovo_poly.best_params_",
"_____no_output_____"
],
[
"polySVC_ovo_opt = SVC(C=1000, coef0=0.1, kernel='poly', degree=2, max_iter=1000, decision_function_shape = 'ovo')",
"_____no_output_____"
],
[
"polySVC_ovo_opt.fit(X_train1, y_train1)",
"_____no_output_____"
],
[
"y_ovo_poly = polySVC_ovo_opt.predict(X_test1)",
"_____no_output_____"
],
[
"accuracy_poly_ovo_opt = accuracy_score(y_test1, y_ovo_poly)\nmisclassification_error_poly1 = 1 - accuracy_poly_ovo_opt\nprint(\"The multi-class misclassification error obtained using polynomial kernel SVM in one-vs-one with lambda=1000 is \", + misclassification_error_poly1)",
"The multi-class misclassification error obtained using polynomial kernel SVM in one-vs-one with lambda=1000 is 0.327546296296\n"
],
[
"#Redo Model 2: polynomial kernel SVM of order 2 with one-vs-rest\novr_svm_poly = SVC(kernel='poly', degree=2, max_iter=1000, decision_function_shape = 'ovr')\nparameters = {'C':[10**i for i in range(-4, 5)], 'coef0': [0, 1e-1, 1e-2, 1e-3, 1e-4]}\nclf_ovr_poly = GridSearchCV(ovr_svm_poly, parameters)",
"_____no_output_____"
],
[
"clf_ovr_poly.fit(X_train_sub, y_train_sub)",
"_____no_output_____"
],
[
"clf_ovr_poly.best_params_",
"_____no_output_____"
],
[
"polySVC_ovr_opt = SVC(C=1000, coef0=0.1, kernel='poly', degree=2, max_iter=1000, decision_function_shape = 'ovr')",
"_____no_output_____"
],
[
"polySVC_ovr_opt.fit(X_train1, y_train1)",
"_____no_output_____"
],
[
"y_ovr_poly = polySVC_ovr_opt.predict(X_test1)",
"_____no_output_____"
],
[
"accuracy_poly_ovr_opt = accuracy_score(y_test1, y_ovr_poly)\nmisclassification_error_poly2 = 1 - accuracy_poly_ovr_opt\nprint(\"The multi-class misclassification error obtained using polynomial kernel SVM in one-vs-rest with lambda=1000 is \", + misclassification_error_poly2)",
"The multi-class misclassification error obtained using polynomial kernel SVM in one-vs-rest with lambda=1000 is 0.327546296296\n"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ab392b0cb02e03e7bcdf572ed5d7647cc1087b1
| 354,426 |
ipynb
|
Jupyter Notebook
|
System_monitoringu/Dane_pomiarowe/KUBA_UPADEK.ipynb
|
Beanarny/Praca_inz
|
38f843af8deeb1f1be6c77b553cfdcc4ad2a7c00
|
[
"MIT"
] | null | null | null |
System_monitoringu/Dane_pomiarowe/KUBA_UPADEK.ipynb
|
Beanarny/Praca_inz
|
38f843af8deeb1f1be6c77b553cfdcc4ad2a7c00
|
[
"MIT"
] | null | null | null |
System_monitoringu/Dane_pomiarowe/KUBA_UPADEK.ipynb
|
Beanarny/Praca_inz
|
38f843af8deeb1f1be6c77b553cfdcc4ad2a7c00
|
[
"MIT"
] | null | null | null | 1,423.39759 | 218,220 | 0.960105 |
[
[
[
"ALL BREATHS - STACKED ONE BELOW ANOTHER ON A SINGLE PLOT <br>\nSHIFTED BY APPROX. 0.5 G <br>\nIF IT WORKS OUT HERE, ADD LABELS ABOVE THE PLOTS INDICATING +0.5 N OR +0.5 G",
"_____no_output_____"
]
],
[
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"df_kuby = pd.read_excel('kuba - upadek 10 razy.xlsx', sheet_name = 'Sheet1', skiprows = 7530, nrows= 1980, usecols = 'A:D',names=('mod','x','y','z'))\n# df_kuby = pd.read_excel('kuba - upadek 10 razy.xlsx', sheet_name = 'Sheet1', skiprows = 7530, nrows= 1980, usecols = 'A:D',names=('mod','x','y','z'))\n# df_karol = pd.read_excel('Karol - oddech - ostatnie 60 sekund.xlsx', sheet_name = 'Sheet1', skiprows = 2200, nrows= 2000, usecols = 'A:D',names=('mod','x','y','z'))\n# df_anna = pd.read_excel('ciocia - siadanie+wstawanie z krzesła 47 cm , upadek, oddech 45s.xlsx', sheet_name = 'Sheet1', skiprows = 4700, nrows= 2000, usecols = 'A:D',names=('mod','x','y','z'))\n# df_mar = pd.read_excel('Marzena_oddech.xlsx', sheet_name = 'Arkusz1', skiprows = 600, nrows= 2000, usecols = 'A:D',names=('mod','x','y','z'))",
"_____no_output_____"
],
[
"arr_odd_kuby = df_kuby['mod'].to_numpy()\n# arr_odd_kuby = arr_odd_kuby+0.07\narr_odd_kuby_czas = np.arange(0,len(arr_odd_kuby)*0.03,0.03) # build a time array of matching length with the given 0.03 s step",
"_____no_output_____"
],
[
"fig = plt.figure(figsize=(18, 16), dpi= 80, facecolor='w', edgecolor='k')\nax = plt.subplot(111)\nline1, = ax.plot(arr_odd_kuby_czas,arr_odd_kuby, label='Kuba')\nplt.xlabel(\"czas [s]\", fontsize=22)\nplt.ylabel(\"amplituda [g]\",fontsize=22)\nax.tick_params(axis='both', which='major', labelsize=20)\nax.tick_params(axis='both', which='minor', labelsize=20)\nplt.ylim(0, 4)\nplt.grid()\nplt.show()",
"_____no_output_____"
],
[
"# BELOW IS THE FFT PLOTTING, LEAVING IT FOR LATER; IT IS ALSO IN ANOTHER FILE NAMED FFT ODDECH KUBY",
"_____no_output_____"
],
[
"arr = df_kuby['x'].to_numpy()",
"_____no_output_____"
],
[
"arr",
"_____no_output_____"
],
[
"from scipy import fft",
"_____no_output_____"
],
[
"fft_res = fft(arr)",
"_____no_output_____"
],
[
"fft_res",
"_____no_output_____"
],
[
"import matplotlib.pyplot as plt",
"_____no_output_____"
],
[
"plt.figure(figsize=(18, 16), dpi= 80, facecolor='w', edgecolor='k')\nplt.plot(np.real(fft_res[5:200])) # 5:200 can skip the initial peak \nplt.plot(np.imag(fft_res[5:200]))\nplt.xlabel('frequency')\nplt.ylabel('amplitude')\nplt.title(\"Kuba's breathing spectrum\")",
"_____no_output_____"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ab395e2d4e404e3c5310d67cd6ac55f36f7de31
| 177,893 |
ipynb
|
Jupyter Notebook
|
Notebooks/Exploring_projections.ipynb
|
mathisme/WiD_Residency_2022_January
|
ff4af04aeb32d7e83afad7d16f84fe77b9ae784c
|
[
"MIT"
] | 1 |
2022-02-07T21:09:08.000Z
|
2022-02-07T21:09:08.000Z
|
Notebooks/Exploring_projections.ipynb
|
mathisme/WiD_Residency_2022_January
|
ff4af04aeb32d7e83afad7d16f84fe77b9ae784c
|
[
"MIT"
] | 2 |
2022-02-05T01:52:31.000Z
|
2022-02-08T03:13:18.000Z
|
Notebooks/Exploring_projections.ipynb
|
mathisme/WiD_Residency_2022_January
|
ff4af04aeb32d7e83afad7d16f84fe77b9ae784c
|
[
"MIT"
] | 3 |
2022-02-05T01:33:19.000Z
|
2022-02-08T14:31:39.000Z
| 78.748561 | 17,160 | 0.534372 |
[
[
[
"import time\nimport pandas as pd\nimport requests\nimport json5\nimport matplotlib.pyplot as plt",
"_____no_output_____"
]
],
[
[
"# Loading national data",
"_____no_output_____"
]
],
[
[
"df_nat = pd.read_csv(\"../Data/Employment_Projections.csv\").sort_values('Employment 2030',ascending=False)",
"_____no_output_____"
]
],
[
[
"# Loading CA data",
"_____no_output_____"
]
],
[
[
"df_CA = pd.read_csv(\"../Data/CA_Long_Term_Occupational_Employment_Projections.csv\").sort_values('Projected Year Employment Estimate',ascending=False)",
"_____no_output_____"
],
[
"df_Sac = df_CA[df_CA['Area Name (County Names)']=='Sacramento--Roseville--Arden-Arcade MSA (El Dorado, Placer, Sacramento, and Yolo Counties)'].copy()",
"_____no_output_____"
],
[
"df_Cal = df_CA[df_CA['Area Name (County Names)']=='California'].copy()",
"_____no_output_____"
]
],
[
[
"Filter for occupations with a median annual wage of $40k or more, and clean the occupation codes in the national table to match the California tables",
"_____no_output_____"
]
],
[
[
"df_Sac_40k = df_Sac[df_Sac['Median Annual Wage']>=40000].copy()",
"_____no_output_____"
],
[
"df_nat['Occupation Code']=df_nat['Occupation Code'].str.extract(r'([0-9]{2}-[0-9]{4})')",
"_____no_output_____"
]
],
[
[
"Need to bin the education levels",
"_____no_output_____"
]
],
[
[
"df_Sac_40k['Entry Level Education'].value_counts()",
"_____no_output_____"
],
[
"education_levels = {'No formal educational credential':'<HS',\n 'High school diploma or equivalent':'HS+',\n \"Bachelor's degree\":'Associates+',\n \"Associate's degree\":'Associates+',\n 'Postsecondary non-degree award':'HS+',\n 'Some college, no degree':'HS+'\n \n}\ndf_Sac['Education bin_a'] = df_Sac['Entry Level Education'].replace(to_replace=education_levels)\ndf_Sac_40k['Education bin_a'] = df_Sac_40k['Entry Level Education'].replace(to_replace=education_levels)\ndf_Cal['Education bin_a'] = df_Cal['Entry Level Education'].replace(to_replace=education_levels)",
"_____no_output_____"
]
],
[
[
"Less than HS",
"_____no_output_____"
]
],
[
[
"less_hs = df_Sac[df_Sac['Education bin_a']=='<HS'].sort_values(by='Projected Year Employment Estimate',ascending=False)\nless_hs.head().transpose()",
"_____no_output_____"
],
[
"df_Sac_40k[df_Sac_40k['Education bin_a']=='<HS'].sort_values(by='Projected Year Employment Estimate',ascending=False).head().transpose()",
"_____no_output_____"
]
],
[
[
"HS or some college",
"_____no_output_____"
]
],
[
[
"hs_plus = df_Sac[df_Sac['Education bin_a']=='HS+'].sort_values(by='Projected Year Employment Estimate',ascending=False)\nhs_plus.head().transpose()",
"_____no_output_____"
],
[
"df_Sac_40k[df_Sac_40k['Education bin_a']=='HS+'].sort_values(by='Projected Year Employment Estimate',ascending=False).head().transpose()",
"_____no_output_____"
]
],
[
[
"Associates plus",
"_____no_output_____"
]
],
[
[
"sac_degree = df_Sac[df_Sac['Education bin_a']=='Associates+'].sort_values(by='Projected Year Employment Estimate',ascending=False)\nsac_degree.head().transpose()",
"_____no_output_____"
],
[
"df_Sac_40k[df_Sac_40k['Education bin_a']=='Associates+'].sort_values(by='Projected Year Employment Estimate',ascending=False).head().transpose()",
"_____no_output_____"
]
],
[
[
"Looking at bar charts of training needed and histograms of Median Annual Wage",
"_____no_output_____"
]
],
[
[
"fig,axs = plt.subplots(1,3,figsize=(12,6))\naxs[0].hist(less_hs[less_hs['Median Annual Wage']>0]['Median Annual Wage'],color='g')\naxs[1].hist(hs_plus[hs_plus['Median Annual Wage']>0]['Median Annual Wage'],color='c')\naxs[2].hist(sac_degree[sac_degree['Median Annual Wage']>0]['Median Annual Wage'],color='m')\nfig.suptitle('Distribution of Median Annual Salaries')",
"_____no_output_____"
]
],
[
[
"Ok, that is ugly",
"_____no_output_____"
]
],
[
[
"less_hs_counts = pd.DataFrame(less_hs['Job Training'].value_counts(normalize=True,sort=True,ascending=True,dropna=False))\nless_hs_counts['training needed']=less_hs_counts.index\nless_hs_counts.rename(columns={'Job Training':'frequency'}, inplace=True)\nplt.figure(figsize=(8,4))\nplt.barh(y='training needed',width='frequency',data=less_hs_counts,color='rosybrown')\nplt.title('Frequencies of training needed for occupations not requiring a high school diploma')",
"_____no_output_____"
],
[
"hs_counts = pd.DataFrame(hs_plus['Job Training'].value_counts(normalize=True,sort=True,ascending=True,dropna=False))\nhs_counts['training needed']=hs_counts.index\nhs_counts.rename(columns={'Job Training':'frequency'}, inplace=True)\nplt.figure(figsize=(8,4))\nplt.barh(y='training needed',width='frequency',data=hs_counts,color='rosybrown')\nplt.title('Frequencies of training needed for occupations requiring a high school diploma')",
"_____no_output_____"
],
[
"college_counts = pd.DataFrame(sac_degree['Job Training'].value_counts(normalize=True,sort=True,ascending=True,dropna=False))\ncollege_counts['training needed']=college_counts.index\ncollege_counts.rename(columns={'Job Training':'frequency'}, inplace=True)\nplt.figure(figsize=(8,4))\nplt.barh(y='training needed',width='frequency',data=college_counts,color='rosybrown')\nplt.title(\"Frequencies of training needed for occupations requiring an associates or bachelor's degree\")",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
4ab396071a6f9f0ffc77ea4fa0f81c65940341bf
| 689,363 |
ipynb
|
Jupyter Notebook
|
Training multi-linear classifier with a one layer network.ipynb
|
alexanderbea/Multi-linear-classifier-using-CIFAR-10-and-one-two-k-layer-network
|
e5cf7f43aec665396f438e1f90c0b572bd5677f6
|
[
"MIT"
] | null | null | null |
Training multi-linear classifier with a one layer network.ipynb
|
alexanderbea/Multi-linear-classifier-using-CIFAR-10-and-one-two-k-layer-network
|
e5cf7f43aec665396f438e1f90c0b572bd5677f6
|
[
"MIT"
] | null | null | null |
Training multi-linear classifier with a one layer network.ipynb
|
alexanderbea/Multi-linear-classifier-using-CIFAR-10-and-one-two-k-layer-network
|
e5cf7f43aec665396f438e1f90c0b572bd5677f6
|
[
"MIT"
] | null | null | null | 641.267907 | 116,596 | 0.931296 |
[
[
[
"# Training a multi-linear classifier \n\n*In this assignment I train and test a one-layer network with multiple outputs to classify images from the CIFAR-10 dataset. The network is trained using mini-batch gradient descent applied to a cost function that computes the cross-entropy loss of the classifier on the labelled training data plus an L2 regularization term on the weight matrix.*",
"_____no_output_____"
]
],
[
[
"#@title Installers\n#installers if needed\n#!pip install -U -q PyDrive\n# !pip uninstall scipy\n# !pip install scipy==1.2.0\n# !pip install texttable",
"_____no_output_____"
],
[
"#@title Import libraries\n#Import CIFAR-10 data from my google drive folder; I downoaded and unzipped the CIRAR-10 files and uploaded them to my drive\nfrom pydrive.auth import GoogleAuth\nfrom pydrive.drive import GoogleDrive\nfrom google.colab import auth\nimport pandas\nimport numpy\nfrom texttable import Texttable\nfrom sklearn.preprocessing import StandardScaler\nfrom oauth2client.client import GoogleCredentials\n# Authenticate and create the PyDrive client.\nauth.authenticate_user()\ngauth = GoogleAuth()\ngauth.credentials = GoogleCredentials.get_application_default()\ndrive = GoogleDrive(gauth)\n\nfrom PIL import Image\nimport pickle\nimport numpy as np\nfrom googleapiclient.discovery import build\ndrive_service = build('drive', 'v3')\n\nimport io\nfrom googleapiclient.http import MediaIoBaseDownload\nimport matplotlib.pyplot as plt\n\nfrom scipy import misc #remove, using PIL instead",
"_____no_output_____"
],
[
"#@title Functions: Decoding and displaying images\ndef unpickle(file):\n dict = pickle.load(file, encoding='bytes')\n return dict\n\ndef unpickle_getFromDrive(file_id):\n filename = GetFromDrive(file_id)\n dict = pickle.load(filename, encoding='bytes')\n return dict \n\ndef loadLabels(file_id):\n data = unpickle_getFromDrive(label_file)\n labels = [x.decode('ascii') for x in data[b'label_names']]\n return labels\n\ndef LoadBatch(file_id):\n filename = GetFromDrive(file_id)\n dataset = unpickle(filename)\n dataSamples = dataset[b'data'] / 255\n labels = dataset[b'labels']\n y = labels\n label_count = np.max(labels)\n X = dataSamples\n Y = np.array([[0 if labels[i] != j else 1 for j in range(label_count + 1)] for i in range(len(labels))])\n return X, Y, y\n\ndef GetFromDrive(file_id): \n request = drive_service.files().get_media(fileId=file_id)\n downloaded = io.BytesIO()\n downloader = MediaIoBaseDownload(downloaded, request)\n done = False\n while done is False:\n _, done = downloader.next_chunk()\n downloaded.seek(0)\n return downloaded\n\ndef plot(tr_loss, val_loss, tr_accuracy, val_accuracy):\n plt.subplot(1,2,1)\n plt.plot(tr_loss, 'g-', label='training loss')\n plt.plot(val_loss, 'r-', label='validation loss')\n plt.title('Cost function')\n plt.xlabel('epoch')\n plt.ylabel('cost')\n plt.legend()\n\n plt.subplot(1,2,2)\n plt.plot(tr_accuracy, 'g-', label='training data')\n plt.plot(val_accuracy, 'r-', label='validation data')\n plt.title('Accuracy')\n plt.xlabel('epoch')\n plt.ylabel('accuracy')\n plt.legend()\n\n plt.show()\n\ndef image(img, label=''):\n sq_img = np.rot90(np.reshape(img, (32, 32, 3), order='F'), k=3)\n plt.imshow(sq_img, interpolation='gaussian')\n plt.axis('off')\n plt.title(label)\n\ndef showImageFromWeightsWithLabels(W, labels):\n for i, row in enumerate(W):\n img = (row - row.min()) / (row.max() - row.min())\n plt.subplot(2, 5, i+1)\n image(img, label=labels[i])\n plt.show()",
"_____no_output_____"
]
],
[
[
"EXERCISE 1. PART 1.\n\n*Read in and store the training, validation and test data*",
"_____no_output_____"
]
],
[
[
"#@title Code: Load training-, validation- and test- data\n#these strings are my file ids from my Drive\n#(you need to exchange these with your own ids)\ndata_batch_1 = '1'\ndata_batch_2 = '2'\ndata_batch_3 = '3'\ndata_batch_4 = '4'\ndata_batch_5 = '5'\ntest_batch = '6'\nlabel_file = '7'\n\n# Read in and store the training, validation and test data \nlabels = loadLabels(label_file)\nX_train, Y_train, y_train = LoadBatch(data_batch_1) \nX_val, Y_val, y_val = LoadBatch(data_batch_2) \nX_test, Y_test, y_test = LoadBatch(test_batch) \nimage(X_train[1])",
"_____no_output_____"
]
],
[
[
"EXERCISE 1. PART 2.\n\n*Transform training data to have zero mean*",
"_____no_output_____"
]
],
[
[
"#@title Functions: Normalize data\ndef getNormalized(X):\n m = np.mean(X, axis = 0)\n return (X - m, m)",
"_____no_output_____"
],
[
"#@title Code: Normalize data\nX_train, normalMeans = getNormalized(X_train)\nX_test -= normalMeans \nX_val -= normalMeans \nimage(X_train[1])\n\nprint(\"X_train mean: \" + str(np.mean(X_train)))\nprint(\"X_val mean: \" + str(np.mean(X_val)))\nprint(\"X_test mean: \" + str(np.mean(X_test)))",
"Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\n"
]
],
[
[
"EXERCISE 1. PART 3.\n\n*Initialize the model parameters W and b with entries drawn from a Gaussian with zero mean and standard deviation 0.01*",
"_____no_output_____"
]
],
[
[
"mean = 0.0\ns = 0.01\nd = X_train.shape[1] \nK = Y_train.shape[1] \nW = np.random.normal(mean, s, (K, d)) # Weight matrix; Normal (Gaussian) distribution\nb = np.random.normal(mean, s, (K, 1)) # Bias vector; Normal (Gaussian) distribution",
"_____no_output_____"
]
],
[
[
"EXERCISE 1. PART 4.\n\n*Function that evaluates the network function*",
"_____no_output_____"
]
],
[
[
"#@title Functions: EvaluateClassifier and Softmax\n#Each batch contains a dict\n#with data of shape 10000x3072 (RGB arrays of 32x32x3),\n#and labels of size 10000 with values in range 0-9, i.e. 10 labels\n\ndef EvaluateClassifier(X, W, b): \n P = softmax(np.dot(W, X.T) + b)\n return P\n\ndef softmax(s):\n return np.exp(s) / np.sum(np.exp(s), axis=0)",
"_____no_output_____"
],
[
"P = EvaluateClassifier(X_train[:100], W, b) #Check subset\nnp.sum(P, axis = 0) # Check if the sums for each sample sum up to 1",
"_____no_output_____"
]
],
[
[
"EXERCISE 1. PART 5.\n\n*Function that computes the cost function*",
"_____no_output_____"
]
],
[
[
"#@title Functions: Compute Cost and Cross Entropy Loss \ndef CrossEntropyLoss(X, Y, W, b):\n log_X = np.multiply(Y.T , EvaluateClassifier(X,W,b)).sum(axis=0)\n log_X[log_X == 0] = np.finfo(float).eps\n return -np.log(log_X)\n \ndef ComputeCost(X, Y, W, b, lamda, scale_const = 1e+6):\n return np.mean(scale_const * CrossEntropyLoss(X, Y, W, b)) / scale_const \\\n + lamda * np.sum(scale_const * np.power(W, 2)) / scale_const\n",
"_____no_output_____"
],
[
"J = ComputeCost(X_train, Y_train, W, b, lamda = 0)\nprint(\"Loss from Cost Function: \" + str(J))",
"Loss from Cost Function: 2.32091356491174\n"
]
],
[
[
"EXERCISE 1. PART 6.\n\n*Function that computes the accuracy*",
"_____no_output_____"
]
],
[
[
"#@title Functions: Compute Accuracy\ndef ComputeAccuracy(X, y, W, b): \n predictions = np.argmax(EvaluateClassifier(X,W,b) , axis = 0)\n accuracy = (predictions == y).mean()\n return accuracy ",
"_____no_output_____"
],
[
"acc = ComputeAccuracy(X_train, y_train, W, b) \nprint(\"Check accuracy: \" + str(acc))",
"Check accuracy: 0.0898\n"
]
],
[
[
"EXERCISE 1. PART 7.\n\n*Function that evaluates, for a mini-batch, the gradients of the cost function w.r.t. W and b*",
"_____no_output_____"
]
],
[
[
"#@title Functions: Compute gradients and display differences between methods\n# Check Check analytic gradient computations against numerical estimations of the gradients!\nclass FFNet(): #Feed Forward Neural Network, Single Layer \n def __init__(self, d, K, mean, s):\n self.d = d\n self.K = K\n self.W = np.random.normal(mean, s, (K, d)) \n self.b = np.random.normal(mean, s, (K, 1)) \n\n def computeGradsNum(self, X, Y, lamda, h = 1e-8): #finite difference method = Faster but less accurate calculation of the gradients\n # return (grad_W, grad_b)\n P = EvaluateClassifier(X, self.W, self.b)\n \"\"\" Converted from matlab code \"\"\"\n no \t= \tself.W.shape[0]\n d \t= \tX.shape[0]\n\n grad_W = np.zeros(self.W.shape);\n grad_b = np.zeros((no, 1));\n\n c = ComputeCost(X, Y, self.W, self.b, lamda);\n \n for i in range(len(self.b)):\n b_try = np.array(self.b)\n b_try[i] += h\n c2 = ComputeCost(X, Y, self.W, b_try, lamda)\n grad_b[i] = (c2-c) / h\n\n for i in range(self.W.shape[0]):\n for j in range(self.W.shape[1]):\n W_try = np.array(self.W)\n W_try[i,j] += h\n c2 = ComputeCost(X, Y, W_try, self.b, lamda)\n grad_W[i,j] = (c2-c) / h\n return [grad_W, grad_b] \n\n def computeGradsNumSlow(self, X, Y, lamda, h = 1e-8): #Centered difference formula = More exact calculation of the gradients but slower\n \"\"\" Converted from matlab code \"\"\"\n no \t= \tself.W.shape[0]\n d \t= \tX.shape[0]\n\n grad_W = np.zeros(self.W.shape);\n grad_b = np.zeros((no, 1));\n \n for i in range(len(self.b)):\n b_try = np.array(self.b)\n b_try[i] -= h\n c1 = ComputeCost(X, Y, self.W, b_try, lamda)\n\n b_try = np.array(self.b)\n b_try[i] += h\n c2 = ComputeCost(X, Y, self.W, b_try, lamda)\n\n grad_b[i] = (c2-c1) / (2*h)\n\n for i in range(self.W.shape[0]):\n for j in range(self.W.shape[1]):\n W_try = np.array(self.W)\n W_try[i,j] -= h\n c1 = ComputeCost(X, Y, W_try, self.b, lamda)\n\n W_try = np.array(self.W)\n W_try[i,j] += h\n c2 = ComputeCost(X, Y, W_try, self.b, lamda)\n\n grad_W[i,j] = (c2-c1) 
/ (2*h)\n return [grad_W, grad_b]\n\n def computeAnalyticalGradients(self, X, Y, lamda): #Analytical computation of the gradient\n P = EvaluateClassifier(X, self.W, self.b)\n \n grad_W = np.zeros(self.W.shape)\n grad_b = np.zeros(self.b.shape)\n\n for i in range(X.shape[0]):\n x = X[i].reshape(1,-1)\n g = -(Y[i].reshape(-1,1) - EvaluateClassifier(x, self.W, self.b))\n grad_b += g\n grad_W += g.dot(x)\n grad_W /= X.shape[0]\n grad_W += self.W * 2 * lamda\n grad_b /= X.shape[0]\n return (grad_W, grad_b)\n\ndef relErr(grad1, grad2):\n rel_err = np.abs(grad1 - grad2) / (np.abs(grad1) + np.abs(grad2))\n return rel_err*100*100\n \ndef absErr(grad1, grad2):\n abs_err = np.abs(grad1 - grad2) \n return abs_err*100*100*100\n\ndef compareGradients(lamda, title):\n samples = 100\n FFnet = FFNet(d, K, mean, s)\n\n grad_W1, grad_b1 = FFnet.computeAnalyticalGradients(X_train[:samples, :d], Y_train[:samples], lamda)\n grad_W2, grad_b2 = FFnet.computeGradsNum(X_train[:samples, :d], Y_train[:samples], lamda)\n grad_W3, grad_b3 = FFnet.computeGradsNumSlow(X_train[:samples, :d], Y_train[:samples], lamda)\n\n err = Texttable()\n err_data = [] \n\n # Compare accurate numerical method with analytical estimation of gradient\n err_data.append(['Gradient', 'Method', 'Rel Diff Min [e+04]', 'Rel Diff Max [e+04]', 'Rel Diff Mean [e+04]', 'Abs Diff Max [e+06]', 'Abs Diff Mean [e+06]'])\n\n cdm_err_W = relErr(grad_W1, grad_W3)\n cdm_err_b = relErr(grad_b1, grad_b3)\n cdm_err_W_abs = absErr(grad_W1, grad_W3)\n cdm_err_b_abs = absErr(grad_b1, grad_b3)\n\n fdm_err_W = relErr(grad_W1, grad_W2)\n fdm_err_b = relErr(grad_b1, grad_b2)\n fdm_err_W_abs = absErr(grad_W1, grad_W2)\n fdm_err_b_abs = absErr(grad_b1, grad_b2)\n\n cdm_fdm_err_W= relErr(grad_W2, grad_W3)\n cdm_fdm_err_b= relErr(grad_b2, grad_b3)\n cdm_fdm_err_W_abs = absErr(grad_W2, grad_W3)\n cdm_fdm_err_b_abs = absErr(grad_b2, grad_b3)\n\n err_data.append([\"W\", \"ANL vs CDM\", 
str(np.min(cdm_err_W)),str(np.max(cdm_err_W)),str(np.mean(cdm_err_W)),str(np.max(cdm_err_W_abs)),str(np.mean(cdm_err_W_abs))])\n err_data.append([\"W\", \"ANL vs FDM\", str(np.min(fdm_err_W)),str(np.max(fdm_err_W)),str(np.mean(fdm_err_W)),str(np.max(fdm_err_W_abs)),str(np.mean(fdm_err_W_abs))])\n err_data.append([\"W\", \"CDM vs FDM\", str(np.min(cdm_fdm_err_W)),str(np.max(cdm_fdm_err_W)),str(np.mean(cdm_fdm_err_W)),str(np.max(cdm_fdm_err_W_abs)),str(np.mean(cdm_fdm_err_W_abs))])\n\n\n err_data.append([\"b\", \"ANL vs CDM\", str(np.min(cdm_err_b)),str(np.max(cdm_err_b)),str(np.mean(cdm_err_b)),str(np.max(cdm_err_b_abs)),str(np.mean(cdm_err_b_abs))])\n err_data.append([\"b\", \"ANL vs FDM\", str(np.min(fdm_err_b)),str(np.max(fdm_err_b)),str(np.mean(fdm_err_b)),str(np.max(fdm_err_b_abs)),str(np.mean(fdm_err_b_abs))])\n err_data.append([\"b\", \"CDM vs FDM\", str(np.min(cdm_fdm_err_b)),str(np.max(cdm_fdm_err_b)),str(np.mean(cdm_fdm_err_b)),str(np.max(cdm_fdm_err_b_abs)),str(np.mean(cdm_fdm_err_b_abs))])\n\n err.add_rows(err_data)\n print(title)\n print(err.draw()) ",
"_____no_output_____"
]
],
[
[
"Analytical (ANL) gradient computation is compared in the following results to the slow but accurate version based on the centered difference equation (CDM), and to the faster but less accurate finite difference method (FDM). The accuracy can be observed in the tables below, which display the relative and absolute differences between the aforementioned methods. Note that the absolute differences are less than 1e-6, so the methods are considered to have produced the same result.",
"_____no_output_____"
]
],
[
[
"compareGradients(lamda=0.0, title=\"Without Regularization i.e. Lambda = 0.0\")",
"Without Regularization i.e. Lambda = 0.0\n+----------+-----------+-----------+----------+----------+----------+----------+\n| Gradient | Method | Rel Diff | Rel Diff | Rel Diff | Abs Diff | Abs Diff |\n| | | Min | Max | Mean | Max | Mean |\n| | | [e+04] | [e+04] | [e+04] | [e+06] | [e+06] |\n+==========+===========+===========+==========+==========+==========+==========+\n| W | ANL vs | 0.000 | 547.345 | 0.104 | 0.081 | 0.016 |\n| | CDM | | | | | |\n+----------+-----------+-----------+----------+----------+----------+----------+\n| W | ANL vs | 0.000 | 1468.733 | 0.317 | 0.158 | 0.046 |\n| | FDM | | | | | |\n+----------+-----------+-----------+----------+----------+----------+----------+\n| W | CDM vs | 0 | 2000 | 0.315 | 0.111 | 0.044 |\n| | FDM | | | | | |\n+----------+-----------+-----------+----------+----------+----------+----------+\n| b | ANL vs | 0.000 | 0.016 | 0.004 | 0.039 | 0.015 |\n| | CDM | | | | | |\n+----------+-----------+-----------+----------+----------+----------+----------+\n| b | ANL vs | 0.002 | 0.040 | 0.015 | 0.106 | 0.062 |\n| | FDM | | | | | |\n+----------+-----------+-----------+----------+----------+----------+----------+\n| b | CDM vs | 0.002 | 0.034 | 0.011 | 0.089 | 0.051 |\n| | FDM | | | | | |\n+----------+-----------+-----------+----------+----------+----------+----------+\n"
],
[
"compareGradients(lamda=1.0, title=\"With Regularization i.e. Lambda = 1.0\")",
"With Regularization i.e. Lambda = 1.0\n+----------+-----------+-----------+----------+----------+----------+----------+\n| Gradient | Method | Rel Diff | Rel Diff | Rel Diff | Abs Diff | Abs Diff |\n| | | Min | Max | Mean | Max | Mean |\n| | | [e+04] | [e+04] | [e+04] | [e+06] | [e+06] |\n+==========+===========+===========+==========+==========+==========+==========+\n| W | ANL vs | 0.000 | 85.277 | 0.051 | 0.125 | 0.027 |\n| | CDM | | | | | |\n+----------+-----------+-----------+----------+----------+----------+----------+\n| W | ANL vs | 0.000 | 185.524 | 0.078 | 0.178 | 0.039 |\n| | FDM | | | | | |\n+----------+-----------+-----------+----------+----------+----------+----------+\n| W | CDM vs | 0 | 204.082 | 0.047 | 0.089 | 0.021 |\n| | FDM | | | | | |\n+----------+-----------+-----------+----------+----------+----------+----------+\n| b | ANL vs | 0.001 | 0.026 | 0.006 | 0.054 | 0.022 |\n| | CDM | | | | | |\n+----------+-----------+-----------+----------+----------+----------+----------+\n| b | ANL vs | 0.001 | 0.037 | 0.009 | 0.080 | 0.031 |\n| | FDM | | | | | |\n+----------+-----------+-----------+----------+----------+----------+----------+\n| b | CDM vs | 0 | 0.021 | 0.003 | 0.044 | 0.009 |\n| | FDM | | | | | |\n+----------+-----------+-----------+----------+----------+----------+----------+\n"
]
],
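A common alternative to the `relErr` used above puts the maximum of the combined magnitudes (plus a small epsilon) in the denominator, which avoids division by zero where both gradients vanish. A sketch of that check, under the usual rule of thumb that values below roughly 1e-6 indicate a match:

```python
import numpy as np

def grad_rel_error(g1, g2, eps=1e-9):
    # Element-wise relative error between two gradient arrays; the eps
    # floor keeps entries where both gradients are zero from dividing by 0.
    num = np.abs(g1 - g2)
    den = np.maximum(np.abs(g1) + np.abs(g2), eps)
    return np.max(num / den)

# A tiny perturbation should register as a tiny relative error.
a = np.array([1.0, 2.0, 3.0])
b = a + 1e-8
err = grad_rel_error(a, b)
```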
[
[
"EXERCISE 1. PART 8.\n\n*Function that performs the mini-batch gradient descent algorithm to learn the network's parameters*\n\nAs the results below show, the cost decreases and the accuracy increases with each epoch after the first.\n\nLearning rate: The same results also show that when the learning rate (eta) is too large, training becomes unstable. This can be observed in the first figure, where eta=0.1.\n\nRegularization: With regularization, the gap in accuracy between the training data and the validation data is narrower than without it. However, without regularization the overall accuracy is higher. Ideally the gap should not be too wide, since a wide gap can indicate overfitting on the training data.",
"_____no_output_____"
]
],
[
[
"#@title Function: Mini-batch gradient descent\nclass FFNet_mbGD(FFNet):\n def miniBatchGD(self, X, Y, n_batch, eta, n_epochs , lamda, X_val = None, Y_val = None):\n results = ([],[],[],[])\n miniBatchNo = X.shape[0] // n_batch\n \n results[0].append(ComputeCost(X, Y,self.W, self.b, lamda))\n results[1].append(ComputeCost(X_val, Y_val,self.W, self.b, lamda))\n results[2].append(ComputeAccuracy(X, np.argmax(Y.T, axis = 0),self.W, self.b))\n results[3].append(ComputeAccuracy(X_val, np.argmax(Y_val.T, axis = 0),self.W, self.b))\n \n for i in range(n_epochs):\n for j in range(miniBatchNo):\n if(j >= miniBatchNo - 1):\n Xbatch = X[j * n_batch:]\n Ybatch = Y[j * n_batch:]\n else:\n j_start = j * n_batch\n j_end = j_start + n_batch\n Xbatch = X[j_start:j_end]\n Ybatch = Y[j_start:j_end]\n grad_W, grad_b = self.computeAnalyticalGradients(Xbatch, Ybatch,lamda)\n self.W -= eta * grad_W\n self.b -= eta * grad_b\n\n results[0].append(ComputeCost(X, Y, self.W, self.b, lamda)) \n results[1].append(ComputeCost(X_val, Y_val,self.W, self.b, lamda))\n results[2].append(ComputeAccuracy(X, np.argmax(Y.T, axis = 0),self.W, self.b))\n results[3].append(ComputeAccuracy(X_val, np.argmax(Y_val.T, axis = 0),self.W, self.b))\n return results",
"_____no_output_____"
],
[
"#@title Code: Run mini-batch gradient descent with difference parameters\n# Train for the following parameters\nlambdas = [0, 0, .1, 1]\netas = [.1, .001, .001, .001] \nn_batch = 100\nn_epochs = 40\nnp.random.seed(400) #400 specified in the assignment\n\nt = Texttable()\ndata = [] \ndata.append(['Parameters', 'Train Accuracy', 'Val Accuracy', 'Test Accuracy'])\n\nfor x in range(0, len(lambdas)):\n nm = FFNet_mbGD(d = X_train.shape[1], K = Y_train.shape[1], mean = 0.0, s = 0.01)\n tr_loss, val_loss, tr_accuracy, val_accuracy = nm.miniBatchGD(\n X_train, Y_train, \n n_batch, etas[x], n_epochs, lambdas[x],\n X_val = X_val, Y_val = Y_val)\n saveFortbl = \"lambda=\"+str(lambdas[x])+\", n epochs=\"+str(n_epochs)+\", n batch=\"+str(n_batch)+\", eta=\"+str(etas[x])+\"\"\n print(\"****************************************\")\n print(\"lambda=\"+str(lambdas[x])+\", n epochs=\"+str(n_epochs)+\", n batch=\"+str(n_batch)+\", eta=\"+str(etas[x])+\"\")\n print(\"****************************************\")\n data.append([saveFortbl,str(tr_accuracy[-1]), str(val_accuracy[-1]),str(ComputeAccuracy(X_test, y_test, nm.W, nm.b))])\n plot(tr_loss, val_loss, tr_accuracy, val_accuracy)\n showImageFromWeightsWithLabels(nm.W, labels)\nt.add_rows(data)\nprint(t.draw())\nprint(\" \")\n",
"****************************************\nlambda=0, n epochs=40, n batch=100, eta=0.1\n****************************************\n"
]
]
] |
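Stripped of the bookkeeping, the mini-batch loop in `miniBatchGD` reduces to the skeleton below: slice the training set into fixed-size batches and take one gradient step per batch. This sketch uses a synthetic linear-regression problem (hypothetical data, not the CIFAR-10 setup) so that it stays self-contained:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))            # 20 samples, 3 features (synthetic)
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                          # noiseless linear targets

w = np.zeros(3)
n_batch, eta, n_epochs = 5, 0.1, 100
for epoch in range(n_epochs):
    for j in range(X.shape[0] // n_batch):
        Xb = X[j * n_batch:(j + 1) * n_batch]   # one mini-batch
        yb = y[j * n_batch:(j + 1) * n_batch]
        # Gradient of the mean squared error for a linear model.
        grad = 2.0 * Xb.T @ (Xb @ w - yb) / n_batch
        w -= eta * grad                          # one step per batch

mse = np.mean((X @ w - y) ** 2)
```

The same pattern generalizes to the classifier above by swapping in the cross-entropy gradient and the (W, b) parameters.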
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
]
] |
4ab396653942305cebb901d05f966b7002d7fc38
| 34,064 |
ipynb
|
Jupyter Notebook
|
20-descriptor/bulkfood/chapter20code.ipynb
|
kalpak92/Fluent-Python
|
f5691c8436a32d11e1d62262ad49639f059a9546
|
[
"MIT"
] | 86 |
2016-05-16T17:25:05.000Z
|
2022-03-26T22:21:19.000Z
|
20-descriptor/bulkfood/chapter20code.ipynb
|
dan-osull/fluent-python-notebooks
|
b71e738e2b816c962369b6d5c1ecb62065a65454
|
[
"MIT"
] | 3 |
2016-05-31T23:10:26.000Z
|
2017-01-22T17:38:31.000Z
|
20-descriptor/bulkfood/chapter20code.ipynb
|
dan-osull/fluent-python-notebooks
|
b71e738e2b816c962369b6d5c1ecb62065a65454
|
[
"MIT"
] | 69 |
2016-05-31T23:11:21.000Z
|
2021-08-19T10:15:10.000Z
| 33.726733 | 98 | 0.498356 |
[
[
[
"\"\"\"\nOverriding descriptor (a.k.a. data descriptor or enforced descriptor):\n\n# BEGIN DESCR_KINDS_DEMO1\n\n >>> obj = Managed() # <1>\n >>> obj.over # <2>\n -> Overriding.__get__(<Overriding object>, <Managed object>, <class Managed>)\n >>> Managed.over # <3>\n -> Overriding.__get__(<Overriding object>, None, <class Managed>)\n >>> obj.over = 7 # <4>\n -> Overriding.__set__(<Overriding object>, <Managed object>, 7)\n >>> obj.over # <5>\n -> Overriding.__get__(<Overriding object>, <Managed object>, <class Managed>)\n >>> obj.__dict__['over'] = 8 # <6>\n >>> vars(obj) # <7>\n {'over': 8}\n >>> obj.over # <8>\n -> Overriding.__get__(<Overriding object>, <Managed object>, <class Managed>)\n\n# END DESCR_KINDS_DEMO1\n\nOverriding descriptor without ``__get__``:\n\n(these tests are reproduced below without +ELLIPSIS directives for inclusion in the book;\nlook for DESCR_KINDS_DEMO2)\n\n >>> obj.over_no_get # doctest: +ELLIPSIS\n <descriptorkinds.OverridingNoGet object at 0x...>\n >>> Managed.over_no_get # doctest: +ELLIPSIS\n <descriptorkinds.OverridingNoGet object at 0x...>\n >>> obj.over_no_get = 7\n -> OverridingNoGet.__set__(<OverridingNoGet object>, <Managed object>, 7)\n >>> obj.over_no_get # doctest: +ELLIPSIS\n <descriptorkinds.OverridingNoGet object at 0x...>\n >>> obj.__dict__['over_no_get'] = 9\n >>> obj.over_no_get\n 9\n >>> obj.over_no_get = 7\n -> OverridingNoGet.__set__(<OverridingNoGet object>, <Managed object>, 7)\n >>> obj.over_no_get\n 9\n\nNon-overriding descriptor (a.k.a. 
non-data descriptor or shadowable descriptor):\n\n# BEGIN DESCR_KINDS_DEMO3\n\n >>> obj = Managed()\n >>> obj.non_over # <1>\n -> NonOverriding.__get__(<NonOverriding object>, <Managed object>, <class Managed>)\n >>> obj.non_over = 7 # <2>\n >>> obj.non_over # <3>\n 7\n >>> Managed.non_over # <4>\n -> NonOverriding.__get__(<NonOverriding object>, None, <class Managed>)\n >>> del obj.non_over # <5>\n >>> obj.non_over # <6>\n -> NonOverriding.__get__(<NonOverriding object>, <Managed object>, <class Managed>)\n\n# END DESCR_KINDS_DEMO3\n\nNo descriptor type survives being overwritten on the class itself:\n\n# BEGIN DESCR_KINDS_DEMO4\n\n >>> obj = Managed() # <1>\n >>> Managed.over = 1 # <2>\n >>> Managed.over_no_get = 2\n >>> Managed.non_over = 3\n >>> obj.over, obj.over_no_get, obj.non_over # <3>\n (1, 2, 3)\n\n# END DESCR_KINDS_DEMO4\n\nMethods are non-overriding descriptors:\n\n >>> obj.spam # doctest: +ELLIPSIS\n <bound method Managed.spam of <descriptorkinds.Managed object at 0x...>>\n >>> Managed.spam # doctest: +ELLIPSIS\n <function Managed.spam at 0x...>\n >>> obj.spam()\n -> Managed.spam(<Managed object>)\n >>> Managed.spam()\n Traceback (most recent call last):\n ...\n TypeError: spam() missing 1 required positional argument: 'self'\n >>> Managed.spam(obj)\n -> Managed.spam(<Managed object>)\n >>> Managed.spam.__get__(obj) # doctest: +ELLIPSIS\n <bound method Managed.spam of <descriptorkinds.Managed object at 0x...>>\n >>> obj.spam.__func__ is Managed.spam\n True\n >>> obj.spam = 7\n >>> obj.spam\n 7\n\n\n\"\"\"\n\n\"\"\"\nNOTE: These tests are here because I can't add callouts after +ELLIPSIS\ndirectives and if doctest runs them without +ELLIPSIS I get test failures.\n\n# BEGIN DESCR_KINDS_DEMO2\n\n >>> obj.over_no_get # <1>\n <__main__.OverridingNoGet object at 0x665bcc>\n >>> Managed.over_no_get # <2>\n <__main__.OverridingNoGet object at 0x665bcc>\n >>> obj.over_no_get = 7 # <3>\n -> OverridingNoGet.__set__(<OverridingNoGet object>, <Managed object>, 
7)\n >>> obj.over_no_get # <4>\n <__main__.OverridingNoGet object at 0x665bcc>\n >>> obj.__dict__['over_no_get'] = 9 # <5>\n >>> obj.over_no_get # <6>\n 9\n >>> obj.over_no_get = 7 # <7>\n -> OverridingNoGet.__set__(<OverridingNoGet object>, <Managed object>, 7)\n >>> obj.over_no_get # <8>\n 9\n\n# END DESCR_KINDS_DEMO2\n\nMethods are non-overriding descriptors:\n\n# BEGIN DESCR_KINDS_DEMO5\n\n >>> obj = Managed()\n >>> obj.spam # <1>\n <bound method Managed.spam of <descriptorkinds.Managed object at 0x74c80c>>\n >>> Managed.spam # <2>\n <function Managed.spam at 0x734734>\n >>> obj.spam = 7 # <3>\n >>> obj.spam\n 7\n\n# END DESCR_KINDS_DEMO5\n\n\"\"\"\n\n# BEGIN DESCR_KINDS\n\n### auxiliary functions for display only ###\n\ndef cls_name(obj_or_cls):\n cls = type(obj_or_cls)\n if cls is type:\n cls = obj_or_cls\n return cls.__name__.split('.')[-1]\n\ndef display(obj):\n cls = type(obj)\n if cls is type:\n return '<class {}>'.format(obj.__name__)\n elif cls in [type(None), int]:\n return repr(obj)\n else:\n return '<{} object>'.format(cls_name(obj))\n\ndef print_args(name, *args):\n pseudo_args = ', '.join(display(x) for x in args)\n print('-> {}.__{}__({})'.format(cls_name(args[0]), name, pseudo_args))\n\n\n### essential classes for this example ###\n\nclass Overriding: # <1>\n \"\"\"a.k.a. data descriptor or enforced descriptor\"\"\"\n\n def __get__(self, instance, owner):\n print_args('get', self, instance, owner) # <2>\n\n def __set__(self, instance, value):\n print_args('set', self, instance, value)\n\n\nclass OverridingNoGet: # <3>\n \"\"\"an overriding descriptor without ``__get__``\"\"\"\n\n def __set__(self, instance, value):\n print_args('set', self, instance, value)\n\n\nclass NonOverriding: # <4>\n \"\"\"a.k.a. 
non-data or shadowable descriptor\"\"\"\n\n def __get__(self, instance, owner):\n print_args('get', self, instance, owner)\n\n\nclass Managed: # <5>\n over = Overriding()\n over_no_get = OverridingNoGet()\n non_over = NonOverriding()\n\n def spam(self): # <6>\n print('-> Managed.spam({})'.format(display(self)))\n\n# END DESCR_KINDS",
"_____no_output_____"
],
[
"\"\"\"\nOverriding descriptor (a.k.a. data descriptor or enforced descriptor):\n\n >>> obj = Model()\n >>> obj.over # doctest: +ELLIPSIS\n Overriding.__get__() invoked with args:\n self = <descriptorkinds.Overriding object at 0x...>\n instance = <descriptorkinds.Model object at 0x...>\n owner = <class 'descriptorkinds.Model'>\n >>> Model.over # doctest: +ELLIPSIS\n Overriding.__get__() invoked with args:\n self = <descriptorkinds.Overriding object at 0x...>\n instance = None\n owner = <class 'descriptorkinds.Model'>\n\n\nAn overriding descriptor cannot be shadowed by assigning to an instance:\n\n >>> obj = Model()\n >>> obj.over = 7 # doctest: +ELLIPSIS\n Overriding.__set__() invoked with args:\n self = <descriptorkinds.Overriding object at 0x...>\n instance = <descriptorkinds.Model object at 0x...>\n value = 7\n >>> obj.over # doctest: +ELLIPSIS\n Overriding.__get__() invoked with args:\n self = <descriptorkinds.Overriding object at 0x...>\n instance = <descriptorkinds.Model object at 0x...>\n owner = <class 'descriptorkinds.Model'>\n\n\nNot even by poking the attribute into the instance ``__dict__``:\n\n >>> obj.__dict__['over'] = 8\n >>> obj.over # doctest: +ELLIPSIS\n Overriding.__get__() invoked with args:\n self = <descriptorkinds.Overriding object at 0x...>\n instance = <descriptorkinds.Model object at 0x...>\n owner = <class 'descriptorkinds.Model'>\n >>> vars(obj)\n {'over': 8}\n\nOverriding descriptor without ``__get__``:\n\n >>> obj.over_no_get # doctest: +ELLIPSIS\n <descriptorkinds.OverridingNoGet object at 0x...>\n >>> Model.over_no_get # doctest: +ELLIPSIS\n <descriptorkinds.OverridingNoGet object at 0x...>\n >>> obj.over_no_get = 7 # doctest: +ELLIPSIS\n OverridingNoGet.__set__() invoked with args:\n self = <descriptorkinds.OverridingNoGet object at 0x...>\n instance = <descriptorkinds.Model object at 0x...>\n value = 7\n >>> obj.over_no_get # doctest: +ELLIPSIS\n <descriptorkinds.OverridingNoGet object at 0x...>\n\n\nPoking the attribute into the 
instance ``__dict__`` means you can read the new\nvalue for the attribute, but setting it still triggers ``__set__``:\n\n >>> obj.__dict__['over_no_get'] = 9\n >>> obj.over_no_get\n 9\n >>> obj.over_no_get = 7 # doctest: +ELLIPSIS\n OverridingNoGet.__set__() invoked with args:\n self = <descriptorkinds.OverridingNoGet object at 0x...>\n instance = <descriptorkinds.Model object at 0x...>\n value = 7\n >>> obj.over_no_get\n 9\n\n\nNon-overriding descriptor (a.k.a. non-data descriptor or shadowable descriptor):\n\n >>> obj = Model()\n >>> obj.non_over # doctest: +ELLIPSIS\n NonOverriding.__get__() invoked with args:\n self = <descriptorkinds.NonOverriding object at 0x...>\n instance = <descriptorkinds.Model object at 0x...>\n owner = <class 'descriptorkinds.Model'>\n >>> Model.non_over # doctest: +ELLIPSIS\n NonOverriding.__get__() invoked with args:\n self = <descriptorkinds.NonOverriding object at 0x...>\n instance = None\n owner = <class 'descriptorkinds.Model'>\n\n\nA non-overriding descriptor can be shadowed by assigning to an instance:\n\n >>> obj.non_over = 7\n >>> obj.non_over\n 7\n\n\nMethods are non-over descriptors:\n\n >>> obj.spam # doctest: +ELLIPSIS\n <bound method Model.spam of <descriptorkinds.Model object at 0x...>>\n >>> Model.spam # doctest: +ELLIPSIS\n <function Model.spam at 0x...>\n >>> obj.spam() # doctest: +ELLIPSIS\n Model.spam() invoked with arg:\n self = <descriptorkinds.Model object at 0x...>\n >>> obj.spam = 7\n >>> obj.spam\n 7\n\n\nNo descriptor type survives being overwritten on the class itself:\n\n >>> Model.over = 1\n >>> obj.over\n 1\n >>> Model.over_no_get = 2\n >>> obj.over_no_get\n 2\n >>> Model.non_over = 3\n >>> obj.non_over\n 7\n\n\"\"\"\n\n# BEGIN DESCRIPTORKINDS\ndef print_args(name, *args): # <1>\n cls_name = args[0].__class__.__name__\n arg_names = ['self', 'instance', 'owner']\n if name == 'set':\n arg_names[-1] = 'value'\n print('{}.__{}__() invoked with args:'.format(cls_name, name))\n for arg_name, value in 
zip(arg_names, args):\n print(' {:8} = {}'.format(arg_name, value))\n\n\nclass Overriding: # <2>\n \"\"\"a.k.a. data descriptor or enforced descriptor\"\"\"\n\n def __get__(self, instance, owner):\n print_args('get', self, instance, owner) # <3>\n\n def __set__(self, instance, value):\n print_args('set', self, instance, value)\n\n\nclass OverridingNoGet: # <4>\n \"\"\"an overriding descriptor without ``__get__``\"\"\"\n\n def __set__(self, instance, value):\n print_args('set', self, instance, value)\n\n\nclass NonOverriding: # <5>\n \"\"\"a.k.a. non-data or shadowable descriptor\"\"\"\n\n def __get__(self, instance, owner):\n print_args('get', self, instance, owner)\n\n\nclass Model: # <6>\n over = Overriding()\n over_no_get = OverridingNoGet()\n non_over = NonOverriding()\n\n def spam(self): # <7>\n print('Model.spam() invoked with arg:')\n print(' self =', self)\n\n#END DESCRIPTORKINDS\n",
"_____no_output_____"
],
[
"\"\"\"\n# BEGIN FUNC_DESCRIPTOR_DEMO\n\n >>> word = Text('forward')\n >>> word # <1>\n Text('forward')\n >>> word.reverse() # <2>\n Text('drawrof')\n >>> Text.reverse(Text('backward')) # <3>\n Text('drawkcab')\n >>> type(Text.reverse), type(word.reverse) # <4>\n (<class 'function'>, <class 'method'>)\n >>> list(map(Text.reverse, ['repaid', (10, 20, 30), Text('stressed')])) # <5>\n ['diaper', (30, 20, 10), Text('desserts')]\n >>> Text.reverse.__get__(word) # <6>\n <bound method Text.reverse of Text('forward')>\n >>> Text.reverse.__get__(None, Text) # <7>\n <function Text.reverse at 0x101244e18>\n >>> word.reverse # <8>\n <bound method Text.reverse of Text('forward')>\n >>> word.reverse.__self__ # <9>\n Text('forward')\n >>> word.reverse.__func__ is Text.reverse # <10>\n True\n\n# END FUNC_DESCRIPTOR_DEMO\n\"\"\"\n\n# BEGIN FUNC_DESCRIPTOR_EX\nimport collections\n\n\nclass Text(collections.UserString):\n\n def __repr__(self):\n return 'Text({!r})'.format(self.data)\n\n def reverse(self):\n return self[::-1]\n\n# END FUNC_DESCRIPTOR_EX\n",
"_____no_output_____"
],
[
"# %load ./bulkfood/bulkfood_v3.py\n\"\"\"\n\nA line item for a bulk food order has description, weight and price fields::\n\n >>> raisins = LineItem('Golden raisins', 10, 6.95)\n >>> raisins.weight, raisins.description, raisins.price\n (10, 'Golden raisins', 6.95)\n\nA ``subtotal`` method gives the total price for that line item::\n\n >>> raisins.subtotal()\n 69.5\n\nThe weight of a ``LineItem`` must be greater than 0::\n\n >>> raisins.weight = -20\n Traceback (most recent call last):\n ...\n ValueError: value must be > 0\n\nNegative or 0 price is not acceptable either::\n\n >>> truffle = LineItem('White truffle', 100, 0)\n Traceback (most recent call last):\n ...\n ValueError: value must be > 0\n\n\nNo change was made::\n\n >>> raisins.weight\n 10\n\n\"\"\"\n\n\n# BEGIN LINEITEM_V3\nclass Quantity: # <1>\n\n def __init__(self, storage_name):\n self.storage_name = storage_name # <2>\n\n def __set__(self, instance, value): # <3>\n if value > 0:\n instance.__dict__[self.storage_name] = value # <4>\n else:\n raise ValueError('value must be > 0')\n\n\nclass LineItem:\n weight = Quantity('weight') # <5>\n price = Quantity('price') # <6>\n\n def __init__(self, description, weight, price): # <7>\n self.description = description\n self.weight = weight\n self.price = price\n\n def subtotal(self):\n return self.weight * self.price\n# END LINEITEM_V3\n",
"_____no_output_____"
],
[
"# %load ./bulkfood/bulkfood_v4.py\n\"\"\"\n\nA line item for a bulk food order has description, weight and price fields::\n\n >>> raisins = LineItem('Golden raisins', 10, 6.95)\n >>> raisins.weight, raisins.description, raisins.price\n (10, 'Golden raisins', 6.95)\n\nA ``subtotal`` method gives the total price for that line item::\n\n >>> raisins.subtotal()\n 69.5\n\nThe weight of a ``LineItem`` must be greater than 0::\n\n >>> raisins.weight = -20\n Traceback (most recent call last):\n ...\n ValueError: value must be > 0\n\nNo change was made::\n\n >>> raisins.weight\n 10\n\nThe value of the attributes managed by the descriptors are stored in\nalternate attributes, created by the descriptors in each ``LineItem``\ninstance::\n\n >>> raisins = LineItem('Golden raisins', 10, 6.95)\n >>> dir(raisins) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE\n ['_Quantity#0', '_Quantity#1', '__class__', ...\n 'description', 'price', 'subtotal', 'weight']\n >>> getattr(raisins, '_Quantity#0')\n 10\n >>> getattr(raisins, '_Quantity#1')\n 6.95\n\n\"\"\"\n\n\n# BEGIN LINEITEM_V4\nclass Quantity:\n __counter = 0 # <1>\n\n def __init__(self):\n cls = self.__class__ # <2>\n prefix = cls.__name__\n index = cls.__counter\n self.storage_name = '_{}#{}'.format(prefix, index) # <3>\n cls.__counter += 1 # <4>\n\n def __get__(self, instance, owner): # <5>\n return getattr(instance, self.storage_name) # <6>\n\n def __set__(self, instance, value):\n if value > 0:\n setattr(instance, self.storage_name, value) # <7>\n else:\n raise ValueError('value must be > 0')\n\n\nclass LineItem:\n weight = Quantity() # <8>\n price = Quantity()\n\n def __init__(self, description, weight, price):\n self.description = description\n self.weight = weight\n self.price = price\n\n def subtotal(self):\n return self.weight * self.price\n# END LINEITEM_V4\n",
"_____no_output_____"
],
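Since Python 3.6, the counter-based `storage_name` bookkeeping in `Quantity.__init__` can be avoided with the `__set_name__` hook, which the interpreter calls with the attribute name when the class body is executed. A sketch of that alternative (not the book's version shown above):

```python
class Quantity:
    def __set_name__(self, owner, name):
        # Called automatically at class-creation time, so the descriptor
        # learns the attribute name it was assigned to ('weight', 'price').
        self.storage_name = '_' + name

    def __get__(self, instance, owner):
        if instance is None:
            return self  # accessed on the class: return the descriptor
        return getattr(instance, self.storage_name)

    def __set__(self, instance, value):
        if value > 0:
            setattr(instance, self.storage_name, value)
        else:
            raise ValueError('value must be > 0')


class LineItem:
    weight = Quantity()
    price = Quantity()

    def __init__(self, description, weight, price):
        self.description = description
        self.weight = weight
        self.price = price
```

With this version the stored values live in readable attributes such as `_weight` instead of `_Quantity#0`.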
[
"# %load ./bulkfood/bulkfood_v4b.py\n\"\"\"\n\nA line item for a bulk food order has description, weight and price fields::\n\n >>> raisins = LineItem('Golden raisins', 10, 6.95)\n >>> raisins.weight, raisins.description, raisins.price\n (10, 'Golden raisins', 6.95)\n\nA ``subtotal`` method gives the total price for that line item::\n\n >>> raisins.subtotal()\n 69.5\n\nThe weight of a ``LineItem`` must be greater than 0::\n\n >>> raisins.weight = -20\n Traceback (most recent call last):\n ...\n ValueError: value must be > 0\n\nNo change was made::\n\n >>> raisins.weight\n 10\n\nThe value of the attributes managed by the descriptors are stored in\nalternate attributes, created by the descriptors in each ``LineItem``\ninstance::\n\n >>> raisins = LineItem('Golden raisins', 10, 6.95)\n >>> dir(raisins) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE\n ['_Quantity#0', '_Quantity#1', '__class__', ...\n 'description', 'price', 'subtotal', 'weight']\n >>> getattr(raisins, '_Quantity#0')\n 10\n >>> getattr(raisins, '_Quantity#1')\n 6.95\n\nIf the descriptor is accessed in the class, the descriptor object is\nreturned:\n\n >>> LineItem.weight # doctest: +ELLIPSIS\n <bulkfood_v4b.Quantity object at 0x...>\n >>> LineItem.weight.storage_name\n '_Quantity#0'\n\n\"\"\"\n\n\n# BEGIN LINEITEM_V4B\nclass Quantity:\n __counter = 0\n\n def __init__(self):\n cls = self.__class__\n prefix = cls.__name__\n index = cls.__counter\n self.storage_name = '_{}#{}'.format(prefix, index)\n cls.__counter += 1\n\n def __get__(self, instance, owner):\n if instance is None:\n return self # <1>\n else:\n return getattr(instance, self.storage_name) # <2>\n\n def __set__(self, instance, value):\n if value > 0:\n setattr(instance, self.storage_name, value)\n else:\n raise ValueError('value must be > 0')\n# END LINEITEM_V4B\n\n\nclass LineItem:\n weight = Quantity()\n price = Quantity()\n\n def __init__(self, description, weight, price):\n self.description = description\n self.weight = weight\n self.price = 
price\n\n def subtotal(self):\n return self.weight * self.price\n",
"_____no_output_____"
],
[
"# %load ./bulkfood/bulkfood_v4c.py\n\"\"\"\n\nA line item for a bulk food order has description, weight and price fields::\n\n >>> raisins = LineItem('Golden raisins', 10, 6.95)\n >>> raisins.weight, raisins.description, raisins.price\n (10, 'Golden raisins', 6.95)\n\nA ``subtotal`` method gives the total price for that line item::\n\n >>> raisins.subtotal()\n 69.5\n\nThe weight of a ``LineItem`` must be greater than 0::\n\n >>> raisins.weight = -20\n Traceback (most recent call last):\n ...\n ValueError: value must be > 0\n\nNo change was made::\n\n >>> raisins.weight\n 10\n\nThe value of the attributes managed by the descriptors are stored in\nalternate attributes, created by the descriptors in each ``LineItem``\ninstance::\n\n >>> raisins = LineItem('Golden raisins', 10, 6.95)\n >>> dir(raisins) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE\n ['_Quantity#0', '_Quantity#1', '__class__', ...\n 'description', 'price', 'subtotal', 'weight']\n >>> getattr(raisins, '_Quantity#0')\n 10\n >>> getattr(raisins, '_Quantity#1')\n 6.95\n\nIf the descriptor is accessed in the class, the descriptor object is\nreturned:\n\n >>> LineItem.weight # doctest: +ELLIPSIS\n <model_v4c.Quantity object at 0x...>\n >>> LineItem.weight.storage_name\n '_Quantity#0'\n\n\n\"\"\"\n\n# BEGIN LINEITEM_V4C\nimport model_v4c as model # <1>\n\n\nclass LineItem:\n weight = model.Quantity() # <2>\n price = model.Quantity()\n\n def __init__(self, description, weight, price):\n self.description = description\n self.weight = weight\n self.price = price\n\n def subtotal(self):\n return self.weight * self.price\n# END LINEITEM_V4C\n",
"_____no_output_____"
],
[
"# %load ./bulkfood/bulkfood_v4prop.py\n\"\"\"\n\nA line item for a bulk food order has description, weight and price fields::\n\n >>> raisins = LineItem('Golden raisins', 10, 6.95)\n >>> raisins.weight, raisins.description, raisins.price\n (10, 'Golden raisins', 6.95)\n\nA ``subtotal`` method gives the total price for that line item::\n\n >>> raisins.subtotal()\n 69.5\n\nThe weight of a ``LineItem`` must be greater than 0::\n\n >>> raisins.weight = -20\n Traceback (most recent call last):\n ...\n ValueError: value must be > 0\n\nNo change was made::\n\n >>> raisins.weight\n 10\n\nThe value of the attributes managed by the descriptors are stored in\nalternate attributes, created by the descriptors in each ``LineItem``\ninstance::\n\n >>> raisins = LineItem('Golden raisins', 10, 6.95)\n >>> dir(raisins) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE\n [... '_quantity:0', '_quantity:1', 'description',\n 'price', 'subtotal', 'weight']\n >>> getattr(raisins, '_quantity:0')\n 10\n >>> getattr(raisins, '_quantity:1')\n 6.95\n\n\"\"\"\n\n\n# BEGIN LINEITEM_V4_PROP\ndef quantity(): # <1>\n try:\n quantity.counter += 1 # <2>\n except AttributeError:\n quantity.counter = 0 # <3>\n\n storage_name = '_{}:{}'.format('quantity', quantity.counter) # <4>\n\n def qty_getter(instance): # <5>\n return getattr(instance, storage_name)\n\n def qty_setter(instance, value):\n if value > 0:\n setattr(instance, storage_name, value)\n else:\n raise ValueError('value must be > 0')\n\n return property(qty_getter, qty_setter)\n# END LINEITEM_V4_PROP\n\nclass LineItem:\n weight = quantity()\n price = quantity()\n\n def __init__(self, description, weight, price):\n self.description = description\n self.weight = weight\n self.price = price\n\n def subtotal(self):\n return self.weight * self.price\n\n",
"_____no_output_____"
],
[
"# %load ./bulkfood/model_v4c.py\n# BEGIN MODEL_V4\nclass Quantity:\n __counter = 0\n\n def __init__(self):\n cls = self.__class__\n prefix = cls.__name__\n index = cls.__counter\n self.storage_name = '_{}#{}'.format(prefix, index)\n cls.__counter += 1\n\n def __get__(self, instance, owner):\n if instance is None:\n return self\n else:\n return getattr(instance, self.storage_name)\n\n def __set__(self, instance, value):\n if value > 0:\n setattr(instance, self.storage_name, value)\n else:\n raise ValueError('value must be > 0')\n# END MODEL_V4\n",
"_____no_output_____"
],
[
"# %load ./bulkfood/bulkfood_v5.py\n\"\"\"\n\nA line item for a bulk food order has description, weight and price fields::\n\n >>> raisins = LineItem('Golden raisins', 10, 6.95)\n >>> raisins.weight, raisins.description, raisins.price\n (10, 'Golden raisins', 6.95)\n\nA ``subtotal`` method gives the total price for that line item::\n\n >>> raisins.subtotal()\n 69.5\n\nThe weight of a ``LineItem`` must be greater than 0::\n\n >>> raisins.weight = -20\n Traceback (most recent call last):\n ...\n ValueError: value must be > 0\n\nNo change was made::\n\n >>> raisins.weight\n 10\n\nThe value of the attributes managed by the descriptors are stored in\nalternate attributes, created by the descriptors in each ``LineItem``\ninstance::\n\n >>> raisins = LineItem('Golden raisins', 10, 6.95)\n >>> dir(raisins) # doctest: +ELLIPSIS +NORMALIZE_WHITESPACE\n ['_NonBlank#0', '_Quantity#0', '_Quantity#1', '__class__', ...\n 'description', 'price', 'subtotal', 'weight']\n >>> getattr(raisins, '_Quantity#0')\n 10\n >>> getattr(raisins, '_NonBlank#0')\n 'Golden raisins'\n\nIf the descriptor is accessed in the class, the descriptor object is\nreturned:\n\n >>> LineItem.weight # doctest: +ELLIPSIS\n <model_v5.Quantity object at 0x...>\n >>> LineItem.weight.storage_name\n '_Quantity#0'\n\nThe `NonBlank` descriptor prevents empty or blank strings to be used\nfor the description:\n\n >>> br_nuts = LineItem('Brazil Nuts', 10, 34.95)\n >>> br_nuts.description = ' '\n Traceback (most recent call last):\n ...\n ValueError: value cannot be empty or blank\n >>> void = LineItem('', 1, 1)\n Traceback (most recent call last):\n ...\n ValueError: value cannot be empty or blank\n\n\n\"\"\"\n\n# BEGIN LINEITEM_V5\nimport model_v5 as model # <1>\n\n\nclass LineItem:\n description = model.NonBlank() # <2>\n weight = model.Quantity()\n price = model.Quantity()\n\n def __init__(self, description, weight, price):\n self.description = description\n self.weight = weight\n self.price = price\n\n def 
subtotal(self):\n return self.weight * self.price\n# END LINEITEM_V5\n",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ab3979c94bcf25705b4dd960f43ca2153600314
| 40,957 |
ipynb
|
Jupyter Notebook
|
Python_For_DSandAI_2_2_Lists.ipynb
|
ornob39/Python_For_DataScience_AI-IBM-
|
a6b3462d004425c7e80cc3dbdb2aa6c0f0354f23
|
[
"MIT"
] | 1 |
2020-08-12T07:17:45.000Z
|
2020-08-12T07:17:45.000Z
|
Python_For_DSandAI_2_2_Lists.ipynb
|
ornob39/Python_For_DataScience_AI-IBM-
|
a6b3462d004425c7e80cc3dbdb2aa6c0f0354f23
|
[
"MIT"
] | null | null | null |
Python_For_DSandAI_2_2_Lists.ipynb
|
ornob39/Python_For_DataScience_AI-IBM-
|
a6b3462d004425c7e80cc3dbdb2aa6c0f0354f23
|
[
"MIT"
] | null | null | null | 27.469484 | 720 | 0.436629 |
[
[
[
"<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n <a href=\"https://cocl.us/PY0101EN_edx_add_top\">\n <img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/TopAd.png\" width=\"750\" align=\"center\">\n </a>\n</div>",
"_____no_output_____"
],
[
"<a href=\"https://cognitiveclass.ai/\">\n <img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/CCLog.png\" width=\"200\" align=\"center\">\n</a>",
"_____no_output_____"
],
[
"<h1>Lists in Python</h1>",
"_____no_output_____"
],
[
"<p><strong>Welcome!</strong> This notebook will teach you about lists in the Python Programming Language. By the end of this lab, you'll know the basic list operations in Python, including indexing, list operations, and copying/cloning lists.</p> ",
"_____no_output_____"
],
[
"<h2>Table of Contents</h2>\n<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n <ul>\n <li>\n <a href=\"#dataset\">About the Dataset</a>\n </li>\n <li>\n <a href=\"#list\">Lists</a>\n <ul>\n <li><a href=\"#index\">Indexing</a></li>\n <li><a href=\"#content\">List Content</a></li>\n <li><a href=\"#op\">List Operations</a></li>\n <li><a href=\"#co\">Copy and Clone List</a></li>\n </ul>\n </li>\n <li>\n <a href=\"#quiz\">Quiz on Lists</a>\n </li>\n </ul>\n <p>\n Estimated time needed: <strong>15 min</strong>\n </p>\n</div>\n\n<hr>",
"_____no_output_____"
],
[
"<h2 id=\"dataset\">About the Dataset</h2>",
"_____no_output_____"
],
[
"Imagine you received album recommendations from your friends and compiled all of the recommendations into a table, with specific information about each album.\n\nThe table has one row for each album and several columns:\n\n- **artist** - Name of the artist\n- **album** - Name of the album\n- **released_year** - Year the album was released\n- **length_min_sec** - Length of the album (hours, minutes, seconds)\n- **genre** - Genre of the album\n- **music_recording_sales_millions** - Music recording sales (millions in USD) on [SONG://DATABASE](http://www.song-database.com/)\n- **claimed_sales_millions** - Album's claimed sales (millions in USD) on [SONG://DATABASE](http://www.song-database.com/)\n- **date_released** - Date on which the album was released\n- **soundtrack** - Indicates if the album is the movie soundtrack (Y) or (N)\n- **rating_of_friends** - Indicates the rating from your friends from 1 to 10\n<br>\n<br>\n\nThe dataset can be seen below:\n\n<font size=\"1\">\n<table font-size:xx-small style=\"width:100%\">\n <tr>\n <th>Artist</th>\n <th>Album</th> \n <th>Released</th>\n <th>Length</th>\n <th>Genre</th> \n <th>Music recording sales (millions)</th>\n <th>Claimed sales (millions)</th>\n <th>Released</th>\n <th>Soundtrack</th>\n <th>Rating (friends)</th>\n </tr>\n <tr>\n <td>Michael Jackson</td>\n <td>Thriller</td> \n <td>1982</td>\n <td>00:42:19</td>\n <td>Pop, rock, R&B</td>\n <td>46</td>\n <td>65</td>\n <td>30-Nov-82</td>\n <td></td>\n <td>10.0</td>\n </tr>\n <tr>\n <td>AC/DC</td>\n <td>Back in Black</td> \n <td>1980</td>\n <td>00:42:11</td>\n <td>Hard rock</td>\n <td>26.1</td>\n <td>50</td>\n <td>25-Jul-80</td>\n <td></td>\n <td>8.5</td>\n </tr>\n <tr>\n <td>Pink Floyd</td>\n <td>The Dark Side of the Moon</td> \n <td>1973</td>\n <td>00:42:49</td>\n <td>Progressive rock</td>\n <td>24.2</td>\n <td>45</td>\n <td>01-Mar-73</td>\n <td></td>\n <td>9.5</td>\n </tr>\n <tr>\n <td>Whitney Houston</td>\n <td>The Bodyguard</td> \n <td>1992</td>\n <td>00:57:44</td>\n 
<td>Soundtrack/R&B, soul, pop</td>\n <td>26.1</td>\n <td>50</td>\n <td>25-Jul-80</td>\n <td>Y</td>\n <td>7.0</td>\n </tr>\n <tr>\n <td>Meat Loaf</td>\n <td>Bat Out of Hell</td> \n <td>1977</td>\n <td>00:46:33</td>\n <td>Hard rock, progressive rock</td>\n <td>20.6</td>\n <td>43</td>\n <td>21-Oct-77</td>\n <td></td>\n <td>7.0</td>\n </tr>\n <tr>\n <td>Eagles</td>\n <td>Their Greatest Hits (1971-1975)</td> \n <td>1976</td>\n <td>00:43:08</td>\n <td>Rock, soft rock, folk rock</td>\n <td>32.2</td>\n <td>42</td>\n <td>17-Feb-76</td>\n <td></td>\n <td>9.5</td>\n </tr>\n <tr>\n <td>Bee Gees</td>\n <td>Saturday Night Fever</td> \n <td>1977</td>\n <td>1:15:54</td>\n <td>Disco</td>\n <td>20.6</td>\n <td>40</td>\n <td>15-Nov-77</td>\n <td>Y</td>\n <td>9.0</td>\n </tr>\n <tr>\n <td>Fleetwood Mac</td>\n <td>Rumours</td> \n <td>1977</td>\n <td>00:40:01</td>\n <td>Soft rock</td>\n <td>27.9</td>\n <td>40</td>\n <td>04-Feb-77</td>\n <td></td>\n <td>9.5</td>\n </tr>\n</table></font>",
"_____no_output_____"
],
[
"<hr>",
"_____no_output_____"
],
[
"<h2 id=\"list\">Lists</h2>",
"_____no_output_____"
],
[
"<h3 id=\"index\">Indexing</h3>",
"_____no_output_____"
],
[
"We are going to take a look at lists in Python. A list is a sequenced collection of different objects such as integers, strings, and other lists as well. The address of each element within a list is called an <b>index</b>. An index is used to access and refer to items within a list.",
"_____no_output_____"
],
[
"<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/ListsIndex.png\" width=\"1000\" />",
"_____no_output_____"
],
[
" To create a list, type the list within square brackets <b>[ ]</b>, with your content inside the brackets and separated by commas. Let’s try it!",
"_____no_output_____"
]
],
[
[
"# Create a list\n\nL = [\"Michael Jackson\", 10.1, 1982]\nL",
"_____no_output_____"
]
],
[
[
"We can use negative and regular indexing with a list:",
"_____no_output_____"
],
[
"<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/ListsNeg.png\" width=\"1000\" />",
"_____no_output_____"
]
],
[
[
"# Print the elements on each index\n\nprint('the same element using negative and positive indexing:\\n Positive:',L[0],\n'\\n Negative:' , L[-3] )\nprint('the same element using negative and positive indexing:\\n Positive:',L[1],\n'\\n Negative:' , L[-2] )\nprint('the same element using negative and positive indexing:\\n Positive:',L[2],\n'\\n Negative:' , L[-1] )",
"the same element using negative and positive indexing:\n Positive: Michael Jackson \n Negative: Michael Jackson\nthe same element using negative and positive indexing:\n Positive: 10.1 \n Negative: 10.1\nthe same element using negative and positive indexing:\n Positive: 1982 \n Negative: 1982\n"
]
],
[
[
"<h3 id=\"content\">List Content</h3>",
"_____no_output_____"
],
[
"Lists can contain strings, floats, and integers. We can nest other lists, and we can also nest tuples and other data structures. The same indexing conventions apply for nesting: \n",
"_____no_output_____"
]
],
[
[
"# Sample List\n\n[\"Michael Jackson\", 10.1, 1982, [1, 2], (\"A\", 1)]",
"_____no_output_____"
]
],
[
[
"<h3 id=\"op\">List Operations</h3>",
"_____no_output_____"
],
[
" We can also perform slicing in lists. For example, if we want the last two elements, we use the following command:",
"_____no_output_____"
]
],
[
[
"# Sample List\n\nL = [\"Michael Jackson\", 10.1,1982,\"MJ\",1]\nL",
"_____no_output_____"
]
],
[
[
"<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/ListsSlice.png\" width=\"1000\">",
"_____no_output_____"
]
],
[
[
"# List slicing\n\nL[3:5]",
"_____no_output_____"
]
],
[
[
"We can use the method <code>extend</code> to add new elements to the list:",
"_____no_output_____"
]
],
[
[
"# Use extend to add elements to list\n\nL = [ \"Michael Jackson\", 10.2]\nL.extend(['pop', 10])\nL",
"_____no_output_____"
]
],
[
[
"Another similar method is <code>append</code>. If we apply <code>append</code> instead of <code>extend</code>, we add one element to the list:",
"_____no_output_____"
]
],
[
[
"# Use append to add elements to list\n\nL = [ \"Michael Jackson\", 10.2]\nL.append(['pop', 10])\nL",
"_____no_output_____"
]
],
[
[
" Each time we apply a method, the list changes. If we apply <code>extend</code>, the list <code>L</code> is modified by adding two new elements:",
"_____no_output_____"
]
],
[
[
"# Use extend to add elements to list\n\nL = [ \"Michael Jackson\", 10.2]\nL.extend(['pop', 10])\nL",
"_____no_output_____"
]
],
[
[
"If we append the list <code>['a','b']</code> we have one new element consisting of a nested list:",
"_____no_output_____"
]
],
[
[
"# Use append to add elements to list\n\nL.append(['a','b'])\nL",
"_____no_output_____"
]
],
[
[
"As lists are mutable, we can change them. For example, we can change the first element as follows:",
"_____no_output_____"
]
],
[
[
"# Change the element based on the index\n\nA = [\"disco\", 10, 1.2]\nprint('Before change:', A)\nA[0] = 'hard rock'\nprint('After change:', A)",
"Before change: ['disco', 10, 1.2]\nAfter change: ['hard rock', 10, 1.2]\n"
]
],
[
[
" We can also delete an element of a list using the <code>del</code> command:",
"_____no_output_____"
]
],
[
[
"# Delete the element based on the index\n\nprint('Before change:', A)\ndel(A[0])\nprint('After change:', A)",
"Before change: [10, 1.2]\nAfter change: [1.2]\n"
]
],
[
[
"We can convert a string to a list using <code>split</code>. For example, the method <code>split</code> translates every group of characters separated by a space into an element in a list:",
"_____no_output_____"
]
],
[
[
"# Split the string, default is by space\n\n'hard rock'.split()",
"_____no_output_____"
]
],
[
[
"We can use the split function to separate strings on a specific character. We pass the character we would like to split on into the argument, which in this case is a comma. The result is a list, and each element corresponds to a set of characters that have been separated by a comma: ",
"_____no_output_____"
]
],
[
[
"# Split the string by comma\n\n'A,B,C,D'.split(',')",
"_____no_output_____"
]
],
[
[
"<h3 id=\"co\">Copy and Clone List</h3>",
"_____no_output_____"
],
[
"When we set one variable <b>B</b> equal to <b>A</b>, both <b>A</b> and <b>B</b> are referencing the same list in memory:",
"_____no_output_____"
]
],
[
[
"# Copy (copy by reference) the list A\n\nA = [\"hard rock\", 10, 1.2]\nB = A\nprint('A:', A)\nprint('B:', B)",
"A: ['hard rock', 10, 1.2]\nB: ['hard rock', 10, 1.2]\n"
]
],
[
[
"<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/ListsRef.png\" width=\"1000\" align=\"center\">",
"_____no_output_____"
],
[
"Initially, the value of the first element in <b>B</b> is set as hard rock. If we change the first element in <b>A</b> to <b>banana</b>, we get an unexpected side effect. As <b>A</b> and <b>B</b> are referencing the same list, if we change list <b>A</b>, then list <b>B</b> also changes. If we check the first element of <b>B</b> we get banana instead of hard rock:",
"_____no_output_____"
]
],
[
[
"# Examine the copy by reference\n\nprint('B[0]:', B[0])\nA[0] = \"banana\"\nprint('B[0]:', B[0])",
"B[0]: hard rock\nB[0]: banana\n"
]
],
[
[
"This is demonstrated in the following figure: ",
"_____no_output_____"
],
[
"<img src = \"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/ListsRefGif.gif\" width=\"1000\" />",
"_____no_output_____"
],
[
"You can clone list **A** by using the following syntax:",
"_____no_output_____"
]
],
[
[
"# Clone (clone by value) the list A\n\nB = A[:]\nB",
"_____no_output_____"
]
],
[
[
" Variable **B** references a new copy or clone of the original list; this is demonstrated in the following figure:",
"_____no_output_____"
],
[
"<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%202/Images/ListsVal.gif\" width=\"1000\" />",
"_____no_output_____"
],
[
"Now if you change <b>A</b>, <b>B</b> will not change: ",
"_____no_output_____"
]
],
[
[
"print('B[0]:', B[0])\nA[0] = \"hard rock\"\nprint('B[0]:', B[0])",
"B[0]: banana\nB[0]: banana\n"
]
],
[
[
"<h2 id=\"quiz\">Quiz on List</h2>",
"_____no_output_____"
],
[
"Create a list <code>a_list</code>, with the following elements <code>1</code>, <code>hello</code>, <code>[1,2,3]</code> and <code>True</code>. ",
"_____no_output_____"
]
],
[
[
"# Write your code below and press Shift+Enter to execute\na_list=[1, 'hello', [1,2,3],True]\na_list",
"_____no_output_____"
]
],
[
[
"Double-click <b>here</b> for the solution.\n\n<!-- Your answer is below:\na_list = [1, 'hello', [1, 2, 3] , True]\na_list\n-->",
"_____no_output_____"
],
[
"Find the value stored at index 1 of <code>a_list</code>.",
"_____no_output_____"
]
],
[
[
"# Write your code below and press Shift+Enter to execute\na_list[1]",
"_____no_output_____"
]
],
[
[
"Double-click <b>here</b> for the solution.\n\n<!-- Your answer is below:\na_list[1]\n-->",
"_____no_output_____"
],
[
"Retrieve the elements stored at index 1, 2 and 3 of <code>a_list</code>.",
"_____no_output_____"
]
],
[
[
"# Write your code below and press Shift+Enter to execute\na_list[1:4]",
"_____no_output_____"
]
],
[
[
"Double-click <b>here</b> for the solution.\n\n<!-- Your answer is below:\na_list[1:4]\n-->",
"_____no_output_____"
],
[
"Concatenate the following lists <code>A = [1, 'a']</code> and <code>B = [2, 1, 'd']</code>:",
"_____no_output_____"
]
],
[
[
"# Write your code below and press Shift+Enter to execute\nA = [1, 'a'] \nB = [2, 1, 'd']\nA=A+B\nA",
"_____no_output_____"
]
],
[
[
"Double-click <b>here</b> for the solution.\n\n<!-- Your answer is below:\nA = [1, 'a'] \nB = [2, 1, 'd']\nA + B\n-->",
"_____no_output_____"
],
[
"<hr>\n<h2>The last exercise!</h2>\n<p>Congratulations, you have completed your first lesson and hands-on lab in Python. However, there is one more thing you need to do. The Data Science community encourages sharing work. The best way to share and showcase your work is to share it on GitHub. By sharing your notebook on GitHub you are not only building your reputation with fellow data scientists, but you can also show it off when applying for a job. Even though this was your first piece of work, it is never too early to start building good habits. So, please read and follow <a href=\"https://cognitiveclass.ai/blog/data-scientists-stand-out-by-sharing-your-notebooks/\" target=\"_blank\">this article</a> to learn how to share your work.\n<hr>",
"_____no_output_____"
],
[
"<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n<h2>Get IBM Watson Studio free of charge!</h2>\n <p><a href=\"https://cocl.us/PY0101EN_edx_add_bbottom\"><img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/BottomAd.png\" width=\"750\" align=\"center\"></a></p>\n</div>",
"_____no_output_____"
],
[
"<h3>About the Authors:</h3> \n<p><a href=\"https://www.linkedin.com/in/joseph-s-50398b136/\" target=\"_blank\">Joseph Santarcangelo</a> is a Data Scientist at IBM, and holds a PhD in Electrical Engineering. His research focused on using Machine Learning, Signal Processing, and Computer Vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.</p>",
"_____no_output_____"
],
[
"Other contributors: <a href=\"www.linkedin.com/in/jiahui-mavis-zhou-a4537814a\">Mavis Zhou</a>",
"_____no_output_____"
],
[
"<hr>",
"_____no_output_____"
],
[
"<p>Copyright © 2018 IBM Developer Skills Network. This notebook and its source code are released under the terms of the <a href=\"https://cognitiveclass.ai/mit-license/\">MIT License</a>.</p>",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown",
"markdown"
]
] |
4ab39802234d2caa78f1c86d1fd5f6861e509b71
| 2,381 |
ipynb
|
Jupyter Notebook
|
Notebooks/MultimonialNB-2.ipynb
|
satya1013/Fake-News-1
|
7c53e60407dfdd46315647004d2acd00c17f2451
|
[
"MIT"
] | null | null | null |
Notebooks/MultimonialNB-2.ipynb
|
satya1013/Fake-News-1
|
7c53e60407dfdd46315647004d2acd00c17f2451
|
[
"MIT"
] | null | null | null |
Notebooks/MultimonialNB-2.ipynb
|
satya1013/Fake-News-1
|
7c53e60407dfdd46315647004d2acd00c17f2451
|
[
"MIT"
] | 1 |
2021-09-09T12:31:31.000Z
|
2021-09-09T12:31:31.000Z
| 25.063158 | 113 | 0.572869 |
[
[
[
"import pickle\nimport re\nimport string\nimport numpy as np\nimport pandas as pd\nimport contractions\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom sklearn.naive_bayes import MultinomialNB",
"_____no_output_____"
],
[
"tfidfVectorizer = pickle.load(open(\"../Dataset/tfidf_vectorizer-2.pickle\", \"rb\"))\n#print(tfidfVectorizer.get_feature_names())\n\ntrainTfidfVector = pickle.load(open(\"../Dataset/tfidf_train-2.pickle\", \"rb\"))\ndf_train = pd.DataFrame(data = trainTfidfVector.toarray(),columns = tfidfVectorizer.get_feature_names())\n#print(df_train)\n\ntestTfidfVector = pickle.load(open(\"../Dataset/tfidf_test-2.pickle\", \"rb\"))\ndf_test = pd.DataFrame(data = testTfidfVector.toarray(),columns = tfidfVectorizer.get_feature_names())\n#print(df_test)\n\ntrainLabels = pickle.load(open(\"../Dataset/label_train-2.pickle\", \"rb\"))\n#print(trainLabels)\n\ntestLabels = pickle.load(open(\"../Dataset/label_test-2.pickle\", \"rb\"))\n#print(testLabels)",
"_____no_output_____"
],
[
"clf = MultinomialNB()\nclf.fit(trainTfidfVector, trainLabels)\nprint(clf.score(trainTfidfVector, trainLabels))\nprint(clf.score(testTfidfVector, testLabels))",
"0.890031152647975\n0.8654205607476636\n"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code"
]
] |
4ab3a425b7e630e03fcbb5850dea274e1ce373c4
| 124,944 |
ipynb
|
Jupyter Notebook
|
Phase_4/NLP/ds-natural_language_pre-processing-kvm32-main/natural_language_pre-processing.ipynb
|
ismizu/ds-east-042621-lectures
|
3d962df4d3cb19a4d0c92c8246ec251a5969f644
|
[
"MIT"
] | 1 |
2021-08-12T21:48:21.000Z
|
2021-08-12T21:48:21.000Z
|
Phase_4/NLP/ds-natural_language_pre-processing-kvm32-main/natural_language_pre-processing.ipynb
|
ismizu/ds-east-042621-lectures
|
3d962df4d3cb19a4d0c92c8246ec251a5969f644
|
[
"MIT"
] | null | null | null |
Phase_4/NLP/ds-natural_language_pre-processing-kvm32-main/natural_language_pre-processing.ipynb
|
ismizu/ds-east-042621-lectures
|
3d962df4d3cb19a4d0c92c8246ec251a5969f644
|
[
"MIT"
] | null | null | null | 51.227552 | 31,552 | 0.727886 |
[
[
[
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Natural-Language-Pre-Processing\" data-toc-modified-id=\"Natural-Language-Pre-Processing-1\"><span class=\"toc-item-num\">1 </span>Natural Language Pre-Processing</a></span></li><li><span><a href=\"#Objectives\" data-toc-modified-id=\"Objectives-2\"><span class=\"toc-item-num\">2 </span>Objectives</a></span></li><li><span><a href=\"#Overview-of-NLP\" data-toc-modified-id=\"Overview-of-NLP-3\"><span class=\"toc-item-num\">3 </span>Overview of NLP</a></span></li><li><span><a href=\"#Preprocessing-for-NLP\" data-toc-modified-id=\"Preprocessing-for-NLP-4\"><span class=\"toc-item-num\">4 </span>Preprocessing for NLP</a></span><ul class=\"toc-item\"><li><span><a href=\"#Tokenization\" data-toc-modified-id=\"Tokenization-4.1\"><span class=\"toc-item-num\">4.1 </span>Tokenization</a></span></li></ul></li><li><span><a href=\"#Text-Cleaning\" data-toc-modified-id=\"Text-Cleaning-5\"><span class=\"toc-item-num\">5 </span>Text Cleaning</a></span><ul class=\"toc-item\"><li><span><a href=\"#Capitalization\" data-toc-modified-id=\"Capitalization-5.1\"><span class=\"toc-item-num\">5.1 </span>Capitalization</a></span></li><li><span><a href=\"#Punctuation\" data-toc-modified-id=\"Punctuation-5.2\"><span class=\"toc-item-num\">5.2 </span>Punctuation</a></span></li><li><span><a href=\"#Stopwords\" data-toc-modified-id=\"Stopwords-5.3\"><span class=\"toc-item-num\">5.3 </span>Stopwords</a></span><ul class=\"toc-item\"><li><ul class=\"toc-item\"><li><span><a href=\"#Numerals\" data-toc-modified-id=\"Numerals-5.3.0.1\"><span class=\"toc-item-num\">5.3.0.1 </span>Numerals</a></span></li></ul></li></ul></li></ul></li><li><span><a href=\"#Regex\" data-toc-modified-id=\"Regex-6\"><span class=\"toc-item-num\">6 </span>Regex</a></span><ul class=\"toc-item\"><li><span><a href=\"#RegexpTokenizer()\" data-toc-modified-id=\"RegexpTokenizer()-6.1\"><span 
class=\"toc-item-num\">6.1 </span><code>RegexpTokenizer()</code></a></span></li></ul></li><li><span><a href=\"#Exercise:-NL-Pre-Processing\" data-toc-modified-id=\"Exercise:-NL-Pre-Processing-7\"><span class=\"toc-item-num\">7 </span>Exercise: NL Pre-Processing</a></span></li></ul></div>",
"_____no_output_____"
],
[
"# Natural Language Pre-Processing",
"_____no_output_____"
]
],
[
[
"# Use this to install nltk if needed\n!pip install nltk\n# !conda install -c anaconda nltk",
"Requirement already satisfied: nltk in c:\\users\\im\\anaconda3\\lib\\site-packages (3.5)\nRequirement already satisfied: click in c:\\users\\im\\anaconda3\\lib\\site-packages (from nltk) (7.1.2)\nRequirement already satisfied: tqdm in c:\\users\\im\\appdata\\roaming\\python\\python38\\site-packages (from nltk) (4.56.0)\nRequirement already satisfied: joblib in c:\\users\\im\\appdata\\roaming\\python\\python38\\site-packages (from nltk) (1.0.0)\nRequirement already satisfied: regex in c:\\users\\im\\anaconda3\\lib\\site-packages (from nltk) (2020.10.15)\n"
],
[
"%load_ext autoreload\n%autoreload 2\n\nimport os\nimport sys\nmodule_path = os.path.abspath(os.path.join(os.pardir, os.pardir))\nif module_path not in sys.path:\n sys.path.append(module_path)\n \nimport pandas as pd\nimport nltk\nfrom nltk.probability import FreqDist\nfrom nltk.corpus import stopwords\nfrom nltk.tokenize import regexp_tokenize, word_tokenize, RegexpTokenizer\nimport matplotlib.pyplot as plt\nimport string\nimport re",
"The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n"
],
[
"# Use this to download the stopwords if you haven't already - only ever needs to be run once\n\nnltk.download(\"stopwords\")",
"_____no_output_____"
]
],
[
[
"# Objectives\n\n- Describe the basic concepts of NLP\n- Use pre-processing methods for NLP\n - Tokenization\n - Stopwords removal",
"_____no_output_____"
],
[
"# Overview of NLP\n\nNLP allows computers to interact with text data in a structured and sensible way. In short, we will be breaking up series of texts into individual words (or groups of words), and isolating the words with **semantic value**. We will then compare texts with similar distributions of these words, and group them together.\n\nIn this section, we will discuss some steps and approaches to common text data analytic procedures. Some of the applications of natural language processing are:\n- Chatbots \n- Speech recognition and audio processing \n- Classifying documents \n\nHere is an example that uses some of the tools we use in this notebook. \n -[chicago_justice classifier](https://github.com/chicago-justice-project/article-tagging/blob/master/lib/notebooks/bag-of-words-count-stemmed-binary.ipynb)\n\nWe will introduce you to the preprocessing steps, feature engineering, and other steps you need to take in order to format text data for machine learning tasks. \n\nWe will also introduce you to [**NLTK**](https://www.nltk.org/) (Natural Language Toolkit), which will be our main tool for engaging with textual data.",
"_____no_output_____"
],
[
"<img src=\"img/nlp_process.png\" style=\"width:1000px;\">",
"_____no_output_____"
]
],
[
[
"#No hard rule for model, could be knn, rfc, etc. ",
"_____no_output_____"
]
],
[
[
"# Preprocessing for NLP",
"_____no_output_____"
]
],
[
[
"#Curse of dimensionality",
"_____no_output_____"
]
],
[
[
"The goal when pre-processing text data for NLP is to remove as many unnecessary words as possible while preserving as much semantic meaning as possible. This will improve your model performance dramatically.\n\nYou can think of this sort of like dimensionality reduction. The unique words in your corpus form a **vocabulary**, and each word in your vocabulary is essentially another feature in your model. So we want to get rid of unnecessary words and consolidate words that have similar meanings.\n\nWe will be working with a dataset which includes both **satirical** (The Onion) and real news (Reuters) articles. We refer to the entire set of articles as the **corpus**. ",
"_____no_output_____"
],
[
" ",
"_____no_output_____"
]
],
[
[
"corpus = pd.read_csv('data/satire_nosatire.csv')\ncorpus.shape",
"_____no_output_____"
],
[
"corpus.tail()",
"_____no_output_____"
]
],
[
[
"Our goal is to detect satire, so our target class of 1 is associated with The Onion articles. ",
"_____no_output_____"
]
],
[
[
"corpus.loc[10].body",
"_____no_output_____"
],
[
"corpus.loc[10].target",
"_____no_output_____"
],
[
"corpus.loc[502].body",
"_____no_output_____"
],
[
"corpus.loc[502].target",
"_____no_output_____"
]
],
[
[
"Each article in the corpus is refered to as a **document**.",
"_____no_output_____"
],
[
"It is a balanced dataset with 500 documents of each category. ",
"_____no_output_____"
]
],
[
[
"corpus.target.value_counts()",
"_____no_output_____"
]
],
[
[
"**Discussion:** Let's think about the use cases of being able to correctly separate satirical from authentic news. What might be a real-world use case? ",
"_____no_output_____"
]
],
[
[
"# Thoughts here\n\n",
"_____no_output_____"
]
],
[
[
"## Tokenization \n\nIn order to convert the texts into data suitable for machine learning, we need to break down the documents into smaller parts. \n\nThe first step in doing that is **tokenization**.\n\nTokenization is the process of splitting documents into units of observation. We usually represent the tokens as __n-grams__, where n represents the number of consecutive words occurring in a document that we will consider a unit. In the case of unigrams (one-word tokens), the sentence \"David works here\" would be tokenized into:\n\n- \"David\", \"works\", \"here\";\n\nIf we also want to consider bigrams, we would additionally consider:\n\n- \"David works\" and \"works here\".",
"_____no_output_____"
],
[
"Let's consider the first document in our corpus:",
"_____no_output_____"
]
],
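The unigram/bigram tokenization described above can be sketched in plain Python. This is a minimal illustration only; a real pipeline would typically use `nltk.ngrams` or a library tokenizer instead of this hypothetical helper:

```python
def ngrams(tokens, n):
    """Return the list of n-grams (joined as strings) for a token list."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "David works here".split()
print(ngrams(tokens, 1))  # unigrams: ['David', 'works', 'here']
print(ngrams(tokens, 2))  # bigrams: ['David works', 'works here']
```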
[
[
"first_document = corpus.iloc[0].body",
"_____no_output_____"
],
[
"first_document",
"_____no_output_____"
],
[
"sample_document = corpus.iloc[1].body\nsample_document",
"_____no_output_____"
]
],
[
[
"There are many ways to tokenize our document. \n\nIt is a long string, so the first way we might consider is to split it by spaces.",
"_____no_output_____"
],
[
"**Knowledge Check:** How would we split our documents into words using spaces?\n\n<p>\n</p>\n<details>\n <summary><b><u>Click Here for Answer Code</u></b></summary>\n\n first_document.split(' ')\n \n</details>",
"_____no_output_____"
]
],
[
[
"# code\nsample_document.split()",
"_____no_output_____"
]
],
[
[
"But this is not ideal. We are trying to create a set of tokens with **high semantic value**. In other words, we want to isolate text which best represents the meaning in each document.",
"_____no_output_____"
],
[
"# Text Cleaning\n\nMost NL Pre-Processing will include the following tasks:\n\n 1. Remove capitalization \n 2. Remove punctuation \n 3. Remove stopwords \n 4. Remove numbers",
"_____no_output_____"
],
[
"We could manually perform all of these tasks with string operations.",
"_____no_output_____"
],
[
"## Capitalization\n\nWhen we create our matrix of words associated with our corpus, **capital letters** will mess things up. The semantic value of a word used at the beginning of a sentence is the same as that same word in the middle of the sentence. In the two sentences:\n\nsentence_one = \"Excessive gerrymandering in small counties suppresses turnout.\" \nsentence_two = \"Turnout is suppressed in small counties by excessive gerrymandering.\" \n\n'excessive' has the same semantic value, but will be treated as different tokens because of capitals.",
"_____no_output_____"
]
],
[
[
"sentence_one = \"Excessive gerrymandering in small counties suppresses turnout.\" \nsentence_two = \"Turnout is suppressed in small counties by excessive gerrymandering.\"\n\nExcessive = sentence_one.split(' ')[0]\nexcessive = sentence_two.split(' ')[-2]\nprint(excessive, Excessive)\nexcessive == Excessive",
"excessive Excessive\n"
],
[
"manual_cleanup = [word.lower() for word in first_document.split(' ')]",
"_____no_output_____"
],
[
"print(f\"Our initial token set for our first document is {len(manual_cleanup)} words long\")",
"Our initial token set for our first document is 154 words long\n"
],
[
"print(f\"Our initial token set for our first document has \\\n{len(set(first_document.split()))} unique words\")",
"Our initial token set for our first document has 117 unique words\n"
],
[
"print(f\"After removing capitals, our first document has \\\n{len(set(manual_cleanup))} unique words\")",
"After removing capitals, our first document has 115 unique words\n"
]
],
[
[
"## Punctuation\n\nLike capitals, splitting on white space will create tokens which include punctuation that will muck up our semantics. \n\nReturning to the above example, 'gerrymandering' and 'gerrymandering.' will be treated as different tokens.",
"_____no_output_____"
]
],
[
[
"no_punct = sentence_one.split(' ')[1]\npunct = sentence_two.split(' ')[-1]\nprint(no_punct, punct)\nno_punct == punct",
"gerrymandering gerrymandering.\n"
],
[
"## Manual removal of punctuation\n\nimport string\n\nstring.punctuation",
"_____no_output_____"
],
[
"manual_cleanup = [s.translate(str.maketrans('', '', string.punctuation))\\\n for s in manual_cleanup]",
"_____no_output_____"
],
[
"print(f\"After removing punctuation, our first document has \\\n{len(set(manual_cleanup))} unique words\")",
"After removing punctuation, our first document has 114 unique words\n"
],
[
"manual_cleanup[:10]",
"_____no_output_____"
]
],
[
[
"## Stopwords",
"_____no_output_____"
],
[
"Stopwords are the **filler** words in a language: prepositions, articles, conjunctions. They have low semantic value, and often need to be removed. \n\nLuckily, NLTK has lists of stopwords ready for our use.",
"_____no_output_____"
]
],
[
[
"stopwords.words('english')[:10]",
"_____no_output_____"
],
[
"stopwords.words('greek')[:10]",
"_____no_output_____"
]
],
[
[
"Let's see which stopwords are present in our first document.",
"_____no_output_____"
]
],
[
[
"stops = [token for token in manual_cleanup if token in stopwords.words('english')]\nstops[:10]",
"_____no_output_____"
],
[
"print(f'There are {len(stops)} stopwords in the first document')",
"There are 63 stopwords in the first document\n"
],
[
"print(f'That is {len(stops)/len(manual_cleanup): 0.2%} of our text')",
"That is 40.91% of our text\n"
]
],
[
[
"Let's also use the **FreqDist** tool to look at the makeup of our text before and after removal:",
"_____no_output_____"
]
],
[
[
"fdist = FreqDist(manual_cleanup)\nplt.figure(figsize=(10, 10))\nfdist.plot(30);",
"_____no_output_____"
],
[
"manual_cleanup = [token for token in manual_cleanup if\\\n token not in stopwords.words('english')]",
"_____no_output_____"
],
[
"manual_cleanup[:10]",
"_____no_output_____"
],
[
"# We can also customize our stopwords list\n\ncustom_sw = stopwords.words('english')\ncustom_sw.extend([\"i'd\",\"say\"] )\ncustom_sw[-10:]",
"_____no_output_____"
],
[
"manual_cleanup = [token for token in manual_cleanup if token not in custom_sw]",
"_____no_output_____"
],
[
"print(f'After removing stopwords, there are {len(set(manual_cleanup))} unique words left')",
"After removing stopwords, there are 82 unique words left\n"
],
[
"fdist = FreqDist(manual_cleanup)\nplt.figure(figsize=(10, 10))\nfdist.plot(30);",
"_____no_output_____"
]
],
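One practical note on the stopword filtering above: calling `stopwords.words('english')` inside a list comprehension rebuilds the list on every membership test. Converting the stopword list to a `set` once makes the filter much faster. The sketch below uses a tiny hard-coded stopword list standing in for NLTK's, so it is self-contained:

```python
# Illustrative only: a tiny stopword list standing in for
# nltk.corpus.stopwords.words('english'), so the example is self-contained.
stop_set = {"the", "is", "in", "by", "of", "a", "an"}

tokens = ["turnout", "is", "suppressed", "in", "small", "counties",
          "by", "excessive", "gerrymandering"]

# Membership tests against a set are O(1), so build the set once
# instead of rebuilding the stopword list inside the loop.
filtered = [t for t in tokens if t not in stop_set]
print(filtered)
```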
[
[
"#### Numerals\n\nNumerals also usually have low semantic value. Their removal can help improve our models. \n\nTo remove them, we can again use simple string operations; below we will also introduce regular expressions, a powerful tool which you may already have some familiarity with.",
"_____no_output_____"
]
],
[
[
"manual_cleanup = [s.translate(str.maketrans('', '', '0123456789')) \\\n for s in manual_cleanup]",
"_____no_output_____"
],
[
"# drop empty strings\n\nmanual_cleanup = [s for s in manual_cleanup if s != '' ]",
"_____no_output_____"
],
[
"print(f'After removing numbers, there are {len(set(manual_cleanup))} unique words left')",
"After removing numbers, there are 81 unique words left\n"
]
],
[
[
"# Regex\n\nRegex allows us to match strings based on a pattern. This pattern comes from a language of identifiers, which we can begin exploring on the cheatsheet found here:\n - https://regexr.com/",
"_____no_output_____"
],
[
"A few key symbols:\n - . : matches any character\n - \\d, \\w, \\s : represent digit, word, whitespace \n - *, ?, +: matches 0 or more, 0 or 1, 1 or more of the preceding character \n - [A-Z]: matches any capital letter \n - [a-z]: matches lowercase letter ",
"_____no_output_____"
],
[
"Other helpful resources:\n - https://regexcrossword.com/\n - https://www.regular-expressions.info/tutorial.html",
"_____no_output_____"
],
[
"We can use regex to isolate numerals:",
"_____no_output_____"
]
],
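Before applying these patterns to the corpus, the effect of the `+` quantifier can be seen on a toy string: `[0-9]` matches single digits, while `[0-9]+` (or the equivalent `\d+` for ASCII digits) matches whole runs of digits:

```python
import re

text = "Call 555-0199 before 9 pm on May 21"

print(re.findall(r"[0-9]", text))   # individual digits
print(re.findall(r"[0-9]+", text))  # runs of digits
print(re.findall(r"\d+", text))     # \d+ gives the same result here
```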
[
[
"first_document",
"_____no_output_____"
],
[
"pattern = '[0-9]'\nnumber = re.findall(pattern, first_document)\nnumber",
"_____no_output_____"
],
[
"pattern2 = '[0-9]+'\nnumber2 = re.findall(pattern2, first_document)\nnumber2",
"_____no_output_____"
]
],
[
[
"## `RegexpTokenizer()`\n\nSklearn and NLTK provide us with a suite of **tokenizers** for our text preprocessing convenience.",
"_____no_output_____"
]
],
[
[
"first_document",
"_____no_output_____"
],
[
"# Remember that the '?' indicates 0 or 1 of what follows!\n\nre.findall(r\"([a-zA-Z]+(?:'[a-z]+)?)\", \"I'd\")",
"_____no_output_____"
],
[
"pattern = \"([a-zA-Z]+(?:'[a-z]+)?)\"\ntokenizer = RegexpTokenizer(pattern)\nfirst_doc = tokenizer.tokenize(first_document)\n",
"_____no_output_____"
],
[
"first_doc = [token.lower() for token in first_doc]\nfirst_doc = [token for token in first_doc if token not in custom_sw]",
"_____no_output_____"
],
[
"first_document",
"_____no_output_____"
],
[
"first_doc[:10]",
"_____no_output_____"
],
[
"print(f'We are down to {len(set(first_doc))} unique words')",
"We are down to 75 unique words\n"
]
],
[
[
"# Exercise: NL Pre-Processing\n\n**Activity:** Use what you've learned to preprocess the second article. How does the length and number of unique words in the article change?\n\n<p>\n</p>\n<details>\n <summary><b><u>Click Here for Answer Code</u></b></summary>\n\n second_document = corpus.iloc[1].body\n print(f'We start with {len(second_document.split())} words')\n print(f'We start with {len(set(second_document.split()))} unique words')\n \n second_doc = tokenizer.tokenize(second_document)\n second_doc = [token.lower() for token in second_doc]\n second_doc = [token for token in second_doc if token not in custom_sw]\n print(f'We end with {len(second_doc)} words')\n print(f'We end with {len(set(second_doc))} unique words')\n\n \n</details>",
"_____no_output_____"
]
],
[
[
"second_document",
"_____no_output_____"
],
[
"len(set(corpus.iloc[1].body.split()))",
"_____no_output_____"
],
[
"list(set(corpus.iloc[1].body.split()))",
"_____no_output_____"
],
[
"len(second_document)",
"_____no_output_____"
],
[
"list(set(second_document))",
"_____no_output_____"
],
[
"second_doc",
"_____no_output_____"
],
[
"## Your code here\nsecond_document = corpus.iloc[1].body\n\nsecond_doc = tokenizer.tokenize(second_document)\n\nsecond_doc = [token.lower() for token in second_doc]\nsecond_doc = [token for token in second_doc if token not in custom_sw]\n\nprint(f'We are down to {len(second_doc)} words')\nprint(f'We are down to {len(set(second_doc))} unique words')",
"We are down to 119 words\nWe are down to 110 unique words\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ab3b15074ff413a0b98bc5354db3ebacacad369
| 28,466 |
ipynb
|
Jupyter Notebook
|
site/ru/tutorials/keras/overfit_and_underfit.ipynb
|
epicfaace/docs
|
3fdd8c7f6c2bc0240dd3f3f4cc0f06b544155ded
|
[
"Apache-2.0"
] | 2 |
2021-10-25T00:17:22.000Z
|
2021-11-17T10:24:19.000Z
|
site/ru/tutorials/keras/overfit_and_underfit.ipynb
|
tfboyd/docs
|
9b0a7552c5e6f14bebe192306867fa5516c92e4f
|
[
"Apache-2.0"
] | 1 |
2021-02-28T07:14:03.000Z
|
2021-02-28T07:14:03.000Z
|
site/ru/tutorials/keras/overfit_and_underfit.ipynb
|
tfboyd/docs
|
9b0a7552c5e6f14bebe192306867fa5516c92e4f
|
[
"Apache-2.0"
] | null | null | null | 36.494872 | 626 | 0.614206 |
[
[
[
"##### Copyright 2018 The TensorFlow Authors.",
"_____no_output_____"
]
],
[
[
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"_____no_output_____"
],
[
"#@title MIT License\n#\n# Copyright (c) 2017 François Chollet\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.",
"_____no_output_____"
]
],
[
[
"# Explore overfitting and underfitting",
"_____no_output_____"
],
[
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/keras/overfit_and_underfit\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/ru/tutorials/keras/overfit_and_underfit.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/ru/tutorials/keras/overfit_and_underfit.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>",
"_____no_output_____"
],
[
"\nAs before, we will use the `tf.keras` API; you can read more about it in our [Keras guide](https://www.tensorflow.org/guide/keras).\n\nIn both of the previous examples (classifying movie reviews and predicting housing prices) we saw that the accuracy of our model on the validation data peaks after training for a number of epochs and then starts to decline.\n\nIn other words, our model learns the same training data for too long; this is called *overfitting*. It is very important to know how to prevent it. Although overfitting can produce higher accuracy, it does so only on the *training data*; our goal is always to train a network that generalizes and recognizes patterns in new, validation data.\n\nThe opposite of overfitting is *underfitting*: it occurs when there is still room to improve the model's performance on the validation data. Underfitting can happen for several reasons: the model is not powerful enough, it is over-regularized, or it simply has not been trained long enough. In each case the model has not learned the relevant patterns from the training set.\n\nIf you train a model for too long, it will start learning patterns that are specific *only* to the training data and will not learn to recognize patterns in new data. We need to find a balance. Understanding how long to train a model, i.e. how many epochs to choose, is a very useful skill that we will now learn.\n\nThe best way to prevent overfitting is to use more training data. Models trained on more data naturally generalize better. When that is no longer possible, we turn to *regularization* techniques. These restrict the quantity and type of information the model can store. If a network can only memorize a small number of patterns, the optimization process will force it to focus on the most important, most prominent ones, which have a better chance of generalizing.\n\nIn this tutorial we will look at two common regularization techniques, *weight regularization* and *dropout*, and use them to improve our IMDB movie review classification model.",
"_____no_output_____"
]
],
[
[
"import tensorflow as tf\nfrom tensorflow import keras\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nprint(tf.__version__)",
"_____no_output_____"
]
],
[
[
"## Download the IMDB dataset\n\nRather than using an embedding layer as we did in the previous tutorial, here we will try *multi-hot encoding*. Our model will quickly start overfitting the training data; we will watch this happen and look at ways to prevent it.\n\nMulti-hot encoding our arrays converts them into vectors of 0s and 1s. Concretely, this means that, for example, the sequence `[3, 5]` is converted into a 10,000-dimensional vector consisting entirely of zeros except for indices 3 and 5, which are set to one.",
"_____no_output_____"
]
],
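The multi-hot idea can be checked in a few lines of NumPy. This is a standalone sketch mirroring the `multi_hot_sequences` helper defined in the next cell, with a small dimension for readability:

```python
import numpy as np

def multi_hot(sequence, dimension):
    # Start from an all-zeros vector, then set the listed indices to 1.
    vec = np.zeros(dimension)
    vec[list(sequence)] = 1.0
    return vec

v = multi_hot([3, 5], 10)
print(v)  # ones only at indices 3 and 5
```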
[
[
"NUM_WORDS = 10000\n\n(train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS)\n\ndef multi_hot_sequences(sequences, dimension):\n # Create an all-zeros matrix of shape (len(sequences), dimension)\n results = np.zeros((len(sequences), dimension))\n for i, word_indices in enumerate(sequences):\n results[i, word_indices] = 1.0 # set the specific indices of results[i] to 1\n return results\n\n\ntrain_data = multi_hot_sequences(train_data, dimension=NUM_WORDS)\ntest_data = multi_hot_sequences(test_data, dimension=NUM_WORDS)",
"_____no_output_____"
]
],
[
[
"Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so, as you would expect, many of the one-values are near index zero. Let's check this on a plot:",
"_____no_output_____"
]
],
[
[
"plt.plot(train_data[0])",
"_____no_output_____"
]
],
[
[
"## Demonstrate overfitting\n\nThe simplest way to prevent overfitting is to reduce the size of the model, that is, the number of learnable parameters, which is determined by the number of layers and the number of units per layer. In deep learning, the number of learnable parameters is often called the model's *capacity*. Clearly, a model with more parameters has more room to learn and can therefore more easily learn the mapping between training samples and their targets. Learning without the ability to generalize, however, is useless, especially when we try to make predictions on new, previously unseen data.\n\nAlways keep this in mind: deep learning models are good at fitting the training data, but our real goal is to learn to generalize.\n\nOn the other hand, if the network has limited resources for memorizing patterns, it will not be able to find patterns in the data as easily. To reduce its loss, such a model is forced to learn compressed representations with more predictive power. At the same time, if we make the model too small, it will struggle to fit the training data. There is a balance to be found between *too much capacity* and *not enough capacity*.\n\nUnfortunately, there is no magic formula for choosing the right size or architecture of a model in terms of the number of layers or the size of each layer. You will need to experiment with different architectures before finding a suitable one.\n\nTo find an appropriate model size, it is best to start with relatively small layers and few parameters, then increase the size of the layers or add new ones until the metrics on the validation data start to degrade. Let's try this with our review classification network.\n\nFirst we will build a simple model using only ```Dense``` layers as a baseline, then make smaller and bigger versions of it for comparison.",
"_____no_output_____"
],
[
"### Create a baseline model",
"_____no_output_____"
]
],
[
[
"baseline_model = keras.Sequential([\n # The `input_shape` parameter is only needed here so that `.summary` works\n keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),\n keras.layers.Dense(16, activation=tf.nn.relu),\n keras.layers.Dense(1, activation=tf.nn.sigmoid)\n])\n\nbaseline_model.compile(optimizer='adam',\n loss='binary_crossentropy',\n metrics=['accuracy', 'binary_crossentropy'])\n\nbaseline_model.summary()",
"_____no_output_____"
],
[
"baseline_history = baseline_model.fit(train_data,\n train_labels,\n epochs=20,\n batch_size=512,\n validation_data=(test_data, test_labels),\n verbose=2)",
"_____no_output_____"
]
],
[
[
"### Create a smaller model",
"_____no_output_____"
],
[
"Let's build a model with fewer hidden units and compare it against the first model:",
"_____no_output_____"
]
],
[
[
"smaller_model = keras.Sequential([\n keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),\n keras.layers.Dense(4, activation=tf.nn.relu),\n keras.layers.Dense(1, activation=tf.nn.sigmoid)\n])\n\nsmaller_model.compile(optimizer='adam',\n loss='binary_crossentropy',\n metrics=['accuracy', 'binary_crossentropy'])\n\nsmaller_model.summary()",
"_____no_output_____"
]
],
[
[
"And train the model using the same data:",
"_____no_output_____"
]
],
[
[
"smaller_history = smaller_model.fit(train_data,\n train_labels,\n epochs=20,\n batch_size=512,\n validation_data=(test_data, test_labels),\n verbose=2)",
"_____no_output_____"
]
],
[
[
"### Create a bigger model\n\nAs an exercise, you can create an even bigger model and see how quickly it starts to overfit. Next, let's benchmark this model, which has much more capacity than the problem warrants: ",
"_____no_output_____"
]
],
[
[
"bigger_model = keras.models.Sequential([\n keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),\n keras.layers.Dense(512, activation=tf.nn.relu),\n keras.layers.Dense(1, activation=tf.nn.sigmoid)\n])\n\nbigger_model.compile(optimizer='adam',\n loss='binary_crossentropy',\n metrics=['accuracy','binary_crossentropy'])\n\nbigger_model.summary()",
"_____no_output_____"
]
],
[
[
"And, again, let's train this new model using the same data:",
"_____no_output_____"
]
],
[
[
"bigger_history = bigger_model.fit(train_data, train_labels,\n epochs=20,\n batch_size=512,\n validation_data=(test_data, test_labels),\n verbose=2)",
"_____no_output_____"
]
],
[
[
"### Plot the training and validation loss\n\n<!--TODO(markdaoust): This should be a one-liner with tensorboard -->",
"_____no_output_____"
],
[
"The solid lines show the training loss and the dashed lines show the validation loss (remember: the lower the validation loss, the better the model). In our case, the smaller model starts overfitting later than the baseline (after 6 epochs rather than 4), and its performance degrades much more slowly once it does.",
"_____no_output_____"
]
],
[
[
"def plot_history(histories, key='binary_crossentropy'):\n plt.figure(figsize=(16,10))\n \n for name, history in histories:\n val = plt.plot(history.epoch, history.history['val_'+key],\n '--', label=name.title()+' Val')\n plt.plot(history.epoch, history.history[key], color=val[0].get_color(),\n label=name.title()+' Train')\n\n plt.xlabel('Epochs')\n plt.ylabel(key.replace('_',' ').title())\n plt.legend()\n\n plt.xlim([0,max(history.epoch)])\n\n\nplot_history([('baseline', baseline_history),\n ('smaller', smaller_history),\n ('bigger', bigger_history)])",
"_____no_output_____"
]
],
[
[
"Notice that the bigger network starts overfitting almost immediately, after the very first epoch, and its metrics degrade much faster. The more capacity a model has, the more easily it can fit the training data, resulting in a low training loss. But it then becomes all the more susceptible to overfitting: the gap between the training and validation loss will be very large.",
"_____no_output_____"
],
[
"## Strategies to prevent overfitting",
"_____no_output_____"
],
[
"### Add weight regularization\n\n",
"_____no_output_____"
],
[
"You may be familiar with the principle of *Occam's razor*: given two explanations of a phenomenon, the correct one is usually the \"simplest\", the one that makes the fewest assumptions. This principle also applies to the models learned by neural networks: for the same network and data, there are multiple sets of weight values, i.e. multiple models, that could be learned. Simple models overfit much less often than complex ones.\n\nIn this context, a \"simple model\" is one in which the distribution of parameter values has less entropy. In other words, a model with fewer parameters, like the one we built above, is simple. A common way to prevent overfitting is therefore to constrain the complexity of the network by forcing its weights to be small, which makes the distribution of weight values more uniform, or *regular*. This technique is called *weight regularization*: we add to the loss function of our network a penalty (or *cost*) for using large weights. \n\nThe penalty comes in two flavors:\n\n* L1 regularization, where the penalty is proportional to the absolute value of the weight coefficients (the \"L1 norm\" of the weights)\n\n* L2 regularization, where the penalty is proportional to the square of the weight coefficients. The L2 norm is also called *weight decay*; these are two names for the same mathematical formula\n\nTo apply regularization in `tf.keras`, we pass a regularizer to the layers as an argument. Let's add L2 regularization and see what happens:",
"_____no_output_____"
]
],
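The penalty formula can be verified directly with NumPy. This is a sketch of what `l2(0.001)` contributes to the loss for a made-up weight vector, not Keras's internal code:

```python
import numpy as np

weights = np.array([0.5, -0.3, 0.2])  # hypothetical weight coefficients
l2_lambda = 0.001

# Each coefficient adds l2_lambda * w**2 to the total network loss.
penalty = l2_lambda * np.sum(weights ** 2)
print(penalty)  # 0.001 * (0.25 + 0.09 + 0.04) = 0.00038
```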
[
[
"l2_model = keras.models.Sequential([\n keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),\n activation=tf.nn.relu, input_shape=(NUM_WORDS,)),\n keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001),\n activation=tf.nn.relu),\n keras.layers.Dense(1, activation=tf.nn.sigmoid)\n])\n\nl2_model.compile(optimizer='adam',\n loss='binary_crossentropy',\n metrics=['accuracy', 'binary_crossentropy'])\n\nl2_model_history = l2_model.fit(train_data, train_labels,\n epochs=20,\n batch_size=512,\n validation_data=(test_data, test_labels),\n verbose=2)",
"_____no_output_____"
]
],
[
[
"The value ```l2(0.001)``` means that every coefficient in the layer's weight matrix adds ```0.001 * weight_coefficient_value**2``` to the total loss of the network. Note that because the penalty is only added during training, the loss at this stage will be much higher than during evaluation.\n\nHere is the impact of L2 regularization:",
"_____no_output_____"
]
],
[
[
"plot_history([('baseline', baseline_history),\n ('L2 regularization', l2_model_history)])",
"_____no_output_____"
]
],
[
[
"As you can see, the L2-regularized model has become much more resistant to overfitting than our original model, even though both models have the same number of parameters.",
"_____no_output_____"
],
[
"### Add dropout\n\nDropout is one of the most effective and most commonly used regularization techniques for neural networks. It was developed by Geoff Hinton and his students at the University of Toronto. Applied to a layer, dropout consists of randomly \"dropping out\" (setting to zero) some of the output features of that layer.\n\nSuppose a layer would normally return the vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample. After applying dropout, this vector will have some of its values randomly set to zero, for example [0, 0.5, 1.3, 0, 1.1].\n\nThe fraction of the features that are \"dropped out\" (zeroed) is called the *dropout rate*. It is usually set between 0.2 and 0.5. At test time dropout is not used; instead, all output values are scaled down by the corresponding factor (say, 0.5). This balances the fact that more units are active at test time than during training.\n\nIn `tf.keras` you can add dropout to your network via the Dropout layer, which is applied to the output of the layer immediately preceding it.\n\nLet's add two Dropout layers to our IMDB network and see how well it reduces overfitting:",
"_____no_output_____"
]
],
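The random zeroing described above can be simulated in NumPy. This is an illustration only: Keras's Dropout layer additionally rescales the kept activations during training (inverted dropout), which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
activations = np.array([0.2, 0.5, 1.3, 0.8, 1.1])

rate = 0.5
# Keep each unit with probability (1 - rate); zero out the rest.
mask = rng.random(activations.shape) >= rate
dropped = activations * mask
print(dropped)  # some entries zeroed at random
```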
[
[
"dpt_model = keras.models.Sequential([\n keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)),\n keras.layers.Dropout(0.5),\n keras.layers.Dense(16, activation=tf.nn.relu),\n keras.layers.Dropout(0.5),\n keras.layers.Dense(1, activation=tf.nn.sigmoid)\n])\n\ndpt_model.compile(optimizer='adam',\n loss='binary_crossentropy',\n metrics=['accuracy','binary_crossentropy'])\n\ndpt_model_history = dpt_model.fit(train_data, train_labels,\n epochs=20,\n batch_size=512,\n validation_data=(test_data, test_labels),\n verbose=2)",
"_____no_output_____"
],
[
"plot_history([('baseline', baseline_history),\n ('dropout', dpt_model_history)])",
"_____no_output_____"
]
],
[
[
"Adding dropout is a clear improvement over our original, baseline model.\n\nTo recap, here are the most common ways to prevent overfitting in neural networks:\n\n* Use more training data\n* Reduce the capacity of the network\n* Add weight regularization\n* Add dropout\n\nThere are also two other important approaches not demonstrated in this tutorial: *data augmentation* and *batch normalization*.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
4ab3bbda87e9a9c6f4827efc7b92045a4b4cc624
| 22,437 |
ipynb
|
Jupyter Notebook
|
NER-with-SpaCy/scripts/ner-spacy-pilote_files.ipynb
|
MiMoText/NER_novels
|
e722e50beda48bf8da6af39e7fd0d8e19961750b
|
[
"MIT"
] | null | null | null |
NER-with-SpaCy/scripts/ner-spacy-pilote_files.ipynb
|
MiMoText/NER_novels
|
e722e50beda48bf8da6af39e7fd0d8e19961750b
|
[
"MIT"
] | null | null | null |
NER-with-SpaCy/scripts/ner-spacy-pilote_files.ipynb
|
MiMoText/NER_novels
|
e722e50beda48bf8da6af39e7fd0d8e19961750b
|
[
"MIT"
] | null | null | null | 27.29562 | 401 | 0.509114 |
[
[
[
"# Named Entity Recognition on PILOT files using the classic SpaCy pipeline\n\nThe MiMoText pilot files are: \n\n* Senac_Emigre\n* Maistre_Voyage\n* Sade_Aline\n* Sade_Justine\n* Bernadin_Paul\n* Laclos_Liaisons\n* Retif_Paysanne\n* Retif_Paysan\n* Mercier_An\n* Retif_AntiJustine\n* Rousseau_Julie\n* Voltaire_Candide\n\nFor the full list of metadata and MiMoText IDs see https://docs.google.com/spreadsheets/d/10HrWlxkAuOiMxgyDa4K8cA7syvbFJGAW2kgbonyyDvQ/edit#gid=0 \n\nThe pretrained statistical model for French is a multi-task CNN trained on UD French Sequoia and WikiNER. It assigns context-specific token vectors, POS tags, dependency parses and named entities.",
"_____no_output_____"
],
[
"When you call `nlp` on a text, spaCy first tokenizes the text to produce a `Doc` object. The `Doc` is then processed in several different steps – this is also referred to as the processing pipeline. The pipeline used by the default models consists of a tagger, a parser and an entity recognizer. Each pipeline component returns the processed `Doc`, which is then passed on to the next component.",
"_____no_output_____"
]
],
[
[
"import spacy\nimport re\nimport glob\nimport nltk\nimport sklearn\nfrom spacy import pipeline\nfrom spacy import morphology\nfrom spacy import displacy \nfrom collections import Counter\nimport fr_core_news_lg\nimport requests \nsklearn.feature_extraction.text.CountVectorizer\n\n# loading of french language model\nnlp = fr_core_news_lg.load()\n",
"_____no_output_____"
],
[
      "# printing out a sorted list of the ten most common LOC entities within the text \nvoltaire_candide = requests.get('https://raw.githubusercontent.com/MiMoText/roman-dixhuit/master/plain/files/Voltaire_Candide.txt')\nvoltaire_candide = nlp(voltaire_candide.text)\nlistOfLOC_voltaire_candide = [ent for ent in voltaire_candide.ents if ent.label_ == 'LOC']\nCounter([ent.text.strip() for ent in listOfLOC_voltaire_candide]).most_common(10)",
"_____no_output_____"
],
[
"# printing out a sorted list of the ten most common LOC entities within the text \nsenac_emigre = requests.get('https://raw.githubusercontent.com/MiMoText/roman-dixhuit/master/plain/files/Senac_Emigre.txt')\nsenac_emigre = nlp(senac_emigre.text)\nCounter([ent.text.strip() for ent in [ent for ent in senac_emigre.ents if ent.label_ == 'LOC']]).most_common(10)",
"_____no_output_____"
],
[
"maistre_voyage = requests.get('https://raw.githubusercontent.com/MiMoText/roman-dixhuit/master/plain/files/Maistre_Voyage.txt')\nmaistre_voyage = nlp(maistre_voyage.text)\nlistOfLOC_maistre_voyage = [ent for ent in maistre_voyage.ents if ent.label_ == 'LOC']\nCounter([ent.text.strip() for ent in listOfLOC_maistre_voyage]).most_common(10)",
"_____no_output_____"
],
[
"laclos_liaisons = requests.get('https://raw.githubusercontent.com/MiMoText/roman-dixhuit/master/plain/files/Laclos_Liaisons.txt')\nlaclos_liaisons = nlp(laclos_liaisons.text)\nlistOfLOC_laclos_liaisons = [ent for ent in laclos_liaisons.ents if ent.label_ == 'LOC']\nCounter([ent.text.strip() for ent in listOfLOC_laclos_liaisons]).most_common(10)",
"_____no_output_____"
],
[
"#Increasing the max_length for longer novels \nnlp.max_length = 1700000",
"_____no_output_____"
],
[
"rousseau_julie = requests.get('https://raw.githubusercontent.com/MiMoText/roman-dixhuit/master/plain/files/Rousseau_Julie.txt')\nrousseau_julie = nlp(rousseau_julie.text)\nlistOfLOC_rousseau_julie = [ent for ent in rousseau_julie.ents if ent.label_ == 'LOC']\nCounter([ent.text.strip() for ent in listOfLOC_rousseau_julie]).most_common(10)",
"_____no_output_____"
],
[
"retif_paysanne = requests.get('https://raw.githubusercontent.com/MiMoText/roman-dixhuit/master/plain/files/Retif_Paysanne.txt')\nretif_paysanne= nlp(retif_paysanne.text)\nlistOfLOC_retif_paysanne = [ent for ent in retif_paysanne.ents if ent.label_ == 'LOC']\nCounter([ent.text.strip() for ent in listOfLOC_retif_paysanne]).most_common(10)",
"_____no_output_____"
],
[
      "#-->> Check: Why are there unusual LOC entities in retif_paysanne? Displacy renders the whole text with named entities (grey = PER, orange = LOC, blue = ORG)\ndisplacy.render(retif_paysanne,style = 'ent', jupyter=True)",
"_____no_output_____"
],
[
"retif_antijustine = requests.get('https://raw.githubusercontent.com/MiMoText/roman18/master/plain/files/Retif_AntiJustine.txt')\nretif_antijustine= nlp(retif_antijustine.text)\nlistOfLOC_retif_antijustine = [ent for ent in retif_antijustine.ents if ent.label_ == 'LOC']\nCounter([ent.text.strip() for ent in listOfLOC_retif_antijustine]).most_common(10)",
"_____no_output_____"
],
[
"sade_justine = requests.get('https://raw.githubusercontent.com/MiMoText/roman-dixhuit/master/plain/files/Sade_Justine.txt')\nsade_justine = nlp(sade_justine.text)\nlistOfLOC_sade_justine = [ent for ent in sade_justine.ents if ent.label_ == 'LOC']\nCounter([ent.text.strip() for ent in listOfLOC_sade_justine]).most_common(10)\n",
"_____no_output_____"
],
[
"sade_aline = requests.get('https://raw.githubusercontent.com/MiMoText/roman-dixhuit/master/plain/files/Sade_Aline.txt')\nsade_aline = nlp(sade_aline.text)\nlistOfLOC_sade_aline = [ent for ent in sade_aline.ents if ent.label_ == 'LOC']\nCounter([ent.text.strip() for ent in listOfLOC_sade_aline]).most_common(10)",
"_____no_output_____"
],
[
"bernadin_paul = requests.get('https://raw.githubusercontent.com/MiMoText/roman18/master/plain/files/Bernardin_Paul.txt')\nbernadin_paul = nlp(bernadin_paul.text)\nlistOfLOC_bernadin_paul = [ent for ent in bernadin_paul .ents if ent.label_ == 'LOC']\nCounter([ent.text.strip() for ent in listOfLOC_bernadin_paul ]).most_common(10)",
"_____no_output_____"
],
[
"mercier_an = requests.get('https://raw.githubusercontent.com/MiMoText/roman18/master/plain/files/Mercier_An.txt')\nmercier_an = nlp(mercier_an.text)\nlistOfLOC_mercier_an = [ent for ent in mercier_an.ents if ent.label_ == 'LOC']\nCounter([ent.text.strip() for ent in listOfLOC_mercier_an]).most_common(10)\n",
"_____no_output_____"
]
],
[
[
      "# PER entities\n\nPrinting out a sorted list of the ten most common PER entities within the French novels (pilot corpus MiMoText)",
"_____no_output_____"
]
],
[
[
"Counter([ent.text.strip() for ent in [ent for ent in voltaire_candide.ents if ent.label_ == 'PER']]).most_common(10)",
"_____no_output_____"
],
[
"Counter([ent.text.strip() for ent in [ent for ent in senac_emigre.ents if ent.label_ == 'PER']]).most_common(10)",
"_____no_output_____"
],
[
"Counter([ent.text.strip() for ent in [ent for ent in maistre_voyage.ents if ent.label_ == 'PER']]).most_common(10)",
"_____no_output_____"
],
[
"Counter([ent.text.strip() for ent in [ent for ent in laclos_liaisons.ents if ent.label_ == 'PER']]).most_common(10)",
"_____no_output_____"
],
[
"Counter([ent.text.strip() for ent in [ent for ent in rousseau_julie.ents if ent.label_ == 'PER']]).most_common(10)",
"_____no_output_____"
],
[
"Counter([ent.text.strip() for ent in [ent for ent in retif_paysanne.ents if ent.label_ == 'PER']]).most_common(10)",
"_____no_output_____"
],
[
"Counter([ent.text.strip() for ent in [ent for ent in retif_antijustine.ents if ent.label_ == 'PER']]).most_common(10)",
"_____no_output_____"
],
[
"Counter([ent.text.strip() for ent in [ent for ent in sade_justine.ents if ent.label_ == 'PER']]).most_common(10)",
"_____no_output_____"
],
[
"Counter([ent.text.strip() for ent in [ent for ent in sade_aline.ents if ent.label_ == 'PER']]).most_common(10)",
"_____no_output_____"
],
[
"Counter([ent.text.strip() for ent in [ent for ent in bernadin_paul.ents if ent.label_ == 'PER']]).most_common(10)",
"_____no_output_____"
],
[
"Counter([ent.text.strip() for ent in [ent for ent in mercier_an.ents if ent.label_ == 'PER']]).most_common(10)",
"_____no_output_____"
],
[
"# Computing Similarity with word vectors (SpaCy)",
"_____no_output_____"
],
[
"print('voltaire_candide et laclos_liaisons ',voltaire_candide.similarity(laclos_liaisons))\nprint('voltaire_candide et senac_emigre',voltaire_candide.similarity(senac_emigre))\nprint('voltaire_candide et sade aline',voltaire_candide.similarity(sade_aline))\nprint('voltaire_candide et maistre_voyage',voltaire_candide.similarity(maistre_voyage))",
"voltaire_candide et laclos_liaisons 0.9450388522973917\nvoltaire_candide et senac_emigre 0.9870599846699816\nvoltaire_candide et sade aline 0.9669153988417551\nvoltaire_candide et maistre_voyage 0.9868767207249752\n"
]
]
] |
[
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ab3bc63e0571ee576ab9eefa3f5e5e9db3d4805
| 3,624 |
ipynb
|
Jupyter Notebook
|
example_projects/plot_dos/pyemto_dos.ipynb
|
hpleva/pyemto
|
653e68333d3fbb515ca9dae75c21c0889d031576
|
[
"MIT"
] | 11 |
2018-04-10T02:01:12.000Z
|
2021-12-10T06:44:54.000Z
|
example_projects/plot_dos/pyemto_dos.ipynb
|
hpleva/pyemto
|
653e68333d3fbb515ca9dae75c21c0889d031576
|
[
"MIT"
] | null | null | null |
example_projects/plot_dos/pyemto_dos.ipynb
|
hpleva/pyemto
|
653e68333d3fbb515ca9dae75c21c0889d031576
|
[
"MIT"
] | 2 |
2020-02-01T19:59:50.000Z
|
2020-04-07T20:53:40.000Z
| 27.876923 | 92 | 0.493653 |
[
[
[
"%pylab inline\nfigsize(18,10)\nfrom pyemto.emto_parser import Get_DOS",
"_____no_output_____"
]
],
[
[
"# DOS for L10 FeNi alloy with long-range order\n\nLong-range order parameter:\n\n0.50 = Fully random alloy.<br>\n0.00 = Fully ordered alloy.",
"_____no_output_____"
]
],
[
[
"random = Get_DOS('FeNi_0.50_dos.dos')\norder = Get_DOS('FeNi_0.00_dos.dos')",
"_____no_output_____"
],
[
"for ar, ao in zip(random.atoms, order.atoms):\n if ar.spin == 'up' and ao.spin == 'up':\n if ar.label == 'Ni' and ao.label == 'Ni':\n plot(ar.e, ar.dos, '--', label='random, site {}'.format(ar.sublattice))\n plot(ao.e, ao.dos, label= 'order, site {}'.format(ao.sublattice))\nplt.title('Ni UP')\nplt.legend()\nplt.show()\n\nfor ar, ao in zip(random.atoms, order.atoms):\n if ar.spin == 'down' and ao.spin == 'down':\n if ar.label == 'Ni' and ao.label == 'Ni':\n plot(ar.e, ar.dos, '--', label='random, site {}'.format(ar.sublattice))\n plot(ao.e, ao.dos, label= 'order, site {}'.format(ao.sublattice))\nplt.title('Ni DOWN')\nplt.legend()\nplt.show()\n\nfor ar, ao in zip(random.atoms, order.atoms):\n if ar.spin == 'up' and ao.spin == 'up':\n if ar.label == 'Fe' and ao.label == 'Fe':\n plot(ar.e, ar.dos, '--', label='random, site {}'.format(ar.sublattice))\n plot(ao.e, ao.dos, label= 'order, site {}'.format(ao.sublattice))\nplt.title('Fe UP')\nplt.legend()\nplt.show()\n\nfor ar, ao in zip(random.atoms, order.atoms):\n if ar.spin == 'down' and ao.spin == 'down':\n if ar.label == 'Fe' and ao.label == 'Fe':\n plot(ar.e, ar.dos, '--', label='random, site {}'.format(ar.sublattice))\n plot(ao.e, ao.dos, label= 'order, site {}'.format(ao.sublattice))\nplt.title('Fe DOWN')\nplt.legend()\nplt.show()",
"_____no_output_____"
],
[
      "rNi = random.elems['Ni']\nrFe = random.elems['Fe']\noNi = order.elems['Ni']\noFe = order.elems['Fe']\n\nplot(rNi.e, rNi.ddos,'--', label='random')\nplot(oNi.e, oNi.ddos, label='order')\nplt.title('Ni')\nplt.legend()\nplt.xlim(-0.5,0)\nplt.show()\n\nplot(rFe.e, rFe.ddos,'--', label='random')\nplot(oFe.e, oFe.ddos, label='order')\nplt.title('Fe')\nplt.legend()\nplt.xlim(-0.5,0)\nplt.show()",
"_____no_output_____"
]
]
] |
[
"code",
"markdown",
"code"
] |
[
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
]
] |
4ab3c862bda9b505311b5f5f4984702d0e55c5fe
| 17,867 |
ipynb
|
Jupyter Notebook
|
vimms/notebook/MakeSpectralLibraries.ipynb
|
hechth/vimms
|
ce5922578cf225d46cb285da8e7af97b5321f5aa
|
[
"MIT"
] | 11 |
2019-07-11T09:19:18.000Z
|
2021-03-07T08:44:36.000Z
|
vimms/notebook/MakeSpectralLibraries.ipynb
|
hechth/vimms
|
ce5922578cf225d46cb285da8e7af97b5321f5aa
|
[
"MIT"
] | 159 |
2019-12-11T14:41:40.000Z
|
2021-03-31T19:47:08.000Z
|
vimms/notebook/MakeSpectralLibraries.ipynb
|
hechth/vimms
|
ce5922578cf225d46cb285da8e7af97b5321f5aa
|
[
"MIT"
] | 4 |
2019-10-09T18:42:49.000Z
|
2020-07-10T14:21:59.000Z
| 36.612705 | 208 | 0.6151 |
[
[
[
"# Make spectral libraries",
"_____no_output_____"
]
],
[
[
"import sys, os\nsys.path.append('/Users/simon/git/vimms')\nsys.path.insert(0,'/Users/simon/git/mass-spec-utils/')\nfrom vimms.Common import save_obj\nfrom tqdm import tqdm",
"_____no_output_____"
],
[
"%load_ext autoreload\n%autoreload 2",
"_____no_output_____"
],
[
"library_cache = '/Users/simon/clms_er/library_cache'",
"_____no_output_____"
]
],
[
[
"## Massbank",
"_____no_output_____"
]
],
[
[
"from mass_spec_utils.library_matching.spec_libraries import MassBankLibrary",
"_____no_output_____"
]
],
[
[
"Path to the local version of the massbank repo",
"_____no_output_____"
]
],
[
[
"massbank_data_path = '/Users/simon/git/MassBank-Data/' # final slash is important!",
"_____no_output_____"
],
[
"mb = MassBankLibrary(mb_dir=massbank_data_path, polarity='POSITIVE')",
"_____no_output_____"
],
[
"save_obj(mb, os.path.join(library_cache, 'massbank_pos.p'))",
"_____no_output_____"
],
[
"mb = MassBankLibrary(mb_dir=massbank_data_path, polarity='NEGATIVE')\nsave_obj(mb, os.path.join(library_cache, 'massbank_neg.p'))",
"Loading records from /Users/simon/git/MassBank-Data/Athens_Univ/\n\t Loaded 5252 new records\nLoading records from /Users/simon/git/MassBank-Data/MetaboLights/\n\t Loaded 58 new records\nLoading records from /Users/simon/git/MassBank-Data/MPI_for_Chemical_Ecology/\n\t Loaded 691 new records\nLoading records from /Users/simon/git/MassBank-Data/JEOL_Ltd/\n\t Loaded 45 new records\nLoading records from /Users/simon/git/MassBank-Data/GL_Sciences_Inc/\n\t Loaded 174 new records\nLoading records from /Users/simon/git/MassBank-Data/Env_Anal_Chem_U_Tuebingen/\n\t Loaded 128 new records\nLoading records from /Users/simon/git/MassBank-Data/RIKEN_ReSpect/\n\t Loaded 4642 new records\nLoading records from /Users/simon/git/MassBank-Data/Boise_State_Univ/\n\t Loaded 4 new records\nLoading records from /Users/simon/git/MassBank-Data/LCSB/\n\t Loaded 7299 new records\nLoading records from /Users/simon/git/MassBank-Data/PFOS_research_group/\n\t Loaded 413 new records\nLoading records from /Users/simon/git/MassBank-Data/Eawag/\n\t Loaded 11191 new records\nLoading records from /Users/simon/git/MassBank-Data/IPB_Halle/\n\t Loaded 677 new records\nLoading records from /Users/simon/git/MassBank-Data/Washington_State_Univ/\n\t Loaded 2626 new records\nLoading records from /Users/simon/git/MassBank-Data/Univ_Toyama/\n\t Loaded 253 new records\nLoading records from /Users/simon/git/MassBank-Data/UOEH/\n\t Loaded 35 new records\nLoading records from /Users/simon/git/MassBank-Data/Fukuyama_Univ/\n\t Loaded 340 new records\nLoading records from /Users/simon/git/MassBank-Data/Waters/\n\t Loaded 2992 new records\nLoading records from /Users/simon/git/MassBank-Data/UPAO/\n\t Loaded 12 new records\nLoading records from /Users/simon/git/MassBank-Data/UFZ/\n\t Loaded 1261 new records\nLoading records from /Users/simon/git/MassBank-Data/AAFC/\n\t Loaded 950 new records\nLoading records from /Users/simon/git/MassBank-Data/Metabolon/\n\t Loaded 149 new records\nLoading records from 
/Users/simon/git/MassBank-Data/RIKEN_NPDepo/\n\t Loaded 1956 new records\nLoading records from /Users/simon/git/MassBank-Data/Eawag_Additional_Specs/\n\t Loaded 895 new records\nLoading records from /Users/simon/git/MassBank-Data/Nihon_Univ/\n\t Loaded 706 new records\nLoading records from /Users/simon/git/MassBank-Data/NAIST/\n\t Loaded 621 new records\nLoading records from /Users/simon/git/MassBank-Data/CASMI_2012/\n\t Loaded 26 new records\nLoading records from /Users/simon/git/MassBank-Data/HBM4EU/\n\t Loaded 1925 new records\nLoading records from /Users/simon/git/MassBank-Data/BGC_Munich/\n\t Loaded 903 new records\nLoading records from /Users/simon/git/MassBank-Data/Tottori_Univ/\n\t Loaded 16 new records\nLoading records from /Users/simon/git/MassBank-Data/BS/\n\t Loaded 1318 new records\nLoading records from /Users/simon/git/MassBank-Data/Chubu_Univ/\n\t Loaded 2563 new records\nLoading records from /Users/simon/git/MassBank-Data/MSSJ/\n\t Loaded 328 new records\nLoading records from /Users/simon/git/MassBank-Data/ISAS_Dortmund/\n\t Loaded 513 new records\nLoading records from /Users/simon/git/MassBank-Data/Kyoto_Univ/\n\t Loaded 184 new records\nLoading records from /Users/simon/git/MassBank-Data/Keio_Univ/\n\t Loaded 4780 new records\nLoading records from /Users/simon/git/MassBank-Data/RIKEN_IMS/\n\t Loaded 1140 new records\nLoading records from /Users/simon/git/MassBank-Data/Literature_Specs/\n\t Loaded 39 new records\nLoading records from /Users/simon/git/MassBank-Data/Osaka_MCHRI/\n\t Loaded 20 new records\nLoading records from /Users/simon/git/MassBank-Data/KWR/\n\t Loaded 207 new records\nLoading records from /Users/simon/git/MassBank-Data/RIKEN/\n\t Loaded 11935 new records\nLoading records from /Users/simon/git/MassBank-Data/Fiocruz/\n\t Loaded 1107 new records\nLoading records from /Users/simon/git/MassBank-Data/Fac_Eng_Univ_Tokyo/\n\t Loaded 12379 new records\nLoading records from /Users/simon/git/MassBank-Data/Univ_Connecticut/\n\t Loaded 510 
new records\nLoading records from /Users/simon/git/MassBank-Data/NaToxAq/\n\t Loaded 3756 new records\nLoading records from /Users/simon/git/MassBank-Data/CASMI_2016/\n\t Loaded 622 new records\nLoading records from /Users/simon/git/MassBank-Data/Kazusa/\n\t Loaded 273 new records\nLoading records from /Users/simon/git/MassBank-Data/Osaka_Univ/\n\t Loaded 449 new records\n"
],
[
"mb = MassBankLibrary(mb_dir=massbank_data_path, polarity='all')\nsave_obj(mb, os.path.join(library_cache, 'massbank_all.p'))",
"Loading records from /Users/simon/git/MassBank-Data/Athens_Univ/\n\t Loaded 5252 new records\nLoading records from /Users/simon/git/MassBank-Data/MetaboLights/\n\t Loaded 58 new records\nLoading records from /Users/simon/git/MassBank-Data/MPI_for_Chemical_Ecology/\n\t Loaded 691 new records\nLoading records from /Users/simon/git/MassBank-Data/JEOL_Ltd/\n\t Loaded 45 new records\nLoading records from /Users/simon/git/MassBank-Data/GL_Sciences_Inc/\n\t Loaded 174 new records\nLoading records from /Users/simon/git/MassBank-Data/Env_Anal_Chem_U_Tuebingen/\n\t Loaded 128 new records\nLoading records from /Users/simon/git/MassBank-Data/RIKEN_ReSpect/\n\t Loaded 4642 new records\nLoading records from /Users/simon/git/MassBank-Data/Boise_State_Univ/\n\t Loaded 4 new records\nLoading records from /Users/simon/git/MassBank-Data/LCSB/\n\t Loaded 7299 new records\nLoading records from /Users/simon/git/MassBank-Data/PFOS_research_group/\n\t Loaded 413 new records\nLoading records from /Users/simon/git/MassBank-Data/Eawag/\n\t Loaded 11191 new records\nLoading records from /Users/simon/git/MassBank-Data/IPB_Halle/\n\t Loaded 677 new records\nLoading records from /Users/simon/git/MassBank-Data/Washington_State_Univ/\n\t Loaded 2626 new records\nLoading records from /Users/simon/git/MassBank-Data/Univ_Toyama/\n\t Loaded 253 new records\nLoading records from /Users/simon/git/MassBank-Data/UOEH/\n\t Loaded 35 new records\nLoading records from /Users/simon/git/MassBank-Data/Fukuyama_Univ/\n\t Loaded 340 new records\nLoading records from /Users/simon/git/MassBank-Data/Waters/\n\t Loaded 2992 new records\nLoading records from /Users/simon/git/MassBank-Data/UPAO/\n\t Loaded 12 new records\nLoading records from /Users/simon/git/MassBank-Data/UFZ/\n\t Loaded 1261 new records\nLoading records from /Users/simon/git/MassBank-Data/AAFC/\n\t Loaded 950 new records\nLoading records from /Users/simon/git/MassBank-Data/Metabolon/\n\t Loaded 149 new records\nLoading records from 
/Users/simon/git/MassBank-Data/RIKEN_NPDepo/\n\t Loaded 1956 new records\nLoading records from /Users/simon/git/MassBank-Data/Eawag_Additional_Specs/\n\t Loaded 895 new records\nLoading records from /Users/simon/git/MassBank-Data/Nihon_Univ/\n\t Loaded 706 new records\nLoading records from /Users/simon/git/MassBank-Data/NAIST/\n\t Loaded 621 new records\nLoading records from /Users/simon/git/MassBank-Data/CASMI_2012/\n\t Loaded 26 new records\nLoading records from /Users/simon/git/MassBank-Data/HBM4EU/\n\t Loaded 1925 new records\nLoading records from /Users/simon/git/MassBank-Data/BGC_Munich/\n\t Loaded 903 new records\nLoading records from /Users/simon/git/MassBank-Data/Tottori_Univ/\n\t Loaded 16 new records\nLoading records from /Users/simon/git/MassBank-Data/BS/\n\t Loaded 1318 new records\nLoading records from /Users/simon/git/MassBank-Data/Chubu_Univ/\n\t Loaded 2563 new records\nLoading records from /Users/simon/git/MassBank-Data/MSSJ/\n\t Loaded 328 new records\nLoading records from /Users/simon/git/MassBank-Data/ISAS_Dortmund/\n\t Loaded 513 new records\nLoading records from /Users/simon/git/MassBank-Data/Kyoto_Univ/\n\t Loaded 184 new records\nLoading records from /Users/simon/git/MassBank-Data/Keio_Univ/\n\t Loaded 4780 new records\nLoading records from /Users/simon/git/MassBank-Data/RIKEN_IMS/\n\t Loaded 1140 new records\nLoading records from /Users/simon/git/MassBank-Data/Literature_Specs/\n\t Loaded 39 new records\nLoading records from /Users/simon/git/MassBank-Data/Osaka_MCHRI/\n\t Loaded 20 new records\nLoading records from /Users/simon/git/MassBank-Data/KWR/\n\t Loaded 207 new records\nLoading records from /Users/simon/git/MassBank-Data/RIKEN/\n\t Loaded 11935 new records\nLoading records from /Users/simon/git/MassBank-Data/Fiocruz/\n\t Loaded 1107 new records\nLoading records from /Users/simon/git/MassBank-Data/Fac_Eng_Univ_Tokyo/\n\t Loaded 12379 new records\nLoading records from /Users/simon/git/MassBank-Data/Univ_Connecticut/\n\t Loaded 510 
new records\nLoading records from /Users/simon/git/MassBank-Data/NaToxAq/\n\t Loaded 3756 new records\nLoading records from /Users/simon/git/MassBank-Data/CASMI_2016/\n\t Loaded 622 new records\nLoading records from /Users/simon/git/MassBank-Data/Kazusa/\n\t Loaded 273 new records\nLoading records from /Users/simon/git/MassBank-Data/Osaka_Univ/\n\t Loaded 449 new records\n"
]
],
[
[
"## GNPS\n\nUsing Florian's file, because it has inchikeys",
"_____no_output_____"
]
],
[
[
"json_file = '/Users/simon/Downloads/gnps_positive_ionmode_cleaned_by_matchms_and_lookups.json'\nimport json\nwith open(json_file,'r') as f:\n payload = json.loads(f.read())",
"_____no_output_____"
],
[
"from mass_spec_utils.library_matching.spectrum import SpectralRecord\nneg_intensities = []\ndef json_to_spectrum(json_dat):\n precursor_mz = json_dat['precursor_mz']\n original_file = json_file\n spectrum_id = json_dat['spectrum_id']\n inchikey = json_dat['inchikey_smiles']\n peaks = json_dat['peaks_json']\n metadata = {}\n for k,v in json_dat.items():\n if not k == 'peaks':\n metadata[k] = v\n mz,i = zip(*peaks)\n if min(i) < 0:\n neg_intensities.append(spectrum_id)\n return None\n else:\n new_spectrum = SpectralRecord(precursor_mz, peaks, metadata, original_file, spectrum_id)\n return new_spectrum\n\nrecords = {}\nfor jd in tqdm(payload):\n new_spec = json_to_spectrum(jd)\n if new_spec is not None:\n records[new_spec.spectrum_id] = new_spec",
"_____no_output_____"
],
[
"def filter_min_peaks(spectrum, min_n_peaks=10):\n n_peaks = len(spectrum.peaks)\n if n_peaks < min_n_peaks:\n return None\n else:\n return spectrum\ndef filter_rel_intensity(spectrum, min_rel=0.01, max_rel=1.):\n pp = spectrum.peaks\n mz,i = zip(*pp)\n max_i = max(i)\n new_pp = []\n for p in pp:\n ri = p[1]/max_i\n if ri <= max_rel and ri >= min_rel:\n new_pp.append(p)\n spectrum.peaks = new_pp\n return spectrum",
"_____no_output_____"
],
[
"new_records = {}\nfor sid in tqdm(records.keys()):\n spec = records[sid]\n ss = filter_min_peaks(spec)\n if ss is not None:\n new_records[sid] = ss\n else:\n continue\n ss = filter_rel_intensity(ss)\n new_records[sid] = ss",
"_____no_output_____"
],
[
"for sid, ss in new_records.items():\n ss.metadata['inchikey'] = ss.metadata['inchikey_smiles']",
"_____no_output_____"
],
[
"from mass_spec_utils.library_matching.spec_libraries import SpectralLibrary\nsl = SpectralLibrary()\nsl.records = new_records\nsl.sorted_record_list = sl._dic2list()\n",
"_____no_output_____"
],
[
"save_obj(sl, os.path.join(library_cache,'gnps.p'))",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ab3d159caa3da68b8653d30976e863abaa8297a
| 573,875 |
ipynb
|
Jupyter Notebook
|
Pandas_Data_Structures.ipynb
|
alekhaya99/Pandas_Tutorial
|
e413a8c97f6c94b716cf3e1be72383bc33ad9575
|
[
"MIT"
] | null | null | null |
Pandas_Data_Structures.ipynb
|
alekhaya99/Pandas_Tutorial
|
e413a8c97f6c94b716cf3e1be72383bc33ad9575
|
[
"MIT"
] | null | null | null |
Pandas_Data_Structures.ipynb
|
alekhaya99/Pandas_Tutorial
|
e413a8c97f6c94b716cf3e1be72383bc33ad9575
|
[
"MIT"
] | null | null | null | 208.378722 | 495,302 | 0.877053 |
[
[
[
"<a href=\"https://colab.research.google.com/github/alekhaya99/Pandas_Tutorial/blob/master/Pandas_Data_Structures.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>",
"_____no_output_____"
]
],
[
[
"import pandas as pd",
"_____no_output_____"
]
],
[
[
"# Series",
"_____no_output_____"
],
[
        "Series is a one-dimensional array-like object containing a sequence of values and an associated array of data labels called its index",
"_____no_output_____"
]
],
[
[
"object=pd.Series([4,7,-5,3])\nobject",
"_____no_output_____"
],
[
"#In order to get the index\nobject.index",
"_____no_output_____"
],
[
"#In order to find out the values \nobject.values",
"_____no_output_____"
],
[
"#In order to add labeling of the index\nobject_2 = pd.Series([1,2,3,4], index=['a','b','c','d'])\nobject_2",
"_____no_output_____"
],
[
"object_2['a']",
"_____no_output_____"
],
[
"object_2['a']=0",
"_____no_output_____"
],
[
"object_2[['a','b','c']]",
"_____no_output_____"
],
[
"object_2[object_2>2]",
"_____no_output_____"
],
[
"import numpy as np\nnp.exp(object_2)",
"_____no_output_____"
],
[
"'b' in object_2",
"_____no_output_____"
],
[
"#Converting a Python Dictionary into Panda series\nDict={'Singapore':'Singapore','India':'Delhi','Philippines':'Manila'}\nObject_3=pd.Series(Dict)\nObject_3",
"_____no_output_____"
],
[
"Countries=['Philippines','India','USA']\nObject_4=pd.Series(Object_3,index=Countries)\nObject_4",
"_____no_output_____"
],
[
"pd.isnull(Object_4)",
"_____no_output_____"
],
[
"pd.notnull(Object_4)",
"_____no_output_____"
],
[
"Object_3+Object_4",
"_____no_output_____"
],
[
"Object_4.name='Name of Contries'\nObject_4.index.name='Country'\nObject_4",
"_____no_output_____"
],
[
"object.index=['M','A','R','K']\nobject",
"_____no_output_____"
]
],
[
[
"# DataFrame",
"_____no_output_____"
],
[
        "According to https://pandas.pydata.org/pandas-docs/stable/user_guide/dsintro.html\nDataFrame is a 2-dimensional labeled data structure with columns of potentially different types. You can think of it like a spreadsheet or SQL table, or a dict of Series objects. It is generally the most commonly used pandas object. Like Series, DataFrame accepts many different kinds of input:\n\n* Dict of 1D ndarrays, lists, dicts, or Series\n* 2-D numpy.ndarray\n* Structured or record ndarray\n* A Series\n* Another DataFrame\n\nAlong with the data, you can optionally pass index (row labels) and columns (column labels) arguments. If you pass an index and / or columns, you are guaranteeing the index and / or columns of the resulting DataFrame. Thus, a dict of Series plus a specific index will discard all data not matching up to the passed index.\n\nIf axis labels are not passed, they will be constructed from the input data based on common sense rules.",
"_____no_output_____"
]
],
[
[
      "Class_Student=pd.DataFrame({'Name':['A','B','C','D','E','F'],'Marks':['Sixty-eight','Ninety-Eight','Fifty','Seventy','Hundred','Ninety'],'RollNo':[1,2,3,4,5,6]})\nClass_Student",
"_____no_output_____"
],
[
      "#If the DataFrame is large, the .head() method can be used. It selects only the first five rows\nClass_Student.head()",
"_____no_output_____"
],
[
      "#In order to arrange the data in a specific column order\npd.DataFrame(Class_Student,columns=['RollNo','Name','Marks','Average'])",
"_____no_output_____"
],
[
"Class_Student['Name']",
"_____no_output_____"
],
[
"Class_Student.Name",
"_____no_output_____"
],
[
      "#Rows can be retrieved with the 'loc' accessor\nClass_Student.loc[0]",
"_____no_output_____"
],
[
"Class_Student1=pd.DataFrame(Class_Student,columns=['RollNo','Name','Marks','Status'])",
"_____no_output_____"
],
[
"#In Pandas we can modify the data structures\nClass_Student1['Status']='Pass'\nClass_Student1",
"_____no_output_____"
],
[
"import numpy as np\n",
"_____no_output_____"
],
[
"Class_Student1['Status']=np.random.rand()\nClass_Student1",
"_____no_output_____"
],
[
"Value=pd.Series([30,40,50],index=[0,1,2])\nClass_Student1['Status']=Value\nClass_Student1",
"_____no_output_____"
],
[
      "#The 'del' keyword is used to delete columns\ndel Class_Student1['Status']\nClass_Student1",
"_____no_output_____"
],
[
      "#We can construct a DataFrame from a nested dictionary as well\nExample=pd.DataFrame({'ABC':{'Maths':49,'Physics':56,'Chemistry':76},'XYZ':{'Maths':49,'Physics':56,'Chemistry':76,'Biology':54}})\nExample",
"_____no_output_____"
],
[
"#We can also transpose the DataFrame\nExample.T",
"_____no_output_____"
],
[
"pd.DataFrame(Example,index=['English','Maths','Physics'])",
"_____no_output_____"
],
[
"Example.index.name='Subject'\nExample.columns.name='Name'\nExample",
"_____no_output_____"
],
[
"Example.values",
"_____no_output_____"
]
],
[
[
"",
"_____no_output_____"
],
[
"# Indexing",
"_____no_output_____"
]
],
[
[
"Object_5=pd.Series(range(1,6,2),index=['a','b','c'])",
"_____no_output_____"
],
[
"#The following will throw an error\nindex",
"_____no_output_____"
],
[
"index=Object_5.index\nindex",
"_____no_output_____"
],
[
"index[1:]",
"_____no_output_____"
],
[
"L=pd.Index(np.arange(1,6,2))",
"_____no_output_____"
],
[
"Object_6=pd.Series(['A','B','C'],index=L)",
"_____no_output_____"
],
[
"Object_6",
"_____no_output_____"
]
],
[
[
"Index Method and Properties",
"_____no_output_____"
]
],
[
[
"#In order to append \nObject_6.append(pd.DataFrame(np.arange(8,12,2),index=['D','E']))",
"_____no_output_____"
],
[
"\ndf = pd.DataFrame({\"A\":[5, 3, 6, 4], \n \"B\":[11, 2, 4, 3], \n \"C\":[4, 3, 8, 5], \n \"D\":[5, 4, 2, 8]},index=[1,2,3,4])\n#Name the rows and columns\ndf.columns.name='String'\ndf.index.name='Integer'\n \n# Print the dataframe \ndf ",
"_____no_output_____"
],
[
        "#Compute the discrete difference along the rows (axis=0)\ndf.diff(axis = 0, periods = 1)\n",
        "_____no_output_____"
],
[
        "#Compute the discrete difference along the columns (axis=1)\ndf.diff(axis = 1, periods = 2)",
"_____no_output_____"
],
[
"#Compute Set intersection\nidx1 = pd.Index([1, 2, 3, 4])\nidx2 = pd.Index([3, 4, 5, 6])\nidx1.intersection(idx2)",
"_____no_output_____"
],
[
"#Compute set union\nidx1 = pd.Index([1, 2, 3, 4])\nidx2 = pd.Index([3, 4, 5, 6])\nidx1.union(idx2)",
"_____no_output_____"
],
[
"#Compute Boolean Array indicating whether each value is contained in the collection\nidx1.isin([3])",
"_____no_output_____"
],
[
        "#Returns a new Index with the element at that position deleted\nidx1.delete([3])",
"_____no_output_____"
],
[
"#creates a new index by deleting the passed value\nidx1.drop([2])",
"_____no_output_____"
],
[
"#Compute a new index by inserting an element at index i\nidx1.insert(0,9)",
"_____no_output_____"
],
[
"#Returns True if each element is greater than or equal to the previous element\nidx1.is_monotonic",
"_____no_output_____"
],
[
"#Returns True if index has no duplicate value\nidx1.is_unique",
"_____no_output_____"
],
[
"#Compute the array of unique values in the Index\nidx = pd.Index(['Harry', 'Mike', 'Arther', 'Nick', \n 'Harry', 'Arther'], name ='Student') \nidx.unique()",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ab3e62a9a4214ca797d077e1a3b8b80c31bfa79
| 17,022 |
ipynb
|
Jupyter Notebook
|
tutorials/weldxfile.ipynb
|
CagtayFabry/weldx
|
463f949d4fa54b5edafa2268cb862716865a62c2
|
[
"BSD-3-Clause"
] | null | null | null |
tutorials/weldxfile.ipynb
|
CagtayFabry/weldx
|
463f949d4fa54b5edafa2268cb862716865a62c2
|
[
"BSD-3-Clause"
] | 3 |
2022-03-06T00:22:32.000Z
|
2022-03-27T00:23:51.000Z
|
tutorials/weldxfile.ipynb
|
CagtayFabry/weldx
|
463f949d4fa54b5edafa2268cb862716865a62c2
|
[
"BSD-3-Clause"
] | null | null | null | 31.233028 | 430 | 0.614499 |
[
[
[
"# How to handle WelDX files\nIn this notebook we will demonstrate how to create, read, and update ASDF files created by WelDX. All the needed functionality is contained in a single class named `WeldxFile`. We are going to show different modes of operation, like working with physical files on your hard drive and with in-memory files, in both read-only and read-write mode.\n\n## Imports\nThe WeldxFile class is imported from the top level of the weldx package.",
"_____no_output_____"
]
],
[
[
"from datetime import datetime\n\nimport numpy as np\n\nfrom weldx import WeldxFile",
"_____no_output_____"
]
],
[
[
"## Basic operations\nNow we create our first file by invoking the `WeldxFile` constructor without any additional arguments. By doing so, we create an in-memory file. This means that your changes will be temporary until you write them to an actual file on your hard drive. The `file_handle` attribute will point to the actual underlying file. In this case it is the in-memory file or buffer, as shown below.",
"_____no_output_____"
]
],
[
[
"file = WeldxFile()\nfile.file_handle",
"_____no_output_____"
]
],
[
[
"Next we assign some dictionary-like data to the file by storing it under a key name enclosed in square brackets.\nThen we look at the representation of the file header or contents. This will depend on the execution environment.\nIn JupyterLab you will see an interactive tree-like structure, which can be expanded and searched.\nThe root of the tree is denoted as \"root\", followed by the children created by the ASDF library, \"asdf_library\" and \"history\". We attached the additional child \"some_data\" with our assignment.",
"_____no_output_____"
]
],
[
[
"data = {\"data_sets\": {\"first\": np.random.random(100), \"time\": datetime.now()}}",
"_____no_output_____"
],
[
"file[\"some_data\"] = data\nfile",
"_____no_output_____"
]
],
[
[
"Note that here we are using some very common types, namely a NumPy array and a timestamp. For specialized weldx types like the coordinate system manager, (welding) measurements, etc., the weldx package provides ASDF extensions to handle those types automatically during loading and saving ASDF data. You do not need to worry about them. If you try to save types which cannot be handled by ASDF, you will trigger an error.",
"_____no_output_____"
],
[
"We could also have created the same structure in one step:",
"_____no_output_____"
]
],
[
[
"file = WeldxFile(tree=data, mode=\"rw\")\nfile",
"_____no_output_____"
]
],
[
[
"You might have noticed that we got a warning about the in-memory operation when showing the file in Jupyter.\nThis time we passed the additional argument mode=\"rw\", which indicates that we want to perform write operations just in memory,\nor alternatively on the passed physical file. So the warning went away.",
"_____no_output_____"
],
[
"We can use all dictionary operations on the data we like, e.g. update, assign, and delete items.",
"_____no_output_____"
]
],
[
[
"file[\"data_sets\"][\"second\"] = {\"data\": np.random.random(100), \"time\": datetime.now()}\n\n# delete the first data set again:\ndel file[\"data_sets\"][\"first\"]\nfile",
"_____no_output_____"
]
],
[
[
"We can also iterate over all keys as usual. You can also have a look at the documentation of the builtin type `dict` for a complete overview of its features. ",
"_____no_output_____"
]
],
[
[
"for key, value in file.items():\n print(key, value)",
"_____no_output_____"
]
],
[
[
"### Access to data by attributes\nAccess by key names can become tedious when deeply nested dictionaries are involved. We also provide access via attributes, like this:",
"_____no_output_____"
]
],
[
[
"accessible_by_attribute = file.as_attr()\naccessible_by_attribute.data_sets.second",
"_____no_output_____"
]
],
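A minimal sketch of how dict-to-attribute access can work in plain Python; this `AttrDict` class is a hypothetical illustration of the idea behind `as_attr()`, not the actual weldx implementation:

```python
class AttrDict:
    """Read-only attribute view over a nested dict (illustration only)."""

    def __init__(self, data):
        self._data = data

    def __getattr__(self, name):
        try:
            value = self._data[name]
        except KeyError:
            raise AttributeError(name)
        # wrap nested dicts so chained attribute access keeps working
        return AttrDict(value) if isinstance(value, dict) else value


tree = {"data_sets": {"second": {"time": "2021-01-01"}}}
view = AttrDict(tree)
print(view.data_sets.second.time)  # 2021-01-01
```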
[
[
"## Writing files to disk\nIn order to make your changes persistent, we are going to save the memory-backed file to disk by invoking `WeldxFile.write_to`.",
"_____no_output_____"
]
],
[
[
"file.write_to(\"example.asdf\")",
"_____no_output_____"
]
],
[
[
"This newly created file can be opened up again in read-write mode by passing the appropriate arguments.",
"_____no_output_____"
]
],
[
[
"example = WeldxFile(\"example.asdf\", mode=\"rw\")\nexample[\"updated\"] = True\nexample.close()",
"_____no_output_____"
]
],
[
[
"Note that we closed the file explicitly here. Before closing, we wrote a simple item to the tree. Let's see what happens if we open the file once again.",
"_____no_output_____"
]
],
[
[
"example = WeldxFile(\"example.asdf\", mode=\"rw\")\ndisplay(example)\nexample.close()",
"_____no_output_____"
]
],
[
[
"As you can see, the `updated` state has been written because we closed the file properly. If we omitted closing the file,\nour changes would be lost when the object goes out of scope or Python terminates.",
"_____no_output_____"
],
[
"## Handling updates within a context manager\nTo ensure you will not forget to update your file after making changes,\nwe can enclose our file-changing operations within a context manager.\nThis ensures that all operations done in this context (the `with` block) are written to the file once the context is left.\nNote that the underlying file is also closed after the context ends. This is useful when you have to update lots of files, as there is a limited number of file handles an operating system can deal with.",
"_____no_output_____"
]
],
[
[
"with WeldxFile(\"example.asdf\", mode=\"rw\") as example:\n example[\"updated\"] = True\n fh = example.file_handle\n # now the context ends, and the file is being saved to disk again.\n\n# lets check the file handle has been closed, after the context ended.\nassert fh.closed",
"_____no_output_____"
]
],
[
[
"Let us inspect the file once again, to see whether our `updated` item has been correctly written. ",
"_____no_output_____"
]
],
[
[
"WeldxFile(\"example.asdf\")",
"_____no_output_____"
]
],
[
[
"In case an error is triggered (e.g. an exception is raised) inside the context, the underlying file is still updated. You can prevent this behavior by passing `sync=False` during file construction.",
"_____no_output_____"
]
],
[
[
"try:\n with WeldxFile(\"example.asdf\", mode=\"rw\") as file:\n file[\"updated\"] = False\n raise Exception(\"oh no\")\nexcept Exception as e:\n print(\"expected error:\", e)",
"_____no_output_____"
],
[
"WeldxFile(\"example.asdf\")",
"_____no_output_____"
]
],
[
[
"## Keeping a log of changes when manipulating a file\nIt can become quite handy to know what has been done to a file in the past. Weldx files provide a history log, in which arbitrary strings can be stored with time stamps and the used software. We quickly run you through the process of adding history entries to your file.",
"_____no_output_____"
]
],
[
[
"filename_hist = \"example_history.asdf\"\nwith WeldxFile(filename_hist, mode=\"rw\") as file:\n file[\"some\"] = \"changes\"\n file.add_history_entry(\"added some changes\")",
"_____no_output_____"
],
[
"WeldxFile(filename_hist).history",
"_____no_output_____"
]
],
[
[
"You may also want to describe custom software, let's say a library or tool used to generate/modify the data in the file. Here we pass such a description into the creation of our WeldxFile.",
"_____no_output_____"
]
],
[
[
"software = dict(\n name=\"my_tool\", version=\"1.0\", homepage=\"https://my_tool.org\", author=\"the crowd\"\n)\nwith WeldxFile(filename_hist, mode=\"rw\", software_history_entry=software) as file:\n file[\"some\"] = \"changes\"\n file.add_history_entry(\"added more changes\")",
"_____no_output_____"
]
],
[
[
"Let's now inspect how we wrote history.",
"_____no_output_____"
]
],
[
[
"WeldxFile(filename_hist).history[-1]",
"_____no_output_____"
]
],
[
[
"The entries key is a list of all log entries, where new entries are appended to. We have proper time stamps indicating when the change happened, the actual log entry, and optionally a custom software used to make the change.",
"_____no_output_____"
],
[
"## Handling of custom schemas\nAn important aspect of WelDX or ASDF files is that you can validate them to comply with a defined schema. A schema defines the required and optional attributes a tree structure has to provide to pass the schema validation. Furthermore, the types of these attributes can be defined, e.g. the data attribute should be a NumPy array, or a timestamp should be of type `pandas.Timestamp`.\nThere are several schemas provided by WelDX, which can be used by passing them to the `custom_schema` argument. It is expected to be a path-like type, so a string (`str`) or `pathlib.Path` is accepted. The provided utility function `get_schema_path` returns the path to a named schema, so its output can be used directly in `WeldxFile(custom_schema=...)`.",
"_____no_output_____"
]
],
[
[
"from weldx.asdf.util import get_schema_path",
"_____no_output_____"
],
[
"schema = get_schema_path(\"single_pass_weld-0.1.0\")\nschema",
"_____no_output_____"
]
],
[
[
"This schema defines a complete experimental setup with measurement data, i.e. it requires the following attributes to be defined in our tree:\n\n - workpiece\n - TCP\n - welding_current\n - welding_voltage\n - measurements\n - equipment\n\nWe now use a testing function to provide this data, and validate it against the schema by passing `custom_schema` during WeldxFile creation.\nHere we just have a look at the process parameters sub-dictionary.",
"_____no_output_____"
]
],
[
[
"from weldx.asdf.cli.welding_schema import single_pass_weld_example\n\n_, single_pass_weld_data = single_pass_weld_example(out_file=None)\ndisplay(single_pass_weld_data[\"process\"])",
"_____no_output_____"
]
],
[
[
"That is a lot of data, containing complex data structures and objects describing the whole experiment, including measurement data.\nWe can now create a new `WeldxFile` and validate the data against the schema.",
"_____no_output_____"
]
],
[
[
"WeldxFile(tree=single_pass_weld_data, custom_schema=schema, mode=\"rw\")",
"_____no_output_____"
]
],
[
[
"But what would happen if we forget an important attribute? Let's have a closer look...",
"_____no_output_____"
]
],
[
[
"# simulate we forgot something important, so we delete the workpiece:\ndel single_pass_weld_data[\"workpiece\"]\n\n# now create the file again, and see what happens:\ntry:\n WeldxFile(tree=single_pass_weld_data, custom_schema=schema, mode=\"rw\")\nexcept Exception as e:\n display(e)",
"_____no_output_____"
]
],
[
[
"We receive a ValidationError from the ASDF library, which tells us exactly what the missing information is. The same will happen, if we accidentally pass the wrong type.",
"_____no_output_____"
]
],
[
[
"# simulate a wrong type by changing it to a NumPy array.\nsingle_pass_weld_data[\"welding_current\"] = np.zeros(10)\n\n# now create the file again, and see what happens:\ntry:\n WeldxFile(tree=single_pass_weld_data, custom_schema=schema, mode=\"rw\")\nexcept Exception as e:\n display(e)",
"_____no_output_____"
]
],
[
[
"Here we see that a `signal` tag is expected, but an `asdf/core/ndarray-1.0.0` was received.\nThe ASDF library assigns tags to certain types to handle their storage in the file format.\nAs shown, the `signal` tag is contained in the `weldx/measurement` container, provided by `weldx.bam.de`. The tags and schemas also carry a version number, so future updates of the software become manageable.\n\nCustom schemas can be used to define your own protocols or standards describing your data.",
"_____no_output_____"
],
[
"## Summary\nIn this tutorial we have encountered how to easily open, inspect, manipulate, and update ASDF files created by WelDX. We've learned that these files can store a variety of different data types and structures.\n\nDiscussed features:\n\n * Opening in read/write mode `WeldxFile(mode=\"rw\")`.\n * Creating files in memory (passing no file name to `WeldxFile()` constructor).\n * Writing to disk (`WeldxFile.write_to`).\n * Keeping log of changes (`WeldxFile.history`, `WeldxFile.add_history_entry`).\n * Validation against a schema `WeldxFile(custom_schema=\"/path/my_schema.yaml\")`",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
]
] |
4ab3e98d24ddc8743dd5317befa88cd74d8ba98f
| 249,218 |
ipynb
|
Jupyter Notebook
|
wandb/run-20211018_095417-2cey3iw7/tmp/code/00.ipynb
|
Programmer-RD-AI/Car-Object-Detection-V1-Learning-Object-Detection
|
c8b657ee26d549677a81512de77c4a3f658c4c63
|
[
"Apache-2.0"
] | null | null | null |
wandb/run-20211018_095417-2cey3iw7/tmp/code/00.ipynb
|
Programmer-RD-AI/Car-Object-Detection-V1-Learning-Object-Detection
|
c8b657ee26d549677a81512de77c4a3f658c4c63
|
[
"Apache-2.0"
] | null | null | null |
wandb/run-20211018_095417-2cey3iw7/tmp/code/00.ipynb
|
Programmer-RD-AI/Car-Object-Detection-V1-Learning-Object-Detection
|
c8b657ee26d549677a81512de77c4a3f658c4c63
|
[
"Apache-2.0"
] | null | null | null | 339.072109 | 157,580 | 0.918557 |
[
[
[
"import os\nimport json,cv2\nimport pandas as pd\nimport numpy as np\nimport torch,torchvision\nimport wandb\nfrom torch.nn import *\nfrom torch.optim import *\nimport matplotlib.pyplot as plt\nfrom tqdm import tqdm\nfrom sklearn.model_selection import train_test_split\nfrom torchvision.models import *",
"_____no_output_____"
],
[
"import wandb\ndevice = 'cuda'\nPROJECT_NAME = 'Car-Object-Detection-V1-Learning-Object-Detection'",
"_____no_output_____"
],
[
"torch.__version__,torchvision.__version__,wandb.__version__,json.__version__,pd.__version__,np.__version__",
"_____no_output_____"
],
[
"data = pd.read_csv('./data.csv').sample(frac=1)",
"_____no_output_____"
],
[
"data",
"_____no_output_____"
],
[
"img = cv2.imread('./data/vid_4_12300.jpg')",
"_____no_output_____"
],
[
"xmin,ymin,xmax,ymax = 386,185,554,230",
"_____no_output_____"
],
[
"x = xmin\ny = ymin\nw = xmax - xmin\nh = ymax - ymin",
"_____no_output_____"
],
[
"crop = img[y:y+h,x:x+w]",
"_____no_output_____"
],
[
"plt.imshow(crop)",
"_____no_output_____"
],
[
"cv2.imwrite('./crop.png',crop)",
"_____no_output_____"
],
[
"plt.imshow(cv2.rectangle(img,(x,y),(x+w,y+h),(200,0,0),2))",
"_____no_output_____"
],
[
"cv2.imwrite('./box.png',cv2.rectangle(img,(x,y),(x+w,y+h),(200,0,0),2))",
"_____no_output_____"
],
[
"def load_data():\n new_data = []\n for idx in tqdm(range(len(data))):\n info = data.iloc[idx]\n new_data.append([\n cv2.resize(cv2.imread(f'./data/{info[\"image\"]}'),(56,56))/255.0, # 56x56 to match the tensor shape below\n [info['xmin'],info['ymin'],info['xmax'],info['ymax']]\n ])\n X = []\n y = []\n for d in new_data:\n X.append(d[0])\n y.append(d[1])\n X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.5,shuffle=False)\n # permute HWC image arrays to NCHW (a raw view would scramble the pixels)\n X_train = torch.from_numpy(np.array(X_train)).to(device).permute(0,3,1,2).float()\n y_train = torch.from_numpy(np.array(y_train)).to(device).float()\n X_test = torch.from_numpy(np.array(X_test)).to(device).permute(0,3,1,2).float()\n y_test = torch.from_numpy(np.array(y_test)).to(device).float()\n return X,y,X_train,X_test,y_train,y_test,new_data",
"_____no_output_____"
],
[
"X,y,X_train,X_test,y_train,y_test,new_data = load_data()",
"_____no_output_____"
],
[
"torch.save(X_train,'X_train.pt')\ntorch.save(y_train,'y_train.pt')\ntorch.save(X_test,'X_test.pt')\ntorch.save(y_test,'y_test.pt')\ntorch.save(X_train,'X_train.pth')\ntorch.save(y_train,'y_train.pth')\ntorch.save(X_test,'X_test.pth')\ntorch.save(y_test,'y_test.pth')",
"_____no_output_____"
],
[
"def get_loss(model,X,y,criterion):\n preds = model(X)\n loss = criterion(preds,y)\n return loss.item()",
"_____no_output_____"
],
[
"def get_accuracy(model,X,y):\n preds = model(X)\n correct = 0\n total = 0\n for pred,yb in zip(preds,y):\n pred = int(torch.argmax(pred))\n yb = int(torch.argmax(yb))\n if pred == yb:\n correct += 1\n total += 1\n acc = round(correct/total,3)*100\n return acc",
"_____no_output_____"
],
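The argmax-based `get_accuracy` above is a classification metric and does not fit bounding-box regression; a common alternative is intersection over union (IoU). The helper below is a sketch added for illustration, not part of the original notebook:

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (xmin, ymin, xmax, ymax)."""
    # coordinates of the intersection rectangle
    xa = max(box_a[0], box_b[0])
    ya = max(box_a[1], box_b[1])
    xb = min(box_a[2], box_b[2])
    yb = min(box_a[3], box_b[3])
    inter = max(0, xb - xa) * max(0, yb - ya)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 0.14285714285714285 (1/7)
```

A prediction is then typically counted as correct when its IoU with the ground-truth box exceeds a threshold such as 0.5.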
[
"model = resnet18(pretrained=True).to(device)\nmodel.fc = Linear(512,4)\ncriterion = MSELoss()\noptimizer = Adam(model.parameters(),lr=0.001)\nepochs = 100\nbatch_size = 32",
"_____no_output_____"
],
[
"wandb.init(project=PROJECT_NAME,name='baseline-TL-True')\nfor _ in tqdm(range(epochs)):\n for i in range(0,len(X_train),batch_size):\n try:\n X_batch = X_train[i:i+batch_size]\n y_batch = y_train[i:i+batch_size]\n model.to(device)\n preds = model(X_batch)\n loss = criterion(preds,y_batch)\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n except:\n pass\n model.eval()\n torch.cuda.empty_cache()\n wandb.log({'Loss':(get_loss(model,X_train,y_train,criterion)+get_loss(model,X_batch,y_batch,criterion))/2})\n torch.cuda.empty_cache()\n wandb.log({'Val Loss':get_loss(model,X_test,y_test,criterion)})\n torch.cuda.empty_cache()\n wandb.log({'Acc':(get_accuracy(model,X_train,y_train)+get_accuracy(model,X_batch,y_batch))/2})\n torch.cuda.empty_cache()\n wandb.log({'Val ACC':get_accuracy(model,X_test,y_test)})\n torch.cuda.empty_cache()\n model.train()\nwandb.finish()",
"\u001b[34m\u001b[1mwandb\u001b[0m: Currently logged in as: \u001b[33mranuga-d\u001b[0m (use `wandb login --relogin` to force relogin)\n\u001b[34m\u001b[1mwandb\u001b[0m: wandb version 0.12.4 is available! To upgrade, please run:\n\u001b[34m\u001b[1mwandb\u001b[0m: $ pip install wandb --upgrade\n"
],
[
"y_batch.shape,preds.shape",
"_____no_output_____"
],
[
"model = resnet18(pretrained=False).to(device)\nmodel.fc = Linear(512,4)\ncriterion = MSELoss()\noptimizer = Adam(model.parameters(),lr=0.001)\nepochs = 100\nbatch_size = 32",
"_____no_output_____"
],
[
"wandb.init(project=PROJECT_NAME,name='baseline-TL-False')\nfor _ in tqdm(range(epochs)):\n for i in range(0,len(X_train),batch_size):\n X_batch = X_train[i:i+batch_size]\n y_batch = y_train[i:i+batch_size]\n model.to(device)\n preds = model(X_batch)\n loss = criterion(preds,y_batch)\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n model.eval()\n torch.cuda.empty_cache()\n wandb.log({'Loss':(get_loss(model,X_train,y_train,criterion)+get_loss(model,X_batch,y_batch,criterion))/2})\n torch.cuda.empty_cache()\n wandb.log({'Val Loss':get_loss(model,X_test,y_test,criterion)})\n torch.cuda.empty_cache()\n wandb.log({'Acc':(get_accuracy(model,X_train,y_train)+get_accuracy(model,X_batch,y_batch))/2})\n torch.cuda.empty_cache()\n wandb.log({'Val ACC':get_accuracy(model,X_test,y_test)})\n torch.cuda.empty_cache()\n model.train()\nwandb.finish()",
"_____no_output_____"
],
[
"class Model(Module):\n def __init__(self):\n super().__init__()\n self.max_pool2d = MaxPool2d((2,2),(2,2))\n self.activation = ReLU()\n self.conv1 = Conv2d(3,7,(5,5))\n self.conv2 = Conv2d(7,14,(5,5))\n self.conv2bn = BatchNorm2d(14)\n self.conv3 = Conv2d(14,21,(5,5))\n self.linear1 = Linear(21*3*3,256)\n self.linear2 = Linear(256,512)\n self.linear2bn = BatchNorm1d(512)\n self.linear3 = Linear(512,256)\n self.output = Linear(256,4) # 4 bounding box values (xmin,ymin,xmax,ymax); `labels` was undefined\n \n def forward(self,X):\n preds = self.max_pool2d(self.activation(self.conv1(X)))\n preds = self.max_pool2d(self.activation(self.conv2bn(self.conv2(preds))))\n preds = self.max_pool2d(self.activation(self.conv3(preds)))\n preds = preds.view(-1,21*3*3)\n preds = self.activation(self.linear1(preds))\n preds = self.activation(self.linear2bn(self.linear2(preds)))\n preds = self.activation(self.linear3(preds))\n preds = self.output(preds)\n return preds",
"_____no_output_____"
],
[
"model = Model().to(device)\ncriterion = MSELoss()\noptimizer = Adam(model.parameters(),lr=0.001)\nepochs = 100\nbatch_size = 32",
"_____no_output_____"
],
[
"wandb.init(project=PROJECT_NAME,name='baseline-CNN')\nfor _ in tqdm(range(epochs)):\n for i in range(0,len(X_train),batch_size):\n X_batch = X_train[i:i+batch_size]\n y_batch = y_train[i:i+batch_size]\n model.to(device)\n preds = model(X_batch)\n loss = criterion(preds,y_batch)\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n model.eval()\n torch.cuda.empty_cache()\n wandb.log({'Loss':(get_loss(model,X_train,y_train,criterion)+get_loss(model,X_batch,y_batch,criterion))/2})\n torch.cuda.empty_cache()\n wandb.log({'Val Loss':get_loss(model,X_test,y_test,criterion)})\n torch.cuda.empty_cache()\n wandb.log({'Acc':(get_accuracy(model,X_train,y_train)+get_accuracy(model,X_batch,y_batch))/2})\n torch.cuda.empty_cache()\n wandb.log({'Val ACC':get_accuracy(model,X_test,y_test)})\n torch.cuda.empty_cache()\n model.train()\nwandb.finish()",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ab3eb9045935fd4368a7d900cf6b3e34e658272
| 836,121 |
ipynb
|
Jupyter Notebook
|
12-Particle-Filters.ipynb
|
asfaltboy/Kalman-and-Bayesian-Filters-in-Python
|
4669507d7a8274a40cff93a011d34b6171227ea6
|
[
"CC-BY-4.0"
] | 4 |
2017-10-17T06:53:41.000Z
|
2021-04-03T14:16:06.000Z
|
12-Particle-Filters.ipynb
|
plusk01/Kalman-and-Bayesian-Filters-in-Python
|
4669507d7a8274a40cff93a011d34b6171227ea6
|
[
"CC-BY-4.0"
] | null | null | null |
12-Particle-Filters.ipynb
|
plusk01/Kalman-and-Bayesian-Filters-in-Python
|
4669507d7a8274a40cff93a011d34b6171227ea6
|
[
"CC-BY-4.0"
] | 4 |
2017-12-08T09:27:49.000Z
|
2022-02-21T17:14:06.000Z
| 71.038318 | 132,745 | 0.670587 |
[
[
[
"[Table of Contents](http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb)",
"_____no_output_____"
],
[
"# Particle Filters",
"_____no_output_____"
]
],
[
[
"#format the book\n%matplotlib notebook\nfrom __future__ import division, print_function\nfrom book_format import load_style\nload_style()",
"_____no_output_____"
]
],
[
[
"## Motivation\n\nHere is our problem. We have moving objects that we want to track. Maybe the objects are fighter jets and missiles, or maybe we are tracking people playing cricket in a field. It doesn't really matter. Which of the filters that we have learned can handle this problem? Unfortunately, none of them are ideal. Let's think about the characteristics of this problem. \n\n* **multimodal**: We want to track zero, one, or more than one object simultaneously.\n\n* **occlusions**: One object can hide another, resulting in one measurement for multiple objects.\n\n* **nonlinear behavior**: Aircraft are buffeted by winds, balls move in parabolas, and people collide into each other.\n\n* **nonlinear measurements**: Radar gives us the distance to an object. Converting that to an (x,y,z) coordinate requires a square root, which is nonlinear.\n\n* **non-Gaussian noise:** as objects move across a background, computer vision can mistake part of the background for the object. \n\n* **continuous:** the object's position and velocity (i.e. the state space) can smoothly vary over time.\n\n* **multivariate**: we want to track several attributes, such as position, velocity, turn rates, etc.\n\n* **unknown process model**: we may not know the process model of the system.\n\nNone of the filters we have learned work well with all of these constraints. \n\n* **Discrete Bayes filter**: This has most of the attributes. It is multimodal, can handle nonlinear measurements, and can be extended to work with nonlinear behavior. However, it is discrete and univariate.\n\n* **Kalman filter**: The Kalman filter produces optimal estimates for unimodal linear systems with Gaussian noise. None of these are true for our problem.\n\n* **Unscented Kalman filter**: The UKF handles nonlinear, continuous, multivariate problems. However, it is not multimodal nor does it handle occlusions. It can handle noise that is modestly non-Gaussian, but does not do well with distributions that are very non-Gaussian or problems that are very nonlinear.\n\n* **Extended Kalman filter**: The EKF has the same strengths and limitations as the UKF, except that it is even more sensitive to strong nonlinearities and non-Gaussian noise.",
"_____no_output_____"
],
[
"## Monte Carlo Sampling\n\nIn the UKF chapter I generated a plot similar to this to illustrate the effects of nonlinear systems on Gaussians:",
"_____no_output_____"
]
],
[
[
"from code.book_plots import interactive_plot\nimport code.pf_internal as pf_internal\n\nwith interactive_plot():\n pf_internal.plot_monte_carlo_ukf()",
"_____no_output_____"
]
],
[
[
"The left plot shows 3,000 points normally distributed based on the Gaussian\n\n$$\\mu = \\begin{bmatrix}0\\\\0\\end{bmatrix},\\, \\, \\, \\Sigma = \\begin{bmatrix}32&15\\\\15&40\\end{bmatrix}$$\n\nThe right plot shows these points passed through this set of equations:\n\n$$\\begin{aligned}x&=x+y\\\\\ny &= 0.1x^2 + y^2\\end{aligned}$$ \n\nUsing a finite number of randomly sampled points to compute a result is called a [*Monte Carlo*](https://en.wikipedia.org/wiki/Monte_Carlo_method) (MC) method. The idea is simple. Generate enough points to get a representative sample of the problem, run the points through the system you are modeling, and then compute the results on the transformed points. \n\nIn a nutshell this is what particle filtering does. The Bayesian filter algorithm we have been using throughout the book is applied to thousands of particles, where each particle represents a *possible* state for the system. We extract the estimated state from the thousands of particles using weighted statistics of the particles.",
"_____no_output_____"
],
[
"## Generic Particle Filter Algorithm\n\n1. **Randomly generate a bunch of particles**\n \n Particles can have position, heading, and/or whatever other state variable you need to estimate. Each has a weight (probability) indicating how likely it matches the actual state of the system. Initialize each with the same weight.\n \n2. **Predict next state of the particles**\n\n Move the particles based on how you predict the real system is behaving.\n\n3. **Update**\n\n Update the weighting of the particles based on the measurement. Particles that closely match the measurements are weighted higher than particles which don't match the measurements very well.\n \n4. **Resample**\n\n Discard highly improbable particles and replace them with copies of the more probable particles.\n \n5. **Compute Estimate**\n\n Optionally, compute the weighted mean and covariance of the set of particles to get a state estimate.\n\nThis naive algorithm has practical difficulties which we will need to overcome, but this is the general idea. Let's see an example. I wrote a particle filter for the robot localization problem from the UKF and EKF chapters. The robot has steering and velocity control inputs. It has sensors that measure distances to visible landmarks. Both the sensors and control mechanism have noise in them, and we need to track the robot's position.\n\nHere I ran a particle filter and plotted the positions of the particles. The plot on the left is after one iteration, and on the right is after 10. The red 'X' shows the actual position of the robot, and the large circle is the computed weighted mean position.",
"_____no_output_____"
]
],
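The five steps above can be sketched as a toy one-dimensional particle filter. This is an illustration only, with assumed noise parameters and a simple constant-velocity target; it is not the robot localization filter used in this chapter:

```python
import numpy as np

rng = np.random.default_rng(1)

N = 2000
true_x = 0.0
particles = rng.uniform(-10, 10, N)   # 1. randomly generate particles
weights = np.ones(N) / N              #    ...with equal weights

for step in range(20):
    true_x += 1.0                               # target moves +1 per step
    particles += 1.0 + rng.normal(0, 0.2, N)    # 2. predict with process noise
    z = true_x + rng.normal(0, 0.5)             # noisy position measurement
    # 3. update: weight each particle by the Gaussian measurement likelihood
    weights = np.exp(-0.5 * ((particles - z) / 0.5) ** 2)
    weights /= weights.sum()
    # 4. resample: draw particles in proportion to their weights
    idx = rng.choice(N, N, p=weights)
    particles = particles[idx]
    weights = np.ones(N) / N
    # 5. estimate: mean of the (now uniformly weighted) particles
    estimate = particles.mean()

print(abs(estimate - true_x) < 1.0)  # True: the particle cloud tracks the state
```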
[
[
"with interactive_plot():\n pf_internal.show_two_pf_plots()",
"_____no_output_____"
]
],
[
[
"If you are viewing this in a browser, this animation shows the entire sequence:",
"_____no_output_____"
],
[
"<img src='animations/particle_filter_anim.gif'>",
"_____no_output_____"
],
[
"After the first iteration the particles are still largely randomly scattered around the map, but you can see that some have already collected near the robot's position. The computed mean is quite close to the robot's position. This is because each particle is weighted based on how closely it matches the measurement. The robot is near (1,1), so particles that are near (1, 1) will have a high weight because they closely match the measurements. Particles that are far from the robot will not match the measurements, and thus have a very low weight. The estimated position is computed as the weighted mean of positions of the particles. Particles near the robot contribute more to the computation so the estimate is quite accurate.\n\nSeveral iterations later you can see that all the particles have clustered around the robot. This is due to the *resampling* step. Resampling discards particles that are very improbable (very low weight) and replaces them with particles with higher probability. \n\nI haven't fully shown *why* this works nor fully explained the algorithms for particle weighting and resampling, but it should make intuitive sense. Make a bunch of random particles, move them so they 'kind of' follow the robot, weight them according to how well they match the measurements, only let the likely ones live. It seems like it should work, and it does. ",
"_____no_output_____"
],
[
"## Probability distributions via Monte Carlo\n\nSuppose we want to know the area under the curve $y= \\mathrm{e}^{\\sin(x)}$ in the interval [0, $\\pi$]. The area is computed with the definite integral $\\int_0^\\pi \\mathrm{e}^{\\sin(x)}\\, \\mathrm{d}x$. As an exercise, go ahead and find the answer; I'll wait. \n\nIf you are wise you did not take that challenge; $\\mathrm{e}^{\\sin(x)}$ cannot be integrated analytically. The world is filled with equations which we cannot integrate. For example, consider calculating the luminosity of an object. An object reflects some of the light that strikes it. Some of the reflected light bounces off of other objects and restrikes the original object, increasing the luminosity. This creates a *recursive integral*. Good luck with that one.\n\nHowever, integrals are trivial to compute using a Monte Carlo technique. To find the area under a curve, create a bounding box that contains the curve in the desired interval. Generate randomly positioned points within the box, and compute the ratio of points that fall under the curve vs the total number of points. For example, if 40% of the points are under the curve and the area of the bounding box is 1, then the area under the curve is approximately 0.4. As you tend towards infinite points you can achieve any arbitrary precision. In practice, a few thousand points will give you a fairly accurate result.\n\nYou can use this technique to numerically integrate a function of any arbitrary difficulty. This includes non-integrable and noncontinuous functions. This technique was invented by Stanley Ulam at Los Alamos National Laboratory to allow him to perform computations for nuclear reactions which were unsolvable on paper.\n\nLet's compute $\\pi$ by finding the area of a circle. We will define a circle with a radius of 1, and bound it in a square. The side of the square has length 2, so the area is 4. We generate a set of uniformly distributed random points within the box, and count how many fall inside the circle. The area of the circle is computed as the area of the box times the ratio of points inside the circle vs. the total number of points. Finally, we know that $A = \\pi r^2$, so we compute $\\pi = A / r^2$.\n\nWe start by creating the points.\n\n```python\nN = 20000\npts = uniform(-1, 1, (N, 2))\n```\n\nA point is inside a circle if its distance from the center of the circle is less than or equal to the radius. We compute the distance with `numpy.linalg.norm`, which computes the magnitude of a vector. Since the vectors start at (0, 0), calling norm will compute each point's distance from the origin.\n\n```python\ndist = np.linalg.norm(pts, axis=1)\n```\n\nNext we compute which of these distances fit the criteria. This code returns a bool array that contains `True` where the condition `dist <= 1` is met:\n\n```python\nin_circle = dist <= 1\n```\n\nAll that is left is to count the points inside the circle, compute pi, and plot the results. I've put it all in one cell so you can experiment with alternative values for `N`, the number of points.",
"_____no_output_____"
]
],
[
[
"import matplotlib.pyplot as plt\nimport numpy as np\nfrom numpy.random import uniform \n\nN = 20000 # number of points\nradius = 1\narea = (2*radius)**2\n\npts = uniform(-1, 1, (N, 2))\n\n# distance from (0,0) \ndist = np.linalg.norm(pts, axis=1)\nin_circle = dist <= 1\n\npts_in_circle = np.count_nonzero(in_circle)\npi = area * (pts_in_circle / N)\n\n# plot results\nwith interactive_plot():\n plt.scatter(pts[in_circle,0], pts[in_circle,1], \n marker=',', edgecolor='k', s=1)\n plt.scatter(pts[~in_circle,0], pts[~in_circle,1], \n marker=',', edgecolor='r', s=1)\n plt.axis('equal')\n\nprint('mean pi(N={})= {:.4f}'.format(N, pi))\nprint('err pi(N={})= {:.4f}'.format(N, np.pi-pi))",
"_____no_output_____"
]
],
[
[
"This insight leads us to the realization that we can use Monte Carlo to compute the probability density of any probability distribution. For example, suppose we have this Gaussian:",
"_____no_output_____"
]
],
[
[
"from filterpy.stats import plot_gaussian_pdf\nwith interactive_plot():\n plot_gaussian_pdf(mean=2, variance=3);",
"_____no_output_____"
]
],
[
[
"The area under the probability density function (PDF) gives the probability that the random value falls between two values. For example, we may want to know the probability of x being between 0 and 2 in the graph above. This is a continuous function, so we need to take the integral to find the area under the curve, as the area is equal to the probability for that range of values to occur. \n \n$$P[a \\le X \\le b] = \\int_a^b f_X(x) \\, dx$$\n\nIt is easy to compute this integral for a Gaussian. But real life is not so easy. For example, the plot below shows a probability distribution. There is no way to analytically describe an arbitrary curve, let alone integrate it.",
"_____no_output_____"
]
],
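[
[
"We can verify this numerically in a case where the analytic answer is available. The sketch below (my own illustration, not part of the book's support code) estimates $P[0 \\le X \\le 2]$ for the Gaussian plotted above by drawing samples and counting how many land in the interval, then compares the result against the exact value from the CDF.

```python
import numpy as np
from scipy.stats import norm

np.random.seed(1)
dist = norm(2., np.sqrt(3.))   # the Gaussian above: mean=2, variance=3

# exact probability from the CDF
exact = dist.cdf(2) - dist.cdf(0)

# Monte Carlo: fraction of random draws that land in [0, 2]
samples = dist.rvs(1_000_000)
mc = np.mean((samples >= 0) & (samples <= 2))

print(exact, mc)   # the two agree to about three decimal places
```
",
"_____no_output_____"
]
],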
[
[
"with interactive_plot():\n pf_internal.plot_random_pd()",
"_____no_output_____"
]
],
[
[
"We can use Monte Carlo methods to compute any integral. Probabilities are computed by integrating the PDF, hence we can compute probabilities for this curve using Monte Carlo methods. ",
"_____no_output_____"
],
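[
"As a concrete check, here is my own sketch applying the same hit-or-miss technique to the 'impossible' integral from the start of this section, $\\int_0^\\pi \\mathrm{e}^{\\sin(x)}\\, \\mathrm{d}x$. Since $\\mathrm{e}^{\\sin(x)}$ never exceeds $\\mathrm{e}$ on the interval, a bounding box of $[0, \\pi] \\times [0, \\mathrm{e}]$ with area $\\pi \\mathrm{e}$ suffices.

```python
import numpy as np
from numpy.random import uniform

np.random.seed(1)
N = 100_000

# bounding box [0, pi] x [0, e]
xs = uniform(0, np.pi, N)
ys = uniform(0, np.e, N)

# count the points that fall under the curve
under = ys <= np.exp(np.sin(xs))
area = (np.pi * np.e) * np.count_nonzero(under) / N

print(area)   # the true value is about 6.21
```
",
"_____no_output_____"
],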
[
"## The Particle Filter\n\nAll of this brings us to the particle filter. Consider tracking a robot or a car in an urban environment. For consistency I will use the robot localization problem from the EKF and UKF chapters. In this problem we tracked a robot that has a sensor which measures the range and bearing to known landmarks. \n\nParticle filters are a family of algorithms. I'm presenting a specific form of a particle filter that is intuitive to grasp and relates to the problems we have studied in this book. This will leave a few of the steps seeming a bit 'magical' since I haven't offered a full explanation. That will follow later in the chapter.\n\nTaking insight from the discussion in the previous section we start by creating several thousand *particles*. Each particle has a position that represents a possible belief of where the robot is in the scene, and perhaps a heading and velocity. Suppose that we have no knowledge of the location of the robot. We would want to scatter the particles uniformly over the entire scene. If you think of all of the particles representing a probability distribution, locations where there are more particles represent a higher belief, and locations with fewer particles represent a lower belief. If there were a large clump of particles near a specific location, that would imply that we were more certain that the robot is there.\n\nEach particle needs a weight - ideally the probability that it represents the true position of the robot. This probability is rarely computable, so we only require it be *proportional* to that probability, which is computable. At initialization we have no reason to favor one particle over another, so we assign a weight of $1/N$, for $N$ particles. We use $1/N$ so that the sum of all probabilities equals one.\n\nThe combination of particles and weights forms the *probability distribution* for our problem. Think back to the *Discrete Bayes* chapter. In that chapter we modeled positions in a hallway as discrete and uniformly spaced. This is very similar except the particles are randomly distributed in a continuous space rather than constrained to discrete locations. In this problem the robot can move on a plane of some arbitrary dimension, with the lower right corner at (0,0).\n\nTo track our robot we need to maintain states for x, y, and heading. We will store `N` particles in a `(N, 3)` shaped array. The three columns contain x, y, and heading, in that order. \n\nIf you are passively tracking something (no control input), then you would need to include velocity in the state and use that estimate to make the prediction. More dimensions require exponentially more particles to form a good estimate, so we always try to minimize the number of random variables in the state.\n\nThis code creates a uniform and Gaussian distribution of particles over a region:",
"_____no_output_____"
]
],
[
[
"from numpy.random import uniform\n\ndef create_uniform_particles(x_range, y_range, hdg_range, N):\n particles = np.empty((N, 3))\n particles[:, 0] = uniform(x_range[0], x_range[1], size=N)\n particles[:, 1] = uniform(y_range[0], y_range[1], size=N)\n particles[:, 2] = uniform(hdg_range[0], hdg_range[1], size=N)\n particles[:, 2] %= 2 * np.pi\n return particles\n\ndef create_gaussian_particles(mean, std, N):\n particles = np.empty((N, 3))\n particles[:, 0] = mean[0] + (randn(N) * std[0])\n particles[:, 1] = mean[1] + (randn(N) * std[1])\n particles[:, 2] = mean[2] + (randn(N) * std[2])\n particles[:, 2] %= 2 * np.pi\n return particles",
"_____no_output_____"
]
],
[
[
"For example:",
"_____no_output_____"
]
],
[
[
"create_uniform_particles((0,1), (0,1), (0, np.pi*2), 4)",
"_____no_output_____"
]
],
[
[
"### Predict Step\n\nThe predict step in the Bayes algorithm uses the process model to update the belief in the system state. How would we do that with particles? Each particle represents a possible position for the robot. Suppose we send a command to the robot to move 0.1 meters while turning by 0.007 radians. We could move each particle by this amount. If we did that we would soon run into a problem. The robot's controls are not perfect so it will not move exactly as commanded. Therefore we need to add noise to the particles' movements to have a reasonable chance of capturing the actual movement of the robot. If you do not model the uncertainty in the system the particle filter will not correctly model the probability distribution of our belief in the robot's position.",
"_____no_output_____"
]
],
[
[
"def predict(particles, u, std, dt=1.):\n    \"\"\" move according to control input u (heading change, velocity)\n    with noise Q (std heading change, std velocity)\"\"\"\n\n    N = len(particles)\n    # update heading\n    particles[:, 2] += u[0] + (randn(N) * std[0])\n    particles[:, 2] %= 2 * np.pi\n\n    # move in the (noisy) commanded direction\n    dist = (u[1] * dt) + (randn(N) * std[1])\n    particles[:, 0] += np.cos(particles[:, 2]) * dist\n    particles[:, 1] += np.sin(particles[:, 2]) * dist",
"_____no_output_____"
]
],
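[
[
"Before using `predict()` in the filter we can sanity check it. This is my own test, not part of the filter: with the noise standard deviations set to zero the motion is deterministic, so particles at the origin with heading 0, commanded to move 1 unit, must land exactly at (1, 0). The function is repeated here so the snippet runs standalone.

```python
import numpy as np
from numpy.random import randn

# predict() repeated from above so this snippet runs standalone
def predict(particles, u, std, dt=1.):
    N = len(particles)
    particles[:, 2] += u[0] + (randn(N) * std[0])
    particles[:, 2] %= 2 * np.pi
    dist = (u[1] * dt) + (randn(N) * std[1])
    particles[:, 0] += np.cos(particles[:, 2]) * dist
    particles[:, 1] += np.sin(particles[:, 2]) * dist

particles = np.zeros((4, 3))      # four particles at (0, 0), heading 0
predict(particles, u=(0., 1.), std=(0., 0.))
print(particles[:, :2])           # every particle is at (1, 0)
```
",
"_____no_output_____"
]
],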
[
[
"### Update Step\n\nNext we get a set of measurements - one for each landmark currently in view. How should these measurements be used to alter our probability distribution as modeled by the particles?\n\nThink back to the **Discrete Bayes** chapter. In that chapter we modeled positions in a hallway as discrete and uniformly spaced. We assigned a probability to each position which we called the *prior*. When a new measurement came in we multiplied the current probability of that position (the *prior*) by the *likelihood* that the measurement matched that location:\n\n```python\ndef update(likelihood, prior):\n posterior = prior * likelihood\n return normalize(posterior)\n```\n\nwhich is an implementation of the equation\n\n$$x = \\| \\mathcal L \\bar x \\|$$\n\nwhich is a realization of Bayes theorem:\n\n$$\\begin{aligned}P(x \\mid z) &= \\frac{P(z \\mid x)\\, P(x)}{P(z)} \\\\\n &= \\frac{\\mathtt{likelihood}\\times \\mathtt{prior}}{\\mathtt{normalization}}\\end{aligned}$$",
"_____no_output_____"
],
[
"We do the same with our particles. Each particle has a position and a weight which estimates how well it matches the measurement. Normalizing the weights so they sum to one turns them into a probability distribution. The particles that are closest to the robot will generally have a higher weight than ones far from the robot.",
"_____no_output_____"
]
],
[
[
"def update(particles, weights, z, R, landmarks):\n weights.fill(1.)\n for i, landmark in enumerate(landmarks):\n distance = np.linalg.norm(particles[:, 0:2] - landmark, axis=1)\n weights *= scipy.stats.norm(distance, R).pdf(z[i])\n\n weights += 1.e-300 # avoid round-off to zero\n weights /= sum(weights) # normalize",
"_____no_output_____"
]
],
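[
[
"To see the weighting in action, here is a small standalone check of my own: one landmark, three particles, and one of the particles placed at the robot's true position. After the update, the particle whose distance to the landmark matches the measurement holds nearly all of the weight. The function is repeated so the snippet runs on its own.

```python
import numpy as np
import scipy.stats

# update() repeated from above so this snippet runs standalone
def update(particles, weights, z, R, landmarks):
    weights.fill(1.)
    for i, landmark in enumerate(landmarks):
        distance = np.linalg.norm(particles[:, 0:2] - landmark, axis=1)
        weights *= scipy.stats.norm(distance, R).pdf(z[i])
    weights += 1.e-300   # avoid round-off to zero
    weights /= sum(weights)

landmarks = np.array([[0., 0.]])
# particle 0 sits at the robot's true position (3, 4): distance to landmark = 5
particles = np.array([[3., 4., 0.], [10., 0., 0.], [0., 9., 0.]])
weights = np.zeros(3)
update(particles, weights, z=[5.], R=1., landmarks=landmarks)
print(weights)   # weight of particle 0 dominates
```
",
"_____no_output_____"
]
],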
[
[
"In the literature this part of the algorithm is called *Sequential Importance Sampling*, or SIS. The equation for the weights is called the *importance density*. I will give these theoretical underpinnings in a following section. For now I hope that this makes intuitive sense. If we weight the particles according to how well they match the measurements they are probably a good sample for the probability distribution of the system after incorporating the measurements. Theory proves this is so. The weights are the *likelihood* in Bayes theorem. Different problems will need to tackle this step in slightly different ways but this is the general idea.",
"_____no_output_____"
],
[
"### Computing the State Estimate\n\nIn most applications you will want to know the estimated state after each update, but the filter consists of nothing but a collection of particles. Assuming that we are tracking one object (i.e. it is unimodal) we can compute the mean of the estimate as the weighted sum of the particle values. Since the weights are normalized to sum to one, no further division is needed:\n\n$$ \\mu = \\sum\\limits_{i=1}^N w^ix^i$$\n\nHere I adopt the notation $x^i$ to indicate the i$^{th}$ particle. A superscript is used because we often need subscripts to denote the time step, as in the k$^{th}$ or k+1$^{th}$ value of the i$^{th}$ particle, yielding the unwieldy $x^i_{k+1}$. \n\nThis function computes both the mean and variance of the particles:",
"_____no_output_____"
]
],
[
[
"def estimate(particles, weights):\n \"\"\"returns mean and variance of the weighted particles\"\"\"\n\n pos = particles[:, 0:2]\n mean = np.average(pos, weights=weights, axis=0)\n var = np.average((pos - mean)**2, weights=weights, axis=0)\n return mean, var",
"_____no_output_____"
]
],
[
[
"If we create a uniform distribution of points in a 1x1 square with equal weights we get a mean position very near the center of the square at (0.5, 0.5) and a small variance.",
"_____no_output_____"
]
],
[
[
"particles = create_uniform_particles((0,1), (0,1), (0, 5), 1000)\nweights = np.array([.25]*1000)\nestimate(particles, weights)",
"_____no_output_____"
]
],
[
[
"### Particle Resampling\n\nThe SIS algorithm suffers from the *degeneracy problem*. It starts with uniformly distributed particles with equal weights. There may only be a handful of particles near the robot. As the algorithm runs any particle that does not match the measurements will acquire an extremely low weight. Only the particles which are near the robot will have an appreciable weight. We could have 5,000 particles with only 3 contributing meaningfully to the state estimate! We say the filter has *degenerated*.\n\nThis problem is usually solved by some form of *resampling* of the particles. Particles with very small weights do not meaningfully describe the probability distribution of the robot. \n\nThe resampling algorithm discards particles with very low probability and replaces them with new particles with higher probability. It does that by duplicating particles with relatively high probability. The duplicates are slightly dispersed by the noise added in the predict step. This results in a set of points in which a large majority of the particles accurately represent the probability distribution.\n\nThere are many resampling algorithms. For now let's look at one of the simplest, *simple random resampling*, also called *multinomial resampling*. It samples from the current particle set $N$ times, making a new set of particles from the sample. The probability of selecting any given particle should be proportional to its weight.\n\nWe accomplish this with NumPy's `cumsum` function. `cumsum` computes the cumulative sum of an array. That is, element one is the sum of elements zero and one, element two is the sum of elements zero, one and two, etc. Then we generate random numbers in the range of 0.0 to 1.0 and do a binary search to find the weight that most closely matches that number:",
"_____no_output_____"
]
],
[
[
"from numpy.random import random\n\ndef simple_resample(particles, weights):\n    N = len(particles)\n    cumulative_sum = np.cumsum(weights)\n    cumulative_sum[-1] = 1. # avoid round-off error\n    indexes = np.searchsorted(cumulative_sum, random(N))\n\n    # resample according to indexes\n    particles[:] = particles[indexes]\n    weights[:] = weights[indexes]\n    weights /= np.sum(weights) # normalize",
"_____no_output_____"
]
],
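[
[
"Here is a standalone check (my own) that this scheme selects particles roughly in proportion to their weights: give one particle 90% of the total weight and it should account for about 90% of the resampled indexes.

```python
import numpy as np
from numpy.random import random

np.random.seed(1)
N = 10_000

weights = np.full(N, .1 / (N - 1))   # the other particles share 10%
weights[0] = .9                      # particle 0 carries 90% of the weight

cumulative_sum = np.cumsum(weights)
cumulative_sum[-1] = 1.              # avoid round-off error
indexes = np.searchsorted(cumulative_sum, random(N))

frac = np.count_nonzero(indexes == 0) / N
print(frac)   # close to 0.9
```
",
"_____no_output_____"
]
],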
[
[
"We don't resample at every epoch. For example, if you received no new measurements you have not received any information from which the resample can benefit. We can determine when to resample by using something called the *effective N*, which approximately measures the number of particles which meaningfully contribute to the probability distribution. The equation for this is\n\n$$\\hat{N}_\\text{eff} = \\frac{1}{\\sum w^2}$$\n\nand we can implement this in Python with",
"_____no_output_____"
]
],
[
[
"def neff(weights):\n return 1. / np.sum(np.square(weights))",
"_____no_output_____"
]
],
[
[
"If $\\hat{N}_\\text{eff}$ falls below some threshold it is time to resample. A useful starting point is $N/2$, but this varies by problem. It is also possible to get $\\hat{N}_\\text{eff} = N$ even when the particle set has collapsed to duplicates of a single point; the weights are all equal, but the particles carry no diversity. It may not be theoretically pure, but if that happens I create a new distribution of particles in the hopes of generating particles with more diversity. If this happens to you often, you may need to increase the number of particles, or otherwise adjust your filter. We will talk more of this later.",
"_____no_output_____"
],
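[
"The two boundary cases are easy to check (my own demonstration): perfectly uniform weights give $\\hat{N}_\\text{eff} = N$, while a single particle holding all the weight gives $\\hat{N}_\\text{eff} = 1$. Note that $\\hat{N}_\\text{eff}$ measures only weight diversity, so $N$ identical copies of one particle with equal weights also score $\\hat{N}_\\text{eff} = N$.

```python
import numpy as np

# neff() repeated from above so this snippet runs standalone
def neff(weights):
    return 1. / np.sum(np.square(weights))

N = 1000
uniform_w = np.full(N, 1. / N)
print(neff(uniform_w))        # N: every particle contributes

degenerate_w = np.zeros(N)
degenerate_w[0] = 1.
print(neff(degenerate_w))     # 1: a single particle dominates
```
",
"_____no_output_____"
],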
[
"## SIR Filter - A Complete Example\n\nThere is more to learn, but we know enough to implement a full particle filter. We will implement the *Sampling Importance Resampling filter*, or SIR.\n\nI need to introduce a more sophisticated resampling method than I gave above. FilterPy provides several resampling methods. I will describe them later. They take an array of weights and return the indexes of the particles that have been chosen for the resampling. We just need to write a function that performs the resampling from these indexes:",
"_____no_output_____"
]
],
[
[
"def resample_from_index(particles, weights, indexes):\n particles[:] = particles[indexes]\n weights[:] = weights[indexes]\n weights /= np.sum(weights) ",
"_____no_output_____"
]
],
[
[
"To implement the filter we need to create the particles and the landmarks. We then execute a loop, successively calling `predict`, `update`, resampling, and then computing the new state estimate with `estimate`.",
"_____no_output_____"
]
],
[
[
"from filterpy.monte_carlo import systematic_resample\nfrom numpy.linalg import norm\nfrom numpy.random import randn\nimport scipy.stats\n\ndef run_pf1(N, iters=18, sensor_std_err=.1, \n do_plot=True, plot_particles=False,\n xlim=(0, 20), ylim=(0, 20),\n initial_x=None):\n landmarks = np.array([[-1, 2], [5, 10], [12,14], [18,21]])\n NL = len(landmarks)\n \n plt.figure()\n \n # create particles and weights\n if initial_x is not None:\n particles = create_gaussian_particles(\n mean=initial_x, std=(5, 5, np.pi/4), N=N)\n else:\n particles = create_uniform_particles((0,20), (0,20), (0, 6.28), N)\n weights = np.zeros(N)\n\n if plot_particles:\n alpha = .20\n if N > 5000:\n alpha *= np.sqrt(5000)/np.sqrt(N) \n plt.scatter(particles[:, 0], particles[:, 1], \n alpha=alpha, color='g')\n \n xs = []\n robot_pos = np.array([0., 0.])\n for x in range(iters):\n robot_pos += (1, 1)\n\n # distance from robot to each landmark\n zs = (norm(landmarks - robot_pos, axis=1) + \n (randn(NL) * sensor_std_err))\n\n # move diagonally forward to (x+1, x+1)\n predict(particles, u=(0.00, 1.414), std=(.2, .05))\n \n # incorporate measurements\n update(particles, weights, z=zs, R=sensor_std_err, \n landmarks=landmarks)\n \n # resample if too few effective particles\n if neff(weights) < N/2:\n indexes = systematic_resample(weights)\n resample_from_index(particles, weights, indexes)\n\n mu, var = estimate(particles, weights)\n xs.append(mu)\n\n if plot_particles:\n plt.scatter(particles[:, 0], particles[:, 1], \n color='k', marker=',', s=1)\n p1 = plt.scatter(robot_pos[0], robot_pos[1], marker='+',\n color='k', s=180, lw=3)\n p2 = plt.scatter(mu[0], mu[1], marker='s', color='r')\n \n xs = np.array(xs)\n #plt.plot(xs[:, 0], xs[:, 1])\n plt.legend([p1, p2], ['Actual', 'PF'], loc=4, numpoints=1)\n plt.xlim(*xlim)\n plt.ylim(*ylim)\n print('final position error, variance:\\n\\t', mu, var)\n\nfrom numpy.random import seed\nseed(2) \nrun_pf1(N=5000, plot_particles=False)",
"_____no_output_____"
]
],
[
[
"Most of this code is devoted to initialization and plotting. The entirety of the particle filter processing consists of these lines:\n\n```python\n# move diagonally forward to (x+1, x+1)\npredict(particles, u=(0.00, 1.414), std=(.2, .05))\n\n# incorporate measurements\nupdate(particles, weights, z=zs, R=sensor_std_err, \n       landmarks=landmarks)\n\n# resample if too few effective particles\nif neff(weights) < N/2:\n    indexes = systematic_resample(weights)\n    resample_from_index(particles, weights, indexes)\n\nmu, var = estimate(particles, weights)\n```\n\nThe first line predicts the position of the particles with the assumption that the robot is moving in a straight line (`u[0] == 0`) and moving 1 unit along both the x and y axes (`u[1]==1.414`). The standard deviation for the error in the turn is 0.2, and the standard deviation for the distance is 0.05. When this call returns the particles will all have been moved forward, but the weights are no longer correct as they have not been updated.\n\nThe next line incorporates the measurement into the filter. This does not alter the particle positions, it only alters the weights. If you recall, the weight of the particle is computed as the probability that it matches the Gaussian of the sensor error model. The further the particle is from the measured distance the less likely it is to be a good representation.\n\nThe final two lines examine the effective particle count ($\\hat{N}_\\text{eff}$). If it falls below $N/2$ we perform resampling to try to ensure our particles form a good representation of the actual probability distribution.\n\nNow let's look at this with all the particles plotted. Seeing this happen interactively is much more instructive, but this format still gives us useful information. I plotted the original random distribution of points in a very pale green and large circles to help distinguish them from the subsequent iterations where the particles are plotted with black pixels. The number of particles makes it hard to see the details, so I limited the number of iterations to 8 so we can zoom in and look more closely.",
"_____no_output_____"
]
],
[
[
"seed(2)\nrun_pf1(N=5000, iters=8, plot_particles=True, \n xlim=(0,8), ylim=(0,8))",
"_____no_output_____"
]
],
[
[
"From the plot it looks like there are only a few particles at the first two robot positions. This is not true; there are 5,000 particles, but due to resampling most are duplicates of each other. The reason for this is the Gaussian for the sensor is very narrow. This is called *sample impoverishment* and can lead to filter divergence. I'll address this in detail below. For now, looking at the second step at x=2 we can see that the particles have dispersed a bit. This dispersion is due to the motion model noise. All particles are projected forward according to the control input `u`, but noise is added to each particle proportional to the error in the control mechanism in the robot. By the third step the particles have dispersed enough to make a convincing cloud of particles around the robot. \n\nThe shape of the particle cloud is an ellipse. This is not a coincidence. The sensors and robot control are both modeled as Gaussian, so the probability distribution of the system is also a Gaussian. The particle filter is a sampling of the probability distribution, so the cloud should be an ellipse.\n\nIt is important to recognize that the particle filter algorithm *does not require* the sensors or system to be Gaussian or linear. Because we represent the probability distribution with a cloud of particles we can handle any probability distribution and strongly nonlinear problems. There can be discontinuities and hard limits in the probability model. ",
"_____no_output_____"
],
[
"### Effect of Sensor Errors on the Filter\n\nThe first few iterations of the filter resulted in many duplicate particles. This happens because the model for the sensors is Gaussian, and we gave it a small standard deviation of $\\sigma=0.1$. This is counterintuitive at first. The Kalman filter performs better when the noise is smaller, yet the particle filter can perform worse.\n\nWe can reason about why this is true. If $\\sigma=0.1$, the robot is at (1, 1), and a particle is at (2, 2), then the particle is 14 standard deviations away from the robot. This gives it a near zero probability. It contributes nothing to the estimate of the mean, and it is extremely unlikely to survive after the resampling. If $\\sigma=1.4$ then the particle is only $1\\sigma$ away and thus it will contribute to the estimate of the mean. During resampling it is likely to be copied one or more times.\n\nThis is *very important* to understand - a very accurate sensor can lead to poor performance of the filter because few of the particles will be a good sample of the probability distribution. There are a few fixes available to us. First, we can artificially increase the sensor noise standard deviation so the particle filter will accept more points as matching the robot's probability distribution. This is non-optimal because some of those points will be a poor match. The real problem is that there aren't enough points being generated such that enough are near the robot. Increasing `N` usually fixes this problem. This decision is not cost free as increasing the number of particles significantly increases the computation time. Still, let's look at the result of using 100,000 particles.",
"_____no_output_____"
]
],
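[
[
"Before running it, we can check the arithmetic behind the $\\sigma$ argument above (my own check): under the measurement model the weight is a Gaussian density evaluated at the particle's distance error, so a particle 1.414 units off receives an astronomically smaller weight when $\\sigma=0.1$ than when $\\sigma=1.4$.

```python
import scipy.stats

err = 1.414   # particle at (2, 2) when the robot is at (1, 1)

w_small = scipy.stats.norm(0, 0.1).pdf(err)   # sigma = 0.1: ~14 sigma out
w_large = scipy.stats.norm(0, 1.4).pdf(err)   # sigma = 1.4: ~1 sigma out

print(w_small, w_large)   # w_small is vanishingly tiny; w_large is appreciable
```
",
"_____no_output_____"
]
],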
[
[
"seed(2) \nrun_pf1(N=100000, iters=8, plot_particles=True, \n xlim=(0,8), ylim=(0,8))",
"_____no_output_____"
]
],
[
[
"There are many more particles at x=1, and we have a convincing cloud at x=2. Clearly the filter is performing better, but at the cost of large memory usage and long run times.\n\nAnother approach is to be smarter about generating the initial particle cloud. Suppose we guess that the robot is near (0, 0). This is not exact, as the simulation actually places the robot at (1, 1), but it is close. If we create a normally distributed cloud near (0, 0) there is a much greater chance of the particles matching the robot's position.\n\n`run_pf1()` has an optional parameter `initial_x`. Use this to specify the initial position guess for the robot. The code then uses `create_gaussian_particles(mean, std, N)` to create particles distributed normally around the initial guess. We will use this in the next section.",
"_____no_output_____"
],
[
"### Filter Degeneracy From Inadequate Samples\n\nThe filter as written is far from perfect. Here is how it performs with a different random seed.",
"_____no_output_____"
]
],
[
[
"seed(6) \nrun_pf1(N=5000, plot_particles=True, ylim=(-20, 20))",
"_____no_output_____"
]
],
[
[
"Here the initial sample of points did not generate any points near the robot. The particle filter does not create new points during the resample operation, so it ends up duplicating points which are not a representative sample of the probability distribution. As mentioned earlier this is called *sample impoverishment*. The problem quickly spirals out of control. The particles are not a good match for the landscape measurement so they become dispersed in a highly nonlinear, curved distribution, and the particle filter diverges from reality. No particles are available near the robot, so it cannot ever converge.\n\nLet's make use of the `create_gaussian_particles()` method to try to generate more points near the robot. We can do this by using the `initial_x` parameter to specify a location to create the particles.",
"_____no_output_____"
]
],
[
[
"seed(6) \nrun_pf1(N=5000, plot_particles=True, initial_x=(1,1, np.pi/4))",
"_____no_output_____"
]
],
[
[
"This works great. You should always try to create particles near the initial position if you have any way to roughly estimate it. Do not be *too* careful - if you generate all the points very near a single position the particles may not be dispersed enough to capture the nonlinearities in the system. This is a fairly linear system, so we could get away with a smaller variance in the distribution. Clearly this depends on your problem. Increasing the number of particles is always a good way to get a better sample, but the processing cost may be a higher price than you are willing to pay.",
"_____no_output_____"
],
[
"## Importance Sampling\n\nI've hand-waved away a difficulty which we must now confront. There is some probability distribution that describes the position and movement of our robot. We want to draw a sample of particles from that distribution and compute the integral using MC methods. \n\nOur difficulty is that in many problems we don't know the distribution. For example, the tracked object might move very differently than we predicted with our state model. How can we draw a sample from a probability distribution that is unknown? \n\nThere is a theorem from statistics called [*importance sampling*](https://en.wikipedia.org/wiki/Importance_sampling)[1]. Remarkably, it gives us a way to draw samples from a different and known probability distribution and use those to compute the properties of the unknown one. It's a fantastic theorem that brings joy to my heart. \n\nThe idea is simple, and we already used it. We draw samples from the known probability distribution, but *weight the samples* according to the distribution we are interested in. We can then compute properties such as the mean and variance by computing the weighted mean and weighted variance of the samples.\n\nFor the robot localization problem we drew samples from the probability distribution that we computed from our state model prediction step. In other words, we reasoned 'the robot was there, it is perhaps moving in this direction and speed, hence it might be here'. Yet the robot might have done something completely different. It may have fallen off a cliff or been hit by a mortar round. In each case the probability distribution is not correct. It seems like we are stymied, but we are not because we can use importance sampling. We drew particles from that likely incorrect probability distribution, then weighted them according to how well the particles match the measurements. 
That weighting is based on the true probability distribution, so according to the theory the resulting mean, variance, etc, will be correct!\n\nHow can that be true? I'll give you the math; you can safely skip this if you don't plan to go beyond the robot localization problem. However, other particle filter problems require different approaches to importance sampling, and a bit of math helps. Also, the literature and much of the content on the web uses the mathematical formulation in favor of my rather imprecise \"imagine that...\" exposition. If you want to understand the literature you will need to know the following equations.\n\nWe have some probability distribution $\\pi(x)$ which we want to take samples from. However, we don't know what $\\pi(x)$ is; instead we only know an alternative probability distribution $q(x)$. In the context of robot localization, $\\pi(x)$ is the probability distribution for the robot, but we don't know it, and $q(x)$ is the probability distribution of our measurements, which we do know.\n\nThe expected value of a function $f(x)$ with probability distribution $\\pi(x)$ is\n\n$$\\mathbb{E}\\big[f(x)\\big] = \\int f(x)\\pi(x)\\, dx$$\n\nWe don't know $\\pi(x)$ so we cannot compute this integral. We do know an alternative distribution $q(x)$ so we can add it into the integral without changing the value with\n\n$$\\mathbb{E}\\big[f(x)\\big] = \\int f(x)\\pi(x)\\frac{q(x)}{q(x)}\\, dx$$\n\nNow we rearrange and group terms\n\n$$\\mathbb{E}\\big[f(x)\\big] = \\int f(x)q(x)\\, \\, \\cdot \\, \\frac{\\pi(x)}{q(x)}\\, dx$$\n\n$q(x)$ is known to us, so we can compute $\\int f(x)q(x)$ using MC integration. That leaves us with $\\pi(x)/q(x)$. That is a ratio, and we define it as a *weight*. This gives us\n\n$$\\mathbb{E}\\big[f(x)\\big] = \\sum\\limits_{i=1}^N f(x^i)w(x^i)$$\n\nMaybe that seems a little abstract. 
If we want to compute the mean of the particles we would compute\n\n$$\\mu = \\sum\\limits_{i=1}^N x^iw^i$$\n\nwhich is the equation I gave you earlier in the chapter.\n\nIt is required that the weights be proportional to the ratio $\\pi(x)/q(x)$. We normally do not know the exact value, so in practice we normalize the weights by dividing them by $\\sum w(x^i)$.\n\nWhen you formulate a particle filter algorithm you will have to implement this step depending on the particulars of your situation. For robot localization the best distribution to use for $q(x)$ is the particle distribution from the `predict()` step of the filter. Let's look at the code again:\n\n```python\ndef update(particles, weights, z, R, landmarks):\n weights.fill(1.)\n for i, landmark in enumerate(landmarks):\n dist = np.linalg.norm(particles[:, 0:2] - landmark, axis=1)\n weights *= scipy.stats.norm(dist, R).pdf(z[i])\n\n weights += 1.e-300 # avoid round-off to zero\n weights /= sum(weights) # normalize\n```\n \nThe reason for `self.weights.fill(1.)` might have confused you. In all the Bayesian filters up to this chapter we started with the probability distribution created by the `predict` step, and this appears to discard that information by setting all of the weights to 1. Well, we are discarding the weights, but we do not discard the particles. That is a direct result of applying importance sampling - we draw from the known distribution, but weight by the unknown distribution. In this case our known distribution is the uniform distribution - all are weighted equally.\n\nOf course if you can compute the posterior probability distribution from the prior you should do so. If you cannot, then importance sampling gives you a way to solve this problem. In practice, computing the posterior is incredibly difficult. The Kalman filter became a spectacular success because it took advantage of the properties of Gaussians to find an analytic solution. 
Once we relax the conditions required by the Kalman filter (Markov property, Gaussian measurements and process), importance sampling and Monte Carlo methods make the problem tractable.",
"_____no_output_____"
],
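As a concrete check of the weighted-mean equation above, here is a small numerical sketch (mine, not from the chapter): we estimate the mean of a target distribution $\pi(x) = \mathcal{N}(2,\, 1)$ using only samples drawn from a different, known proposal $q(x) = \mathcal{N}(0,\, 2)$, weighting each sample by $\pi(x)/q(x)$.

```python
import numpy as np

rng = np.random.default_rng(1)

def normal_pdf(x, mu, sigma):
    # Gaussian density, written out to keep the sketch dependency-free
    return np.exp(-0.5 * ((x - mu) / sigma)**2) / (sigma * np.sqrt(2 * np.pi))

N = 100_000
x = rng.normal(0., 2., N)                          # draw from the known q(x)
w = normal_pdf(x, 2., 1.) / normal_pdf(x, 0., 2.)  # importance weights pi/q
w /= np.sum(w)                                     # normalize

mu_est = np.sum(x * w)   # weighted mean, E[x] under pi(x)
print(mu_est)            # close to the true mean of 2
```

Even though not a single sample was drawn from $\pi(x)$, the weighted mean converges to its true mean.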
[
"## Resampling Methods\n\nThe resampling algorithm affects the performance of the filter. For example, suppose we resampled particles by picking particles at random. This would lead us to choose many particles with a very low weight, and the resulting set of particles would be a terrible representation of the problem's probability distribution. \n\nResearch on the topic continues, but a handful of algorithms work well in practice across a wide variety of situations. We desire an algorithm that has several properties. It should preferentially select particles that have a higher probability. It should select a representative population of the higher probability particles to avoid sample impoverishment. It should include enough lower probability particles to give the filter a chance of detecting strongly nonlinear behavior. \n\nFilterPy implements several of the popular algorithms. FilterPy doesn't know how your particle filter is implemented, so it cannot generate the new samples. Instead, the algorithms create a `numpy.array` containing the indexes of the particles that are chosen. Your code needs to perform the resampling step. For example, I used this for the robot:",
"_____no_output_____"
]
],
[
[
"def resample_from_index(particles, weights, indexes):\n particles[:] = particles[indexes]\n weights[:] = weights[indexes]\n weights /= np.sum(weights)",
"_____no_output_____"
]
],
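A quick illustration of how `resample_from_index` behaves (the toy numbers are mine): the chosen particles are duplicated in place and their weights renormalized.

```python
import numpy as np

# The function is repeated here so the snippet is self-contained.
def resample_from_index(particles, weights, indexes):
    particles[:] = particles[indexes]
    weights[:] = weights[indexes]
    weights /= np.sum(weights)

particles = np.array([[0., 0.], [1., 1.], [2., 2.], [3., 3.]])
weights = np.array([.1, .1, .2, .6])
indexes = np.array([3, 3, 2, 3])   # what a resampling algorithm might return

resample_from_index(particles, weights, indexes)
print(particles)   # particle 3 is duplicated three times
print(weights)     # renormalized copies of the selected weights
```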
[
[
"### Multinomial Resampling\n\nMultinomial resampling is the algorithm that I used while developing the robot localization example. The idea is simple. Compute the cumulative sum of the normalized weights. This gives you an array of increasing values from 0 to 1. Here is a plot which illustrates how this spaces out the weights. The colors are meaningless; they just make the divisions easier to see.",
"_____no_output_____"
]
],
[
[
"from code.pf_internal import plot_cumsum\nprint('cumulative sum is', np.cumsum([.1, .2, .1, .6]))\nplot_cumsum([.1, .2, .1, .6])",
"cumulative sum is [ 0.1 0.3 0.4 1.0]\n"
]
],
[
[
"To select a weight we generate a random number uniformly selected between 0 and 1 and use binary search to find its position inside the cumulative sum array. Large weights occupy more space than low weights, so they will be more likely to be selected. \n\nThis is very easy to code using NumPy's [ufunc](http://docs.scipy.org/doc/numpy/reference/ufuncs.html) support. Ufuncs apply functions to every element of an array, returning an array of the results. `searchsorted` is NumPy's binary search algorithm. If you provide it with an array of search values it will return an array of answers; one answer for each search value. ",
"_____no_output_____"
]
],
[
[
"def multinomal_resample(weights):\n cumulative_sum = np.cumsum(weights)\n cumulative_sum[-1] = 1. # avoid round-off errors\n return np.searchsorted(cumulative_sum, random(len(weights))) ",
"_____no_output_____"
]
],
[
[
"Here is an example:",
"_____no_output_____"
]
],
[
[
"from code.pf_internal import plot_multinomial_resample\nplot_multinomial_resample([.1, .2, .3, .4, .2, .3, .1])",
"_____no_output_____"
]
],
[
[
"This is an $O(n \\log(n))$ algorithm. That is not terrible, but there are $O(n)$ resampling algorithms with better properties with respect to the uniformity of the samples. I'm showing it because you can understand the other algorithms as variations on this one. There is a faster implementation of this multinomial resampling that uses the inverse of the CDF of the distribution. You can search on the internet if you are interested.\n\nImport the function from FilterPy using\n\n```python\nfrom filterpy.monte_carlo import multinomal_resample\n```",
"_____no_output_____"
],
[
"### Residual Resampling\n\nResidual resampling both improves the run time of multinomial resampling, and ensures that the sampling is uniform across the population of particles. It's fairly ingenious: the normalized weights are multiplied by *N*, and then the integer value of each weight is used to define how many samples of that particle will be taken. For example, if the weight of a particle is 0.0012 and $N$=3000, the scaled weight is 3.6, so 3 samples will be taken of that particle. This ensures that all higher weight particles are chosen at least once. The running time is $O(N)$, making it faster than multinomial resampling.\n\nHowever, this does not generate all *N* selections. To select the rest, we take the *residual*: the weights minus the integer part, which leaves the fractional part of the number. We then use a simpler sampling scheme such as multinomial, to select the rest of the particles based on the residual. In the example above the scaled weight was 3.6, so the residual will be 0.6 (3.6 - int(3.6)). This residual is very large so the particle will be likely to be sampled again. This is reasonable because the larger the residual the larger the error in the round off, and thus the particle was relatively under sampled in the integer step.",
"_____no_output_____"
]
],
[
[
"def residual_resample(weights):\n    N = len(weights)\n    indexes = np.zeros(N, 'i')\n\n    # take int(N*w) copies of each weight\n    num_copies = (N*np.asarray(weights)).astype(int)\n    k = 0\n    for i in range(N):\n        for _ in range(num_copies[i]): # make n copies\n            indexes[k] = i\n            k += 1\n\n    # use multinomial resample on the residual to fill up the rest.\n    residual = N*np.asarray(weights) - num_copies # get fractional part\n    residual /= sum(residual)     # normalize\n    cumulative_sum = np.cumsum(residual)\n    cumulative_sum[-1] = 1. # ensures sum is exactly one\n    indexes[k:N] = np.searchsorted(cumulative_sum, random(N-k))\n\n    return indexes",
"_____no_output_____"
]
],
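The integer/residual split can be checked directly. This sketch (mine) reproduces the example from the text: a weight of 0.0012 with $N = 3000$ scales to 3.6, giving 3 guaranteed copies and a residual of 0.6.

```python
import numpy as np

N = 3000
weights = np.zeros(N)
weights[0] = 0.0012                    # the example weight from the text
weights[1:] = (1. - 0.0012) / (N - 1)  # spread the remainder uniformly

num_copies = (N * weights).astype(int)  # integer part: guaranteed copies
residual = N * weights - num_copies     # fractional part, sampled separately

print(num_copies[0], residual[0])       # 3 copies, residual ~0.6
```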
[
[
"You may be tempted to replace the inner for loop with a slice `indexes[k:k + num_copies[i]] = i`, but very short slices are comparatively slow, and the for loop usually runs faster.\n\nLet's look at an example:",
"_____no_output_____"
]
],
[
[
"from code.pf_internal import plot_residual_resample\nplot_residual_resample([.1, .2, .3, .4, .2, .3, .1])",
"_____no_output_____"
]
],
[
[
"You may import this from FilterPy using\n\n```python\n from filterpy.monte_carlo import residual_resample\n```",
"_____no_output_____"
],
[
"### Stratified Resampling\n\nThis scheme aims to make selections relatively uniformly across the particles. It works by dividing the cumulative sum into $N$ equal sections, and then selects one particle randomly from each section. This guarantees that each sample is between 0 and $\\frac{2}{N}$ apart.\n\nThe plot below illustrates this. The colored bars show the cumulative sum of the array, and the black lines show the $N$ equal subdivisions. Particles, shown as black circles, are randomly placed in each subdivision.",
"_____no_output_____"
]
],
[
[
"from code.pf_internal import plot_stratified_resample\nplot_stratified_resample([.1, .2, .3, .4, .2, .3, .1])",
"_____no_output_____"
]
],
[
[
"The code to perform the stratification is quite straightforward. ",
"_____no_output_____"
]
],
[
[
"def stratified_resample(weights):\n N = len(weights)\n # make N subdivisions, chose a random position within each one\n positions = (random(N) + range(N)) / N\n\n indexes = np.zeros(N, 'i')\n cumulative_sum = np.cumsum(weights)\n i, j = 0, 0\n while i < N:\n if positions[i] < cumulative_sum[j]:\n indexes[i] = j\n i += 1\n else:\n j += 1\n return indexes",
"_____no_output_____"
]
],
[
[
"Import it from FilterPy with\n\n```python\nfrom filterpy.monte_carlo import stratified_resample\n```",
"_____no_output_____"
],
[
"### Systematic Resampling\n\nThe last algorithm we will look at is systematic resampling. As with stratified resampling, the space is divided into $N$ divisions. We then choose a random offset to use for all of the divisions, ensuring that each sample is exactly $\\frac{1}{N}$ apart. It looks like this.",
"_____no_output_____"
]
],
[
[
"from code.pf_internal import plot_systematic_resample\nplot_systematic_resample([.1, .2, .3, .4, .2, .3, .1])",
"_____no_output_____"
]
],
[
[
"Having seen the earlier examples, the code couldn't be simpler.",
"_____no_output_____"
]
],
[
[
"def systematic_resample(weights):\n N = len(weights)\n\n # make N subdivisions, choose positions \n # with a consistent random offset\n positions = (np.arange(N) + random()) / N\n\n indexes = np.zeros(N, 'i')\n cumulative_sum = np.cumsum(weights)\n i, j = 0, 0\n while i < N:\n if positions[i] < cumulative_sum[j]:\n indexes[i] = j\n i += 1\n else:\n j += 1\n return indexes",
"_____no_output_____"
]
],
[
[
" \nImport from FilterPy with\n\n```python\nfrom filterpy.monte_carlo import systematic_resample\n ```",
"_____no_output_____"
],
[
"### Choosing a Resampling Algorithm\n\nLet's look at the four algorithms at once so they are easier to compare.",
"_____no_output_____"
]
],
[
[
"a = [.1, .2, .3, .4, .2, .3, .1]\nnp.random.seed(4)\nplot_multinomial_resample(a)\nplot_residual_resample(a)\nplot_systematic_resample(a)\nplot_stratified_resample(a)",
"_____no_output_____"
]
],
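One way to see the difference between the last two schemes is to look at the position grids they draw from. This sketch (mine) mirrors the position generation inside `stratified_resample` and `systematic_resample` above.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 7

# stratified: an independent uniform draw inside each of the N subdivisions
strat = (rng.random(N) + np.arange(N)) / N

# systematic: one shared random offset for all N subdivisions
syst = (np.arange(N) + rng.random()) / N

print(np.diff(strat))  # spacing varies, but stays strictly below 2/N
print(np.diff(syst))   # spacing is exactly 1/N
```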
[
[
"The performance of the multinomial resampling is quite bad. There is a very large weight that was not sampled at all. The largest weight only got one resample, yet the smallest weight was sampled twice. Most tutorials on the net that I have read use multinomial resampling, and I am not sure why. Multinomial resampling is rarely used in the literature or for real problems. I recommend not using it unless you have a very good reason to do so.\n\nThe residual resampling algorithm does excellently at what it tries to do: ensure all the largest weights are resampled multiple times. It doesn't evenly distribute the samples across the particles - many reasonably large weights are not resampled at all. \n\nBoth systematic and stratified perform very well. Systematic sampling does an excellent job of ensuring we sample from all parts of the particle space while ensuring larger weights are proportionally resampled more often. Stratified resampling is not quite as uniform as systematic resampling, but it is a bit better at ensuring the higher weights get resampled more.\n\nPlenty has been written on the theoretical performance of these algorithms, so feel free to read it. In practice I apply particle filters to problems that resist analytic efforts, and so I am a bit dubious about the applicability of a specific analysis to these problems. In practice both the stratified and systematic algorithms perform well and similarly across a variety of problems. I say try one, and if it works stick with it. If performance of the filter is critical, try both, and perhaps see if there is literature published on your specific problem that will give you better guidance. ",
"_____no_output_____"
],
[
"## Summary\n\nThis chapter only touches the surface of what is a vast topic. My goal was not to teach you the field, but to expose you to practical Bayesian Monte Carlo techniques for filtering. \n\nParticle filters are a type of *ensemble* filtering. Kalman filters represent state with a Gaussian. Measurements are applied to the Gaussian using Bayes' theorem, and the prediction is done using state-space methods. These techniques are applied to the Gaussian - the probability distribution.\n\nIn contrast, ensemble techniques represent a probability distribution using a discrete collection of points and associated probabilities. Measurements are applied to these points, not the Gaussian distribution. Likewise, the system model is applied to the points, not a Gaussian. We then compute the statistical properties of the resulting ensemble of points.\n\nThese choices have many trade-offs. The Kalman filter is very efficient, and is an optimal estimator if the assumptions of linearity and Gaussian noise are true. If the problem is nonlinear then we must linearize the problem. If the problem is multimodal (more than one object being tracked) then the Kalman filter cannot represent it. The Kalman filter requires that you know the state model. If you do not know how your system behaves, the performance is poor.\n\nIn contrast, particle filters work with any arbitrary, non-analytic probability distribution. The ensemble of particles, if large enough, forms an accurate approximation of the distribution. It performs wonderfully even in the presence of severe nonlinearities. Importance sampling allows us to compute probabilities even if we do not know the underlying probability distribution. Monte Carlo techniques replace the analytic integrals required by the other filters. \n\nThis power comes with a cost. The most obvious costs are the high computational and memory burdens the filter places on the computer. Less obvious is the fact that they are fickle. 
You have to be careful to avoid particle degeneracy and divergence. It can be very difficult to prove the correctness of your filter. If you are working with multimodal distributions you have further work to cluster the particles to determine the paths of the multiple objects. This can be very difficult when the objects are close to each other.\n\nThere are many different classes of particle filter; I only described the naive SIS algorithm, and followed that with a SIR algorithm that performs well. There are many classes of filters, and many examples of filters in each class. It would take a small book to describe them all. \n\nWhen you read the literature on particle filters you will find that it is strewn with integrals. We perform computations on probability distributions using integrals, so using integrals gives the authors a powerful and compact notation. You must recognize that when you reduce these equations to code you will be representing the distributions with particles, and integrations are replaced with sums over the particles. If you keep in mind the core ideas in this chapter the material shouldn't be daunting. ",
"_____no_output_____"
],
[
"## References\n\n[1] *Importance Sampling*, Wikipedia.\nhttps://en.wikipedia.org/wiki/Importance_sampling\n",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown",
"markdown"
]
] |
4ab41356d34539eb71f806f3c42d7f95a977aaef
| 87,493 |
ipynb
|
Jupyter Notebook
|
mandelbrot.ipynb
|
curio-sitas/Fractals
|
b42ba2ca476b5d1d5618435530c2ed25d90a07ca
|
[
"MIT"
] | null | null | null |
mandelbrot.ipynb
|
curio-sitas/Fractals
|
b42ba2ca476b5d1d5618435530c2ed25d90a07ca
|
[
"MIT"
] | null | null | null |
mandelbrot.ipynb
|
curio-sitas/Fractals
|
b42ba2ca476b5d1d5618435530c2ed25d90a07ca
|
[
"MIT"
] | null | null | null | 667.885496 | 84,975 | 0.953471 |
[
[
[
"import PIL\nimport numpy as np\nimport pylab as plt\nfrom tqdm import tqdm",
"_____no_output_____"
],
[
"Nx= 1000\nNy = Nx\nnmax = 70\nx0 = -0.5\n\ndx=2\ndy = dx*Ny/Nx\ny0 = -0\nx = x0+np.linspace(-dx,dx,Nx,dtype=np.float64)\ny = y0+np.linspace(-dy, dy,Ny, dtype=np.float64)\nS = np.zeros((Nx,Ny))\n\nfor i in range(len(x)):\n for j in range(len(y)):\n xx = x[i]\n yy = y[j]\n it = 0\n zx = 0\n zy = 0\n while zx*zx + zy*zy < 4 and (it < nmax):\n xtemp = zx*zx - zy*zy + xx\n zy = 2*zx*zy + yy\n zx = xtemp\n it = it + 1\n S[i,j] = it\n\n",
"_____no_output_____"
],
[
"plt.figure(figsize=(10,10))\nplt.imshow(S, cmap='inferno')\nplt.show()",
"_____no_output_____"
],
[
"plt.imsave('mandelbrot.png', S, cmap='inferno', dpi=600)",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code"
]
] |
4ab418ee715e9c9a04aff0069f7fe0c7f6a90975
| 12,458 |
ipynb
|
Jupyter Notebook
|
Xeno-Canto_and_Google_Audioset.ipynb
|
UCSD-E4E/AID_ICML_2021
|
1bc5e4c938bfa3480b67ad1ef2662004a3431be8
|
[
"MIT"
] | null | null | null |
Xeno-Canto_and_Google_Audioset.ipynb
|
UCSD-E4E/AID_ICML_2021
|
1bc5e4c938bfa3480b67ad1ef2662004a3431be8
|
[
"MIT"
] | null | null | null |
Xeno-Canto_and_Google_Audioset.ipynb
|
UCSD-E4E/AID_ICML_2021
|
1bc5e4c938bfa3480b67ad1ef2662004a3431be8
|
[
"MIT"
] | null | null | null | 32.358442 | 98 | 0.452882 |
[
[
[
"from microfaune.detection import RNNDetector\nimport csv\nimport os\nimport glob\nimport pandas as pd\nfrom microfaune import audio\nimport scipy.signal as scipy_signal\nfrom IPython.display import clear_output\nfrom shutil import copyfile",
"_____no_output_____"
],
[
"weightsPath = \"\"\nXCDataPath = \"\"\ncolumn_names = [\"Folder\",\"Clip\",\"Bird_Label\",\"Global Score\"]\ndf = pd.DataFrame(columns = column_names)\nbird_detector = RNNDetector(weightsPath)\nNormalized_Sample_Rate = 44100",
"_____no_output_____"
],
[
"dataList = []",
"_____no_output_____"
],
[
"list = os.listdir(XCDataPath) # dir is your directory path\nnum_filesXC = len(list)\ncountXC = 0\nerrCount = 0\nrepCountXC = 1\nrepListXC = []\n# with open(\"DAXC.csv\",mode='a') as dataset:\n# writer = csv.writer(dataset,delimiter=\",\")\n# writer.writerow([\"Folder\",\"Clip\",\"Bird_Label\",\"Global Score\"])\nfor file in glob.glob(XCDataPath + \"*.wav\"):\n path_list = file.split(\"/\")\n folder_name = path_list[len(path_list) - 2 ]\n clip_name = path_list[len(path_list) - 1 ]\n \n if \"(1)\" in clip_name:\n repCountXC += 1\n repListXC.append(clip_name)\n continue\n \n SAMPLE_RATE, SIGNAL = audio.load_wav(XCDataPath + clip_name)\n \n # downsample the audio if the sample rate > 44.1 kHz\n # Force everything into the human hearing range.\n # May consider reworking this function so that it upsamples as well\n if SAMPLE_RATE > Normalized_Sample_Rate:\n rate_ratio = Normalized_Sample_Rate / SAMPLE_RATE\n SIGNAL = scipy_signal.resample(\n SIGNAL, int(len(SIGNAL)*rate_ratio))\n SAMPLE_RATE = Normalized_Sample_Rate\n # resample produces unreadable float32 array so convert back\n #SIGNAL = np.asarray(SIGNAL, dtype=np.int16)\n\n #print(SIGNAL.shape)\n # convert stereo to mono if needed\n # Might want to compare to just taking the first set of data.\n if len(SIGNAL.shape) == 2:\n SIGNAL = SIGNAL.sum(axis=1) / 2\n \n try:\n microfaune_features = bird_detector.compute_features([SIGNAL])\n global_score, local_score = bird_detector.predict(microfaune_features)\n clear_output(wait=True)\n dataList.append([folder_name, clip_name,'y',global_score[0][0]])\n countXC += 1\n print(str(countXC) + \"/\" + str(num_filesXC))\n \n except:\n print(file + \" Failed\")\n errCount += 1\n continue\n# with open(\"DAXC.csv\",mode='a') as dataset:\n# writer = csv.writer(dataset,delimiter=\",\")\n# writer.writerow([folder_name,clip_name,'y',global_score[0][0]])\nprint(\"Errors: \" + str(errCount))",
"4774/4776\nErrors: 1\n"
],
[
"nonBirdPath = \"\"\nlist = os.listdir(nonBirdPath) # dir is your directory path\nnum_files = len(list)\ncountNB = 0\nerrCount = 0\nrepCountNB = 0\nrepListNB = []\nfor file in glob.glob(nonBirdPath + \"*.wav\"):\n path_list = file.split(\"/\")\n folder_name = path_list[len(path_list) - 2 ]\n clip_name = path_list[len(path_list) - 1 ]\n \n if \"(1)\" in clip_name:\n repCountNB += 1\n repListNB.append(clip_name)\n continue\n \n SAMPLE_RATE, SIGNAL = audio.load_wav(nonBirdPath + clip_name)\n \n # downsample the audio if the sample rate > 44.1 kHz\n # Force everything into the human hearing range.\n # May consider reworking this function so that it upsamples as well\n if SAMPLE_RATE > Normalized_Sample_Rate:\n rate_ratio = Normalized_Sample_Rate / SAMPLE_RATE\n SIGNAL = scipy_signal.resample(\n SIGNAL, int(len(SIGNAL)*rate_ratio))\n SAMPLE_RATE = Normalized_Sample_Rate\n # resample produces unreadable float32 array so convert back\n #SIGNAL = np.asarray(SIGNAL, dtype=np.int16)\n\n #print(SIGNAL.shape)\n # convert stereo to mono if needed\n # Might want to compare to just taking the first set of data.\n if len(SIGNAL.shape) == 2:\n SIGNAL = SIGNAL.sum(axis=1) / 2\n \n try:\n microfaune_features = bird_detector.compute_features([SIGNAL])\n global_score, local_score = bird_detector.predict(microfaune_features)\n clear_output(wait=True)\n dataList.append([folder_name,clip_name,'n',global_score[0][0]])\n countNB += 1\n print(str(countNB) + \"/\" + str(num_files))\n # There are more non bird files than bird present files so we balance them\n if (countNB >= countXC):\n break\n except:\n print(file + \" Failed\")\n errCount += 1\n continue\nprint(\"Errors: \" + str(errCount))",
"4774/6448\nErrors: 0\n"
],
[
"df = pd.DataFrame(dataList, columns = [\"Folder\",\"Clip\",\"Bird_Label\",\"Global Score\"])",
"_____no_output_____"
],
[
"df",
"_____no_output_____"
],
[
"csvName = \"\"\ndf.to_csv(csvName)",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ab419c37ae9ac6838260136bace4337f9d132e1
| 7,251 |
ipynb
|
Jupyter Notebook
|
examples/notebook/contrib/set_covering_deployment.ipynb
|
remiomosowon/or-tools
|
f15537de74088b60dfa325c3b2b5eab365333d03
|
[
"Apache-2.0"
] | 8,273 |
2015-02-24T22:10:50.000Z
|
2022-03-31T21:19:27.000Z
|
examples/notebook/contrib/set_covering_deployment.ipynb
|
remiomosowon/or-tools
|
f15537de74088b60dfa325c3b2b5eab365333d03
|
[
"Apache-2.0"
] | 2,530 |
2015-03-05T04:27:21.000Z
|
2022-03-31T06:13:02.000Z
|
examples/notebook/contrib/set_covering_deployment.ipynb
|
remiomosowon/or-tools
|
f15537de74088b60dfa325c3b2b5eab365333d03
|
[
"Apache-2.0"
] | 2,057 |
2015-03-04T15:02:02.000Z
|
2022-03-30T02:29:27.000Z
| 34.364929 | 261 | 0.557302 |
[
[
[
"empty"
]
]
] |
[
"empty"
] |
[
[
"empty"
]
] |
4ab434087e740d57087907fe08c7d8c45db9c22e
| 78,220 |
ipynb
|
Jupyter Notebook
|
modulo - 1 Fundamentos/desafio_final1.ipynb
|
k3ybladewielder/bootcamp_igti_ml
|
d8e929af62b3202e24e7d5ccaf14549a38dc6139
|
[
"MIT"
] | 1 |
2021-01-14T01:41:28.000Z
|
2021-01-14T01:41:28.000Z
|
modulo - 1 Fundamentos/desafio_final1.ipynb
|
k3ybladewielder/bootcamp_igti_ml
|
d8e929af62b3202e24e7d5ccaf14549a38dc6139
|
[
"MIT"
] | null | null | null |
modulo - 1 Fundamentos/desafio_final1.ipynb
|
k3ybladewielder/bootcamp_igti_ml
|
d8e929af62b3202e24e7d5ccaf14549a38dc6139
|
[
"MIT"
] | 1 |
2021-07-16T22:39:14.000Z
|
2021-07-16T22:39:14.000Z
| 33.599656 | 3,874 | 0.388967 |
[
[
[
"## Final Challenge 1\n\nMachine Learning Analyst Bootcamp @ IGTI",
"_____no_output_____"
],
[
"**Objectives**:\n* Data preprocessing.\n* Anomaly detection.\n* Data processing.\n* Correlations.\n* Dimensionality reduction.\n* Supervised and unsupervised algorithms.\n\n\n**Analysis with:**\n* Dimensionality reduction\n* Clustering with K-means\n* Supervised classification",
"_____no_output_____"
]
],
[
[
"import pandas as pd\nimport numpy as np\nimport seaborn as sns\n",
"_____no_output_____"
],
[
"from google.colab import drive\ndrive.mount('/content/drive')",
"Mounted at /content/drive\n"
],
[
"cars = pd.read_csv('/content/drive/My Drive/Data Science/Bootcamp Analista de ML/Desafio Final/cars.csv')",
"_____no_output_____"
]
],
[
[
"## Getting to Know the Dataset",
"_____no_output_____"
],
[
"**Meaning of the columns:**\n* mpg = miles per gallon\n* cylinders = number of cylinders, the source of the mechanical force that moves the vehicle\n* cubicinches = total volume of air and fuel burned by the cylinders through the engine\n* hp = horsepower\n* weightlbs = car weight in pounds\n* time-to-60 = time in seconds for the car to go from 0 to 60 miles per hour\n* year = year of manufacture\n* brand = make, origin, etc.\n\n1 kg = 2.20462 lbs",
"_____no_output_____"
]
],
[
[
"cars.head()",
"_____no_output_____"
],
[
"cars.describe()",
"_____no_output_____"
],
[
"# rows x columns\ncars.shape",
"_____no_output_____"
],
[
"# Are there missing values?\ncars.isnull().sum()",
"_____no_output_____"
],
[
"cars.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 261 entries, 0 to 260\nData columns (total 8 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 mpg 261 non-null float64\n 1 cylinders 261 non-null int64 \n 2 cubicinches 261 non-null object \n 3 hp 261 non-null int64 \n 4 weightlbs 261 non-null object \n 5 time-to-60 261 non-null int64 \n 6 year 261 non-null int64 \n 7 brand 261 non-null object \ndtypes: float64(1), int64(4), object(3)\nmemory usage: 16.4+ KB\n"
]
],
[
[
"## Quiz: Final Challenge",
"_____no_output_____"
],
[
"Question 1 - After using the pandas library to read the data, which statement about the values read is CORRECT?",
"_____no_output_____"
]
],
[
[
"cars.isnull().sum()",
"_____no_output_____"
]
],
[
[
"**No null values were found after reading the data.**",
"_____no_output_____"
],
[
"Question 2 - Convert the “cubicinches” and “weightlbs” columns from the “string” type to a numeric type using pd.to_numeric() with the parameter errors='coerce'. After this transformation, which statement is CORRECT?",
"_____no_output_____"
]
],
[
[
"# Converting object columns to numeric\ncars['cubicinches'] = pd.to_numeric(cars['cubicinches'], errors='coerce')\ncars['weightlbs'] = pd.to_numeric(cars['weightlbs'], errors='coerce')",
"_____no_output_____"
],
[
"# Checking the result\ncars.info()",
"<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 261 entries, 0 to 260\nData columns (total 8 columns):\n # Column Non-Null Count Dtype \n--- ------ -------------- ----- \n 0 mpg 261 non-null float64\n 1 cylinders 261 non-null int64 \n 2 cubicinches 259 non-null float64\n 3 hp 261 non-null int64 \n 4 weightlbs 258 non-null float64\n 5 time-to-60 261 non-null int64 \n 6 year 261 non-null int64 \n 7 brand 261 non-null object \ndtypes: float64(3), int64(4), object(1)\nmemory usage: 16.4+ KB\n"
],
[
"cars.isnull().sum()",
"_____no_output_____"
]
],
[
[
"**This transformation adds null values to our dataset.**",
"_____no_output_____"
],
[
"Question 3 - Identify the indexes of the values in the dataset that “forced” pandas to interpret the “cubicinches” variable as a string.",
"_____no_output_____"
]
],
[
[
"indices_cub = [cars[cars['cubicinches'].isnull()]]\nindices_cub",
"_____no_output_____"
]
],
[
[
"Question 4 - After converting the “string” variables to numeric values, how many null values (cells in the dataframe) now exist in the dataset?",
"_____no_output_____"
]
],
[
[
"cars.isnull().sum()",
"_____no_output_____"
]
],
[
[
"Question 5 - Replace the null values introduced into the dataset by the transformation with the column means. What is the new mean of the “weightlbs” column?",
"_____no_output_____"
]
],
[
[
"cars['cubicinches'] = cars['cubicinches'].fillna(cars['cubicinches'].mean())\ncars['weightlbs'] = cars['weightlbs'].fillna(cars['weightlbs'].mean())",
"_____no_output_____"
],
[
"cars.isnull().sum()",
"_____no_output_____"
],
[
"cars['weightlbs'].mean()",
"_____no_output_____"
]
],
[
[
"Question 6 - After replacing the null values with the column means, select the columns ['mpg', 'cylinders', 'cubicinches', 'hp', 'weightlbs', 'time-to-60', 'year']. What is the median of the 'mpg' feature?",
"_____no_output_____"
]
],
[
[
"cars['mpg'].median()",
"_____no_output_____"
]
],
[
[
"Question 7 - Which statement about the value 14.00 for the “time-to-60” variable is CORRECT?",
"_____no_output_____"
]
],
[
[
"cars.describe()",
"_____no_output_____"
]
],
[
[
"75% of the data are greater than the value 14.00.",
"_____no_output_____"
],
[
"8 - Regarding the Pearson correlation coefficient between the “cylinders” and “mpg” variables, it is correct to state:",
"_____no_output_____"
]
],
[
[
"from scipy import stats\nstats.pearsonr(cars['cylinders'], cars['mpg'])",
"_____no_output_____"
],
[
"from sklearn.metrics import r2_score\nr2_score(cars['cylinders'], cars['mpg'])",
"_____no_output_____"
]
],
[
[
"Even though it is not equal to 1, it is possible to say that as the “cylinders” variable increases, the “mpg” variable also increases in the same direction.",
"_____no_output_____"
],
[
"9 - Regarding the boxplot of the “hp” variable, all of the following statements are correct, EXCEPT:",
"_____no_output_____"
]
],
[
[
"sns.boxplot(cars['hp'])",
"/usr/local/lib/python3.6/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.\n FutureWarning\n"
]
],
[
[
"Each quartile contains the same number of values for the “hp” variable.",
"_____no_output_____"
],
[
"10 - After normalization using the StandardScaler() function, what is the largest value of the “hp” variable?",
"_____no_output_____"
]
],
[
[
"cars.head()",
"_____no_output_____"
],
[
"cars_normalizar = cars.drop('brand', axis=1)\ncars_normalizar.head()",
"_____no_output_____"
],
[
"from sklearn.preprocessing import StandardScaler\nnormalizar = StandardScaler() # instantiating the StandardScaler\n\nscaler = normalizar.fit(cars_normalizar.values) # fitting the scaler to the dataset\n\ncars_normalizado = scaler.transform(cars_normalizar.values) # normalizing\n\ncars_normalizado = pd.DataFrame(cars_normalizado, columns=cars_normalizar.columns) # turning the numpy array into a pandas DataFrame",
"_____no_output_____"
],
[
"cars_normalizado['hp'].max()",
"_____no_output_____"
]
],
[
[
"11 - Applying PCA as defined above, what is the explained variance of the first principal component?",
"_____no_output_____"
]
],
[
[
"from sklearn.decomposition import PCA\npca = PCA(n_components=7)",
"_____no_output_____"
],
[
"principais = pca.fit_transform(cars_normalizado)\npca.explained_variance_ratio_",
"_____no_output_____"
]
],
[
[
"12 - Use the first three principal components to build K-means with 3 clusters. Regarding the clusters, it is INCORRECT to state that:",
"_____no_output_____"
]
],
[
[
"pca.explained_variance_ratio_",
"_____no_output_____"
],
[
"principais_componentes = pd.DataFrame(principais)\nprincipais_componentes.head()",
"_____no_output_____"
],
[
"principais_componentes_k = principais_componentes.iloc[:, :3] # selecting all rows and the first 3 columns\nprincipais_componentes_k.columns = ['componente 1', 'componente 2', 'componente 3']",
"_____no_output_____"
],
[
"from sklearn.cluster import KMeans\n\nkmeans = KMeans(n_clusters=3, random_state=42).fit(principais_componentes_k) # parameters given in the challenge",
"_____no_output_____"
],
[
"principais_componentes_k['cluster'] = kmeans.labels_ # adding a column with each car's cluster\nprincipais_componentes_k",
"/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:1: SettingWithCopyWarning: \nA value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_indexer,col_indexer] = value instead\n\nSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n \"\"\"Entry point for launching an IPython kernel.\n"
],
[
"principais_componentes_k['cluster'].value_counts() #Contando a quantidade de elementos dos clusters gerados",
"_____no_output_____"
]
],
[
[
"13 - Após todo o processamento realizado nos itens anteriores, crie uma coluna que contenha a variável de eficiência do veículo. Veículos que percorrem mais de 25 milhas com um galão (“mpg”>25) devem ser considerados eficientes. Utilize as colunas ['cylinders' ,'cubicinches' ,'hp' ,'weightlbs','time-to-60'] como entradas e como saída a coluna de eficiência criada.\n\nUtilizando a árvore de decisão como mostrado, qual é a acurácia do modelo?",
"_____no_output_____"
]
],
[
[
"cars.head()",
"_____no_output_____"
],
[
"entradas = np.array(cars[['cylinders' ,'cubicinches' ,'hp' ,'weightlbs' ,'time-to-60']])\nsaidas = np.array(cars['mpg'] > 25).astype(int) #zero = maior, 1 = menor",
"_____no_output_____"
],
[
"entradas",
"_____no_output_____"
],
[
"saidas",
"_____no_output_____"
],
[
"from sklearn.model_selection import train_test_split\nx_train, x_test, y_train, y_test = train_test_split(entradas, saidas, test_size=0.30, random_state=42)",
"_____no_output_____"
],
[
"from sklearn.tree import DecisionTreeClassifier\nclassificador = DecisionTreeClassifier(random_state=42)",
"_____no_output_____"
],
[
"classificador.fit(x_train, y_train)\ny_pred = classificador.predict(x_test)",
"_____no_output_____"
],
[
"from sklearn.metrics import accuracy_score\nacuracia = accuracy_score(y_test, y_pred)\nacuracia",
"_____no_output_____"
]
],
[
[
"14 - Sobre a matriz de confusão obtida após a aplicação da árvore de decisão, como mostrado anteriormente, é INCORRETO afirmar:",
"_____no_output_____"
]
],
[
[
"from sklearn.metrics import confusion_matrix\nconfusion_matrix(y_test, y_pred)\n",
"_____no_output_____"
]
],
[
[
"Existem duas vezes mais veículos considerados não eficientes que instâncias de veículos eficientes",
"_____no_output_____"
],
[
"15 - Utilizando a mesma divisão de dados entre treinamento e teste empregada para a análise anterior, aplique o modelo de regressão logística como mostrado na descrição do trabalho.\n\n\nComparando os resultados obtidos com o modelo de árvore de decisão, é INCORRETO afirmar que:",
"_____no_output_____"
]
],
[
[
"from sklearn.linear_model import LogisticRegression\n\nlogreg = LogisticRegression(random_state=42).fit(x_train, y_train)\nlogreg_y_pred = logreg.predict(x_test)",
"/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/_logistic.py:940: ConvergenceWarning: lbfgs failed to converge (status=1):\nSTOP: TOTAL NO. of ITERATIONS REACHED LIMIT.\n\nIncrease the number of iterations (max_iter) or scale the data as shown in:\n https://scikit-learn.org/stable/modules/preprocessing.html\nPlease also refer to the documentation for alternative solver options:\n https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression\n extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)\n"
],
[
"accuracy_score(y_test, logreg_y_pred)",
"_____no_output_____"
]
],
[
[
"# Fim\n\n# Visite o meu [github](https://github.com/k3ybladewielder) <3",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown",
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown",
"markdown"
],
[
"code",
"code"
],
[
"markdown"
]
] |
4ab4377c7a3a7c8c317cd1fd6a0e0323e76df16f
| 4,410 |
ipynb
|
Jupyter Notebook
|
notebooks/WebKB.ipynb
|
Lkxz/categorical-kernels
|
49357a5b6c5f12c97afde4bac24810560e8ec152
|
[
"MIT"
] | 1 |
2020-07-12T19:17:51.000Z
|
2020-07-12T19:17:51.000Z
|
notebooks/WebKB.ipynb
|
Lkxz/categorical-kernels
|
49357a5b6c5f12c97afde4bac24810560e8ec152
|
[
"MIT"
] | null | null | null |
notebooks/WebKB.ipynb
|
Lkxz/categorical-kernels
|
49357a5b6c5f12c97afde4bac24810560e8ec152
|
[
"MIT"
] | null | null | null | 31.276596 | 88 | 0.387755 |
[
[
[
"empty"
]
]
] |
[
"empty"
] |
[
[
"empty"
]
] |
4ab43a701a9c53a6471d191b2174ff97d3a97c66
| 627,066 |
ipynb
|
Jupyter Notebook
|
gridding/JG_pchip_interpolation/Visualize pchip profiles.ipynb
|
BillMills/argo-database
|
a22d87fdeacf1a12280201b995509a671f9d90e4
|
[
"MIT"
] | 2 |
2020-04-28T08:11:40.000Z
|
2020-10-20T18:37:28.000Z
|
gridding/JG_pchip_interpolation/Visualize pchip profiles.ipynb
|
BillMills/argo-database
|
a22d87fdeacf1a12280201b995509a671f9d90e4
|
[
"MIT"
] | 1 |
2019-09-18T23:18:16.000Z
|
2019-09-18T23:18:16.000Z
|
gridding/JG_pchip_interpolation/Visualize pchip profiles.ipynb
|
BillMills/argo-database
|
a22d87fdeacf1a12280201b995509a671f9d90e4
|
[
"MIT"
] | 1 |
2021-12-16T19:09:56.000Z
|
2021-12-16T19:09:56.000Z
| 630.217085 | 23,212 | 0.947929 |
[
[
[
"import pandas as pd\nimport numpy as np\n\nimport os, glob\nimport pandas as pd\nimport numpy as np\n%matplotlib inline\n#%matplotlib notebook\nimport seaborn as sns\nsns.reset_orig()\nimport matplotlib.pyplot as plt\nfrom datetime import datetime, timedelta\nimport pdb\nimport requests\n\nimport sys\n\nfrom importlib import reload\nfrom pchipOceanSlices import PchipOceanSlices\n\nimport visualizeProfs as vp\n#reload(visualizeProfs)",
"/home/tyler/anaconda3/envs/AR/lib/python3.6/site-packages/matplotlib/__init__.py:855: MatplotlibDeprecationWarning: \nexamples.directory is deprecated; in the future, examples will be found relative to the 'datapath' directory.\n \"found relative to the 'datapath' directory.\".format(key))\n/home/tyler/anaconda3/envs/AR/lib/python3.6/site-packages/matplotlib/__init__.py:846: MatplotlibDeprecationWarning: \nThe text.latex.unicode rcparam was deprecated in Matplotlib 2.2 and will be removed in 3.1.\n \"2.2\", name=key, obj_type=\"rcparam\", addendum=addendum)\n"
],
[
"coord = {}\ncoord['lat'] = 0\ncoord['long'] = 59.5\nshape = vp.construct_box(coord, 20, 20)",
"_____no_output_____"
],
[
"ids = ['5901721_24',\n'3900105_196',\n'4900595_140',\n'4900593_152',\n'4900883_92',\n'5901898_42',\n'6900453_3',\n'6900453_5',\n'6900453_6',\n'3900495_188',\n'3900495_189',\n'3900495_190',\n'4901139_74',\n'2901211_144',\n'2900784_254',\n'2901709_19',\n'1901218_88',\n'4901787_0',\n'6902566_44',\n'4901787_6',\n'6901002_100',\n'2902100_104',\n'6901002_102',\n'6901541_103',\n'2901703_157',\n'2901765_1',\n'4901750_125',\n'4902382_4',\n'4901285_208',\n'4901285_209',\n'4902107_54',\n'6901448_149',\n'6901740_126',\n'5901884_302',\n'4901466_156',\n'4901462_174',\n'4901798_110',\n'4901798_112',\n'4902391_58',\n'6902661_118',\n'4901824_91',\n'4902457_2',\n'5904485_280',\n'5904485_284',]",
"_____no_output_____"
],
[
"startDate='2007-6-15'\nendDate='2007-7-31'\npresRange='[15,35]'\n\n#profiles = get_selection_profiles(startDate, endDate, shape, presRange)\nprofiles = vp.get_profiles_by_id(str(ids).replace(' ',''), None, True)\nif len(profiles) > 0:\n selectionDf = vp.parse_into_df(profiles)\nselectionDf.replace(-999, np.nan, inplace=True)",
"https://argovis.colorado.edu/catalog/mprofiles/?ids=['5901721_24','3900105_196','4900595_140','4900593_152','4900883_92','5901898_42','6900453_3','6900453_5','6900453_6','3900495_188','3900495_189','3900495_190','4901139_74','2901211_144','2900784_254','2901709_19','1901218_88','4901787_0','6902566_44','4901787_6','6901002_100','2902100_104','6901002_102','6901541_103','2901703_157','2901765_1','4901750_125','4902382_4','4901285_208','4901285_209','4902107_54','6901448_149','6901740_126','5901884_302','4901466_156','4901462_174','4901798_110','4901798_112','4902391_58','6902661_118','4901824_91','4902457_2','5904485_280','5904485_284']\n"
],
[
"selectionDf.head()",
"_____no_output_____"
],
[
"pos = PchipOceanSlices()",
"_____no_output_____"
],
[
"iCol = 'temp'\nxLab = 'pres'\nyLab = iCol\nxintp = 20\npLevelRange = [15,25]\npos = PchipOceanSlices(pLevelRange)\niDf = pos.make_interpolated_df(selectionDf, xintp, xLab, yLab)\niDf.date = pd.to_datetime(iDf.date)",
"_____no_output_____"
],
[
"print(iDf.shape)\niDf.head()",
"(38, 10)\n"
],
[
"for profile_id, df in selectionDf.groupby('profile_id'):\n #fig.subplots_adjust(hspace=.35, wspace=.35)\n pdf = iDf[iDf['profile_id'] == profile_id]\n if pdf.empty:\n continue\n fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(6,6))\n ax = vp.plot_scatter(df, profile_id, 'temp', 'pres', axes)\n ax.scatter(pdf.temp.iloc[0], pdf.pres.iloc[0])",
"/home/tyler/anaconda3/envs/AR/lib/python3.6/site-packages/matplotlib/pyplot.py:514: RuntimeWarning: More than 20 figures have been opened. Figures created through the pyplot interface (`matplotlib.pyplot.figure`) are retained until explicitly closed and may consume too much memory. (To control this warning, see the rcParam `figure.max_open_warning`).\n max_open_warning, RuntimeWarning)\n"
],
[
"badProfiles = ['3900495_189']",
"_____no_output_____"
],
[
"for row in iDf.itertuples():\n coord = {}\n coord['lat'] = row.lat\n coord['long'] = row.lon\n startDate = datetime.strftime(row.date - timedelta(days=15), '%Y-%m-%d')\n endDate = datetime.strftime(row.date + timedelta(days=15), '%Y-%m-%d')\n shape = vp.construct_box(coord, 5, 5)\n print(row.profile_id)\n vp.build_selection_page_url(startDate, endDate, shape, presRange)",
"_____no_output_____"
]
]
] |
[
"code"
] |
[
[
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |
4ab442a0d1abaf120975c4bbe027f4614755cff1
| 70,639 |
ipynb
|
Jupyter Notebook
|
_ipynb/No Way Anova - Interactions need more power.ipynb
|
simkovic/simkovic.github.io
|
98d0e30f5547894ee11925548a4627ad7c28fa68
|
[
"MIT"
] | 2 |
2015-07-09T13:51:42.000Z
|
2015-07-27T15:06:48.000Z
|
_ipynb/No Way Anova - Interactions need more power.ipynb
|
simkovic/simkovic.github.io
|
98d0e30f5547894ee11925548a4627ad7c28fa68
|
[
"MIT"
] | null | null | null |
_ipynb/No Way Anova - Interactions need more power.ipynb
|
simkovic/simkovic.github.io
|
98d0e30f5547894ee11925548a4627ad7c28fa68
|
[
"MIT"
] | null | null | null | 438.751553 | 43,308 | 0.917652 |
[
[
[
"I quote myself from the last post:\n\n> The number of tests and the probability to obtain at least one significant result increases with the number of variables (plus interactions) included in the Anova. According to Maxwell (2004) this may be a reason for prevalence of underpowered Anova studies. Researchers target some significant result by default, instead of planning sample size that would provide enough power so that all effects can be reliably discovered.\n\nMaxwell (2004, p. 149) writes: \n\n> a researcher who designs a 2 $\\times$ 2 study with 10 participants per cell has a 71% chance of obtaining at least\none statistically significant result if the three effects he or she tests all reflect medium effect sizes. Of course, in\nreality, some effects will often be smaller and others will be larger, but the general point here is that the probability of\nbeing able to find something statistically significant and thus potentially publishable may be adequate while at the same\ntime the probability associated with any specific test may be much lower. Thus, from the perspective of a researcher who\naspires to obtain at least one statistically significant result, 10 participants per cell may be sufficient, despite the fact that a methodological evaluation would declare the study to be underpowered because the power for any single hypothesis is only .35. \n\nWhat motivates the researcher to keep the N small? Clearly, testing more subjects is costly. But I think that in Anova designs there is additional motivation to keep N small. If we use large N we obtain all main effects and all interactions significant. This is usually not desirable because some of the effects/interactions are not predicted by researcher's theory and non-significant main effect/interaction is taken as an evidence for a lack of this component. Then the researcher needs to find some N that balances between something significant and everything significant. 
In particular the prediction of significant main effects and non significant interaction is attractive because it is much easier to achieve than other patterns. \n\nLet's look at the probability of obtaining significant main effects and interaction in Anova. I'm lazy so instead of deriving closed-form results I use simulation. Let's assume 2 $\\times$ 2 Anova design where the continuous outcome is given by $y= x_1 + x_2 + x_1 x_2 +\\epsilon$ with $\\epsilon \\sim \\mathcal{N}(0,2)$ and $x_1 \\in \\{0,1\\}$ and $x_2 \\in \\{0,1\\}$. We give equal weight to all three terms to give them equal start. It is plausible to include all three terms, because with psychological variables everything is correlated (CRUD factor). Let's first show that the interaction requires larger sample size than the main effects.",
"_____no_output_____"
]
],
[
[
"%pylab inline\nfrom scipy import stats\nNs=np.arange(20,200,4); \nK=10000;\nps=np.zeros((Ns.size,3))\nres=np.zeros(4)\ncs=np.zeros((Ns.size,8))\ni=0\nfor N in Ns:\n for k in range(K):\n x1=np.zeros(N);x1[N/2:]=1\n x2=np.mod(range(N),2)\n y= 42+x1+x2+x1*x2+np.random.randn(N)*2\n tot=np.square(y-y.mean()).sum()\n \n x=np.ones((N,4))\n x[:,1]=x1*x2\n x[:,2]=x1*(1-x2)\n x[:,3]=(1-x1)*x2\n res[0]=np.linalg.lstsq(x,y)[1]\n \n x=np.ones((N,2))\n x[:,1]=x1\n res[1]=tot-np.linalg.lstsq(x,y)[1]\n \n x[:,1]=x2\n res[2]=tot-np.linalg.lstsq(x,y)[1]\n \n res[3]=tot-res[0]-res[1]-res[2]\n \n mss=res/np.float32(np.array([N-4,1,1,1]))\n F=mss[1:]/mss[0]\n p=1-stats.f.cdf(F,1,N-4)\n p=p<0.05\n ps[i,:]+=np.int32(p)\n cs[i,p[0]*4+p[1]*2+p[2]]+=1\n i+=1\nps/=float(K)\ncs/=float(K)\nfor k in range(ps.shape[1]): plt.plot(Ns/4, ps[:,k])\nplt.legend(['A','B','X'],loc=2)\nplt.xlabel('N per cell')\nplt.ylabel('expected power');",
"Populating the interactive namespace from numpy and matplotlib\n"
]
],
[
[
"Now we look at the probability that the various configurations of significant and non-significant results will be obtained. ",
"_____no_output_____"
]
],
[
[
"plt.figure(figsize=(7,6))\nfor k in [0,1,2,3,6,7]: plt.plot(Ns/4, cs[:,k])\nplt.legend(['nothing','X','B','BX','AB','ABX'],loc=2)\nplt.xlabel('N per cell')\nplt.ylabel('pattern frequency');",
"_____no_output_____"
]
],
[
[
"To keep the figure from too much clutter I omitted A and AX which is due to symmetry identical to B and BX. By A I mean \"main effect A is significant and main effect B plus the interaction are not significant\". X designates the presence of a significant interaction. \n\nTo state the unsurprising results first, if we decrease the sample size we are more likely to obtain no significant result. If we increase the sample size we are more likely to obtain the true model ABX. Because interaction requires large sample size to reach significance for medium sample size AB is more likely than the true model ABX. Furthermore, funny things happen if we make main effects the exclusive focus of our hypothesis. In the cases A,B and AB we can find a small-to-medium sample size that is optimal if we want to get our hypothesis significant. All this can be (unconsciously) exploited by researchers to provide more power for their favored pattern.\n\nIt is not difficult to see the applications. We could look up the frequency of various patterns in the psychological literature. This could be done in terms of the reported findings but also in terms of the reported hypotheses. We can also ask whether the reported sample size correlates with the optimal sample size. \n\nNote, that there is nothing wrong with Anova. The purpose of Anova is NOT to provide a test for composite hypotheses such as X, AB or ABX. Rather it helps us discover sources of variability that can then be subjected to a more focused analysis. Anova is an exploratory technique and should not be used for evaluating of hypotheses.",
"_____no_output_____"
]
]
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
[
[
"markdown"
],
[
"code"
],
[
"markdown"
],
[
"code"
],
[
"markdown"
]
] |
4ab4549965a9dc681d945b66c9ec2b91d72432e5
| 5,514 |
ipynb
|
Jupyter Notebook
|
l7-language/code/Word embedding with Gensim.ipynb
|
biplav-s/course-nl
|
d012d29cd2265cf5d9f449a00bbccd4836790bce
|
[
"MIT"
] | 2 |
2021-07-13T14:46:37.000Z
|
2021-11-18T23:50:04.000Z
|
l7-language/code/Word embedding with Gensim.ipynb
|
biplav-s/course-nl
|
d012d29cd2265cf5d9f449a00bbccd4836790bce
|
[
"MIT"
] | null | null | null |
l7-language/code/Word embedding with Gensim.ipynb
|
biplav-s/course-nl
|
d012d29cd2265cf5d9f449a00bbccd4836790bce
|
[
"MIT"
] | 5 |
2020-09-13T21:08:27.000Z
|
2020-10-24T11:05:44.000Z
| 29.021053 | 215 | 0.543526 |
[
[
[
"# First Introduction",
"_____no_output_____"
]
],
[
[
"# Based on https://machinelearningmastery.com/develop-word-embeddings-python-gensim/\n# Import\nfrom gensim.models import Word2Vec",
"_____no_output_____"
],
[
"corpus = [\n['An', 'alpha', 'document', '.'],\n['A', 'beta', 'document', '.'],\n['Guten', 'Morgen', '!'],\n['Gamma', 'manuscript', 'is', 'old', '.'],\n['Whither', 'my', 'document', '?']\n]",
"_____no_output_____"
],
[
"# train model\nmodel = Word2Vec(corpus, min_count=1)\n# summarize the loaded model\nprint(\"INFO: Model - \\n\" + str(model))",
"INFO: Model - \nWord2Vec(vocab=16, size=100, alpha=0.025)\n"
],
[
"# summarize vocabulary\nwords = list(model.wv.vocab)\nprint(\"INFO: Words found - \\n\" + str(words))",
"INFO: Words found - \n['An', 'alpha', 'document', '.', 'A', 'beta', 'Guten', 'Morgen', '!', 'Gamma', 'manuscript', 'is', 'old', 'Whither', 'my', '?']\n"
],
[
"# access vector for one word - specified by myword\nmyword = 'document'\nprint(\"INFO: Model of '\" + myword + \"' - \\n\" + str(model[myword]))",
"INFO: Model of 'document' - \n[-2.4414996e-03 2.6622412e-03 -3.0177636e-03 3.2341545e-03\n 2.7868759e-03 -8.0836623e-04 -2.7568194e-03 -9.8301226e-04\n -4.9753767e-03 -4.3506171e-03 3.4021356e-03 3.0122376e-03\n 4.2746733e-03 1.8962767e-04 3.8011130e-03 -2.6230237e-03\n 2.3607069e-03 2.6423734e-04 -8.3725079e-04 -4.0987371e-03\n -2.4278930e-03 -8.1508531e-04 4.3672943e-03 -1.7755051e-03\n -4.5722127e-03 -1.5989904e-03 2.7475185e-03 -4.1078446e-03\n 4.8740730e-03 -2.8241226e-03 2.5529547e-03 -1.9859953e-03\n 4.0683937e-03 -4.0076894e-04 -3.8113804e-03 -1.0060021e-03\n 2.4440982e-03 1.7245926e-03 2.3956454e-04 2.8477099e-03\n -2.8460389e-03 3.8492999e-03 1.6536568e-03 3.9764848e-03\n -3.6650000e-04 -4.4846153e-03 4.1305758e-03 3.0103759e-03\n -1.7798992e-03 -3.5809260e-04 2.5686924e-03 -2.6307274e-03\n -1.7510224e-03 2.4429525e-03 1.1701276e-04 2.9069011e-04\n 3.0633255e-03 -4.6290536e-04 2.1549740e-03 2.7690046e-03\n -4.7105481e-03 2.8129166e-03 4.7868816e-03 3.8892095e-04\n -3.0217474e-04 1.5879645e-03 1.1575058e-03 2.3696022e-03\n -3.9787227e-03 2.0108104e-03 -2.9138487e-03 3.0013481e-03\n -2.4293053e-03 -1.8413435e-03 -4.0246436e-04 -3.4308885e-03\n 2.6239043e-03 -6.4142462e-04 1.0767326e-03 -1.5278280e-03\n -3.4052425e-03 2.0104947e-03 1.2726536e-03 -4.7812676e-03\n -2.6684217e-04 -2.1302279e-03 2.8905654e-03 6.1179750e-04\n 4.6679759e-03 4.0740548e-03 4.5602848e-03 -4.2071445e-03\n -1.8929217e-03 1.4760091e-03 3.9314241e-03 -3.7244433e-03\n -1.8450000e-03 -1.8249759e-03 -2.8342532e-03 -2.6469741e-05]\n"
],
[
"# save model\nmodel.save('../data/model.bin')\nmodel.wv.save_word2vec_format('../data/model.txt', binary=False)",
"_____no_output_____"
],
[
"# load model\nnew_model = Word2Vec.load('../data/model.bin')\nprint(\"INFO: Reloaded Model - \\n\" + str(new_model))",
"INFO: Reloaded Model - \nWord2Vec(vocab=16, size=100, alpha=0.025)\n"
]
]
] |
[
"markdown",
"code"
] |
[
[
"markdown"
],
[
"code",
"code",
"code",
"code",
"code",
"code",
"code"
]
] |